
Introduction to Algorithms

6.046J/18.401J
LECTURE 1
Analysis of Algorithms
Insertion sort
Asymptotic analysis
Merge sort
Recurrences

Prof. Charles E. Leiserson


Copyright © 2001–2005 Erik D. Demaine and Charles E. Leiserson

Course information
1. Staff
2. Distance learning
3. Prerequisites
4. Lectures
5. Recitations
6. Handouts
7. Textbook
8. Course website
9. Extra help
10. Registration
11. Problem sets
12. Describing algorithms
13. Grading policy
14. Collaboration policy

Analysis of algorithms
The theoretical study of computer-program performance and resource usage.

What's more important than performance?
Modularity, user-friendliness, correctness, programmer time, maintainability, simplicity, functionality, extensibility, robustness, reliability.

Why study algorithms and performance?
Algorithms help us to understand scalability.
Performance often draws the line between what is feasible and what is impossible.
Algorithmic mathematics provides a language for talking about program behavior.
Performance is the currency of computing.
The lessons of program performance generalize to other computing resources.
Speed is fun!

The problem of sorting
Input: sequence ⟨a₁, a₂, …, aₙ⟩ of numbers.
Output: permutation ⟨a′₁, a′₂, …, a′ₙ⟩ such that a′₁ ≤ a′₂ ≤ ⋯ ≤ a′ₙ.
Example:
Input: 8 2 4 9 3 6
Output: 2 3 4 6 8 9

Insertion sort
Pseudocode:

INSERTION-SORT (A, n)    ⊳ A[1 . . n]
    for j ← 2 to n
        do key ← A[j]
           i ← j − 1
           while i > 0 and A[i] > key
               do A[i+1] ← A[i]
                  i ← i − 1
           A[i+1] ← key

[Diagram in the original slides: array A with a sorted prefix on the left and the current key being inserted into its place.]
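For concreteness, here is the same procedure as a runnable Python sketch (0-indexed, unlike the 1-indexed pseudocode above; the function name is ours, not the course's):

    def insertion_sort(a):
        """Sort list a in place, mirroring INSERTION-SORT above."""
        for j in range(1, len(a)):
            key = a[j]
            i = j - 1
            # Shift elements of the sorted prefix a[0..j-1] that exceed key.
            while i >= 0 and a[i] > key:
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key

    a = [8, 2, 4, 9, 3, 6]
    insertion_sort(a)
    print(a)  # [2, 3, 4, 6, 8, 9]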

Example of insertion sort
[Animation in the original slides: starting from 8 2 4 9 3 6, each step inserts the next key into the sorted prefix — 2 8 4 9 3 6, then 2 4 8 9 3 6, then 2 4 8 9 3 6 (9 stays put), then 2 3 4 8 9 6, and finally 2 3 4 6 8 9 — done.]

Running time
The running time depends on the input: an already sorted sequence is easier to sort.
Parameterize the running time by the size of the input, since short sequences are easier to sort than long ones.
Generally, we seek upper bounds on the running time, because everybody likes a guarantee.

Kinds of analyses
Worst-case: (usually)
T(n) = maximum time of algorithm on any input of size n.
Average-case: (sometimes)
T(n) = expected time of algorithm over all inputs of size n.
Need assumption of statistical distribution of inputs.
Best-case: (bogus)
Cheat with a slow algorithm that works fast on some input.

Machine-independent time
What is insertion sort's worst-case time?
It depends on the speed of our computer:
relative speed (on the same machine),
absolute speed (on different machines).
BIG IDEA:
Ignore machine-dependent constants.
Look at growth of T(n) as n → ∞.
"Asymptotic analysis"

Θ-notation
Math:
Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }
Engineering:
Drop low-order terms; ignore leading constants.
Example: 3n³ + 90n² − 5n + 6046 = Θ(n³)

Asymptotic performance
When n gets large enough, a Θ(n²) algorithm always beats a Θ(n³) algorithm.
[Graph in the original slides: T(n) versus n; the two curves cross at n₀.]
We shouldn't ignore asymptotically slower algorithms, however. Real-world design situations often call for a careful balancing of engineering objectives. Asymptotic analysis is a useful tool to help to structure our thinking.

Insertion sort analysis
Worst case: Input reverse sorted.
T(n) = Σ_{j=2}^{n} Θ(j) = Θ(n²)   [arithmetic series]
Average case: All permutations equally likely.
T(n) = Σ_{j=2}^{n} Θ(j/2) = Θ(n²)
Is insertion sort a fast sorting algorithm?
Moderately so, for small n.
Not at all, for large n.

Merge sort
MERGE-SORT A[1 . . n]
1. If n = 1, done.
2. Recursively sort A[1 . . ⌈n/2⌉] and A[⌈n/2⌉+1 . . n].
3. Merge the 2 sorted lists.
Key subroutine: MERGE

Merging two sorted arrays
[Animation in the original slides: the two sorted input arrays are shown as stacks; at each step the two front elements are compared, the smaller one is appended to the output, and its array advances — repeating until both arrays are exhausted.]
Time = Θ(n) to merge a total of n elements (linear time).
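A runnable Python sketch of MERGE and MERGE-SORT (our own function names; Python lists instead of the slides' 1-indexed arrays):

    def merge(left, right):
        """Merge two sorted lists by repeatedly taking the smaller front element."""
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:])   # at most one of these two
        out.extend(right[j:])  # leftovers is non-empty
        return out

    def merge_sort(a):
        if len(a) <= 1:              # base case: n = 1, done
            return a
        mid = (len(a) + 1) // 2      # ceil(n/2), as in the pseudocode
        return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

    print(merge_sort([8, 2, 4, 9, 3, 6]))  # [2, 3, 4, 6, 8, 9]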

Analyzing merge sort
T(n):       MERGE-SORT A[1 . . n]
Θ(1)        1. If n = 1, done.
2T(n/2)     2. Recursively sort A[1 . . ⌈n/2⌉] and A[⌈n/2⌉+1 . . n].   (abuse)
Θ(n)        3. Merge the 2 sorted lists.
Sloppiness: Should be T(⌈n/2⌉) + T(⌊n/2⌋), but it turns out not to matter asymptotically.

Recurrence for merge sort
T(n) = Θ(1)             if n = 1;
T(n) = 2T(n/2) + Θ(n)   if n > 1.
We shall usually omit stating the base case when T(n) = Θ(1) for sufficiently small n, but only when it has no effect on the asymptotic solution to the recurrence.
CLRS and Lecture 2 provide several ways to find a good upper bound on T(n).

Recursion tree
Solve T(n) = 2T(n/2) + cn, where c > 0 is constant.
[Animation in the original slides: the tree unfolds level by level — the root costs cn and has two children T(n/2); each of those costs cn/2 and splits into two T(n/4) nodes; and so on, down to Θ(1) leaves.]
Height: h = lg n. Each level sums to cn; #leaves = n, contributing Θ(n).
Total = Θ(n lg n)

Conclusions
Θ(n lg n) grows more slowly than Θ(n²).
Therefore, merge sort asymptotically beats insertion sort in the worst case.
In practice, merge sort beats insertion sort for n > 30 or so.
Go test it out for yourself!
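Taking that last suggestion literally, a small timing harness might look like the following (assuming the insertion_sort and merge_sort sketches above are in scope; the exact crossover point varies by machine and implementation):

    import random, timeit

    for n in (10, 30, 100, 1000):
        data = [random.random() for _ in range(n)]
        t_ins = timeit.timeit(lambda: insertion_sort(data.copy()), number=200)
        t_mrg = timeit.timeit(lambda: merge_sort(data.copy()), number=200)
        print(f"n={n:5d}  insertion={t_ins:.4f}s  merge={t_mrg:.4f}s")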

Introduction to Algorithms
6.046J/18.401J

LECTURE 2
Asymptotic Notation
O-, Ω-, and Θ-notation
Recurrences
Substitution method
Iterating the recurrence
Recursion tree
Master method

Prof. Erik Demaine

Asymptotic notation
O-notation (upper bounds):
We write f(n) = O(g(n)) if there exist constants c > 0, n₀ > 0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀.
EXAMPLE: 2n² = O(n³)   (c = 1, n₀ = 2)
These are functions, not values, and the "=" here is a funny, one-way equality.

Set definition of O-notation
O(g(n)) = { f(n) : there exist constants c > 0, n₀ > 0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
EXAMPLE: 2n² ∈ O(n³)
(Logicians: λn.2n² ∈ O(λn.n³), but it's convenient to be sloppy, as long as we understand what's really going on.)

Macro substitution
Convention: A set in a formula represents an anonymous function in the set.
EXAMPLE: f(n) = n³ + O(n²) means f(n) = n³ + h(n) for some h(n) ∈ O(n²).
EXAMPLE: n² + O(n) = O(n²) means: for any f(n) ∈ O(n), n² + f(n) = h(n) for some h(n) ∈ O(n²).

Ω-notation (lower bounds)
O-notation is an upper-bound notation. It makes no sense to say f(n) is at least O(n²).
Ω(g(n)) = { f(n) : there exist constants c > 0, n₀ > 0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }
EXAMPLE: √n = Ω(lg n)   (c = 1, n₀ = 16)

Θ-notation (tight bounds)
Θ(g(n)) = O(g(n)) ∩ Ω(g(n))
EXAMPLE: ½n² − 2n = Θ(n²)

o-notation and ω-notation
O-notation and Ω-notation are like ≤ and ≥.
o-notation and ω-notation are like < and >.
o(g(n)) = { f(n) : for any constant c > 0, there is a constant n₀ > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n₀ }
EXAMPLE: 2n² = o(n³)   (n₀ = 2/c)
ω(g(n)) = { f(n) : for any constant c > 0, there is a constant n₀ > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n₀ }
EXAMPLE: √n = ω(lg n)   (n₀ = 1 + 1/c)

Solving recurrences
The analysis of merge sort from Lecture 1 required us to solve a recurrence.
Solving recurrences is like solving integrals, differential equations, etc.: learn a few tricks.
Lecture 3: Applications of recurrences to divide-and-conquer algorithms.

Substitution method
The most general method:
1. Guess the form of the solution.
2. Verify by induction.
3. Solve for constants.

EXAMPLE: T(n) = 4T(n/2) + n
[Assume that T(1) = Θ(1).]
Guess O(n³). (Prove O and Ω separately.)
Assume that T(k) ≤ ck³ for k < n.
Prove T(n) ≤ cn³ by induction.

Example of substitution
T(n) = 4T(n/2) + n
     ≤ 4c(n/2)³ + n
     = (c/2)n³ + n
     = cn³ − ((c/2)n³ − n)   [desired − residual]
     ≤ cn³   [desired]
whenever (c/2)n³ − n ≥ 0, for example, if c ≥ 2 and n ≥ 1.

Example (continued)
We must also handle the initial conditions, that is, ground the induction with base cases.
Base: T(n) = Θ(1) for all n < n₀, where n₀ is a suitable constant.
For 1 ≤ n < n₀, we have Θ(1) ≤ cn³, if we pick c big enough.
This bound is not tight!

A tighter upper bound?
We shall prove that T(n) = O(n²).
Assume that T(k) ≤ ck² for k < n:
T(n) = 4T(n/2) + n
     ≤ 4c(n/2)² + n
     = cn² + n
     = O(n²)   Wrong! We must prove the I.H.
     = cn² − (−n)   [desired − residual]
     ≤ cn²   for no choice of c > 0. Lose!

A tighter upper bound!
IDEA: Strengthen the inductive hypothesis.
Subtract a low-order term.
Inductive hypothesis: T(k) ≤ c₁k² − c₂k for k < n.
T(n) = 4T(n/2) + n
     ≤ 4(c₁(n/2)² − c₂(n/2)) + n
     = c₁n² − 2c₂n + n
     = c₁n² − c₂n − (c₂n − n)
     ≤ c₁n² − c₂n   if c₂ ≥ 1.
Pick c₁ big enough to handle the initial conditions.

Recursion-tree method
A recursion tree models the costs (time) of a recursive execution of an algorithm.
The recursion-tree method can be unreliable, just like any method that uses ellipses (…).
The recursion-tree method promotes intuition, however.
The recursion-tree method is good for generating guesses for the substitution method.

Example of recursion tree
Solve T(n) = T(n/4) + T(n/2) + n²:
[Animation in the original slides: the root costs n² with children T(n/4) and T(n/2); the next level costs (n/4)² and (n/2)², then (n/16)², (n/8)², (n/8)², (n/4)², and so on, down to Θ(1) leaves.]
Level sums: n², (5/16)n², (25/256)n², …
Total = n² (1 + 5/16 + (5/16)² + (5/16)³ + ⋯)   [geometric series]
      = Θ(n²)

The master method
The master method applies to recurrences of the form
T(n) = a T(n/b) + f(n),
where a ≥ 1, b > 1, and f is asymptotically positive.

Three common cases
Compare f(n) with n^{log_b a}:
1. f(n) = O(n^{log_b a − ε}) for some constant ε > 0.
   f(n) grows polynomially slower than n^{log_b a} (by an n^ε factor).
   Solution: T(n) = Θ(n^{log_b a}).
2. f(n) = Θ(n^{log_b a} lg^k n) for some constant k ≥ 0.
   f(n) and n^{log_b a} grow at similar rates.
   Solution: T(n) = Θ(n^{log_b a} lg^{k+1} n).
3. f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0.
   f(n) grows polynomially faster than n^{log_b a} (by an n^ε factor), and f(n) satisfies the regularity condition that a·f(n/b) ≤ c·f(n) for some constant c < 1.
   Solution: T(n) = Θ(f(n)).

Examples
EX. T(n) = 4T(n/2) + n
a = 4, b = 2 ⇒ n^{log_b a} = n²; f(n) = n.
CASE 1: f(n) = O(n^{2−ε}) for ε = 1.
∴ T(n) = Θ(n²).

EX. T(n) = 4T(n/2) + n²
a = 4, b = 2 ⇒ n^{log_b a} = n²; f(n) = n².
CASE 2: f(n) = Θ(n² lg⁰ n), that is, k = 0.
∴ T(n) = Θ(n² lg n).

EX. T(n) = 4T(n/2) + n³
a = 4, b = 2 ⇒ n^{log_b a} = n²; f(n) = n³.
CASE 3: f(n) = Ω(n^{2+ε}) for ε = 1, and 4(n/2)³ ≤ cn³ (reg. cond.) for c = 1/2.
∴ T(n) = Θ(n³).

EX. T(n) = 4T(n/2) + n²/lg n
a = 4, b = 2 ⇒ n^{log_b a} = n²; f(n) = n²/lg n.
Master method does not apply. In particular, for every constant ε > 0, we have n^ε = ω(lg n).
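As a cross-check on these examples, here is a toy classifier (a hypothetical helper of our own, not part of the course; it handles only driving functions of the form n^e · lg^k n, which covers the four examples above):

    import math

    def master_method(a, b, e, k=0):
        """Classify T(n) = a*T(n/b) + Theta(n^e * lg^k n), a simplified sketch."""
        crit = math.log(a, b)  # critical exponent log_b(a)
        if e < crit and k == 0:
            return f"Case 1: Theta(n^{crit:g})"
        if e == crit and k >= 0:
            return f"Case 2: Theta(n^{crit:g} lg^{k + 1} n)"
        if e > crit and k == 0:
            # regularity condition a*(n/b)^e <= c*n^e holds with c = a/b**e < 1
            return f"Case 3: Theta(n^{e:g})"
        return "Master method does not apply"

    print(master_method(4, 2, 1))      # Case 1: Theta(n^2)
    print(master_method(4, 2, 2))      # Case 2: Theta(n^2 lg^1 n)
    print(master_method(4, 2, 3))      # Case 3: Theta(n^3)
    print(master_method(4, 2, 2, -1))  # n^2/lg n -> does not apply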

Idea of master theorem
Recursion tree: the root costs f(n); it has a children, each costing f(n/b); they have a² grandchildren, each costing f(n/b²); and so on, down to Θ(1) leaves at height h = log_b n.
Level sums: f(n), a·f(n/b), a²·f(n/b²), …
#leaves = a^h = a^{log_b n} = n^{log_b a}, so the leaf level costs n^{log_b a} · Θ(1).
CASE 1: The weight increases geometrically from the root to the leaves. The leaves hold a constant fraction of the total weight. ⇒ Θ(n^{log_b a})
CASE 2: (k = 0) The weight is approximately the same on each of the log_b n levels. ⇒ Θ(n^{log_b a} lg n)
CASE 3: The weight decreases geometrically from the root to the leaves. The root holds a constant fraction of the total weight. ⇒ Θ(f(n))

Appendix: geometric series
1 + x + x² + ⋯ + xⁿ = (1 − x^{n+1})/(1 − x)   for x ≠ 1
1 + x + x² + ⋯ = 1/(1 − x)   for |x| < 1

Introduction to Algorithms
6.046J/18.401J

LECTURE 3
Divide and Conquer
Binary search
Powering a number
Fibonacci numbers
Matrix multiplication
Strassen's algorithm
VLSI tree layout

Prof. Erik D. Demaine

The divide-and-conquer design paradigm
1. Divide the problem (instance) into subproblems.
2. Conquer the subproblems by solving them recursively.
3. Combine subproblem solutions.

Merge sort
1. Divide: Trivial.
2. Conquer: Recursively sort 2 subarrays.
3. Combine: Linear-time merge.

T(n) = 2 T(n/2) + Θ(n)
(# subproblems = 2; subproblem size = n/2; work dividing and combining = Θ(n))

Master theorem (reprise)
T(n) = a T(n/b) + f(n)
CASE 1: f(n) = O(n^{log_b a − ε}), constant ε > 0 ⇒ T(n) = Θ(n^{log_b a}).
CASE 2: f(n) = Θ(n^{log_b a} lg^k n), constant k ≥ 0 ⇒ T(n) = Θ(n^{log_b a} lg^{k+1} n).
CASE 3: f(n) = Ω(n^{log_b a + ε}), constant ε > 0, and regularity condition ⇒ T(n) = Θ(f(n)).

Merge sort: a = 2, b = 2 ⇒ n^{log_b a} = n^{log₂ 2} = n ⇒ CASE 2 (k = 0) ⇒ T(n) = Θ(n lg n).

Binary search
Find an element in a sorted array:
1. Divide: Check middle element.
2. Conquer: Recursively search 1 subarray.
3. Combine: Trivial.

Example: Find 9.
[Animation in the original slides: a sorted array beginning 3, … and ending …, 12, 15; each step compares 9 with the middle element and recurses into the half that could contain 9, until 9 is found.]

Recurrence for binary search
T(n) = 1 T(n/2) + Θ(1)
(# subproblems = 1; subproblem size = n/2; work dividing and combining = Θ(1))
n^{log_b a} = n^{log₂ 1} = n⁰ = 1 ⇒ CASE 2 (k = 0) ⇒ T(n) = Θ(lg n).
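A runnable Python sketch (iterative rather than recursive, which is equivalent here; the example array is our own illustration):

    def binary_search(a, x):
        """Return an index of x in sorted list a, or -1 if absent."""
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == x:
                return mid
            elif a[mid] < x:
                lo = mid + 1   # conquer: recurse into the right half
            else:
                hi = mid - 1   # conquer: recurse into the left half
        return -1

    print(binary_search([3, 5, 7, 8, 9, 12, 15], 9))  # 4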

Powering a number
Problem: Compute aⁿ, where n ∈ ℕ.
Naive algorithm: Θ(n).
Divide-and-conquer algorithm:
aⁿ = a^{n/2} · a^{n/2}               if n is even;
aⁿ = a^{(n−1)/2} · a^{(n−1)/2} · a   if n is odd.
T(n) = T(n/2) + Θ(1)  ⇒  T(n) = Θ(lg n).
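A minimal Python sketch of this divide-and-conquer powering (integer inputs for simplicity):

    def power(a, n):
        """Compute a**n with Theta(lg n) multiplications."""
        if n == 0:
            return 1
        half = power(a, n // 2)  # a^(n/2) if n even, a^((n-1)/2) if n odd
        return half * half if n % 2 == 0 else half * half * a

    print(power(3, 10))  # 59049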

Fibonacci numbers
Recursive definition:
Fₙ = 0 if n = 0;  1 if n = 1;  Fₙ₋₁ + Fₙ₋₂ if n ≥ 2.
0 1 1 2 3 5 8 13 21 34 …
Naive recursive algorithm: Ω(φⁿ) (exponential time), where φ = (1 + √5)/2 is the golden ratio.

Computing Fibonacci numbers
Bottom-up:
Compute F₀, F₁, F₂, …, Fₙ in order, forming each number by summing the two previous.
Running time: Θ(n).
Naive recursive squaring:
Fₙ = φⁿ/√5 rounded to the nearest integer.
Recursive squaring: Θ(lg n) time.
This method is unreliable, since floating-point arithmetic is prone to round-off errors.

Recursive squaring
Theorem: [[Fₙ₊₁, Fₙ], [Fₙ, Fₙ₋₁]] = [[1, 1], [1, 0]]ⁿ.
Algorithm: Recursive squaring. Time = Θ(lg n).
Proof of theorem. (Induction on n.)
Base (n = 1): [[F₂, F₁], [F₁, F₀]] = [[1, 1], [1, 0]]¹.
Inductive step (n ≥ 2):
[[Fₙ₊₁, Fₙ], [Fₙ, Fₙ₋₁]] = [[Fₙ, Fₙ₋₁], [Fₙ₋₁, Fₙ₋₂]] · [[1, 1], [1, 0]]
                         = [[1, 1], [1, 0]]ⁿ⁻¹ · [[1, 1], [1, 0]]   [by I.H.]
                         = [[1, 1], [1, 0]]ⁿ.  ∎
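A Python sketch of the same idea, written as an iterative repeated-squaring variant of the recursive algorithm (exact integer arithmetic, so no round-off):

    def mat_mult(A, B):
        """Multiply two 2x2 matrices given as ((a, b), (c, d))."""
        return ((A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]),
                (A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]))

    def fib(n):
        """F_n via repeated squaring of [[1,1],[1,0]]: Theta(lg n) matrix multiplies."""
        if n == 0:
            return 0
        M, result = ((1, 1), (1, 0)), ((1, 0), (0, 1))  # base matrix, identity
        while n:
            if n & 1:
                result = mat_mult(result, M)
            M = mat_mult(M, M)
            n >>= 1
        return result[0][1]  # the F_n entry of the theorem's matrix

    print([fib(k) for k in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]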

Matrix multiplication
Input: A = [aᵢⱼ], B = [bᵢⱼ].   (i, j = 1, 2, …, n)
Output: C = [cᵢⱼ] = A · B.
cᵢⱼ = Σ_{k=1}^{n} aᵢₖ · bₖⱼ

Standard algorithm
for i ← 1 to n
    do for j ← 1 to n
        do cᵢⱼ ← 0
           for k ← 1 to n
               do cᵢⱼ ← cᵢⱼ + aᵢₖ · bₖⱼ

Running time = Θ(n³)

Divide-and-conquer algorithm
IDEA: n×n matrix = 2×2 matrix of (n/2)×(n/2) submatrices:
[ r  s ]   [ a  b ] [ e  f ]
[ t  u ] = [ c  d ] [ g  h ]        C = A · B
r = ae + bg
s = af + bh
t = ce + dg
u = cf + dh
8 recursive mults of (n/2)×(n/2) submatrices, plus 4 adds of (n/2)×(n/2) submatrices.

Analysis of D&C algorithm
T(n) = 8 T(n/2) + Θ(n²)
(# submatrices = 8; submatrix size = n/2; work adding submatrices = Θ(n²))
n^{log_b a} = n^{log₂ 8} = n³ ⇒ CASE 1 ⇒ T(n) = Θ(n³).
No better than the ordinary algorithm.

Strassen's idea
Multiply 2×2 matrices with only 7 recursive mults.
P₁ = a · (f − h)          r = P₅ + P₄ − P₂ + P₆
P₂ = (a + b) · h          s = P₁ + P₂
P₃ = (c + d) · e          t = P₃ + P₄
P₄ = d · (g − e)          u = P₅ + P₁ − P₃ − P₇
P₅ = (a + d) · (e + h)
P₆ = (b − d) · (g + h)
P₇ = (a − c) · (e + f)
7 mults, 18 adds/subs.
Note: No reliance on commutativity of mult!
Check of r:
r = P₅ + P₄ − P₂ + P₆
  = (a + d)(e + h) + d(g − e) − (a + b)h + (b − d)(g + h)
  = ae + ah + de + dh + dg − de − ah − bh + bg + bh − dg − dh
  = ae + bg

Strassen's algorithm
1. Divide: Partition A and B into (n/2)×(n/2) submatrices. Form terms to be multiplied using + and −.
2. Conquer: Perform 7 multiplications of (n/2)×(n/2) submatrices recursively.
3. Combine: Form C using + and − on (n/2)×(n/2) submatrices.

T(n) = 7 T(n/2) + Θ(n²)
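An illustrative Python/NumPy sketch of Strassen's algorithm, restricted to n a power of 2 (a practical version would pad inputs and switch to the standard algorithm below a cutoff):

    import numpy as np

    def strassen(A, B):
        """Strassen's 7-multiplication recursion for n x n matrices, n a power of 2."""
        n = A.shape[0]
        if n == 1:
            return A * B
        m = n // 2
        a, b, c, d = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
        e, f, g, h = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
        P1 = strassen(a, f - h)
        P2 = strassen(a + b, h)
        P3 = strassen(c + d, e)
        P4 = strassen(d, g - e)
        P5 = strassen(a + d, e + h)
        P6 = strassen(b - d, g + h)
        P7 = strassen(a - c, e + f)
        r = P5 + P4 - P2 + P6   # the four output blocks, as on the slide
        s = P1 + P2
        t = P3 + P4
        u = P5 + P1 - P3 - P7
        return np.vstack((np.hstack((r, s)), np.hstack((t, u))))

    A = np.random.randint(0, 10, (4, 4))
    B = np.random.randint(0, 10, (4, 4))
    assert np.array_equal(strassen(A, B), A @ B)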

Analysis of Strassen
T(n) = 7 T(n/2) + Θ(n²)
n^{log_b a} = n^{log₂ 7} ≈ n^{2.81} ⇒ CASE 1 ⇒ T(n) = Θ(n^{lg 7}).
The number 2.81 may not seem much smaller than 3, but because the difference is in the exponent, the impact on running time is significant. In fact, Strassen's algorithm beats the ordinary algorithm on today's machines for n ≥ 32 or so.
Best to date (of theoretical interest only): Θ(n^{2.376…}).

VLSI layout
Problem: Embed a complete binary tree with n leaves in a grid using minimal area.
[Figure in the original slides: the naive embedding, with width W(n) and height H(n).]
H(n) = H(n/2) + Θ(1) = Θ(lg n)
W(n) = 2 W(n/2) + Θ(1) = Θ(n)
Area = Θ(n lg n)

H-tree embedding
[Figure in the original slides: the H-tree layout, an L(n) × L(n) square built from four L(n/4) × L(n/4) H-trees plus Θ(1) wiring.]
L(n) = 2 L(n/4) + Θ(1) = Θ(√n)
Area = Θ(n)

Conclusion
Divide and conquer is just one of several powerful techniques for algorithm design.
Divide-and-conquer algorithms can be analyzed using recurrences and the master method (so practice this math).
The divide-and-conquer strategy often leads to efficient algorithms.

Introduction to Algorithms
6.046J/18.401J

LECTURE 4
Quicksort
Divide and conquer
Partitioning
Worst-case analysis
Intuition
Randomized quicksort
Analysis

Prof. Charles E. Leiserson

Quicksort
Proposed by C.A.R. Hoare in 1962.
Divide-and-conquer algorithm.
Sorts in place (like insertion sort, but not like merge sort).
Very practical (with tuning).

Divide and conquer
Quicksort an n-element array:
1. Divide: Partition the array into two subarrays around a pivot x such that elements in the lower subarray ≤ x ≤ elements in the upper subarray.
   [ ≤ x | x | ≥ x ]
2. Conquer: Recursively sort the two subarrays.
3. Combine: Trivial.
Key: Linear-time partitioning subroutine.

Partitioning subroutine
PARTITION(A, p, q)    ⊳ A[p . . q]
    x ← A[p]          ⊳ pivot = A[p]
    i ← p
    for j ← p + 1 to q
        do if A[j] ≤ x
              then i ← i + 1
                   exchange A[i] ↔ A[j]
    exchange A[p] ↔ A[i]
    return i

Running time = O(n) for n elements.
Invariant: A[p] = x; A[p+1 . . i] ≤ x; A[i+1 . . j−1] ≥ x; A[j . . q] unknown.

Example of partitioning
(pivot x = 6; i and j sweep left to right)
6 10 13 5 8 3 2 11
6 5 13 10 8 3 2 11
6 5 3 10 8 13 2 11
6 5 3 2 8 13 10 11
2 5 3 6 8 13 10 11   (finally, exchange A[p] ↔ A[i])

Pseudocode for quicksort
QUICKSORT(A, p, r)
    if p < r
        then q ← PARTITION(A, p, r)
             QUICKSORT(A, p, q−1)
             QUICKSORT(A, q+1, r)

Initial call: QUICKSORT(A, 1, n)
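A runnable Python sketch of PARTITION and QUICKSORT (0-indexed; function names are ours):

    def partition(a, p, q):
        """Partition a[p..q] around pivot a[p]; return the pivot's final index."""
        x = a[p]
        i = p
        for j in range(p + 1, q + 1):
            if a[j] <= x:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[p], a[i] = a[i], a[p]
        return i

    def quicksort(a, p=0, r=None):
        """Sort a[p..r] in place."""
        if r is None:
            r = len(a) - 1
        if p < r:
            q = partition(a, p, r)
            quicksort(a, p, q - 1)
            quicksort(a, q + 1, r)

    a = [6, 10, 13, 5, 8, 3, 2, 11]
    quicksort(a)
    print(a)  # [2, 3, 5, 6, 8, 10, 11, 13]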

Analysis of quicksort
Assume all input elements are distinct.
In practice, there are better partitioning algorithms for when duplicate input elements may exist.
Let T(n) = worst-case running time on an array of n elements.

Worst-case of quicksort
Input sorted or reverse sorted.
Partition around min or max element.
One side of partition always has no elements.
T(n) = T(0) + T(n−1) + Θ(n)
     = Θ(1) + T(n−1) + Θ(n)
     = T(n−1) + Θ(n)
     = Θ(n²)   (arithmetic series)

Worst-case recursion tree
T(n) = T(0) + T(n−1) + cn
[Animation in the original slides: the tree unfolds into a path of height h = n — cn at the root with children T(0) and T(n−1); then c(n−1), c(n−2), …, each level also spawning a Θ(1) leaf.]
Σ_{k=1}^{n} ck = Θ(n²), plus n leaves costing Θ(1) each:
T(n) = Θ(n) + Θ(n²) = Θ(n²)

Best-case analysis
(For intuition only!)
If we're lucky, PARTITION splits the array evenly:
T(n) = 2T(n/2) + Θ(n)
     = Θ(n lg n)   (same as merge sort)
What if the split is always 1/10 : 9/10?
T(n) = T(n/10) + T(9n/10) + Θ(n)
What is the solution to this recurrence?

Analysis of almost-best case
[Recursion tree for T(n) = T(n/10) + T(9n/10) + cn in the original slides: the root costs cn; its children cost cn/10 and 9cn/10; the next level cn/100, 9cn/100, 9cn/100, 81cn/100; every full level sums to cn. The shallowest leaves appear at depth log₁₀ n, the deepest at depth log_{10/9} n; there are O(n) leaves.]
cn · log₁₀ n ≤ T(n) ≤ cn · log_{10/9} n + O(n)
T(n) = Θ(n lg n)   Lucky!

More intuition
Suppose we alternate lucky, unlucky, lucky, unlucky, lucky, ….
L(n) = 2U(n/2) + Θ(n)   lucky
U(n) = L(n − 1) + Θ(n)   unlucky
Solving:
L(n) = 2(L(n/2 − 1) + Θ(n/2)) + Θ(n)
     = 2L(n/2 − 1) + Θ(n)
     = Θ(n lg n)   Lucky!
How can we make sure we are usually lucky?

Randomized quicksort
IDEA: Partition around a random element.
Running time is independent of the input order.
No assumptions need to be made about the input distribution.
No specific input elicits the worst-case behavior.
The worst case is determined only by the output of a random-number generator.
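One way to realize this idea in the Python sketch above (reusing the partition function defined earlier; the helper name is ours):

    import random

    def randomized_partition(a, p, r):
        """Swap a uniformly random element into position p, then partition as before."""
        k = random.randint(p, r)   # pivot index chosen uniformly from [p, r]
        a[p], a[k] = a[k], a[p]
        return partition(a, p, r)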

Randomized quicksort analysis
Let T(n) = the random variable for the running time of randomized quicksort on an input of size n, assuming random numbers are independent.
For k = 0, 1, …, n−1, define the indicator random variable
X_k = 1 if PARTITION generates a k : n−k−1 split, 0 otherwise.
E[X_k] = Pr{X_k = 1} = 1/n, since all splits are equally likely, assuming elements are distinct.

Analysis (continued)
T(n) = T(0) + T(n−1) + Θ(n)   if 0 : n−1 split,
       T(1) + T(n−2) + Θ(n)   if 1 : n−2 split,
       ⋮
       T(n−1) + T(0) + Θ(n)   if n−1 : 0 split,
     = Σ_{k=0}^{n−1} X_k (T(k) + T(n−k−1) + Θ(n)).

Calculating expectation
E[T(n)] = E[ Σ_{k=0}^{n−1} X_k (T(k) + T(n−k−1) + Θ(n)) ]
        = Σ_{k=0}^{n−1} E[ X_k (T(k) + T(n−k−1) + Θ(n)) ]      [linearity of expectation]
        = Σ_{k=0}^{n−1} E[X_k] · E[ T(k) + T(n−k−1) + Θ(n) ]   [independence of X_k from other random choices]
        = (1/n) Σ_{k=0}^{n−1} E[T(k)] + (1/n) Σ_{k=0}^{n−1} E[T(n−k−1)] + (1/n) Σ_{k=0}^{n−1} Θ(n)   [linearity; E[X_k] = 1/n]
        = (2/n) Σ_{k=1}^{n−1} E[T(k)] + Θ(n)   [the two summations have identical terms]

Hairy recurrence
E[T(n)] = (2/n) Σ_{k=2}^{n−1} E[T(k)] + Θ(n)
(The k = 0, 1 terms can be absorbed in the Θ(n).)
Prove: E[T(n)] ≤ an lg n for constant a > 0.
Choose a large enough so that an lg n dominates E[T(n)] for sufficiently small n ≥ 2.
Use fact: Σ_{k=2}^{n−1} k lg k ≤ ½n² lg n − ⅛n²   (exercise).

Substitution method
E[T(n)] ≤ (2/n) Σ_{k=2}^{n−1} ak lg k + Θ(n)   [substitute inductive hypothesis]
        ≤ (2a/n)(½n² lg n − ⅛n²) + Θ(n)        [use fact]
        = an lg n − (an/4 − Θ(n))              [express as desired − residual]
        ≤ an lg n,
if a is chosen large enough so that an/4 dominates the Θ(n).

Quicksort in practice
Quicksort is a great general-purpose sorting algorithm.
Quicksort is typically over twice as fast as merge sort.
Quicksort can benefit substantially from code tuning.
Quicksort behaves well even with caching and virtual memory.
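
For concreteness, here is a minimal randomized quicksort in Python, mirroring the RAND-PARTITION-based pseudocode analyzed above. This is a sketch, not the tuned variant the slide alludes to; the Lomuto-style in-place partition is one standard choice.

```python
import random

def quicksort(A, p=0, q=None):
    """Sort A[p..q] in place using randomized partitioning."""
    if q is None:
        q = len(A) - 1
    if p < q:
        r = rand_partition(A, p, q)
        quicksort(A, p, r - 1)
        quicksort(A, r + 1, q)

def rand_partition(A, p, q):
    """Swap a random element into the pivot position, then partition."""
    s = random.randint(p, q)
    A[s], A[q] = A[q], A[s]
    pivot = A[q]
    i = p - 1
    for j in range(p, q):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[q] = A[q], A[i + 1]
    return i + 1      # final index of the pivot

A = [8, 2, 4, 9, 3, 6]
quicksort(A)
print(A)              # [2, 3, 4, 6, 8, 9]
```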

Introduction to Algorithms
6.046J/18.401J
LECTURE 5
Sorting Lower Bounds
Decision trees
Linear-Time Sorting
Counting sort
Radix sort
Appendix: Punched cards
Prof. Erik Demaine

How fast can we sort?
All the sorting algorithms we have seen so far are comparison sorts: they use only comparisons to determine the relative order of elements.
E.g., insertion sort, merge sort, quicksort, heapsort.
The best worst-case running time that we've seen for comparison sorting is O(n lg n).
Is O(n lg n) the best we can do?
Decision trees can help us answer this question.

Decision-tree example
Sort ⟨a1, a2, …, an⟩

[Decision-tree figure for n = 3: the root compares 1:2. Its left subtree compares 2:3, with leaf ⟨123⟩ on the left and a 1:3 node on the right whose leaves are ⟨132⟩ and ⟨312⟩. Its right subtree compares 1:3, with leaf ⟨213⟩ on the left and a 2:3 node on the right whose leaves are ⟨231⟩ and ⟨321⟩.]

Each internal node is labeled i:j for i, j ∈ {1, 2, …, n}.
The left subtree shows subsequent comparisons if a_i ≤ a_j.
The right subtree shows subsequent comparisons if a_i ≥ a_j.

Decision-tree example
Sort ⟨a1, a2, a3⟩ = ⟨9, 4, 6⟩:
1:2 — compare a1 = 9 with a2 = 4; since 9 ≥ 4, go right.
1:3 — compare a1 = 9 with a3 = 6; since 9 ≥ 6, go right.
2:3 — compare a2 = 4 with a3 = 6; since 4 ≤ 6, go left, reaching the leaf ⟨231⟩.
The sorted order is a2 ≤ a3 ≤ a1, that is, 4 ≤ 6 ≤ 9.

Each leaf contains a permutation ⟨π(1), π(2), …, π(n)⟩ to indicate that the ordering a_π(1) ≤ a_π(2) ≤ ⋯ ≤ a_π(n) has been established.

Decision-tree model
A decision tree can model the execution of any comparison sort:
One tree for each input size n.
View the algorithm as splitting whenever it compares two elements.
The tree contains the comparisons along all possible instruction traces.
The running time of the algorithm = the length of the path taken.
Worst-case running time = height of tree.

Lower bound for decision-tree sorting
Theorem. Any decision tree that can sort n elements must have height Ω(n lg n).
Proof. The tree must contain ≥ n! leaves, since there are n! possible permutations. A height-h binary tree has ≤ 2^h leaves. Thus, n! ≤ 2^h .
    h ≥ lg(n!)           (lg is monotonically increasing)
      ≥ lg((n/e)^n)      (Stirling's formula)
      = n lg n − n lg e
      = Ω(n lg n) .
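
A quick numeric sanity check of the Stirling step, as a sketch: Python's `math.lgamma` gives ln(n!) = lgamma(n+1), which we convert to base 2 and compare with the bound n lg n − n lg e.

```python
import math

def lg_factorial(n):
    # lg(n!) via the log-gamma function: ln(n!) = lgamma(n + 1).
    return math.lgamma(n + 1) / math.log(2)

for n in (10, 100, 1000):
    bound = n * math.log2(n) - n * math.log2(math.e)
    # lg(n!) should dominate n lg n - n lg e for every n
    print(n, round(lg_factorial(n), 1), round(bound, 1))
```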

Lower bound for comparison sorting
Corollary. Heapsort and merge sort are asymptotically optimal comparison sorting algorithms.

Sorting in linear time
Counting sort: No comparisons between elements.
Input: A[1 . . n], where A[j] ∈ {1, 2, …, k}.
Output: B[1 . . n], sorted.
Auxiliary storage: C[1 . . k].

Counting sort
for i ← 1 to k
    do C[i] ← 0
for j ← 1 to n
    do C[A[j]] ← C[A[j]] + 1        ⊳ C[i] = |{key = i}|
for i ← 2 to k
    do C[i] ← C[i] + C[i−1]         ⊳ C[i] = |{key ≤ i}|
for j ← n downto 1
    do B[C[A[j]]] ← A[j]
       C[A[j]] ← C[A[j]] − 1
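
A direct Python transcription of this pseudocode, as a sketch (0-indexed output array, so each count is decremented before placing):

```python
def counting_sort(A, k):
    """Stable counting sort of A, whose keys are ints in {1, ..., k}."""
    n = len(A)
    C = [0] * (k + 1)            # C[i] counts keys equal to i (index 0 unused)
    B = [None] * n
    for key in A:                # C[i] = |{key = i}|
        C[key] += 1
    for i in range(2, k + 1):    # C[i] = |{key <= i}|
        C[i] += C[i - 1]
    for key in reversed(A):      # scan right-to-left to preserve stability
        C[key] -= 1
        B[C[key]] = key
    return B

print(counting_sort([4, 1, 3, 4, 3], 4))   # [1, 3, 3, 4, 4]
```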

Counting-sort example
A: [4 1 3 4 3]    C: [· · · ·] (size k = 4)    B: [· · · · ·] (size n = 5)

Loop 1
for i ← 1 to k
    do C[i] ← 0
A: [4 1 3 4 3]    C: [0 0 0 0]

Loop 2
for j ← 1 to n
    do C[A[j]] ← C[A[j]] + 1        ⊳ C[i] = |{key = i}|
Processing A = [4 1 3 4 3] left to right:
j = 1 (key 4): C = [0 0 0 1]
j = 2 (key 1): C = [1 0 0 1]
j = 3 (key 3): C = [1 0 1 1]
j = 4 (key 4): C = [1 0 1 2]
j = 5 (key 3): C = [1 0 2 2]

Loop 3
for i ← 2 to k
    do C[i] ← C[i] + C[i−1]         ⊳ C[i] = |{key ≤ i}|
Prefix sums of C = [1 0 2 2]:
i = 2: C′ = [1 1 2 2]
i = 3: C′ = [1 1 3 2]
i = 4: C′ = [1 1 3 5]

Loop 4
for j ← n downto 1
    do B[C[A[j]]] ← A[j]
       C[A[j]] ← C[A[j]] − 1
Starting from C = [1 1 3 5] and A = [4 1 3 4 3]:
j = 5 (key 3): B[3] ← 3, C = [1 1 2 5], B = [· · 3 · ·]
j = 4 (key 4): B[5] ← 4, C = [1 1 2 4], B = [· · 3 · 4]
j = 3 (key 3): B[2] ← 3, C = [1 1 1 4], B = [· 3 3 · 4]
j = 2 (key 1): B[1] ← 1, C = [0 1 1 4], B = [1 3 3 · 4]
j = 1 (key 4): B[4] ← 4, C = [0 1 1 3], B = [1 3 3 4 4]

Analysis
Θ(k)  for i ← 1 to k
          do C[i] ← 0
Θ(n)  for j ← 1 to n
          do C[A[j]] ← C[A[j]] + 1
Θ(k)  for i ← 2 to k
          do C[i] ← C[i] + C[i−1]
Θ(n)  for j ← n downto 1
          do B[C[A[j]]] ← A[j]
             C[A[j]] ← C[A[j]] − 1
Total: Θ(n + k)

Running time
If k = O(n), then counting sort takes Θ(n) time.
But, sorting takes Ω(n lg n) time!
Where's the fallacy?
Answer:
Comparison sorting takes Ω(n lg n) time.
Counting sort is not a comparison sort.
In fact, not a single comparison between elements occurs!

Stable sorting
Counting sort is a stable sort: it preserves the input order among equal elements.
A: [4 1 3 4 3]
B: [1 3 3 4 4]
(The two 3's, and the two 4's, appear in B in the same relative order as in A.)
Exercise: What other sorts have this property?

Radix sort
Origin: Herman Hollerith's card-sorting machine for the 1890 U.S. Census. (See Appendix.)
Digit-by-digit sort.
Hollerith's original (bad) idea: sort on most-significant digit first.
Good idea: Sort on least-significant digit first with auxiliary stable sort.

Operation of radix sort

329        720        720        329
457        355        329        355
657        436        436        436
839        457        839        457
436        657        355        657
720        329        457        720
355        839        657        839
(input)  (sorted on (sorted on (sorted on
          1s digit)  10s digit) 100s digit)

Correctness of radix sort
Induction on digit position t.
Assume that the numbers are sorted by their low-order t − 1 digits.
Sort on digit t:
Two numbers that differ in digit t are correctly sorted.
Two numbers equal in digit t are put in the same order as in the input ⇒ correct order.
Example (t = hundreds digit): [720 329 436 839 355 457 657] → [329 355 436 457 657 720 839].

Analysis of radix sort
Assume counting sort is the auxiliary stable sort.
Sort n computer words of b bits each.
Each word can be viewed as having b/r base-2^r digits.
Example: a 32-bit word has four 8-bit digits.
r = 8 ⇒ b/r = 4 passes of counting sort on base-2^8 digits; or r = 16 ⇒ b/r = 2 passes of counting sort on base-2^16 digits.
How many passes should we make?

Analysis (continued)
Recall: Counting sort takes Θ(n + k) time to sort n numbers in the range from 0 to k − 1.
If each b-bit word is broken into r-bit pieces, each pass of counting sort takes Θ(n + 2^r) time. Since there are b/r passes, we have

    T(n, b) = Θ((b/r)(n + 2^r)) .

Choose r to minimize T(n, b):
Increasing r means fewer passes, but as r ≫ lg n, the time grows exponentially.

Choosing r

    T(n, b) = Θ((b/r)(n + 2^r))

Minimize T(n, b) by differentiating and setting to 0.
Or, just observe that we don't want 2^r ≫ n, and there's no harm asymptotically in choosing r as large as possible subject to this constraint.
Choosing r = lg n implies T(n, b) = Θ(bn/lg n).
For numbers in the range from 0 to n^d − 1, we have b = d lg n ⇒ radix sort runs in Θ(dn) time.
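
A compact LSD radix sort in Python, as a sketch: each pass is a stable counting sort on one base-2^r digit, implemented here with gather-from-buckets (b and r are parameters; b = 32, r = 8 matches the 4-pass example above).

```python
def radix_sort(A, b=32, r=8):
    """LSD radix sort of nonnegative b-bit ints, r bits per pass."""
    mask = (1 << r) - 1
    for shift in range(0, b, r):
        buckets = [[] for _ in range(1 << r)]
        for x in A:
            buckets[(x >> shift) & mask].append(x)   # stable within buckets
        A = [x for bucket in buckets for x in bucket]
    return A

print(radix_sort([329, 457, 657, 839, 436, 720, 355]))
# [329, 355, 436, 457, 657, 720, 839]
```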

Conclusions
In practice, radix sort is fast for large inputs, as well as simple to code and maintain.
Example (32-bit numbers):
At most 3 passes when sorting ≥ 2000 numbers.
Merge sort and quicksort do at least ⌈lg 2000⌉ = 11 passes.
Downside: Unlike quicksort, radix sort displays little locality of reference, and thus a well-tuned quicksort fares better on modern processors, which feature steep memory hierarchies.

Appendix: Punched-card technology
Herman Hollerith (1860-1929)
Punched cards
Hollerith's tabulating system
Operation of the sorter
Origin of radix sort
Modern IBM card
Web resources on punched-card technology

Herman Hollerith (1860-1929)
The 1880 U.S. Census took almost 10 years to process.
While a lecturer at MIT, Hollerith prototyped punched-card technology.
His machines, including a card sorter, allowed the 1890 census total to be reported in 6 weeks.
He founded the Tabulating Machine Company in 1896, which merged with other companies in 1911 and was renamed International Business Machines in 1924.

Punched cards
Punched card = data record.
Hole = value.
Algorithm = machine + human operator.
[Image removed due to copyright restrictions: replica of a punch card from the 1900 U.S. census. [Howells 2000]]

Hollerith's tabulating system
Pantograph card punch
Hand-press reader
Dial counters
Sorting box
[Image removed due to copyright restrictions: Hollerith tabulator and sorter, showing details of the mechanical counter and the tabulator press. Figure from [Howells 2000].]

Operation of the sorter
An operator inserts a card into the press.
Pins on the press reach through the punched holes to make electrical contact with mercury-filled cups beneath the card.
Whenever a particular digit value is punched, the lid of the corresponding sorting bin lifts.
The operator deposits the card into the bin and closes the lid.
When all cards have been processed, the front panel is opened, and the cards are collected in order, yielding one pass of a stable sort.
[Image removed due to copyright restrictions: Hollerith tabulator, pantograph, press, and sorter (http://www.columbia.edu/acis/history/census-tabulator.html).]

Origin of radix sort
Hollerith's original 1889 patent alludes to a most-significant-digit-first radix sort:
"The most complicated combinations can readily be counted with comparatively few counters or relays by first assorting the cards according to the first items entering into the combinations, then reassorting each group according to the second item entering into the combination, and so on, and finally counting on a few counters the last item of the combination for each group of cards."
Least-significant-digit-first radix sort seems to be a folk invention originated by machine operators.

Modern IBM card
One character per column.
[Image removed due to copyright restrictions. To view the image, visit http://www.museumwaalsdorp.nl/computer/images/ibmcard.jpg (produced by the WWW Virtual Punch-Card Server).]
So, that's why text windows have 80 columns!

Web resources on punched-card technology
Doug Jones's punched card index
Biography of Herman Hollerith
The 1890 U.S. Census
Early history of IBM
Pictures of Hollerith's inventions
Hollerith's patent application (borrowed from Gordon Bell's CyberMuseum)
Impact of punched cards on U.S. history

Introduction to Algorithms
6.046J/18.401J
LECTURE 6
Order Statistics
Randomized divide and conquer
Analysis of expected time
Worst-case linear-time order statistics
Analysis
Prof. Erik Demaine

Order statistics
Select the ith smallest of n elements (the element with rank i).
i = 1: minimum;
i = n: maximum;
i = ⌊(n+1)/2⌋ or ⌈(n+1)/2⌉: median.
Naive algorithm: Sort and index ith element.
Worst-case running time = Θ(n lg n) + Θ(1) = Θ(n lg n),
using merge sort or heapsort (not quicksort).

Randomized divide-and-conquer algorithm

RAND-SELECT(A, p, q, i)         ⊳ ith smallest of A[p . . q]
    if p = q then return A[p]
    r ← RAND-PARTITION(A, p, q)
    k ← r − p + 1               ⊳ k = rank(A[r])
    if i = k then return A[r]
    if i < k
        then return RAND-SELECT(A, p, r − 1, i)
        else return RAND-SELECT(A, r + 1, q, i − k)

[Figure: array A[p . . q] partitioned around the pivot A[r]; the low side A[p . . r] contains the k smallest elements.]

Example
Select the i = 7th smallest:
    6 10 13 5 8 3 2 11      (pivot = 6)
Partition:
    2 5 3 6 8 13 10 11      k = 4
Select the 7 − 4 = 3rd smallest recursively.
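
A minimal Python sketch of RAND-SELECT (not in-place, for readability; assumes distinct elements, as do all of today's analyses):

```python
import random

def rand_select(A, i):
    """Return the ith smallest element of A (i is 1-origin, as in the
    pseudocode)."""
    if len(A) == 1:
        return A[0]
    pivot = random.choice(A)
    lower = [x for x in A if x < pivot]
    upper = [x for x in A if x > pivot]
    k = len(lower) + 1                  # rank of the pivot
    if i == k:
        return pivot
    elif i < k:
        return rand_select(lower, i)
    else:
        return rand_select(upper, i - k)

print(rand_select([6, 10, 13, 5, 8, 3, 2, 11], 7))   # 11
```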



Intuition for analysis
(All our analyses today assume that all elements are distinct.)
Lucky:
    T(n) = T(9n/10) + Θ(n)
         = Θ(n)          (CASE 3 of the master theorem: n^{log_{10/9} 1} = n^0 = 1)
Unlucky:
    T(n) = T(n − 1) + Θ(n)
         = Θ(n²)         (arithmetic series)
Worse than sorting!

Analysis of expected time
The analysis follows that of randomized quicksort, but it's a little different.
Let T(n) = the random variable for the running time of RAND-SELECT on an input of size n, assuming random numbers are independent.
For k = 0, 1, …, n−1, define the indicator random variable
    X_k = 1 if PARTITION generates a k : n−k−1 split,
          0 otherwise.

Analysis (continued)
To obtain an upper bound, assume that the ith element always falls in the larger side of the partition:

    T(n) = T(max{0, n−1}) + Θ(n)     if 0 : n−1 split,
           T(max{1, n−2}) + Θ(n)     if 1 : n−2 split,
           ⋮
           T(max{n−1, 0}) + Θ(n)     if n−1 : 0 split

         = Σ_{k=0}^{n−1} X_k (T(max{k, n−k−1}) + Θ(n)) .

Calculating expectation

    E[T(n)] = E[ Σ_{k=0}^{n−1} X_k (T(max{k, n−k−1}) + Θ(n)) ]
              (Take expectations of both sides.)
            = Σ_{k=0}^{n−1} E[ X_k (T(max{k, n−k−1}) + Θ(n)) ]
              (Linearity of expectation.)
            = Σ_{k=0}^{n−1} E[X_k] · E[T(max{k, n−k−1}) + Θ(n)]
              (Independence of X_k from other random choices.)
            = (1/n) Σ_{k=0}^{n−1} E[T(max{k, n−k−1})] + (1/n) Σ_{k=0}^{n−1} Θ(n)
              (Linearity of expectation; E[X_k] = 1/n.)
            ≤ (2/n) Σ_{k=⌊n/2⌋}^{n−1} E[T(k)] + Θ(n)
              (Upper terms appear twice.)

Hairy recurrence
(But not quite as hairy as the quicksort one.)

    E[T(n)] ≤ (2/n) Σ_{k=⌊n/2⌋}^{n−1} E[T(k)] + Θ(n)

Prove: E[T(n)] ≤ cn for some constant c > 0.
The constant c can be chosen large enough so that E[T(n)] ≤ cn for the base cases.
Use fact: Σ_{k=⌊n/2⌋}^{n−1} k ≤ (3/8)n²   (exercise).

Substitution method

    E[T(n)] ≤ (2/n) Σ_{k=⌊n/2⌋}^{n−1} ck + Θ(n)
              (Substitute inductive hypothesis.)
            ≤ (2c/n)(3n²/8) + Θ(n)
              (Use fact.)
            = cn − (cn/4 − Θ(n))
              (Express as desired residual.)
            ≤ cn ,
              if c is chosen large enough so that cn/4 dominates the Θ(n).

Summary of randomized order-statistic selection
Works fast: linear expected time.
Excellent algorithm in practice.
But, the worst case is very bad: Θ(n²).
Q. Is there an algorithm that runs in linear time in the worst case?
A. Yes, due to Blum, Floyd, Pratt, Rivest, and Tarjan [1973].
IDEA: Generate a good pivot recursively.

Worst-case linear-time order statistics

SELECT(i, n)
1. Divide the n elements into groups of 5. Find the median of each 5-element group by rote.
2. Recursively SELECT the median x of the ⌊n/5⌋ group medians to be the pivot.
3. Partition around the pivot x. Let k = rank(x).
4. if i = k then return x
   elseif i < k
       then recursively SELECT the ith smallest element in the lower part
       else recursively SELECT the (i−k)th smallest element in the upper part

(Step 4 is the same as in RAND-SELECT.)

Choosing the pivot
1. Divide the n elements into groups of 5. Find the median of each 5-element group by rote.
2. Recursively SELECT the median x of the ⌊n/5⌋ group medians to be the pivot.
[Figure: the elements arranged in columns of 5 with each group's median highlighted; within a column, elements drawn above the median are lesser and those below are greater. The pivot x is the median of the group medians.]

Analysis
(Assume all elements are distinct.)
At least half the group medians are ≤ x, which is at least ⌈⌈n/5⌉/2⌉ = ⌈n/10⌉ group medians.
Therefore, at least 3⌈n/10⌉ elements are ≤ x.
Similarly, at least 3⌈n/10⌉ elements are ≥ x.

Minor simplification
For n ≥ 50, we have 3⌈n/10⌉ ≥ n/4.
Therefore, for n ≥ 50 the recursive call to SELECT in Step 4 is executed on at most 3n/4 elements.
Thus, the recurrence for the running time can assume that Step 4 takes time T(3n/4) in the worst case.
For n < 50, we know that the worst-case time is T(n) = Θ(1).

Developing the recurrence

SELECT(i, n)                                                  running time
1. Divide the n elements into groups of 5. Find the           Θ(n)
   median of each 5-element group by rote.
2. Recursively SELECT the median x of the ⌊n/5⌋ group         T(n/5)
   medians to be the pivot.
3. Partition around the pivot x. Let k = rank(x).             Θ(n)
4. if i = k then return x                                     T(3n/4)
   elseif i < k then recursively SELECT the ith smallest
       element in the lower part
   else recursively SELECT the (i−k)th smallest element
       in the upper part

Total: T(n) = T(n/5) + T(3n/4) + Θ(n).

Solving the recurrence

    T(n) = T(n/5) + T(3n/4) + Θ(n)

Substitution: T(n) ≤ cn.
    T(n) ≤ (1/5)cn + (3/4)cn + Θ(n)
         = (19/20)cn + Θ(n)
         = cn − (cn/20 − Θ(n))
         ≤ cn ,
if c is chosen large enough to handle both the Θ(n) and the initial conditions.
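
A Python sketch of SELECT with the median-of-medians pivot; the base-case cutoff of 50 matches the "minor simplification" above, and distinct elements are assumed as in the lecture.

```python
def select(A, i):
    """ith smallest of A (1-origin); worst-case linear time."""
    if len(A) <= 50:                       # base case: just sort
        return sorted(A)[i - 1]
    groups = [A[j:j + 5] for j in range(0, len(A), 5)]
    medians = [sorted(g)[len(g) // 2] for g in groups]
    x = select(medians, (len(medians) + 1) // 2)   # recursive pivot choice
    lower = [y for y in A if y < x]
    upper = [y for y in A if y > x]
    k = len(lower) + 1                     # rank(x)
    if i == k:
        return x
    elif i < k:
        return select(lower, i)
    else:
        return select(upper, i - k)

print(select(list(range(1000, 0, -1)), 42))   # 42
```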


Conclusions
Since the work at each level of recursion is a constant fraction (19/20) smaller, the work per level is a geometric series dominated by the linear work at the root.
In practice, this algorithm runs slowly, because the constant in front of n is large.
The randomized algorithm is far more practical.
Exercise: Why not divide into groups of 3?

Introduction to Algorithms
6.046J/18.401J
LECTURE 7
Hashing I
Direct-access tables
Resolving collisions by chaining
Choosing hash functions
Open addressing
Prof. Charles E. Leiserson

Symbol-table problem
Symbol table S holding n records:
[Figure: a record x with a key field key[x] and other fields containing satellite data.]
Operations on S:
INSERT(S, x)
DELETE(S, x)
SEARCH(S, k)
How should the data structure S be organized?

Direct-access table
IDEA: Suppose that the keys are drawn from the set U ⊆ {0, 1, …, m−1}, and keys are distinct. Set up an array T[0 . . m−1]:
    T[k] = x     if x ∈ K and key[x] = k,
           NIL   otherwise.
Then, operations take Θ(1) time.
Problem: The range of keys can be large:
64-bit numbers (which represent 18,446,744,073,709,551,616 different keys),
character strings (even larger!).

Hash functions
Solution: Use a hash function h to map the universe U of all keys into {0, 1, …, m−1}.
[Figure: keys k1, …, k5 from a set S ⊆ U mapped by h into slots of table T; here h(k2) = h(k5).]
As each key is inserted, h maps it to a slot of T.
When a record to be inserted maps to an already occupied slot in T, a collision occurs.

Resolving collisions by chaining
Link records in the same slot into a list.
[Figure: keys 49, 86, and 52 all hash to slot i of T and are chained together: h(49) = h(86) = h(52) = i.]
Worst case: Every key hashes to the same slot. Access time = Θ(n) if |S| = n.

Average-case analysis of chaining
We make the assumption of simple uniform hashing:
Each key k ∈ S is equally likely to be hashed to any slot of table T, independent of where other keys are hashed.
Let n be the number of keys in the table, and let m be the number of slots.
Define the load factor of T to be
    α = n/m = average number of keys per slot.
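
A minimal chained hash table in Python, as a sketch; h(k) = k mod m stands in for whatever hash function is chosen (see the division method later in this lecture):

```python
class ChainedHashTable:
    """Collision resolution by chaining: one list (chain) per slot."""
    def __init__(self, m):
        self.m = m
        self.slots = [[] for _ in range(m)]

    def _h(self, k):
        return k % self.m

    def insert(self, k, v):
        self.slots[self._h(k)].append((k, v))

    def search(self, k):
        for key, v in self.slots[self._h(k)]:   # expected Theta(1 + alpha)
            if key == k:
                return v
        return None

    def delete(self, k):
        chain = self.slots[self._h(k)]
        self.slots[self._h(k)] = [(key, v) for key, v in chain if key != k]

t = ChainedHashTable(8)
t.insert(49, "a"); t.insert(86, "b")
print(t.search(86))   # "b"
```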

Search cost
The expected time for an unsuccessful search for a record with a given key is Θ(1 + α):
Θ(1) to apply the hash function and access the slot, plus Θ(α) to search the list.
Expected search time = Θ(1) if α = O(1), or equivalently, if n = O(m).
A successful search has the same asymptotic bound, but a rigorous argument is a little more complicated. (See textbook.)

Choosing a hash function
The assumption of simple uniform hashing is hard to guarantee, but several common techniques tend to work well in practice as long as their deficiencies can be avoided.
Desiderata:
A good hash function should distribute the keys uniformly into the slots of the table.
Regularity in the key distribution should not affect this uniformity.

Division method
Assume all keys are integers, and define
    h(k) = k mod m.
Deficiency: Don't pick an m that has a small divisor d. A preponderance of keys that are congruent modulo d can adversely affect uniformity.
Extreme deficiency: If m = 2^r, then the hash doesn't even depend on all the bits of k:
If k = 1011000111011010₂ and r = 6, then h(k) = 011010₂.

Division method (continued)
    h(k) = k mod m.
Pick m to be a prime not too close to a power of 2 or 10 and not otherwise used prominently in the computing environment.
Annoyance: Sometimes, making the table size a prime is inconvenient.
But, this method is popular, although the next method we'll see is usually superior.

Multiplication method
Assume that all keys are integers, m = 2^r, and our computer has w-bit words. Define
    h(k) = (A·k mod 2^w) rsh (w − r),
where rsh is the bitwise right-shift operator and A is an odd integer in the range 2^{w−1} < A < 2^w.
Don't pick A too close to 2^{w−1} or 2^w.
Multiplication modulo 2^w is fast compared to division.
The rsh operator is fast.
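
A one-line Python sketch of the method. The constant A = 2654435769 (an odd integer near 2^32/φ, a common choice) is an assumption for illustration; the slides only require an odd A with 2^{w−1} < A < 2^w.

```python
def mult_hash(k, A=2654435769, w=32, r=10):
    """h(k) = (A*k mod 2^w) rsh (w - r); here m = 2^r = 1024 slots."""
    return ((A * k) & ((1 << w) - 1)) >> (w - r)

print(mult_hash(123456))   # a slot in {0, ..., 1023}
```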

Multiplication method example
    h(k) = (A·k mod 2^w) rsh (w − r)
Suppose that m = 8 = 2³ and that our computer has w = 7-bit words:
    A = 1011001₂ = 89
    k = 1101011₂ = 107
    A·k = 10010100110011₂
A·k mod 2⁷ = 0110011₂, and 0110011₂ rsh 4 = 011₂, so h(k) = 011₂ = 3.
[Figure: "modular wheel" showing multiples of A (A, 2A, 3A, …) wrapping around modulo 2^w.]

Resolving collisions by open addressing
No storage is used outside of the hash table itself.
Insertion systematically probes the table until an empty slot is found.
The hash function depends on both the key and probe number:
    h : U × {0, 1, …, m−1} → {0, 1, …, m−1}.
The probe sequence ⟨h(k,0), h(k,1), …, h(k,m−1)⟩ should be a permutation of {0, 1, …, m−1}.
The table may fill up, and deletion is difficult (but not impossible).

Example of open addressing
Insert key k = 496 into a table T already containing 586, 133, 204, and 481:
0. Probe h(496, 0) — collision (the slot is occupied).
1. Probe h(496, 1) — collision again.
2. Probe h(496, 2) — empty slot; insert 496 there.
Search for key k = 496: search uses the same probe sequence, terminating successfully if it finds the key and unsuccessfully if it encounters an empty slot.

Probing strategies
Linear probing:
Given an ordinary hash function h(k), linear probing uses the hash function
    h(k, i) = (h(k) + i) mod m.
This method, though simple, suffers from primary clustering, where long runs of occupied slots build up, increasing the average search time. Moreover, the long runs of occupied slots tend to get longer.

Probing strategies
Double hashing:
Given two ordinary hash functions h1(k) and h2(k), double hashing uses the hash function
    h(k, i) = (h1(k) + i·h2(k)) mod m.
This method generally produces excellent results, but h2(k) must be relatively prime to m. One way is to make m a power of 2 and design h2(k) to produce only odd numbers.
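
A sketch of that design in Python: m is a power of 2, and h2 is forced to be odd, so gcd(h2(k), m) = 1 and the probe sequence is a permutation of all m slots. The particular h1 and h2 below are illustrative choices, not prescribed by the slides.

```python
def probe_sequence(k, m):
    """Double-hashing probe sequence h(k,i) = (h1(k) + i*h2(k)) mod m,
    with m a power of 2 and h2 always odd."""
    h1 = k % m
    h2 = (k // m) | 1            # setting the low bit makes h2 odd
    return [(h1 + i * h2) % m for i in range(m)]

seq = probe_sequence(496, 8)
assert sorted(seq) == list(range(8))   # a permutation of {0, ..., m-1}
print(seq)
```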

Analysis of open addressing
We make the assumption of uniform hashing:
Each key is equally likely to have any one of the m! permutations as its probe sequence.
Theorem. Given an open-addressed hash table with load factor α = n/m < 1, the expected number of probes in an unsuccessful search is at most 1/(1−α).

Proof of the theorem
Proof.
At least one probe is always necessary.
With probability n/m, the first probe hits an occupied slot, and a second probe is necessary.
With probability (n−1)/(m−1), the second probe hits an occupied slot, and a third probe is necessary.
With probability (n−2)/(m−2), the third probe hits an occupied slot, etc.
Observe that
    (n−i)/(m−i) < n/m = α     for i = 1, 2, …, n.

Proof (continued)
Therefore, the expected number of probes is
    1 + (n/m)(1 + ((n−1)/(m−1))(1 + ((n−2)/(m−2))(⋯(1 + 1/(m−n+1))⋯)))
    ≤ 1 + α(1 + α(1 + α(⋯(1 + α)⋯)))
    ≤ 1 + α + α² + α³ + ⋯
    = Σ_{i=0}^{∞} α^i
    = 1/(1−α) .
The textbook has a more rigorous proof and an analysis of successful searches.

Implications of the theorem
If α is constant, then accessing an open-addressed hash table takes constant time.
If the table is half full, then the expected number of probes is 1/(1−0.5) = 2.
If the table is 90% full, then the expected number of probes is 1/(1−0.9) = 10.

Introduction to Algorithms
6.046J/18.401J
LECTURE 8
Hashing II
Universal hashing
Universality theorem
Constructing a set of universal hash functions
Perfect hashing
Prof. Charles E. Leiserson

A weakness of hashing
Problem: For any hash function h, a set of keys exists that can cause the average access time of a hash table to skyrocket.
An adversary can pick all keys from {k ∈ U : h(k) = i} for some slot i.
IDEA: Choose the hash function at random, independently of the keys.
Even if an adversary can see your code, he or she cannot find a bad set of keys, since he or she doesn't know exactly which hash function will be chosen.

Universal hashing
Definition. Let U be a universe of keys, and let H be a finite collection of hash functions, each mapping U to {0, 1, …, m−1}. We say H is universal if for all x, y ∈ U, where x ≠ y, we have |{h ∈ H : h(x) = h(y)}| = |H|/m.
That is, the chance of a collision between x and y is 1/m if we choose h randomly from H.
[Figure: the set H with the subset {h : h(x) = h(y)} of size |H|/m shaded.]

Universality is good
Theorem. Let h be a hash function chosen (uniformly) at random from a universal set H of hash functions. Suppose h is used to hash n arbitrary keys into the m slots of a table T. Then, for a given key x, we have
    E[#collisions with x] < n/m.

Proof of theorem
Proof. Let C_x be the random variable denoting the total number of collisions of keys in T with x, and let
    c_xy = 1 if h(x) = h(y),
           0 otherwise.
Note: E[c_xy] = 1/m and C_x = Σ_{y ∈ T−{x}} c_xy .

Proof (continued)

    E[C_x] = E[ Σ_{y ∈ T−{x}} c_xy ]
             (Take expectation of both sides.)
           = Σ_{y ∈ T−{x}} E[c_xy]
             (Linearity of expectation.)
           = Σ_{y ∈ T−{x}} 1/m
             (E[c_xy] = 1/m.)
           = (n − 1)/m .
             (Algebra.)

Constructing a set of universal hash functions
Let m be prime. Decompose key k into r + 1 digits, each with value in the set {0, 1, …, m−1}. That is, let k = ⟨k0, k1, …, kr⟩, where 0 ≤ ki < m.
Randomized strategy:
Pick a = ⟨a0, a1, …, ar⟩ where each ai is chosen randomly from {0, 1, …, m−1}.
Define h_a(k) = Σ_{i=0}^{r} a_i k_i mod m.     (Dot product, modulo m.)
How big is H = {h_a}? |H| = m^{r+1}. REMEMBER THIS!
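
A Python sketch of drawing one h_a from this family; keys are supplied as (r+1)-digit tuples with digits in {0, …, m−1}, and m must be prime for the universality proof below to apply.

```python
import random

def make_dot_product_hash(m, r):
    """Draw a random h_a with h_a(k) = (sum of a_i * k_i) mod m."""
    a = [random.randrange(m) for _ in range(r + 1)]
    def h(key_digits):
        assert len(key_digits) == r + 1
        return sum(ai * ki for ai, ki in zip(a, key_digits)) % m
    return h

h = make_dot_product_hash(m=7, r=2)   # |H| = 7^3 possible functions
print(h((3, 1, 4)))                   # a slot in {0, ..., 6}
```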


Universality of dot-product hash functions
Theorem. The set H = {h_a} is universal.
Proof. Suppose that x = ⟨x0, x1, …, xr⟩ and y = ⟨y0, y1, …, yr⟩ are distinct keys. Thus, they differ in at least one digit position, wlog position 0. For how many h_a ∈ H do x and y collide?
We must have h_a(x) = h_a(y), which implies that
    Σ_{i=0}^{r} a_i x_i ≡ Σ_{i=0}^{r} a_i y_i   (mod m) .

Proof (continued)
Equivalently, we have
    Σ_{i=0}^{r} a_i (x_i − y_i) ≡ 0   (mod m)
or
    a_0 (x_0 − y_0) + Σ_{i=1}^{r} a_i (x_i − y_i) ≡ 0   (mod m) ,
which implies that
    a_0 (x_0 − y_0) ≡ − Σ_{i=1}^{r} a_i (x_i − y_i)   (mod m) .

Fact from number theory
Theorem. Let m be prime. For any z ∈ Z_m such that z ≠ 0, there exists a unique z⁻¹ ∈ Z_m such that
    z · z⁻¹ ≡ 1 (mod m).
Example: m = 7.
    z   : 1 2 3 4 5 6
    z⁻¹ : 1 4 5 2 3 6

Back to the proof
We have
    a_0 (x_0 − y_0) ≡ − Σ_{i=1}^{r} a_i (x_i − y_i)   (mod m) ,
and since x_0 ≠ y_0, an inverse (x_0 − y_0)⁻¹ must exist, which implies that
    a_0 ≡ (− Σ_{i=1}^{r} a_i (x_i − y_i)) · (x_0 − y_0)⁻¹   (mod m) .
Thus, for any choices of a_1, a_2, …, a_r, exactly one choice of a_0 causes x and y to collide.

Proof (completed)
Q. How many h_a's cause x and y to collide?
A. There are m choices for each of a_1, a_2, …, a_r, but once these are chosen, exactly one choice for a_0 causes x and y to collide, namely
    a_0 = ((− Σ_{i=1}^{r} a_i (x_i − y_i)) · (x_0 − y_0)⁻¹) mod m .
Thus, the number of h_a's that cause x and y to collide is m^r · 1 = m^r = |H|/m.

Perfect hashing
Given a set of n keys, construct a static hash table of size m = O(n) such that SEARCH takes Θ(1) time in the worst case.
IDEA: Two-level scheme with universal hashing at both levels. No collisions at level 2!
[Figure: a level-1 table T whose slots point to small level-2 tables; e.g., keys 14 and 27 both hash to slot 1 of T (h(14) = h(27) = 1) and are stored without collision in a level-2 table S1, key 26 occupies S4, and keys 40, 37, and 22 hash to slot 6 and are stored in S6. Each level-2 table records its own size m and hash parameters a.]

Collisions at level 2
Theorem. Let H be a class of universal hash functions for a table of size m = n². Then, if we use a random h ∈ H to hash n keys into the table, the expected number of collisions is at most 1/2.
Proof. By the definition of universality, the probability that 2 given keys in the table collide under h is 1/m = 1/n². Since there are C(n, 2) pairs of keys that can possibly collide, the expected number of collisions is
    C(n, 2) · 1/n² = (n(n−1)/2) · 1/n² < 1/2 .

No collisions at level 2
Corollary. The probability of no collisions is at least 1/2.
Proof. Markov's inequality says that for any nonnegative random variable X, we have
    Pr{X ≥ t} ≤ E[X]/t.
Applying this inequality with t = 1, we find that the probability of 1 or more collisions is at most 1/2.
Thus, just by testing random hash functions in H, we'll quickly find one that works.

Analysis of storage
For the level-1 hash table T, choose m = n, and let n_i be the random variable for the number of keys that hash to slot i in T. By using n_i² slots for the level-2 hash table S_i, the expected total storage required for the two-level scheme is therefore
    E[ Σ_{i=0}^{m−1} Θ(n_i²) ] = Θ(n) ,
since the analysis is identical to the analysis from recitation of the expected running time of bucket sort. (For a probability bound, apply Markov.)
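
A self-contained Python sketch of the two-level scheme. The hash family h(k) = ((a·k + b) mod p) mod size is an illustrative universal family standing in for the lecture's dot-product construction; the inner loop redraws a level-2 function until it is collision-free, which by the Markov argument takes O(1) expected tries.

```python
import random

def _next_prime(p):
    def is_prime(q):
        return q > 1 and all(q % d for d in range(2, int(q ** 0.5) + 1))
    while not is_prime(p):
        p += 1
    return p

def build_perfect_table(keys):
    """Level 1: m = n slots; slot i gets a level-2 table of n_i^2 slots."""
    n, p = len(keys), _next_prime(max(keys) + 1)

    def draw(size):
        a, b = random.randrange(1, p), random.randrange(p)
        return lambda k: ((a * k + b) % p) % size

    h1 = draw(n)
    buckets = [[] for _ in range(n)]
    for k in keys:
        buckets[h1(k)].append(k)

    secondary = []
    for bucket in buckets:
        size = len(bucket) ** 2 or 1
        while True:
            h2, table, ok = draw(size), [None] * size, True
            for k in bucket:
                if table[h2(k)] is not None:
                    ok = False            # level-2 collision: redraw
                    break
                table[h2(k)] = k
            if ok:
                secondary.append((h2, table))
                break
    return h1, secondary

def search(structure, k):
    h1, secondary = structure
    h2, table = secondary[h1(k)]
    return table[h2(k)] == k              # worst-case Theta(1)

S = build_perfect_table([40, 37, 22, 14, 27, 26, 86, 31])
print(search(S, 27), search(S, 99))       # True False
```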

Introduction to Algorithms
6.046J/18.401J
LECTURE 9
Randomly built binary search trees
Expected node depth
Analyzing height
Convexity lemma
Jensen's inequality
Exponential height
Post mortem
Prof. Erik Demaine

Binary-search-tree sort

T ← ∅                        ⊳ Create an empty BST
for i = 1 to n
    do TREE-INSERT(T, A[i])
Perform an inorder tree walk of T.

Example: A = [3 1 8 2 6 7 5]
[Figure: the resulting BST has root 3 with children 1 and 8; 1's right child is 2; 8's left child is 6; 6's children are 5 and 7.]
Tree-walk time = O(n), but how long does it take to build the BST?
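
A Python sketch of BST sort, assuming distinct keys go right on ties as in the standard TREE-INSERT:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def tree_insert(root, key):
    """Standard (unbalanced) BST insertion."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = tree_insert(root.left, key)
    else:
        root.right = tree_insert(root.right, key)
    return root

def bst_sort(A):
    root = None
    for x in A:                  # build the BST
        root = tree_insert(root, x)
    out = []
    def inorder(t):              # inorder walk emits keys in sorted order
        if t:
            inorder(t.left); out.append(t.key); inorder(t.right)
    inorder(root)
    return out

print(bst_sort([3, 1, 8, 2, 6, 7, 5]))   # [1, 2, 3, 5, 6, 7, 8]
```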

Analysis of BST sort
BST sort performs the same comparisons as quicksort, but in a different order!
Each key is compared with the root 3, splitting {1, 2} from {8, 6, 7, 5}; then {2} is compared with 1, and {6, 7, 5} with 8; then {5} and {7} with 6. These are exactly the comparisons quicksort makes when it picks the first element as the pivot.
The expected time to build the tree is asymptotically the same as the running time of quicksort.

Node depth
The depth of a node = the number of comparisons made during TREE-INSERT. Assuming all input permutations are equally likely, we have

    Average node depth = (1/n) Σ_{i=1}^{n} E[# comparisons to insert node i]
                       = (1/n) O(n lg n)     (quicksort analysis)
                       = O(lg n) .

Expected tree height
But, average node depth of a randomly built BST = O(lg n) does not necessarily mean that its expected height is also O(lg n) (although it is).
Example: a tree where most nodes have depth ≈ lg n but one path has length √n:
    Ave. depth ≤ (1/n)(n · lg n + √n · √n/2) = O(lg n) ,
yet h = √n.

Height of a randomly built binary search tree
Outline of the analysis:
Prove Jensen's inequality, which says that f(E[X]) ≤ E[f(X)] for any convex function f and random variable X.
Analyze the exponential height of a randomly built BST on n nodes, which is the random variable Y_n = 2^{X_n}, where X_n is the random variable denoting the height of the BST.
Prove that 2^{E[X_n]} ≤ E[2^{X_n}] = E[Y_n] = O(n³), and hence that E[X_n] = O(lg n).

Convex functions
A function f : R → R is convex if for all α, β ≥ 0 such that α + β = 1, we have
    f(αx + βy) ≤ αf(x) + βf(y)
for all x, y ∈ R.
[Figure: the chord from (x, f(x)) to (y, f(y)) lies above the graph of f: αf(x) + βf(y) ≥ f(αx + βy).]

Convexity lemma
Lemma. Let f : R → R be a convex function, and let α_1, α_2, …, α_n be nonnegative real numbers such that Σ_k α_k = 1. Then, for any real numbers x_1, x_2, …, x_n, we have
    f( Σ_{k=1}^{n} α_k x_k ) ≤ Σ_{k=1}^{n} α_k f(x_k) .
Proof. By induction on n. For n = 1, we have α_1 = 1, and hence f(α_1 x_1) ≤ α_1 f(x_1) trivially.

Proof (continued)
Inductive step:

    f( Σ_{k=1}^{n} α_k x_k ) = f( α_n x_n + (1 − α_n) Σ_{k=1}^{n−1} (α_k/(1 − α_n)) x_k )
                               (Algebra.)
                             ≤ α_n f(x_n) + (1 − α_n) f( Σ_{k=1}^{n−1} (α_k/(1 − α_n)) x_k )
                               (Convexity.)
                             ≤ α_n f(x_n) + (1 − α_n) Σ_{k=1}^{n−1} (α_k/(1 − α_n)) f(x_k)
                               (Induction.)
                             = Σ_{k=1}^{n} α_k f(x_k) .
                               (Algebra.)

Convexity lemma: infinite case
Lemma. Let f : R → R be a convex function, and let α_1, α_2, … be nonnegative real numbers such that Σ_k α_k = 1. Then, for any real numbers x_1, x_2, …, we have
    f( Σ_{k=1}^{∞} α_k x_k ) ≤ Σ_{k=1}^{∞} α_k f(x_k) ,
assuming that these summations exist.
Proof. By the convexity lemma, for any n ≥ 1,
    f( (1/Σ_{i=1}^{n} α_i) Σ_{k=1}^{n} α_k x_k ) ≤ (1/Σ_{i=1}^{n} α_i) Σ_{k=1}^{n} α_k f(x_k) .
Taking the limit of both sides as n → ∞ (and because the inequality is not strict) yields
    f( Σ_{k=1}^{∞} α_k x_k ) ≤ Σ_{k=1}^{∞} α_k f(x_k) .

Jensen's inequality
Lemma. Let f be a convex function, and let X be a random variable. Then, f(E[X]) ≤ E[f(X)].
Proof.

    f(E[X]) = f( Σ_{k=−∞}^{∞} k · Pr{X = k} )
              (Definition of expectation.)
            ≤ Σ_{k=−∞}^{∞} f(k) · Pr{X = k}
              (Convexity lemma, infinite case.)
            = E[f(X)] .
              (Tricky step, but true: think about it.)

Analysis of BST height
Let X_n be the random variable denoting the height of a randomly built binary search tree on n nodes, and let Y_n = 2^{X_n} be its exponential height.
If the root of the tree has rank k, then
    X_n = 1 + max{X_{k−1}, X_{n−k}} ,
since each of the left and right subtrees of the root are randomly built. Hence, we have
    Y_n = 2 · max{Y_{k−1}, Y_{n−k}} .

Analysis (continued)
Define the indicator random variable Z_{nk} as
    Z_{nk} = 1 if the root has rank k,
             0 otherwise.
Thus, Pr{Z_{nk} = 1} = E[Z_{nk}] = 1/n, and
    Y_n = Σ_{k=1}^{n} Z_{nk} (2 · max{Y_{k−1}, Y_{n−k}}) .

Exponential height recurrence

    E[Y_n] = E[ Σ_{k=1}^{n} Z_{nk} (2 · max{Y_{k−1}, Y_{n−k}}) ]
             (Take expectation of both sides.)
           = Σ_{k=1}^{n} E[ Z_{nk} (2 · max{Y_{k−1}, Y_{n−k}}) ]
             (Linearity of expectation.)
           = 2 Σ_{k=1}^{n} E[Z_{nk}] · E[max{Y_{k−1}, Y_{n−k}}]
             (Independence of the rank of the root from the ranks of subtree roots.)
           ≤ (2/n) Σ_{k=1}^{n} E[Y_{k−1} + Y_{n−k}]
             (The max of two nonnegative numbers is at most their sum, and E[Z_{nk}] = 1/n.)
           = (4/n) Σ_{k=0}^{n−1} E[Y_k]
             (Each term appears twice, and reindex.)

Solving the recurrence
Use substitution to show that E[Y_n] ≤ cn³ for some positive constant c, which we can pick sufficiently large to handle the initial conditions.

    E[Y_n] = (4/n) Σ_{k=0}^{n−1} E[Y_k]
           ≤ (4/n) Σ_{k=0}^{n−1} ck³
             (Substitution.)
           ≤ (4c/n) ∫₀ⁿ x³ dx
             (Integral method.)
           = (4c/n)(n⁴/4)
             (Solve the integral.)
           = cn³ .
             (Algebra.)

The grand finale
Putting it all together, we have
    2^{E[X_n]} ≤ E[2^{X_n}]
                 (Jensen's inequality, since f(x) = 2^x is convex.)
               = E[Y_n]
                 (Definition.)
               ≤ cn³ .
                 (What we just showed.)
Taking the lg of both sides yields
    E[X_n] ≤ 3 lg n + O(1) .

Post mortem
Q. Does the analysis have to be this hard?
Q. Why bother with analyzing exponential height?
Q. Why not just develop the recurrence on
    X_n = 1 + max{X_{k−1}, X_{n−k}}
directly?

Post mortem (continued)
A. The inequality
    max{a, b} ≤ a + b
provides a poor upper bound, since the RHS approaches the LHS slowly as |a − b| increases. The bound
    max{2^a, 2^b} ≤ 2^a + 2^b
allows the RHS to approach the LHS far more quickly as |a − b| increases. By using the convexity of f(x) = 2^x via Jensen's inequality, we can manipulate the sum of exponentials, resulting in a tight analysis.

Thought exercises
See what happens when you try to do the analysis on X_n directly.
Try to understand better why the proof uses an exponential. Will a quadratic do?
See if you can find a simpler argument. (This argument is a little simpler than the one in the book; I hope it's correct!)

Introduction to Algorithms
6.046J/18.401J
LECTURE 10
Balanced Search Trees
Red-black trees
Height of a red-black tree
Rotations
Insertion
Prof. Erik Demaine

Balanced search trees
Balanced search tree: A search-tree data structure for which a height of O(lg n) is guaranteed when implementing a dynamic set of n items.
Examples:
AVL trees
2-3 trees
2-3-4 trees
B-trees
Red-black trees

Red-black trees
This data structure requires an extra one-bit color field in each node.
Red-black properties:
1. Every node is either red or black.
2. The root and leaves (NILs) are black.
3. If a node is red, then its parent is black.
4. All simple paths from any node x to a descendant leaf have the same number of black nodes = black-height(x).

Example of a red-black tree
[Figure: a red-black tree of height h = 4 with root 7; 7's children are 3 and 18; 18's children are 10 and 22; 10's children are 8 and 11; 22's right child is 26; every empty child pointer is a black NIL leaf. Black-heights: bh = 2 at 7 and 18, bh = 1 at 10 and 22, bh = 0 at the leaves.]
The tree illustrates each red-black property:
1. Every node is either red or black.
2. The root and leaves (NILs) are black.
3. If a node is red, then its parent is black.
4. All simple paths from any node x to a descendant leaf have the same number of black nodes = black-height(x).

Height of a red-black tree
Theorem. A red-black tree with n keys has height
    h ≤ 2 lg(n + 1).
Proof. (The book uses induction. Read carefully.)
INTUITION:
Merge red nodes into their black parents.
[Figure: red nodes merged step by step into their black parents, shrinking the tree's height from h to h′.]
This process produces a tree in which each node has 2, 3, or 4 children.
The 2-3-4 tree has uniform depth h′ of leaves.

Proof (continued)
We have h′ ≥ h/2, since at most half the nodes on any root-to-leaf path are red.
The number of leaves in each tree is n + 1, so
    n + 1 ≥ 2^{h′}
⇒ lg(n + 1) ≥ h′ ≥ h/2
⇒ h ≤ 2 lg(n + 1).

Query operations

Corollary. The queries SEARCH, MIN,


MAX, SUCCESSOR, and PREDECESSOR
all run in O(lg n) time on a red-black
tree with n nodes.

October 19, 2005

Copyright 2001-5 by Erik D. Demaine and Charles E. Leiserson

L7.16

Modifying operations

The operations INSERT and DELETE cause


modifications to the red-black tree:
the operation itself,

color changes,
restructuring the links of the tree via
rotations.

October 19, 2005

Copyright 2001-5 by Erik D. Demaine and Charles E. Leiserson

L7.17

Rotations

[Figure: RIGHT-ROTATE(B) turns the tree with root B and left child A into the tree with root A and right child B; LEFT-ROTATE(A) is the inverse. The subtrees α, β, γ keep their left-to-right order.]

Rotations maintain the inorder ordering of keys:
    a ∈ α, b ∈ β, c ∈ γ  ⇒  a ≤ A ≤ b ≤ B ≤ c.

A rotation can be performed in O(1) time.
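To make the O(1) pointer surgery concrete, here is a minimal Python sketch of LEFT-ROTATE (the Node class, the tree object T, and the nil sentinel are assumptions of this sketch, not part of the lecture):

    class Node:
        def __init__(self, key, nil=None):
            self.key = key
            self.left = self.right = self.parent = nil
            self.color = "RED"

    def left_rotate(T, x):
        # Rotate the edge (x, x.right) to the left: O(1) pointer changes.
        y = x.right
        x.right = y.left              # turn y's left subtree into x's right subtree
        if y.left is not T.nil:
            y.left.parent = x
        y.parent = x.parent           # link x's parent to y
        if x.parent is T.nil:
            T.root = y
        elif x is x.parent.left:
            x.parent.left = y
        else:
            x.parent.right = y
        y.left = x                    # put x on y's left
        x.parent = y

RIGHT-ROTATE is the mirror image (swap "left" and "right" throughout); neither rotation changes the inorder order of keys.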

Insertion into a red-black tree

IDEA: Insert x in tree. Color x red. Only red-black property 3 might be violated. Move the violation up the tree by recoloring until it can be fixed with rotations and recoloring.

Example (on the tree with keys 7, 3, 18, 10, 8, 11, 22, 26):
• Insert x = 15.
• Recolor, moving the violation up the tree.
• RIGHT-ROTATE(18).
• LEFT-ROTATE(7) and recolor.
[Figure: the tree after each step; after the final rotation and recoloring no violation remains.]

Pseudocode

RB-INSERT(T, x)
    TREE-INSERT(T, x)
    color[x] ← RED    ⊳ only RB property 3 can be violated
    while x ≠ root[T] and color[p[x]] = RED
        do if p[x] = left[p[p[x]]]
              then y ← right[p[p[x]]]    ⊳ y = aunt/uncle of x
                   if color[y] = RED
                      then ⟨Case 1⟩
                      else if x = right[p[x]]
                              then ⟨Case 2⟩    ⊳ Case 2 falls into Case 3
                           ⟨Case 3⟩
              else ⟨"then" clause with "left" and "right" swapped⟩
    color[root[T]] ← BLACK

Graphical notation

Let a triangle denote a subtree with a black root. All such triangles have the same black-height.

Case 1

[Figure: grandparent C (black) with red parent A and red uncle D; x is a red child of A. (Or, the children of A are swapped.) Recolor: A and D become black, C becomes red, and C becomes the new x.]

Push C's black onto A and D, and recurse, since C's parent may be red.

Case 2

[Figure: black grandparent C with red left child A whose red right child B is x. LEFT-ROTATE(A) makes B the child of C with A below it.]

Transform to Case 3.

Case 3

[Figure: black grandparent C with red left child B whose red left child is A; the uncle y is black. RIGHT-ROTATE(C) and recolor: B becomes the black root of the subtree with children A and C.]

Done! No more violations of RB property 3 are possible.

Analysis

• Go up the tree performing Case 1, which only recolors nodes.
• If Case 2 or Case 3 occurs, perform 1 or 2 rotations, and terminate.

Running time: O(lg n) with O(1) rotations.

RB-DELETE has the same asymptotic running time and number of rotations as RB-INSERT (see textbook).
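For reference, here is a Python sketch of the fixup loop from the pseudocode above. It assumes the left_rotate from the earlier sketch plus its mirror right_rotate, nodes with a color field, and a tree T with root and a black nil sentinel (all assumptions of this sketch, not the lecture's own code):

    RED, BLACK = "RED", "BLACK"

    def rb_insert_fixup(T, x):
        # Restore property 3 after inserting the red node x.
        while x is not T.root and x.parent.color == RED:
            if x.parent is x.parent.parent.left:
                y = x.parent.parent.right          # y = aunt/uncle of x
                if y.color == RED:                 # Case 1: recolor, move up 2 levels
                    x.parent.color = BLACK
                    y.color = BLACK
                    x.parent.parent.color = RED
                    x = x.parent.parent
                else:
                    if x is x.parent.right:        # Case 2: rotate into Case 3
                        x = x.parent
                        left_rotate(T, x)
                    x.parent.color = BLACK         # Case 3: rotate, recolor, done
                    x.parent.parent.color = RED
                    right_rotate(T, x.parent.parent)
            else:                                  # mirror image: left/right swapped
                y = x.parent.parent.left
                if y.color == RED:
                    x.parent.color = BLACK
                    y.color = BLACK
                    x.parent.parent.color = RED
                    x = x.parent.parent
                else:
                    if x is x.parent.left:
                        x = x.parent
                        right_rotate(T, x)
                    x.parent.color = BLACK
                    x.parent.parent.color = RED
                    left_rotate(T, x.parent.parent)
        T.root.color = BLACK

Each iteration either recolors (Case 1) and moves two levels up, or performs at most 2 rotations and terminates, matching the O(lg n) time and O(1) rotations bound.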

Introduction to Algorithms
6.046J/18.401J

LECTURE 11
Augmenting Data Structures
Dynamic order statistics
Methodology
Interval trees

Prof. Charles E. Leiserson
October 24, 2005

Dynamic order statistics

OS-SELECT(i, S): returns the i th smallest element in the dynamic set S.
OS-RANK(x, S): returns the rank of x ∈ S in the sorted order of S's elements.

IDEA: Use a red-black tree for the set S, but keep subtree sizes in the nodes. Each node is drawn with its key above its subtree size.

Example of an OS-tree

[Figure: an OS-tree with key/size pairs M/9 at the root; children C/5 and P/3; below them A/1 and F/3 under C, N/1 and Q/1 under P; and D/1, H/1 as the children of F.]

size[x] = size[left[x]] + size[right[x]] + 1

Selection

Implementation trick: Use a sentinel (dummy record) for NIL such that size[NIL] = 0.

OS-SELECT(x, i)    ⊳ i th smallest element in the subtree rooted at x
    k ← size[left[x]] + 1    ⊳ k = rank(x)
    if i = k then return x
    if i < k
        then return OS-SELECT(left[x], i)
        else return OS-SELECT(right[x], i − k)

(OS-RANK is in the textbook.)
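The same recursion in runnable Python (the Node and NIL classes here are illustrative assumptions; the example tree matches the OS-tree above):

    class _Nil:
        size = 0
    NIL = _Nil()

    class Node:
        def __init__(self, key, left=NIL, right=NIL):
            self.key, self.left, self.right = key, left, right
            self.size = 1 + left.size + right.size    # subtree size

    def os_select(x, i):
        # Return the i-th smallest node in the subtree rooted at x.
        k = x.left.size + 1                           # k = rank(x) in its subtree
        if i == k:
            return x
        if i < k:
            return os_select(x.left, i)
        return os_select(x.right, i - k)

    # The OS-tree above:
    tree = Node('M',
                Node('C', Node('A'), Node('F', Node('D'), Node('H'))),
                Node('P', Node('N'), Node('Q')))
    assert os_select(tree, 5).key == 'H'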

Example

OS-SELECT(root, 5) on the OS-tree above:
• at M: i = 5, k = 6 ⇒ go left;
• at C: i = 5, k = 2 ⇒ go right with i = 3;
• at F: i = 3, k = 2 ⇒ go right with i = 1;
• at H: i = 1, k = 1 ⇒ return H.

Running time = O(h) = O(lg n) for red-black trees.

Data structure maintenance

Q. Why not keep the ranks themselves in the nodes instead of subtree sizes?
A. They are hard to maintain when the red-black tree is modified.

Modifying operations: INSERT and DELETE.
Strategy: Update subtree sizes when inserting or deleting.

Example of insertion

INSERT("K"): [Figure: the subtree sizes on the search path are incremented (M: 9 → 10, C: 5 → 6, F: 3 → 4, H: 1 → 2) and the new node K enters with size 1.]

Handling rebalancing

Don't forget that RB-INSERT and RB-DELETE may also need to modify the red-black tree in order to maintain balance.
• Recolorings: no effect on subtree sizes.
• Rotations: fix up subtree sizes in O(1) time.

Example: [Figure: rotating the edge (E, C), with subtree sizes 7 and 3 below C and 4 below E, changes the stored sizes from E/16, C/11 to C/16, E/8, each recomputed locally from its children.]

RB-INSERT and RB-DELETE still run in O(lg n) time.

Data-structure augmentation

Methodology: (e.g., order-statistics trees)
1. Choose an underlying data structure (red-black trees).
2. Determine additional information to be stored in the data structure (subtree sizes).
3. Verify that this information can be maintained for modifying operations (RB-INSERT, RB-DELETE; don't forget rotations).
4. Develop new dynamic-set operations that use the information (OS-SELECT and OS-RANK).

These steps are guidelines, not rigid rules.

Interval trees

Goal: To maintain a dynamic set of intervals, such as time intervals.

An interval i = [7, 10] has low[i] = 7 and high[i] = 10. [Figure: a set of intervals drawn above the number line, with endpoints among 4, 5, 7, 8, 10, 11, 15, 17, 18, 19, 22, 23.]

Query: For a given query interval i, find an interval in the set that overlaps i.

Following the methodology

1. Choose an underlying data structure.
   Red-black tree keyed on low (left) endpoint.
2. Determine additional information to be stored in the data structure.
   Store in each node x the largest value m[x] in the subtree rooted at x, as well as the interval int[x] corresponding to the key. (Nodes are drawn as int[x] above m[x].)

Example interval tree

[Figure: root [17,19] with m = 23; its left child [5,11] with m = 18 has children [4,8] (m = 8) and [15,18] (m = 18, with left child [7,10], m = 10); the root's right child is [22,23] with m = 23.]

    m[x] = max{ high[int[x]], m[left[x]], m[right[x]] }

Modifying operations

3. Verify that this information can be maintained for modifying operations.
• INSERT: Fix m's on the way down.
• Rotations: fixup = O(1) time per rotation.
  [Figure: rotating the edge ([11,15], m = 30; [6,20], m = 30) only requires recomputing the two m values locally from the children's values 19, 14, 30.]

Total INSERT time = O(lg n); DELETE similar.

New operations

4. Develop new dynamic-set operations that use the information.

INTERVAL-SEARCH(i)
    x ← root
    while x ≠ NIL and (low[i] > high[int[x]]
                       or low[int[x]] > high[i])
        do    ⊳ i and int[x] don't overlap
           if left[x] ≠ NIL and low[i] ≤ m[left[x]]
               then x ← left[x]
               else x ← right[x]
    return x
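A runnable sketch of the same loop (the node fields low, high, m, left, right are assumptions of this sketch; None plays the role of NIL):

    def interval_search(root, low, high):
        # Return a node whose interval overlaps [low, high], or None.
        x = root
        while x is not None and (low > x.high or x.low > high):
            # [low, high] and int[x] don't overlap
            if x.left is not None and low <= x.left.m:
                x = x.left      # any overlapping interval must be on the left
            else:
                x = x.right
        return x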

Example 1: INTERVAL-SEARCH([14,16])

On the interval tree above:
• x ← root: [14,16] and [17,19] don't overlap; 14 ≤ 18, so x ← left[x].
• [14,16] and [5,11] don't overlap; 14 > 8, so x ← right[x].
• [14,16] and [15,18] overlap: return [15,18].

Example 2: INTERVAL-SEARCH([12,14])

• x ← root: [12,14] and [17,19] don't overlap; 12 ≤ 18, so x ← left[x].
• [12,14] and [5,11] don't overlap; 12 > 8, so x ← right[x].
• [12,14] and [15,18] don't overlap; 12 > 10, so x ← right[x].
• x = NIL: no interval that overlaps [12,14] exists.

Analysis

Time = O(h) = O(lg n), since INTERVAL-SEARCH does constant work at each level as it follows a simple path down the tree.

To list all overlapping intervals:
• Search, list, delete, repeat.
• Insert them all again at the end.
Time = O(k lg n), where k is the total number of overlapping intervals. This is an output-sensitive bound. The best algorithm to date runs in O(k + lg n).

Correctness

Theorem. Let L be the set of intervals in the left subtree of node x, and let R be the set of intervals in x's right subtree.
• If the search goes right, then
      {i′ ∈ L : i′ overlaps i} = ∅.
• If the search goes left, then
      {i′ ∈ L : i′ overlaps i} = ∅
      ⇒ {i′ ∈ R : i′ overlaps i} = ∅.
In other words, it's always safe to take only 1 of the 2 children: we'll either find something, or nothing was to be found.

Correctness proof

Proof. Suppose first that the search goes right.
• If left[x] = NIL, then we're done, since L = ∅.
• Otherwise, the code dictates that we must have low[i] > m[left[x]]. The value m[left[x]] corresponds to the high endpoint of some interval j ∈ L, and no other interval in L can have a larger high endpoint than high[j].
  [Figure: every interval in L ends at or before high[j] = m[left[x]], which lies strictly to the left of low[i].]
Therefore, {i′ ∈ L : i′ overlaps i} = ∅.

Proof (continued)

Suppose that the search goes left, and assume that
    {i′ ∈ L : i′ overlaps i} = ∅.
• Then, the code dictates that low[i] ≤ m[left[x]] = high[j] for some j ∈ L.
• Since j ∈ L, it does not overlap i, and hence high[i] < low[j].
• But the binary-search-tree property implies that for all i′ ∈ R, we have low[j] ≤ low[i′].
• But then {i′ ∈ R : i′ overlaps i} = ∅.
[Figure: interval i lies entirely to the left of j, and every interval in R starts at or after low[j].] ∎

Introduction to Algorithms
6.046J/18.401J

LECTURE 12
Skip Lists
Data structure
Randomized insertion
With-high-probability bound
Analysis
Coin flipping

Prof. Erik D. Demaine
October 26, 2005

Skip lists

• Simple randomized dynamic search structure
  – Invented by William Pugh in 1989
  – Easy to implement
• Maintains a dynamic set of n elements in O(lg n) time per operation in expectation and with high probability
  – Strong guarantee on tail of distribution of T(n)
  – O(lg n) "almost always"

One linked list

Start from the simplest data structure: a (sorted) linked list.
• Searches take Θ(n) time in worst case.
• How can we speed up searches?

    14 → 23 → 34 → 42 → 50 → 59 → 66 → 72 → 79

Two linked lists

Suppose we had two sorted linked lists (on subsets of the elements).
• Each element can appear in one or both lists.
• How can we speed up searches?

Two linked lists as a subway

IDEA: Express and local subway lines (à la New York City 7th Avenue Line).
• Express line connects a few of the stations.
• Local line connects all stations.
• Links between lines at common stations.

    L1: 14 ——————— 34 → 42 ——————— 72
    L2: 14 → 23 → 34 → 42 → 50 → 59 → 66 → 72 → 79

Searching in two linked lists

SEARCH(x):
• Walk right in top linked list (L1) until going right would go too far.
• Walk down to bottom linked list (L2).
• Walk right in L2 until element found (or not).

EXAMPLE: SEARCH(59). Walk right in L1: 14, 34, 42; stop ("too far: 59 < 72"); walk down to 42 in L2; walk right: 50, then 59. Found.

Design of two linked lists

QUESTION: Which nodes should be in L1?
• In a subway, the popular stations.
• Here we care about worst-case performance.
• Best approach: Evenly space the nodes in L1.
• But how many nodes should be in L1?

Analysis of two linked lists

ANALYSIS:
• Search cost is roughly |L1| + |L2| / |L1|.
• Minimized (up to constant factors) when the terms are equal:
      |L1| = |L2| / |L1|  ⇒  |L1|² = |L2| = n  ⇒  |L1| = √n.

With |L1| = √n and |L2| = n, the search cost is roughly
    |L1| + |L2| / |L1| = √n + n / √n = 2√n.
[Figure: L1 holds every √n-th element, so each "local" segment of L2 has length √n.]

More linked lists

What if we had more sorted linked lists?
• 2 sorted lists ⇒ 2 · √n
• 3 sorted lists ⇒ 3 · ∛n
• k sorted lists ⇒ k · n^(1/k)
• lg n sorted lists ⇒ lg n · n^(1/lg n) = 2 lg n

lg n linked lists

lg n sorted linked lists are like a binary tree (in fact, a level-linked B+-tree; see Problem Set 5).

[Figure: four levels over the keys 14, 23, 34, 42, 50, 59, 66, 72, 79; the top list holds 14 and 79, the next adds 50, the next adds 34 and 66, and the bottom list holds all the keys.]

Searching in lg n linked lists

EXAMPLE: SEARCH(72). Walk right in the top list until going right would go too far, drop down a level, and repeat, descending level by level until 72 is reached in the bottom list.

Skip lists

• The ideal skip list is this lg n linked list structure.
• The skip list data structure maintains roughly this structure, subject to updates (insert/delete).

INSERT(x)

To insert an element x into a skip list:
• SEARCH(x) to see where x fits in the bottom list.
• Always insert into the bottom list.
  INVARIANT: Bottom list contains all elements.
• Insert into some of the lists above.

QUESTION: To which other lists should we add x?
IDEA: Flip a (fair) coin; if HEADS, promote x to the next level up and flip again.
• Probability of promotion to next level = 1/2.
• On average:
  – 1/2 of the elements promoted 0 levels
  – 1/4 of the elements promoted 1 level     } approximately
  – 1/8 of the elements promoted 2 levels    } balanced?
  – etc.
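The promotion rule in runnable Python (the function name is an assumption of this sketch):

    import random

    def promotion_level():
        # Flip a fair coin until TAILS; the number of HEADS is the
        # number of levels the new element is promoted.
        level = 0
        while random.random() < 0.5:    # HEADS with probability 1/2
            level += 1
        return level

The level is geometrically distributed: Pr{level ≥ k} = 1/2^k, so about half the elements stay on the bottom list only, a quarter reach one level up, and so on.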

Example of skip list

EXERCISE: Try building a skip list from scratch by repeated insertion using a real coin.

Small change: Add a special −∞ value to every list; then we can search with the same algorithm. [Figure: a small skip list on the keys 23, 34, 42, 50, with −∞ heading every level.]

Skip lists

• A skip list is the result of insertions (and deletions) from an initially empty structure (containing just −∞).
• INSERT(x) uses random coin flips to decide promotion level.
• DELETE(x) removes x from all lists containing it.

How good are skip lists? (speed/balance)
• INTUITIVELY: Pretty good on average.
• CLAIM: Really, really good, almost always.

With-high-probability theorem

THEOREM: With high probability, every search in an n-element skip list costs O(lg n).

INFORMALLY: Event E occurs with high probability (w.h.p.) if, for any α ≥ 1, there is an appropriate choice of constants for which E occurs with probability at least 1 − O(1/n^α).
• In fact, the constant in the O(lg n) bound depends on α.

FORMALLY: Parameterized event E_α occurs with high probability if, for any α ≥ 1, there is an appropriate choice of constants for which E_α occurs with probability at least 1 − c_α/n^α.

IDEA: We can make the error probability O(1/n^α) very small by setting α large, e.g., 100. Almost certainly, the bound then remains true for the entire execution of a polynomial-time algorithm.

Boole's inequality / union bound

Recall:
BOOLE'S INEQUALITY / UNION BOUND: For any random events E1, E2, …, Ek,
    Pr{E1 ∪ E2 ∪ ⋯ ∪ Ek} ≤ Pr{E1} + Pr{E2} + ⋯ + Pr{Ek}.

Application to with-high-probability events: If k = n^O(1), and each Ei occurs with high probability, then so does E1 ∩ E2 ∩ ⋯ ∩ Ek.

Analysis warmup

LEMMA: With high probability, an n-element skip list has O(lg n) levels.

PROOF: The error probability for having at most c lg n levels
    = Pr{more than c lg n levels}
    ≤ n · Pr{element x promoted at least c lg n times}   (by Boole's inequality)
    = n · (1/2^(c lg n))
    = n · (1/n^c)
    = 1/n^(c−1).

This probability is polynomially small, i.e., at most n^(−α) for α = c − 1. We can make α arbitrarily large by choosing the constant c in the O(lg n) bound accordingly.

Proof of theorem

THEOREM: With high probability, every search in an n-element skip list costs O(lg n).

COOL IDEA: Analyze a search backwards, from leaf to root. The search starts [ends] at a leaf (a node in the bottom level). At each node visited:
• If the node wasn't promoted higher (it got TAILS here), then we go [came from] left.
• If the node was promoted higher (it got HEADS here), then we go [came from] up.
The search stops [starts] at the root (or −∞).

PROOF:
• The backward search makes "up" and "left" moves until it reaches the root (or −∞).
• The number of "up" moves < number of levels ≤ c lg n w.h.p. (by the Lemma).
• Thus w.h.p., the total number of moves is at most the number of times we need to flip a coin to get c lg n HEADs.

Coin flipping analysis

CLAIM: The number of coin flips until c lg n HEADs is Θ(lg n) with high probability.

PROOF: Obviously Ω(lg n): at least c lg n flips are needed. We prove the O(lg n) bound by example:
• Say we make 10c lg n flips.
• When are there at least c lg n HEADs?
(Later we generalize "10" to arbitrary values.)

    Pr{exactly c lg n HEADs} = C(10c lg n, c lg n) · (1/2)^(c lg n) · (1/2)^(9c lg n),

where the binomial coefficient counts the orders, the next factor the HEADs, and the last factor the TAILs. Overestimating (drop the HEADs factor, keeping the TAILs on all orders):

    Pr{at most c lg n HEADs} ≤ C(10c lg n, c lg n) · (1/2)^(9c lg n).

Recall the bounds on binomial coefficients: (y/x)^x ≤ C(y, x) ≤ (ey/x)^x. Hence

    Pr{at most c lg n HEADs} ≤ C(10c lg n, c lg n) · (1/2)^(9c lg n)
        ≤ (10e)^(c lg n) · 2^(−9c lg n)
        = 2^(lg(10e) · c lg n) · 2^(−9c lg n)
        = 2^([lg(10e) − 9] · c lg n)
        = 1/n^α   for α = [9 − lg(10e)] · c.

KEY PROPERTY: α → ∞ as the "10" → ∞, for any c. So set the "10", i.e., the constant in the O(lg n) bound, large enough to meet the desired α.

This completes the proof of the coin-flipping claim, and hence of the theorem. ∎

Introduction to Algorithms
6.046J/18.401J

LECTURE 13
Amortized Analysis
Dynamic tables
Aggregate method
Accounting method
Potential method

Prof. Charles E. Leiserson
October 31, 2005

How large should a hash table be?

Goal: Make the table as small as possible, but large enough so that it won't overflow (or otherwise become inefficient).

Problem: What if we don't know the proper size in advance?

Solution: Dynamic tables.
IDEA: Whenever the table overflows, "grow" it by allocating (via malloc or new) a new, larger table. Move all items from the old table into the new one, and free the storage for the old table.
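A minimal Python sketch of the doubling strategy (the class layout is an illustrative assumption; the doubling policy matches the lecture):

    class DynamicTable:
        def __init__(self):
            self.capacity = 1
            self.n = 0
            self.slots = [None] * self.capacity

        def insert(self, x):
            if self.n == self.capacity:              # overflow
                old = self.slots
                self.capacity *= 2                   # allocate a table twice as big
                self.slots = [None] * self.capacity
                for i in range(self.n):              # move all n items: Θ(n) work
                    self.slots[i] = old[i]
            self.slots[self.n] = x                   # the immediate insertion
            self.n += 1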

Example of a dynamic table

1. INSERT 1: table of size 1 holds 1.
2. INSERT 2: overflow; double to size 2, move 1, then insert 2.
3. INSERT 3: overflow; double to size 4, move 1 and 2, then insert 3.
4. INSERT 4: no overflow: 1, 2, 3, 4.
5. INSERT 5: overflow; double to size 8, move 1-4, then insert 5.
6.-7. INSERT 6, INSERT 7: no overflow: 1, …, 7.

Worst-case analysis

Consider a sequence of n insertions. The worst-case time to execute one insertion is Θ(n). Therefore, the worst-case time for n insertions is n · Θ(n) = Θ(n²).

WRONG! In fact, the worst-case cost for n insertions is only Θ(n) ≪ Θ(n²).

Let's see why.

Tighter analysis

Let ci = the cost of the i th insertion
       = i  if i − 1 is an exact power of 2,
         1  otherwise.

    i       1   2   3   4   5   6   7   8   9   10
    sizei   1   2   4   4   8   8   8   8   16  16
    ci      1   2   3   1   5   1   1   1   9   1

Tighter analysis (continued)

Cost of n insertions = Σ_{i=1}^{n} ci
                     ≤ n + Σ_{j=0}^{⌊lg(n−1)⌋} 2^j
                     ≤ 3n
                     = Θ(n).

Thus, the average cost of each dynamic-table operation is Θ(n)/n = Θ(1).

Amortized analysis

An amortized analysis is any strategy for analyzing a sequence of operations to show that the average cost per operation is small, even though a single operation within the sequence might be expensive.

Even though we're taking averages, however, probability is not involved!
• An amortized analysis guarantees the average performance of each operation in the worst case.

Types of amortized analyses

Three common amortization arguments:
• the aggregate method,
• the accounting method,
• the potential method.
We've just seen an aggregate analysis. The aggregate method, though simple, lacks the precision of the other two methods. In particular, the accounting and potential methods allow a specific amortized cost to be allocated to each operation.

Accounting method

• Charge the i th operation a fictitious amortized cost ĉi, where $1 pays for 1 unit of work (i.e., time).
• This fee is consumed to perform the operation.
• Any amount not immediately consumed is stored in the "bank" for use by subsequent operations.
• The bank balance must not go negative! We must ensure that
      Σ_{i=1}^{n} ci ≤ Σ_{i=1}^{n} ĉi
  for all n.
• Thus, the total amortized costs provide an upper bound on the total true costs.

Accounting analysis of dynamic tables

Charge an amortized cost of ĉi = $3 for the i th insertion.
• $1 pays for the immediate insertion.
• $2 is stored for later table doubling.
When the table doubles, $1 pays to move a recent item, and $1 pays to move an old item.

Example: just after a doubling, every stored item carries $0; each subsequent insertion deposits $2 on its own item, so when the table next overflows with 2k items, the k recent items hold $2k in the bank: $1 for each of the 2k items that must move.

Accounting analysis (continued)

Key invariant: Bank balance never drops below 0. Thus, the sum of the amortized costs provides an upper bound on the sum of the true costs.

    i       1   2   3   4   5   6   7   8   9   10
    sizei   1   2   4   4   8   8   8   8   16  16
    ci      1   2   3   1   5   1   1   1   9   1
    ĉi      2*  3   3   3   3   3   3   3   3   3
    banki   1   2   2   4   2   4   6   8   2   4

*Okay, so I lied. The first operation costs only $2, not $3.

Potential method

IDEA: View the bank account as the potential energy (à la physics) of the dynamic set.

Framework:
• Start with an initial data structure D0.
• Operation i transforms Di−1 to Di.
• The cost of operation i is ci.
• Define a potential function Φ : {Di} → R, such that Φ(D0) = 0 and Φ(Di) ≥ 0 for all i.
• The amortized cost ĉi with respect to Φ is defined to be
      ĉi = ci + Φ(Di) − Φ(Di−1).

Understanding potentials

    ĉi = ci + ΔΦi,  where ΔΦi = Φ(Di) − Φ(Di−1) is the potential difference.

• If ΔΦi > 0, then ĉi > ci. Operation i stores work in the data structure for later use.
• If ΔΦi < 0, then ĉi < ci. The data structure delivers up stored work to help pay for operation i.

The amortized costs bound the true costs

The total amortized cost of n operations is
    Σ_{i=1}^{n} ĉi = Σ_{i=1}^{n} (ci + Φ(Di) − Φ(Di−1))
                  = Σ_{i=1}^{n} ci + Φ(Dn) − Φ(D0)     (the series telescopes)
                  ≥ Σ_{i=1}^{n} ci,
since Φ(Dn) ≥ 0 and Φ(D0) = 0.

Potential analysis of table doubling

Define the potential of the table after the i th insertion by
    Φ(Di) = 2i − 2^⌈lg i⌉.    (Assume that 2^⌈lg 0⌉ = 0.)

Note:
• Φ(D0) = 0,
• Φ(Di) ≥ 0 for all i.

Example (6 items in a table of size 8):
    Φ = 2·6 − 2³ = 4
(= the $4 bank balance of the accounting method).

Calculation of amortized costs

The amortized cost of the i th insertion is
    ĉi = ci + Φ(Di) − Φ(Di−1)
       = { i if i − 1 is an exact power of 2,
           1 otherwise }
         + (2i − 2^⌈lg i⌉) − (2(i−1) − 2^⌈lg(i−1)⌉)
       = { i if i − 1 is an exact power of 2,
           1 otherwise }
         + 2 − 2^⌈lg i⌉ + 2^⌈lg(i−1)⌉.

Calculation

Case 1: i − 1 is an exact power of 2.
    ĉi = i + 2 − 2^⌈lg i⌉ + 2^⌈lg(i−1)⌉
       = i + 2 − 2(i − 1) + (i − 1)
       = i + 2 − 2i + 2 + i − 1
       = 3.

Case 2: i − 1 is not an exact power of 2.
    ĉi = 1 + 2 − 2^⌈lg i⌉ + 2^⌈lg(i−1)⌉
       = 3    (since 2^⌈lg i⌉ = 2^⌈lg(i−1)⌉).

Therefore, n insertions cost Θ(n) in the worst case.

Exercise: Fix the bug in this analysis to show that the amortized cost of the first insertion is only 2.

Conclusions

• Amortized costs can provide a clean abstraction of data-structure performance.
• Any of the analysis methods can be used when an amortized analysis is called for, but each method has some situations where it is arguably the simplest or most precise.
• Different schemes may work for assigning amortized costs in the accounting method, or potentials in the potential method, sometimes yielding radically different bounds.

Introduction to Algorithms
6.046J/18.401J

LECTURE 14
Competitive Analysis
Self-organizing lists
Move-to-front heuristic
Competitive analysis of MTF

Prof. Charles E. Leiserson
November 2, 2005

Self-organizing lists

List L of n elements
• The operation ACCESS(x) costs rank_L(x) = distance of x from the head of L.
• L can be reordered by transposing adjacent elements at a cost of 1.

Example:
    L: 12 → 3 → 50 → 14 → 17 → 4
• Accessing the element with key 14 costs 4.
• Transposing 3 and 50 costs 1.

On-line and off-line problems

Definition. A sequence S of operations is provided one at a time. For each operation, an on-line algorithm A must execute the operation immediately without any knowledge of future operations (e.g., Tetris). An off-line algorithm may see the whole sequence S in advance.

Goal: Minimize the total cost CA(S).

Worst-case analysis of self-organizing lists

An adversary always accesses the tail (n th) element of L. Then, for any on-line algorithm A, we have
    CA(S) = Ω(|S| · n)
in the worst case.

Average-case analysis of self-organizing lists

Suppose that element x is accessed with probability p(x). Then, we have
    E[CA(S)] = Σ_{x ∈ L} p(x) · rank_L(x),
which is minimized when L is sorted in decreasing order with respect to p.

Heuristic: Keep a count of the number of times each element is accessed, and maintain L in order of decreasing count.

The move-to-front heuristic

Practice: Implementers discovered that the move-to-front (MTF) heuristic empirically yields good results.

IDEA: After accessing x, move x to the head of L using transposes:
    cost = 2 · rank_L(x).

The MTF heuristic responds well to locality in the access sequence S.
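A tiny runnable sketch of MTF (the class name is an assumption of this sketch; the charged cost is the lecture's 2 · rank_L(x)):

    class MTFList:
        def __init__(self, items):
            self.L = list(items)

        def access(self, x):
            # Charge 2 * rank_L(x) and move x to the front.
            r = self.L.index(x) + 1     # rank_L(x): 1-based position from the head
            self.L.insert(0, self.L.pop(r - 1))
            return 2 * r                # rank for the access + rank for the transposes

    # lst = MTFList([12, 3, 50, 14, 17, 4])
    # lst.access(14) -> 8; the list becomes [14, 12, 3, 50, 17, 4]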

Competitive analysis

Definition. An on-line algorithm A is α-competitive if there exists a constant k such that for any sequence S of operations,
    CA(S) ≤ α · COPT(S) + k,
where OPT is the optimal off-line algorithm ("God's algorithm").

MTF is O(1)-competitive

Theorem. MTF is 4-competitive for self-organizing lists.

Proof. Let Li be MTF's list after the i th access, and let Li* be OPT's list after the i th access.
Let ci = MTF's cost for the i th operation
       = 2 · rank_{Li−1}(x) if it accesses x;
and ci* = OPT's cost for the i th operation
       = rank_{Li−1*}(x) + ti,
where ti is the number of transposes that OPT performs.

Potential function

Define the potential function Φ : {Li} → R by
    Φ(Li) = 2 · |{(x, y) : x ≺_{Li} y and y ≺_{Li*} x}|
          = 2 · (# inversions).

Example.
    Li : E → C → A → D → B
    Li*: C → A → B → D → E
    Φ(Li) = 2 · |{(E,C), (E,A), (E,D), (E,B), (D,B)}| = 10.

Note that
• Φ(Li) ≥ 0 for i = 0, 1, …,
• Φ(L0) = 0 if MTF and OPT start with the same list.

How much does Φ change from 1 transpose?
• A transpose creates/destroys 1 inversion.
• ΔΦ = ±2.

What happens on an access?

Suppose that operation i accesses element x, and define
    A = {y ∈ Li−1 : y ≺_{Li−1} x and y ≺_{Li−1*} x},
    B = {y ∈ Li−1 : y ≺_{Li−1} x and y ≻_{Li−1*} x},
    C = {y ∈ Li−1 : y ≻_{Li−1} x and y ≺_{Li−1*} x},
    D = {y ∈ Li−1 : y ≻_{Li−1} x and y ≻_{Li−1*} x}.

    Li−1 :  [ A ∪ B ]  x  [ C ∪ D ]
    Li−1*:  [ A ∪ C ]  x  [ B ∪ D ]

With r = rank_{Li−1}(x) and r* = rank_{Li−1*}(x), we have
    r = |A| + |B| + 1  and  r* = |A| + |C| + 1.

When MTF moves x to the front, it creates |A| inversions and destroys |B| inversions. Each transpose by OPT creates at most 1 inversion. Thus, we have
    Φ(Li) − Φ(Li−1) ≤ 2(|A| − |B| + ti).

Amortized cost

The amortized cost for the i th operation of MTF with respect to Φ is
    ĉi = ci + Φ(Li) − Φ(Li−1)
       ≤ 2r + 2(|A| − |B| + ti)
       = 2r + 2(|A| − (r − 1 − |A|) + ti)     (since r = |A| + |B| + 1)
       = 2r + 4|A| − 2r + 2 + 2ti
       = 4|A| + 2 + 2ti
       ≤ 4(r* + ti)                           (since r* = |A| + |C| + 1 ≥ |A| + 1)
       = 4ci*.

The grand finale

Thus, we have
    CMTF(S) = Σ_{i=1}^{|S|} ci
            = Σ_{i=1}^{|S|} (ĉi + Φ(Li−1) − Φ(Li))
            ≤ ( Σ_{i=1}^{|S|} 4ci* ) + Φ(L0) − Φ(L|S|)
            ≤ 4 · COPT(S),
since Φ(L0) = 0 and Φ(L|S|) ≥ 0. ∎

Addendum

If we count transpositions that move x toward the front as "free" (this models splicing x in and out of L in constant time), then MTF is 2-competitive.

What if L0 ≠ L0*?
• Then, Φ(L0) might be Θ(n²) in the worst case.
• Thus, CMTF(S) ≤ 4 · COPT(S) + Θ(n²), which is still 4-competitive, since n² is constant as |S| → ∞.

Introduction to Algorithms
6.046J/18.401J

LECTURE 15
Dynamic Programming
Longest common subsequence
Optimal substructure
Overlapping subproblems

Prof. Charles E. Leiserson
November 7, 2005

Dynamic programming

Design technique, like divide-and-conquer.

Example: Longest Common Subsequence (LCS)
Given two sequences x[1 . . m] and y[1 . . n], find a longest subsequence common to them both ("a", not "the").

    x: A B C B D A B
    y: B D C A B A
    BCBA = LCS(x, y)    (functional notation, but not a function)

Brute-force LCS algorithm

Check every subsequence of x[1 . . m] to see if it is also a subsequence of y[1 . . n].

Analysis
• Checking = O(n) time per subsequence.
• 2^m subsequences of x (each bit-vector of length m determines a distinct subsequence of x).
Worst-case running time = O(n · 2^m) = exponential time.

Towards a better algorithm

Simplification:
1. Look at the length of a longest-common subsequence.
2. Extend the algorithm to find the LCS itself.

Notation: Denote the length of a sequence s by |s|.

Strategy: Consider prefixes of x and y.
• Define c[i, j] = |LCS(x[1 . . i], y[1 . . j])|.
• Then, c[m, n] = |LCS(x, y)|.

Recursive formulation

Theorem.
    c[i, j] = c[i−1, j−1] + 1               if x[i] = y[j],
              max{ c[i−1, j], c[i, j−1] }   otherwise.

Proof. Case x[i] = y[j]:

[Figure: the prefixes x[1 . . i] and y[1 . . j] end in the same symbol.]

Let z[1 . . k] = LCS(x[1 . . i], y[1 . . j]), where c[i, j] = k. Then, z[k] = x[i], or else z could be extended. Thus, z[1 . . k−1] is a common subsequence (CS) of x[1 . . i−1] and y[1 . . j−1].

Proof (continued)

Claim: z[1 . . k−1] = LCS(x[1 . . i−1], y[1 . . j−1]).
Suppose w is a longer CS of x[1 . . i−1] and y[1 . . j−1], that is, |w| > k−1. Then, cut and paste: w ∥ z[k] (w concatenated with z[k]) is a common subsequence of x[1 . . i] and y[1 . . j] with |w ∥ z[k]| > k. Contradiction, proving the claim.

Thus, c[i−1, j−1] = k−1, which implies that c[i, j] = c[i−1, j−1] + 1.
The other cases are similar. ∎

Dynamic-programming hallmark #1

Optimal substructure: An optimal solution to a problem (instance) contains optimal solutions to subproblems.

If z = LCS(x, y), then any prefix of z is an LCS of a prefix of x and a prefix of y.

Recursive algorithm for LCS

LCS(x, y, i, j)
    if x[i] = y[j]
        then c[i, j] ← LCS(x, y, i−1, j−1) + 1
        else c[i, j] ← max{ LCS(x, y, i−1, j),
                            LCS(x, y, i, j−1) }

Worst case: x[i] ≠ y[j], in which case the algorithm evaluates two subproblems, each with only one parameter decremented.

Recursion tree

m = 3, n = 4:

[Figure: the call tree of LCS(x, y, 3, 4): the root (3,4) branches to (2,4) and (3,3); (2,4) to (1,4) and (2,3); (3,3) to (2,3) and (3,2); and so on. The subproblem (2,3), among others, appears more than once.]

Height = m + n ⇒ work potentially exponential, but we're solving subproblems already solved!

Dynamic-programming hallmark #2

Overlapping subproblems: A recursive solution contains a "small" number of distinct subproblems repeated many times.

The number of distinct LCS subproblems for two strings of lengths m and n is only mn.

Memoization algorithm
Memoization: After computing a solution to a
subproblem, store it in a table. Subsequent calls
check the table to avoid redoing work.

November 7, 2005

Copyright 2001-5 by Erik D. Demaine and Charles E. Leiserson

L15.25

Memoization algorithm
Memoization: After computing a solution to a
subproblem, store it in a table. Subsequent calls
check the table to avoid redoing work.
LCS(x, y, i, j)
if c[i, j] = NIL
then if x[i] = y[j]
then c[i, j] LCS(x, y, i1, j1) + 1
else c[i, j] max{ LCS(x, y, i1, j),
LCS(x, y, i, j1)}

November 7, 2005

Copyright 2001-5 by Erik D. Demaine and Charles E. Leiserson

same
as
before

L15.26

Memoization algorithm
Memoization: After computing a solution to a subproblem, store it in a table. Subsequent calls check the table to avoid redoing work.

LCS(x, y, i, j)
  if c[i, j] = NIL
    then if x[i] = y[j]
           then c[i, j] ← LCS(x, y, i-1, j-1) + 1    ⊳ same as before
           else c[i, j] ← max{ LCS(x, y, i-1, j),
                               LCS(x, y, i, j-1) }

Time = Θ(mn) = constant work per table entry.
Space = Θ(mn).
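To make the memoization idea concrete, here is a short Python sketch (Python, and the helper name lcs_length, are choices of this write-up, not part of the lecture); functools.lru_cache plays the role of the c table initialized to NIL.

from functools import lru_cache

def lcs_length(x, y):
    # i and j count prefix lengths, so x[i - 1] is the slide's x[i].
    @lru_cache(maxsize=None)   # the memo table: caches c[i, j] after the first call
    def c(i, j):
        if i == 0 or j == 0:               # an empty prefix has LCS length 0
            return 0
        if x[i - 1] == y[j - 1]:           # case x[i] = y[j]
            return c(i - 1, j - 1) + 1
        return max(c(i - 1, j), c(i, j - 1))
    return c(len(x), len(y))

assert lcs_length("ABCBDAB", "BDCABA") == 4

Each distinct (i, j) pair is computed once, giving the Θ(mn) time and space bounds above.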

Dynamic-programming algorithm
IDEA: Compute the table bottom-up.
Time = Θ(mn).
Reconstruct LCS by tracing backwards.
Space = Θ(mn). Exercise: O(min{m, n}).
[table: the c[i, j] values for x = BDCABA (rows) against y = ABCBDAB (columns), filled row by row from all-zero borders; the final entry is 4, the length of the LCS, and tracing backwards recovers an LCS such as BCBA.]
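A bottom-up rendering in the same spirit, with the backwards trace (again a sketch; names are illustrative):

def lcs(x, y):
    # Fill the (m+1) x (n+1) table row by row: Theta(mn) time and space.
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Reconstruct an LCS by tracing backwards from c[m][n].
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("BDCABA", "ABCBDAB"))   # prints one LCS of length 4

Keeping only two rows of the table achieves the O(min{m, n}) space of the exercise, at the cost of the traceback.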

Introduction to Algorithms
6.046J/18.401J

LECTURE 16
Greedy Algorithms (and
Graphs)
Graph representation
Minimum spanning trees
Optimal substructure
Greedy choice
Prim's greedy MST algorithm
Prof. Charles E. Leiserson

Graphs (review)
Definition. A directed graph (digraph) G = (V, E) is an ordered pair consisting of
a set V of vertices (singular: vertex),
a set E ⊆ V × V of edges.
In an undirected graph G = (V, E), the edge set E consists of unordered pairs of vertices.
In either case, we have |E| = O(V²). Moreover, if G is connected, then |E| ≥ |V| - 1, which implies that lg|E| = Θ(lg V).
(Review CLRS, Appendix B.)

Adjacency-matrix representation
The adjacency matrix of a graph G = (V, E), where V = {1, 2, …, n}, is the matrix A[1 . . n, 1 . . n] given by
A[i, j] = 1 if (i, j) ∈ E,
          0 if (i, j) ∉ E.
Example, for the 4-vertex digraph with edges (1,2), (1,3), (2,3), (4,3):
A | 1 2 3 4
1 | 0 1 1 0
2 | 0 0 1 0
3 | 0 0 0 0
4 | 0 0 1 0
Θ(V²) storage ⇒ a dense representation.

Adjacency-list representation
An adjacency list of a vertex v ∈ V is the list Adj[v] of vertices adjacent to v. For the example digraph above:
Adj[1] = {2, 3}
Adj[2] = {3}
Adj[3] = {}
Adj[4] = {3}
For undirected graphs, |Adj[v]| = degree(v).
For digraphs, |Adj[v]| = out-degree(v).
Handshaking Lemma: Σ_{v∈V} degree(v) = 2|E| for undirected graphs ⇒ adjacency lists use Θ(V + E) storage, a sparse representation (for either type of graph).
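A minimal Python sketch of both representations, using the 4-vertex example above (the variable names are illustrative):

n = 4
edges = [(1, 2), (1, 3), (2, 3), (4, 3)]

# Adjacency matrix: Theta(V^2) storage (dense).
A = [[0] * (n + 1) for _ in range(n + 1)]
for u, v in edges:
    A[u][v] = 1

# Adjacency lists: Theta(V + E) storage (sparse).
Adj = {u: [] for u in range(1, n + 1)}
for u, v in edges:
    Adj[u].append(v)

assert A[1][2] == 1 and Adj[1] == [2, 3]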

Minimum spanning trees
Input: A connected, undirected graph G = (V, E) with weight function w : E → R.
For simplicity, assume that all edge weights are distinct. (CLRS covers the general case.)
Output: A spanning tree T, a tree that connects all vertices, of minimum weight:
w(T) = Σ_{(u,v)∈T} w(u, v).

Example of MST
[figure: a graph with edge weights 3, 5, 6, 8, 9, 10, 12, 14, 15; a second slide highlights the edges of its minimum spanning tree.]

Optimal substructure
MST T: [figure: a spanning tree containing an edge (u, v); other edges of G are not shown.]
Remove any edge (u, v) ∈ T. Then, T is partitioned into two subtrees T1 and T2.
Theorem. The subtree T1 is an MST of G1 = (V1, E1), the subgraph of G induced by the vertices of T1:
V1 = vertices of T1,
E1 = { (x, y) ∈ E : x, y ∈ V1 }.
Similarly for T2.

Proof of optimal substructure
Proof. Cut and paste:
w(T) = w(u, v) + w(T1) + w(T2).
If T1′ were a lower-weight spanning tree than T1 for G1, then T′ = {(u, v)} ∪ T1′ ∪ T2 would be a lower-weight spanning tree than T for G.
Do we also have overlapping subproblems?
Yes.
Great, then dynamic programming may work!
Yes, but MST exhibits another powerful property which leads to an even more efficient algorithm.

Hallmark for greedy algorithms
Greedy-choice property
A locally optimal choice is globally optimal.
Theorem. Let T be the MST of G = (V, E), and let A ⊆ V. Suppose that (u, v) ∈ E is the least-weight edge connecting A to V - A. Then, (u, v) ∈ T.

Proof of theorem
Proof. Suppose (u, v) ∉ T. Cut and paste.
[figure: tree T with the cut (A, V - A), where (u, v) is the least-weight edge connecting A to V - A; the final slide shows the swapped tree T′.]
Consider the unique simple path from u to v in T.
Swap (u, v) with the first edge on this path that connects a vertex in A to a vertex in V - A.
A lighter-weight spanning tree than T results.

Prim's algorithm
IDEA: Maintain V - A as a priority queue Q. Key each vertex in Q with the weight of the least-weight edge connecting it to a vertex in A.

Q ← V
key[v] ← ∞ for all v ∈ V
key[s] ← 0 for some arbitrary s ∈ V
while Q ≠ ∅
  do u ← EXTRACT-MIN(Q)
     for each v ∈ Adj[u]
       do if v ∈ Q and w(u, v) < key[v]
            then key[v] ← w(u, v)    ⊳ DECREASE-KEY
                 π[v] ← u

At the end, {(v, π[v])} forms the MST.
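A Python sketch of Prim with a binary-heap Q (heapq has no DECREASE-KEY, so the standard workaround, pushing a fresh entry and skipping stale ones, is used; this preserves the O(E lg V) bound, and the example graph is a made-up 3-vertex instance):

import heapq

def prim_mst(adj, s):
    # adj: dict mapping each vertex to a list of (neighbor, weight) pairs.
    key = {v: float("inf") for v in adj}
    pi = {v: None for v in adj}
    key[s] = 0
    in_A = set()                             # the growing tree A
    pq = [(0, s)]
    while pq:
        k, u = heapq.heappop(pq)             # EXTRACT-MIN
        if u in in_A:
            continue                         # stale entry: skip
        in_A.add(u)
        for v, w in adj[u]:
            if v not in in_A and w < key[v]:
                key[v] = w                   # DECREASE-KEY ...
                pi[v] = u
                heapq.heappush(pq, (w, v))   # ... via a fresh heap entry
    return pi                                # {(v, pi[v])} forms the MST

adj = {1: [(2, 3), (3, 5)], 2: [(1, 3), (3, 1)], 3: [(1, 5), (2, 1)]}
print(prim_mst(adj, 1))   # {1: None, 2: 1, 3: 2}: MST edges (1,2) and (2,3)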

Example of Prim's algorithm
[figure: a sequence of slides running Prim's algorithm on a graph with edge weights 3, 5, 6, 7, 8, 9, 10, 12, 14, 15; starting from key[s] = 0, each step EXTRACT-MINs the lightest vertex of V - A into A and decreases the keys of its neighbors, until A spans the graph.]

Analysis of Prim
Q ← V
key[v] ← ∞ for all v ∈ V              ⊳ Θ(V) total
key[s] ← 0 for some arbitrary s ∈ V
while Q ≠ ∅                            ⊳ |V| times
  do u ← EXTRACT-MIN(Q)
     for each v ∈ Adj[u]               ⊳ degree(u) times
       do if v ∈ Q and w(u, v) < key[v]
            then key[v] ← w(u, v)
                 π[v] ← u
Handshaking Lemma ⇒ Θ(E) implicit DECREASE-KEYs.
Time = Θ(V)·T_EXTRACT-MIN + Θ(E)·T_DECREASE-KEY

Analysis of Prim (continued)
Time = Θ(V)·T_EXTRACT-MIN + Θ(E)·T_DECREASE-KEY

Q              | T_EXTRACT-MIN      | T_DECREASE-KEY   | Total
array          | O(V)               | O(1)             | O(V²)
binary heap    | O(lg V)            | O(lg V)          | O(E lg V)
Fibonacci heap | O(lg V) amortized  | O(1) amortized   | O(E + V lg V) worst case

MST algorithms
Kruskal's algorithm (see CLRS):
Uses the disjoint-set data structure (Lecture 10).
Running time = O(E lg V).
Best to date:
Karger, Klein, and Tarjan [1993].
Randomized algorithm.
O(V + E) expected time.
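For comparison, a compact Kruskal sketch (the union-find here uses only path compression, which suffices for the O(E lg V) bound; the 3-vertex graph is a made-up example):

def kruskal_mst(n, edges):
    # edges: list of (w, u, v) triples on vertices 0 .. n-1.
    parent = list(range(n))

    def find(x):                      # find the root, compressing the path
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # (u, v) joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

print(kruskal_mst(3, [(3, 0, 1), (1, 1, 2), (5, 0, 2)]))
# [(1, 2, 1), (0, 1, 3)]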

Introduction to Algorithms
6.046J/18.401J
LECTURE 17
Shortest Paths I
Properties of shortest paths
Dijkstra's algorithm
Correctness
Analysis
Breadth-first search
Prof. Erik Demaine

Paths in graphs
Consider a digraph G = (V, E) with edge-weight function w : E → R. The weight of path p = v1 → v2 → ⋯ → vk is defined to be
w(p) = Σ_{i=1}^{k-1} w(v_i, v_{i+1}).
Example: [figure: a path v1 → v2 → v3 → v4 → v5 whose edge weights sum to w(p) = -2.]

Shortest paths
A shortest path from u to v is a path of minimum weight from u to v. The shortest-path weight from u to v is defined as
δ(u, v) = min{ w(p) : p is a path from u to v }.
Note: δ(u, v) = ∞ if no path from u to v exists.

Optimal substructure
Theorem. A subpath of a shortest path is a shortest path.
Proof. Cut and paste: [figure: replacing a subpath by a lighter one would lighten the whole path.]

Triangle inequality
Theorem. For all u, v, x ∈ V, we have
δ(u, v) ≤ δ(u, x) + δ(x, v).
Proof. [figure: a shortest u-to-v path of weight δ(u, v) can be no heavier than the particular path through x of weight δ(u, x) + δ(x, v).]

Well-definedness of shortest paths
If a graph G contains a negative-weight cycle, then some shortest paths may not exist.
Example: [figure: u and v connected through a cycle of total weight < 0.]

Single-source shortest paths
Problem. From a given source vertex s ∈ V, find the shortest-path weights δ(s, v) for all v ∈ V.
If all edge weights w(u, v) are nonnegative, all shortest-path weights must exist.
IDEA: Greedy.
1. Maintain a set S of vertices whose shortest-path distances from s are known.
2. At each step add to S the vertex v ∈ V - S whose distance estimate from s is minimal.
3. Update the distance estimates of vertices adjacent to v.

Dijkstra's algorithm
d[s] ← 0
for each v ∈ V - {s}
  do d[v] ← ∞
S ← ∅
Q ← V    ⊳ Q is a priority queue maintaining V - S
while Q ≠ ∅
  do u ← EXTRACT-MIN(Q)
     S ← S ∪ {u}
     for each v ∈ Adj[u]
       do if d[v] > d[u] + w(u, v)        ⊳ relaxation step
            then d[v] ← d[u] + w(u, v)    ⊳ implicit DECREASE-KEY
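A Python sketch, structurally identical to the Prim sketch in Lecture 16 (pushing fresh heap entries stands in for the implicit DECREASE-KEY; the example graph is an assumption of this write-up, chosen to match the distances in the example below):

import heapq

def dijkstra(adj, s):
    # adj: dict mapping each vertex to a list of (neighbor, weight),
    # with all weights nonnegative.
    d = {v: float("inf") for v in adj}
    d[s] = 0
    S = set()
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)          # EXTRACT-MIN
        if u in S:
            continue                       # stale entry: skip
        S.add(u)                           # S <- S U {u}
        for v, w in adj[u]:
            if d[u] + w < d[v]:            # relaxation step
                d[v] = d[u] + w
                heapq.heappush(pq, (d[v], v))
    return d

adj = {"A": [("B", 10), ("C", 3)], "B": [("C", 1), ("D", 2)],
       "C": [("B", 4), ("D", 8), ("E", 2)], "D": [("E", 7)], "E": [("D", 9)]}
print(dijkstra(adj, "A"))   # {'A': 0, 'B': 7, 'C': 3, 'D': 9, 'E': 5}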

Example of Dijkstra's algorithm
[figure: the algorithm run from source A on a five-vertex digraph with nonnegative edge weights (10, 1, 4, 3, 2, 8, 7, 9); vertices are extracted in the order A, C, E, B, D, and the final estimates are d[A] = 0, d[B] = 7, d[C] = 3, d[D] = 9, d[E] = 5.]

Correctness Part I
Lemma. Initializing d[s] ← 0 and d[v] ← ∞ for all v ∈ V - {s} establishes d[v] ≥ δ(s, v) for all v ∈ V, and this invariant is maintained over any sequence of relaxation steps.
Proof. Suppose not. Let v be the first vertex for which d[v] < δ(s, v), and let u be the vertex that caused d[v] to change: d[v] = d[u] + w(u, v). Then,
d[v] < δ(s, v)              (supposition)
     ≤ δ(s, u) + δ(u, v)    (triangle inequality)
     ≤ δ(s, u) + w(u, v)    (sh. path ≤ specific path)
     ≤ d[u] + w(u, v)       (v is first violation)
Contradiction.

Correctness Part II
Lemma. Let u be v's predecessor on a shortest path from s to v. Then, if d[u] = δ(s, u) and edge (u, v) is relaxed, we have d[v] = δ(s, v) after the relaxation.
Proof. Observe that δ(s, v) = δ(s, u) + w(u, v). Suppose that d[v] > δ(s, v) before the relaxation. (Otherwise, we're done.) Then, the test d[v] > d[u] + w(u, v) succeeds, because d[v] > δ(s, v) = δ(s, u) + w(u, v) = d[u] + w(u, v), and the algorithm sets d[v] = d[u] + w(u, v) = δ(s, v).

Correctness Part III
Theorem. Dijkstra's algorithm terminates with d[v] = δ(s, v) for all v ∈ V.
Proof. It suffices to show that d[v] = δ(s, v) for every v ∈ V when v is added to S. Suppose u is the first vertex added to S for which d[u] > δ(s, u). Let y be the first vertex in V - S along a shortest path from s to u, and let x be its predecessor:
[figure: the set S just before adding u, containing s and x; y is the first vertex of the shortest path outside S.]

Correctness Part III (continued)
[figure: s ⇝ x → y ⇝ u, with s, x ∈ S.]
Since u is the first vertex violating the claimed invariant, we have d[x] = δ(s, x). When x was added to S, the edge (x, y) was relaxed, which implies that d[y] = δ(s, y) ≤ δ(s, u) < d[u]. But d[u] ≤ d[y] by our choice of u. Contradiction.

Analysis of Dijkstra
while Q ≠ ∅                            ⊳ |V| times
  do u ← EXTRACT-MIN(Q)
     S ← S ∪ {u}
     for each v ∈ Adj[u]               ⊳ degree(u) times
       do if d[v] > d[u] + w(u, v)
            then d[v] ← d[u] + w(u, v)
Handshaking Lemma ⇒ Θ(E) implicit DECREASE-KEYs.
Time = Θ(V·T_EXTRACT-MIN + E·T_DECREASE-KEY)
Note: Same formula as in the analysis of Prim's minimum spanning tree algorithm.

Analysis of Dijkstra (continued)
Time = Θ(V)·T_EXTRACT-MIN + Θ(E)·T_DECREASE-KEY

Q              | T_EXTRACT-MIN      | T_DECREASE-KEY   | Total
array          | O(V)               | O(1)             | O(V²)
binary heap    | O(lg V)            | O(lg V)          | O(E lg V)
Fibonacci heap | O(lg V) amortized  | O(1) amortized   | O(E + V lg V) worst case

Unweighted graphs
Suppose that w(u, v) = 1 for all (u, v) ∈ E. Can Dijkstra's algorithm be improved?
Use a simple FIFO queue instead of a priority queue.
Breadth-first search
while Q ≠ ∅
  do u ← DEQUEUE(Q)
     for each v ∈ Adj[u]
       do if d[v] = ∞
            then d[v] ← d[u] + 1
                 ENQUEUE(Q, v)
Analysis: Time = O(V + E).
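The same loop in Python, with collections.deque as the FIFO queue (a sketch; the graph is illustrative):

from collections import deque

def bfs(adj, s):
    # O(V + E): each vertex is enqueued at most once.
    d = {v: float("inf") for v in adj}
    d[s] = 0
    Q = deque([s])
    while Q:
        u = Q.popleft()                # DEQUEUE
        for v in adj[u]:
            if d[v] == float("inf"):
                d[v] = d[u] + 1
                Q.append(v)            # ENQUEUE
    return d

adj = {"a": ["b", "d"], "b": ["a", "e"], "d": ["a", "e"],
       "e": ["b", "d", "c"], "c": ["e"]}
print(bfs(adj, "a"))   # {'a': 0, 'b': 1, 'd': 1, 'e': 2, 'c': 3}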

Example of breadth-first search
[figure: BFS run from vertex a on a nine-vertex graph with vertices a through i; vertices are enqueued in the order a, b, d, c, e, g, i, f, h, receiving distances 0, 1, 1, 2, 2, 3, 3, 4, 4.]

Correctness of BFS
while Q ≠ ∅
  do u ← DEQUEUE(Q)
     for each v ∈ Adj[u]
       do if d[v] = ∞
            then d[v] ← d[u] + 1
                 ENQUEUE(Q, v)
Key idea:
The FIFO Q in breadth-first search mimics the priority queue Q in Dijkstra.
Invariant: v comes after u in Q implies that d[v] = d[u] or d[v] = d[u] + 1.

Introduction to Algorithms
6.046J/18.401J

LECTURE 18
Shortest Paths II
Bellman-Ford algorithm
Linear programming and
difference constraints
VLSI layout compaction

Prof. Erik Demaine



Negative-weight cycles
Recall: If a graph G = (V, E) contains a negative-weight cycle, then some shortest paths may not exist.
Example: [figure: u and v connected through a cycle of total weight < 0.]
Bellman-Ford algorithm: Finds all shortest-path lengths from a source s ∈ V to all v ∈ V or determines that a negative-weight cycle exists.

Bellman-Ford algorithm
d[s] ← 0
for each v ∈ V - {s}              ⊳ initialization
  do d[v] ← ∞
for i ← 1 to |V| - 1
  do for each edge (u, v) ∈ E
       do if d[v] > d[u] + w(u, v)         ⊳ relaxation step
            then d[v] ← d[u] + w(u, v)
for each edge (u, v) ∈ E
  do if d[v] > d[u] + w(u, v)
       then report that a negative-weight cycle exists
At the end, d[v] = δ(s, v), if no negative-weight cycles. Time = O(VE).
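A direct Python transcription (a sketch; the edge list is a made-up example):

def bellman_ford(n, edges, s):
    # edges: list of (u, v, w) triples on vertices 0 .. n-1. O(VE) time.
    INF = float("inf")
    d = [INF] * n
    d[s] = 0
    for _ in range(n - 1):                 # |V| - 1 relaxation passes
        for u, v, w in edges:
            if d[u] + w < d[v]:            # relaxation step
                d[v] = d[u] + w
    for u, v, w in edges:                  # one more pass to detect cycles
        if d[u] + w < d[v]:
            raise ValueError("negative-weight cycle")
    return d

print(bellman_ford(3, [(0, 1, 4), (1, 2, -2), (0, 2, 5)], 0))   # [0, 4, 2]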

Example of Bellman-Ford
[figure: successive slides relax the edges of a five-vertex digraph A, B, C, D, E (some weights negative) in a fixed order; the d-values settle by the end of pass 2, and passes 3 and 4 change nothing.]

Correctness
Theorem. If G = (V, E) contains no negative-weight cycles, then after the Bellman-Ford algorithm executes, d[v] = δ(s, v) for all v ∈ V.
Proof. Let v ∈ V be any vertex, and consider a shortest path p from s to v with the minimum number of edges.
p: v0 = s → v1 → v2 → v3 → ⋯ → vk = v
Since p is a shortest path, we have
δ(s, vi) = δ(s, vi-1) + w(vi-1, vi).

Correctness (continued)
p: v0 = s → v1 → v2 → v3 → ⋯ → vk = v
Initially, d[v0] = 0 = δ(s, v0), and d[v0] is unchanged by subsequent relaxations (because of the lemma from Lecture 17 that d[v] ≥ δ(s, v)).
After 1 pass through E, we have d[v1] = δ(s, v1).
After 2 passes through E, we have d[v2] = δ(s, v2).
⋮
After k passes through E, we have d[vk] = δ(s, vk).
Since G contains no negative-weight cycles, p is simple. The longest simple path has ≤ |V| - 1 edges.

Detection of negative-weight cycles
Corollary. If a value d[v] fails to converge after |V| - 1 passes, there exists a negative-weight cycle in G reachable from s.

Linear programming
Let A be an m × n matrix, b be an m-vector, and c be an n-vector. Find an n-vector x that maximizes cᵀx subject to Ax ≤ b, or determine that no such solution exists.
[figure: the m × n matrix A applied to x, compared componentwise with b, while maximizing the objective cᵀx.]

Linear-programming algorithms
Algorithms for the general problem
Simplex methods: practical, but worst-case exponential time.
Interior-point methods: polynomial time, and compete with simplex in practice.
Feasibility problem: No optimization criterion. Just find x such that Ax ≤ b.
In general, just as hard as ordinary LP.

Solving a system of difference constraints
Linear programming where each row of A contains exactly one 1, one -1, and the rest 0's.
Example:            Solution:
x1 - x2 ≤ 3         x1 = 3
x2 - x3 ≤ -2        x2 = 0
x1 - x3 ≤ 2         x3 = 2
That is, constraints of the form xj - xi ≤ wij.
Constraint graph: each constraint xj - xi ≤ wij becomes an edge from vi to vj of weight wij. (The A matrix has dimensions |E| × |V|.)
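The construction is mechanical, as the Python sketch below shows; it reuses the bellman_ford function from the earlier sketch (the function name and 0-based indexing are assumptions of this write-up). The added source vertex anticipates the satisfiability theorem proved on a later slide.

def solve_difference_constraints(n, constraints):
    # constraints: list of (i, j, w) meaning x_j - x_i <= w, on variables
    # x_0 .. x_{n-1}. Each constraint becomes edge (i, j, w); a new source
    # s = n gets a 0-weight edge to every vertex. Then x_j = delta(s, v_j).
    s = n
    edges = [(i, j, w) for i, j, w in constraints]
    edges += [(s, v, 0) for v in range(n)]
    d = bellman_ford(n + 1, edges, s)    # raises on a negative-weight cycle
    return d[:n]

# x1 - x2 <= 3, x2 - x3 <= -2, x1 - x3 <= 2, with x1, x2, x3 as x_0, x_1, x_2:
print(solve_difference_constraints(3, [(1, 0, 3), (2, 1, -2), (2, 0, 2)]))
# [0, -2, 0], which satisfies all three constraints (solutions are only
# determined up to a constant shift, so this differs from the slide's).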

Unsatisfiable constraints
Theorem. If the constraint graph contains a negative-weight cycle, then the system of differences is unsatisfiable.
Proof. Suppose that the negative-weight cycle is v1 → v2 → ⋯ → vk → v1. Then, we have
x2 - x1 ≤ w12
x3 - x2 ≤ w23
⋮
xk - xk-1 ≤ wk-1,k
x1 - xk ≤ wk1
Summing, the left-hand sides telescope to 0, while the right-hand sides sum to the weight of the cycle, which is < 0. Therefore, no values for the xi can satisfy the constraints.

Satisfying the constraints
Theorem. Suppose no negative-weight cycle exists in the constraint graph. Then, the constraints are satisfiable.
Proof. Add a new vertex s to V with a 0-weight edge to each vertex vi ∈ V.
[figure: s with 0-weight edges to v1, v3, v4, v7, v9, and so on.]
Note: No negative-weight cycles introduced ⇒ shortest paths exist.

Proof (continued)
Claim: The assignment xi = δ(s, vi) solves the constraints.
Consider any constraint xj - xi ≤ wij, and consider the shortest paths from s to vj and vi:
[figure: s with shortest paths of weight δ(s, vi) to vi and δ(s, vj) to vj, and the constraint edge (vi, vj) of weight wij.]
The triangle inequality gives us δ(s, vj) ≤ δ(s, vi) + wij. Since xi = δ(s, vi) and xj = δ(s, vj), the constraint xj - xi ≤ wij is satisfied.

Bellman-Ford and linear programming
Corollary. The Bellman-Ford algorithm can solve a system of m difference constraints on n variables in O(mn) time.
Single-source shortest paths is a simple LP problem.
In fact, Bellman-Ford maximizes x1 + x2 + ⋯ + xn subject to the constraints xj - xi ≤ wij and xi ≤ 0 (exercise).
Bellman-Ford also minimizes maxi{xi} - mini{xi} (exercise).

Application to VLSI layout compaction
[figure: integrated-circuit features separated by a required minimum separation.]
Problem: Compact (in one dimension) the space between the features of a VLSI layout without bringing any features too close together.

VLSI layout compaction
[figure: a feature of width d1 at position x1 and a second feature at position x2, separated by at least λ.]
Constraint: x2 - x1 ≥ d1 + λ
Bellman-Ford minimizes maxi{xi} - mini{xi}, which compacts the layout in the x-dimension.

Introduction to Algorithms
6.046J/18.401J

LECTURE 19
Shortest Paths III
All-pairs shortest paths
Matrix-multiplication
algorithm
Floyd-Warshall algorithm
Johnson's algorithm
Prof. Charles E. Leiserson

Shortest paths
Single-source shortest paths
Nonnegative edge weights: Dijkstra's algorithm, O(E + V lg V).
General: Bellman-Ford, O(VE).
DAG: One pass of Bellman-Ford, O(V + E).
All-pairs shortest paths
Nonnegative edge weights: Dijkstra's algorithm |V| times, O(VE + V² lg V).
General: Three algorithms today.

All-pairs shortest paths
Input: Digraph G = (V, E), where V = {1, 2, …, n}, with edge-weight function w : E → R.
Output: n × n matrix of shortest-path lengths δ(i, j) for all i, j ∈ V.
IDEA:
Run Bellman-Ford once from each vertex.
Time = O(V²E).
Dense graph (n² edges) ⇒ Θ(n⁴) time in the worst case.
Good first try!

Dynamic programming
Consider the n × n adjacency matrix A = (a_ij) of the digraph, and define
d_ij(m) = weight of a shortest path from i to j that uses at most m edges.
Claim: We have
d_ij(0) = 0 if i = j, and ∞ if i ≠ j;
and for m = 1, 2, …, n - 1,
d_ij(m) = min_k { d_ik(m-1) + a_kj }.

Proof of claim
d_ij(m) = min_k { d_ik(m-1) + a_kj }
[figure: vertex i reaches j in ≤ m edges by first reaching some penultimate vertex k in ≤ m - 1 edges and then following edge (k, j).]
Relaxation!
for k ← 1 to n
  do if dij > dik + akj
       then dij ← dik + akj
Note: No negative-weight cycles implies
δ(i, j) = d_ij(n-1) = d_ij(n) = d_ij(n+1) = ⋯

Matrix multiplication
Compute C = A · B, where C, A, and B are n × n matrices:
c_ij = Σ_{k=1}^{n} a_ik b_kj.
Time = Θ(n³) using the standard algorithm.
What if we map "+" → min and "·" → +?
c_ij = min_k { a_ik + b_kj }.
Thus, D(m) = D(m-1) "×" A.
Identity matrix = I = the matrix with 0's on the diagonal and ∞'s elsewhere = D(0) = (d_ij(0)).

Matrix multiplication (continued)
The (min, +) multiplication is associative, and with the real numbers, it forms an algebraic structure called a closed semiring.
Consequently, we can compute
D(1) = D(0) "×" A = A¹
D(2) = D(1) "×" A = A²
⋮
D(n-1) = D(n-2) "×" A = A^(n-1),
yielding D(n-1) = (δ(i, j)).
Time = Θ(n · n³) = Θ(n⁴). No better than n × B-F.

Improved matrix multiplication algorithm
Repeated squaring: A^(2k) = A^k × A^k.
Compute A², A⁴, …, A^(2^⌈lg(n-1)⌉) using O(lg n) squarings.
Note: A^(n-1) = A^n = A^(n+1) = ⋯.
Time = Θ(n³ lg n).
To detect negative-weight cycles, check the diagonal for negative values in O(n) additional time.
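A Python sketch of the (min, +) product and repeated squaring (illustrative; negative-cycle detection via the diagonal is omitted for brevity):

INF = float("inf")

def min_plus(A, B):
    # The (min, +) "product": c_ij = min over k of (a_ik + b_kj).
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apsp_repeated_squaring(A):
    # A: adjacency matrix with A[i][i] = 0 and INF for non-edges.
    # Theta(n^3 lg n): square until paths of n-1 edges are covered.
    n = len(A)
    D, m = A, 1
    while m < n - 1:
        D = min_plus(D, D)
        m *= 2
    return D

A = [[0, 3, INF], [INF, 0, -1], [1, INF, 0]]
print(apsp_repeated_squaring(A))   # [[0, 3, 2], [0, 0, -1], [1, 4, 0]]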

Floyd-Warshall algorithm
Also dynamic programming, but faster!
Define c_ij(k) = weight of a shortest path from i to j with intermediate vertices belonging to the set {1, 2, …, k}.
[figure: a path from i to j all of whose intermediate vertices lie in {1, …, k}.]
Thus, δ(i, j) = c_ij(n). Also, c_ij(0) = a_ij.

Floyd-Warshall recurrence
c_ij(k) = min { c_ij(k-1), c_ik(k-1) + c_kj(k-1) }
[figure: the shortest i-to-j path with intermediate vertices in {1, 2, …, k} either avoids vertex k (weight c_ij(k-1)) or passes through k once, splitting into pieces of weights c_ik(k-1) and c_kj(k-1).]

Pseudocode for Floyd-Warshall
for k ← 1 to n
  do for i ← 1 to n
       do for j ← 1 to n
            do if cij > cik + ckj           ⊳ relaxation
                 then cij ← cik + ckj
Notes:
Okay to omit superscripts, since extra relaxations can't hurt.
Runs in Θ(n³) time.
Simple to code.
Efficient in practice.
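The triple loop translates almost verbatim into Python (a sketch; input conventions as in the repeated-squaring example above):

def floyd_warshall(A):
    # A: n x n matrix with A[i][i] = 0 and INF for non-edges. Theta(n^3).
    n = len(A)
    c = [row[:] for row in A]   # superscripts omitted: relax in place
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if c[i][k] + c[k][j] < c[i][j]:     # relaxation
                    c[i][j] = c[i][k] + c[k][j]
    return c

INF = float("inf")
print(floyd_warshall([[0, 3, INF], [INF, 0, -1], [1, INF, 0]]))
# [[0, 3, 2], [0, 0, -1], [1, 4, 0]], matching repeated squaring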

Transitive closure of a directed graph
Compute t_ij = 1 if there exists a path from i to j, 0 otherwise.
IDEA: Use Floyd-Warshall, but with (∨, ∧) instead of (min, +):
t_ij(k) = t_ij(k-1) ∨ (t_ik(k-1) ∧ t_kj(k-1)).
Time = Θ(n³).

Graph reweighting
Theorem. Given a function h : V → R, reweight each edge (u, v) ∈ E by wh(u, v) = w(u, v) + h(u) - h(v). Then, for any two vertices, all paths between them are reweighted by the same amount.
Proof. Let p = v1 → v2 → ⋯ → vk be a path in G. We have
wh(p) = Σ_{i=1}^{k-1} wh(vi, vi+1)
      = Σ_{i=1}^{k-1} ( w(vi, vi+1) + h(vi) - h(vi+1) )
      = Σ_{i=1}^{k-1} w(vi, vi+1) + h(v1) - h(vk)    (the h-terms telescope)
      = w(p) + h(v1) - h(vk).
Same amount!

Shortest paths in reweighted graphs
Corollary. δh(u, v) = δ(u, v) + h(u) - h(v).
IDEA: Find a function h : V → R such that wh(u, v) ≥ 0 for all (u, v) ∈ E. Then, run Dijkstra's algorithm from each vertex on the reweighted graph.
NOTE: wh(u, v) ≥ 0 iff h(v) - h(u) ≤ w(u, v).

Johnson's algorithm
1. Find a function h : V → R such that wh(u, v) ≥ 0 for all (u, v) ∈ E by using Bellman-Ford to solve the difference constraints
   h(v) - h(u) ≤ w(u, v),
   or determine that a negative-weight cycle exists.
   Time = O(VE).
2. Run Dijkstra's algorithm using wh from each vertex u ∈ V to compute δh(u, v) for all v ∈ V.
   Time = O(VE + V² lg V).
3. For each (u, v) ∈ V × V, compute
   δ(u, v) = δh(u, v) - h(u) + h(v).
   Time = O(V²).
Total time = O(VE + V² lg V).
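The three steps compose directly; this sketch assumes the bellman_ford and dijkstra functions from the earlier sketches are in scope (vertices are 0 .. n-1):

def johnson(n, edges):
    # 1. Bellman-Ford from a new source s = n with 0-weight edges to all
    #    vertices gives h; it raises if a negative-weight cycle exists.
    h = bellman_ford(n + 1, edges + [(n, v, 0) for v in range(n)], n)
    # 2. Reweight so w_h(u, v) = w(u, v) + h(u) - h(v) >= 0, then run
    #    Dijkstra from every vertex on the reweighted graph.
    adj = {u: [] for u in range(n)}
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))
    delta = {}
    for u in range(n):
        dh = dijkstra(adj, u)
        for v in range(n):
            # 3. Undo the reweighting: delta(u, v) = delta_h(u, v) - h(u) + h(v).
            delta[(u, v)] = dh[v] - h[u] + h[v]
    return delta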

Introduction to Algorithms
6.046J/18.401J

LECTURE 20
Quiz 2 Review

6.046 Staff

Introduction to Algorithms
6.046J/18.401J

LECTURE 21
Take-Home Quiz
Instructions
Academic honesty
Strategies for doing well

Prof. Charles E. Leiserson



Take-home quiz
The take-home quiz contains 5 problems worth
25 points each, for a total of 125 points.
1 easy
2 moderate
1 hard
1 very hard


End of quiz
Your exam is due between 10:00 and
11:00 A.M. on Monday, November 22,
2004.
Late exams will not be accepted unless you obtain a Dean's Excuse or make prior arrangements with your recitation instructor.
You must hand in your own exam in
person.


Planning
The quiz should take you about 12 hours to
do, but you have five days in which to do it.
Plan your time wisely. Do not overwork,
and get enough sleep.
Ample partial credit will be given for good
solutions, especially if they are well written.
The better your asymptotic running-time
bounds, the higher your score.
Bonus points will be given for exceptionally
efficient or elegant solutions.

Format
Each problem should be answered on
a separate sheet (or sheets) of 3-hole
punched paper.
Mark the top of each problem with
your name,
6.046J/18.410J,
the problem number,
your recitation time,
and your TA.


Executive summary
Your solution to a problem should start with
a topic paragraph that provides an executive
summary of your solution.
This executive summary should describe
the problem you are solving,
the techniques you use to solve it,
any important assumptions you make, and
the running time your algorithm achieves.


Solutions
Write up your solutions cleanly and concisely
to maximize the chance that we understand
them.
Be explicit about running time and algorithms.
For example, don't just say you sort n numbers,
state that you are using heapsort, which sorts the n
numbers in O(n lg n) time in the worst case.

When describing an algorithm, give an English description of the main idea of the algorithm.
Use pseudocode only if necessary to clarify
your solution.

Solutions
Give examples, and draw figures.
Provide succinct and convincing arguments
for the correctness of your solutions.
Do not regurgitate material presented in class.
Cite algorithms and theorems from CLRS,
lecture, and recitation to simplify your
solutions.


Assumptions
Part of the goal of this exam is to test
engineering common sense.
If you find that a question is unclear or
ambiguous, make reasonable assumptions
in order to solve the problem.
State clearly in your write-up what
assumptions you have made.
Be careful what you assume, however,
because you will receive little credit if you
make a strong assumption that renders a
problem trivial.

Bugs, etc.
If you think that you've found a bug, please send
email to 6.046 course staff.
Corrections and clarifications will be sent to the
class via email.
Check your email daily to avoid missing
potentially important announcements.
If you did not receive an email last night
reminding you about Quiz 2, then you are not on
the class email list. Please let your recitation
instructor know immediately.

Academic honesty
This quiz is limited open book.
You may use
your course notes,
the CLRS textbook,
lecture videos,
basic reference materials such as dictionaries,
and
any of the handouts posted on the course web
page.
No other sources whatsoever may be consulted!

Academic honesty
For example, you may not use notes or solutions
from other times that this course or other related
courses have been taught, or materials on the
Web.
These materials will not help you, but you may not
use them anyhow.

You may not communicate with any person


except members of the 6.046 staff about any
aspect of the exam until after noon on Monday,
November 22, even if you have already handed
in your exam.

Academic honesty
If at any time you feel that you may have
violated this policy, it is imperative that you
contact the course staff immediately.
It will be much the worse for you if third parties
divulge your indiscretion.
If you have any questions about what resources
may or may not be used during the quiz, send
email to 6.046 course staff.


Poll of 78 quiz takers


Question 1: Did you cheat?

76 No.
1 Yes.
1 Abstain.


Poll of 78 quiz takers


Question 2: How many people do you know
who cheated?

72: None.
2: 3 people compared answers.
1: Suspect 2, but don't know.
1: Either 0 or 2.
1: Abstain.
1: 10 (the cheater).

Reread instructions

Please reread the exam


instructions in their entirety at
least once a day during the exam.


Test-taking strategies
Manage your time.
Manage your psyche.
Brainstorm.
Write-up early and often.


Manage your time


Work on all problems the first day.
Budget time for write-ups and debugging.
Don't get sucked into one problem at the expense of others.
Replan your strategy every day.


Manage your psyche


Get enough sleep.
Maintain a patient, persistent, and
positive attitude.
Use adrenaline productively.
Relax, and have fun.
It's not the end of the world!


Brainstorm
Get an upper bound, even if it is loose.
Look for analogies with problems you've seen.
Exploit special structure.
Solve a simpler problem.
Draw diagrams.
Contemplate.
Be wary of self-imposed constraints: think out of the box.
Work out small examples, and abstract.
Understand things in two ways: sanity checks.

Write up early and often


Write up partial solutions.
Groom your work every day.
Work on shortening and simplifying.
Provide an executive summary.
Ample partial credit will be given!
Unnecessarily long answers will be
penalized.


Positive attitude
