Dynamic Programming
- Dynamic Programming is a general algorithm design paradigm.
- Rather than give the general structure, let us first give a motivating example: Matrix Chain-Products.

Matrix Chain-Products (not in book)
Review: Matrix Multiplication.
- C = A*B
- A is d × e and B is e × f
- C[i,j] = Σ_{k=0..e-1} A[i,k] * B[k,j]
- O(def) time

© 2004 Goodrich, Tamassia, Dynamic Programming
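The review formula above can be written out directly. This is a minimal sketch (the helper name `mat_mult` is ours, not from the slides): the triple loop performs d·e·f scalar multiplications, which is where the O(def) bound comes from.

```python
def mat_mult(A, B):
    """Multiply a d x e matrix A by an e x f matrix B; O(def) time."""
    d, e, f = len(A), len(B), len(B[0])
    assert all(len(row) == e for row in A), "inner dimensions must agree"
    C = [[0] * f for _ in range(d)]
    for i in range(d):
        for j in range(f):
            # C[i,j] = sum over k = 0..e-1 of A[i,k] * B[k,j]
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(e))
    return C
```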

Matrix Chain-Products
- Matrix Chain-Product: compute A = A0*A1*…*An-1, where Ai is di × di+1.
- Problem: how to parenthesize?
- Example: B is 3 × 100, C is 100 × 5, D is 5 × 5.
  - (B*C)*D takes 1500 + 75 = 1575 ops
  - B*(C*D) takes 1500 + 2500 = 4000 ops

An Enumeration Approach
- Matrix Chain-Product algorithm: try all possible ways to parenthesize A = A0*A1*…*An-1, calculate the number of ops for each one, and pick the best.
- Running time: the number of parenthesizations is equal to the number of binary trees with n nodes. This is exponential! It is the Catalan number, which grows almost as fast as 4^n.
- This is a terrible algorithm!
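The count the slide refers to can be sketched with a short recurrence (the function name `num_parens` is ours): a chain of n matrices splits into a left chain of k and a right chain of n-k, giving the Catalan numbers.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_parens(n):
    """Number of ways to fully parenthesize a chain of n matrices."""
    if n <= 1:
        return 1
    # split the chain into a left part of k matrices and a right part of n-k
    return sum(num_parens(k) * num_parens(n - k) for k in range(1, n))
```

Already at n = 10 there are 4862 parenthesizations, and the count roughly quadruples with each additional matrix, so enumeration is hopeless.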
A Greedy Approach
- Idea #1: repeatedly select the product that uses (up) the most operations.
- Counter-example: A is 10 × 5, B is 5 × 10, C is 10 × 5, D is 5 × 10.
  - Greedy idea #1 gives (A*B)*(C*D), which takes 500 + 1000 + 500 = 2000 ops.
  - A*((B*C)*D) takes 500 + 250 + 250 = 1000 ops.

Another Greedy Approach
- Idea #2: repeatedly select the product that uses the fewest operations.
- Counter-example: A is 101 × 11, B is 11 × 9, C is 9 × 100, D is 100 × 99.
  - Greedy idea #2 gives A*((B*C)*D), which takes 109989 + 9900 + 108900 = 228789 ops.
  - (A*B)*(C*D) takes 9999 + 89991 + 89100 = 189090 ops.
- The greedy approach does not give us the optimal value.
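The operation counts in these counter-examples can be re-derived mechanically. A small checker (not from the slides; `chain_cost` is our name) represents a parenthesization as a nested tuple of (rows, cols) shapes and charges r·c·c' ops for each multiply:

```python
def chain_cost(expr):
    """Return ((rows, cols), total_ops) for a nested-tuple parenthesization."""
    if isinstance(expr[0], int):          # a bare matrix shape (rows, cols)
        return expr, 0
    (r1, c1), ops_left = chain_cost(expr[0])
    (r2, c2), ops_right = chain_cost(expr[1])
    assert c1 == r2, "dimension mismatch"
    # multiplying an r1 x c1 by a c1 x c2 matrix costs r1*c1*c2 ops
    return (r1, c2), ops_left + ops_right + r1 * c1 * c2
```

For the second counter-example, `chain_cost(((101,11), (((11,9),(9,100)), (100,99))))` reproduces the 228789 total, while the (A*B)*(C*D) grouping yields 189090.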

A "Recursive" Approach
- Define subproblems: find the best parenthesization of Ai*Ai+1*…*Aj, and let Ni,j denote the number of operations done by this subproblem.
- The optimal solution for the whole problem is N0,n-1.
- Subproblem optimality: the optimal solution can be defined in terms of optimal subproblems. There has to be a final multiplication (the root of the expression tree) for the optimal solution. Say the final multiply is at index i: (A0*…*Ai)*(Ai+1*…*An-1). Then the optimal solution N0,n-1 is the sum of two optimal subproblems, N0,i and Ni+1,n-1, plus the time for the last multiply.
- If the global optimum did not have these optimal subproblems, we could define an even better "optimal" solution.

A Characterizing Equation
- The global optimum has to be defined in terms of optimal subproblems, depending on where the final multiply is.
- Let us consider all possible places for that final multiply, recalling that Ai is a di × di+1 dimensional matrix. A characterizing equation for Ni,j is the following:

  N_{i,j} = min_{i ≤ k < j} { N_{i,k} + N_{k+1,j} + d_i · d_{k+1} · d_{j+1} }

- Note that the subproblems are not independent: the subproblems overlap.
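The characterizing equation can be transcribed directly as a memoized recursion. This is an illustration of the equation, not the slides' algorithm (which builds the table bottom-up); `best_ops` is our name, and `d` is the dimension list with Ai of size d[i] × d[i+1]:

```python
from functools import lru_cache

def best_ops(d):
    """Minimum ops to multiply the chain A0..An-1, where Ai is d[i] x d[i+1]."""
    n = len(d) - 1

    @lru_cache(maxsize=None)
    def N(i, j):
        if i == j:                        # a single matrix: no multiplies
            return 0
        # try every position k for the final multiply, per the equation
        return min(N(i, k) + N(k + 1, j) + d[i] * d[k + 1] * d[j + 1]
                   for k in range(i, j))

    return N(0, n - 1)
```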
A Dynamic Programming Algorithm
- Since subproblems overlap, we don't use recursion. Instead, we construct optimal subproblems "bottom-up."
- The Ni,i's are easy, so start with them; then do length 2, 3, … subproblems, and so on.
- The running time is O(n^3).

Algorithm matrixChain(S):
  Input: sequence S of n matrices to be multiplied
  Output: number of operations in an optimal parenthesization of S
  for i ← 0 to n-1 do
    Ni,i ← 0
  for b ← 1 to n-1 do
    for i ← 0 to n-b-1 do
      j ← i+b
      Ni,j ← +infinity
      for k ← i to j-1 do
        Ni,j ← min{Ni,j, Ni,k + Nk+1,j + di·dk+1·dj+1}

A Dynamic Programming Algorithm Visualization
- The bottom-up construction fills in the N array by diagonals; Ni,j gets values from previous entries in the i-th row and j-th column.
- Filling in each entry of the N table takes O(n) time, so the total run time is O(n^3).
- The actual parenthesization can be recovered by remembering the best "k" for each N entry.
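The matrixChain pseudocode above translates almost line for line into Python (the function name `matrix_chain` is ours; as before, `d` holds the dimensions, with Ai of size d[i] × d[i+1]):

```python
def matrix_chain(d):
    """Bottom-up matrix chain-product: minimum ops for A0*...*An-1."""
    n = len(d) - 1
    N = [[0] * n for _ in range(n)]       # N[i][i] = 0: a single matrix
    for b in range(1, n):                 # b = subchain length minus 1
        for i in range(n - b):
            j = i + b
            # diagonal fill: N[i][k] and N[k+1][j] are already computed
            N[i][j] = min(N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                          for k in range(i, j))
    return N[0][n - 1]
```

The three nested loops each run at most n times, matching the O(n^3) bound on the slide.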

The General Dynamic Programming Technique
- Applies to a problem that at first seems to require a lot of time (possibly exponential), provided we have:
  - Simple subproblems: the subproblems can be defined in terms of a few variables, such as j, k, l, m, and so on.
  - Subproblem optimality: the global optimum value can be defined in terms of optimal subproblems.
  - Subproblem overlap: the subproblems are not independent, but instead they overlap (hence, should be constructed bottom-up).

Subsequences
- A subsequence of a character string x0x1x2…xn-1 is a string of the form x_{i1}x_{i2}…x_{ik}, where i_j < i_{j+1}.
- Not the same as a substring!
- Example string: ABCDEFGHIJK
  - Subsequence: ACEGIJK
  - Subsequence: DFGHK
  - Not a subsequence: DAGH
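The definition above amounts to a simple test (the helper name `is_subsequence` is ours): the characters of s must appear in t in order, though not necessarily contiguously.

```python
def is_subsequence(s, t):
    """True if s is a subsequence of t (characters in order, gaps allowed)."""
    it = iter(t)
    # 'ch in it' consumes the iterator up to and including the first match,
    # so later characters of s can only be found at later positions of t
    return all(ch in it for ch in s)
```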
The Longest Common Subsequence (LCS) Problem
- Given two strings X and Y, the longest common subsequence (LCS) problem is to find a longest subsequence common to both X and Y.
- Has applications to DNA similarity testing (the alphabet is {A, C, G, T}).
- Example: ABCDEFG and XZACKDFWGH have ACDFG as a longest common subsequence.

A Poor Approach to the LCS Problem
- A brute-force solution:
  - Enumerate all subsequences of X.
  - Test which ones are also subsequences of Y.
  - Pick the longest one.
- Analysis: if X is of length n, then it has 2^n subsequences. This is an exponential-time algorithm!
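A sketch of the brute-force approach exactly as described (the name `lcs_brute` is ours). It tries subsequences of X from longest to shortest, so the first one that also occurs in Y can be returned immediately; the 2^n candidate set makes this usable only for tiny strings:

```python
from itertools import combinations

def lcs_brute(X, Y):
    """Exponential-time LCS by enumerating all subsequences of X."""
    def is_subseq(s, t):
        it = iter(t)
        return all(c in it for c in s)

    for k in range(len(X), 0, -1):                    # longest first
        for idx in combinations(range(len(X)), k):    # choose k positions
            cand = "".join(X[i] for i in idx)
            if is_subseq(cand, Y):
                return cand
    return ""
```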

A Dynamic-Programming Approach to the LCS Problem
- Define L[i,j] to be the length of the longest common subsequence of X[0..i] and Y[0..j].
- Allow for -1 as an index, so L[-1,k] = 0 and L[k,-1] = 0, to indicate that the null part of X or Y has no match with the other.
- Then we can define L[i,j] in the general case as follows:
  1. If xi = yj, then L[i,j] = L[i-1,j-1] + 1 (we can add this match).
  2. If xi ≠ yj, then L[i,j] = max{L[i-1,j], L[i,j-1]} (we have no match here).

An LCS Algorithm

Algorithm LCS(X,Y):
  Input: strings X and Y with n and m elements, respectively
  Output: for i = 0,…,n-1 and j = 0,…,m-1, the length L[i,j] of a longest string
    that is a subsequence of both the string X[0..i] = x0x1x2…xi and the
    string Y[0..j] = y0y1y2…yj
  for i ← -1 to n-1 do
    L[i,-1] ← 0
  for j ← 0 to m-1 do
    L[-1,j] ← 0
  for i ← 0 to n-1 do
    for j ← 0 to m-1 do
      if xi = yj then
        L[i,j] ← L[i-1,j-1] + 1
      else
        L[i,j] ← max{L[i-1,j], L[i,j-1]}
  return array L
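In Python, the -1 boundary rows are most easily handled by shifting every index up by one, so row 0 and column 0 of the table play the role of L[·,-1] and L[-1,·]. A sketch (the name `lcs_length` is ours):

```python
def lcs_length(X, Y):
    """Fill the LCS table; L[i+1][j+1] = LCS length of X[0..i] and Y[0..j]."""
    n, m = len(X), len(Y)
    # row 0 and column 0 are the "-1" boundaries, already zero
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if X[i] == Y[j]:
                L[i + 1][j + 1] = L[i][j] + 1              # extend the match
            else:
                L[i + 1][j + 1] = max(L[i][j + 1], L[i + 1][j])
    return L
```

The full LCS length ends up in the bottom-right entry of the table.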
Visualizing the LCS Algorithm
- (figure omitted: the slide shows the L table filled in for an example pair of strings)

Analysis of LCS Algorithm
- We have two nested loops: the outer one iterates n times, the inner one iterates m times, and a constant amount of work is done inside each iteration of the inner loop.
- Thus, the total running time is O(nm).
- The answer is contained in L[n-1,m-1] (and the subsequence itself can be recovered from the L table).

© 2004 Goodrich, Tamassia, Dynamic Programming
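The recovery step mentioned above can be sketched by walking the filled table backwards from the bottom-right corner, emitting a character at every match (the function name `lcs` and the index-shifted table are our choices, not the slides'):

```python
def lcs(X, Y):
    """Return one longest common subsequence of X and Y."""
    n, m = len(X), len(Y)
    L = [[0] * (m + 1) for _ in range(n + 1)]    # shifted table, as before
    for i in range(n):
        for j in range(m):
            L[i + 1][j + 1] = (L[i][j] + 1 if X[i] == Y[j]
                               else max(L[i][j + 1], L[i + 1][j]))
    # backtrack: at a match, take the character and move diagonally;
    # otherwise move toward the neighbor that produced the max
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```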
