AdvancedAlgorithms NN
Dynamic Programming 1
Dynamic programming
Like divide-and-conquer, solve problem by combining the solutions
to sub-problems.
Approach:
• solve subproblems,
• store their results,
• reuse the stored results when solving bigger instances of the problem, without re-computing them.
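This store-and-reuse idea can be sketched with a tiny memoized function. Fibonacci numbers are used here purely as an illustration; the example is not from these slides.

```python
from functools import lru_cache

# Hypothetical illustration (not from the slides): memoized Fibonacci.
@lru_cache(maxsize=None)      # store: every computed sub-result is kept
def fib(n):
    if n < 2:                 # smallest sub-problems
        return n
    # reuse: repeated sub-problems hit the cache instead of being recomputed
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, with only ~50 distinct sub-problems evaluated
```

Without the cache, the same call would recompute the same sub-problems roughly 2^50 times.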
DP vs DC
Divide and conquer divides the problem into smaller instances, solves the instances independently, and combines their solutions into the final solution.
Dynamic programming also divides, solves, and combines, but the sub-problems may overlap; it memoizes earlier solutions and reuses them.
Application domain of DP
• Optimization problems: find a solution with optimal (maximum or minimum) value.
• We seek an optimal solution, not the optimal solution: there may be more than one optimal solution, and any one of them is acceptable.
Typical steps of DP
• Characterize the structure of an optimal
solution.
• Recursively define the value of an optimal
solution.
• Compute the value of an optimal solution in
a bottom-up fashion.
• Compute an optimal solution from
computed/stored information.
Matrix Multiplication
• Any matrix multiplication between a matrix with dimensions p×q and another with dimensions q×r performs p·q·r element multiplications, producing an answer matrix with dimensions p×r.
• Also note the condition that the second dimension of the first matrix and the first dimension of the second matrix must be equal in order to allow matrix multiplication (the number of columns of the first matrix must equal the number of rows of the second matrix).
Matrix Multiplication
For example, multiplying a 3×3 matrix by a 3×2 matrix:
No. of multiplications = p·q·r = 3·3·2 = 18
(equivalently, the 3×2 answer matrix has 6 entries, each requiring 3 multiplications: 3·6 = 18)
Matrix Multiplication
Algorithm to Multiply 2 Matrices
Input: Matrices Ap×q and Bq×r (with dimensions p×q and q×r)
Result: Matrix Cp×r resulting from the product A·B
MATRIX-MULTIPLY(Ap×q , Bq×r)
1. for i ← 1 to p
2. for j ← 1 to r
3. C[i, j] ← 0
4. for k ← 1 to q
5. C[i, j] ← C[i, j] + A[i, k] · B[k, j]
6. return C
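A direct Python transcription of MATRIX-MULTIPLY might look as follows (a sketch; note the pseudocode is 1-indexed while Python lists are 0-indexed):

```python
def matrix_multiply(A, B):
    """Multiply a p x q matrix A by a q x r matrix B, giving the
    p x r matrix C, following MATRIX-MULTIPLY above."""
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q, "columns of A must equal rows of B"
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):            # p*q*r scalar multiplications total
                C[i][j] += A[i][k] * B[k][j]
    return C

# 3x3 times 3x2 gives a 3x2 result via 3*3*2 = 18 multiplications
print(matrix_multiply([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
                      [[1, 0], [0, 1], [1, 1]]))
```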
Matrix-chain multiplication
• It may appear that the amount of work done won’t change if you
change the parenthesization of the expression, but we can prove that
is not the case!
• Let us use the following example:
– Let A be a 2x10 matrix
– Let B be a 10x50 matrix
– Let C be a 50x20 matrix
• Consider computing A(BC):
– # multiplications for (BC) = 10x50x20 = 10000, creating a 10x20
answer matrix
– # multiplications for A(BC) = 2x10x20 = 400
– Total multiplications = 10000 + 400 = 10400.
• Consider computing (AB)C:
– # multiplications for (AB) = 2x10x50 = 1000, creating a 2x50
answer matrix
– # multiplications for (AB)C = 2x50x20 = 2000,
– Total multiplications = 1000 + 2000 = 3000
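The two totals above can be checked with a few lines of Python (a sketch; each triple is the (p, q, r) dimensions of one product performed):

```python
# Checking the two totals above; each triple is the (p, q, r) of one
# matrix product, which costs p*q*r scalar multiplications.
def chain_cost(products):
    return sum(p * q * r for p, q, r in products)

# A is 2x10, B is 10x50, C is 50x20
a_bc = chain_cost([(10, 50, 20),   # BC, giving a 10x20 matrix
                   (2, 10, 20)])   # A(BC)
ab_c = chain_cost([(2, 10, 50),    # AB, giving a 2x50 matrix
                   (2, 50, 20)])   # (AB)C
print(a_bc, ab_c)  # 10400 3000
```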
Matrix-chain multiplication
• The second way is faster than the first!
• The multiplication sequence (parenthesization) matters.
• Different parenthesizations require different numbers of scalar
multiplications when computing a product of multiple matrices.
• Thus, our goal is:
“Given a chain of matrices to multiply, determine the fewest
number of multiplications necessary to compute the product.”
Matrix-chain multiplication – MCM DP
• Denote the chain <A1, A2, …, An> by its dimension sequence <p0, p1, p2, …, pn>
– i.e., A1 is p0×p1, A2 is p1×p2, …, Ai is pi-1×pi, …, An is pn-1×pn
• Intuitive brute-force solution: Counting the number of
parenthesizations by exhaustively checking all possible
parenthesizations.
• Let P(n) denote the number of alternative parenthesizations
of a sequence of n matrices:
– P(n) = 1                                 if n = 1
– P(n) = Σ k=1..n-1 P(k)·P(n-k)            if n ≥ 2
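The recurrence for P(n) can be evaluated directly (a sketch; the values are the Catalan numbers, which grow exponentially, which is why exhaustive checking is hopeless):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    """Number of alternative parenthesizations of a chain of n matrices."""
    if n == 1:
        return 1
    # split the outermost product between matrix k and matrix k+1
    return sum(P(k) * P(n - k) for k in range(1, n))

print([P(n) for n in range(1, 7)])  # [1, 1, 2, 5, 14, 42]
```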
Quiz
Four matrices M1, M2, M3 and M4 of dimensions p×q, q×r, r×s and s×t respectively can
be multiplied in several ways with different numbers of total scalar multiplications.
For example, when multiplied as
➢ ((M1 X M2) X (M3 X M4)), the total number of multiplications is pqr + rst + prt.
➢ (((M1 X M2) X M3) X M4), the total number of multiplications is pqr + prs + pst.
If p = 10, q = 100, r = 20, s = 5 and t = 80, then the number of scalar multiplications
needed is
• 248000
• 44000
• 19000
• 25000
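One way to check the quiz (a sketch, reading "needed" as the minimum over all multiplication orders): the recurrence below tries every split of Mi…Mj.

```python
from functools import lru_cache

# Quiz dimensions: M1 is 10x100, M2 is 100x20, M3 is 20x5, M4 is 5x80
p = (10, 100, 20, 5, 80)

@lru_cache(maxsize=None)
def min_cost(i, j):
    """Fewest scalar multiplications to compute Mi x ... x Mj (1-indexed)."""
    if i == j:
        return 0
    return min(min_cost(i, k) + min_cost(k + 1, j) + p[i - 1] * p[k] * p[j]
               for k in range(i, j))

print(min_cost(1, 4))  # 19000, achieved by (M1(M2M3))M4
```

So the correct option is 19000; 44000 and 25000 are the costs of the two orders shown in the question, and 248000 is the worst order.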
MCP DP Steps
• Step 1: structure of an optimal parenthesization
– Let Ai..j (i≤j) denote the matrix resulting from Ai×Ai+1×…×Aj
– Any parenthesization of Ai×Ai+1×…×Aj must split the product
between Ak and Ak+1 for some k, (i≤k<j).
– The cost = cost of computing Ai..k + cost of computing Ak+1..j +
cost of multiplying Ai..k × Ak+1..j.
– If k is the position for an optimal parenthesization, the
parenthesization of “prefix” subchain Ai×Ai+1×…×Ak within
this optimal parenthesization of Ai×Ai+1×…×Aj must be an
optimal parenthesization of Ai×Ai+1×…×Ak.
– Ai×Ai+1×…×Ak × Ak+1×…×Aj
MCP DP Steps
• Step 2: a recursive relation
– Let m[i,j] be the minimum number of multiplications
for Ai×Ai+1×…×Aj
– m[i,j] = 0                                              if i = j
– m[i,j] = min i≤k<j { m[i,k] + m[k+1,j] + pi-1·pk·pj }   if i < j
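This recurrence can be computed bottom-up, in the style of MATRIX-CHAIN-ORDER; a Python sketch (p is the dimension list, and the tables are 1-indexed to match the slides):

```python
import math

def matrix_chain_order(p):
    """Bottom-up MCM: p = [p0, ..., pn], so Ai is p[i-1] x p[i].
    Returns (m, s): m[i][j] is the optimal cost of Ai..Aj and s[i][j]
    the split k achieving it (both 1-indexed, as in the slides)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length, short to long
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):             # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6])  # 15125 for the six-matrix example used later
```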
MCM DP Steps
• Step 3, Computing the optimal cost
– Recursive algorithm will encounter the same
subproblem many times.
– By tabulating the answers to subproblems, each
subproblem is solved only once.
– The second hallmark of DP: overlapping subproblems
and solve every subproblem just once.
MCM DP Steps
• Step 3, Algorithm,
– array m[1..n,1..n], with m[i,j] records the optimal
cost for Ai×Ai+1×…×Aj .
– array s[1..n,1..n], s[i,j] records index k which
achieved the optimal cost when computing m[i,j].
– Suppose the input to the algorithm is p = <p0, p1, …, pn>.
MCM DP—order of matrix computations
m(1,1) m(1,2) m(1,3) m(1,4) m(1,5) m(1,6)
m(2,2) m(2,3) m(2,4) m(2,5) m(2,6)
m(3,3) m(3,4) m(3,5) m(3,6)
m(4,4) m(4,5) m(4,6)
m(5,5) m(5,6)
m(6,6)
Example
The s table for a four-matrix chain (entry s[i,j] is the optimal split k; columns are j = 4, 3, 2, 1):
i=1:  3  1  1  0
i=2:  3  2  0
i=3:  3  0
i=4:  0
• s[1,4] = 3: (A1A2A3)(A4)
• s[1,3] = 1: ((A1)(A2A3))(A4)
MCM DP Steps
• Step 4, constructing a parenthesization
order for the optimal solution.
– Since s[1..n,1..n] is computed, and s[i,j] is the
split position for AiAi+1…Aj , i.e, Ai…As[i,j] and
As[i,j] +1…Aj , thus, the parenthesization order
can be obtained from s[1..n,1..n] recursively,
beginning from s[1,n].
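Given the s table from Step 3, the parenthesization can be printed recursively; a sketch (the s entries below are hard-coded for the six-matrix example p = <30, 35, 15, 5, 10, 20, 25>, as computed by the bottom-up algorithm):

```python
def print_optimal_parens(s, i, j):
    """Recursively rebuild the parenthesization from the split table s,
    beginning from s[1, n]."""
    if i == j:
        return "A%d" % i
    k = s[(i, j)]                         # optimal split: Ai..Ak and Ak+1..Aj
    return "(%s%s)" % (print_optimal_parens(s, i, k),
                       print_optimal_parens(s, k + 1, j))

# split entries for the six-matrix example p = <30, 35, 15, 5, 10, 20, 25>
s = {(1, 6): 3, (1, 3): 1, (2, 3): 2, (4, 6): 5, (4, 5): 4, (5, 6): 5}
print(print_optimal_parens(s, 1, 6))  # ((A1(A2A3))((A4A5)A6))
```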
Running Time
• Running time = #overall subproblems × #choices per subproblem.
– In matrix-chain multiplication: O(n²) × O(n) = O(n³)
Example
• Show how to multiply this matrix chain optimally:

Matrix  Dimension
A1      30×35
A2      35×15
A3      15×5
A4      5×10
A5      10×20
A6      20×25
MCM DP Example
• Optimal parenthesization: ((A1(A2A3))((A4A5)A6)), costing 15125 scalar multiplications (the split tables give s[1,6] = 3, s[1,3] = 1, s[4,6] = 5).
Acknowledgments
• Slides adapted from:
http://www.cs.ucf.edu/~dmarino/ucf/cop3503/lectures/
• https://home.cse.ust.hk/~dekai/271/notes/L12/L12.pdf
• https://www.youtube.com/watch?v=_WncuhSJZyA
• Cormen T. H., Leiserson C. E., Rivest R. L. and Stein C., “Introduction to Algorithms”,
Chapter 15, Second Ed., PHI, India, 2006
Longest Common Subsequence (LCS)
• DNA analysis/DNA similarity testing, two DNA string comparison.
• DNA string: a sequence of symbols A,C,G,T.
– S=ACCGGTCGAGCTTCGAAT
• LCS problem: given X=<x1, x2,…, xm> and Y=<y1, y2,…, yn>, find their
LCS.
LCS Intuitive Solution –brute force
• List all possible subsequences of X, check
whether each is also a subsequence of Y, and
keep the longest one found.
• Each subsequence corresponds to a subset
of the indices {1, 2, …, m}, and there are 2^m
such subsets, so this approach takes
exponential time.
LCS DP –Step 1: Optimal Substructure
• Characterize optimal substructure of LCS.
• Theorem: Let X = <x1, x2, …, xm> (= Xm) and
Y = <y1, y2, …, yn> (= Yn),
and let Z = <z1, z2, …, zk> (= Zk) be any LCS of X and Y. Then:
1. if xm = yn, then zk = xm = yn, and Zk-1 is an LCS of Xm-1 and Yn-1;
2. if xm ≠ yn and zk ≠ xm, then Z is an LCS of Xm-1 and Y;
3. if xm ≠ yn and zk ≠ yn, then Z is an LCS of X and Yn-1.
Recursive Algorithm for LCS
Let c[i,j] be the length of an LCS of the prefixes Xi and Yj. Then
c[i,j] = 0                              if i = 0 or j = 0
c[i,j] = c[i-1,j-1] + 1                 if i, j > 0 and xi = yj
c[i,j] = max(c[i-1,j], c[i,j-1])        if i, j > 0 and xi ≠ yj
Recursive Tree
Recursive solution contains a small number of distinct
subproblems repeated many times.
LCS DP –step 4: Constructing LCS
We reconstruct the path by calling Print-LCS(b, X, n, m) and
following the arrows, printing out characters of X that
correspond to the diagonal arrows (a Θ(n + m) traversal from
the lower right of the matrix to the origin):
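The full LCS computation, table fill plus the traversal just described, can be sketched in Python (it returns one LCS; ties mean the answer is not unique):

```python
def lcs(X, Y):
    """Bottom-up LCS: c[i][j] = LCS length of X[:i] and Y[:j]; then walk
    back from the lower right (the Theta(m+n) traversal described above)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:          # diagonal arrow: part of the LCS
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:  # up arrow
            i -= 1
        else:                             # left arrow
            j -= 1
    return "".join(reversed(out))

print(lcs("BACDB", "BDCB"))  # one LCS of length 3
```

The X = <BACDB>, Y = <BDCB> pair from the "Solve" slide below has LCS length 3.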
Solve
• X=<BACDB> Y=<BDCB>
Exercise
Determine LCS in (1,0,0,1,0,1,0,1) and
(0,1,0,1,1,0,1,1,0)
Acknowledgments
• Slides adapted from:
http://www.facweb.iitkgp.ac.in/~sourav/Lecture-12.pdf
• https://www.ics.uci.edu/~goodrich/teach/cs260P/notes/LCS.pdf
Finding Shortest Path in Multistage Graph using
Dynamic Programming
Multistage Graph
• To find the shortest path between the source vertex s and the destination
vertex t.
• A multistage graph is a directed graph whose vertices are divided into stages V1, V2, …, Vk.
• Edges go only from vertices of one stage to vertices of the next stage (there are no edges
between vertices of the same stage, and none from a vertex back to a previous stage).
• The first and the last stages each contain a single vertex (s and t).
Applying Greedy approach to solve
• Greedy Choice 1:
• Edge:(1,5) (5,8) (8,10) (10,12)
• Cost: 2 + 8 + 5 + 2
=17
• Choice 2:
• Edge:(1,2) (2,7) (7,10) (10,12)
• Cost: 9 + 2 + 3 + 2
=16
• Greedy choice fails
Applying Brute force to solve
Brute force: enumerate all possible s-to-t paths
and pick the minimum-cost path.
Solving using Dynamic Programming
- Forward approach
- Backward approach
Solving using Dynamic Programming: Forward approach

vertex            1    2   3   4   5   6   7   8   9   10  11  12
cost              16   7   9   18  15  7   5   7   4   2   5   0
d (next vertex)   2/3  7   6   8   8   10  10  10  12  12  12  12

(cost(j) is the minimum cost from vertex j to the destination t = 12, and d(j) is the next vertex on such a minimum-cost path; the optimal cost from s = 1 is therefore 16.)
Multistage Graph pseudo code : forward approach
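Since the pseudocode figure is not reproduced here, a sketch of the forward approach on a small hypothetical 4-stage graph (this graph is an invented illustration, not the one pictured in the slides):

```python
# Hypothetical 4-stage graph: edges[u] lists (v, c(u, v));
# vertex 1 is s, vertex 8 is t, vertices are numbered stage by stage.
edges = {
    1: [(2, 2), (3, 1)],
    2: [(4, 2), (5, 3)],
    3: [(4, 6), (5, 2)],
    4: [(6, 1), (7, 4)],
    5: [(6, 3), (7, 2)],
    6: [(8, 6)],
    7: [(8, 2)],
    8: [],
}

def multistage_forward(edges, s, t):
    """Forward approach: cost(j) = min over edges (j, v) of c(j, v) + cost(v),
    computed from t back toward s."""
    cost = {t: 0}
    d = {}                              # d[j] = next vertex on an optimal path
    for j in sorted(edges, reverse=True):
        if j == t:
            continue
        cost[j], d[j] = min((c + cost[v], v) for v, c in edges[j])
    path, v = [s], s                    # read off the optimal path
    while v != t:
        v = d[v]
        path.append(v)
    return cost[s], path

print(multistage_forward(edges, 1, 8))  # (7, [1, 3, 5, 7, 8])
```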
Solving using Dynamic Programming: Backward approach
• bcost(i, j): minimum cost of a path from the source vertex s to
vertex j in stage Vi.
• bcost(i, j) = min k∈Vi-1 { bcost(i-1, k) + c(k, j) }
• bcost(2,2) = 9
• bcost(2,3) = 7
• bcost(2,4) = 3
• bcost(2,5) = 2
• bcost(3,6) = min{bcost(2,2)+c(2,6),
•               bcost(2,3)+c(3,6)}
•             = min{9+4, 7+2} = 9
• bcost(3,7) = min{bcost(2,2)+c(2,7),
•               bcost(2,3)+c(3,7),
•               bcost(2,5)+c(5,7)}
•             = min{9+2, 7+7, 2+11} = 11
Multistage Graph pseudo code: backward approach
Solve
Find minimum cost path from s to t in the multistage graph
given below using:
a.Forward approach
b.Backward approach
Longest Increasing Subsequence (LIS)
Longest Increasing Subsequence (LIS)
• Given a sequence A of size N, find the length of
the longest increasing subsequence of A.
The longest increasing subsequence is a
subsequence of the given sequence whose
elements are in sorted order, lowest to
highest, and which is as long as possible. This
subsequence is not necessarily contiguous, or unique.
• Note: duplicate numbers do not count as increasing.
Longest Increasing Subsequence (LIS)
• Method 1: Recursion.
• Optimal Substructure: Let arr[0..n-1] be
the input array and L(i) be the length of the
LIS ending at index i such that arr[i] is the
last element of the LIS.
Longest Increasing Subsequence (LIS)
• Then, L(i) can be recursively written as:
• L(i) = 1 + max( L(j) ) where 0 ≤ j < i and arr[j] < arr[i];
  or L(i) = 1, if no such j exists.
• To find the LIS for a given array, we return max( L(i) ) where 0 ≤ i < n.
• Formally, the length of the longest increasing subsequence ending at index i
is 1 greater than the maximum of the lengths of all increasing subsequences
ending at indices j before i with arr[j] < arr[i]. Thus the LIS problem
satisfies the optimal-substructure property: the main problem can be solved
using solutions to subproblems.
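Method 1 can be sketched directly from this recurrence (exponential time, as discussed on the next slide):

```python
def L(arr, i):
    """Length of the LIS ending at index i (arr[i] is its last element)."""
    best = 1                              # arr[i] alone, if no valid j exists
    for j in range(i):                    # 0 <= j < i
        if arr[j] < arr[i]:
            best = max(best, 1 + L(arr, j))
    return best

def lis_recursive(arr):
    # the answer is the best LIS ending at any index
    return max(L(arr, i) for i in range(len(arr)))

print(lis_recursive([3, 10, 2, 11]))  # 3
```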
Longest Increasing Subsequence (LIS)
Complexity Analysis:
• Time Complexity: The time complexity of
this recursive approach is exponential as
there is a case of overlapping subproblems
as explained in the recursive tree diagram.
Longest Increasing Subsequence (LIS)
• Method 2: Dynamic Programming.
• We can see that many subproblems in the
above recursive solution are solved again
and again. So this problem has the
overlapping-subproblems property, and
recomputation of the same subproblems can be
avoided by using either memoization or
tabulation.
Longest Increasing Subsequence (LIS)
• Input : arr[] = {3, 10, 2, 11} LIS[] = {1, 1, 1, 1} (initially) Iteration-wise
simulation :
• arr[2] > arr[1] {LIS[2] = max(LIS [2], LIS[1]+1)=2}
• arr[3] < arr[1] {No change}
• arr[3] < arr[2] {No change}
• arr[4] > arr[1] {LIS[4] = max(LIS [4], LIS[1]+1)
=max(1,1+1)=2}
• arr[4] > arr[2] {LIS[4] = max(LIS [4], LIS[2]+1)
=max(1,2+1)=3}
• arr[4] > arr[3] {LIS[4] = max(LIS [4], LIS[3]+1)
=max(1,1+1)=2}
• We can avoid recomputation of subproblems by using tabulation as shown
next:
Longest Increasing Subsequence (LIS)
• Input : arr[] = {3, 10, 2, 11}
• We can avoid recomputation of subproblems by using tabulation as shown
next:
arr[ ] 3 10 2 11
LIS[ ] 1 2 1 3
Dynamic Programming implementation (C++)

#include <algorithm>   // for std::max_element

/* lis() returns the length of the longest
   increasing subsequence in arr[] of size n */
int lis(int arr[], int n)
{
    int lis[n];
    /* every element by itself is an increasing subsequence of length 1 */
    for (int i = 0; i < n; i++)
        lis[i] = 1;
    /* compute optimized LIS values in bottom-up manner */
    for (int i = 1; i < n; i++)
        for (int j = 0; j < i; j++)
            if (arr[i] > arr[j] && lis[i] < lis[j] + 1)
                lis[i] = lis[j] + 1;
    // return maximum value in lis[]
    return *std::max_element(lis, lis + n);
}
Longest Increasing Subsequence (LIS)

# Dynamic programming Python implementation of the LIS problem
# lis returns the length of the longest increasing subsequence in arr
def lis(arr):
    n = len(arr)
    # declare the list (array) for LIS and initialize LIS values for all indexes
    lis = [1] * n
    # compute optimized LIS values in bottom-up manner
    for i in range(1, n):
        for j in range(0, i):
            if arr[i] > arr[j] and lis[i] < lis[j] + 1:
                lis[i] = lis[j] + 1
    # pick the maximum of all LIS values
    return max(lis)

# Driver program to test the above function
arr = [10, 22, 9, 33, 21, 50, 41, 60]
print("Length of lis is", lis(arr))

Output: Length of lis is 5
Time Complexity: O(n²).

Arr[]          10  22  9   33  21  50  41  60
LIS (initial)  1   1   1   1   1   1   1   1
LIS (final)    1   2   1   3   2   4   4   5
Rod Cutting
Problem: Given a rod of length ‘n’ and a table of prices Pi , sell it by cutting it
into pieces so as to maximize revenue rn
Length i 1 2 3 4 5 6 7 8 9 10
Price Pi 1 5 8 9 10 17 17 20 24 30
Rod cutting
In general, a rod of length n can be cut in 2^(n−1)
different ways, since we can choose to cut, or not
to cut, at each distance i (1 ≤ i ≤ n−1) from the left end.
rn = max(pn, r1 + rn-1, r2 + rn-2, …, rn-1 + r1)
Equivalently, the recursive formulation is
rn = max 1≤i≤n (pi + rn−i)
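The bottom-up computation for the price table above can be sketched as:

```python
def cut_rod(p, n):
    """Bottom-up rod cutting: p[i] is the price of a piece of length i
    (with p[0] = 0); returns the maximum revenue r_n."""
    r = [0] * (n + 1)
    for i in range(1, n + 1):
        # r_i = max over first-piece lengths k of p_k + r_{i-k}
        r[i] = max(p[k] + r[i - k] for k in range(1, i + 1))
    return r[n]

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]   # price table above
print([cut_rod(prices, n) for n in (4, 8, 10)])    # [10, 22, 30]
```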
Rod cutting – 4-inch rod example
Recursive top down implementation
DP for rod cutting
• After solving smaller instances of problem,
store the values, it can be used to solve the
bigger instances
• At the cost of memory speed up execution
• Two approaches- Top down or Bottom up
• Both have same time complexity O(n2)
C(i) = max 1≤k≤i { Vk + C(i−k) }
Length   1  2  3  4   5   6   7   8
Price    1  5  8  9   10  17  17  20
Optimal  1  5  8  10  13  17  18  22
Exercise
i     0  1  2  3  4   5   6   7   8   9   10
r[i]  0  1  5  8  10  13  17  18  22  25  30
s[i]  0  1  2  3  2   2   6   1   2   3   10
Acknowledgements
• https://www.youtube.com/watch?v=ElFrskby_7M
Dynamic programming
Egg Dropping Puzzle
• Input
n eggs, a building with k floors
• Output
the minimum number of attempts needed, in the worst case,
to find the floor from which the egg will break
OR
finding the threshold/critical/pivot floor
Assumptions
• Suppose that we wish to know which stories in a 20-story
building are safe to drop eggs from, and which will cause the
eggs to break on landing.
• We make a few assumptions:
– An egg that survives a fall can be used again; a broken egg must be discarded.
– The effect of a fall is the same for all eggs.
– If an egg breaks when dropped from some floor, it would break if dropped from
any higher floor; if it survives, it would survive a drop from any lower floor.
Egg Dropping Puzzle-recursion
eggDrop(n, k) = 1 + min x=1..k { max( eggDrop(n−1, x−1), eggDrop(n, k−x) ) }

Eggs\Floors  0  1  2  3  4  5  6
1            0  1  2  3  4  5  6
2            0  1  2  2  3  3  3
3            0  1  2  2  3  3  3
Egg Dropping Puzzle-Recursion
int max(int a, int b)
{ return (a > b) ? a : b; }

/* base case: with one egg we must try every floor from the bottom up */
if (n == 1)
    return k;
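A bottom-up DP version of eggDrop(n, k) can be sketched as follows; it reproduces the table above:

```python
def egg_drop(n, k):
    """Bottom-up DP: W[e][f] = minimum worst-case trials with e eggs and
    f floors, following the eggDrop recurrence above."""
    W = [[0] * (k + 1) for _ in range(n + 1)]
    for f in range(1, k + 1):
        W[1][f] = f                                  # one egg: linear scan
    for e in range(2, n + 1):
        for f in range(1, k + 1):
            W[e][f] = 1 + min(max(W[e - 1][x - 1],   # egg breaks at floor x
                                  W[e][f - x])       # egg survives floor x
                              for x in range(1, f + 1))
    return W[n][k]

print(egg_drop(2, 6), egg_drop(3, 6))  # 3 3
```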
Edit Distance pseudo code
for i from 1 to m
    for j from 1 to n
        if s[i] = t[j] then
            d[i, j] := d[i-1, j-1]
        else
            d[i, j] := minimum(
                d[i-1, j] + 1,    // insertion
                d[i, j-1] + 1,    // deletion
                d[i-1, j-1] + 1)  // substitution
return d[m, n]
Edit Distance
• Consider the following example with str1 = “adceg” and str2 = “abcfg”.
– To deal with empty strings, an extra row and column
have been added to the chart below.
• Each inner cell is filled as Min(Insert, Remove, Replace) + 1, i.e. the
minimum of its left neighbour (insert), upper neighbour (remove) and
upper-left neighbour (replace), plus 1; matching characters copy the
upper-left value unchanged.
Exercise
• str1 = “adceg”  str2 = “abcfg”

       Null  a  b  c  f  g
Null   0     1  2  3  4  5
a      1     0  1  2  3  4
d      2     1  1  2  3  4
c      3     2  2  1  2  3
e      4     3  3  2  2  3
g      5     4  4  3  3  2
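The chart-filling procedure can be sketched in Python; it reproduces the bottom-right value 2 of the exercise table:

```python
def edit_distance(s, t):
    """Fill the (m+1) x (n+1) chart; d[i][j] is the edit distance
    between s[:i] and t[:j]."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                       # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                       # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                d[i][j] = d[i - 1][j - 1]
            else:
                d[i][j] = 1 + min(d[i - 1][j],      # remove
                                  d[i][j - 1],      # insert
                                  d[i - 1][j - 1])  # replace
    return d[m][n]

print(edit_distance("adceg", "abcfg"))  # 2
```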
• https://www.geeksforgeeks.org/edit-distance-dp-5/