
Advanced Algorithms

Dynamic Programming 1
Dynamic programming
Like divide-and-conquer, dynamic programming solves a problem by combining the
solutions to sub-problems.
Approach:
• solve sub-problems,
• store their results,
• reuse the stored results while solving bigger instances of the problem,
without re-computing them
DP vs DC
Divide and conquer divides the problem into smaller instances, solves each
instance, and combines the solutions to get the final answer.
Dynamic programming also divides, solves, and combines, but the sub-problems may
overlap; it memoizes earlier solutions and reuses them.
Application domain of DP
• Optimization problems: find a solution with an
optimal (maximum or minimum) value.
• We seek an optimal solution, not the optimal
solution: there may be more than one optimal
solution, and any one of them is acceptable.

Dynamic Programming 3
Typical steps of DP
• Characterize the structure of an optimal
solution.
• Recursively define the value of an optimal
solution.
• Compute the value of an optimal solution in
a bottom-up fashion.
• Compute an optimal solution from
computed/stored information.
Dynamic Programming 4
Matrix Multiplication
• Note that any matrix multiplication between a matrix with
dimensions pxq and another with dimensions qxr will
perform pxqxr element multiplications creating an answer
that is a matrix with dimensions pxr.

• Also note the condition that the second dimension in the first
matrix and the first dimension in the second matrix must be
equal in order to allow matrix multiplication (the number of
columns of the first matrix should be equal to the number of
rows of the second matrix) .

Dynamic Programming 5
Matrix Multiplication

No. of scalar multiplications = 6 entries × 3 = 18
OR: p*q*r = 3*3*2 = 18 (a 3×3 matrix times a 3×2 matrix)

* Dynamic Programming 6
Matrix Multiplication
Algorithm to Multiply 2 Matrices
Input: Matrices Ap×q and Bq×r (with dimensions p×q and q×r)
Result: Matrix Cp×r resulting from the product A·B

MATRIX-MULTIPLY(Ap×q , Bq×r)
1. for i ← 1 to p
2. for j ← 1 to r
3. C[i, j] ← 0
4. for k ← 1 to q
5. C[i, j] ← C[i, j] + A[i, k] · B[k, j]
6. return C

Scalar multiplication in line 5 dominates time to compute C


Number of scalar multiplications = pqr
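
As a concrete sketch of this (plain Python, no libraries; the function name matrix_multiply is ours), the triple loop below mirrors MATRIX-MULTIPLY and counts the p·q·r scalar multiplications:

def matrix_multiply(A, B):
    """Multiply a p x q matrix A by a q x r matrix B, counting scalar multiplications."""
    p, q, r = len(A), len(B), len(B[0])
    assert all(len(row) == q for row in A), "inner dimensions must agree"
    C = [[0] * r for _ in range(p)]
    mults = 0
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

# A 3x3 matrix times a 3x2 matrix needs 3*3*2 = 18 scalar multiplications, as above.
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[1, 0], [0, 1], [1, 1]]
C, mults = matrix_multiply(A, B)
print(mults)  # 18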
* Dynamic Programming 7
Matrix Chain Multiplication
• Problem: given <A1, A2, …,An>, compute the product:
A1×A2×…×An , find the fastest way (i.e., minimum number of
multiplications) to compute it.
• Given some matrices to multiply, determine the best order to
multiply them so you can minimize the number of single element
multiplications.
– i.e. Determine the way the matrices are parenthesized.

• First off, it should be noted that matrix multiplication is associative,
but not commutative. Since it is associative, we always have:
• ((AB)(CD)) = (A(B(CD))), or any other grouping, as long as the
matrices are in the same consecutive order.
i.e. parenthesization does not change the result.
• BUT: ((AB)(CD)) ≠ ((BA)(DC))

Matrix-chain Multiplication
• Example: consider the chain A1, A2, A3, A4
of 4 matrices
– Let us compute the product A1A2A3A4
• There are 5 possible ways:
1. (A1(A2(A3A4)))
2. (A1((A2A3)A4))
3. ((A1A2)(A3A4))
4. ((A1(A2A3))A4)
5. (((A1A2)A3)A4)
11-9
Matrix-chain Multiplication
• Matrix-chain multiplication problem
– Given a chain A1, A2, …, An of n matrices,
where for i=1, 2, …, n, matrix Ai has dimension
pi-1×pi
– Parenthesize the product A1A2…An such that the
total number of scalar multiplications is
minimized
• Brute force method of exhaustive search
takes time exponential in n

11-10
Matrix-chain multiplication
• It may appear that the amount of work done won’t change if you
change the parenthesization of the expression, but we can prove that
is not the case!
• Let us use the following example:
– Let A be a 2x10 matrix
– Let B be a 10x50 matrix
– Let C be a 50x20 matrix

* Dynamic Programming 11
Matrix-chain multiplication
– Let A be a 2x10 matrix
– Let B be a 10x50 matrix
– Let C be a 50x20 matrix
• Consider computing A(BC):
– # multiplications for (BC) = 10x50x20 = 10000, creating a 10x20
answer matrix
– # multiplications for A(BC) = 2x10x20 = 400
– Total multiplications = 10000 + 400 = 10400.
• Consider computing (AB)C:
– # multiplications for (AB) = 2x10x50 = 1000, creating a 2x50
answer matrix
– # multiplications for (AB)C = 2x50x20 = 2000,
– Total multiplications = 1000 + 2000 = 3000
* Dynamic Programming 17
Matrix-chain multiplication
• The second way is faster than the first!!!
• The multiplication sequence (parenthesization) is important.
• Different parenthesizations can require very different numbers of
multiplications for a product of multiple matrices.
• Thus, our goal is:
“Given a chain of matrices to multiply, determine the fewest
number of multiplications necessary to compute the product.”

* Dynamic Programming 18
Matrix-chain multiplication –MCM DP
• Denote <A1, A2, …,An> by < p0,p1,p2,…,pn>
– i.e, A1(p0,p1), A2(p1,p2), …, Ai(pi-1,pi),… An(pn-1,pn)
• Intuitive brute-force solution: Counting the number of
parenthesizations by exhaustively checking all possible
parenthesizations.
• Let P(n) denote the number of alternative parenthesizations
of a sequence of n matrices:

– P(n) = 1                                   if n = 1
  P(n) = Σ_{k=1}^{n-1} P(k)·P(n-k)           if n ≥ 2

• The solution to this recurrence is Ω(2^n).


• So brute-force will not work.

* Dynamic Programming 19
Quiz

Four matrices M1, M2, M3 and M4 of dimensions p×q, q×r, r×s and s×t respectively can
be multiplied in several ways with different numbers of total scalar multiplications. For
example, when multiplied as
➢ ((M1 × M2) × (M3 × M4)), the total number of multiplications is pqr + rst + prt.
➢ (((M1 × M2) × M3) × M4), the total number of multiplications is pqr + prs + pst.
If p = 10, q = 100, r = 20, s = 5 and t = 80, then the minimum number of scalar
multiplications needed is (a checking sketch follows the options):
• 248000
• 44000
• 19000
• 25000
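
One way to check the quiz is to evaluate the matrix-chain recurrence over the dimension vector p = (10, 100, 20, 5, 80). A minimal sketch (the helper name min_scalar_mults is ours) yields 19000, attained by (M1 × (M2 × M3)) × M4:

from functools import lru_cache

def min_scalar_mults(p):
    """Minimum scalar multiplications for matrices of dimensions p[0]xp[1], p[1]xp[2], ..."""
    @lru_cache(maxsize=None)
    def m(i, j):  # cost of multiplying A_i .. A_j (1-indexed)
        if i == j:
            return 0
        return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j] for k in range(i, j))
    return m(1, len(p) - 1)

print(min_scalar_mults((10, 100, 20, 5, 80)))  # 19000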

* CSE, BMSCE 20
MCP DP Steps
• Step 1: structure of an optimal parenthesization
– Let Ai..j (i≤j) denote the matrix resulting from Ai×Ai+1×…×Aj
– Any parenthesization of Ai×Ai+1×…×Aj must split the product
between Ak and Ak+1 for some k, (i≤k<j).
– The cost = cost of computing Ai..k + cost of computing Ak+1..j + cost of
the final product Ai..k × Ak+1..j.
– If k is the position for an optimal parenthesization, the
parenthesization of “prefix” subchain Ai×Ai+1×…×Ak within
this optimal parenthesization of Ai×Ai+1×…×Aj must be an
optimal parenthesization of Ai×Ai+1×…×Ak.
– Ai×Ai+1×…×Ak × Ak+1×…×Aj

* Dynamic Programming 21
MCP DP Steps

* Dynamic Programming 22-24
MCP DP Steps
• Step 2: a recursive relation
– Let m[i,j] be the minimum number of multiplications
for Ai×Ai+1×…×Aj

– m[i,j] = 0                                                   if i = j
  m[i,j] = min_{i≤k<j} { m[i,k] + m[k+1,j] + p_{i-1}·p_k·p_j } if i < j

* Dynamic Programming 25
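A bottom-up Python sketch of this recurrence (the function name matrix_chain_order is ours); it fills the cost table m and the split table s by increasing chain length:

import math

def matrix_chain_order(p):
    """p[i-1] x p[i] is the dimension of A_i; returns tables m (costs) and s (split points)."""
    n = len(p) - 1                       # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):       # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):        # try every split A_i..A_k | A_{k+1}..A_j
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

# The 2x10, 10x50, 50x20 example from earlier: optimal cost 3000, splitting as (AB)C.
m, s = matrix_chain_order([2, 10, 50, 20])
print(m[1][3], s[1][3])  # 3000 2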
MCP DP Steps

* Dynamic Programming 26-28
MCM DP Steps
• Step 3, Computing the optimal cost
– Recursive algorithm will encounter the same
subproblem many times.
– In tabling the answers for subproblems, each
subproblem is only solved once.
– The second hallmark of DP: overlapping subproblems
and solve every subproblem just once.

* Dynamic Programming 29
MCM DP Steps

* Dynamic Programming 30
MCM DP Steps

* Dynamic Programming 31
MCM DP Steps
• Step 3, Algorithm
– array m[1..n,1..n], where m[i,j] records the optimal
cost for Ai×Ai+1×…×Aj .
– array s[1..n,1..n], where s[i,j] records the index k that
achieved the optimal cost when computing m[i,j].
– Suppose the input to the algorithm is p=< p0 , p1
,…, pn >.

* Dynamic Programming 32
MCM DP Steps

* Dynamic Programming 33
MCM DP

* Dynamic Programming 34
MCM DP—order of matrix computations
m(1,1) m(1,2) m(1,3) m(1,4) m(1,5) m(1,6)
m(2,2) m(2,3) m(2,4) m(2,5) m(2,6)
m(3,3) m(3,4) m(3,5) m(3,6)
m(4,4) m(4,5) m(4,6)
m(5,5) m(5,6)
m(6,6)

* Dynamic Programming 35
* Dynamic Programming 36
Example

* Dynamic Programming 37-42
s table (split positions k) for a 4-matrix example; rows are i, columns are j:

        j=4  j=3  j=2  j=1
  i=1    3    1    1    0
  i=2    3    2    0
  i=3    3    0
  i=4    0

• s[1,4] = 3 splits the chain as (A1A2A3)(A4)
• s[1,3] = 1 then splits the prefix as ((A1)(A2A3))(A4)
MCM DP Steps
• Step 4, constructing a parenthesization
order for the optimal solution.
– Once s[1..n,1..n] has been computed, s[i,j] gives the
split position for AiAi+1…Aj , i.e. the product is split
into Ai…As[i,j] and As[i,j]+1…Aj . The parenthesization
can therefore be recovered from s recursively,
starting from s[1,n].
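
A short sketch of that reconstruction (it assumes an s table indexed as produced by a matrix-chain DP such as the one sketched earlier; names are ours):

def print_optimal_parens(s, i, j):
    """Return the optimal parenthesization of A_i..A_j as a string, using split table s."""
    if i == j:
        return "A" + str(i)
    k = s[i][j]
    return "(" + print_optimal_parens(s, i, k) + print_optimal_parens(s, k + 1, j) + ")"

# Example: with the s table from the 2x10, 10x50, 50x20 chain, s[1][3] = 2 gives ((A1A2)A3).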

* Dynamic Programming 44
MCM DP Steps
• Step 4, algorithm

* Dynamic Programming 45
MCM DP Steps

* Dynamic Programming 46
Running Time

* Dynamic Programming 47
Running Time
• Running time = #overall subproblems × #choices per subproblem.
– In matrix-chain multiplication: O(n²) × O(n) = O(n³)

* Dynamic Programming 48
Example
• Show how to multiply this matrix chain optimally:

  Matrix   Dimension
  A1       30×35
  A2       35×15
  A3       15×5
  A4       5×10
  A5       10×20
  A6       20×25
11-49
MCM DP Example

* Dynamic Programming 50
Example
• Show how to multiply this matrix chain optimally:

  Matrix   Dimension
  A1       30×35
  A2       35×15
  A3       15×5
  A4       5×10
  A5       10×20
  A6       20×25

• Optimal parenthesization: ((A1(A2A3))((A4A5)A6)), with 15125 scalar multiplications.
11-51
Acknowledgments
• Slides adapted from:
http://www.cs.ucf.edu/~dmarino/ucf/cop3503/lectures/
• https://home.cse.ust.hk/~dekai/271/notes/L12/L12.pdf
• https://www.youtube.com/watch?v=_WncuhSJZyA
• Cormen H. T., Leiserson C. E., Rivest R. L. and Stein C., “ Introduction to Algorithms”,
Chapter 29, Second Ed., PHI, India, 2006

* Dynamic Programming 52
Longest Common Subsequence (LCS)
• DNA analysis/DNA similarity testing: comparison of two DNA strings.
• DNA string: a sequence of symbols A, C, G, T.
– S=ACCGGTCGAGCTTCGAAT
• Subsequence (of X): X with some symbols left out.
– Z=CGTC is a subsequence of X=ACGCTAC.
• Common subsequence Z (of X and Y): a subsequence of X that is also a
subsequence of Y.
– Z=CGA is a common subsequence of both X=ACGCTAC and Y=CTGACA.
• Longest Common Subsequence (LCS): the longest of the common
subsequences.
– Z' =CGCA is an LCS of the above X and Y.
• LCS problem: given X=<x1, x2,…, xm> and Y=<y1, y2,…, yn>, find their
LCS.

* Dynamic Programming 53
LCS Intuitive Solution –brute force
• List all possible subsequences of X, check
whether they are also subsequences of Y,
keep the longer one each time.
• Each subsequence corresponds to a subset
of the indices {1,2,…,m}, there are 2m. So
exponential.

* Dynamic Programming 54
LCS DP –Step 1: Optimal Substructure
• Characterize optimal substructure of LCS.
• Theorem : Let X=<x1, x2,…, xm> (= Xm) and
Y=<y1, y2,…,yn> (= Yn)
and Z=<z1, z2,…, zk> (= Zk) be any LCS of X and Y,

1. if xm = yn, then zk = xm = yn, and Zk-1 is an LCS of Xm-1 and Yn-1.

2. if xm ≠ yn and zk ≠ xm, then Z is an LCS of Xm-1 and Yn.

3. if xm ≠ yn and zk ≠ yn, then Z is an LCS of Xm and Yn-1.


* Dynamic Programming 55
LCS DP –Step 2:Recursive Solution
• What the theorem says:
– If xm= yn, find LCS of Xm-1 and Yn-1, then append xm.
– If xm ≠ yn, find LCS of Xm-1 and Yn and LCS of Xm and Yn-1,
take which one is longer.
• Overlapping substructure:
– Both LCS of Xm-1 and Yn and LCS of Xm and Yn-1 will need to
solve LCS of Xm-1 and Yn-1.

• c[i, j] is the length of LCS of Xi and Yj .


c[i, j] = 0                            if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1              if i, j > 0 and xi = yj
c[i, j] = max{ c[i-1, j], c[i, j-1] }  if i, j > 0 and xi ≠ yj

* Dynamic Programming 56
LCS DP-- Step 3:Computing the Length of
LCS
• c[0..m,0..n], where c[i,j] is defined as
above.
– c[m,n] is the answer (length of LCS).
• b[1..m,1..n], where b[i,j] points to the table
entry corresponding to the optimal
subproblem solution chosen when
computing c[i,j].
– From b[m,n] backward to find the LCS.

* Dynamic Programming 58
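A bottom-up Python sketch of this step (the function name lcs_length is ours), filling the c table from the recurrence above; it also records the b "arrows" used later in step 4:

def lcs_length(X, Y):
    """Fill c[i][j] = length of an LCS of X[:i] and Y[:j]; b records which case produced it."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "diag"          # x_i = y_j: extend the LCS
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "up"            # drop x_i
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "left"          # drop y_j
    return c, b

c, b = lcs_length("ACGCTAC", "CTGACA")
print(c[7][6])  # 4  (e.g. CGCA)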
Recursive Algorithm for LCS

* Dynamic Programming 59
Recursive Tree
The recursive solution contains a small number of distinct
subproblems repeated many times.

Solution – memoization: an optimization technique used primarily
to speed up programs by storing the results of expensive function
calls and returning the cached result when the same inputs occur
again. The time taken is Θ(mn).

* Dynamic Programming 60
LCS computation example

c[i, j] = 0                            if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1              if i, j > 0 and xi = yj
c[i, j] = max{ c[i-1, j], c[i, j-1] }  if i, j > 0 and xi ≠ yj
61
LCS computation example

62
LCS

* Dynamic Programming 63
LCS DP Algorithm

* Dynamic Programming 64
LCS DP –step 4: Constructing LCS
We reconstruct the path by calling Print-LCS(b, X, n, m) and
following the arrows, printing out characters of X that
correspond to the diagonal arrows (a Θ(n + m) traversal from
the lower right of the matrix to the origin):

* Dynamic Programming 65
Solve
• X=<BACDB> Y=<BDCB>

* Dynamic Programming 66
Solution

* Dynamic Programming 67
Solution

* Dynamic Programming 68
LCS computation example

69
Exercise
Determine LCS in (1,0,0,1,0,1,0,1) and
(0,1,0,1,1,0,1,1,0)

* Dynamic Programming 70
Acknowledgments
• Slides adapted from:
http://www.facweb.iitkgp.ac.in/~sourav/Lecture-12.pdf
• https://www.ics.uci.edu/~goodrich/teach/cs260P/notes/LC
S.pdf

* Dynamic Programming 71
Finding Shortest Path in Multistage Graph using
Dynamic Programming

* CSE, BMSCE 72
Multistage Graph
• Goal: find the shortest path between the source vertex s and the destination
vertex t.
• A multistage graph is a directed graph whose vertices are divided into stages V1, V2, ….
• Vertices of one stage are connected only to vertices of the next stage (there are no edges
between vertices of the same stage, and no edges from a vertex back to a previous stage).
• The first and the last stage each contain a single vertex (s and t respectively).

* CSE, BMSCE 73
Applying Greedy approach to solve

• Greedy Choice 1:
• Edge:(1,5) (5,8) (8,10) (10,12)
• Cost: 2 + 8 + 5 + 2
=17
• Choice 2:
• Edge:(1,2) (2,7) (7,10) (10,12)
• Cost: 9 + 2 + 3 + 2
=16
• Greedy choice fails

* CSE, BMSCE 74
Applying Brute force to solve
Brute Force: Enumerate all possible paths
And find the minimum cost path.

* CSE, BMSCE 75
Solving using Dynamic Programming
- Forward approach
- Backward approach

* CSE, BMSCE 76
Solving using Dynamic Programming

* CSE, BMSCE 77
Solving using Dynamic Programming

* CSE, BMSCE 78
Solving using Dynamic Programming: Forward approach

* CSE, BMSCE 79
Solving using Dynamic Programming: Forward approach

* CSE, BMSCE 80
Solving using Dynamic Programming: Forward approach

Vertex                 1    2   3   4   5   6   7   8   9  10  11  12
Cost                  16    7   9  18  15   7   5   7   4   2   5   0
d (next vertex on
 the path to t)      2/3    7   6   8   8  10  10  10  12  12  12  12

* CSE, BMSCE 82
Multistage Graph pseudo code : forward approach

* CSE, BMSCE 83
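The 12-vertex graph used on these slides comes from a figure that is not reproduced here, so the sketch below uses a small hypothetical 4-stage graph (the edge costs are made up) just to show the forward approach: process vertices from the destination back towards the source, keeping cost(j) and the decision d(j):

def multistage_forward(n, edges):
    """Forward approach: cost[j] = min over edges (j, l) of c(j, l) + cost[l].
    Vertices are numbered 1..n so every edge goes from a lower to a higher number;
    vertex 1 is the source s and vertex n is the destination t."""
    INF = float("inf")
    cost = [INF] * (n + 1)
    d = [0] * (n + 1)
    cost[n] = 0
    for j in range(n - 1, 0, -1):            # from the destination back to the source
        for l, c in edges.get(j, []):
            if c + cost[l] < cost[j]:
                cost[j], d[j] = c + cost[l], l
    # Recover the path by following the decisions d.
    path, v = [1], 1
    while v != n:
        v = d[v]
        path.append(v)
    return cost[1], path

# Hypothetical 4-stage graph: 1 -> {2,3} -> {4,5} -> 6, edges as (target, cost) pairs.
edges = {1: [(2, 2), (3, 4)], 2: [(4, 5), (5, 1)], 3: [(4, 2), (5, 7)],
         4: [(6, 3)], 5: [(6, 6)]}
print(multistage_forward(6, edges))  # (9, [1, 2, 5, 6])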
Solving using Dynamic Programming: Backward approach
• bcost(i,j) : minimum cost of a path from the source vertex s to
vertex j in stage Vi.
• bcost(i,j) = min{ bcost(i-1, k) + c(k,j) }, taken over k ∈ Vi-1
• bcost(2,2) = 9
• bcost(2,3) = 7
• bcost(2,4) = 3
• bcost(2,5) = 2
• bcost(3,6) = min{ bcost(2,2)+c(2,6),
                    bcost(2,3)+c(3,6) }
             = min{ 9+4, 7+2 } = 9
• bcost(3,7) = min{ bcost(2,2)+c(2,7),
                    bcost(2,3)+c(3,7),
                    bcost(2,5)+c(5,7) }
             = min{ 9+2, 7+7, 2+11 } = 11

* CSE, BMSCE 84
Solving using Dynamic Programming: Backward approach

* CSE, BMSCE 85
Multistage Graph pseudo code: backward approach

* CSE, BMSCE 86
Solve
Find minimum cost path from s to t in the multistage graph
given below using:
a.Forward approach
b.Backward approach

* CSE, BMSCE 87
Longest Increasing Subsequence
(LIS)

* Dynamic Programming 88
Longest Increasing Subsequence (LIS)
• Given a sequence A of size N, find the length of the
longest increasing subsequence of the given sequence.
A longest increasing subsequence is a subsequence of the
given sequence whose elements are in sorted order, lowest
to highest, and which is as long as possible. This
subsequence is not necessarily contiguous, or unique.
• Note: duplicate (equal) numbers do not count as increasing;
the subsequence must be strictly increasing.

* Dynamic Programming 89
Longest Increasing Subsequence (LIS)

• The Longest Increasing Subsequence (LIS)
problem is to find the length of the longest
subsequence of a given sequence such that
all elements of the subsequence are sorted
in increasing order.
• For example, the length of the LIS for
{10, 22, 9, 33, 21, 50, 41, 60, 80} is 6 and an
LIS is {10, 22, 33, 50, 60, 80}
or {10, 22, 33, 41, 60, 80}.
* Dynamic Programming 90
Longest Increasing Subsequence (LIS)

* Dynamic Programming 91
Longest Increasing Subsequence (LIS)

• Method 1: Recursion.
• Optimal Substructure: Let arr[0..n-1] be
the input array and L(i) be the length of the
LIS ending at index i such that arr[i] is the
last element of the LIS.

* Dynamic Programming 92
Longest Increasing Subsequence (LIS)
• Then, L(i) can be written recursively as:
• L(i) = 1 + max( L(j) ) where 0 ≤ j < i and arr[j] < arr[i];
or L(i) = 1, if no such j exists.
• To find the LIS of the whole array, return max( L(i) ) over 0 ≤ i < n.
• Formally, the length of the longest increasing subsequence ending at index i,
will be 1 greater than the maximum of lengths of all longest increasing
subsequences ending at indices before i, where arr[j] < arr[i] (j < i). Thus, we
see the LIS problem satisfies the optimal substructure property as the main
problem can be solved using solutions to subproblems. The recursive tree is
given below:

* Dynamic Programming 93
Longest Increasing Subsequence (LIS)

Complexity Analysis:
• Time Complexity: The time complexity of
this recursive approach is exponential as
there is a case of overlapping subproblems
as explained in the recursive tree diagram.

* Dynamic Programming 94
Longest Increasing Subsequence (LIS)
• Method 2: Dynamic Programming.
• We can see that there are many subproblems
in the above recursive solution which are
solved again and again. So this problem has
Overlapping Substructure property and
recomputation of same subproblems can be
avoided by either using Memoization or
Tabulation.

* Dynamic Programming 95
Longest Increasing Subsequence (LIS)
• Input : arr[] = {3, 10, 2, 11}, LIS[] = {1, 1, 1, 1} (initially). Iteration-wise
simulation (using 1-based indexing):
• arr[2] > arr[1] {LIS[2] = max(LIS [2], LIS[1]+1)=2}
• arr[3] < arr[1] {No change}
• arr[3] < arr[2] {No change}
• arr[4] > arr[1] {LIS[4] = max(LIS [4], LIS[1]+1)
=max(1,1+1)=2}
• arr[4] > arr[2] {LIS[4] = max(LIS [4], LIS[2]+1)
=max(1,2+1)=3}
• arr[4] > arr[3] {LIS[4] = max(LIS [4], LIS[3]+1)
=max(1,1+1)=2}
• We can avoid recomputation of subproblems by using tabulation as shown
next:

* Dynamic Programming 96
Longest Increasing Subsequence (LIS)
• Input : arr[] = {3, 10, 2, 11}
• We can avoid recomputation of subproblems by using tabulation as shown
next:

arr[ ] 3 10 2 11

LIS[ ] 1 2 1 3

* Dynamic Programming 97
Dynamic Programming implementation
/* lis() returns the length of the longest
   increasing subsequence in arr[] of size n.
   (Uses std::max_element from <algorithm>.) */
int lis(int arr[], int n)
{
    int lis[n];

    /* Compute optimized LIS values in bottom-up manner */
    for (int i = 0; i < n; i++)
        lis[i] = 1;

    for (int i = 1; i < n; i++)
        for (int j = 0; j < i; j++)
            if (arr[i] > arr[j] && lis[i] < lis[j] + 1)
                lis[i] = lis[j] + 1;

    // Return maximum value in lis[]
    return *max_element(lis, lis + n);
}
* Dynamic Programming 98
Longest Increasing Subsequence (LIS)
• # Dynamic programming Python implementation of LIS problem
• # lis returns length of the longest increasing subsequence in arr of size n
• def lis(arr):
• n = len(arr)
• # Declare the list (array)for LIS and initialize LIS values for all indexes
• lis = [1]*n
• # Compute optimized LIS values in bottom up manner
• for i in range (1 , n):
• for j in range(0 , i):
• if arr[i] > arr[j] and lis[i]< lis[j] + 1 :
• lis[i] = lis[j]+1

* Dynamic Programming 99
Longest Increasing Subsequence (LIS)
    # Initialize maximum to 0 to get
    # the maximum of all LIS values
    maximum = 0
    # Pick maximum of all LIS values
    for i in range(n):
        maximum = max(maximum, lis[i])
    return maximum
# end of lis function

# Driver program to test the above function
arr = [10, 22, 9, 33, 21, 50, 41, 60]
print("Length of lis is", lis(arr))

Output: Length of lis is 5
Time Complexity: O(n²).

* Dynamic Programming 100


Longest Increasing Subsequence (LIS)

Arr[ ]  10  22   9  33  21  50  41  60

LIS[ ]   1

* Dynamic Programming 101


Longest Increasing Subsequence (LIS)

Arr[ ] 10 22 9 33 21 50 41 60

LIS 1 2 1 3 2 4 4 5

* Dynamic Programming 102


Acknowledgments
• https://www.geeksforgeeks.org/longest-
increasing-subsequence-dp-3/

* Dynamic Programming 103


Dynamic programming

Rod cutting
Rod Cutting
Problem: Given a rod of length n and a table of prices Pi, cut it into pieces to
sell so as to maximize the total revenue rn.

Example: a rod of length 4 inches (prices from the table below)

• Possible ways to cut it into pieces:
• P1 + P1 + P1 + P1 = 1+1+1+1 = 4
• P2 + P2 = 5+5 = 10
• P1 + P1 + P2 = 1+1+5 = 7 (in any order)
• P1 + P3 = 1+8 = 9 (in any order)
• P4 = 9
• Revenue of the optimal cut: P2 + P2 = 5+5 = 10

Length i   1   2   3   4   5   6   7   8   9  10
Price Pi   1   5   8   9  10  17  17  20  24  30
Rod cutting
In general, a rod of length n can be cut in 2^(n-1)
different ways, since we can choose to cut, or not
to cut, at each distance i (1 ≤ i ≤ n − 1)
from the left end.
rn = max( pn, r1 + rn-1, r2 + rn-2, … , rn-1 + r1 )
Equivalently, the recursive formulation is
rn = max_{1≤i≤n} ( pi + rn−i )
Rod cutting -4 inch rod
example
Recursive top down implementation
DP for rod cutting
• After solving smaller instances of the problem,
store their values; they can be used to solve the
bigger instances.
• At the cost of extra memory, this speeds up execution.
• Two approaches: top down or bottom up.
• Both have the same time complexity, O(n²).
C(i) = max_{1≤k≤i} { Vk + C(i-k) },  where Vk is the price of a piece of length k
Length i    1   2   3   4   5   6   7   8
Price Pi    1   5   8   9  10  17  17  20
Optimal     1   5   8  10  13  17  18  22

The optimal way to cut the rod of size 8 is to make a cut
of size two and a cut of size six.
Rod Cutting
Top down approach
MEMOIZED-CUT-ROD(p,n)
Let r[0…n] be a new array
For i=0 to n
r[i]=-∞
Return MEMOIZED-CUT-ROD-AUX(p,n,r)
Top down approach(contd.)
MEMOIZED-CUT-ROD-AUX(p,n,r)
if r[n]>=0 // Checks to see if the desired value is already computed/known
return r[n]
if n==0
q=0
else
q=-∞
for i=1 to n
q=max(q, p[i]+MEMOIZED-CUT-ROD-AUX(p,n-i,r))
r[n]=q
return q
• The top-down approach has the advantages that it is easy to write
given the recursive structure of the problem, and only those
subproblems that are actually needed will be computed.
• It has the disadvantage of the overhead of recursion.
Bottom up approach
BOTTOM-UP-CUT-ROD(p,n)
Let r[0…n] be a new array
r[0]=0
for j=1 to n
q=-∞
for i=1 to j
q=max(q, p[i]+r[j-i])
r[j]=q
return r[n]
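
A direct Python transcription of BOTTOM-UP-CUT-ROD (names ours; prices are passed as a 1-indexed list with p[0] unused), checked against the table above where the optimal revenue for length 8 is 22:

def bottom_up_cut_rod(p, n):
    """p[i] is the price of a piece of length i (p[0] unused); returns max revenue r[n]."""
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            q = max(q, p[i] + r[j - i])
        r[j] = q
    return r[n]

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20]   # lengths 1..8 from the table above
print(bottom_up_cut_rod(prices, 8))        # 22 (cut into pieces of length 2 and 6)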
Extended bottom-up approach
• EXTENDED-BOTTOM-UP-CUT-ROD(p, n)
1. let r[0..n] and s[0..n] be new arrays
2. r[0] = 0
3. for j = 1 to n
4. q = -INF
5. for i = 1 to j
6. if q < p[i] + r[j-i]
7. q = p[i] + r[j-i]
8. s[j] = i
9. r[j] = q
10. // Print optimal cuts :We then trace the choices made back through the table s with
this procedure
11. i = n
12. while i > 0
13. print s[i]
14. i = i - s[i]
15. return r and s
• The bottom-up approach requires extra thought to ensure we arrange to
solve the subproblems before they are needed.
• (Here, the array reference r[j - i] ensures that we only reference subproblems
smaller than j, the one we are currently working on.)
Exercise
I 0 1 2 3 4 5 6 7 8 9 10

r[i] 0 1 5 8 10 13 17 18 22 25 30

s[i]
Exercise
I 0 1 2 3 4 5 6 7 8 9 10

r[i] 0 1 5 8 10 13 17 18 22 25 30

s[i] 0 1 2 3 2 2 6 1 2 3 10
Acknowledgements
• https://www.youtube.com/watch?v=ElFrskb
y_7M
Dynamic programming
Egg Dropping Puzzle
Egg Dropping Puzzle
• Input
n eggs, a building with k floors
• Output
Find the minimum number of attempts needed, in the worst
case, to determine the floor from which the egg will break,
OR
equivalently, to find the threshold/critical/pivot floor.
Assumptions
• Suppose that we wish to know which stories in a 20-story
building are safe to drop eggs from, and which will cause the
eggs to break on landing.
• We make a few assumptions:

⮚ An egg that survives a fall can be used again.
⮚ A broken egg must be discarded.
⮚ The effect of a fall is the same for all eggs.
⮚ If an egg breaks when dropped, then it would break if
dropped from a higher floor.
⮚ If an egg survives a fall, then it would survive a shorter fall.
Egg Dropping Puzzle
• Linear checking
• Binary search approach
• Binary search and linear combined
approach
• Block approach
• DP approach
• This problem has many applications in the real world such as avoiding a call
out to the slow HDD, or attempting to minimize cache misses, or running a
large number of expensive queries on a database.
Egg Dropping Puzzle-recursion
• N eggs, k floors
• Recursion: try dropping an egg from each floor from 1 to k and calculate the
minimum number of dropping needed in worst case.
• Base cases –
– Eggs = 1, floors = x: play safe and drop from floor 1; if the egg does not break,
then drop from floor 2, and so on. So in the worst case the egg needs to be
dropped x times to find the answer.
– Floors = 0: no trials are required.
– Floors = 1: 1 trial is required.
– Eggs = 0: no trials.
– Eggs = 1: x trials.
• For the remaining cases, if an egg is dropped from the xth floor there are only 2
possible outcomes: either the egg breaks OR the egg does not break.
– If the egg breaks – check the floors lower than x. The problem is reduced to n-1
eggs and x-1 floors.
– If the egg does not break – check the floors higher than x with all n eggs
still available. The problem is reduced to n eggs and k-x floors.

eggDrop(n,k) = 1+min{max (eggDrop(n-1,x-1) , eggDrop(n,k-x) ) , x in 1:k}


Egg Dropping Puzzle-recursion
eggDrop(n,k) = 1+min{max (eggDrop(n-1,x-1) , eggDrop(n,k-x) ) ,
x in 1:k}

(2,2) = 1+ min{max(ed(1,0) , ed(2,1)) ,max(ed(1,1), ed(2,0))}


Eggs \ Floors   0   1   2   3   4   5   6
0               0   0   0   0   0   0   0
1               0   1   2   3   4   5   6
2               0   1
Egg Dropping Puzzle-recursion
eggDrop(n,k) = 1+min{max( eggDrop(n-1,x-1) , eggDrop(n,k-x) ) ,
x in 1:k}
Eggs \ Floors   0   1   2   3   4   5   6
1               0   1   2   3   4   5   6
2               0   1   2   2   3   3   3
3               0   1   2   2   3   3   3
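
A memoized Python sketch of this recurrence (names ours), which reproduces the 2-egg and 3-egg rows of the table above:

from functools import lru_cache

@lru_cache(maxsize=None)
def egg_drop(n, k):
    """Worst-case number of trials needed with n eggs and k floors."""
    if k == 0 or k == 1:   # 0 floors: 0 trials; 1 floor: 1 trial
        return k
    if n == 1:             # one egg: must try every floor from the bottom up
        return k
    # Drop from floor x: egg breaks -> (n-1, x-1); egg survives -> (n, k-x).
    return 1 + min(max(egg_drop(n - 1, x - 1), egg_drop(n, k - x)) for x in range(1, k + 1))

print([egg_drop(2, k) for k in range(7)])  # [0, 1, 2, 2, 3, 3, 3]
print([egg_drop(3, k) for k in range(7)])  # [0, 1, 2, 2, 3, 3, 3]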
Egg Dropping Puzzle-Recursion
// Needs <limits.h> for INT_MAX.
int max(int a, int b)
{ return (a > b) ? a : b; }

int eggDrop(int n, int k)
{
    // Base cases: 0 or 1 floors, or only one egg left.
    if (k == 1 || k == 0)
        return k;
    if (n == 1)
        return k;

    int min = INT_MAX, x, res;

    // Try dropping from every floor x and keep the best worst case.
    for (x = 1; x <= k; x++)
    {
        res = max(eggDrop(n - 1, x - 1), eggDrop(n, k - x));
        if (res < min)
            min = res;
    }
    return min + 1;
}
Egg Dropping Puzzle-DP
int max(int a, int b)
{ return (a > b) ? a : b; }

int eggDrop(int n, int k)
{
    int eggFloor[n + 1][k + 1];
    int res, i, j, x;

    for (i = 1; i <= n; i++)
    {
        eggFloor[i][1] = 1;
        eggFloor[i][0] = 0;
    }
    for (j = 1; j <= k; j++)
        eggFloor[1][j] = j;

    for (i = 2; i <= n; i++)
        for (j = 2; j <= k; j++)
        {
            eggFloor[i][j] = INT_MAX;
            for (x = 1; x <= j; x++)
            {
                res = 1 + max(eggFloor[i - 1][x - 1],
                              eggFloor[i][j - x]);
                if (res < eggFloor[i][j])
                    eggFloor[i][j] = res;
            }
        }

    // eggFloor[n][k] holds the result
    return eggFloor[n][k];
}
References
• https://medium.com/@parv51199/egg-drop-problem-
using-dynamic-programming-e22f67a1a7c3
• https://www.geeksforgeeks.org/egg-dropping-puzzle-dp-
11/
Edit Distance
• The Edit Distance (or Levenshtein
distance) is a metric for measuring the
amount of difference between two strings.
– The Edit Distance is defined as the minimum
number of edits needed to transform one string
into the other.

• It has many applications, such as spell
checkers, natural language translation, and
bioinformatics.
Edit Distance
• The problem of finding an edit distance between
two strings is as follows (i.e. the minimum
distance to convert one string to another string):
– Given an initial string s, and a target string t, what
is the minimum number of changes that have to be
applied to s to turn it into t ?

• The valid edit operations are:
1) Inserting a character
2) Deleting a character
3) Changing a character to another character (replace).
Example
Suppose we have two strings s,t
e.g. s = kitten
t = sitting
and we want to transform s into t.
We use edit operations:
1. insertions
2. deletions
3. substitutions (replace)
Example
• Input: str1 = “bmsse", str2 = “bmscse"
Output: 1 We can convert str1 into str2 by inserting a ‘c'.
• Input: str1 = "cat", str2 = "cut"
Output: 1 We can convert str1 into str2 by replacing 'a' with
'u'.
• Input: str1 = "sunday", str2 = "saturday"
Output: 3 Last three and first characters are same.
We basically need to convert "un" to "atur".
This can be done using below three operations.
Replace 'n' with 'r', insert t, insert a
• What about:
s = darladidirladada
t = marmelladara
Tough…
Solution
• The idea is to process the characters one by
one, starting from either the left or the right
end of both strings.
• Let us traverse from the right end; for every pair of
characters being compared there are two possibilities.
m: length of str1 (first string)
n: length of str2 (second string)
Solution
• If last characters of two strings are same, nothing
much to do. Ignore last characters and get count
for remaining strings. So we recur for lengths m-1
and n-1.
• Else (If last characters are not same), we consider
all operations on ‘str1’, consider all three
operations on last character of first string,
recursively compute minimum cost for all three
operations and take minimum of three values.
– Insert: Recur for m and n-1
– Remove: Recur for m-1 and n
– Replace: Recur for m-1 and n-1
Edit Distance
• Now, how do we use this to create a DP
solution?
– We simply need to store the answers to all the
possible recursive calls.
– In particular, all the possible recursive calls we
are interested in are determining the edit
distance between prefixes of s and t.
D(i,j) = cost of the best alignment of s1..si with t1..tj

D(i,j) = min {
    D(i-1,j-1)      if si = tj   // copy (no cost)
    D(i-1,j-1) + 1  if si ≠ tj   // substitute
    D(i-1,j) + 1                 // delete si
    D(i,j-1) + 1                 // insert tj
}
Algorithm
int EditDistance(char s[1..m], char t[1..n])
    int d[0..m, 0..n]

    for i from 0 to m d[i, 0] := i      // delete all of s[1..i]
    for j from 0 to n d[0, j] := j      // insert all of t[1..j]

    for i from 1 to m
        for j from 1 to n
            if s[i] = t[j] then
                d[i, j] := d[i-1, j-1]
            else
                d[i, j] := minimum(
                    d[i-1, j] + 1,      // deletion
                    d[i, j-1] + 1,      // insertion
                    d[i-1, j-1] + 1 )   // substitution
    return d[m, n]
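
The same algorithm in Python (the function name edit_distance is ours), checked on the examples from the earlier slides:

def edit_distance(s, t):
    """Minimum number of insertions, deletions and substitutions to turn s into t."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                     # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                d[i][j] = d[i - 1][j - 1]
            else:
                d[i][j] = 1 + min(d[i - 1][j],      # delete s[i]
                                  d[i][j - 1],      # insert t[j]
                                  d[i - 1][j - 1])  # substitute
    return d[m][n]

print(edit_distance("sunday", "saturday"))  # 3
print(edit_distance("kitten", "sitting"))   # 3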
Edit Distance
• Consider the following example with s=“keep" and
t=“hello".
– To deal with empty strings, an extra row and column
have been added to the chart below:

• An entry in this table simply holds the edit distance
between two prefixes of the two strings.
– For example, the highlighted square indicates that the
edit distance between the strings "he" and "keep" is 3.
Edit Distance
• In order to fill in all the values in this table we
will do the following:
1) Initialize values corresponding to the base case in
the recursive solution.
         Null   h   e   l   l   o
Null      0     1   2   3   4   5
k         1
e         2
e         3
p         4
Table entries
If s[i] != t[j], the new entry is 1 + the minimum of its three neighbours:

    d[i-1,j-1] (Replace)   d[i-1,j] (Remove)
    d[i,j-1]  (Insert)     d[i,j] = min(Insert, Remove, Replace) + 1

If s[i] = t[j], copy the diagonal value d[i-1,j-1].
Edit Distance
• In order to fill in all the values in this table we will do the following:
2) Loop through the table from the top left to the bottom right, following the
recursive solution.
• If the characters you are looking at match, just bring down the
up-and-left (diagonal) value.
• Else (the characters don't match), take min( 1 + above, 1 + left, 1 + diagonal ).

Filling in the first rows for s = "keep", t = "hello":

            h   e   l   l   o
        0   1   2   3   4   5
    k   1   1   2   3   4   5
    e   2   2   1
    e   3
    p   4
Exercise
• str1= “adceg” str2= “abcfg”

Null a b c f g
Null 0 1 2 3 4 5
a 1
d 2
c 3
e 4
g 5
Exercise
• str1= “adceg” str2= “abcfg”

Null a b c f g
Null 0 1 2 3 4 5
a 1 0 1 2 3 4
d 2 1 1 2 3 4
c 3 2 2 1 2 3
e 4 3 3 2 2 3
g 5 4 4 3 3 2

• In total, two operations are required (replace 'd' with 'b' and 'e' with 'f').
Reference

• https://www.geeksforgeeks.org/edit-
distance-dp-5/
