Dynamic Programming and Greedy Approaches
• Sum of Subsets
• Longest Increasing Sequence
• Maximum Contiguous Sum
• Traveling Salesman Problem
Copyright Jose Ortiz
What is Dynamic Programming?
• For instance, when we optimize the Fibonacci problem, in its recursive form, from time complexity O(2^n) to O(n), we are applying dynamic programming with memoization: the results of subproblems are stored and reused as needed.
C(n, k) = n! / (k!(n − k)!)

For example: C(4, 2) = 6 and C(6, 3) = 20
C(n, k) = n! / (k!(n − k)!)
Base cases: C(n, n) = 1 and C(n, 0) = 1

n\k | 0  1  2
 0  | 1  0  0
 1  | 1  1  0
 2  | 1  ·  1
 3  | 1  ·  ·
Filling k = 1: C(2, 1) = 2, C(3, 1) = 3, C(4, 1) = 4

n\k | 0  1  2
 0  | 1  0  0
 1  | 1  1  0
 2  | 1  2  1
 3  | 1  3  ·
 4  | 1  4  ·
Filling k = 2: C(3, 2) = 3 and C(4, 2) = 6

n\k | 0  1  2
 0  | 1  0  0
 1  | 1  1  0
 2  | 1  2  1
 3  | 1  3  3
 4  | 1  4  6
1 1 1 0
When we check for all the
results using the above 2 1 2 1
method, we can devise a
pattern that repeats in all 3 1 3 3
the results shown in the
table. 4 1 4 6
B(n, n) = 1
B(n, 0) = 1
B(n, k) = 0 when k > n
Then,
B(n, k) = B(n − 1, k − 1) + B(n − 1, k)
Check: C(5, 2) = 5! / (2!(3)!) = 120 / 12 = 10, which matches extending the table one more row:

 5  | 1  5  10
T(n) = T(n − 1, k − 1) + T(n − 1, k) + O(1) = O(2^n)
This time complexity is bad. It is the same problem as the recursive Fibonacci approach, since most of the steps done by the algorithm are repeated.
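The repeated work disappears if we tabulate the recurrence bottom-up, exactly as the table on the previous slides was filled, reducing O(2^n) to O(nk). A minimal Python sketch (the function name is illustrative):

```python
def binomial(n, k):
    """Compute C(n, k) bottom-up with B(n,k) = B(n-1,k-1) + B(n-1,k)."""
    if k < 0 or k > n:
        return 0          # base case B(n, k) = 0 when k > n
    table = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                table[i][j] = 1          # B(n, 0) = B(n, n) = 1
            else:
                table[i][j] = table[i-1][j-1] + table[i-1][j]
    return table[n][k]
```

On the examples from the slides, `binomial(4, 2)` returns 6, `binomial(6, 3)` returns 20, and `binomial(5, 2)` returns 10.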
(a + b)^n = Σ_{k=0}^{n} C(n, k) a^{n−k} b^k
• For example:
S = {2,4,5,8,10}; w = 11 so 2 + 4 + 5 = 11
A brute-force solution for this problem would check every possible subset of S in O(2^n).
• How?
1. Assume S = {2, 4, 5, 8, 10}; w = 11

Values i/j | 0 1 2 3 4 5 6 7 8 9 10 11
  2     0  | T F F F F F F F F F F  F
  4     1  | T F F F F F F F F F F  F
  5     2  | T F F F F F F F F F F  F
  8     3  | T F F F F F F F F F F  F
  10    4  | T F F F F F F F F F F  F

We set all the rows M[i][0] = T because the empty subset of S makes a sum of 0.
Sum of Subsets (Dynamic Programming)
• S = {2,4,5,8,10}; w = 11

Values i/j | 0 1 2 3 4 5 6 7 8 9 10 11
  2     0  | T F T F F F F F F F F  F
  4     1  | T F F F F F F F F F F  F
  5     2  | T F F F F F F F F F F  F
  8     3  | T F F F F F F F F F F  F
  10    4  | T F F F F F F F F F F  F

Can any subset of {2} make a sum w = 1? No! Then we set M[0][1] = F.
Can any subset of {2} make a sum w = 2? Yes! Then we set M[0][2] = T.
Can any subset of {2} make a sum when w > 2? No! Then M[0][3,...,11] = F.
Sum of Subsets (Dynamic Programming)
• S = {2,4,5,8,10}; w = 11

Values i/j | 0 1 2 3 4 5 6 7 8 9 10 11
  2     0  | T F T F F F F F F F F  F
  4     1  | T F T F T F T F F F F  F
  5     2  | T F F F F F F F F F F  F
  8     3  | T F F F F F F F F F F  F
  10    4  | T F F F F F F F F F F  F

When i ≥ 1, while j < S[i] we set M[i][j] = M[i − 1][j] (green color)
Otherwise, M[i][j] = M[i − 1][j] || M[i − 1][j − S[i]] (red color)

Values i/j | 0 1 2 3 4 5 6 7 8 9 10 11
  2     0  | T F T F F F F F F F F  F
  4     1  | T F T F T F T F F F F  F
  5     2  | T F T F T T T T F T F  T
  8     3  | T F T F T T T T T T T  T
  10    4  | T F T F T T T T T T T  T

This is the final matrix after all the computations are completed.
But, how do we find the resulting subset s where w = Σ s_i?
See next slide….
Sum of Subsets (Dynamic Programming)
• S = {2,4,5,8,10}; w = 11

Values i/j | 0 1 2 3 4 5 6 7 8 9 10 11
  2     0  | T F T F F F F F F F F  F
  4     1  | T F T F T F T F F F F  F
  5     2  | T F T F T T T T F T F  T
  8     3  | T F T F T T T T T T T  T
  10    4  | T F T F T T T T T T T  T

Walk back from M[4][11]: M[3][11] and M[2][11] are also T, so the values 10 and 8 are not needed. M[1][11] = F, so 5 must be included: s = {5}. Continue at j = 11 − 5 = 6: M[0][6] = F, so 4 is included: s = {5, 4}. Continue at j = 6 − 4 = 2: M[0][2] = T through the value 2 itself, so 2 is included.
s = {5,4,2}; w = 11
Time complexity: T(n) = 2(nw) + n = O(nw)

sum_of_subsets(S, n, w) {
    M = matrix[n][w+1]                     // —> nw
    for (i = 0; i < n; i++) {              // —> nw
        for (j = 0; j <= w; j++) {
            if (j < S[i])
                M[i][j] = M[i-1][j]
            else
                M[i][j] = M[i-1][j] || M[i-1][j-S[i]]
        }
    }
    subset = create_subset(M, n, w)        // —> n
    return subset
}
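The pseudocode above can be sketched as a runnable Python function, including the walk-back reconstruction from the previous slide (a sketch; the helper logic and names are my own):

```python
def subset_sum(S, w):
    """M[i][j] is True iff some subset of the first i values sums to j.
    Filling the table is O(n*w); reconstruction walks back in O(n)."""
    n = len(S)
    M = [[False] * (w + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        M[i][0] = True                       # the empty subset sums to 0
    for i in range(1, n + 1):
        for j in range(1, w + 1):
            if j < S[i-1]:
                M[i][j] = M[i-1][j]          # value too big: copy row above
            else:
                M[i][j] = M[i-1][j] or M[i-1][j - S[i-1]]
    if not M[n][w]:
        return None                          # no subset reaches the sum
    subset, j = [], w
    for i in range(n, 0, -1):
        if not M[i-1][j]:                    # S[i-1] had to be included
            subset.append(S[i-1])
            j -= S[i-1]
    return subset
```

On the slides' example, `subset_sum([2, 4, 5, 8, 10], 11)` recovers the subset {5, 4, 2}.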
Dynamic Programming
Longest Increasing Sequence
• For example:
• In the recursive approach, the same subproblem and solution will be computed several times, making its time complexity exponential: T(n) = O(2^n)
Longest Increasing Sequence
Dynamic Programming Approach

LIS | 1  1  1  1  1  1
S   | 3  1 -2 -1  8 -4
Longest Increasing Sequence
Dynamic Programming Approach

S | 3  1 -2 -1  8 -4

1. S[1] > S[0]? No! Then i++ and j = 0.                                LIS | 1 1 1 1 1 1
2. S[2] > S[0]? No! Then j++.
3. S[2] > S[1]? No! Then i++ and j = 0.
4. S[3] > S[0]? No! Then j++.
5. S[3] > S[1]? No! Then j++.
6. S[3] > S[2]? Yes! Then LIS[3] = max{LIS[3], LIS[2] + 1} = 2; i++ and j = 0.   LIS | 1 1 1 2 1 1
7. S[4] > S[0]? Yes! Then LIS[4] = max{LIS[4], LIS[0] + 1} = 2 and j++.          LIS | 1 1 1 2 2 1
8. S[4] > S[1]? Yes! Then LIS[4] = max{LIS[4], LIS[1] + 1} = 2 and j++.          LIS | 1 1 1 2 2 1
Longest Increasing Sequence
Dynamic Programming Approach

9. S[4] > S[2]? Yes! Then LIS[4] = max{LIS[4], LIS[2] + 1} = 2 and j++.          LIS | 1 1 1 2 2 1
10. S[4] > S[3]? Yes! Then LIS[4] = max{LIS[4], LIS[3] + 1} = 3; i++ and j = 0.  LIS | 1 1 1 2 3 1
11. Since S[5] < S[j...4], we are done. The answer is max(LIS) = 3.

lis(S, n) {
    for (i = 1; i < n; i++) {
        for (j = 0; j < i; j++) {
            if (S[i] > S[j])
                LIS[i] = max(LIS[i], LIS[j] + 1)
        }
    }
    return max(LIS)
}

T(n) = O(n^2)
Can it be further optimized? Yes! It can be done in O(n log n).
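The loop above, completed into a runnable Python function (the function name is mine):

```python
def lis_length(S):
    """Length of the longest increasing subsequence, O(n^2) DP.
    LIS[i] holds the length of the longest increasing run ending at i."""
    n = len(S)
    LIS = [1] * n                        # every element alone has length 1
    for i in range(1, n):
        for j in range(i):
            if S[i] > S[j]:              # S[j] can precede S[i]
                LIS[i] = max(LIS[i], LIS[j] + 1)
    return max(LIS) if LIS else 0
```

On the slides' sequence {3, 1, −2, −1, 8, −4} it returns 3, matching the subsequence −2, −1, 8 found by the trace.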
Dynamic Programming
Maximum Contiguous Sum
• For example:
• s = {15, −30, 10} is a contiguous subsequence, but s = {5, 15, 40} is not
• An improved version of the above approach would take into account the following:
• After implementing this optimization, the time complexity of the brute-force approach is improved to O(n^2)
How?

Dynamic Programming
Maximum Contiguous Sum
Dynamic Programming Approach
S = {5, 15, −30, 10, −5, 40, 10}

i   | 0   1   2   3   4   5   6
S   | 5  15 -30  10  -5  40  10
Sum | 5  20 -10  10   5  45  55
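The Sum row above follows the rule Sum[i] = max(S[i], Sum[i−1] + S[i]): either extend the current run or start fresh at S[i]. This is Kadane's algorithm, O(n) time; a short Python sketch:

```python
def max_contiguous_sum(S):
    """Kadane's algorithm: track the best sum of a run ending here,
    and the best sum seen overall. O(n) time, O(1) extra space."""
    best = cur = S[0]
    for x in S[1:]:
        cur = max(x, cur + x)     # extend the run, or restart at x
        best = max(best, cur)
    return best
```

On S = {5, 15, −30, 10, −5, 40, 10} the running values reproduce the Sum row and the result is 55.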
Traveling Salesman Problem
• Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the original city?
• Brute force (next slide) will compute all the possible permutations in order to get the shortest possible route in O(n!). Not good!
• We can do slightly better using dynamic programming, since it only takes into account the minimum shortest paths computed so far, giving O(n^2 · 2^n). This may not seem a great improvement. However, it makes the problem feasible for graphs of roughly 20 nodes at most on a typical computer.
• The traveling salesman problem is considered NP-hard, but it is not in NP. Therefore, the problem cannot be considered NP-complete. More about this at the end of the semester…..
Traveling Salesman Problem
Brute Force Approach O(n!)

    A   B   C   D
A   0   16  11  6
B   8   0   12  16
C   4   7   0   9
D   5   12  2   0

All possible permutations: ABCD, ABDC, ADBC, …, DABC
Traveling Salesman Problem
Dynamic Programming Approach O(n^2 · 2^n)

    A   B   C   D
A   0   16  11  6
B   8   0   13  16
C   4   7   0   9
D   5   12  2   0

[Figure: recursion tree rooted at A. The leaves return the distance back to A (Dis(B, A) = 8, Dis(C, A) = 4, Dis(D, A) = 5); each internal node keeps the minimum cost of completing the tour from that city, giving B = 22, C = 28, D = 17. The optimal tour cost is min(16 + 22, 11 + 28, 6 + 17) = 23.]
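The tree above corresponds to the Held-Karp recurrence: the cheapest path to city j through a set of visited cities is built from cheapest paths through smaller sets. A Python sketch (names and the dictionary memoization are my own), fixing city 0 as the start:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp TSP DP in O(n^2 * 2^n). best[(S, j)] is the cheapest
    path that starts at city 0, visits every city in S once, ends at j."""
    n = len(dist)
    best = {}
    for j in range(1, n):
        best[(frozenset([j]), j)] = dist[0][j]
    for size in range(2, n):
        for combo in combinations(range(1, n), size):
            S = frozenset(combo)
            for j in S:
                rest = S - {j}
                best[(S, j)] = min(best[(rest, k)] + dist[k][j] for k in rest)
    full = frozenset(range(1, n))
    # close the tour by returning to city 0
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))
```

On the 4-city matrix of this slide it returns the tour cost 23 found in the figure.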
Traveling Salesman Problem
Dynamic Programming Approach O(n^2 · 2^n)
• For instance, in dynamic programming, when we are solving the "sum of subsets" problem, we need to find all dependent subproblems (local decision problem, T or F). Finally, we need to go back again to see which items can be included in the subset based on those local decisions.
• On the other hand, a greedy algorithm would make a choice (whether to include the item or not) based on the optimal local solution to the subproblem (T or F decision). So, it won't go back again.
• In summary, a greedy algorithm never reconsiders its choices, whereas dynamic programming may reconsider one or more of the previous states. Greedy algorithms are more efficient than dynamic programming ones. Why?
[Figure: step-by-step construction on the weighted graph with vertices A, B, D, E, F; edge weights read from the fragments: A–B = 11, A–F = 7, A–D = 13, D–F = 3, D–E = 2, E–F = 6, with 9 and 15 on the remaining edges at B.]
• With a Fibonacci heap, it can be improved to Θ(|E| + |V| log |V|), where E is the set of edges
• MST-Kruskal's Algorithm
• This algorithm has time complexity Θ(|V| · |E|) or Θ(n^2)
• With the help of a Min Heap data structure, we can improve the time complexity of this algorithm to Θ(|E| log |V|)
• In general, for large graphs, Prim's algorithm runs faster than Kruskal's
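As an illustration of the min-heap variant, here is a Python sketch of Prim's algorithm using heapq (the adjacency-list format, mapping each vertex to (weight, neighbor) pairs, is an assumption of this sketch):

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm with a binary min-heap: O(|E| log |V|).
    graph: dict mapping vertex -> list of (weight, neighbor) pairs.
    Returns the total weight of the minimum spanning tree."""
    visited = {start}
    edges = list(graph[start])
    heapq.heapify(edges)
    total = 0
    while edges and len(visited) < len(graph):
        w, v = heapq.heappop(edges)      # cheapest edge leaving the tree
        if v in visited:
            continue                     # both endpoints already in tree
        visited.add(v)
        total += w
        for e in graph[v]:
            if e[1] not in visited:
                heapq.heappush(edges, e)
    return total
```

The heap always surfaces the cheapest crossing edge, which is exactly the greedy choice Prim's algorithm never reconsiders.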
Shortest Path in a Graph Dijkstra’s algorithm
Dijkstra′s Greedy Algorithm

For example: vi = {vertices considered so far, initially (A)}
d(vi, B) = {distance from vi to vertex B}
Shortest Path in a Graph Dijkstra’s algorithm
Vertex D is included because {A, F, D} = 10 is the minimum distance

vi      | d(vi, B) | d(vi, D) | d(vi, E) | d(vi, F)
{A}     | A, 11    | A, 13    | ∞        | A, 7
{A, F}  | A, 11    | F, 10    | F, 13    | A, 7
Dijkstra′s Greedy Algorithm
Finally, the remaining vertex (E) is included

vi           | d(vi, B) | d(vi, D) | d(vi, E) | d(vi, F)
{A}          | A, 11    | A, 13    | ∞        | A, 7
{A, F}       | A, 11    | F, 10    | F, 13    | A, 7
{A, F, D}    | A, 11    | F, 10    | D, 12    | A, 7
{A, F, D, E} | A, 11    | F, 10    | D, 12    | A, 7

Shortest paths from A:
{A} → {F} = 7
{A} → {B} = 11
{A} → {D} = 10
{A} → {E} = 12   (A → F → D → E: 7 + 3 + 2)
• Fibonacci Heap: O(E + V log V)
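The trace above can be sketched with a binary min-heap (heapq) in Python. In the example graph below, the weights 11, 13, 7, 3, 2, 6 come from the slide's trace; attaching the remaining weights 9 and 15 to B–D and B–E is my assumption, and they do not change any distance from A:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's greedy shortest-path algorithm with a binary heap.
    graph: dict mapping vertex -> list of (neighbor, weight) pairs."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale queue entry, skip
        for v, w in graph[u]:
            if d + w < dist[v]:           # greedy relaxation step
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```

Running it on that graph reproduces the table: d(A, B) = 11, d(A, D) = 10, d(A, E) = 12, d(A, F) = 7.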
Floyd’s Algorithm All Pairs Shortest Path
Create a matrix A^0 representing the graph

[Figure: directed weighted graph on vertices 1–5 with the edge weights shown in A^0]

A^0 | 1  2  3  4  5
 1  | 0  6  5  ∞  2
 2  | ∞  0  1  ∞  ∞
 3  | 7  4  0  ∞  ∞
 4  | ∞  ∞  9  0  8
 5  | ∞  ∞  ∞  3  0
Create matrix A^1 only with values from column[1] and row[1] of A^0 (plus the diagonal); fill the rest with A^1[i][j] = min(A^0[i][j], A^0[i][1] + A^0[1][j]):

A^1[2][3] = min(A^0[2][3], A^0[2][1] + A^0[1][3]) = 1
A^1[2][4] = min(A^0[2][4], A^0[2][1] + A^0[1][4]) = ∞
A^1[2][5] = min(A^0[2][5], A^0[2][1] + A^0[1][5]) = ∞
A^1[3][2] = min(A^0[3][2], A^0[3][1] + A^0[1][2]) = 4
.....

A^1 | 1  2  3  4  5
 1  | 0  6  5  ∞  2
 2  | ∞  0  1  ∞  ∞
 3  | 7  4  0  ∞  9
 4  | ∞  ∞  9  0  8
 5  | ∞  ∞  ∞  3  0
Create matrix A^2 only with values from column[2] and row[2] of A^1

A^1 | 1  2  3  4  5
 1  | 0  6  5  ∞  2
 2  | ∞  0  1  ∞  ∞
 3  | 7  4  0  ∞  9
 4  | ∞  ∞  9  0  8
 5  | ∞  ∞  ∞  3  0

After the computations, A^2 = A^1: no distance improves by routing through vertex 2.
Create matrix A^3 only with values from column[3] and row[3] of A^2

A^3[1][2] = min(A^2[1][2], A^2[1][3] + A^2[3][2]) = 6
A^3[1][4] = min(A^2[1][4], A^2[1][3] + A^2[3][4]) = ∞
A^3[1][5] = min(A^2[1][5], A^2[1][3] + A^2[3][5]) = 2
A^3[2][1] = min(A^2[2][1], A^2[2][3] + A^2[3][1]) = 8
A^3[2][4] = min(A^2[2][4], A^2[2][3] + A^2[3][4]) = ∞
A^3[2][5] = min(A^2[2][5], A^2[2][3] + A^2[3][5]) = 10
A^3[4][1] = min(A^2[4][1], A^2[4][3] + A^2[3][1]) = 16
A^3[4][2] = min(A^2[4][2], A^2[4][3] + A^2[3][2]) = 13
.....

A^3 | 1  2  3  4  5
 1  | 0  6  5  ∞  2
 2  | 8  0  1  ∞  10
 3  | 7  4  0  ∞  9
 4  | 16 13 9  0  8
 5  | ∞  ∞  ∞  3  0
Floyd’s Algorithm All Pairs Shortest Path
Consider the following graph. Find all pairs shortest path.
Create matrix A^4 only with values from column[4] and row[4] of A^3

A^4[5][1] = min(A^3[5][1], A^3[5][4] + A^3[4][1]) = 19
A^4[5][2] = min(A^3[5][2], A^3[5][4] + A^3[4][2]) = 16
A^4[5][3] = min(A^3[5][3], A^3[5][4] + A^3[4][3]) = 12
.....

A^4 | 1  2  3  4  5
 1  | 0  6  5  ∞  2
 2  | 8  0  1  ∞  10
 3  | 7  4  0  ∞  9
 4  | 16 13 9  0  8
 5  | 19 16 12 3  0
Create matrix A^5 only with values from column[5] and row[5] of A^4

A^5[1][4] = min(A^4[1][4], A^4[1][5] + A^4[5][4]) = 5
.....

A^5 | 1  2  3  4  5
 1  | 0  6  5  5  2
 2  | 8  0  1  13 10
 3  | 7  4  0  12 9
 4  | 16 13 9  0  8
 5  | 19 16 12 3  0
The result A is a |V| × |V| matrix of all-pairs shortest distances.
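The whole computation above fits in a few lines of Python (a generic Floyd-Warshall sketch over a copy of A^0, using 0-based indices instead of the slides' 1-based ones):

```python
INF = float('inf')

def floyd_warshall(A):
    """Floyd's all-pairs shortest path, O(|V|^3):
    A^k[i][j] = min(A^(k-1)[i][j], A^(k-1)[i][k] + A^(k-1)[k][j])."""
    n = len(A)
    D = [row[:] for row in A]        # work on a copy of A^0
    for k in range(n):               # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```

On the slides' A^0 this reproduces the final matrix A^5 shown above.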
Knapsack Problem 0/1 Dynamic Programming
Max capacity W = 11 kg

      w →
p[i] w[i] item[i] | 0 1 2 3 4 5 6 7 8 9 10 11
 0    0    0      | 0 0 0 0 0 0 0 0 0 0 0  0
 3    3    1      |
 8    4    2      |
 1    2    3      |
 15   5    4      |
      w →
p[i] w[i] item[i] | 0 1 2 3 4 5 6 7 8 9 10 11
 0    0    0      | 0 0 0 0 0 0 0 0 0 0 0  0
 3    3    1      | 0 0 0 3 3 3 3 3 3 3 3  3
 8    4    2      |
 1    2    3      |
 15   5    4      |

How many items can we add to the knapsack with w = 3? One. So p[1] = 3 is added.
Knapsack Problem 0/1 Dynamic Programming
item = {1,2,3,4}; w = {3,4,2,5}; p = {3,8,1,15}; W = 11

      w →
p[i] w[i] item[i] | 0 1 2 3 4 5 6 7  8  9  10 11
 0    0    0      | 0 0 0 0 0 0 0 0  0  0  0  0
 3    3    1      | 0 0 0 3 3 3 3 3  3  3  3  3
 8    4    2      | 0 0 0 3 8 8 8 11 11 11 11 11
 1    2    3      |
 15   5    4      |

How many items can we add to the knapsack with w = 4? One. So p[2] = 8 is added.
How many items can we add to the knapsack with w = 7? One. So p[2] = 8.
However, item[1] must also be considered. Since M[i − 1][w] < M[i − 1][w − w[i]] + p[i], we set M[i][w] = 8 + 3 = 11.
Knapsack Problem 0/1 Dynamic Programming
item = {1,2,3,4}; w = {3,4,2,5}; p = {3,8,1,15}; W = 11

      w →
p[i] w[i] item[i] | 0 1 2 3 4 5  6  7  8  9  10 11
 0    0    0      | 0 0 0 0 0 0  0  0  0  0  0  0
 3    3    1      | 0 0 0 3 3 3  3  3  3  3  3  3
 8    4    2      | 0 0 0 3 8 8  8  11 11 11 11 11
 1    2    3      | 0 0 1 3 8 8  9  11 11 12 12 12
 15   5    4      | 0 0 1 3 8 15 15 16 18 23 23 24
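The table is filled with the recurrence M[i][w] = max(M[i − 1][w], M[i − 1][w − w[i]] + p[i]). A minimal Python sketch (the function name is mine), returning the best profit M[n][W]:

```python
def knapsack_01(weights, profits, W):
    """0/1 knapsack DP: M[i][w] = best profit using the first i items
    within capacity w. O(n*W) time and space."""
    n = len(weights)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            M[i][w] = M[i-1][w]                    # skip item i
            if weights[i-1] <= w:                  # or take it, if it fits
                M[i][w] = max(M[i][w], M[i-1][w - weights[i-1]] + profits[i-1])
    return M[n][W]
```

On the slides' instance, `knapsack_01([3, 4, 2, 5], [3, 8, 1, 15], 11)` returns 24, the bottom-right entry of the table.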
W = 11

MaxWeight = Σ_{i=1}^{n} x_i w_i = 3(2/3) + 4 + 0 + 5 = 11
MaxProfit = Σ_{i=1}^{n} x_i p_i = 3(2/3) + 8 + 0 + 15 = 25
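The 3(2/3) term means two thirds of item 1 is taken, i.e. x = (2/3, 1, 0, 1). This appears to be the greedy fractional-knapsack strategy: take items in decreasing profit/weight order and split only the last one. A hedged Python sketch (the function name is mine):

```python
def fractional_knapsack(weights, profits, W):
    """Greedy fractional knapsack: sort by profit/weight ratio,
    take whole items while they fit, split the last one."""
    items = sorted(zip(weights, profits),
                   key=lambda it: it[1] / it[0], reverse=True)
    capacity, profit = W, 0.0
    for w, p in items:
        if capacity == 0:
            break
        take = min(w, capacity)      # fraction (take / w) of this item
        profit += p * take / w
        capacity -= take
    return profit
```

On w = {3, 4, 2, 5}, p = {3, 8, 1, 15}, W = 11 it takes item 4, item 2, and two thirds of item 1, giving MaxProfit = 25 as above; note the 0/1 table only reaches 24, which is why the fractional variant admits a greedy solution while 0/1 needs dynamic programming.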