
CSC510-04 Analysis of Algorithms

Dynamic Programming Algorithms


Jose Ortiz
jortizco@sfsu.edu

Copyright Jose Ortiz


Overview
• What is Dynamic Programming?
• When to use Dynamic Programming
• Binomial Coefficients
• Sum of Subsets
• Longest Increasing Sequence
• Maximum Contiguous Sum
• Traveling Salesman Problem
2 Copyright Jose Ortiz
What is Dynamic
Programming?

3 Copyright Jose Ortiz


What is Dynamic Programming?
• Dynamic programming is a technique for optimizing the amount of work done by a
recursive algorithm — specifically, a recursive algorithm that repeats computations
already performed by earlier calls.

• For instance, when we optimize the recursive Fibonacci algorithm from time
complexity O(2^n) to O(n), we are applying dynamic programming with
memoization: results are stored and reused as needed.

• Where and when should dynamic programming be used?

• Dynamic programming applies when a problem can be divided into similar,
overlapping sub-problems whose results can be reused.

4 Copyright Jose Ortiz
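The Fibonacci optimization described above can be sketched in Python; `functools.lru_cache` plays the role of the memo table, so each subproblem is computed exactly once and the running time drops to O(n):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci with memoization: each fib(i) is computed once, so O(n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Without the cache decorator the same function would take O(2^n) time, since fib(n-1) and fib(n-2) both recompute the same smaller subproblems.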


Binomial Coefficients

5 Copyright Jose Ortiz
Binomial Coefficients
• Mathematically speaking, we solve a binomial coefficient problem by applying the
following formula:

C(n, k) = n! / (k!(n − k)!)

For example: C(4, 2) = 6 and C(6, 3) = 20

How to code the above problem?

Time complexity? Space complexity?

6 Copyright Jose Ortiz
Binomial Coefficients
• Mathematically speaking, we solve a binomial coefficient problem by applying the
following formula:

C(n, k) = n! / (k!(n − k)!)

For example: C(4, 2) = 6 and C(6, 3) = 20

How to code the above problem? In factorial time: n! = (1 * 2 * 3 * ... * n) and
k! = (1 * 2 * 3 * ... * k)

Time complexity? Space complexity? O(n!) and O(1)

7 Copyright Jose Ortiz
Dynamic Programming Technique
Binomial Coefficients
Step 1: Create a table for the binomial coefficients using some values for n and k.

C(n, k) = n! / (k!(n − k)!)

Base cases:
C(n, n) = 1
C(n, 0) = 1
C(n, k) = 0 when k > n

N/K   0   1   2
 0    1   0   0
 1    1   1   0
 2    1       1
 3    1
 4    1

8 Copyright Jose Ortiz
Dynamic Programming Technique
Binomial Coefficients

C(2,1) = 2
C(3,1) = 3
C(4,1) = 4

N/K   0   1   2
 0    1   0   0
 1    1   1   0
 2    1   2   1
 3    1   3
 4    1   4

9 Copyright Jose Ortiz
Dynamic Programming Technique
Binomial Coefficients

C(3,2) = 3
C(4,2) = 6

N/K   0   1   2
 0    1   0   0
 1    1   1   0
 2    1   2   1
 3    1   3   3
 4    1   4   6

10 Copyright Jose Ortiz
Dynamic Programming Technique
Binomial Coefficients
Step 2: Find patterns in the data that will help you define each result.

Notice how the result B(3,2) = 3 comes from the addition of B(2,1) + B(2,2).

When we check all the results using the above method, we can devise a pattern
that repeats in all the results shown in the table.

N/K   0   1   2
 0    1   0   0
 1    1   1   0
 2    1   2   1
 3    1   3   3
 4    1   4   6

11 Copyright Jose Ortiz
Dynamic Programming Technique
Binomial Coefficients
Step 3: Determine the formula to be followed in the dynamic programming technique.

We know that:
B(n, n) = 1
B(n, 0) = 1
B(n, k) = 0 when k > n

Then,
B(n, k) = B(n − 1, k − 1) + B(n − 1, k)

N/K   0   1   2
 0    1   0   0
 1    1   1   0
 2    1   2   1
 3    1   3   3
 4    1   4   6

12 Copyright Jose Ortiz
Dynamic Programming Technique
Binomial Coefficients
Step 4: Test the dynamic programming approach and check the new result against
the binomial coefficient formula.

B(n, k) = B(n − 1, k − 1) + B(n − 1, k)
B(5,2) = B(4,1) + B(4,2) = 4 + 6 = 10

Check: C(5, 2) = 5! / (2! * 3!) = 120 / 12 = 10

N/K   0   1   2
 0    1   0   0
 1    1   1   0
 2    1   2   1
 3    1   3   3
 4    1   4   6
 5    1   5   10

13 Copyright Jose Ortiz
Recursive Approach
Binomial Coefficients Pseudocode
func binomialCoeff(n, k) {
    if (k == 0 or k == n) return 1;
    return binomialCoeff(n-1, k-1) + binomialCoeff(n-1, k);
}

T(n, k) = T(n − 1, k − 1) + T(n − 1, k) + O(1) = O(2^n)

Time complexity is bad. This is the same problem as the recursive Fibonacci
approach, since most of the steps done by the algorithm are repeated.

14 Copyright Jose Ortiz
Recursive Approach (Memoization) Top-Down
Binomial Coefficients Pseudocode
func binomialCoeff(n, k, m) { // dynamic programming top-down approach
    // m is an (n+1) x (k+1) table created once by the caller,
    // with all values initialized to -1
    if (k == 0 or k == n) return 1;
    if (m[n][k] == -1)
        m[n][k] = binomialCoeff(n-1, k-1, m) + binomialCoeff(n-1, k, m)
    return m[n][k];
}

T(n) = O(nk) + O(1) + O(1) = O(nk), much better than O(2^n)

15 Copyright Jose Ortiz
Iterative Dynamic Programming Bottom-Up
Binomial Coefficients Pseudocode
func binomialCoeff(n, k) { // dynamic programming bottom-up approach
    m = create_matrix(n+1, k+1)
    for (i = 0; i <= n; i++) {
        for (j = 0; j <= k; j++) {
            if (j > i) m[i][j] = 0
            else if (j == 0 || j == i) m[i][j] = 1
            else m[i][j] = m[i-1][j-1] + m[i-1][j]
        }
    }
    return m[n][k];
}
T(n) = O(nk) + O(nk) = O(nk). Challenge: can you improve this time complexity?
16 Copyright Jose Ortiz
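The bottom-up pseudocode above translates to Python roughly as follows (the function name is illustrative):

```python
def binomial_coeff(n, k):
    """Bottom-up DP: fill Pascal's triangle row by row, O(nk) time."""
    m = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                m[i][j] = 1                         # base cases C(i,0) = C(i,i) = 1
            else:
                m[i][j] = m[i-1][j-1] + m[i-1][j]   # Pascal's rule
    return m[n][k]

print(binomial_coeff(5, 2))  # 10
```

As a space optimization (one answer to the challenge), a single row of length k+1 updated right-to-left suffices, giving O(k) space.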
Binomial Coefficients Expansion
With Dynamic Programming
• How to solve the (a + b)^n binomial expansion with dynamic programming?

(a + b)^n = Σ (k = 0 to n) C(n, k) a^(n−k) b^k

Examples will be provided during lectures.

17 Copyright Jose Ortiz
Binomial Coefficients Exam Review

18 Copyright Jose Ortiz
Dynamic Programming
Sum of Subsets

19 Copyright Jose Ortiz


Sum of Subsets (Brute Force)
• Problem Statement:
Given a set of integers S = {1, 2, ..., n}, find a subset s such that the
sum of all the elements of s equals w

• For example:
S = {2,4,5,8,10}; w = 11, so 2 + 4 + 5 = 11
A brute force solution for this problem would be:

Copyright Jose Ortiz
Sum of Subsets (Brute Force)
S = {2,4,5,8,10}; w = 11, so sum = 2 + 4 + 5 = 11
A brute force solution for this problem would enumerate all subsets:
2
2,4
5
2,4,5
2,4,5,8
2,4,5,8,10
...
Finding the solution will take 2^n = O(2^n)

Copyright Jose Ortiz


Sum of Subsets (Dynamic Programming)
• Can we do better? Of course we can. Using dynamic programming.

• How?

1. Assume S = {2,4,5,8,10}; w = 11

2. Create a matrix of size M = [n][w + 1]


3. Then see next slide

Copyright Jose Ortiz


Sum of Subsets (Dynamic Programming)
S = {2,4,5,8,10}; w = 11

Values i/j   0   1   2   3   4   5   6   7   8   9   10   11
2   (i=0)    T   F   F   F   F   F   F   F   F   F   F    F
4   (i=1)    T   F   F   F   F   F   F   F   F   F   F    F
5   (i=2)    T   F   F   F   F   F   F   F   F   F   F    F
8   (i=3)    T   F   F   F   F   F   F   F   F   F   F    F
10  (i=4)    T   F   F   F   F   F   F   F   F   F   F    F

We set M[i][0] = True for every row because the empty subset always makes a sum of 0.
Copyright Jose Ortiz
Sum of Subsets (Dynamic Programming)
• S = {2,4,5,8,10}; w = 11

Values i/j   0   1   2   3   4   5   6   7   8   9   10   11
2   (i=0)    T   F   T   F   F   F   F   F   F   F   F    F
4   (i=1)    T   F   F   F   F   F   F   F   F   F   F    F
5   (i=2)    T   F   F   F   F   F   F   F   F   F   F    F
8   (i=3)    T   F   F   F   F   F   F   F   F   F   F    F
10  (i=4)    T   F   F   F   F   F   F   F   F   F   F    F

Can the subset {2} make a sum of j = 1? No! Then we set M[0][1] = F.
Can the subset {2} make a sum of j = 2? Yes! Then we set M[0][2] = T.
Can the subset {2} make a sum of j > 2? No! Then M[0][3..11] = F.
Copyright Jose Ortiz
Sum of Subsets (Dynamic Programming)
• S = {2,4,5,8,10}; w = 11

Values i/j   0   1   2   3   4   5   6   7   8   9   10   11
2   (i=0)    T   F   T   F   F   F   F   F   F   F   F    F
4   (i=1)    T   F   T   F   T   F   T   F   F   F   F    F
5   (i=2)    T   F   F   F   F   F   F   F   F   F   F    F
8   (i=3)    T   F   F   F   F   F   F   F   F   F   F    F
10  (i=4)    T   F   F   F   F   F   F   F   F   F   F    F

When i ≥ 1, while j < S[i] we set M[i][j] = M[i − 1][j]
Otherwise, M[i][j] = M[i − 1][j] || M[i − 1][j − S[i]]

Copyright Jose Ortiz


Sum of Subsets (Dynamic Programming)
• S = {2,4,5,8,10}; w = 11

Values i/j   0   1   2   3   4   5   6   7   8   9   10   11
2   (i=0)    T   F   T   F   F   F   F   F   F   F   F    F
4   (i=1)    T   F   T   F   T   F   T   F   F   F   F    F
5   (i=2)    T   F   T   F   T   T   T   T   F   T   F    T
8   (i=3)    T   F   T   F   T   T   T   T   T   T   T    T
10  (i=4)    T   F   T   F   T   T   T   T   T   T   T    T

This is the final matrix after all the computations are completed.

But, how do we find the resulting subset s whose elements sum to w?
See next slide….
Copyright Jose Ortiz
Sum of Subsets (Dynamic Programming)
• S = {2,4,5,8,10}; w = 11

To recover the subset, backtrack from M[4][11]. A True cell M[i][j] that is not
explained by the row above (M[i − 1][j] is False) means item S[i] must be
included; we then move to M[i − 1][j − S[i]] and repeat:

M[2][11] is True but M[1][11] is False, so include 5:   s = {5}
M[1][6] is True but M[0][6] is False, so include 4:     s = {5, 4}
M[0][2] is True, so include 2:                          s = {5, 4, 2}; w = 11

Copyright Jose Ortiz


Sum of Subsets Dynamic Programming
Pseudocode
sum_of_subset(S, w) { // S is the set of values, and w is the target sum
    n = len(S)
    M = matrix[n][w+1] // initialize M[i][0] = True for all i; all other values False
    for (i = 0; i < n; i++) {
        for (j = 1; j <= w; j++) {
            if (i == 0)
                M[i][j] = (j == S[i])
            else if (j < S[i])
                M[i][j] = M[i-1][j]
            else
                M[i][j] = M[i-1][j] || M[i-1][j-S[i]]
        }
    }
    subset = create_subset(M, n, w)
    return subset
}
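A runnable sketch of the table filling plus the backtracking step (the `create_subset` helper from the pseudocode is folded into the same function here):

```python
def sum_of_subset(S, w):
    """DP table M[i][j]: can some subset of S[0..i] sum to j? Then backtrack."""
    n = len(S)
    M = [[False] * (w + 1) for _ in range(n)]
    for i in range(n):
        M[i][0] = True                       # the empty subset sums to 0
        for j in range(1, w + 1):
            M[i][j] = M[i-1][j] if i > 0 else False
            if S[i] <= j:
                M[i][j] = M[i][j] or (j == S[i] if i == 0 else M[i-1][j - S[i]])
    if not M[n-1][w]:
        return None
    # backtrack from M[n-1][w] to recover one valid subset
    subset, j = [], w
    for i in range(n - 1, -1, -1):
        if i > 0 and M[i-1][j]:
            continue                         # item i is not needed to reach j
        subset.append(S[i])
        j -= S[i]
        if j == 0:
            break
    return subset

print(sum_of_subset([2, 4, 5, 8, 10], 11))  # [5, 4, 2]
```

Time and space are both O(nw), matching the analysis on the next slide.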
Sum of Subsets Dynamic Programming
Time Complexity
sum_of_subset(S, w) {
    n = len(S)
    M = matrix[n][w+1]                      // --> nw
    for (i = 0; i < n; i++) {               // --> nw
        for (j = 0; j <= w; j++) {
            if (j < S[i])
                M[i][j] = M[i-1][j]
            else
                M[i][j] = M[i-1][j] || M[i-1][j-S[i]]
        }
    }
    subset = create_subset(M, n, w)         // --> n
    return subset
}

Time complexity: T(n) = 2(nw) + n = O(nw)
Dynamic Programming
Longest Increasing Sequence

32 Copyright Jose Ortiz


Longest Increasing Sequence
• Problem Statement

• Given a set of unordered signed integers, find the longest increasing sequence

• For example:

• S = {3, 1, −2, −1, 8, −4} → s = {−2, −1, 8}

• Longest increasing sequence length(s) = 3
Longest Increasing Sequence
• Problem Statement

• S = {3, 1, −2, −1, 8, −4} → s = {−2, −1, 8}; length(s) = 3

• The time complexity of a recursive approach to this problem is exponential,
because we must check every sequence in the set to determine the longest
increasing one.

• In the recursive approach, the same subproblem is solved several times,
making the time complexity exponential: T(n) = O(2^n)
Longest Increasing Sequence
Dynamic Programming Approach

Create an array LIS of size n.
Initialize all of its values to 1 (each element by itself is an increasing
sequence of length 1).

LIS   1   1   1   1   1   1
S     3   1  -2  -1   8  -4
Longest Increasing Sequence
Dynamic Programming Approach
1. S[1] > S[0]? No! Then i++ and j = 0
2. S[2] > S[0]? No! Then j++
3. S[2] > S[1]? No! Then i++ and j = 0
4. S[3] > S[0]? No! Then j++

LIS   1   1   1   1   1   1
S     3   1  -2  -1   8  -4
Longest Increasing Sequence
Dynamic Programming Approach
5. S[3] > S[1]? No! Then j++
6. S[3] > S[2]? Yes! Then LIS[3] = max(LIS[3], LIS[2] + 1) = 2, then i++ and j = 0
   LIS   1   1   1   2   1   1
7. S[4] > S[0]? Yes! Then LIS[4] = max(LIS[4], LIS[0] + 1) = 2, and j++
8. S[4] > S[1]? Yes! Then LIS[4] = max(LIS[4], LIS[1] + 1) = 2, and j++
   LIS   1   1   1   2   2   1
   S     3   1  -2  -1   8  -4
Longest Increasing Sequence
Dynamic Programming Approach
9.  S[4] > S[2]? Yes! Then LIS[4] = max(LIS[4], LIS[2] + 1) = 2, and j++
10. S[4] > S[3]? Yes! Then LIS[4] = max(LIS[4], LIS[3] + 1) = 3, then i++ and j = 0
    LIS   1   1   1   2   3   1
11. Since S[5] < S[0..4], we are done.

    LIS   1   1   1   2   3   1
    S     3   1  -2  -1   8  -4

So the longest increasing sequence has length max(LIS) = 3.
But how do we get the subset s? Walk back from the cell holding the maximum LIS
value, picking the elements that produced each decrement: s = {−2, −1, 8}
Longest Increasing Sequence
Dynamic Programming Pseudocode
longest_increasing_sequence(S, LIS) {
    # Note: values of LIS initialized to 1
    n = length_of(S)
    for (i = 0; i < n; i++) {
        for (j = 0; j < i; j++) {
            if S[i] > S[j]
                LIS[i] = max(LIS[i], LIS[j] + 1)
        }
    }
    return max(LIS)
}

Time Complexity: T(n) = Σ (i = 0 to n) Σ (j = 0 to i) 1 = Σ (i = 0 to n) i = n(n + 1)/2,
so T(n) = O(n^2)

Can it be further optimized? Yes! It can be done in O(n log n)
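The O(n log n) optimization mentioned above keeps, for each possible length, the smallest tail value of any increasing subsequence of that length (the patience-sorting idea); a minimal sketch:

```python
from bisect import bisect_left

def lis_length(S):
    """O(n log n) LIS length. tails[k] = smallest possible tail value of an
    increasing subsequence of length k+1."""
    tails = []
    for x in S:
        pos = bisect_left(tails, x)   # first tail >= x
        if pos == len(tails):
            tails.append(x)           # x extends the longest subsequence so far
        else:
            tails[pos] = x            # x is a smaller tail for length pos+1
    return len(tails)

print(lis_length([3, 1, -2, -1, 8, -4]))  # 3
```

Each element costs one binary search, hence n log n total; note this variant returns only the length, not the subsequence itself.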
Dynamic Programming
Maximum Contiguous Sum

40 Copyright Jose Ortiz


Maximum Contiguous Sum
• Problem Statement

• Given a list S made up of a sequence of elements, find a contiguous
subsequence of maximum sum (a subsequence of length zero has sum 0)

• For example:

• S = {5, 15, −30, 10, −5, 40, 10}

• s = {15, −30, 10} is a contiguous subsequence, but s = {5, 15, 40} is not

• The max contiguous sum of S is s = {10, −5, 40, 10} = 55
Maximum Contiguous Sum
Brute Force Approach
• A brute force approach would consider all contiguous subsequences and take the
maximum. This can be done in O(n^3).

• An improved version of the above approach takes into account the following:

• sum(S[i .. j + 1]) = S[j + 1] + sum(S[i .. j])

• After implementing this optimization, the time complexity of the brute force
approach improves to O(n^2).

• Can we do better? Yes! With Divide and Conquer: O(n log n)


Maximum Contiguous Sum
Divide and Conquer Pseudocode
func maxContSum(Array, low, high) {
    // Array has only one element
    if (low == high) return Array[high];
    middle = (high + low) / 2

    // Compute max sum of the left part ending at this partition point
    Iterate from middle down to low, and compute maxLeftSum        // --> n
    // Compute max sum of the right part starting at this partition point
    Iterate from middle+1 to high, and compute maxRightSum         // --> n

    // recursive calls to do the above for each partition
    maxLeftArray = maxContSum(Array, low, middle)                  // --> T(n/2)
    maxRightArray = maxContSum(Array, middle+1, high)              // --> T(n/2)

    // max from both sides
    maxSum = max(maxLeftArray, maxRightArray)
    return max(maxSum, maxLeftSum + maxRightSum)
}

Time Complexity:
T(n) = 2T(n/2) + 2n
a = 2, b = 2, k = 1, p = 0
a = b^k and p > −1  →  Θ(n^k log^(p+1) n) = Θ(n^1 log^(0+1) n)
T(n) = Θ(n log n)

Can this time complexity be further optimized? Yes! How? Dynamic Programming
Maximum Contiguous Sum
Dynamic Programming Approach
S = {5, 15, −30, 10, −5, 40, 10}

i     0    1    2    3    4    5    6
S     5   15  -30   10   -5   40   10
Sum   5   20  -10   10    5   45   55

MaximumContSum(S, n) {
    Sum = [n]; Sum[0] = S[0];
    iterate i from 1 to n − 1:
        Sum[i] = max(S[i], S[i] + Sum[i − 1])
    return max(Sum)
}

Time Complexity: T(n) = n + 1, so T(n) = O(n)
Space Complexity: O(n)
Maximum Contiguous Sum
Dynamic Programming Approach
Can we further improve the space complexity? Yes! We can make it O(1)
S = {5, 15, −30, 10, −5, 40, 10}

i   0    1    2    3    4    5    6
S   5   15  -30   10   -5   40   10

MaximumContSum(S, n) {
    sum = S[0]; maxSum = sum
    iterate i from 1 to n − 1:
        sum = max(S[i], S[i] + sum)
        maxSum = max(maxSum, sum)
    return maxSum   # returns 55
}

Time Complexity: T(n) = n + 1, so T(n) = O(n)
Space Complexity: O(1)
Maximum Contiguous Sum
Dynamic Programming Approach
How to get the subset of items with the max contiguous sum?
S = {5, 15, −30, 10, −5, 40, 10}

i   0    1    2    3    4    5    6
S   5   15  -30   10   -5   40   10

MaximumContSum(S, n) {
    sum = maxSum = S[0]; i = start = end = 0;
    iterate j from 1 to n − 1:
        sum = max(S[j], S[j] + sum)
        if sum > maxSum
            maxSum = sum; start = i; end = j;
        else if sum < 0
            i = j + 1
    return start, end, maxSum   # 3, 6, 55
}

Time Complexity: T(n) = n + 1, so T(n) = O(n)
Space Complexity: O(1)
Challenge: make this algorithm recursive keeping the same time complexity
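The index-tracking version above, written as runnable Python (this is the standard formulation of Kadane's algorithm):

```python
def max_contiguous_sum(S):
    """Kadane's algorithm with index tracking: O(n) time, O(1) extra space."""
    best = cur = S[0]
    start = end = cur_start = 0
    for j in range(1, len(S)):
        if S[j] > S[j] + cur:        # the running sum went negative: restart at j
            cur = S[j]
            cur_start = j
        else:
            cur = S[j] + cur
        if cur > best:
            best, start, end = cur, cur_start, j
    return start, end, best

print(max_contiguous_sum([5, 15, -30, 10, -5, 40, 10]))  # (3, 6, 55)
```

The returned indices (3, 6) identify the slice {10, −5, 40, 10} of sum 55, matching the trace on the slide.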
Dynamic Programming
Traveling Salesman Problem

47 Copyright Jose Ortiz


Traveling Salesman Problem
• Problem Statement

• Given a list of cities and the distances between each pair of cities, what is the shortest
possible route that visits each city exactly once and returns to the original city?

• Brute force (next slide) computes all possible permutations in order to get the shortest
possible route, in O(n!). Not good!

• We can do slightly better using dynamic programming, since it only takes into account the
minimum shortest paths computed so far, in O(n^2 * 2^n). This may not seem a great
improvement. However, it makes the problem feasible for graphs of roughly 20 nodes at
most on a typical computer.

• The traveling salesman problem is considered NP-hard but it's not in NP. Therefore, the
problem cannot be considered NP-complete. More about this at the end of the semester…..
Traveling Salesman Problem
Brute Force Approach O(n!)

Graph Representation (figure): complete directed graph on A, B, C, D with the
weights shown in the matrix.

Matrix Representation:
     A    B    C    D
A    0   16   11    6
B    8    0   13   16
C    4    7    0    9
D    5   12    2    0

Brute Force: generate all possible permutations of the cities and keep the
cheapest route:
ABCD
ABDC
ADBC
...
DABC
All Possible Permutations
Traveling Salesman Problem
Dynamic Programming Approach O(n^2 * 2^n)

     A    B    C    D
A    0   16   11    6
B    8    0   13   16
C    4    7    0    9
D    5   12    2    0

Recursion tree rooted at A (figure): from A, branch to B, C, and D; from each,
branch to the remaining cities, down to the base cases Dis(B, A) = 8,
Dis(C, A) = 4, and Dis(D, A) = 5. Combining the sub-results bottom-up:

g(C,{D}) = 9 + 5 = 14      g(D,{C}) = 2 + 4 = 6
g(B,{D}) = 16 + 5 = 21     g(D,{B}) = 12 + 8 = 20
g(B,{C}) = 13 + 4 = 17     g(C,{B}) = 7 + 8 = 15
g(B,{C,D}) = min(13 + 14, 16 + 6) = 22
g(C,{B,D}) = min(7 + 21, 9 + 20) = 28
g(D,{B,C}) = min(12 + 17, 2 + 15) = 17
g(A,{B,C,D}) = min(16 + 22, 11 + 28, 6 + 17) = 23
Traveling Salesman Problem
Dynamic Programming Approach O(n^2 * 2^n)

g(i, S) = min over j in S of ( w(i, j) + g(j, S − {j}) )

     A    B    C    D
A    0   16   11    6
B    8    0   13   16
C    4    7    0    9
D    5   12    2    0

g(C, {D}) = w(C, D) + g(D, {}) = 9 + 5 = 14
g(D, {C}) = w(D, C) + g(C, {}) = 2 + 4 = 6
g(B, {D}) = w(B, D) + g(D, {}) = 16 + 5 = 21
g(D, {B}) = w(D, B) + g(B, {}) = 12 + 8 = 20
g(B, {C}) = w(B, C) + g(C, {}) = 13 + 4 = 17
g(C, {B}) = w(C, B) + g(B, {}) = 7 + 8 = 15

g(B, {C, D}) = min(w(B, C) + g(C, {D}), w(B, D) + g(D, {C})) = min(27, 22) = 22
g(C, {B, D}) = min(w(C, B) + g(B, {D}), w(C, D) + g(D, {B})) = min(28, 29) = 28
g(D, {B, C}) = min(w(D, B) + g(B, {C}), w(D, C) + g(C, {B})) = min(29, 17) = 17

g(A, {B, C, D}) = min(w(A, B) + g(B, {C, D}), w(A, C) + g(C, {B, D}),
                      w(A, D) + g(D, {B, C})) = min(38, 39, 23) = 23

Optimal tour: A → D → C → B → A
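The recurrence g(i, S) above is the Held-Karp algorithm. A compact sketch using frozensets as the subset key, run on the slide's distance matrix (here g[S, j] is the cost of starting at A, visiting every city in S, and ending at j):

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic programming for TSP: O(n^2 * 2^n) time."""
    n = len(dist)
    # base cases: go straight from city 0 to city j
    g = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            Sf = frozenset(S)
            for j in S:
                # best way to visit Sf ending at j: come from some k in Sf - {j}
                g[(Sf, j)] = min(g[(Sf - {j}, k)] + dist[k][j]
                                 for k in S if k != j)
    full = frozenset(range(1, n))
    return min(g[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 16, 11, 6],
        [8, 0, 13, 16],
        [4, 7, 0, 9],
        [5, 12, 2, 0]]
print(held_karp(dist))  # 23
```

The result 23 matches g(A, {B, C, D}) computed on the slide.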
CSC510-04 Analysis of Algorithms
Greedy Algorithms
Jose Ortiz
jortizco@sfsu.edu

Copyright Jose Ortiz


Overview
• Greedy algorithms
• Minimum spanning trees
• Prim’s Algorithm
• Kruskal’s Algorithm
• Dijkstra’s Algorithm
• Greedy vs. Dynamic Programming
• Floyd’s Algorithm
• Bellman-Ford Algorithm
• Knapsack problem
• 0-1, Fractional
2 Copyright Jose Ortiz
What are greedy algorithms?
• Greedy algorithms are similar to dynamic programming ones: they build an optimal solution of a
problem based on the optimal solutions of smaller subproblems of that problem. However,
with a greedy algorithm we make a local choice of the subproblem that will lead to an
optimal answer.

• For instance, in dynamic programming, when we are solving the "sum of subsets" problem, we
need to find all dependent subproblems (local decision problem, T or F). Finally, we need to go
back again to see which items can be included in the subset based on those local decisions.

• On the other hand, a greedy algorithm would make a choice (whether to include the item or not)
based on the optimal local solution to the subproblem (T or F decision). So, it won't go back
again.

• In summary, a greedy algorithm never reconsiders its choices, whereas dynamic programming
may reconsider one or more of the previous states. Greedy algorithms are more efficient than
dynamic programming ones. Why?

3 Copyright Jose Ortiz


Minimum Spanning Trees
Greedy Prim’s Algorithm

4 Copyright Jose Ortiz


Minimum Spanning Trees
Prim's algorithm
Consider the following undirected graph. Find the minimum spanning tree using
Prim's algorithm.

Graph (from the figure): edges A–B 11, A–D 13, A–F 7, B–D 9, B–F 15,
D–F 3, D–E 2, E–F 6

Prim's Greedy Algorithm
Pick vertex A. Then include the vertex connected to A by the edge with the
minimum distance: A–F (7).

MST so far: A–F (7)

5 Copyright Jose Ortiz


Minimum Spanning Trees
Prim's algorithm
Consider the following undirected graph. Find the minimum spanning tree using
Prim's algorithm.

Prim's Greedy Algorithm
Repeat the same process with all the vertices connected to the tree built so far,
adding the vertex reachable by the minimum edge distance: F–D (3).

MST so far: A–F (7), F–D (3)

6 Copyright Jose Ortiz


Minimum Spanning Trees
Prim's algorithm
Consider the following undirected graph. Find the minimum spanning tree using
Prim's algorithm.

Prim's Greedy Algorithm
Vertex E is added next via D–E (2). After vertex E is added, we consider vertex F
again via E–F (6). However, that edge would create a cycle. Therefore, the edge
connecting vertices E and F is rejected.

MST so far: A–F (7), F–D (3), D–E (2)

7 Copyright Jose Ortiz


Minimum Spanning Trees
Prim's algorithm
Consider the following undirected graph. Find the minimum spanning tree using
Prim's algorithm.

Prim's Greedy Algorithm
After the edge of weight 6 is rejected, we go back to D again. Vertex B is added
via D–B (9), since it has the next minimum distance.

MST: A–F (7), F–D (3), D–E (2), D–B (9)
The total weight of the minimum spanning tree is 21.

8 Copyright Jose Ortiz


Minimum Spanning Trees
Greedy Kruskal Algorithm

9 Copyright Jose Ortiz


Minimum Spanning Trees
Kruskal's algorithm
Consider the following undirected graph. Find the minimum spanning tree using
Kruskal's algorithm.

Kruskal's Greedy Algorithm
This algorithm considers non-connected components, always picking the remaining
edge with minimum distance first: D–E (2).

MST so far: D–E (2)

10 Copyright Jose Ortiz


Minimum Spanning Trees
Kruskal's algorithm
Consider the following undirected graph. Find the minimum spanning tree using
Kruskal's algorithm.

Kruskal's Greedy Algorithm
The next minimum edge is added: D–F (3).

MST so far: D–E (2), D–F (3)

11 Copyright Jose Ortiz


Minimum Spanning Trees
Kruskal's algorithm
Consider the following undirected graph. Find the minimum spanning tree using
Kruskal's algorithm.

Kruskal's Greedy Algorithm
The edge E–F (6) is skipped because it would create a cycle; the next minimum
edge A–F (7) is added instead.

MST so far: D–E (2), D–F (3), A–F (7)

12 Copyright Jose Ortiz


Minimum Spanning Trees
Kruskal's algorithm
Consider the following undirected graph. Find the minimum spanning tree using
Kruskal's algorithm.

Kruskal's Greedy Algorithm
The next minimum edge is added: D–B (9).

MST: D–E (2), D–F (3), A–F (7), D–B (9); total weight 21

13 Copyright Jose Ortiz


Time Complexity Analysis
Prim's and Kruskal's Algorithms
• MST-Prim's Algorithm

• This algorithm has time complexity Θ(|V|^2), where |V| is the number of vertices

• With a Fibonacci heap, it can be improved to Θ(E + |V| log |V|), where E is the
number of edges

• MST-Kruskal's Algorithm

• This algorithm has time complexity Θ(|V| |E|), or Θ(n^2)

• With the help of a Min-Heap data structure, we can improve the time complexity of this
algorithm to Θ(|E| log |V|)

• In general, for large dense graphs, Prim's algorithm runs faster than Kruskal's

14 Copyright Jose Ortiz
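A heap-based ("lazy") Prim sketch; the adjacency list below is the edge list as reconstructed from the slide figures, so treat the endpoints and weights as an assumption:

```python
import heapq

def prim_mst_weight(adj, start):
    """Lazy Prim's algorithm with a binary heap: O(E log V)."""
    visited = {start}
    heap = [(w, v) for v, w in adj[start]]
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(adj):
        w, v = heapq.heappop(heap)
        if v in visited:
            continue                 # this edge would create a cycle: reject it
        visited.add(v)
        total += w
        for u, wu in adj[v]:
            if u not in visited:
                heapq.heappush(heap, (wu, u))
    return total

adj = {
    'A': [('B', 11), ('D', 13), ('F', 7)],
    'B': [('A', 11), ('D', 9), ('F', 15)],
    'D': [('A', 13), ('B', 9), ('E', 2), ('F', 3)],
    'E': [('D', 2), ('F', 6)],
    'F': [('A', 7), ('B', 15), ('D', 3), ('E', 6)],
}
print(prim_mst_weight(adj, 'A'))  # 21
```

The cycle rejection is exactly the "skip an already-visited vertex" check; on this graph the total weight 21 matches the worked example.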


Greedy Dijkstra’s Algorithm

15 Copyright Jose Ortiz


Shortest Path in a Graph Dijkstra's algorithm
Consider the following undirected graph. Find the shortest path from A → {B, D, F, E}
using Dijkstra's algorithm.

Graph (from the figure): edges A–B 11, A–D 13, A–F 7, B–D 9, B–F 15,
D–F 3, D–E 2, E–F 6

Dijkstra's Greedy Algorithm
Create a table from the graph, starting with vertex A:

vi          d(vi, B)   d(vi, D)   d(vi, E)   d(vi, F)
{A}         A,11       A,13       ∞          A,7

Shortest Path: A

For example, vi = {vertices considered so far (A)} and
d(vi, B) = {best known distance to vertex B, with its predecessor}
16 Copyright Jose Ortiz
Shortest Path in a Graph Dijkstra's algorithm
Dijkstra's Greedy Algorithm
Vertex F is included because d(A, F) = 7 is the minimum distance.

vi          d(vi, B)   d(vi, D)   d(vi, E)   d(vi, F)
{A}         A,11       A,13       ∞          A,7
{A, F}                                       A,7

Shortest Path: A -7-> F

17 Copyright Jose Ortiz


Shortest Path in a Graph Dijkstra's algorithm
Dijkstra's Greedy Algorithm
New predecessors p(...) and distances d(...) are computed for the path {A, F}:

vi          d(vi, B)   d(vi, D)   d(vi, E)   d(vi, F)
{A}         A,11       A,13       ∞          A,7
{A, F}      A,11       F,10       F,13       A,7

Shortest Path: A -7-> F

18 Copyright Jose Ortiz


Shortest Path in a Graph Dijkstra's algorithm
Dijkstra's Greedy Algorithm
Vertex D is included because d(A, F, D) = 10 is the minimum distance.

vi          d(vi, B)   d(vi, D)   d(vi, E)   d(vi, F)
{A}         A,11       A,13       ∞          A,7
{A, F}      A,11       F,10       F,13       A,7
{A, F, D}              F,10                  A,7

Shortest Path: A -7-> F -3-> D

19 Copyright Jose Ortiz


Shortest Path in a Graph Dijkstra's algorithm
Dijkstra's Greedy Algorithm
New predecessors p(...) and distances d(...) are computed for the path {A, F, D}:

vi          d(vi, B)   d(vi, D)   d(vi, E)   d(vi, F)
{A}         A,11       A,13       ∞          A,7
{A, F}      A,11       F,10       F,13       A,7
{A, F, D}   A,11       F,10       D,12       A,7

Shortest Path: A -7-> F -3-> D

20 Copyright Jose Ortiz


Shortest Path in a Graph Dijkstra's algorithm
Dijkstra's Greedy Algorithm
Vertex B is included because d(A, B) = 11 is the minimum remaining distance.

vi                  d(vi, B)   d(vi, D)   d(vi, E)   d(vi, F)
{A}                 A,11       A,13       ∞          A,7
{A, F}              A,11       F,10       F,13       A,7
{A, F, D}           A,11       F,10       D,12       A,7
{A, B} {A, F, D}    A,11       F,10       D,12       A,7

Shortest Path: A -7-> F -3-> D, plus A -11-> B

21 Copyright Jose Ortiz


Shortest Path in a Graph Dijkstra's algorithm
Dijkstra's Greedy Algorithm
Finally, the remaining vertex (E) is included.

vi                     d(vi, B)   d(vi, D)   d(vi, E)   d(vi, F)
{A}                    A,11       A,13       ∞          A,7
{A, F}                 A,11       F,10       F,13       A,7
{A, F, D}              A,11       F,10       D,12       A,7
{A, B} {A, F, D}       A,11       F,10       D,12       A,7
{A, B} {A, F, D, E}    A,11       F,10       D,12       A,7

Shortest Path: A -7-> F -3-> D -2-> E, plus A -11-> B

22 Copyright Jose Ortiz


Shortest Path in a Graph Dijkstra's algorithm

Original Graph (from the figure): edges A–B 11, A–D 13, A–F 7, B–D 9, B–F 15,
D–F 3, D–E 2, E–F 6

Shortest Path tree: A -7-> F -3-> D -2-> E, plus A -11-> B

{A} → {F} = 7
{A} → {B} = 11
{A} → {D} = 10
{A} → {E} = 12

23 Copyright Jose Ortiz


Shortest Path in a Graph Dijkstra’s algorithm
Class Exercise
Consider the following undirected graph. Find the shortest path from {u} → {v, x, w, y, z}

24 Copyright Jose Ortiz


Time Complexity Analysis
Dijkstra's Algorithm

• Linear Array or Matrix: O(V^2)

• Binary Heap: O((V + E) log V)

• Fibonacci Heap: O(E + V log V)

Image Source: https://www.quora.com 25 Copyright Jose Ortiz
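A binary-heap Dijkstra sketch, run on the lecture's example graph (the adjacency list is reconstructed from the slide figures, so treat it as an assumption):

```python
import heapq

def dijkstra(adj, source):
    """Binary-heap Dijkstra: O((V + E) log V). Requires non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float('inf')):
            continue                          # stale heap entry: skip it
        for u, w in adj[v]:
            nd = d + w
            if nd < dist.get(u, float('inf')):
                dist[u] = nd                  # relax edge v -> u
                heapq.heappush(heap, (nd, u))
    return dist

adj = {
    'A': [('B', 11), ('D', 13), ('F', 7)],
    'B': [('A', 11), ('D', 9), ('F', 15)],
    'D': [('A', 13), ('B', 9), ('E', 2), ('F', 3)],
    'E': [('D', 2), ('F', 6)],
    'F': [('A', 7), ('B', 15), ('D', 3), ('E', 6)],
}
print(dijkstra(adj, 'A'))  # A→B 11, A→D 10, A→E 12, A→F 7
```

The computed distances match the final table in the worked example: B = 11, D = 10, E = 12, F = 7.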


Final Exam Review
• Given the graph on the right (figure: vertices V1–V5 with edge weights
12, 9, 11, 16, 24, 5, 1, 6):

• Find the minimum spanning tree using Prim's algorithm
(beginning with vertex V1)

• Find the minimum spanning tree using Kruskal's algorithm

• Find the shortest paths using Dijkstra's algorithm from
{V1} → {V2, V3, V4, V5}
26 Copyright Jose Ortiz
Greedy vs Dynamic Programming
Floyd’s Algorithm

27 Copyright Jose Ortiz


Floyd's Algorithm All Pairs Shortest Path
• Dijkstra's algorithm is a greedy algorithm that can only find the shortest paths in a graph
from a single source node. If we need the shortest paths from every source node, we must
run the algorithm once per node; with a linear-array or matrix implementation, the time
complexity increases to O(V * V^2) = O(V^3), with a big overhead.

• Floyd's algorithm is a dynamic programming algorithm and can compute all pairs of
shortest paths in O(V^3). It provides several advantages over Dijkstra's algorithm:

• Better suited for distributed systems

• Works for negative edge weights (Dijkstra's algorithm only handles positive ones)
• Less overhead than running Dijkstra's algorithm from every source
28 Copyright Jose Ortiz
Floyd's Algorithm All Pairs Shortest Path
Consider the following graph. Find all pairs shortest paths.

(Graph figure: 5 vertices with directed, weighted edges as encoded in the matrix below.)

Create a matrix A^0 representing the graph:

A^0   1   2   3   4   5
1     0   6   5   ∞   2
2     ∞   0   1   ∞   ∞
3     7   4   0   ∞   ∞
4     ∞   ∞   9   0   8
5     ∞   ∞   ∞   3   0

29 Copyright Jose Ortiz


Floyd's Algorithm All Pairs Shortest Path
Create matrix A^1, keeping only column [1], row [1], and the diagonal from A^0:

A^0   1   2   3   4   5        A^1   1   2   3   4   5
1     0   6   5   ∞   2        1     0   6   5   ∞   2
2     ∞   0   1   ∞   ∞        2     ∞   0
3     7   4   0   ∞   ∞        3     7       0
4     ∞   ∞   9   0   8        4     ∞           0
5     ∞   ∞   ∞   3   0        5     ∞               0

30 Copyright Jose Ortiz


Floyd's Algorithm All Pairs Shortest Path
Compute A^k[i][j] = min(A^(k−1)[i][j], A^(k−1)[i][k] + A^(k−1)[k][j])

A^1[2][3] = min(A^0[2][3], A^0[2][1] + A^0[1][3]) = 1
A^1[2][4] = min(A^0[2][4], A^0[2][1] + A^0[1][4]) = ∞
A^1[2][5] = min(A^0[2][5], A^0[2][1] + A^0[1][5]) = ∞
A^1[3][2] = min(A^0[3][2], A^0[3][1] + A^0[1][2]) = 4
A^1[3][4] = min(A^0[3][4], A^0[3][1] + A^0[1][4]) = ∞
A^1[3][5] = min(A^0[3][5], A^0[3][1] + A^0[1][5]) = 9
.....
A^1[5][4] = min(A^0[5][4], A^0[5][1] + A^0[1][4]) = 3

A^1   1   2   3   4   5
1     0   6   5   ∞   2
2     ∞   0   1   ∞   ∞
3     7   4   0   ∞   9
4     ∞   ∞   9   0   8
5     ∞   ∞   ∞   3   0

31 Copyright Jose Ortiz


Floyd's Algorithm All Pairs Shortest Path
Create matrix A^2, keeping only column [2], row [2], and the diagonal from A^1:

A^1   1   2   3   4   5        A^2   1   2   3   4   5
1     0   6   5   ∞   2        1     0   6
2     ∞   0   1   ∞   ∞        2     ∞   0   1   ∞   ∞
3     7   4   0   ∞   9        3         4   0
4     ∞   ∞   9   0   8        4         ∞       0
5     ∞   ∞   ∞   3   0        5         ∞           0

32 Copyright Jose Ortiz


Floyd's Algorithm All Pairs Shortest Path
Compute A^k[i][j] = min(A^(k−1)[i][j], A^(k−1)[i][k] + A^(k−1)[k][j])

After the computations, A^2 = A^1 (going through vertex 2 improves no path):

A^2   1   2   3   4   5
1     0   6   5   ∞   2
2     ∞   0   1   ∞   ∞
3     7   4   0   ∞   9
4     ∞   ∞   9   0   8
5     ∞   ∞   ∞   3   0

33 Copyright Jose Ortiz


Floyd's Algorithm All Pairs Shortest Path
Create matrix A^3, keeping only column [3], row [3], and the diagonal from A^2:

A^2   1   2   3   4   5        A^3   1   2   3   4   5
1     0   6   5   ∞   2        1     0       5
2     ∞   0   1   ∞   ∞        2         0   1
3     7   4   0   ∞   9        3     7   4   0   ∞   9
4     ∞   ∞   9   0   8        4             9   0
5     ∞   ∞   ∞   3   0        5             ∞       0

34 Copyright Jose Ortiz


Floyd's Algorithm All Pairs Shortest Path
Compute A^k[i][j] = min(A^(k−1)[i][j], A^(k−1)[i][k] + A^(k−1)[k][j])

A^3[1][2] = min(A^2[1][2], A^2[1][3] + A^2[3][2]) = 6
A^3[1][4] = min(A^2[1][4], A^2[1][3] + A^2[3][4]) = ∞
A^3[1][5] = min(A^2[1][5], A^2[1][3] + A^2[3][5]) = 2
A^3[2][1] = min(A^2[2][1], A^2[2][3] + A^2[3][1]) = 8
A^3[2][4] = min(A^2[2][4], A^2[2][3] + A^2[3][4]) = ∞
A^3[2][5] = min(A^2[2][5], A^2[2][3] + A^2[3][5]) = 10
A^3[4][1] = min(A^2[4][1], A^2[4][3] + A^2[3][1]) = 16
A^3[4][2] = min(A^2[4][2], A^2[4][3] + A^2[3][2]) = 13
.....

A^3   1   2   3   4   5
1     0   6   5   ∞   2
2     8   0   1   ∞  10
3     7   4   0   ∞   9
4    16  13   9   0   8
5     ∞   ∞   ∞   3   0

35 Copyright Jose Ortiz
Floyd's Algorithm All Pairs Shortest Path
Create matrix A^4, keeping only column [4], row [4], and the diagonal from A^3:

A^3   1   2   3   4   5        A^4   1   2   3   4   5
1     0   6   5   ∞   2        1     0           ∞
2     8   0   1   ∞  10        2         0       ∞
3     7   4   0   ∞   9        3             0   ∞
4    16  13   9   0   8        4    16  13   9   0   8
5     ∞   ∞   ∞   3   0        5                 3   0

36 Copyright Jose Ortiz


Floyd's Algorithm All Pairs Shortest Path
Compute A^k[i][j] = min(A^(k−1)[i][j], A^(k−1)[i][k] + A^(k−1)[k][j])

.....
A^4[5][1] = min(A^3[5][1], A^3[5][4] + A^3[4][1]) = 19
A^4[5][2] = min(A^3[5][2], A^3[5][4] + A^3[4][2]) = 16
A^4[5][3] = min(A^3[5][3], A^3[5][4] + A^3[4][3]) = 12
.....

A^4   1   2   3   4   5
1     0   6   5   ∞   2
2     8   0   1   ∞  10
3     7   4   0   ∞   9
4    16  13   9   0   8
5    19  16  12   3   0

37 Copyright Jose Ortiz


Floyd's Algorithm All Pairs Shortest Path
Create matrix A^5, keeping only column [5], row [5], and the diagonal from A^4:

A^4   1   2   3   4   5        A^5   1   2   3   4   5
1     0   6   5   ∞   2        1     0               2
2     8   0   1   ∞  10        2         0          10
3     7   4   0   ∞   9        3             0       9
4    16  13   9   0   8        4                 0   8
5    19  16  12   3   0        5    19  16  12   3   0

38 Copyright Jose Ortiz


Floyd’s Algorithm All Pairs Shortest Path
Consider the following graph. Find all pairs shortest path

Compute A^k[i][j] = min(A^(k−1)[i][j], A^(k−1)[i][k] + A^(k−1)[k][j])

A^4                              A^5 (k = 5, final)
     1   2   3   4   5                1   2   3   4   5
1    0   6   5   ∞   2           1    0   6   5   5   2
2    8   0   1   ∞  10           2    8   0   1  13  10
3    7   4   0   ∞   9           3    7   4   0  12   9
4   16  13   9   0   8           4   16  13   9   0   8
5   19  16  12   3   0           5   19  16  12   3   0

.....
A^5[1][4] = min(A^4[1][4], A^4[1][5] + A^4[5][4]) = min(∞, 2 + 3) = 5
A^5[2][4] = min(A^4[2][4], A^4[2][5] + A^4[5][4]) = min(∞, 10 + 3) = 13
A^5[3][4] = min(A^4[3][4], A^4[3][5] + A^4[5][4]) = min(∞, 9 + 3) = 12
.....

39 Copyright Jose Ortiz


Floyd’s Algorithm
All Pairs Shortest Path Pseudocode

// Initialize matrix A^0 from the adjacency matrix of the original graph.
A = [V][V]

// Compute all n iterations: relax every pair (i, j) through vertex k.
for (k = 1; k <= n; k++) {
    for (i = 1; i <= n; i++) {
        for (j = 1; j <= n; j++) {
            A[i][j] = min(A[i][j], A[i][k] + A[k][j])
        }
    }
}

40 Copyright Jose Ortiz
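The pseudocode above translates almost line-for-line into a runnable version. A sketch in Python (names are mine; it starts from the A^2 matrix shown on the earlier slides, which converges to the same final answer A^5):

```python
INF = float("inf")

def floyd(A):
    """All-pairs shortest paths: relax every (i, j) through every k."""
    n = len(A)
    D = [row[:] for row in A]        # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

# A^2 from the earlier slides (0-indexed vertices).
A2 = [
    [0,   6,   5,   INF, 2],
    [INF, 0,   1,   INF, INF],
    [7,   4,   0,   INF, 9],
    [INF, INF, 9,   0,   8],
    [INF, INF, INF, 3,   0],
]

D = floyd(A2)
print(D[0])  # [0, 6, 5, 5, 2], row 1 of A^5 on the final slide
```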


Greedy vs Dynamic Programming
Bellman-Ford Algorithm

41 Copyright Jose Ortiz


Bellman-Ford Algorithm All Pairs Shortest Path

• Bellman-Ford is a dynamic programming algorithm

• Like Floyd’s algorithm, Bellman-Ford can handle negative edge weights, and it can
  detect whether a negative cycle (and hence an unbounded shortest path) exists

• Good for distributed systems

• Better time complexity than Floyd’s algorithm when only a single source is needed

• Time complexity: O(VE) per source; V = vertices, E = edges

• Applicability:
  • Both Dijkstra’s and Bellman-Ford algorithms are used in computer networks to
    implement the link-state and distance-vector routing protocols in routers.

42 Copyright Jose Ortiz


Bellman-Ford Algorithm All Pairs Shortest Path

Image Source: Instructor’s Computer Networks Slides 43 Copyright Jose Ortiz
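A minimal single-source Bellman-Ford sketch with negative-cycle detection (function names and the edge representation are mine; the dense edge list is derived from the finite entries of the A^2 matrix in the Floyd example, which preserves that graph’s shortest distances):

```python
INF = float("inf")

def bellman_ford(n, edges, src):
    """Single-source shortest paths in O(V * E).
    edges: list of (u, v, weight). Returns the distance list,
    or None if a negative cycle is reachable from src."""
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):               # relax every edge V - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # one extra pass: any further
        if dist[u] + w < dist[v]:        # improvement means a negative cycle
            return None
    return dist

# Edges taken from the finite off-diagonal entries of A^2 (0-indexed).
A2 = [
    [0,   6,   5,   INF, 2],
    [INF, 0,   1,   INF, INF],
    [7,   4,   0,   INF, 9],
    [INF, INF, 9,   0,   8],
    [INF, INF, INF, 3,   0],
]
edges = [(i, j, A2[i][j]) for i in range(5) for j in range(5)
         if i != j and A2[i][j] != INF]

print(bellman_ford(5, edges, 0))  # [0, 6, 5, 5, 2], matching row 1 of A^5
```

Running it once per vertex yields the all-pairs answer, at an overall cost of O(V^2 E).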


Greedy vs Dynamic Programming
Knapsack Problem

44 Copyright Jose Ortiz


Knapsack Problem
Given a list of items item = {1,2,3,4}, their weights w = {3,4,2,5},
and their profits p = {3,8,1,15}, determine which items to include
in a knapsack so that the total weight is at most w = 11 and the
total profit is as large as possible (maximized profit).

Max
Capacity
w = 11 kg

45 Copyright Jose Ortiz


Knapsack Problem 0/1 Dynamic Programming
item = {1,2,3,4}; w = {3,4,2,5}; p = {3,8,1,15}; w = 11

w→
p[i] w[i] item[i] 0 1 2 3 4 5 6 7 8 9 10 11

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

3 3 1

8 4 2

1 2 3

15 5 4

How many items can we add to the knapsack with w = 0? Zero


46 Copyright Jose Ortiz
Knapsack Problem 0/1 Dynamic Programming
item = {1,2,3,4}; w = {3,4,2,5}; p = {3,8,1,15}; w = 11

w→
p[i] w[i] item[i] 0 1 2 3 4 5 6 7 8 9 10 11

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

3 3 1 0 0 0 3 3 3 3 3 3 3 3 3

8 4 2

1 2 3

15 5 4

How many items can we add to the knapsack with w = 3? One, so M[1][3] = p[1] = 3 is added
47 Copyright Jose Ortiz
Knapsack Problem 0/1 Dynamic Programming
item = {1,2,3,4}; w = {3,4,2,5}; p = {3,8,1,15}; w = 11

w→
p[i] w[i] item[i] 0 1 2 3 4 5 6 7 8 9 10 11

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

3 3 1 0 0 0 3 3 3 3 3 3 3 3 3

8 4 2 0 0 0 3 8 8 8 11 11 11 11 11

1 2 3

15 5 4

How many items can we add to the knapsack with w = 4? One, so M[2][4] = p[2] = 8 is added
How many items can we add to the knapsack with w = 7? Item 2 alone gives p[2] = 8
However, item 1 must also be considered: since M[i − 1][w] < M[i − 1][w − w[i]] + p[i], we set M[2][7] = 8 + 3 = 11
48 Copyright Jose Ortiz
Knapsack Problem 0/1 Dynamic Programming
item = {1,2,3,4}; w = {3,4,2,5}; p = {3,8,1,15}; w = 11

w→
p[i] w[i] item[i] 0 1 2 3 4 5 6 7 8 9 10 11

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

3 3 1 0 0 0 3 3 3 3 3 3 3 3 3

8 4 2 0 0 0 3 8 8 8 11 11 11 11 11

1 2 3 0 0 1 3 8 8 9 11 11 12 12 12

15 5 4

As we continue repeating the same steps for all the items :


M[i][w] = max(M[i − 1][w], M[i − 1][w − wi] + p[i])
49 Copyright Jose Ortiz
Knapsack Problem 0/1 Dynamic Programming
item = {1,2,3,4}; w = {3,4,2,5}; p = {3,8,1,15}; w = 11

w→
p[i] w[i] item[i] 0 1 2 3 4 5 6 7 8 9 10 11

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

3 3 1 0 0 0 3 3 3 3 3 3 3 3 3

8 4 2 0 0 0 3 8 8 8 11 11 11 11 11

1 2 3 0 0 1 3 8 8 9 11 11 12 12 12

15 5 4 0 0 1 3 8 15 15 16 18 23 23 24

M[i][w] = max(M[i − 1][w], M[i − 1][w − wi] + p[i])


For example: M[4][11] = max(M[3][11], M[3][6] + p[4]) = max(12, 9 + 15) = 24
50 Copyright Jose Ortiz
Knapsack Problem 0/1 Dynamic Programming
item = {1,2,3,4}; w = {3,4,2,5}; p = {3,8,1,15}; w = 11
w→
p[i] w[i] item[i] 0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 3 1 0 0 0 3 3 3 3 3 3 3 3 3
8 4 2 0 0 0 3 8 8 8 11 11 11 11 11
1 2 3 0 0 1 3 8 8 9 11 11 12 12 12
15 5 4 0 0 1 3 8 15 15 16 18 23 23 24

Which items are included?

item 4 is included since M[4][11] = 24 ≠ M[3][11] = 12; remaining profit = 24 − 15 = 9
item 3 is included since the remaining profit 9 appears in item 3’s row (M[3][6] = 9 ≠ M[2][6] = 8); remaining profit = 9 − 1 = 8
item 2 is included since the remaining profit 8 appears in item 2’s row (M[2][4] = 8 ≠ M[1][4] = 3)
item 1 is NOT included since the weights of items 2, 3, and 4 already fill the knapsack: 4 + 2 + 5 = 11
Max Profit = p[2] + p[3] + p[4] = 8 + 1 + 15 = 24 Copyright Jose Ortiz
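The table-filling recurrence and the traceback above can be sketched as a short program (a minimal version; function and variable names are mine):

```python
def knapsack_01(weights, profits, capacity):
    """Builds the DP table M[i][w] from the slides' recurrence
    M[i][w] = max(M[i-1][w], M[i-1][w - w_i] + p_i).
    Returns (max profit, set of chosen 1-based item numbers)."""
    n = len(weights)
    M = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            M[i][w] = M[i - 1][w]                      # skip item i
            if weights[i - 1] <= w:                    # or take it
                M[i][w] = max(M[i][w],
                              M[i - 1][w - weights[i - 1]] + profits[i - 1])
    # Trace back which items were included.
    chosen, w = set(), capacity
    for i in range(n, 0, -1):
        if M[i][w] != M[i - 1][w]:
            chosen.add(i)
            w -= weights[i - 1]
    return M[n][capacity], chosen

best, items = knapsack_01([3, 4, 2, 5], [3, 8, 1, 15], 11)
print(best, sorted(items))  # 24 [2, 3, 4], matching the slide
```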
Knapsack Problem Fractional Greedy Algorithm
item = {1,2,3,4}; w = {3,4,2,5}; p = {3,8,1,15}; w = 11

Greedy approach: include the item with the largest ratio p_i / w_i first

p_i / w_i = {1, 2, 0.5, 3}

Item 4 is included and w = w − w_4 = 11 − 5 = 6
Item 2 is included and w = w − w_2 = 6 − 4 = 2

Dilemma: since there is still w = 2 capacity left, the logical step would be
to add item 3. However, since dividing items is allowed, we can instead take
2/3 of item 1 (ratio 1 > 0.5), which gives us a better profit.
Therefore, 2/3 of item 1 is included.

MaxWeight = Σ x_i w_i (i = 1..n) = 3(2/3) + 4 + 0 + 5 = 11
MaxProfit = Σ x_i p_i (i = 1..n) = 3(2/3) + 8 + 0 + 15 = 25

52 Copyright Jose Ortiz
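The greedy steps above can be sketched as follows (a minimal version; names are mine, and the last item taken may be a fraction):

```python
def fractional_knapsack(weights, profits, capacity):
    """Greedy fractional knapsack: take items in decreasing
    profit/weight ratio; split only the last item if needed."""
    order = sorted(range(len(weights)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total = 0.0
    for i in order:
        if capacity <= 0:
            break
        take = min(weights[i], capacity)       # whole item, or the rest
        total += profits[i] * (take / weights[i])
        capacity -= take
    return total

best = fractional_knapsack([3, 4, 2, 5], [3, 8, 1, 15], 11)
print(round(best, 6))  # 25.0, matching the slide
```

Unlike the 0/1 variant, this greedy choice is provably optimal, which is why no DP table is needed here.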
