Ada Module 4
ALGORITHMS
UNIT-IV
CHAPTER 8:
DYNAMIC PROGRAMMING
OUTLINE
Dynamic Programming
Computing a Binomial Coefficient
Warshall’s Algorithm for computing the
transitive closure of a digraph
Floyd’s Algorithm for the All-Pairs
Shortest-Paths Problem
The Knapsack Problem
Introduction
Dynamic programming is an algorithm design technique that was invented
by a prominent U.S. mathematician, Richard Bellman, in the 1950s. It is
used for problems with overlapping subproblems: rather than solving the
same subproblem again and again, each subproblem is solved once and its
result is recorded in a table.
Algorithm fib(n)
    if n = 0 or n = 1 return n
    return fib(n − 1) + fib(n − 2)

This straightforward recursion solves the same subproblems repeatedly:
for example, fib(5) calls fib(4) and fib(3), fib(4) calls fib(3) again,
and the base values fib(1) and fib(0) are recomputed many times over.
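As a sketch (Python is not part of the original slides), the same function computed bottom-up, so each value is found exactly once:

```python
def fib(n):
    """Bottom-up dynamic programming: F(0), F(1), ..., F(n), each computed once."""
    if n < 2:
        return n
    prev, curr = 0, 1                      # F(0), F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr     # slide the window up by one
    return curr
```

This performs Θ(n) additions, versus the exponential number performed by the plain recursion.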
To compute C(n, k), we fill the table row by row, starting with
row 0 and ending with row n.
ALGORITHM Binomial(n, k)
//Computes C(n, k) by the dynamic programming algorithm
//Input: A pair of nonnegative integers n ≥ k ≥ 0
//Output: The value of C(n, k)
for i ← 0 to n do
for j ← 0 to min(i, k) do
if j = 0 or j = i
C[i, j] ← 1
else C[i, j] ← C[i - 1, j - 1] + C[i - 1, j]
return C[n, k]
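A direct Python rendering of the pseudocode above (a sketch, not part of the slides):

```python
def binomial(n, k):
    """Computes C(n, k) by dynamic programming, filling the table row by row."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                          # C(i, 0) = C(i, i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```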
Time Efficiency of Binomial Coefficient Algorithm
The basic operation for this algorithm is addition.
Because the first k + 1 rows of the table form a triangle while the
remaining n – k rows form a rectangle, we have to split the sum
expressing A(n, k) into two parts:
A(n, k) = Σ_{i=1}^{k} Σ_{j=1}^{i-1} 1 + Σ_{i=k+1}^{n} Σ_{j=1}^{k} 1
        = Σ_{i=1}^{k} (i − 1) + Σ_{i=k+1}^{n} k
        = (k − 1)k/2 + k(n − k) ∈ Θ(nk).
WARSHALL’S ALGORITHM
(a) Digraph: vertices a, b, c, d with edges a→b, b→d, d→a, d→c.

(b) Its adjacency matrix A:
         a b c d
     a   0 1 0 0
     b   0 0 0 1
     c   0 0 0 0
     d   1 0 1 0

(c) Its transitive closure T:
         a b c d
     a   1 1 1 1
     b   1 1 1 1
     c   0 0 0 0
     d   1 1 1 1
WARSHALL’S ALGORITHM
We can generate the transitive closure of a digraph with the
help of depth-first search or breadth-first search.
The series starts with R(0) , which does not allow any
intermediate vertices in its path; hence, R(0) is nothing else but
the adjacency matrix of the graph.
R(1) contains the information about paths that can use the first
vertex as intermediate; so, it may contain more ones than R(0) .
The last matrix in the series, R(n) , reflects paths that can use all n
vertices of the digraph as intermediate and hence is nothing else
but the digraph’s transitive closure.
WARSHALL’S ALGORITHM
We have the following formula for generating the elements of
matrix R(k) from the elements of matrix R(k-1):

    rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1))        (3)

Figure: Rule for changing zeros in Warshall's algorithm: rij becomes 1
in R(k) if rik = 1 and rkj = 1 in R(k-1), i.e., if row k and column k
of R(k-1) certify a path from i to j through vertex k.
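Formula (3) translates directly into code. A Python sketch (vertices indexed 0..n−1 rather than named a..d; the 0/1 matrix format is an assumption):

```python
def warshall(A):
    """Transitive closure by Warshall's algorithm. A is a 0/1 adjacency matrix."""
    n = len(A)
    R = [row[:] for row in A]            # R(0) is the adjacency matrix itself
    for k in range(n):                   # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # formula (3): rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1))
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R
```

Applying it to the digraph of the figure (a→b, b→d, d→a, d→c) yields the transitive closure T shown earlier.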
WARSHALL’S ALGORITHM
          a b c d
      a   0 1 0 0
R(0) = b  0 0 0 1
      c   0 0 0 0
      d   1 0 1 0

Ones reflect the existence of paths with no intermediate vertices
(R(0) is just the adjacency matrix); the row and column of vertex a
are used for getting R(1).

          a b c d
      a   0 1 0 0
R(1) = b  0 0 0 1
      c   0 0 0 0
      d   1 1 1 0

Ones reflect the existence of paths with intermediate vertices numbered
not higher than 1, i.e., just vertex a (note a new path from d to b);
the row and column of vertex b are used for getting R(2).

Figure: Application of Warshall's algorithm to the digraph shown. New
ones are in bold.
WARSHALL’S ALGORITHM
          a b c d
      a   0 1 0 1
R(2) = b  0 0 0 1
      c   0 0 0 0
      d   1 1 1 1

Ones reflect the existence of paths with intermediate vertices numbered
not higher than 2, i.e., a and b (note two new paths); the row and
column of vertex c are used for getting R(3).

          a b c d
      a   0 1 0 1
R(3) = b  0 0 0 1
      c   0 0 0 0
      d   1 1 1 1

Ones reflect the existence of paths with intermediate vertices numbered
not higher than 3, i.e., a, b, and c (no new paths); the row and column
of vertex d are used for getting R(4).

Figure: Application of Warshall's algorithm to the digraph shown. New
ones are in bold.
WARSHALL’S ALGORITHM
          a b c d
      a   1 1 1 1
R(4) = b  1 1 1 1
      c   0 0 0 0
      d   1 1 1 1

Ones reflect the existence of paths with intermediate vertices numbered
not higher than 4, i.e., a, b, c, and d (note five new paths).

Figure (Floyd's algorithm example): (a) Digraph. (b) Its weight matrix.
(c) Its distance matrix.
FLOYD’S ALGORITHM
The last matrix in the series, D(n) , contains the lengths of the shortest
paths among all paths that can use all n vertices as intermediate and
hence is nothing but the distance matrix being sought.
We can compute all the elements of each matrix D(k) from its immediate
predecessor D(k-1) in series (1).
FLOYD’S ALGORITHM
The lengths of the shortest paths are obtained by the following
recurrence:

    dij(k) = min { dij(k-1), dik(k-1) + dkj(k-1) }  for k ≥ 1,  dij(0) = wij

Figure: The underlying idea: the shortest path from vi to vj using
intermediate vertices numbered not higher than k either avoids vk
(length dij(k-1)) or passes through vk (length dik(k-1) + dkj(k-1)).
FLOYD’S ALGORITHM
ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd's algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph with no negative-length cycle
//Output: The distance matrix of the shortest paths' lengths
D ← W  // is not necessary if W can be overwritten
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min { D[i, j], D[i, k] + D[k, j] }
return D
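A Python sketch of the pseudocode (not part of the slides). The weight matrix used below is an assumption recovered from the distance matrix D(4) shown next, with ∞ standing for a missing edge:

```python
INF = float('inf')

def floyd(W):
    """All-pairs shortest paths. W[i][j] is the weight of edge i -> j (INF if absent)."""
    n = len(W)
    D = [row[:] for row in W]            # D(0) = W; overwriting W would also work
    for k in range(n):                   # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

# Assumed weight matrix of the 4-vertex digraph (a, b, c, d) in the example:
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
```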
           a   b   c   d
      a    0  10   3   4
D(4) = b   2   0   5   6
      c    7   7   0   1
      d    6  16   9   0

Lengths of the shortest paths with intermediate vertices numbered not
higher than 4, i.e., a, b, c, and d (note a new shortest path from
c to a).
Figure: An instance of the knapsack problem: a knapsack of maximum
weight capacity W = 20 and a set of items, each with its own weight and
value (weight/value pairs 3/4, 4/5, 5/8, 9/10 in the original figure).
Knapsack problem
The problem is called a "0-1 knapsack problem" because each item
must be entirely accepted or rejected.

Another version of this problem is the "fractional knapsack
problem," where we can take fractions of items.
Knapsack problem: brute-force approach
Since there are n items, there are 2^n possible combinations of items,
so the brute-force approach takes exponential time.

The dynamic programming recurrence is

    V[i, j] = max { V[i-1, j], vi + V[i-1, j-wi] }   if j - wi ≥ 0
    V[i, j] = V[i-1, j]                              if j - wi < 0

with the initial conditions V[0, j] = 0 and V[i, 0] = 0.
Our goal is to find V[n, W], the maximal value of a subset of the n
given items that fit into the knapsack of capacity W, and an optimal
subset itself.
KNAPSACK PROBLEM
The figure below illustrates the values involved in the recurrence:

Figure: Table for solving the knapsack problem by dynamic programming.
Row 0 and column 0 are filled with zeros; entry V[i, j] depends on
V[i-1, j] and V[i-1, j-wi]; the goal entry V[n, W] is in the
bottom-right corner.
For i, j > 0, to compute the entry in the ith row and the jth column,
V[i, j], we compute the maximum of the entry in the previous row
and the same column and the sum of vi and the entry in the
previous row and wi columns to the left. The table can be filled
either row by row or column by column.
i.e., V[i, j] = max{V[i-1, j], vi + V[i-1, j-wi]}
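The recurrence can be sketched in Python (not part of the slides); the table V is filled row by row exactly as described:

```python
def knapsack(weights, values, W):
    """Fills V[i][j] = best value achievable with the first i items and capacity j."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]     # row 0 and column 0 are zeros
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j - weights[i - 1] >= 0:
                V[i][j] = max(V[i - 1][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
            else:                                 # item i does not fit
                V[i][j] = V[i - 1][j]
    return V
```

Running it on the instance of the following example (weights 2, 1, 3, 2; values 12, 10, 20, 15; W = 5) reproduces the table built step by step below.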
Example: Let us consider the instance given by the following data:
four items with weights w1 = 2, w2 = 1, w3 = 3, w4 = 2 and values
v1 = $12, v2 = $10, v3 = $20, v4 = $15
Capacity W = 5
Example – Dynamic Programming Table
                         Capacity j
Item i                   0    1    2    3    4    5
0                        0    0    0    0    0    0
1 (w1 = 2, v1 = 12)
2 (w2 = 1, v2 = 10)
3 (w3 = 3, v3 = 20)
4 (w4 = 2, v4 = 15)
Entries for row 0:
V[0, 0] = 0 since i = 0 and j = 0
V[0, 1] = V[0, 2] = V[0, 3] = V[0, 4] = V[0, 5] = 0 since i = 0
Example – Dynamic Programming Table
                         Capacity j
Item i                   0    1    2    3    4    5
0                        0    0    0    0    0    0
1 (w1 = 2, v1 = 12)      0    0   12   12   12   12
2 (w2 = 1, v2 = 10)
3 (w3 = 3, v3 = 20)
4 (w4 = 2, v4 = 15)
Entries for Row 1:
V[1, 0] = 0 since j=0
V[1, 1] = V[0, 1] = 0 (Here, V[i, j]= V[i-1, j] since j-wi < 0)
V[1, 2] = max{V[0, 2], 12 + V[0, 0]} = max(0, 12) = 12
V[1, 3] = max{V[0, 3], 12 +V[0, 1]} = max(0, 12) = 12
V[1, 4] = max{V[0, 4], 12 + V[0, 2]} = max(0, 12) = 12
V[1, 5] = max{V[0, 5], 12 + V[0, 3]} = max(0, 12) = 12
Example – Dynamic Programming Table
                         Capacity j
Item i                   0    1    2    3    4    5
0                        0    0    0    0    0    0
1 (w1 = 2, v1 = 12)      0    0   12   12   12   12
2 (w2 = 1, v2 = 10)      0   10   12   22   22   22
3 (w3 = 3, v3 = 20)
4 (w4 = 2, v4 = 15)
Entries for Row 2:
V[2, 0] = 0 since j = 0
V[2, 1] = max{V[1, 1], 10 + V[1, 0]} = max(0, 10) = 10
V[2, 2] = max{V[1, 2], 10 + V[1, 1]} = max(12, 10) = 12
V[2, 3] = max{V[1, 3], 10 +V[1, 2]} = max(12, 22) = 22
V[2, 4] = max{V[1, 4], 10 + V[1, 3]} = max(12, 22) = 22
V[2, 5] = max{V[1, 5], 10 + V[1, 4]} = max(12, 22) = 22
Example – Dynamic Programming Table
                         Capacity j
Item i                   0    1    2    3    4    5
0                        0    0    0    0    0    0
1 (w1 = 2, v1 = 12)      0    0   12   12   12   12
2 (w2 = 1, v2 = 10)      0   10   12   22   22   22
3 (w3 = 3, v3 = 20)      0   10   12   22   30   32
4 (w4 = 2, v4 = 15)
Entries for Row 3:
V[3, 0] = 0 since j = 0
V[3, 1] = V[2, 1] = 10 (Here, V[i, j]= V[i-1, j] since j-wi < 0)
V[3, 2] = V[2, 2] = 12 (Here, V[i, j]= V[i-1, j] since j-wi < 0)
V[3, 3] = max{V[2, 3], 20 +V[2, 0]} = max(22, 20) = 22
V[3, 4] = max{V[2, 4], 20 + V[2, 1]} = max(22, 30) = 30
V[3, 5] = max{V[2, 5], 20 + V[2, 2]} = max(22, 32) = 32
Example – Dynamic Programming Table
                         Capacity j
Item i                   0    1    2    3    4    5
0                        0    0    0    0    0    0
1 (w1 = 2, v1 = 12)      0    0   12   12   12   12
2 (w2 = 1, v2 = 10)      0   10   12   22   22   22
3 (w3 = 3, v3 = 20)      0   10   12   22   30   32
4 (w4 = 2, v4 = 15)      0   10   15   25   30   37
Entries for Row 4:
V[4, 0] = 0 since j = 0
V[4, 1] = V[3, 1] = 10 (Here, V[i, j]= V[i-1, j] since j-wi < 0)
V[4, 2] = max{V[3, 2], 15 + V[3, 0]} = max(12, 15) = 15
V[4, 3] = max{V[3, 3], 15 +V[3, 1]} = max(22, 25) = 25
V[4, 4] = max{V[3, 4], 15 + V[3, 2]} = max(30, 27) = 30
V[4, 5] = max{V[3, 5], 15 + V[3, 3]} = max(32, 37) = 37
Example: To find composition of optimal subset
                         Capacity j
Item i                   0    1    2    3    4    5
0                        0    0    0    0    0    0
1 (w1 = 2, v1 = 12)      0    0   12   12   12   12
2 (w2 = 1, v2 = 10)      0   10   12   22   22   22
3 (w3 = 3, v3 = 20)      0   10   12   22   30   32
4 (w4 = 2, v4 = 15)      0   10   15   25   30   37
Thus, the maximal value is V [4, 5]= $37. We can find the composition
of an optimal subset by tracing back the computations of this entry in
the table.
Since V [4, 5] is not equal to V [3, 5], item 4 was included in an optimal
solution along with an optimal subset for filling 5 - 2 = 3 remaining
units of the knapsack capacity.
Example: To find composition of optimal subset
                         Capacity j
Item i                   0    1    2    3    4    5
0                        0    0    0    0    0    0
1 (w1 = 2, v1 = 12)      0    0   12   12   12   12
2 (w2 = 1, v2 = 10)      0   10   12   22   22   22
3 (w3 = 3, v3 = 20)      0   10   12   22   30   32
4 (w4 = 2, v4 = 15)      0   10   15   25   30   37
Since V[3, 3] = V[2, 3], item 3 was not included. Since V[2, 3] ≠ V[1, 3],
item 2 was included, leaving 3 - 1 = 2 units of capacity; the remaining
entry to examine is V[1, 2].
Since V[1, 2] ≠ V[0, 2], item 1 is included as well. The optimal subset
is therefore {item 1, item 2, item 4}, with total value $37.
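The backtracking just described can be sketched in Python (a hypothetical helper, not from the slides); it inspects the filled table from V[n, W] downward:

```python
def optimal_subset(V, weights):
    """Trace the filled knapsack table back: item i is in the optimal
    solution iff V[i][j] != V[i-1][j] at the current capacity j."""
    items, j = [], len(V[0]) - 1          # start at the goal entry V[n][W]
    for i in range(len(weights), 0, -1):
        if V[i][j] != V[i - 1][j]:
            items.append(i)               # item i (1-based) was taken
            j -= weights[i - 1]           # reduce the remaining capacity
    return sorted(items)
```

On the table above it recovers the subset {1, 2, 4}.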
UNIT-IV
CHAPTER 9:
GREEDY TECHNIQUE
OUTLINE
Greedy Technique
Prim’s Algorithm
Kruskal’s Algorithm
Dijkstra’s Algorithm
Minimum Spanning Tree
Definition: A spanning tree of a connected graph is its connected acyclic
subgraph (i.e., a tree) that contains all the vertices of the graph. A
minimum spanning tree of a weighted connected graph is its spanning
tree of smallest weight, where the weight of a tree is defined as the sum of
the weights on all its edges. The minimum spanning tree problem is the
problem of finding a minimum spanning tree for a given weighted
connected graph.
Figure: Graph and its spanning trees: T1 is the minimum spanning tree.
GREEDY APPROACH
The greedy approach suggests constructing a solution through a
sequence of steps, each expanding a partially constructed solution
obtained so far, until a complete solution to the problem is reached.
On each step, and this is the central point of this technique, the
choice made must be locally optimal, i.e., it has to be the best
choice among all feasible choices available on that step.

In Prim's algorithm, the process stops after all the graph's vertices
have been included in the tree being constructed.
PRIM’S ALGORITHM
The nature of Prim’s algorithm makes it necessary to provide
each vertex not in the current tree with the information about the
shortest edge connecting the vertex to a tree vertex.
We can provide such information by attaching two labels to a
vertex: the name of the nearest tree vertex and the weight
(length) of the corresponding edge.
Vertices that are not adjacent to any of the tree vertices can be
given the ∞ label and a null label for the name of the nearest tree
vertex.
We can split the vertices that are not in the tree into two sets, the
“fringe” and the “unseen”.
The fringe contains only the vertices that are not in the tree
but are adjacent to at least one tree vertex. These are the
candidates from which the next tree vertex is selected.
The unseen vertices are all other vertices of the graph.
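The fringe of candidate vertices described above is naturally kept in a priority queue. A Python sketch using heapq (the dict-of-dicts graph format is an assumption, not from the slides):

```python
import heapq

def prim(graph, start):
    """graph: dict vertex -> dict of neighbor -> weight (undirected).
    Returns the MST as a list of (tree_vertex, new_vertex, weight) edges."""
    in_tree, mst = {start}, []
    # fringe entries: (edge weight, tree vertex, candidate vertex)
    fringe = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(fringe)
    while fringe and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(fringe)
        if v in in_tree:
            continue                      # stale entry: v was added via a shorter edge
        in_tree.add(v)
        mst.append((u, v, w))
        for x, wx in graph[v].items():    # update the fringe around the new vertex
            if x not in in_tree:
                heapq.heappush(fringe, (wx, v, x))
    return mst
```

On the six-vertex example traced below (edges ab=3, bc=1, cd=6, cf=4, bf=4, af=5, df=5, ae=6, ef=2, de=8) it selects ab, bc, bf, fe, fd, with total weight 15.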
Graph: vertices a, b, c, d, e, f with edge weights
    ab = 3, ae = 6, af = 5, bc = 1, bf = 4, cd = 6, cf = 4, de = 8,
    df = 5, ef = 2.

Tree vertices   Remaining vertices
a(-, -)         b(a, 3)  c(-, ∞)  d(-, ∞)  e(a, 6)  f(a, 5)
b(a, 3)         c(b, 1)  d(-, ∞)  e(a, 6)  f(b, 4)
c(b, 1)         d(c, 6)  e(a, 6)  f(b, 4)
f(b, 4)         d(f, 5)  e(f, 2)
e(f, 2)         d(f, 5)
d(f, 5)

Figure: Application of Prim's algorithm. The parenthesized labels of a
vertex in the middle column indicate the nearest tree vertex and edge
weight; selected vertices and edges are shown in bold.
PRIM'S ALGORITHM PROBLEM

Practice problem: apply Prim's algorithm to the weighted graph on
vertices A, B, C, D, E, F shown in the original figure (edge weights
between 1 and 6).
PRIM'S ALGORITHM ANALYSIS
The time efficiency of Prim's algorithm depends on the data structures
chosen for the graph and for the priority queue of the fringe vertices;
with adjacency lists and a min-heap it is in O(|E| log |V|).

KRUSKAL'S ALGORITHM
Kruskal's algorithm begins by sorting the graph's edges in
nondecreasing order of their weights. Then, starting with the empty
subgraph, it scans this sorted list, adding the next edge on the list
to the current subgraph if such an inclusion does not create a cycle
and simply skipping the edge otherwise.
Graph: the same graph as in the Prim example, with edges in sorted
order bc = 1, ef = 2, ab = 3, bf = 4, cf = 4, af = 5, df = 5, ae = 6,
cd = 6, de = 8.

Tree edges   Sorted list of remaining edges

             bc  ef  ab  bf  cf  af  df  ae  cd  de
             1   2   3   4   4   5   5   6   6   8

bc (1)       ef  ab  bf  cf  af  df  ae  cd  de
             2   3   4   4   5   5   6   6   8

ef (2)       ab  bf  cf  af  df  ae  cd  de
             3   4   4   5   5   6   6   8

ab (3)       bf  cf  af  df  ae  cd  de
             4   4   5   5   6   6   8

bf (4)       cf  af  df  ae  cd  de
             4   5   5   6   6   8
             (cf and af are then skipped: each would create a cycle)

df (5)       the tree is complete

Figure: Application of Kruskal's algorithm. Selected edges are shown
in bold.
KRUSKAL'S ALGORITHM PROBLEM

Practice problem: apply Kruskal's algorithm to the weighted graph on
vertices A, B, C, D, E, F shown in the original figure (edge weights
between 1 and 6).
ANALYSIS
With an efficient sorting algorithm, the efficiency of Kruskal's
algorithm will be in O(|E| log |E|).
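A Python sketch of the scan described above, with a simple union-find structure to detect cycles (the edge-list format and the helper names are assumptions, not from the slides):

```python
def kruskal(vertices, edges):
    """edges: list of (weight, u, v). Scans edges in nondecreasing weight
    order, skipping any edge whose endpoints are already connected."""
    parent = {v: v for v in vertices}     # union-find: each vertex its own root

    def find(v):                          # root of v's component, with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(edges):         # the sort dominates: O(|E| log |E|)
        ru, rv = find(u), find(v)
        if ru != rv:                      # different components: no cycle is created
            parent[ru] = rv
            mst.append((u, v, w))
    return mst
```

On the traced example it selects bc, ef, ab, bf, df, i.e., the same minimum spanning tree of weight 15 that Prim's algorithm found.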
DIJKSTRA’S ALGORITHM
Assumptions:
the graph is connected
the graph may be either undirected or directed
the edge weights are nonnegative
DIJKSTRA’S ALGORITHM
First, it finds the shortest path from the source to a vertex
nearest to it, then to a second nearest, and so on.
These vertices, the source, and the edges of the shortest paths
leading to them from the source form a subtree Ti of the given
graph.
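The expansion by next-nearest vertex described above is usually implemented with a priority queue of tentative distances. A Python sketch using heapq (the dict-based graph format is an assumption, not from the slides):

```python
import heapq

def dijkstra(graph, source):
    """graph: dict vertex -> dict of neighbor -> nonnegative weight.
    Returns the shortest distance from source to every vertex."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                    # (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale entry: u was settled earlier
        for v, w in graph[u].items():
            if d + w < dist[v]:           # a shorter path to v through u
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```

On the five-vertex example traced below (edges ab=3, ad=7, bc=4, bd=2, ce=6, de=4, inferred from the figure) the distances from a come out as b = 3, d = 5, c = 7, e = 9.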
Graph: vertices a, b, c, d, e with edge weights
    ab = 3, ad = 7, bc = 4, bd = 2, ce = 6, de = 4.

Tree vertices   Remaining vertices
a(-, 0)         b(a, 3)  c(-, ∞)  d(a, 7)  e(-, ∞)
b(a, 3)         c(b, 3+4)  d(b, 3+2)  e(-, ∞)

Figure: Application of Dijkstra's algorithm. The next closest vertex is
shown in bold.
DIJKSTRA’S ALGORITHM ANALYSIS
The time efficiency of Dijkstra’s algorithm depends on
the data structures used for representing an input
graph.