04. Dynamic Programming (1)
Dynamic Programming
Dynamic Programming is a technique used to solve many types of problems in time O(n^2)
or O(n^3) for which a naive approach would take exponential time. Dynamic programming is
typically applied to optimization problems. Dynamic Programming is an algorithmic paradigm
that solves a given complex problem by breaking it into subproblems and stores the results of
the subproblems to avoid computing the same results again. There are two main properties of a
problem which suggest that the given problem can be solved using Dynamic Programming.
1) Overlapping Subproblems
2) Optimal Substructure
Overlapping Subproblems:
As in Divide and Conquer, the Dynamic Programming technique combines the solutions
of sub-problems. Dynamic Programming is mainly used when the solutions of the same subproblems
are needed again and again. The computed solutions to subproblems are therefore stored in a
table so that they do not have to be recomputed. Dynamic Programming is thus not useful when
there are no common (overlapping) subproblems, because there is no point in storing
solutions that are never needed again.
For example, Binary Search has no common subproblems. But if we consider the
following recursive program for Fibonacci numbers, there are many subproblems which are
solved again and again.
int fib(int n)
{
if ( n <= 1 ) return n;
return fib(n-1) + fib(n-2);
}
Optimal Substructure
A problem is said to have optimal substructure if an optimal solution can be constructed
efficiently from optimal solutions of its subproblems. This property is used to determine
whether dynamic programming and greedy algorithms are applicable to a problem. There are two
ways of implementing a dynamic programming solution.
1) Top-Down : Start solving the given problem by breaking it down. If you see that the
problem has been solved already, then just return the saved answer. If it has not been solved,
solve it and save the answer. This is usually easy to think of and very intuitive. This is
referred to as Memoization.
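For example, the Fibonacci function above can be memoized as sketched below; the table size MAXN and the use of -1 as a "not yet computed" marker are illustrative choices.
#include <stdio.h>
#include <string.h>

#define MAXN 100
long long memo[MAXN];                       /* memo[i] holds fib(i), or -1 if not yet computed */

long long fib(int n)
{
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];      /* reuse the stored answer */
    memo[n] = fib(n - 1) + fib(n - 2);      /* solve the subproblem once and save it */
    return memo[n];
}

int main(void)
{
    memset(memo, -1, sizeof(memo));         /* mark every subproblem as unsolved */
    printf("%lld\n", fib(40));              /* O(n) calls instead of exponentially many */
    return 0;
}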
2) Bottom-Up : Analyze the problem, see the order in which the sub-problems are solved,
and start solving from the trivial subproblem up towards the given problem. In this process,
it is guaranteed that the subproblems are solved before the problems that depend on them. This is
referred to as Dynamic Programming. Note that divide and conquer is a slightly different technique:
there, we divide the problem into non-overlapping subproblems and solve them
independently, as in merge sort and quick sort.
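A bottom-up sketch of the same Fibonacci computation: the trivial subproblems fib(0) and fib(1) are solved first and the table is filled up to fib(n).
#include <stdio.h>

long long fib(int n)
{
    long long table[n + 2];          /* table[i] holds fib(i); n+2 keeps the array valid for n <= 1 */
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}

int main(void)
{
    printf("%lld\n", fib(40));
    return 0;
}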
Principle of optimality: The principle of optimality states that no matter what the first
decision is, the remaining decisions must be optimal with respect to the state that results from
this first decision. This principle implies that an optimal decision sequence is comprised of
optimal subsequences of decisions. Since the principle of optimality may not hold for some
formulations of some problems, it is necessary to verify that it does hold for the problem
being solved. Dynamic programming cannot be applied when this principle does not hold.
In dynamic programming the solution to the problem is the result of a sequence of decisions.
At every stage we make decisions to obtain the optimal solution. This method is effective when a
given subproblem may arise from more than one partial set of choices. Using this method, an
exponential-time algorithm may be brought down to a polynomial-time algorithm, as it reduces the
amount of enumeration by avoiding the enumeration of decision sequences that cannot
possibly be optimal. A dynamic programming algorithm solves every subproblem just once
and then saves its answer in a table, thereby avoiding the work of recomputing the answer
every time the subproblem is encountered.
The basic steps of dynamic programming are:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from the computed information.
Principle of Optimality
The principle states that an optimal sequence of decisions has the property that,
whatever the initial state and decisions are, the remaining decisions must constitute an
optimal decision sequence with regard to the state resulting from the first decision.
Multistage Graphs
A multistage graph is a graph G = (V, E) with V partitioned into k >= 2 disjoint subsets
V1, V2, ..., Vk such that if an edge (u, v) is in E, then u is in Vi and v is in Vi+1 for some i,
1 <= i < k. The number of vertices in V1 and Vk is one, i.e. |V1| = |Vk| = 1. The node in the first
set V1 is the source and the node in the last set Vk is the destination. Each set Vi is called a
stage of the graph.
The "multistage graph problem" is to find the minimum cost path from the source to the
destination. There are two approaches.
1. Multistage graph - forward approach
2. Multistage graph - backward approach
1. Multistage graph - forward approach
A dynamic programming solution for a k-stage graph is obtained by first noticing that every
path from the source to the destination is the result of a sequence of k-2 decisions. The ith decision
involves determining which vertex in stage Vi+1, 1 <= i <= k-2, is to be on the path. Let P (i, j) be
the minimum cost path from vertex j in Vi to the destination vertex T, and let cost (i, j) be the cost
of this path. Then
cost (i, j) = min {c(j, L) + cost (i+1, L)}
where L ∈ Vi+1 and (j, L) ∈ E.
Page 2
Chapter 4 – Dynamic Programming
Here j denotes a vertex in stage i and L denotes a vertex in stage i+1.
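A minimal C sketch of the forward approach, assuming the vertices are numbered 1..n in stage order (vertex 1 is the source S and vertex n is the destination T) and that a large value INF marks a missing edge; this numbering and the sample edge costs (taken from Example 1 below, with S, A, B, C, D, E, F, T numbered 1 to 8) are illustrative choices.
#include <stdio.h>

#define INF 1000000

int main(void)
{
    int n = 8;
    static int c[9][9];
    int cost[9], d[9];                       /* d[j] records the vertex that gives the minimum */

    for (int i = 1; i <= n; i++)             /* start with no edges */
        for (int j = 1; j <= n; j++)
            c[i][j] = INF;
    /* sample edge costs of Example 1 below (S,A,B,C,D,E,F,T numbered 1..8) */
    c[1][2] = 1;  c[1][3] = 2;  c[1][4] = 5;
    c[2][5] = 4;  c[2][6] = 11;
    c[3][5] = 9;  c[3][6] = 5;  c[3][7] = 16;
    c[4][7] = 2;
    c[5][8] = 18; c[6][8] = 13; c[7][8] = 2;

    cost[n] = 0;                             /* the destination costs nothing */
    for (int j = n - 1; j >= 1; j--) {       /* process the stages from right to left */
        cost[j] = INF;
        for (int L = j + 1; L <= n; L++)     /* cost(j) = min { c(j,L) + cost(L) } */
            if (c[j][L] != INF && c[j][L] + cost[L] < cost[j]) {
                cost[j] = c[j][L] + cost[L];
                d[j] = L;
            }
    }

    printf("minimum cost = %d\n", cost[1]);  /* prints 9 for this sample input */
    printf("path: 1");
    for (int v = d[1]; ; v = d[v]) {         /* follow d[] from the source to the destination */
        printf(" -> %d", v);
        if (v == n) break;
    }
    printf("\n");
    return 0;
}
For the sample costs this prints the path 1 -> 4 -> 7 -> 8, i.e. S -> C -> F -> T, matching the hand computation below.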
Example 1: Find the minimum cost path from the start node S to the terminal node T
using dynamic programming (multistage graph - forward approach).
[Figure: 4-stage graph with edge costs S→A = 1, S→B = 2, S→C = 5, A→D = 4, A→E = 11,
B→D = 9, B→E = 5, B→F = 16, C→F = 2, D→T = 18, E→T = 13, F→T = 2.]
We have 4 stages, so k = 4. Let Lnode record the vertex L that gives the minimum in each
cost (i, j) computation, so that the path can be reconstructed.
Step 1: Calculate the minimum cost at stage 3. Start with the (k-1)th stage (the 3rd stage) to find
the cost from the (k-1)th stage to the kth stage.
Cost (3, D) = cost of travelling from vertex D in stage 3 to vertex T (destination node) = 18
Cost (3, E) = cost of travelling from vertex E in stage 3 to vertex T (destination node) = 13
Cost (3, F) = cost of travelling from vertex F in stage 3 to vertex T (destination node) = 2
Step 2: Calculate the minimum cost at stage 2.
Cost (2, B) = cost of travelling from vertex B in stage 2 to vertex T (destination node)
=min {c(B,D)+cost(3,D),c(B,E)+cost(3,E),c(B,F)+cost(3,F)}
=min {9+18, 5+13, 16+2} = min {27, 18, 18} = 18
Lnode = E
Cost (2, C) = cost of travelling from vertex C in stage 2 to vertex T (destination node)
=min{c(C, F) + cost (3, F)}
=min {2+2} = 4
Lnode=F
Cost (2, A) = cost of travelling from vertex A in stage 2 to vertex T (destination node)
= min {c(A,D)+cost(3,D), c(A,E)+cost(3,E)}
= min {4+18, 11+13} = min {22, 24} = 22
Lnode = D
Step 3: Calculate the minimum cost at stage 1.
Cost (1, S) = cost of travelling from vertex S in stage 1 to vertex T (destination node)
= min {c(S,A)+cost(2,A), c(S,B)+cost(2,B), c(S,C)+cost(2,C)}
= min {1+22, 2+18, 5+4} = min {23, 20, 9} = 9
Lnode = C
Therefore the minimum cost path is S → C → F → T with a cost of 9.
Example 2: Find the minimum cost path from the start node 1 to the terminal node 7
using dynamic programming (multistage graph - forward approach).
[Figure: 4-stage graph with source 1, sink 7, stage 2 vertices {2, 3, 4} and stage 3 vertices {5, 6};
edge costs include 1→2 = 9, 1→3 = 7, 1→4 = 3, 3→5 = 6, 3→6 = 3, 4→5 = 8, 4→6 = 5,
5→7 = 5, 6→7 = 6, plus edges from vertex 2 to stage 3.]
cost(3,5) = c(5,7) = 5
cost(3,6) = c(6,7) = 6
cost(2,2) = min{c(2,5)+cost(3,5), c(2,6)+cost(3,6)} = 7
cost(2,3) = min{c(3,5)+cost(3,5),c(3,6)+cost(3,6)}
= min{6+5,3+6}=9
Lnode = 6
cost(2,4) = min{c(4,5)+cost(3,5),c(4,6)+cost(3,6)}
= min {8+5, 5+6}=11
Lnode = 6
cost(1,1) = min{c(1,2)+cost(2,2),c(1,3)+cost(2,3),c(1,4)+cost(2,4)}
= min {9+7, 7+9,3+11}=14
Lnode = 4
v1 = 1
v2 = Lnode of cost (1, 1) = 4
v3 = Lnode of cost (2, 4) = 6
v4 = 7
Therefore the path is 1 → 4 → 6 → 7 with a minimum cost of 14.
2. Multistage graph - backward approach
The multistage graph problem can also be solved using the backward approach. Let cost (i, j) be the
cost of the minimum cost path from the source vertex S to vertex j in Vi. Then
cost (i, j) = min {cost (i-1, L) + c(L, j)}
where L ∈ Vi-1 and (L, j) ∈ E.
Here j denotes a vertex in stage i and L denotes a vertex in stage i - 1.
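For comparison, a minimal C sketch of the backward approach on the same illustrative vertex numbering and sample edge costs as the forward sketch above; only the direction of the recurrence changes.
#include <stdio.h>

#define INF 1000000

int main(void)
{
    int n = 8;
    static int c[9][9];
    int bcost[9];                            /* bcost[j] = minimum cost from the source to vertex j */

    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            c[i][j] = INF;
    /* same sample edge costs as in the forward sketch (S,A,B,C,D,E,F,T numbered 1..8) */
    c[1][2] = 1;  c[1][3] = 2;  c[1][4] = 5;
    c[2][5] = 4;  c[2][6] = 11;
    c[3][5] = 9;  c[3][6] = 5;  c[3][7] = 16;
    c[4][7] = 2;
    c[5][8] = 18; c[6][8] = 13; c[7][8] = 2;

    bcost[1] = 0;                            /* the source costs nothing */
    for (int j = 2; j <= n; j++) {           /* process the stages from left to right */
        bcost[j] = INF;
        for (int L = 1; L < j; L++)          /* cost(j) = min { cost(L) + c(L,j) } */
            if (c[L][j] != INF && bcost[L] + c[L][j] < bcost[j])
                bcost[j] = bcost[L] + c[L][j];
    }
    printf("minimum cost = %d\n", bcost[n]); /* prints 9, the same as the forward approach */
    return 0;
}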
Example 1: Find the minimum cost path from the start node S to the terminal node T
using dynamic programming (multistage graph - backward approach).
[Figure: the same 4-stage graph as in Example 1 of the forward approach, with edge costs
S→A = 1, S→B = 2, S→C = 5, A→D = 4, A→E = 11, B→D = 9, B→E = 5, B→F = 16,
C→F = 2, D→T = 18, E→T = 13, F→T = 2.]
cost (2, A) = c(S, A) = 1
cost (2, B) = c(S, B) = 2
cost (2, C) = c(S, C) = 5
cost (3, D) = min {cost(2,A)+c(A,D), cost(2,B)+c(B,D)} = min {1+4, 2+9} = 5, Lnode = A
cost (3, E) = min {cost(2,A)+c(A,E), cost(2,B)+c(B,E)} = min {1+11, 2+5} = 7, Lnode = B
cost (3, F) = min {cost(2,B)+c(B,F), cost(2,C)+c(C,F)} = min {2+16, 5+2} = 7, Lnode = C
cost (4, T) = min {cost(3,D)+c(D,T), cost(3,E)+c(E,T), cost(3,F)+c(F,T)}
= min {5+18, 7+13, 7+2} = min {23, 20, 9} = 9
Lnode = F
Therefore the minimum cost path is S → C → F → T with a cost of 9, the same as with the forward approach.
Example 2: Find the minimum cost path from the start node 1 to the terminal node 7
using dynamic programming (multistage graph - backward approach).
[Figure: the same 4-stage graph as in Example 2 of the forward approach (source 1, sink 7).]
Example 3:
Find the minimum cost path for the below multistage graph using the backward approach.
[Figure: multistage graph with edge costs c(S,A) = 1, c(S,B) = 2, c(S,C) = 7, c(A,D) = 3,
c(A,E) = 6, c(B,D) = 4, c(B,E) = 10, c(C,E) = 3, c(D,T) = 8, c(E,T) = 2, c(C,T) = 10.]
cost(S,A)=1
cost(S,B)=2
cost(S,C)=7
cost(S,D)=min{cost(S,A)+c(A,D),cost(S,B)+c(B,D)}=min{1+c(A,D),2+c(B,D)}
=min{1+3,2+4}=4
cost(S,D)=4 i.e. Path S-A-D is minimum
cost(S,E)=min{cost(S,A)+c(A,E),cost(S,B)+c(B,E),cost(S,C)+c(C,E)}
=min{1+c(A,E), 2+c(B,E),7+c(C,E)}
=min {1+6,2+10,7+3}
=min {7,12,10}
cost(S,E)=7 i.e. path S-A-E is chosen.
cost(S,T)=min{cost(S,D)+c(D,T),cost(S,E)+c(E,T),cost(S,C)+c(C,T)}
=min {4+8,7+2,7+10}=min{12,9,17}=9
cost(S,T)=9, i.e. the term cost(S,E)+c(E,T) is chosen
=cost(S,A)+c(A,E)+c(E,T)
=c(S,A)+c(A,E)+c(E,T)
Therefore the path is S-A-E-T with a minimum cost of 9.
All Pairs Shortest Path (Floyd's Algorithm)
Floyd's algorithm computes the shortest distance between every pair of vertices of a weighted
directed graph using the following formula:
Dk [i, j] =min {Dk-1[i, j], Dk-1[i, k] +Dk-1[k, j]}
where Dk represents the matrix D after the kth iteration. Initially D0 = C (the cost matrix);
i is the source vertex, j is the destination vertex and k represents the intermediate vertex used.
1. Initialize the matrix D with the cost matrix
for i ←1 to n do
for j←1 to n do
D[i , j] = cost [i ,j]
endfor
endfor
2. Find the shortest distances from all nodes to all other nodes
for k←1 to n do
for i←1 to n do
for j←1 to n do
D[i , j] = min ( D[i , j],D[i ,k] + D[k,j])
endfor
endfor
endfor
3. Finished
Return
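A minimal C sketch of the algorithm above (0-based indices are used instead of 1..n, and a large value INF stands for ∞); the cost matrix of Example 1 below is used as illustrative input.
#include <stdio.h>

#define INF 1000000

int main(void)
{
    int n = 3;
    int D[3][3] = { { 0,   4, 11 },          /* cost matrix of Example 1 below (V1, V2, V3) */
                    { 6,   0,  2 },
                    { 3, INF,  0 } };

    for (int k = 0; k < n; k++)              /* allow vertex k as an intermediate vertex */
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (D[i][k] + D[k][j] < D[i][j])
                    D[i][j] = D[i][k] + D[k][j];

    for (int i = 0; i < n; i++) {            /* print the final distance matrix */
        for (int j = 0; j < n; j++)
            printf("%8d", D[i][j]);
        printf("\n");
    }
    return 0;
}
For this input the printed matrix matches the matrix D3 computed by hand below.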
Example 1: Solve all pairs shortest path problem for the directed graph shown below
[Figure: directed graph on vertices V1, V2, V3 with edge costs V1→V2 = 4, V1→V3 = 11,
V2→V1 = 6, V2→V3 = 2, V3→V1 = 3.]
D0 (i, j) = cost (i, j) =
      V1   V2   V3
V1     0    4   11
V2     6    0    2
V3     3    ∞    0
Step 1: Find D1
For k=1, i.e. going from i to j through vertex V1
D1 (V1, V1) =min { D0 (V1, V1), D0 (V1, V1) + D0 (V1, V1)} =min {0 ,0+0}=0
D1 (V1, V2) =min { D0 (V1, V2), D0 (V1, V1) + D0 (V1, V2)} = min {4 , 0+ 4 }=4
D1 (V1, V3) =min { D0 (V1, V3), D0 (V1, V1) + D0 (V1, V3)} = min {11, 0+11 }=11
D1 (V2, V1) =min { D0 (V2, V1), D0 (V2, V1) + D0 (V1, V1)} =min { 6 , 6+0 }=6
D1 (V2, V2) =min { D0 (V2, V2), D0 (V2, V1) + D0 (V1, V2)} = min {0 , 6+ 4 }=0
D1 (V2, V3) =min { D0 (V2, V3), D0 (V2, V1) + D0 (V1, V3)} = min {2, 6+11 }=2
D1 (V3, V1) =min { D0 (V3, V1), D0 (V3, V1) + D0 (V1, V1)} =min { 3 , 3+0 }=3
D1 (V3, V2) =min { D0 (V3, V2), D0 (V3, V1) + D0 (V1, V2)} = min {∞ , 3+ 4 }=7
D1 (V3, V3) =min { D0 (V3, V3), D0 (V3, V1) + D0 (V1, V3)} = min {0, 3+11 }=0
D1 (i, j) =
      V1   V2   V3
V1     0    4   11
V2     6    0    2
V3     3    7    0
Step 2: Find D2
For k=2, i.e. going from i to j through vertex V2
D2 (V1, V1) =min { D1 (V1, V1), D1 (V1, V2) + D1 (V2, V1)} =min {0, 4+6}=0
D2 (V1, V2) =min { D1 (V1, V2), D1 (V1, V2) + D1 (V2, V2)} = min {4, 4+0}=4
D2 (V1, V3) =min { D1 (V1, V3), D1 (V1, V2) + D1 (V2, V3)} = min {11, 4+2}=6
D2 (V2, V1) =min { D1 (V2, V1), D1 (V2, V2) + D1 (V2, V1)} =min {6, 0+6}=6
D2 (V2, V2) =min { D1 (V2, V2), D1 (V2, V2) + D1 (V2, V2)} = min {0, 0+0}=0
D2 (V2, V3) =min { D1 (V2, V3), D1 (V2, V2) + D1 (V2, V3)} = min {2, 0+2}=2
D2 (V3, V1) =min { D1 (V3, V1), D1 (V3, V2) + D1 (V2, V1)} =min {3, 7+6}=3
D2 (V3, V2) =min { D1 (V3, V2), D1 (V3, V2) + D1 (V2, V2)} = min {7, 7+0}=7
D2 (V3, V3) =min { D1 (V3, V3), D1 (V3, V2) + D1 (V2, V3)} = min {0, 7+2}=0
D2 (i, j) =
      V1   V2   V3
V1     0    4    6
V2     6    0    2
V3     3    7    0
Step 3: Find D3
For k=3, i.e. going from i to j through vertex V3
D3 (V1, V1) =min { D2 (V1, V1), D2 (V1, V3) + D2 (V3, V1)} =min {0 ,6+3}=0
D3 (V1, V2) =min { D2 (V1, V2), D2 (V1, V3) + D2 (V3, V2)} = min {4, 6+ 7} =4
D3 (V1, V3) =min { D2 (V1, V3), D2 (V1, V3) + D2 (V3, V3)} = min {6, 6+0} =6
D3 (V2, V1) =min { D2 (V2, V1), D2(V2, V3) + D2(V3, V1)} =min { 6 , 2+3 }=5
D3 (V2, V2) =min { D2 (V2, V2), D2 (V2, V3) + D2 (V3, V2)} = min {0, 2+7} =0
D3 (V2, V3) =min { D2 (V2, V3), D2 (V2, V3) + D2 (V3, V3)} = min {2, 2+0} =2
D3 (V3, V1) =min { D2 (V3, V1), D2(V3, V3) + D2(V3, V1)} =min { 3 , 0+3 }=3
D3 (V3, V2) =min { D2 (V3, V2), D2 (V3, V3) + D2 (V3, V2)} = min {7, 0+ 7} =7
D3 (V3, V3) =min { D2 (V3, V3), D2 (V3, V3) + D2 (V3, V3)} = min {0, 0+0} =0
D3 (i, j) =
      V1   V2   V3
V1     0    4    6
V2     5    0    2
V3     3    7    0
This final matrix gives the shortest distance from all nodes to all other nodes.
Example 2: Solve all pairs shortest path problem for the directed graph shown below
[Figure: directed graph on vertices a, b, c, d with edge costs a→c = 3, b→a = 2, c→b = 7,
c→d = 1, d→a = 6.]
D0 (i, j) = cost (i, j) =
      a    b    c    d
a     0    ∞    3    ∞
b     2    0    ∞    ∞
c     ∞    7    0    1
d     6    ∞    ∞    0
Step 1: Find D1
For k=1, i.e. going from i to j through vertex a
D1 (a, a) =min { D0 (a, a), D0 (a, a) + D0 (a, a)} =min { 0 ,0+0}=0
D1 (a, b) =min { D0 (a, b), D0 (a, a) + D0 (a, b)} = min {∞ , 0+ ∞ }=∞
D1 (a, c) =min { D0 (a, c), D0 (a, a) + D0 (a, c)} = min {3, 0+3 }=3
D1 (a, d) =min { D0 (a, d), D0 (a, a) + D0 (a, d)} = min {∞ , 0+ ∞ }=∞
The remaining rows are computed in the same way. Going through vertex a updates
D1 (b, c) = min {∞, 2+3} = 5 and D1 (d, c) = min {∞, 6+3} = 9; all other entries are unchanged.
D1 (i, j) =
      a    b    c    d
a     0    ∞    3    ∞
b     2    0    5    ∞
c     ∞    7    0    1
d     6    ∞    9    0
Step 2: Find D2
For k=2, i.e. going from i to j through vertex b
D2 (a, a) =min { D1 (a, a), D1 (a, b) + D1 (b, a)} = min{0,∞+2}=0
D2 (a, b) =min { D1 (a, b), D1 (a, b) + D1 (b, b)} = min{∞,∞+0}=∞
D2 (a, c) =min { D1 (a, c), D1(a, b) + D1 (b, c)} = min {3,∞+5 }=3
D2 (a, d) =min { D1 (a, d), D1 (a, b) + D1 (b, d)} = min{∞,∞+∞ }=∞
D2 (b, a) =min { D1 (b, a), D1(b, b) + D1 (b, a)} = min {2 , 0+2 }=2
D2 (b, b) =min { D1 (b, b), D1 (b, b) + D1 (b, b)}= min {0 , 0+ 0 }=0
D2 (b, c) =min { D1 (b, c), D1(b, b) + D1(b, c)} = min {5 , 0+ 5 }=5
D2 (b, d) =min { D1 (b, d), D1 (b, b) + D1 (b, d)} = min {∞, 0+ ∞}=∞
For rows c and d, going through vertex b updates D2 (c, a) = min {∞, 7+2} = 9;
all other entries are unchanged.
D2 (i, j) =
      a    b    c    d
a     0    ∞    3    ∞
b     2    0    5    ∞
c     9    7    0    1
d     6    ∞    9    0
Step 3: Find D3
For k=3, i.e. going from i to j through vertex c
D3 (a, a) =min { D2 (a, a), D2 (a, c) + D2 (c, a)} = min{0,3+9}=0
D3 (a, b) =min { D2 (a, b), D2 (a, c) + D2 (c, b)} = min{∞,3+7}=10
D3 (a, c) =min { D2 (a, c), D2(a, c) + D2 (c, c)} = min {3,3+0 }=3
D3 (a, d) =min { D2 (a, d), D2 (a, c) + D2 (c, d)} = min{∞,3+1 }=4
D3 (b, a) =min { D2 (b, a), D2(b, c) + D2(c, a)} = min {2 , 5+9 }=2
D3 (b, b) =min { D2 (b, b), D2 (b, c) + D2 (c, b)}= min {0 , 5+ 7 }=0
D3 (b, c) =min { D2 (b, c), D2(b, c) + D2(c, c)} = min {5 , 5+ 0 }=5
D3 (b, d) =min { D2 (b, d), D2 (b, c) + D2 (c, d)} = min {∞, 5+ 1}=6
For rows c and d, going through vertex c updates D3 (d, b) = min {∞, 9+7} = 16;
all other entries are unchanged.
D3 (i, j) =
      a    b    c    d
a     0   10    3    4
b     2    0    5    6
c     9    7    0    1
d     6   16    9    0
Step 4: Find D4
For k=4, i.e. going from i to j through vertex d
D4 (a, a) =min { D3 (a, a), D3(a, d) + D3 (d, a)} = min{0,4+6}=0
D4 (a, b) =min { D3 (a, b), D3 (a, d) + D3 (d, b)} = min{10,4+16}=10
D4 (a, c) =min { D3 (a, c), D3(a, d) + D3 (d, c)} = min {3,4+9 }=3
D4 (a, d) =min { D3 (a, d), D3 (a, d) + D3 (d, d)} = min{4,4+0 }=4
D4 (b, a) =min { D3 (b, a), D3(b, d) + D3(d, a)} = min {2 , 6+6 }=2
The remaining entries are computed in the same way. Going through vertex d updates
D4 (c, a) = min {9, 1+6} = 7; all other entries are unchanged.
D4 (i, j) =
      a    b    c    d
a     0   10    3    4
b     2    0    5    6
c     7    7    0    1
d     6   16    9    0
This final matrix gives the shortest distance from all nodes to all other nodes.
Knapsack problem
The recurrence relation to obtain the solution to the 0/1 knapsack problem using dynamic
programming is:
V[i, j] = 0                                          if i = 0 or j = 0
V[i, j] = V[i-1, j]                                  if wi > j
V[i, j] = max { V[i-1, j], V[i-1, j - wi] + pi }     if wi <= j
Here V[i, j] is the maximum profit obtainable using the first i items with knapsack capacity j,
wi is the weight of item i and pi is its profit.
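A minimal C sketch of filling this table and tracing back the chosen items; the instance of Example 1 below (N = 4, M = 5, weights 2, 1, 3, 2 and profits 12, 10, 20, 15) is used as illustrative input.
#include <stdio.h>

int main(void)
{
    int N = 4, M = 5;
    int w[] = { 0, 2, 1, 3, 2 };             /* weights of Example 1 below (index 0 unused) */
    int p[] = { 0, 12, 10, 20, 15 };         /* profits of Example 1 below */
    int V[5][6];                             /* V[i][j] = best profit with first i items, capacity j */

    for (int j = 0; j <= M; j++) V[0][j] = 0;          /* row 0: no items, profit 0 */
    for (int i = 1; i <= N; i++)
        for (int j = 0; j <= M; j++) {
            V[i][j] = V[i - 1][j];                     /* item i left out */
            if (w[i] <= j && V[i - 1][j - w[i]] + p[i] > V[i][j])
                V[i][j] = V[i - 1][j - w[i]] + p[i];   /* item i included */
        }

    printf("maximum profit = %d\n", V[N][M]);          /* prints 37 for this instance */

    /* traceback: item i was included exactly when V[i][j] differs from V[i-1][j] */
    for (int i = N, j = M; i > 0; i--)
        if (V[i][j] != V[i - 1][j]) {
            printf("include item %d\n", i);
            j -= w[i];
        }
    return 0;
}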
Example 1: Solve the knapsack instance given below using dynamic programming.
Solution: It is given that the number of items N = 4 and the capacity of the knapsack M = 5,
with weights w1 = 2, w2 = 1, w3 = 3, w4 = 2 and profits p1 = 12, p2 = 10, p3 = 20 and p4 = 15.
Step 1: Since N = 4 and M = 5, let us have a table with N+1 rows (i.e., 5 rows) and M+1
columns (i.e., 6 columns). We know from the above relation that whenever i = 0 there are no
items to select, and hence irrespective of the capacity of the knapsack the profit will be 0.
This is denoted by V[0, j] = 0 for 0 <= j <= M.
So V[0,0] = V[0,1] = V[0,2] = V[0,3] = V[0,4] = V[0,5] = 0
Similarly, whenever j = 0 the knapsack has no capacity, so irrespective of the number of items
considered nothing can be placed into it and the profit will be 0. This is denoted by
V[i, 0] = 0 for 0 <= i <= N.
So V[0,0] = V[1,0] = V[2,0] = V[3,0] = V[4,0] = 0
So, the table with N+1 rows and M+1 columns can be filled as shown below:
0 1 2 3 4 5
0 0 0 0 0 0 0
1 0
2 0
3 0
4 0
Step 2: Fill row i = 1 (item 1: w1 = 2, p1 = 12). For j < 2 the item does not fit, so
V[1, j] = V[0, j] = 0; for j >= 2, V[1, j] = max {V[0, j], V[0, j-2] + 12} = 12.
0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0
3 0
4 0
Step 3: Fill row i = 2 (item 2: w2 = 1, p2 = 10) using V[2, j] = max {V[1, j], V[1, j-1] + 10}.
0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0 10 12 22 22 22
3 0
4 0
Step 4: Fill row i = 3 (item 3: w3 = 3, p3 = 20) using V[3, j] = max {V[2, j], V[2, j-3] + 20} for j >= 3.
0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0 10 12 22 22 22
3 0 10 12 22 30 32
4 0
Step 5: Fill row i = 4 (item 4: w4 = 2, p4 = 15) using V[4, j] = max {V[3, j], V[3, j-2] + 15} for j >= 2.
0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0 10 12 22 22 22
3 0 10 12 22 30 32
4 0 10 15 25 30 37
The maximum value V[N, M] = V[4, 5] = 37, obtained after considering all N items, is the
value of the optimal solution.
Example 2:
Apply the dynamic programming algorithm to the following instance of the knapsack
problem: N = 5 items with weights w1 = 1, w2 = 2, w3 = 5, w4 = 6, w5 = 7, profits
p1 = 1, p2 = 6, p3 = 18, p4 = 22, p5 = 22 (as used in the computations below), and
knapsack capacity M = 11.
The [i, j] entry here will be V[i, j], the best value obtainable using the first i items
if the maximum capacity were j. We begin with the initialization and the first row.
0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0
3 0
4 0
5 0
0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0
4 0
5 0
V[3][1]=v[2][1]=1
V[3][2]=v[2][2]=6
V[3][3]=v[2][3]=7
V[3][4]=v[2][4]=7
V[3][5]=Max{v[2,5],v[2,0]+18}=18
V[3][6]=Max{v[2,6],v[2,1]+18}=19
V[3][7]=Max{v[2,7],v[2,2]+18}=24
V[3][8]=Max{v[2,8],v[2,3]+18}=25
V[3][9]=Max{v[2,9],v[2,4]+18}=25
V[3][10]=Max{v[2,10],v[2,5]+18}=25
V[3][11]=Max{v[2,11],v[2,6]+18}=25
0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0 1 6 7 7 18 19 24 25 25 25 25
4 0
5 0
0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0 1 6 7 7 18 19 24 25 25 25 25
4 0 1 6 7 7 18 22 24 28 29 29 40
5 0
Since w5 = 7, item 5 does not fit for j < 7, so V[5][j] = v[4][j] for j = 1 to 5:
V[5][1]=1, V[5][2]=6, V[5][3]=7, V[5][4]=7, V[5][5]=18
V[5][6]=v[4][6]=22
V[5][7]=Max{v[4,7],v[4,0]+22}=24
V[5][8]=Max{v[4,8],v[4,1]+22}=28
V[5][9]=Max{v[4,9],v[4,2]+22}=29
V[5][10]=Max{v[4,10],v[4,3]+22}=29
V[5][11]=Max{v[4,11],v[4,4]+22}=40
0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0 1 6 7 7 18 19 24 25 25 25 25
4 0 1 6 7 7 18 22 24 28 29 29 40
5 0 1 6 7 7 18 22 24 28 29 29 40
The maximum profit is V[5][11] = 40. To find the objects chosen, compare V[5][11] with the cell
above it, v[4][11]. As there is no change in value, object 5 is not included and we consider
v[4][11], which has produced the value 40. Comparing v[4][11] = 40 with v[3][11] = 25, the value
has changed, so object 4 is included. The maximum knapsack capacity is 11, so after including
object 4 (weight 6) the left out capacity is M = 11 - 6 = 5 and we check the value at v[3,5].
As v[3,5] = 18 differs from the previous cell value v[2,5] = 7, object 3 (weight 5) is included,
and the left out capacity becomes M = 5 - 5 = 0.
The solution is to include object 4 and object 3, producing the total maximum profit of 40.
Travelling Salesman Problem
Example 1: Find the minimum cost tour of the travelling salesman problem for the graph given
below using dynamic programming.
[Figure: complete directed graph on vertices 1, 2, 3, 4 with the edge costs given in the cost
matrix below.]
Solution:
Let the source vertex be 1. The cost matrix for the graph is:
C(i , j) 1 2 3 4
1 0 10 15 20
2 5 0 9 10
3 6 13 0 12
4 8 8 9 0
Step 1: Consider |s| = 0. In this case no intermediate node is considered, i.e. we return to
node 1 without going through any intermediate vertex. We use the formula
g(i, s) = min {c(i, j) + g(j, s - {j})}, where the minimum is taken over j ∈ s,
with the base case g(i, 0) = c(i, 1).
g(2,0)=c(2,1)=5 // path 2→1
g(3,0)=c(3,1)=6 // path 3→1
g(4,0)=c(4,1)=8 // path 4→1
Step 2: Consider |s| = 1. In this case one intermediate node is considered, i.e. we return to
node 1 through exactly one intermediate vertex.
g(2,{3})=c(2,3)+g(3,0)=9+6=15 // path 2→3→1
g(2,{4})=c(2,4)+g(4,0)=10+8=18 // path 2→4→1
g(3,{2})=c(3,2)+g(2,0)=13+5=18 // path 3→2→1
g(3,{4})=c(3,4)+g(4,0)=12+8=20 // path 3→4→1
g(4,{2})=c(4,2)+g(2,0)=8+5=13 // path 4→2→1
g(4,{3})=c(4,3)+g(3,0)=9+6=15 // path 4→3→1
Step 3: Consider |s| = 2. In this case two intermediate nodes are considered, i.e. we return to
node 1 through two intermediate vertices.
g(2,{3,4})=min{c(2,3)+g(3,{4}),c(2,4)+g(4,{3})}
=min {9+20,10+15}=min{29,25}=25
g(3,{2,4})=min{c(3,2)+g(2,{4}),c(3,4)+g(4,{2})}
=min {13+18, 12+13}=min{31,25}=25
g(4,{2,3})=min{c(4,2)+g(2,{3}),c(4,3)+g(3,{2})}
=min {8+15, 9+18} =min {23,27}=23
Step 4: Consider |s| = 3. In this case three intermediate nodes are considered, i.e. we return to
node 1 through three intermediate vertices.
g(1,{2,3,4})=min{c(1,2)+g(2,{3,4}),c(1,3)+g(3,{2,4}),c(1,4)+g(4,{2,3})}
=min {10+25, 15+25, 20+23}=min{35,40,43}=35
Therefore the optimal tour cost is 35. The minimum is attained by c(1,2)+g(2,{3,4});
expanding g(2,{3,4}) (minimum at c(2,4)+g(4,{3})) and then g(4,{3}) gives the tour 1→2→4→3→1.
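A minimal C sketch of the same computation using a bitmask to represent the set of visited vertices (the cost matrix of Example 1 above is the illustrative input). Note that this sketch grows partial tours forward from vertex 1 rather than using the g(i, s) form above, but it computes the same optimal tour cost.
#include <stdio.h>

#define N 4
#define INF 1000000

int min(int a, int b) { return a < b ? a : b; }

int main(void)
{
    /* cost matrix of Example 1 above (vertices 1..4 stored as indices 0..3) */
    int c[N][N] = { { 0, 10, 15, 20 },
                    { 5,  0,  9, 10 },
                    { 6, 13,  0, 12 },
                    { 8,  8,  9,  0 } };
    static int g[1 << N][N];   /* g[S][i] = min cost of visiting exactly the set S, ending at i */

    for (int S = 0; S < (1 << N); S++)
        for (int i = 0; i < N; i++)
            g[S][i] = INF;
    g[1][0] = 0;                                  /* initially only vertex 1 (index 0) is visited */

    for (int S = 1; S < (1 << N); S++)
        for (int i = 0; i < N; i++) {
            if (!(S & (1 << i)) || g[S][i] == INF) continue;
            for (int j = 0; j < N; j++)           /* extend the partial tour to a new vertex j */
                if (!(S & (1 << j)))
                    g[S | (1 << j)][j] = min(g[S | (1 << j)][j], g[S][i] + c[i][j]);
        }

    int best = INF;                               /* close the tour by returning to vertex 1 */
    for (int i = 1; i < N; i++)
        best = min(best, g[(1 << N) - 1][i] + c[i][0]);
    printf("optimal tour cost = %d\n", best);     /* prints 35 for this instance */
    return 0;
}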
Example 2:
Find the minimum cost tour of the travelling salesman problem for the below graph using
dynamic programming.
[Figure: complete graph on vertices 1, 2, 3, 4 with edge costs c(1,2) = 10, c(1,3) = 18,
c(1,4) = 20, c(2,3) = 6, c(2,4) = 12, c(3,4) = 18, as used in the computations below.]
g(2,0)=c(2,1)=10
g(3,0)=c(3,1)=18
g(4,0)=c(4,1)=20
g(2,{3})=c(2,3)+g(3,0)=6+18=24
g(2,{4})=c(2,4)+g(4,0)=12+20=32
g(3,{2})=c(3,2)+g(2,0)=6+10=16
g(3,{4})=c(3,4)+g(4,0)=18+20=38
g(4,{2})=c(4,2)+g(2,0)=12+10=22
g(4,{3})=c(4,3)+g(3,0)=18+18=36
g(2,{3,4})=min{c(2,3)+g(3,{4}),c(2,4)+g(4,{3})}
=min{6+38,12+36}=min{44,48}=44
g(3,{2,4})=min{c(3,2)+g(2,{4}),c(3,4)+g(4,{2})}
=min{6+32,18+22}=min{38,40}=38
g(4,{2,3})=min{c(4,2)+g(2,{3}),c(4,3)+g(3,{2})}
=min{12+24,18+16}=min{36,34}=34
g(1,{2,3,4})=min{c(1,2)+g(2,{3,4}),c(1,3)+g(3,{2,4}),c(1,4)+g(4,{2,3})}
=min{10+44,18+38,20+34}=min{54,56,54}=54
Find the tour path by expanding g(1,{2,3,4}) and following the minimizing terms; the minimum
of 54 is attained by two terms.
Path 1: c(1,2)+g(2,{3,4})
= c(1,2)+c(2,3)+g(3,{4})
= c(1,2)+c(2,3)+c(3,4)+g(4,0)
= c(1,2)+c(2,3)+c(3,4)+c(4,1)
Solution 1: 1→2→3→4→1
Path 2: c(1,4)+g(4,{2,3})
= c(1,4)+c(4,3)+g(3,{2})
= c(1,4)+c(4,3)+c(3,2)+g(2,0)
= c(1,4)+c(4,3)+c(3,2)+c(2,1)
Solution 2: 1→4→3→2→1
Both tours have the minimum cost of 54.
Flow Shop Scheduling
Example 1: Let n = 2 jobs be scheduled on 3 processors {P1, P2, P3}. The time required by
each task of these jobs is given by the following matrix (rows are processors, columns are jobs).
J =    Job 1      Job 2
P1    2 (T11)    0 (T12)
P2    3 (T21)    3 (T22)
P3    5 (T31)    2 (T32)
1. At time 0, processor P1 starts task T11, which needs 2 units of time. Since T12 needs 0 units,
processor P2 can immediately start T22, which needs 3 units of time.
Time 0 2
P1 T11
P2 T22
2. Before T22 can finish, it is preempted and processor P2 is given to T21 after 2 units of time,
as task T11 is completed. T21 needs 3 units of time.
Time 0 2 5
P1 T11
P2 T22 T21
3. When T21 is completed, T22 resumes and completes at the 6th unit of time. Once T21 is
completed, processor P3 can start T31, which needs 5 units of time. But T31 is preempted by
T32, which needs 2 units and completes at the 8th unit of time. After this, T31 continues for
its remaining 4 units of time.
Time 0 2 5 6 8 12
P1 T11
P2 T22 T21 T22
P3 T31 T32 T31
An alternative schedule, in which T22 runs to completion on P2 before T21 and T32 is scheduled
before T31 on P3, finishes at the 11th unit of time:
Time 0 2 3 5 6 11
P1 T11
P2 T22 T21
P3 T32 T31
Example 2:
Let n = 2 jobs be scheduled on 4 processors {P1, P2, P3, P4}.
The time required by each task of these jobs is given by the following matrix. Find the
finish time and mean finish time.
Solution: Given that there are 4 machines, and 2 jobs with the below details
       Job 1      Job 2
P1    3 (T11)    0 (T12)
P2    0 (T21)    3 (T22)
P3    4 (T31)    2 (T32)
P4    5 (T41)    2 (T42)
Preemptive scheduling:
The flow shop schedule for these tasks with the preemptive approach is as shown below,
with job 2 given priority.
Time 0 3 4 5 6 7 8 9 10 14 15 16
P1 T11
P2 T22
P3 T32 T31
P4 T42 T41
The flow shop schedule for these tasks with the non-preemptive approach is as shown below:
Time 0 3 4 7 8 9 10 12 13 14
P1 T11
P2 T22
P3 T31 T32
P4 T41 T42
Questions:
1. Define graph, tree, binary tree, complete binary tree, strict binary tree.
2. Define path, cycle, adjacency matrix, cost matrix, indegree, outdegree.
3. Explain the principle of optimality.
4. Explain the Floyd’s algorithm to find the shortest path between all pairs of vertices.
5. Write an algorithm for the multistage graph problem - forward approach.
6. Write an algorithm for the multistage graph problem - backward approach.
7. Explain flow shop scheduling with an example.
8. Solve all pairs shortest path problem for the directed graph shown below using
Floyd’s Algorithm
9. Consider the below multistage graph and find the minimum cost from S to T using
forward approach
10. Consider the below multistage graph and find the minimum cost from 1 to 9 using
backward approach