Chapter 4: Dynamic Programming


Graphs
A graph is a non-linear data structure consisting of nodes and edges. The nodes are sometimes
also referred to as vertices, and the edges are lines or arcs that connect any two nodes in the
graph. A graph is therefore a finite set of vertices connected by a set of edges.
Undirected graphs have edges that do not have a direction. The edges indicate a two-way
relationship, in that each edge can be traversed in both directions. Directed graphs have
edges with direction. The edges indicate a one-way relationship, in that each edge can only be
traversed in a single direction.
Graphs are used to solve many real-life problems and to represent networks such as road
networks in a city, telephone networks, and circuit networks. Graphs are also used in social
networks like LinkedIn and Facebook. For example, in Facebook each person is represented
by a vertex (or node); each node is a structure containing information such as person id,
name, gender and locale.

Dynamic Programming
Dynamic programming is a technique used to solve many types of problems in time O(n²)
or O(n³) for which a naive approach would take exponential time. Dynamic programming is
typically applied to optimization problems. It is an algorithmic paradigm that solves a given
complex problem by breaking it into subproblems and storing the results of the subproblems
to avoid computing the same results again. Two main properties of a problem suggest that it
can be solved using dynamic programming:
1) Overlapping Subproblems
2) Optimal Substructure

Overlapping Subproblems:
Like divide and conquer, the dynamic programming technique combines the solutions of
sub-problems. Dynamic programming is mainly used when the solutions of the same subproblems
are needed again and again. The computed solutions to subproblems are therefore stored in a
table so that they do not have to be recomputed. Consequently, dynamic programming is not
useful when there are no common (overlapping) subproblems, because there is no point in
storing solutions that are not needed again.
For example, binary search has no common subproblems. But if we consider the following
recursive program for Fibonacci numbers, there are many subproblems which are solved again
and again.
int fib(int n)
{
    if (n <= 1) return n;            /* base cases: fib(0) = 0, fib(1) = 1 */
    return fib(n-1) + fib(n-2);      /* the same subproblems are recomputed many times */
}

Optimal Substructure
A problem is said to have optimal substructure if an optimal solution can be constructed
efficiently from optimal solutions of its subproblems. This property is used to determine the
usefulness of dynamic programming and greedy algorithms for a problem. There are two
ways of implementing a dynamic programming solution:
1) Top-Down : Start solving the given problem by breaking it down. If you see that the
problem has been solved already, then just return the saved answer. If it has not been solved,
solve it and save the answer. This is usually easy to think of and very intuitive. This is
referred to as Memoization.
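
A minimal top-down sketch of this idea for the Fibonacci program above: results are memoized
in a table so each value is computed only once. The table size MAXN and the initialisation
step are illustrative assumptions, not part of the original example.

#define MAXN 1000
static long long memo[MAXN];                  /* memo[i] = fib(i), or -1 if not yet computed */

long long fib_memo(int n)
{
    if (n <= 1) return n;                     /* base cases */
    if (memo[n] != -1) return memo[n];        /* already solved: just return the saved answer */
    memo[n] = fib_memo(n-1) + fib_memo(n-2);  /* solve it once and save the answer */
    return memo[n];
}

/* Before the first call, fill memo with -1, e.g. memset(memo, -1, sizeof memo) from <string.h>. */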


2) Bottom-Up : Analyze the problem, determine the order in which the sub-problems should be
solved, and start solving from the trivial subproblem up towards the given problem. In this
process it is guaranteed that the subproblems are solved before the problems that depend on
them. This is what is usually referred to as Dynamic Programming (tabulation). Note that
divide and conquer is a slightly different technique: there we divide the problem into
non-overlapping subproblems and solve them independently, as in merge sort and quicksort.
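
A matching bottom-up sketch for the same Fibonacci computation, working from the trivial
subproblems fib(0) and fib(1) up to fib(n):

long long fib_bottom_up(int n)
{
    if (n <= 1) return n;                     /* trivial subproblems */
    long long prev = 0, curr = 1;             /* fib(0) and fib(1) */
    for (int i = 2; i <= n; i++) {
        long long next = prev + curr;         /* fib(i) = fib(i-1) + fib(i-2) */
        prev = curr;
        curr = next;
    }
    return curr;                              /* fib(n) */
}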
Principle of optimality: The principle of optimality states that no matter what the first
decision is, the remaining decisions must be optimal with respect to the state that results from
this first decision. This principle implies that an optimal decision sequence is composed of
optimal decision subsequences. Since the principle of optimality may not hold for some
formulations of some problems, it is necessary to verify that it does hold for the problem
being solved. Dynamic programming cannot be applied when this principle does not hold.

In dynamic programming the solution to a problem is the result of a sequence of decisions:
at every stage we make a decision so as to obtain an optimal solution. This method is effective
when a given subproblem may arise from more than one partial set of choices. Using this method
an exponential time algorithm may be brought down to a polynomial time algorithm, because it
reduces the amount of enumeration by avoiding the enumeration of decision sequences that cannot
possibly be optimal. A dynamic programming algorithm solves every subproblem just once
and then saves its answer in a table, thereby avoiding the work of recomputing the answer
every time the subproblem is encountered.
The basic steps of dynamic programming are:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from the computed information.
Principle of Optimality
The principle states that an optimal sequence of decisions has the property that,
whatever the initial state and decision are, the remaining decisions must constitute an
optimal decision sequence with regard to the state resulting from the first decision.

Multistage Graphs
A multistage graph is a graph G = (V, E) with V partitioned into k >= 2 disjoint subsets
V1, V2, ..., Vk such that:
- if an edge (u, v) is in E, then u is in Vi and v is in Vi+1 for some i, 1 <= i < k;
- the number of vertices in V1 and Vk is one, i.e. |V1| = |Vk| = 1. The node in the first
set V1 is the source and the node in the last set Vk is the destination. Each set Vi is called
a stage in the graph.
The "multistage graph problem" is to find the minimum cost path from the source to the
destination. There are two approaches.
1. Multistage graph-Forward approach
2. Multistage graph –Backward approach.
1. Multistage graph - Forward approach
A dynamic programming solution for a k-stage graph is obtained by first noticing that every
path from source to destination is the result of a sequence of k-2 decisions. The i-th decision
involves determining which vertex in stage Vi+1, where 1 <= i <= k-2, is to be on the path.
Let cost(i, j) be the cost of the minimum cost path from vertex j in Vi to vertex T (the
destination vertex). Then
cost(i, j) = min{ c(j, L) + cost(i+1, L) }
where L ∈ Vi+1 and (j, L) ∈ E.


Here j denotes a vertex in stage i and L denotes a vertex in stage i+1.

Example 1: Find the minimum cost path from the start node S to the terminal node T
using dynamic programming (Multi stage graph- forward approach).
[Figure: four-stage graph with edges S-A=1, S-B=2, S-C=5, A-D=4, A-E=11, B-D=9, B-E=5, B-F=16, C-F=2, D-T=18, E-T=13, F-T=2]

We have 4 stages and the value of k = 4. Let Lnode be the node that minimizes the cost in
each step.

Step 1: Calculate the minimum cost at stage 3. Start with the (k-1)th stage (3rd stage) to find
the cost from the (k-1)th stage to the kth stage.
Cost (3, D) = cost of travelling from vertex D in stage 3 to vertex T (destination node) = 18
Cost (3, E) = cost of travelling from vertex E in stage 3 to vertex T (destination node) = 13
Cost (3, F) = cost of travelling from vertex F in stage 3 to vertex T (destination node) = 2

Step 2: Calculate the minimum cost at stage 2.


Cost (2, A) = cost of travelling from vertex A in stage 2 to vertex T (destination node)
= min{c(A,D)+cost(3,D),c(A,E)+cost(3,E)}
=min {4+18, 11+13} = min {22, 24} = 22
Lnode = D

Cost (2, B) = cost of travelling from vertex B in stage 2 to vertex T (destination node)
=min {c(B,D)+cost(3,D),c(B,E)+cost(3,E),c(B,F)+cost(3,F)}
=min {9+18, 5+13, 16+2} = min {27, 18, 18} = 18
Lnode = E

Cost (2, C) = cost of travelling from vertex C in stage 2 to vertex T (destination node)
=min{c(C, F) + cost (3, F)}
=min {2+2} = 4
Lnode=F

Step 3: Calculate the minimum cost at stage 1


Cost (1, S) = cost of travelling from vertex S in stage 1 to vertex T (destination node)


= min{c(S,A)+cost(2,A), c(S,B)+cost(2,B), c(S,C)+cost(2,C)}
= min {1+22, 2+18, 5+4} = min {23, 20, 9} = 9
Lnode = C

Determine the path as follows.


The minimum cost path is: V1 (Source vertex), V2, V3……..Vk-1, Vk (destination
vertex)
V1=S
V2 = Lnode of cost (1, S) = C
V3 = Lnode of cost (2, C) = F
V4 = T

Therefore the path is SCFT

Example 2: Find the minimum cost path from the start node 1 to the terminal node 7
using dynamic programming (Multi stage graph- forward approach).

[Figure: four-stage graph with edges 1-2=9, 1-3=7, 1-4=3, 2-5=2, 2-6=4, 3-5=6, 3-6=3, 4-5=8, 4-6=5, 5-7=5, 6-7=6]

Solution: We have 4 stages and hence the value of k=4

Step 1: Consider the (k-1)th stage, i.e. the (4-1) = 3rd stage.

Cost (3, 5) = cost of travelling from vertex 5 in stage 3 to terminal node 7 =5


Cost (3, 6) = cost of travelling from vertex 6 in stage 3 to terminal node 7 =6

Step2: Consider 2nd stage

Cost (2, 2) = cost of travelling from vertex 2 in stage 2 to terminal node 7


= min{c(2,5)+cost(3,5),c(2,6)+cost(3,6)}
= min {2+5, 4+6} = 7
Lnode = 5


cost(2,3) = min{c(3,5)+cost(3,5),c(3,6)+cost(3,6)}
= min{6+5,3+6}=9
Lnode = 6

cost(2,4) = min{c(4,5)+cost(3,5),c(4,6)+cost(3,6)}
= min {8+5, 5+6}=11
Lnode = 6

Step 3: Consider the 1st stage

cost(1,1) = min{c(1,2)+cost(2,2),c(1,3)+cost(2,3),c(1,4)+cost(2,4)}
= min {9+7, 7+9,3+11}=14
Lnode = 4

Determine the path


Let the minimum cost path be V1, V2, V3, ..., Vk-1, Vk.

V1=1
v2=Lnode of cost (1, 1)=4
v3=Lnode of cost (2, 4) = 6
V4=7
Therefore the path is 1→4→6→7

Algorithm for Multistage graph –Forward approach


Algorithm: Fgraph (G, k, n, P)
Let G (V, E) be the directed k-stage weighted graph with n vertices, numbered in stage
order so that vertex 1 is the source and vertex n is the destination. c[i, j] is the cost
matrix and P[1:k] returns the minimum cost path.
Step 1: cost[n] = 0 // Cost of the destination vertex is 0.
Step 2: Compute cost[j] from stage k-1 down to stage 1
for j = n-1 to 1 step -1 do
{
Let r be a vertex such that (j, r) is an edge of G and
c[j, r] + cost[r] is minimum
cost[j] = c[j, r] + cost[r];
d[j] = r;   // Lnode
}
Step 3: Find the minimum cost path
P[1] = 1   // source vertex
P[k] = n   // destination vertex
for j = 2 to k-1 do
P[j] = d[P[j-1]];
Step 4: exit
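
A minimal C sketch of the forward approach, assuming the vertices are numbered 1..n in stage
order (vertex 1 is the source, vertex n the destination), that every non-destination vertex has
at least one outgoing edge, and that INF marks a missing edge; the function and array names are
illustrative.

#define MAXV 64
#define INF  1000000000

/* c[i][j] = edge cost, or INF if there is no edge (i, j); P[1..k] receives the path. */
int fgraph(int n, int k, int c[MAXV][MAXV], int P[])
{
    int cost[MAXV], d[MAXV];
    cost[n] = 0;                              /* cost of the destination vertex */
    for (int j = n - 1; j >= 1; j--) {        /* stages k-1 down to 1 */
        cost[j] = INF;
        for (int r = j + 1; r <= n; r++)      /* choose r minimizing c[j][r] + cost[r] */
            if (c[j][r] != INF && cost[r] != INF && c[j][r] + cost[r] < cost[j]) {
                cost[j] = c[j][r] + cost[r];
                d[j] = r;                     /* Lnode of vertex j */
            }
    }
    P[1] = 1;  P[k] = n;                      /* source and destination */
    for (int j = 2; j <= k - 1; j++)
        P[j] = d[P[j - 1]];                   /* follow the stored Lnodes forward */
    return cost[1];                           /* minimum cost from source to destination */
}

Applied to Example 1 above with the vertices numbered S=1, A=2, B=3, C=4, D=5, E=6, F=7, T=8,
this should return 9 and fill P with the path S, C, F, T.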


2. Multistage graph-Backward approach

The multistage graph problem can also be solved using a backward approach. Let cost(i, j) be
the cost of the minimum cost path from the source vertex S to vertex j in Vi. Using the
backward approach,

cost(i, j) = min{ cost(i-1, L) + c(L, j) }

where L ∈ Vi-1 and (L, j) ∈ E.

Here j denotes a vertex in stage i and L denotes a vertex in stage i-1.

Example 1: Find the minimum cost path from the start node S to the terminal node T
using dynamic programming (Multi stage graph- backward approach).
[Figure: the same four-stage graph as in Example 1 of the forward approach (edges S-A=1, S-B=2, S-C=5, A-D=4, A-E=11, B-D=9, B-E=5, B-F=16, C-F=2, D-T=18, E-T=13, F-T=2)]

Step 1: Calculating the minimum cost in stage 1


cost (1,S) = 0

Step 2: Calculating minimum cost from stage 2


cost (2,A) = cost(1,S) + c (S , A) = 0 + 1=1
cost (2,B) = cost(1,S) + c (S , B) = 0 + 2=2
cost (2,C) = cost(1,S) + c (S , C) = 0 + 5=5

Step 3: Calculating minimum cost from stage 3


cost (3,D) = min{cost(2,A)+c(A,D) , cost(2,B) + c(B,D) }
=min {1+4, 2+9} = min {5, 11} =5
Lnode= A
cost (3,E) = min{cost(2,A)+c(A,E) , cost(2,B) + c(B,E) }
=min {1+11, 2+5} = min {12, 7} =7
Lnode= B
cost (3,F) = min{cost(2,B)+c(B,F) , cost(2,C) + c(C,F) }
=min {2+16, 5+2} = min {18, 7} = 7
Lnode= C

Step 4: Calculating minimum cost from stage 4


cost(4,T)=min{ cost(3,D)+c(D,T), cost(3,E)+c(E,T), cost(3,F)+c(F,T)}
=min {5+18, 7+13, 7+2} =min {23, 20, 9} = 9


Lnode = F

Determine the minimum cost path as follows:


Minimum cost of travelling from source vertex S to destination vertex T is 9 (obtained
from stage 4).
From stage 4 the Lnode is F.
From stage 3, track the Lnode of cost(3, F) = C.

Therefore the path is S → C → F → T.

Example 2: Find the minimum cost path from the start node 1 to the terminal node 7
using dynamic programming (Multi stage graph- backward approach).

[Figure: the same four-stage graph as in Example 2 of the forward approach (edges 1-2=9, 1-3=7, 1-4=3, 2-5=2, 2-6=4, 3-5=6, 3-6=3, 4-5=8, 4-6=5, 5-7=5, 6-7=6)]

Step 1: Calculating minimum cost from stage 1


cost (1, 1)=0

Step 2: Calculating minimum cost from stage 2


cost (2, 2) = cost(1, 1) + c(1, 2)= 0 + 9=9
cost (2, 3) = cost(1, 1) +c(1, 3) = 0 + 7=7
cost (2, 4) = cost(1, 1) + c(1,4) = 0 + 3 =3

Step 3: Calculating minimum cost from stage 3


cost(3,5) = min{cost(2,2)+c(2,5) , cost(2,3)+c(3,5), cost(2,4)+c(4,5)}
=min {9+2, 7+6, 3+8} =min {11, 13, 11} =11
Lnode=2
cost(3,6) = min{cost(2,2)+c(2,6) , cost(2,3)+c(3,6),cost(2,4)+c(4,6)}
=min {9+4, 7+3, 3+5} =min {13, 10, 8} =8
Lnode=4

Step 4: Calculating minimum cost from stage 4


Cost(4,7) = min{cost(3,5)+c(5,7),cost(3,6)+c(6,7)}
=min {11+5, 8 +6} = min {16, 14} = 14
Lnode = 6


Determine the minimum cost path as follows:


Minimum cost of travelling from source vertex 1 to destination vertex 7 is 14(obtained
from stage 4)
From stage 4 the Lnode that gives the minimum cost of 14 is 6
From stage 3 track the Lnode of cost (3,6) = 4
Therefore the path is 1467

Example 3:
Find the minimum cost path for the below multistage graph using the backward approach.
[Figure: the edge costs used in the calculation are c(S,A)=1, c(S,B)=2, c(S,C)=7, c(A,D)=3, c(B,D)=4, c(A,E)=6, c(B,E)=10, c(C,E)=3, c(D,T)=8, c(E,T)=2, c(C,T)=10]

cost(S,A) = c(S,A) = 1
cost(S,B) = c(S,B) = 2
cost(S,C) = c(S,C) = 7
cost(S,D) = min{cost(S,A)+c(A,D), cost(S,B)+c(B,D)} = min{1+3, 2+4} = 4
cost(S,D) = 4, i.e. path S-A-D is minimum
cost(S,E) = min{cost(S,A)+c(A,E), cost(S,B)+c(B,E), cost(S,C)+c(C,E)}
          = min{1+6, 2+10, 7+3}
          = min{7, 12, 10}
cost(S,E) = 7, i.e. path S-A-E is chosen
cost(S,T) = min{cost(S,D)+c(D,T), cost(S,E)+c(E,T), cost(S,C)+c(C,T)}
          = min{4+8, 7+2, 7+10} = min{12, 9, 17} = 9
cost(S,T) = 9, i.e. the term cost(S,E)+c(E,T) is chosen
          = cost(S,E) + c(E,T)
          = cost(S,A) + c(A,E) + c(E,T)
          = c(S,A) + c(A,E) + c(E,T)
Therefore the path is S-A-E-T with a minimum cost of 9.



Algorithm for Multistage graph –Backward approach


Algorithm: Bgraph (G, k, n, P)
Let G (V, E) be the directed k-stage weighted graph with n vertices, numbered in stage
order so that vertex 1 is the source and vertex n is the destination. c[i, j] is the cost
matrix and P[1:k] returns the minimum cost path.
Step 1: bcost[1] = 0 // Cost of the source vertex is 0.
Step 2: Compute bcost[j] from the 2nd stage up to the kth stage
for j = 2 to n do
{
Let r be the vertex such that (r, j) is an edge of G and
bcost[r] + c[r, j] is minimum
bcost[j] = bcost[r] + c[r, j];
d[j] = r;   // Lnode
}
Step 3: Find the minimum cost path
P[1] = 1   // source vertex
P[k] = n   // destination vertex
for j = k-1 to 2 step -1 do
P[j] = d[P[j+1]];
Step 4: exit
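
A matching C sketch for the backward approach, reusing the MAXV and INF conventions from the
forward sketch above and assuming every vertex other than the source has at least one incoming
edge:

/* Backward approach: bcost[j] = minimum cost from the source (vertex 1) to vertex j. */
int bgraph(int n, int k, int c[MAXV][MAXV], int P[])
{
    int bcost[MAXV], d[MAXV];
    bcost[1] = 0;                             /* cost of the source vertex */
    for (int j = 2; j <= n; j++) {            /* stages 2 up to k */
        bcost[j] = INF;
        for (int r = 1; r < j; r++)           /* choose r minimizing bcost[r] + c[r][j] */
            if (c[r][j] != INF && bcost[r] != INF && bcost[r] + c[r][j] < bcost[j]) {
                bcost[j] = bcost[r] + c[r][j];
                d[j] = r;                     /* Lnode of vertex j */
            }
    }
    P[1] = 1;  P[k] = n;                      /* source and destination */
    for (int j = k - 1; j >= 2; j--)
        P[j] = d[P[j + 1]];                   /* trace the Lnodes back from the destination */
    return bcost[n];                          /* minimum cost from source to destination */
}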

All Pairs shortest Path – FLOYD’s Algorithm


Floyd-Warshall Algorithm is used for solving the all pairs shortest path problem. The
problem is to find the shortest distances between every pair of vertices in a given
edge-weighted directed graph. The solution matrix is initialized with the given graph's cost
matrix. Then the solution matrix is updated by considering each vertex in turn as an
intermediate vertex: we pick the vertices one by one and update all shortest paths that include
the picked vertex as an intermediate vertex. When we pick vertex k as an intermediate vertex,
we have already considered vertices {1, 2, ..., k-1} as intermediate vertices. For every pair
(i, j) of source and destination vertices, there are two possible cases:
1) k is not an intermediate vertex in the shortest path from i to j. We keep the value of
dist[i][j] as it is.
2) k is an intermediate vertex in the shortest path from i to j. We update dist[i][j] to
dist[i][k] + dist[k][j] if dist[i][j] > dist[i][k] + dist[k][j].
Let G = (V, E) be a directed graph where V is the set of vertices and E is the set of edges.
Let cost be the cost adjacency matrix for the graph G such that
1. cost[i , j] = 0 if i = j, for 1 ≤ i ≤ n
2. cost[i , j] is the cost associated with the edge (i, j) if there is an edge from i to j,
where i ≠ j
3. cost[i , j] = ∞ if there is no edge from i to j.
The all pairs shortest path problem is to determine a matrix D such that D(i, j)
contains the shortest distance from i to j.

Formula:
Dk [i, j] =min {Dk-1[i, j], Dk-1[i, k] +Dk-1[k, j]}

where Dk represents the matrix D after the kth iteration. Initially D0 = C (the cost matrix);
i is the source vertex, j is the destination vertex and k represents the intermediate vertex
being considered.

Floyd’s Algorithm: All Pair shortest paths problem


Algorithm Floyd (n, cost, D)

//Input: cost adjacency matrix of the graph G (V,E)


//Output: Shortest distance matrix D

1. Make a copy of cost adjacency matrix

for i ←1 to n do
for j←1 to n do
D[i , j] = cost [i ,j]
endfor
endfor

2. Find the shortest distances from all nodes to all other nodes

for k←1 to n do
for i←1 to n do
for j←1 to n do
D[i , j] = min ( D[i , j],D[i ,k] + D[k,j])
endfor
endfor
endfor

3. Finished

Return
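
A compact C sketch of Floyd's algorithm under the convention above, with INF standing for ∞;
the guard against adding two INF values is an implementation detail, not part of the
pseudocode, and the bound NV is an illustrative assumption.

#define NV   8                     /* maximum number of vertices; adjust as needed */
#define INF  1000000000            /* stands for "no edge" (∞) */

/* Computes D[i][j] = shortest distance from i to j, starting from the cost matrix. */
void floyd(int n, int cost[NV][NV], int D[NV][NV])
{
    for (int i = 0; i < n; i++)            /* 1. copy the cost adjacency matrix */
        for (int j = 0; j < n; j++)
            D[i][j] = cost[i][j];

    for (int k = 0; k < n; k++)            /* 2. allow vertex k as an intermediate vertex */
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (D[i][k] != INF && D[k][j] != INF && D[i][k] + D[k][j] < D[i][j])
                    D[i][j] = D[i][k] + D[k][j];
}

With n = 3 and the cost adjacency matrix of Example 1 below (∞ replaced by INF), this should
reproduce the final matrix obtained in Step 3.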
Example 1: Solve all pairs shortest path problem for the directed graph shown below
[Figure: directed graph on vertices V1, V2, V3 with edges V1→V2=4, V1→V3=11, V2→V1=6, V2→V3=2, V3→V1=3]

Solution: The cost adjacency matrix for the given graph is


D0 (i, j) = cost (i, j) =
        V1   V2   V3
   V1    0    4   11
   V2    6    0    2
   V3    3    ∞    0

Step 1: Find D1
For k=1 ie. Going from i to j through vertex V1
D1 (V1, V1) =min { D0 (V1, V1), D0 (V1, V1) + D0 (V1, V1)} =min {0 ,0+0}=0
D1 (V1, V2) =min { D0 (V1, V2), D0 (V1, V1) + D0 (V1, V2)} = min {4 , 0+ 4 }=4
D1 (V1, V3) =min { D0 (V1, V3), D0 (V1, V1) + D0 (V1, V3)} = min {11, 0+11 }=11

D1 (V2, V1) =min { D0 (V2, V1), D0 (V2, V1) + D0 (V1, V1)} =min { 6 , 6+0 }=6
D1 (V2, V2) =min { D0 (V2, V2), D0 (V2, V1) + D0 (V1, V2)} = min {0 , 6+ 4 }=0
D1 (V2, V3) =min { D0 (V2, V3), D0 (V2, V1) + D0 (V1, V3)} = min {2, 6+11 }=2

D1 (V3, V1) =min { D0 (V3, V1), D0 (V3, V1) + D0 (V1, V1)} =min { 3 , 3+0 }=3
D1 (V3, V2) =min { D0 (V3, V2), D0 (V3, V1) + D0 (V1, V2)} = min {∞ , 3+ 4 }=7
D1 (V3, V3) =min { D0 (V3, V3), D0 (V3, V1) + D0 (V1, V3)} = min {0, 3+11 }=0

V1 V2 V3
V1 0 4 11
D1 (i, j) = V2 6 0 2
V3 3 7 0
Step 2: Find D2
For k=2 ie. Going from i to j through vertex V2
D2 (V1, V1) =min { D1 (V1, V1), D0 (V1, V2) + D0 (V2, V1)} =min {0 ,4+6}=0
D2 (V1, V2) =min { D1 (V1, V2), D0 (V1, V2) + D0 (V2, V2)} = min {4, 4+ 0 }=4
D2 (V1, V3) =min { D1 (V1, V3), D0 (V1, V2) + D0 (V2, V3)} = min {11, 4+2 }=6

D2 (V2, V1) =min { D1 (V2, V1), D0 (V2, V2) + D0 (V2, V1)} =min { 6 , 0+6 }=6
D2 (V2, V2) =min { D1 (V2, V2), D0 (V2, V2) + D0 (V2, V2)} = min {0 , 0+ 0 }=0
D2 (V2, V3) =min { D1 (V2, V3), D0 (V2, V2) + D0 (V2, V3)} = min {2, 0+2 }=2

D2 (V3, V1) =min { D1 (V3, V1), D0 (V3, V2) + D0 (V2, V1)} =min { 3 , 7+6 }=3
D2 (V3, V2) =min { D1 (V3, V2), D0 (V3, V2) + D0 (V2, V2)} = min {7 , 7+ 0 }=7
D2 (V3, V3) =min { D1 (V3, V3), D0 (V3, V2) + D0 (V2, V3)} = min {0, 7+2 }=0

V1 V2 V3
D2 (i, j) = V1 0 4 6
V2 6 0 2
V3 3 7 0


Step 3: Find D3
For k=3 ie. Going from i to j through vertex V3
D3 (V1, V1) =min { D2 (V1, V1), D2 (V1, V3) + D2 (V3, V1)} =min {0 ,6+3}=0
D3 (V1, V2) =min { D2 (V1, V2), D2 (V1, V3) + D2 (V3, V2)} = min {4, 6+ 7} =4
D3 (V1, V3) =min { D2 (V1, V3), D2 (V1, V3) + D2 (V3, V3)} = min {6, 6+0} =6

D3 (V2, V1) =min { D2 (V2, V1), D2(V2, V3) + D2(V3, V1)} =min { 6 , 2+3 }=5
D3 (V2, V2) =min { D2 (V2, V2), D2 (V2, V3) + D2 (V3, V2)} = min {0, 2+7} =0
D3 (V2, V3) =min { D2 (V2, V3), D2 (V2, V3) + D2 (V3, V3)} = min {2, 2+0} =2

D3 (V3, V1) =min { D2 (V3, V1), D2(V3, V3) + D2(V3, V1)} =min { 3 , 0+3 }=3
D3 (V3, V2) =min { D2 (V3, V2), D2 (V3, V3) + D2 (V3, V2)} = min {7, 0+ 7} =7
D3 (V3, V3) =min { D2 (V3, V3), D2 (V3, V3) + D2 (V3, V3)} = min {0, 0+0} =0

V1 V2 V3
D3 (i, j) = V1 0 4 6
V2 5 0 2
V3 3 7 0

This final matrix gives the shortest distance from all nodes to all other nodes.

Example 2: Solve all pairs shortest path problem for the directed graph shown below

[Figure: directed graph on vertices a, b, c, d with edges a→c=3, b→a=2, c→b=7, c→d=1, d→a=6]

Solution: The cost adjacency matrix for the given graph is

a b c d
D0 (i, j) = cost (i, j) = a 0 ∞ 3 ∞
b 2 0 ∞ ∞
c ∞ 7 0 1
d 6 ∞ ∞ 0

Step 1: Find D1
For k=1 ie. Going from i to j through vertex a
D1 (a, a) =min { D0 (a, a), D0 (a, a) + D0 (a, a)} =min { 0 ,0+0}=0
D1 (a, b) =min { D0 (a, b), D0 (a, a) + D0 (a, b)} = min {∞ , 0+ ∞ }=∞
D1 (a, c) =min { D0 (a, c), D0 (a, a) + D0 (a, c)} = min {3, 0+3 }=3
D1 (a, d) =min { D0 (a, d), D0 (a, a) + D0 (a, d)} = min {∞ , 0+ ∞ }=∞


D1 (b, a) =min { D0 (b, a), D0 (b, a) + D0 (a, a)} = min {2 , 2+0}=2


D1 (b, b) =min { D0 (b, b), D0 (b, a) + D0 (a, b)} = min {0 , 2+ ∞ }=0
D1 (b, c) =min { D0 (b, c), D0 (b, a) + D0 (a, c)} = min {∞ , 2+ 3 }=5
D1 (b, d) =min { D0 (b, d), D0 (b, a) + D0 (a, d)} = min {∞ , 2+ ∞ }=∞

D1 (c, a) =min { D0 (c, a), D0 (c, a) + D0 (a, a)} = min {∞ , ∞+0}= ∞


D1 (c, b) =min { D0 (c, b), D0 (c, a) + D0 (a, b)} = min {7 , ∞+ ∞ }=7
D1 (c, c) =min { D0 (c, c), D0 (c, a) + D0 (a, c)} = min {0 , ∞+ 3 }=0
D1 (c, d) =min { D0 (c, d), D0 (c, a) + D0 (a, d)} = min {1 , ∞+ ∞ }=1

D1 (d, a) =min { D0 (d, a), D0 (d, a) + D0 (a, a)} = min { 6 , 6+0}= 6


D1 (d, b) =min { D0 (d, b), D0 (d, a) + D0 (a, b)} = min { ∞ , 6+ ∞ }= ∞
D1 (d, c) =min { D0 (d, c), D0 (d, a) + D0 (a, c)} = min { ∞ , 6+ 3 }=9
D1 (d, d) =min { D0 (d, d), D0 (d, a) + D0 (a, d)} = min { 0 , 6+ ∞ }=0

a b c d
D1 (i, j) = a 0 ∞ 3 ∞
b 2 0 5 ∞
c ∞ 7 0 1
d 6 ∞ 9 0

Step 2: Find D2
For k=2 ie. Going from i to j through vertex b
D2 (a, a) =min { D1 (a, a), D1 (a, b) + D1 (b, a)} = min{0,∞+2}=0
D2 (a, b) =min { D1 (a, b), D1 (a, b) + D1 (b, b)} = min{∞,∞+0}=∞
D2 (a, c) =min { D1 (a, c), D1(a, b) + D1 (b, c)} = min {3,∞+5 }=3
D2 (a, d) =min { D1 (a, d), D1 (a, b) + D1 (b, d)} = min{∞,∞+∞ }=∞

D2 (b, a) =min { D1 (b, a), D1(b, b) + D1 (b, a)} = min {2 , 0+2 }=2
D2 (b, b) =min { D1 (b, b), D1 (b, b) + D1 (b, b)}= min {0 , 0+ 0 }=0
D2 (b, c) =min { D1 (b, c), D1(b, b) + D1(b, c)} = min {5 , 0+ 5 }=5
D2 (b, d) =min { D1 (b, d), D1 (b, b) + D1 (b, d)} = min {∞, 0+ ∞}=∞

D2 (c, a) =min { D1(c, a), D1 (c, b) + D1 (b, a)} = min {∞ ,7+2}= 9


D2 (c, b) =min { D1 (c, b), D1 (c, b) + D1 (b, b)} = min {7 , 7+ 0 }=7
D2 (c, c) =min { D1 (c, c), D1 (c, b) + D1 (b, c)} = min {0 , 7+ 5 }= 0
D2 (c, d) =min { D1 (c, d), D1 (c, b) + D1 (b, d)} = min {1, 7+ ∞ }=1

D2 (d, a) =min { D1 (d, a), D1 (d, b) + D1 (b, a)} = min { 6 , ∞+2}= 6


D2 (d, b) =min { D1 (d, b), D1 (d, b) + D1 (b, b)} = min { ∞ ,∞+0}= ∞
D2 (d, c) =min { D1 (d, c), D1 (d, b) + D1 (b, c)} = min { 9 , ∞+5}= 9
D2 (d, d) =min { D1 (d, d), D1 (d, b) + D1 (b, d)} = min { 0 , ∞+∞}= 0


a b c d
A 0 ∞ 3 ∞
D2 (i, j) = b 2 0 5 ∞
c 9 7 0 1
d 6 ∞ 9 0

Step 3: Find D3
For k=3 ie. Going from i to j through vertex c
D3 (a, a) =min { D2 (a, a), D2 (a, c) + D2 (c, a)} = min{0,3+9}=0
D3 (a, b) =min { D2 (a, b), D2 (a, c) + D2 (c, b)} = min{∞,3+7}=10
D3 (a, c) =min { D2 (a, c), D2(a, c) + D2 (c, c)} = min {3,3+0 }=3
D3 (a, d) =min { D2 (a, d), D2 (a, c) + D2 (c, d)} = min{∞,3+1 }=4

D3 (b, a) =min { D2 (b, a), D2(b, c) + D2(c, a)} = min {2 , 5+9 }=2
D3 (b, b) =min { D2 (b, b), D2 (b, c) + D2 (c, b)}= min {0 , 5+ 7 }=0
D3 (b, c) =min { D2 (b, c), D2(b, c) + D2(c, c)} = min {5 , 5+ 0 }=5
D3 (b, d) =min { D2 (b, d), D2 (b, c) + D2 (c, d)} = min {∞, 5+ 1}=6

D3 (c, a) =min { D2(c, a), D2 (c, c) + D2 (c, a)} = min {9 ,0+9}= 9


D3 (c, b) =min { D2 (c, b), D2 (c, c) + D2 (c, b)} = min {7 , 0+ 7 }=7
D3 (c, c) =min { D2 (c, c), D2 (c, c) + D2(c, c)} = min {0 , 0+ 0 }= 0
D3 (c, d) =min { D2 (c, d), D2 (c, c) + D2 (c, d)} = min {1, 0+ 1 }=1

D3 (d, a) =min { D2 (d, a), D2 (d, c) + D2 (c, a)} = min { 6 , 9+9}= 6


D3 (d, b) =min { D2 (d, b), D2 (d, c) + D2 (c, b)} = min { ∞ ,9+7}= 16
D3 (d, c) =min { D2 (d, c), D2 (d, c) + D2 (c, c)} = min { 9 , 9+0}= 9
D3 (d, d) =min { D2 (d, d), D2 (d, c) + D2 (c, d)} = min { 0 , 9+1}= 0

a b c d
a 0 10 3 4
D3 (i , j) = b 2 0 5 6
c 9 7 0 1
d 6 16 9 0

Step 4: Find D4
For k=4 ie. Going from i to j through vertex d
D4 (a, a) =min { D3 (a, a), D3(a, d) + D3 (d, a)} = min{0,4+6}=0
D4 (a, b) =min { D3 (a, b), D3 (a, d) + D3 (d, b)} = min{10,4+16}=10
D4 (a, c) =min { D3 (a, c), D3(a, d) + D3 (d, c)} = min {3,4+9 }=3
D4 (a, d) =min { D3 (a, d), D3 (a, d) + D3 (d, d)} = min{4,4+0 }=4

D4 (b, a) =min { D3 (b, a), D3(b, d) + D3(d, a)} = min {2 , 6+6 }=2


D4 (b, b) =min { D3 (b, b), D3 (b, d) + D3 (d, b)}= min {0 , 6+ 16 }=0


D4 (b, c) =min { D3 (b, c), D3(b, d) + D3(d, c)} = min {5 , 6+ 9 }=5
D4 (b, d) =min { D3 (b, d), D3 (b, d) + D3 (d, d)} = min {6, 6+ 0}=6

D4 (c, a) =min { D3(c, a), D3 (c, d) + D3 (d, a)} = min {9 ,1+6}= 7


D4 (c, b) =min { D3 (c, b), D3 (c, d) + D3 (d, b)} = min {7 , 1+16 }=7
D4 (c, c) =min { D3 (c, c), D3(c, d) + D3(d, c)} = min {0 , 1+ 9 }= 0
D4 (c, d) =min { D3 (c, d), D3 (c, d) + D3 (d, d)} = min {1, 1+ 0 }=1

D4 (d, a) =min { D3 (d, a), D3 (d, d) + D3 (d, a)} = min { 6 , 0+6}= 6


D4 (d, b) =min { D3 (d, b), D3 (d, d) + D3 (d, b)} = min { 16 ,0+16}= 16
D4 (d, c) =min { D3 (d, c), D3 (d, d) + D3(d, c)} = min { 9 , 0+9}= 9
D4 (d, d) =min { D3 (d, d), D3 (d, d) + D3 (d, d)} = min { 0 , 0+0}= 0

D4 (i, j) =
         a    b    c    d
    a    0   10    3    4
    b    2    0    5    6
    c    7    7    0    1
    d    6   16    9    0

This final matrix gives the shortest distance from all nodes to all other nodes.

Knapsack problem

In this problem we are given a knapsack (a bag or container) of capacity M and N objects of
weights w1, w2, ..., wn with profits p1, p2, ..., pn.
The main objective is to place objects into the knapsack so that the maximum profit is
obtained and the total weight of the chosen objects does not exceed the capacity of the
knapsack.
Given weights and values of n items, put these items in a knapsack of capacity W to get the
maximum total value in the knapsack. In other words, given two integer arrays val[0..n-1]
and wt[0..n-1] which represent the values and weights associated with n items respectively,
and an integer W which represents the knapsack capacity, find the maximum-value subset of
val[] such that the sum of the weights of this subset is smaller than or equal to W. You
cannot break an item: either pick the complete item or don't pick it (the 0/1 property).
Problem description: we are given n objects and a knapsack; object i has weight wi and is to
be placed in the knapsack of capacity W. The profit that can be earned is ∑ pi·xi. The
objective is to fill the knapsack so as to maximize ∑ pi·xi subject to the constraint
∑ wi·xi ≤ W, where 1 <= i <= n, n is the total number of objects, and xi = 0 or 1.

The knapsack problem can be stated in 2 ways:


1. Continuous or fractional
2. Discrete or 0/1

The recurrence relation to get the solution to knapsack problem using dynamic
programming can be:


V[i , j] = max( V[i-1 , j], V[i-1 , j-wi] + pi )   if wi ≤ j

V[i , j] = V[i-1 , j]                              if wi > j

V[i , j] = 0                                       if i = 0 or j = 0
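
A minimal C sketch of this recurrence, filling an (N+1) x (M+1) table row by row; the array
bounds MAXITEMS and MAXCAP, and the convention that w[] and p[] are indexed from 1, are
illustrative assumptions.

#define MAXITEMS 100
#define MAXCAP   1000

static int V[MAXITEMS + 1][MAXCAP + 1];   /* V[i][j] = best profit using the first i items with capacity j */

/* 0/1 knapsack: w[1..n] are weights, p[1..n] are profits, M is the capacity. */
int knapsack(int n, int M, const int w[], const int p[])
{
    for (int j = 0; j <= M; j++) V[0][j] = 0;          /* no items: profit 0 */
    for (int i = 1; i <= n; i++) {
        V[i][0] = 0;                                   /* capacity 0: profit 0 */
        for (int j = 1; j <= M; j++) {
            V[i][j] = V[i-1][j];                       /* case wi > j, or item i not taken */
            if (w[i] <= j && V[i-1][j - w[i]] + p[i] > V[i][j])
                V[i][j] = V[i-1][j - w[i]] + p[i];     /* case: item i taken */
        }
    }
    return V[n][M];                                    /* maximum obtainable profit */
}

For Example 1 below, with w[1..4] = 2, 1, 3, 2, p[1..4] = 12, 10, 20, 15 and M = 5, this
should return 37, matching the table built by hand.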

Example 1: Apply dynamic programming algorithm to the following instance of


knapsack problem.
Item Weight Value
1 2 12
2 1 10
3 3 20
4 2 15

With the capacity M=5.

Solution : It is given that the number of items N=4 and the capacity of the knapsack=5
with weights w1=2,w2=1,w3=3 and w4=2 with the profits p1=12,p2=10,p3=20and p4=15.

Step1: Since N=4 and M=5, let us have a table with N+1 rows(i.e., 5 rows) and M+1
columns (i.e.,6 columns).We know from the above relation that whenever i=0 it
indicates that there are no items to select and hence irrespective of capacity of the
knapsack, the profit will be 0 and is denoted by V[i, j]=0 for 0<=j<=M when i=0.

So V[0,0] =V[0,1]=V[0,2]=V[0,3]=V[0,4]=V[0,5]=0

And whenever j=0 and irrespective of the number of items selected, we cannot place
into knapsack and hence profit will be 0. This is denoted by V[i, j]=0 for 0<=i<=N
when j=0.
So V[0,0] =V[1,0]=V[2,0]=V[3,0]=V[4,0]=0
So, the table with N+1 rows and M+1 columns can be filled as shown below:

0 1 2 3 4 5
0 0 0 0 0 0 0
1 0
2 0
3 0
4 0


Step 2: When i=1 W1 =2 and P1=12


V[1,1]=V[0,1]=0 (because j=1 which is less than w1 2)
V[1,2]=max(V[0,2],V[0,0]+12)=max{0,0+12}=12
V[1,3]=max(V[0,3],V[0,1]+12)= max{0,0+12}=12
V[1,4]=max(V[0,4],V[0,2]+12)= max{0,0+12}=12
V[1,5]=max(V[0,5],V[0,3]+12)= max{0,0+12}=12
By placing these values in the table shown in step1.we have the table shown below:

0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0
3 0
4 0

Step 3: Consider i=2, w2=1 P2=10


v[2,1]=max{v[1,1],v[1,0]+10}=max{0,0+10}=10
v[2,2]=max{v[1,2],v[1,1]+10}=max{12,0+10}=12
v[2,3]=max{v[1,3],v[1,2]+10}=max{12,12+10}=22
v[2,4]=max{v[1,4],v[1,3]+10}=max{12,12+10}=22
v[2,5]=max{v[1,5],v[1,4]+10}=max{12,12+10}=22

0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0 10 12 22 22 22
3 0
4 0

Step 4: Consider i=3, w3=3 P3=20


v [3,1]=v[2,1] =10 (because j<w3 ie.1<3)

v [3,2]=v[2,2] =12 (because j<w3 ie.2<3)


v[3,3]=max{v[2,3],v[2,0]+20}=max{22,0+20}=22
v[3,4]=max{v[2,4],v[2,1]+20}=max{22,10+20}=30
v[3,5]=max{v[2,5],v[2,2]+20}=max{22,12+20}=32

0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0 10 12 22 22 22
3 0 10 12 22 30 32
4 0


Step 5: Consider i=4, w4=2 P4=15


v [4,1]=v[3,1] =10 (because j<w4 i.e. 1<2)
v [4,2]=max{v[3,2],v[3,0]+15}=max{12,0+15} =15
v[4,3]=max{v[3,3],v[3,1]+15}=max{22,10+15}=25
v[4,4]=max{v[3,4],v[3,2]+15}=max{30,12+15}=30
v[4,5]=max{v[3,5],v[3,3]+15}=max{32,22+15}=37

0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0 10 12 22 22 22
3 0 10 12 22 30 32
4 0 10 15 25 30 37

Now find the maximum value of v [N, M] after selecting N items and placing them
into the knapsack to get optimal solution.

Here, N=4 and M=5. Therefore, an optimal solution is v[N,M]=v[4,5]=37 units.


v[4,5] ≠ v[3,5], therefore item 4 was used to fill the knapsack, reducing the knapsack's
remaining capacity to 5 - 2 = 3 (knapsack capacity minus the weight of item 4). The remaining
state is represented by v[3,3], i.e. 3 items remain to be considered and the remaining
knapsack capacity is 3. Check the value at v[3,3]: since v[3,3] = v[2,3], item 3 is not
included in the knapsack. Compare v[2,3] with v[1,3]: as v[2,3] ≠ v[1,3], item 2 is included
and the remaining capacity reduces to 3 - 1 = 2. Check v[1,2] ≠ v[0,2], so include item 1;
the remaining capacity further reduces to 2 - 2 = 0 and the knapsack is full. So including
items 4, 2 and 1 gives profits of 15, 10 and 12 respectively, producing the maximum profit
of 37. The same traceback can be carried out mechanically, as sketched below.
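
A sketch of that traceback, reading the filled table V from the knapsack sketch above
(x[i] is set to 1 when item i is selected); it assumes the same 1-based w[] convention.

/* Reconstructs the chosen items from the filled table V[0..n][0..M]. */
void knapsack_items(int n, int M, const int w[], int x[])
{
    int j = M;
    for (int i = n; i >= 1; i--) {
        if (V[i][j] != V[i-1][j]) {       /* item i was needed to reach this value */
            x[i] = 1;
            j -= w[i];                    /* reduce the remaining capacity */
        } else {
            x[i] = 0;                     /* item i is not in the optimal subset */
        }
    }
}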

Example 2:
Apply dynamic programming algorithm to the following instance of knapsack
problem.

Item Weight Value


1 1 1
2 2 6
3 5 18
4 6 22
5 7 28

With the capacity M=11

The [i, j] entry here will be V[i, j], the best value obtainable using the first i items
if the maximum capacity were j. We begin with the initialization row (i = 0) and then fill
the first row.


i=1 w1=1 p1=1


V[1][1]=Max{v[0,1],v[0,0]+1}=1
V[1][2]=Max{v[0,2],v[0,1]+1}=1
V[1][3]=Max{v[0,3],v[0,2]+1}=1
V[1][4]=Max{v[0,4],v[0,3]+1}=1
V[1][5]=Max{v[0,5],v[0,4]+1}=1
V[1][6]=Max{v[0,6],v[0,5]+1}=1
V[1][7]=Max{v[0,7],v[0,6]+1}=1
V[1][8]=Max{v[0,8],v[0,7]+1}=1
V[1][9]=Max{v[0,9],v[0,8]+1}=1
V[1][10]=Max{v[0,10],v[0,9]+1}=1
V[1][11]=Max{v[0,11],v[0,10]+1}=1

0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0
3 0
4 0
5 0

i=2 w2=2 p2=6


V[2,1]=v[1,1]=1
V[2,2]=Max{v[1,2],v[1,0]+6}=6
V[2,3]=Max{v[1,3],v[1,1]+6}=7
V[2,4]=Max{v[1,4],v[1,2]+6}=7
V[2,5]=Max{v[1,5],v[1,3]+6}=7
V[2,6]=Max{v[1,6],v[1,4]+6}=7
V[2,7]=Max{v[1,7],v[1,5]+6}=7
V[2,8]=Max{v[1,8],v[1,6]+6}=7
V[2,9]=Max{v[1,9],v[1,7]+6}=7
V[2,10]=Max{v[1,10],v[1,8]+6}=7
V[2,11]=Max{v[1,11],v[1,9]+6}=7

0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0
4 0
5 0

i=3 w3=5 p3=18


V[3][1]=v[2][1]=1
V[3][2]=v[2][2]=6


V[3][3]=v[2][3]=7
V[3][4]=v[2][4]=7
V[3][5]=Max{v[2,5],v[2,0]+18}=18
V[3][6]=Max{v[2,6],v[2,1]+18}=19
V[3][7]=Max{v[2,7],v[2,2]+18}=24
V[3][8]=Max{v[2,8],v[2,3]+18}=25
V[3][9]=Max{v[2,9],v[2,4]+18}=25
V[3][10]=Max{v[2,10],v[2,5]+18}=25
V[3][11]=Max{v[2,11],v[2,6]+18}=25

0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0 1 6 7 7 18 19 24 25 25 25 25
4 0
5 0

i=4 w4=6 p4=22


V[4][1]=v[3][1]=1
V[4][2]=v[3][2]=6
V[4][3]=v[3][3]=7
V[4][4]=v[3][4]=7
V[4][5]=v[3][5]=18
V[4][6]=Max{v[3,6],v[3,0]+22}=22
V[4][7]=Max{v[3,7],v[3,1]+22}=24
V[4][8]=Max{v[3,8],v[3,2]+22}=28
V[4][9]=Max{v[3,9],v[3,3]+22}=29
V[4][10]=Max{v[3,10],v[3,4]+22}=29
V[4][11]=Max{v[3,11],v[3,5]+22}=40

0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0 1 6 7 7 18 19 24 25 25 25 25
4 0 1 6 7 7 18 22 24 28 29 29 40
5

i=5 w5=7 p5=28


V[5][1]=v[4][1]=1
V[5][2]=v[4][2]=6
V[5][3]=v[4][3]=7
V[5][4]=v[4][4]=7
V[5][5]=v[4][5]=18


V[5][6]=v[4][6]=22
V[5][7]=Max{v[4,7],v[4,0]+28}=28
V[5][8]=Max{v[4,8],v[4,1]+28}=29
V[5][9]=Max{v[4,9],v[4,2]+28}=34
V[5][10]=Max{v[4,10],v[4,3]+28}=35
V[5][11]=Max{v[4,11],v[4,4]+28}=40

0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0 1 6 7 7 18 19 24 25 25 25 25
4 0 1 6 7 7 18 22 24 28 29 29 40
5 0 1 6 7 7 18 22 28 29 34 35 40

Therefore the maximum profit is v[5][11] = 40. Compare it with the cell above, v[4][11]: as
there is no change in value, item 5 is not included and we move to v[4][11], which also holds
the value 40. Since v[4][11] ≠ v[3][11], object 4 is included. The maximum knapsack capacity
is 11, so after taking object 4 the remaining capacity is M = 11 - 6 = 5, and 3 objects remain
to be considered. Check the value at v[3,5]: as v[3,5] = 18 and the cell above it, v[2,5] = 7,
differ, object 3 (of weight 5) is included. Now the remaining capacity is M = 5 - 5 = 0.
The solution is to take object 4 and object 3, producing the total maximum profit of 22 + 18 = 40.

Travelling Salesman Problem


The problem is that of a salesman who has to visit n cities. He has to visit all the cities
exactly once and return to the city from which he started his journey. There is an integer
cost c(i, j) to travel from city i to city j. The salesman wishes to make a tour visiting all
the cities and coming back to his starting place with minimum cost.

Equation to solve this problem:

g(i, S) = min{ c(i, j) + g(j, S - {j}) }, taken over j ∈ S

where c(i, j) is the cost of the edge from i to j and S is the set of cities still to be
visited before returning to the start. The required answer is g(1, V - {1}). |S| can take
values from 0 to n-1, therefore we must calculate for |S| = 0, |S| = 1, ..., |S| = n-1.
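
A minimal C sketch of this recurrence, representing S as a bitmask over the cities 2..n
(the Held-Karp formulation); the limit NMAX and the sentinel INF are illustrative assumptions.

#define NMAX 16
#define INF  1000000000

/* c[i][j] = cost of travelling from city i to city j (1-based, city 1 is the start).
   g[mask][j] = cost of starting at city j, visiting exactly the cities in mask, and ending
   back at city 1; mask is a bitmask over cities 2..n (bit i-2 corresponds to city i). */
int tsp(int n, int c[NMAX + 1][NMAX + 1])
{
    static int g[1 << (NMAX - 1)][NMAX + 1];
    int full = (1 << (n - 1)) - 1;                     /* the set of all cities 2..n */

    for (int j = 2; j <= n; j++)
        g[0][j] = c[j][1];                             /* |S| = 0: go straight back to city 1 */

    for (int mask = 1; mask <= full; mask++)           /* subsets in increasing order */
        for (int j = 2; j <= n; j++) {
            if (mask & (1 << (j - 2))) continue;       /* j must not belong to S */
            g[mask][j] = INF;
            for (int k = 2; k <= n; k++)               /* try each next city k in S */
                if (mask & (1 << (k - 2))) {
                    int cand = c[j][k] + g[mask & ~(1 << (k - 2))][k];
                    if (cand < g[mask][j]) g[mask][j] = cand;
                }
        }

    int best = INF;                                    /* g(1, V - {1}) */
    for (int k = 2; k <= n; k++) {
        int cand = c[1][k] + g[full & ~(1 << (k - 2))][k];
        if (cand < best) best = cand;
    }
    return best;
}

For the cost matrix of Example 1 below, this should return the optimal tour cost 35.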

Example 1: [Figure: complete directed graph on vertices 1, 2, 3, 4 with the edge costs given in the cost matrix below]

Solution:
Let the source vertex be 1.The cost matrix for the graph is,
C(i , j) 1 2 3 4
1 0 10 15 20
2 5 0 9 10
3 6 13 0 12
4 8 8 9 0

Step 1: Consider |S| = 0. In this case no intermediate node is considered and the number of
elements in S is 0, i.e. we come back to node 1 without going through any intermediate vertex.
We use the formula
g(i, S) = min{ c(i, j) + g(j, S - {j}) }, taken over j ∈ S.
g(2,0)=c(2,1)=5 //path 21
g(3,0)=c(3,1)=6 //path 31
g(4,0)=c(4,1)=8 //path 41
Step 2: Consider |S| = 1. In this case one intermediate node is considered and the number of
elements in S is 1, i.e. we come back to node 1 going through one intermediate vertex.
g(2,{3})=c(2,3)+g(3,0)=9+6=15 //path 231
g(2,{4})=c(2,4)+g(4,0)=10+8=18 //path 241

g(3,{2})=c(3,2)+g(2,0)=13+5=18 //path 321


g(3,{4})=c(3,4)+g(4,0)=12+8=20 //path 341
g(4,{2})=c(4,2)+g(2,0)=8+5=13 //path 421
g(4,{3})=c(4,3)+g(3,0)=9+6=15 //path 431

Step 3: Consider |S| = 2. In this case two intermediate nodes are considered and the number
of elements in S is 2, i.e. we come back to node 1 going through two intermediate nodes.
g(2,{3,4})=min{c(2,3)+g(3,{4}),c(2,4)+g(4,{3})}
=min {9+20,10+15}=min{29,25}=25

g(3,{2,4})=min{c(3,2)+g(2,{4}),c(3,4)+g(4,{2})}
=min {13+18, 12+13}=min{31,25}=25

g(4,{2,3})=min{c(4,2)+g(2,{3}),c(4,3)+g(3,{2})}
=min {8+15, 9+18} =min {23,27}=23

Step 4: Consider |S| = 3. In this case three intermediate nodes are considered and the number
of elements in S is 3, i.e. we come back to node 1 going through three intermediate nodes.
g(1,{2,3,4})=min{c(1,2)+g(2,{3,4}),c(1,3)+g(3,{2,4}),c(1,4)+g(4,{2,3})}
=min {10+25, 15+25, 20+23}=min{35,40,43}=35
Therefore the optimal tour cost is 35.

Step 5: Find the tour path


g(1,{2,3,4})= c(1,2) + g(2,{3,4})


= c(1,2) + c(2,4)+g(4,{3})
=c(1,2) + c(2,4) + c(4,3)+g(3,0)
= c(1,2) + c(2,4) + c(4,3)+c(3,1)

Therefore the tour path is 12431.

Example 2:
Find the shortest path of the travelling salesman problem using dynamic programming for
the below graph.
[Figure: the edge costs used in the calculation are symmetric: c(1,2)=10, c(1,3)=18, c(1,4)=20, c(2,3)=6, c(2,4)=12, c(3,4)=18]

G{2,0}=c(2,1)=10
G{3,0}=c(3,1)=18
G{4,0}=c(4,1)=20

G{2,{3}}=c(2,3)+g(3,0)=6+18=24
G{2,{4}}=c(2,4)+g(4,0)=12+20=32
G{3,{2}}=c(3,2)+g(2,0)=6+10=16
G{3,{4}}=c(3,4)+g(4,0)=18+20=38
G{4,{2}}=c(4,2)+g(2,0)=12+10=22
G{4,{3}}=c(4,3)+g(3,0)=18+18=36

G(2,{3,4})=min{c(2,3)+g(3,{4}),c(2,4)+g(4,{3})}
=min{6+38,12+36}=min{44,48}=44
G(3,{2,4})=min{c(3,2)+g(2,{4}),c(3,4)+g(4,{2})}
=min{6+32,18+22}=min{38,40}=38
G(4,{2,3})=min{c(4,2)+g(2,{3}),c(4,3)+g(3,{2})}
=min{12+24,18+16}=min{36,34}=34

G{1,{2,3,4}}=min{c(1,2)+g(2,{3,4}),c(1,3)+g(3,{2,4}),c(1,4)+g(4,{2,3})}
=min{10+44,18+38,20+34}=Min{54,56,54}=54


Find the tour path by expanding g(1,{2,3,4}), looking for the terms that give the minimum value.
Path 1: g(1,{2,3,4}) = c(1,2)+g(2,{3,4})
= c(1,2)+ c(2,3)+g(3,{4})
=c(1,2)+ c(2,3)+c(3,4)+g(4,0)
=c(1,2)+ c(2,3)+c(3,4)+c(4,1)

Solution 1 : 1->2->3->4->1

Path 2: Expand and trace back g(1,{2,3,4}) through its other minimizing term


=c(1,4)+g(4,{2,3})
=c(1,4)+c(4,3)+g(3,{2})
=c(1,4)+c(4,3)+c(3,2)+g(2,0)
=c(1,4)+c(4,3)+c(3,2)+c(2,1)
Solution 2 : 1->4->3->2->1

Flow shop Scheduling


There are n jobs, each consisting of m tasks T1i, T2i, T3i, ..., Tmi, where 1 <= i <= n, to be
performed. Task Tji is to be performed on processor Pj, where 1 <= j <= m. The time needed
for task Tji is tji. The objective is to find the sequence of jobs on the processors that
minimizes the makespan (the overall finish time).
Constraints:
1. Task Tji must be assigned to processor Pj.
2. No processor can have more than one task at any time interval.
3. Task Tji can be started only on the completion of the previous task Tj-1,i.
Note: j is the task number (and processor number) and i is the job number.

The finish time F(S) of a schedule S is given by

F(S) = max { fi(S) },  1 <= i <= n

where fi(S) is the time at which all tasks of job i are completed in schedule S.

The mean flow time is

MFT(S) = (1/n) ∑ fi(S),  the sum taken over 1 <= i <= n.
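
A trivial C sketch of these two formulas, given the finish time fi(S) of each job in the
schedule (the array f and its 0-based indexing are illustrative assumptions).

/* f[0..n-1] holds the finish time of each job in schedule S. */
double finish_time(int n, const double f[])
{
    double F = f[0];
    for (int i = 1; i < n; i++)
        if (f[i] > F) F = f[i];            /* F(S) = maximum finish time over all jobs */
    return F;
}

double mean_flow_time(int n, const double f[])
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += f[i];                       /* MFT(S) = (1/n) * sum of finish times */
    return sum / n;
}

For the preemptive schedule in Example 1 below, f = {12, 8} gives F(S) = 12 and MFT = 10,
matching the values computed by hand.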

Example 1: Let n=2 jobs to be scheduled on 3 processors {p1,p2,p3}.The task times


are as follows,

         Job 1      Job 2
J =  P1  2 (T11)    0 (T12)
     P2  3 (T21)    3 (T22)
     P3  5 (T31)    2 (T32)

There are 2 kinds of scheduling


1. Preemptive scheduling
2. Non preemptive scheduling
1. Preemptive scheduling


It is a scheduling discipline in which a task running on a processor may be interrupted
(pre-empted) before the task is completed.
Preemptive Solution:
1. Assign task T11 to processor p1 and task T22 to processor p2 (T12 = 0, so job 2 moves
straight to p2). T11 will complete after 2 units of time; T22 needs 3 units of time for
completion.

Time 0 2
P1 T11
P2 T22

2. Before T22 can finish, it is pre-empted and processor p2 is given to T21 after 2 units of
time, as task T11 is completed. T21 needs 3 units of time.
Time 0 2 5
P1 T11
P2 T22 T21

3. As T21 is completed at the 5th unit, T22 continues and completes at the 6th unit of time.
As soon as T21 is completed, processor p3 starts T31, which needs 5 units of time. But T31 is
pre-empted by T32 at the 6th unit; T32 needs 2 units and completes at the 8th unit of time.
After this T31 continues for its remaining 4 units, finishing at the 12th unit.

Time 0 2 5 6 8 12
P1 T11
P2 T22 T21 T22
P3 T31 T32 T31

Finish time F(S) = max{8,12}=12


Mean finish time MFT = ½ {f1( S) + f2(S)}
=1/2 {12+8}=20/2=10

2. Non Preemptive scheduling


It is a scheduling in which task on a processor is not terminated until the task is
completed.
Solution:
Assign T11 to p1; it completes in 2 units of time. Meanwhile p2 runs task T22 for 3 units of
time. Even though T11 is completed at the 2nd unit of time, job 1 has to wait for p2 to
complete T22, so task T21 starts only at the 3rd unit of time and finishes at the 6th.
Processor p3 carries out T32 from the 3rd unit and completes it at the 5th unit of time. As
T21 is completed at the 6th unit, processor p3 takes up T31 and finishes it at the 11th unit
of time.

Time 0 2 3 5 6 11
P1 T11
P2 T22 T21
P3 T32 T31


Finish time F(S) = max {11,5}=11


Mean finish time MFT = ½ {f1(S) + f2(S)}
=1/2 {11+5} = 16/2=8

Example 2:
Let n=2 jobs to be scheduled on 4 processors {p1,p2,p3,p4}

The time required by each operation of these jobs is given by the following matrix. Find the
finish time and the mean finish time.

Solution: Given that there are 4 machines, and 2 jobs with the below details
         Job 1      Job 2
     P1  3 (T11)    0 (T12)
     P2  0 (T21)    3 (T22)
     P3  4 (T31)    2 (T32)
     P4  5 (T41)    2 (T42)

Preemptive scheduling:
The flow shop schedule for these operations with the preemptive approach is as shown below,
with job 2 given priority.
Time 0 3 4 5 6 7 8 9 10 14 15 16
P1 T11
P2 T22
P3 T32 T31
P4 T42 T41

Finish time F(S) = max{14,7}=14


Mean finish time MFT = ½ {f1( S) + f2(S)}
=1/2 {14+7}=21/2=10.5
Non Preemptive scheduling:

The flow shop schedule for these operations with the non-preemptive approach is as shown
below.
Time 0 3 4 7 8 9 10 12 13 14


P1 T11
P2 T22
P3 T31 T32
P4 T41 T42

Finish time F(S) = max{12,14}=14


Mean finish time MFT = ½ {f1( S) + f2(S)}
=1/2 {12+14}=26/2=13

Questions:
1. Define graph, tree, binary tree, complete binary tree, strict binary tree.
2. Define path, cycle , adjacency matrix, cost matrix, indegree, outdegree.
3. Explain the principle of optimality.
4. Explain the Floyd’s algorithm to find the shortest path between all pairs of vertices.
5. Write an algorithm for multi stage problem – forward approach.
6. Write an algorithm for multi stage problem – backward approach.
7. Explain flow shop scheduling with an example.
8. Solve all pairs shortest path problem for the directed graph shown below using
Floyd’s Algorithm

9. Consider the below multistage graph and find the minimum cost from S to T using
forward approach


10. Consider the below multistage graph and find the minimum cost from 1 to 9 using
backward approach

11. Apply dynamic programming algorithm to the following instance of the knapsack problem.

Item Weight Value


1 1 2
2 2 3
3 5 4
4 6 5

With the capacity M=8.

12. Let n=2 jobs be scheduled on 3 processors {p1, p2, p3}. The task times are as follows.
Find the finish time and mean finish time.
