Dynamic Programming
Dynamic Programming is a technique used to solve, in O(n^2) or O(n^3) time, problems for which a naive approach would take exponential time. It is typically applied to optimization problems. Dynamic Programming is an algorithmic paradigm that solves a given complex problem by breaking it into subproblems and storing the results of those subproblems so that the same results need not be computed again. Two main properties of a problem suggest that it can be solved using Dynamic Programming:
1) Overlapping Subproblems
2) Optimal Substructure
Overlapping Subproblems:
As in Divide and Conquer, the Dynamic Programming technique combines the solutions of subproblems. Dynamic Programming is mainly used when the solutions of the same subproblems are needed again and again; the computed solutions are therefore stored in a table so that they do not have to be recomputed. Consequently, Dynamic Programming is not useful when there are no common (overlapping) subproblems, because there is no point in storing solutions that are never needed again.
For example, Binary Search has no common subproblems. But in the following recursive program for Fibonacci numbers, many subproblems are solved again and again:
/* Naive recursive Fibonacci: fib(n-1) and fib(n-2) repeatedly
   recompute the same subproblems, so the running time is exponential. */
int fib(int n)
{
    if (n <= 1) return n;        /* base cases: fib(0) = 0, fib(1) = 1 */
    return fib(n-1) + fib(n-2);
}
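Storing each fib(i) the first time it is computed removes this redundancy. Here is a minimal memoized sketch in the same style (the bound MAXN and the zero-means-uncomputed convention are assumptions of the sketch):

#define MAXN 1000                 /* illustrative upper bound on n */

long long memo[MAXN];             /* memo[i] caches fib(i); 0 means "not yet computed" */

long long fib_memo(int n)
{
    if (n <= 1) return n;                    /* base cases fib(0) = 0, fib(1) = 1 */
    if (memo[n] != 0) return memo[n];        /* reuse the stored answer */
    memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return memo[n];                          /* each fib(i) is computed once: O(n) calls */
}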
Optimal Substructure
A problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its subproblems. This property is used to determine whether dynamic programming and greedy algorithms are applicable to a problem. There are two ways of applying dynamic programming:
1) Top-Down: Start solving the given problem by breaking it down. If you see that a subproblem has already been solved, just return the saved answer; if it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive, and is referred to as Memoization (as in the fib_memo sketch above).
2) Bottom-Up: Analyze the problem, determine the order in which the subproblems are solved, and start solving from the trivial subproblems up towards the given problem. In this process it is guaranteed that the subproblems are solved before the problem itself. This is referred to as Dynamic Programming, and a bottom-up version of the Fibonacci computation is sketched below. Note that divide and conquer is a slightly different technique: there we divide the problem into non-overlapping subproblems and solve them independently, as in merge sort and quick sort.
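For contrast, here is a bottom-up sketch of the same Fibonacci computation; it starts from the trivial subproblems fib(0) and fib(1) and works upward, keeping only the last two table entries:

long long fib_bottom_up(int n)
{
    if (n <= 1) return n;              /* trivial subproblems */
    long long prev = 0, curr = 1;      /* fib(0) and fib(1) */
    for (int i = 2; i <= n; i++) {
        long long next = prev + curr;  /* fib(i) = fib(i-1) + fib(i-2) */
        prev = curr;
        curr = next;
    }
    return curr;
}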
Principle of optimality: The principle of optimality states that no matter what the first decision is, the remaining decisions must be optimal with respect to the state that results from this first decision. This principle implies that an optimal decision sequence is comprised of optimal decision subsequences. Since the principle of optimality may not hold for some formulations of some problems, it is necessary to verify that it does hold for the problem being solved; dynamic programming cannot be applied when this principle does not hold.
In dynamic programming the solution to the problem is the result of a sequence of decisions. At every stage we make decisions so as to obtain the optimal solution. This method is effective when a given subproblem may arise from more than one partial set of choices. Using this method, an exponential-time algorithm may be brought down to a polynomial-time one, because it reduces the amount of enumeration by avoiding decision sequences that cannot possibly be optimal. A dynamic programming algorithm solves every subproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subproblem is encountered.
The basic steps of dynamic programming are:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from the computed information.
Principle of Optimality
The principle states that an optimal sequence of decisions has the property that whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.
Knapsack problem
The recurrence relation to get the solution to the knapsack problem using dynamic programming is:

V[i, j] = 0                                        if i = 0 or j = 0
V[i, j] = V[i-1, j]                                if j < wi
V[i, j] = max{ V[i-1, j], V[i-1, j-wi] + pi }      if j >= wi
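A direct translation of this recurrence might look as follows. This is a sketch, assuming 1-indexed weight and profit arrays w[] and p[] and fixed array bounds chosen for illustration; main() runs it on the instance of Example 1 below:

#include <stdio.h>

#define MAXN 100
#define MAXM 100

int V[MAXN + 1][MAXM + 1];

/* Fill V[0..N][0..M]; w[1..N] are the weights, p[1..N] the profits, M the capacity. */
int knapsack(int N, int M, const int w[], const int p[])
{
    for (int j = 0; j <= M; j++) V[0][j] = 0;          /* row i = 0: no items */
    for (int i = 1; i <= N; i++) {
        V[i][0] = 0;                                   /* column j = 0: no capacity */
        for (int j = 1; j <= M; j++) {
            V[i][j] = V[i-1][j];                       /* option 1: leave item i out */
            if (w[i] <= j && V[i-1][j - w[i]] + p[i] > V[i][j])
                V[i][j] = V[i-1][j - w[i]] + p[i];     /* option 2: put item i in */
        }
    }
    return V[N][M];
}

int main(void)
{
    int w[] = {0, 2, 1, 3, 2};             /* 1-indexed weights of Example 1 */
    int p[] = {0, 12, 10, 20, 15};         /* 1-indexed profits of Example 1 */
    printf("%d\n", knapsack(4, 5, w, p));  /* prints 37, matching the table below */
    return 0;
}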
Example 1: It is given that the number of items N = 4 and the capacity of the knapsack M = 5, with weights w1 = 2, w2 = 1, w3 = 3, w4 = 2 and profits p1 = 12, p2 = 10, p3 = 20, p4 = 15.
Solution:
Step 1: Since N = 4 and M = 5, let us have a table with N+1 rows (i.e., 5 rows) and M+1 columns (i.e., 6 columns). We know from the above relation that i = 0 indicates that there are no items to select, and hence, irrespective of the capacity of the knapsack, the profit will be 0; this is denoted by V[i, j] = 0 for 0 <= j <= M when i = 0.
So V[0,0] = V[0,1] = V[0,2] = V[0,3] = V[0,4] = V[0,5] = 0.
And whenever j = 0, irrespective of the number of items available, nothing can be placed into the knapsack, and hence the profit will be 0; this is denoted by V[i, j] = 0 for 0 <= i <= N when j = 0.
So V[0,0] = V[1,0] = V[2,0] = V[3,0] = V[4,0] = 0.
So the table with N+1 rows and M+1 columns can be filled as shown below:
After initialization (row i = 0 and column j = 0):

i\j   0   1   2   3   4   5
0     0   0   0   0   0   0
1     0
2     0
3     0
4     0

After filling row 1 (item 1: w1 = 2, p1 = 12):

i\j   0   1   2   3   4   5
0     0   0   0   0   0   0
1     0   0  12  12  12  12
2     0
3     0
4     0

After filling row 2 (item 2: w2 = 1, p2 = 10):

i\j   0   1   2   3   4   5
0     0   0   0   0   0   0
1     0   0  12  12  12  12
2     0  10  12  22  22  22
3     0
4     0

After filling row 3 (item 3: w3 = 3, p3 = 20):

i\j   0   1   2   3   4   5
0     0   0   0   0   0   0
1     0   0  12  12  12  12
2     0  10  12  22  22  22
3     0  10  12  22  30  32
4     0

After filling row 4 (item 4: w4 = 2, p4 = 15):

i\j   0   1   2   3   4   5
0     0   0   0   0   0   0
1     0   0  12  12  12  12
2     0  10  12  22  22  22
3     0  10  12  22  30  32
4     0  10  15  25  30  37
Now the maximum value V[N, M] = V[4, 5] = 37, obtained after considering all N items, is the profit of the optimal solution.
Example 2:
Apply the dynamic programming algorithm to the following instance of the knapsack problem.
Item    Weight    Value
1        1          1
2        2          6
3        5         18
4        6         22
5        7         28

Knapsack capacity M = 11.
The [i, j] entry here will be V[i, j], the best value obtainable using the first i items if the maximum capacity were j. We begin with the initialization and the first row.
After initialization and row 1 (item 1: w1 = 1, p1 = 1):

i\j   0  1  2  3  4  5   6   7   8   9  10  11
0     0  0  0  0  0  0   0   0   0   0   0   0
1     0  1  1  1  1  1   1   1   1   1   1   1
2     0
3     0
4     0
5     0

After row 2 (item 2: w2 = 2, p2 = 6):

i\j   0  1  2  3  4  5   6   7   8   9  10  11
0     0  0  0  0  0  0   0   0   0   0   0   0
1     0  1  1  1  1  1   1   1   1   1   1   1
2     0  1  6  7  7  7   7   7   7   7   7   7
3     0
4     0
5     0

After row 3 (item 3: w3 = 5, p3 = 18):

i\j   0  1  2  3  4  5   6   7   8   9  10  11
0     0  0  0  0  0  0   0   0   0   0   0   0
1     0  1  1  1  1  1   1   1   1   1   1   1
2     0  1  6  7  7  7   7   7   7   7   7   7
3     0  1  6  7  7  18  19  24  25  25  25  25
4     0
5     0

After row 4 (item 4: w4 = 6, p4 = 22):

i\j   0  1  2  3  4  5   6   7   8   9  10  11
0     0  0  0  0  0  0   0   0   0   0   0   0
1     0  1  1  1  1  1   1   1   1   1   1   1
2     0  1  6  7  7  7   7   7   7   7   7   7
3     0  1  6  7  7  18  19  24  25  25  25  25
4     0  1  6  7  7  18  22  24  28  29  29  40
5     0

After row 5 (item 5: w5 = 7, p5 = 28):

i\j   0  1  2  3  4  5   6   7   8   9  10  11
0     0  0  0  0  0  0   0   0   0   0   0   0
1     0  1  1  1  1  1   1   1   1   1   1   1
2     0  1  6  7  7  7   7   7   7   7   7   7
3     0  1  6  7  7  18  19  24  25  25  25  25
4     0  1  6  7  7  18  22  24  28  29  29  40
5     0  1  6  7  7  18  22  24  28  29  29  40
Therefore the maximum profit is V[5][11] = 40. Compare it with the cell above, V[4][11]: since the value is unchanged, object 5 is not selected, and we move up to V[4][11] = 40. Since V[4][11] = 40 differs from V[3][11] = 25, object 4 is selected. The maximum knapsack capacity is 11, so after taking object 4 (weight 6) the leftover capacity is M = 11 - 6 = 5, with 3 objects yet to consider. Therefore check V[3,5]: as V[3,5] = 18 differs from the cell above it, V[2,5] = 7, object 3 (weight 5) is selected. Now the leftover capacity is M = 5 - 5 = 0, and the traceback stops.
The solution is to select object 4 and object 3, producing the total maximum profit of 40.
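The traceback just described is a short loop over the filled table: item i is in the optimal knapsack exactly when V[i][j] differs from V[i-1][j]. A minimal sketch, assuming the global table V and a 1-indexed weight array w[] as in the earlier knapsack sketch:

/* Print the chosen items by walking back from V[N][M]. */
void print_solution(int N, int M, const int w[])
{
    int j = M;                         /* remaining capacity during the walk */
    for (int i = N; i >= 1; i--) {
        if (V[i][j] != V[i-1][j]) {    /* value changed, so item i was taken */
            printf("take item %d\n", i);
            j -= w[i];                 /* give back item i's weight */
        }
    }
}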
Travelling Salesman Problem
Example 1: Find the optimal tour for the travelling salesman problem using dynamic programming on the graph below.
[Figure: a directed graph on vertices 1-4 whose edge costs are those listed in the cost matrix below.]
Solution:
Let the source vertex be 1. The cost matrix for the graph is:

c(i, j)   1    2    3    4
1         0   10   15   20
2         5    0    9   10
3         6   13    0   12
4         8    8    9    0
Step 1: Consider |s| = 0. In this case no intermediate node is considered and the number of elements in s is 0, i.e., we come to node 1 without going through any intermediate vertex. We use the formula

g(i, s) = min { c(i, j) + g(j, s - {j}) : j ∈ s }

g(2,0) = c(2,1) = 5    // path 2→1
g(3,0) = c(3,1) = 6    // path 3→1
g(4,0) = c(4,1) = 8    // path 4→1
Step 2: Consider |s| = 1. In this case one intermediate node is considered and the number of elements in s is 1, i.e., we come to node 1 through exactly one intermediate vertex.
g(2,{3}) = c(2,3) + g(3,0) = 9 + 6 = 15     // path 2→3→1
g(2,{4}) = c(2,4) + g(4,0) = 10 + 8 = 18    // path 2→4→1
g(3,{2}) = c(3,2) + g(2,0) = 13 + 5 = 18    // path 3→2→1
g(3,{4}) = c(3,4) + g(4,0) = 12 + 8 = 20    // path 3→4→1
g(4,{2}) = c(4,2) + g(2,0) = 8 + 5 = 13     // path 4→2→1
g(4,{3}) = c(4,3) + g(3,0) = 9 + 6 = 15     // path 4→3→1
Step 3: Consider |s| = 2. In this case two intermediate nodes are considered and the number of elements in s is 2, i.e., we come to node 1 through exactly two intermediate vertices.
g(2,{3,4}) = min{ c(2,3) + g(3,{4}), c(2,4) + g(4,{3}) }
           = min{ 9 + 20, 10 + 15 } = min{ 29, 25 } = 25
g(3,{2,4}) = min{ c(3,2) + g(2,{4}), c(3,4) + g(4,{2}) }
           = min{ 13 + 18, 12 + 13 } = min{ 31, 25 } = 25
g(4,{2,3}) = min{ c(4,2) + g(2,{3}), c(4,3) + g(3,{2}) }
           = min{ 8 + 15, 9 + 18 } = min{ 23, 27 } = 23
Step 4: Consider |s| = 3. In this case three intermediate nodes are considered and the number of elements in s is 3, i.e., we come to node 1 through all three intermediate vertices.
g(1,{2,3,4}) = min{ c(1,2) + g(2,{3,4}), c(1,3) + g(3,{2,4}), c(1,4) + g(4,{2,3}) }
             = min{ 10 + 25, 15 + 25, 20 + 23 } = min{ 35, 40, 43 } = 35
Therefore the optimal tour cost is 35. Tracing back through the minima (c(1,2), then c(2,4) + g(4,{3}), then c(4,3)), the optimal tour is 1→2→4→3→1.
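The recurrence g(i, s) can be implemented directly by encoding s as a bitmask whose bits mark the intermediate vertices. A minimal Held-Karp-style sketch under that assumption, using the cost matrix of this example (vertex 1 of the text is index 0 in the code):

#include <stdio.h>

#define N 4
#define INF 1000000

int c[N][N] = {                 /* cost matrix of Example 1; c[i][j] is c(i+1, j+1) */
    { 0, 10, 15, 20 },
    { 5,  0,  9, 10 },
    { 6, 13,  0, 12 },
    { 8,  8,  9,  0 },
};

/* g[i][s]: cheapest path from vertex i back to vertex 0 that visits
   exactly the vertices in bitmask s in between. */
int g[N][1 << N];

int main(void)
{
    for (int i = 1; i < N; i++) g[i][0] = c[i][0];    /* |s| = 0: go straight home */

    for (int s = 1; s < (1 << N); s++) {              /* masks in increasing order */
        if (s & 1) continue;                          /* vertex 0 is never intermediate */
        for (int i = 1; i < N; i++) {
            if (s & (1 << i)) continue;               /* i itself must not lie in s */
            g[i][s] = INF;
            for (int j = 1; j < N; j++)
                if (s & (1 << j)) {                   /* try each j in s as the next stop */
                    int cand = c[i][j] + g[j][s & ~(1 << j)];
                    if (cand < g[i][s]) g[i][s] = cand;
                }
        }
    }

    int full = (1 << N) - 2;                          /* s = {2, 3, 4} in the text's numbering */
    int best = INF;
    for (int j = 1; j < N; j++) {
        int cand = c[0][j] + g[j][full & ~(1 << j)];
        if (cand < best) best = cand;
    }
    printf("optimal tour cost = %d\n", best);         /* prints 35 */
    return 0;
}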
Example 2:
Find the shortest tour of the travelling salesman problem using dynamic programming for the graph below.
[Figure: an undirected graph on vertices 1-4 with edge costs c(1,2) = 10, c(1,3) = 18, c(1,4) = 20, c(2,3) = 6, c(2,4) = 12 and c(3,4) = 18, as used in the calculations that follow.]
g(2,0) = c(2,1) = 10
g(3,0) = c(3,1) = 18
g(4,0) = c(4,1) = 20
g(2,{3}) = c(2,3) + g(3,0) = 6 + 18 = 24
g(2,{4}) = c(2,4) + g(4,0) = 12 + 20 = 32
g(3,{2}) = c(3,2) + g(2,0) = 6 + 10 = 16
g(3,{4}) = c(3,4) + g(4,0) = 18 + 20 = 38
g(4,{2}) = c(4,2) + g(2,0) = 12 + 10 = 22
g(4,{3}) = c(4,3) + g(3,0) = 18 + 18 = 36
g(2,{3,4}) = min{ c(2,3) + g(3,{4}), c(2,4) + g(4,{3}) }
           = min{ 6 + 38, 12 + 36 } = min{ 44, 48 } = 44
g(3,{2,4}) = min{ c(3,2) + g(2,{4}), c(3,4) + g(4,{2}) }
           = min{ 6 + 32, 18 + 22 } = min{ 38, 40 } = 38
g(4,{2,3}) = min{ c(4,2) + g(2,{3}), c(4,3) + g(3,{2}) }
           = min{ 12 + 24, 18 + 16 } = min{ 36, 34 } = 34
g(1,{2,3,4}) = min{ c(1,2) + g(2,{3,4}), c(1,3) + g(3,{2,4}), c(1,4) + g(4,{2,3}) }
             = min{ 10 + 44, 18 + 38, 20 + 34 } = min{ 54, 56, 54 } = 54
Find the tour path by expanding g(1,{2,3,4}) and following the minimum value at each step:
Path 1: c(1,2) + g(2,{3,4})
      = c(1,2) + c(2,3) + g(3,{4})
      = c(1,2) + c(2,3) + c(3,4) + g(4,0)
      = c(1,2) + c(2,3) + c(3,4) + c(4,1)
Solution 1: 1→2→3→4→1, with total cost 10 + 6 + 18 + 20 = 54. (The term c(1,4) + g(4,{2,3}) also attains 54, giving the reverse tour 1→4→3→2→1 as a second optimal solution.)
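Replacing the cost matrix in the earlier Held-Karp-style sketch with this example's symmetric costs should reproduce the same optimal tour cost of 54.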