Dynamic Programming (0-1 Knapsack)
• An optimization problem is one in which we are given a set of input values and an objective that must be either maximized or minimized, possibly subject to constraints or conditions. A greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment in order to optimize the given objective. Two standard approaches to optimization problems are:
➢ Greedy Algorithm
➢ Dynamic Programming
Dynamic Programming
Dynamic Programming (DP) is an algorithmic technique for solving optimization problems by breaking them into simpler sub-problems and storing each sub-solution so that the corresponding sub-problem is solved only once. Dynamic Programming is a good methodology for optimization problems that seek a maximal or minimal solution under restrictions, as it searches through all possible sub-problems and never recomputes the solution to any sub-problem.
Dynamic Programming is often applied bottom-up: we solve all possible small problems and then combine their solutions to obtain solutions for bigger problems.
This is particularly helpful when the number of overlapping subproblems is exponentially large. Dynamic Programming is frequently applied to optimization problems.
In the greedy method, we follow a predefined procedure to get the optimal result; for example, we always select the minimum distance when finding the shortest path.
In dynamic programming, we try out all possible solutions and then pick the best one, which is more time-consuming than the greedy method.
• For any problem there may be many feasible solutions, so we try out all of them to find the optimal one.
• Dynamic programming algorithms are generally described by recursive formulas, but in the implementation we generally avoid recursion.
• Dynamic programming follows the principle of optimality, which says that a problem can be solved by taking a sequence of decisions to obtain the optimal solution.
• In the greedy method, we take a decision once; in dynamic programming, we take decisions at every stage.
• Dynamic programming requires that the same subproblem is not computed more than once.
Dynamic programming is a technique that breaks a problem into sub-problems and saves each result for future use so that we do not need to compute it again. The property that the subproblems are optimized in order to optimize the overall solution is known as the optimal substructure property. The main use of dynamic programming is to solve optimization problems, i.e., problems where we are trying to find the minimum or the maximum solution. Dynamic programming guarantees finding the optimal solution of a problem if one exists.
The definition of dynamic programming says that it is a technique for solving a complex problem by first breaking it into a collection of simpler subproblems, solving each subproblem just once, and then storing their solutions to avoid repetitive computations.
Let's understand this approach through an example.
Consider the Fibonacci series:
0, 1, 1, 2, 3, 5, 8, 13, 21, ...
The numbers in this series are not randomly calculated. Mathematically, each term is given by the formula
F(n) = F(n-1) + F(n-2)
with the base values F(0) = 0 and F(1) = 1. To calculate the other numbers, we follow this relationship. For example, F(2) is the sum of F(0) and F(1), which is equal to 1.
The term F(20) is calculated by applying this formula repeatedly; expanding it produces a recursion tree with F(20) at the root.
In this recursion tree, F(20) is calculated as the sum of F(19) and F(18). In the dynamic programming approach, we try to divide the problem into similar subproblems; here F(20) is divided into the similar subproblems F(19) and F(18). Recall that the definition of dynamic programming says a similar subproblem should not be computed more than once. Yet in the plain recursive expansion the subproblems are calculated repeatedly: F(18) is calculated two times, and similarly F(17) is also calculated twice. The technique of dividing into similar subproblems is still quite useful, but we need to be careful to store each result once it has been computed; otherwise the repetition leads to a wastage of resources. For example, recalculating F(18) in the right subtree leads to tremendous extra usage of resources and decreases the overall performance.
The solution to the above problem is to save the computed results in an array. First, we calculate F(16) and F(17) and save their values in the array. F(18) is then calculated by summing the values of F(17) and F(16), which are already saved, and the computed value of F(18) is saved as well. The value of F(19) is calculated as the sum of F(18) and F(17), whose values are already saved, and the computed value of F(19) is stored. Finally, F(20) is calculated by adding the stored values of F(19) and F(18), and the final computed value of F(20) is stored in the array.
How does the dynamic programming approach work?
The following are the steps that dynamic programming follows:
• It breaks down the complex problem into simpler subproblems.
• It finds the optimal solution to these sub-problems.
• It stores the results of the subproblems; this process is known as memoization.
• It reuses the stored results so that the same sub-problem is not calculated more than once.
• Finally, it calculates the result of the complex problem.
The above five steps are the basic steps of dynamic programming. Dynamic programming is applicable to problems that have the following properties:
• Overlapping subproblems and optimal substructure. Optimal substructure means that the solution of an optimization problem can be obtained by combining the optimal solutions of its subproblems.
• In dynamic programming, the space complexity increases because we store the intermediate results, but the time complexity decreases.
Approaches of dynamic programming
There are two approaches to dynamic programming:
• Top-down approach
• Bottom-up approach
Top-down approach
The top-down approach follows the memoization technique, while the bottom-up approach follows the tabulation method. Here memoization equals recursion plus caching: recursion means calling the function itself, while caching means storing the intermediate results.
Advantages
• It is very easy to understand and implement.
• It solves the subproblems only when they are required.
• It is easy to debug.
Disadvantages
• It uses recursion, which occupies more memory in the call stack. When the recursion is too deep, a stack overflow condition will occur.
• It occupies more memory, which degrades the overall performance.
There are two different ways to store the values so that the value of a sub-problem can be reused. Here, we will discuss two patterns of solving dynamic programming (DP) problems:
• Tabulation: Bottom Up
• Memoization: Top Down
Let's understand dynamic programming through an example.
int fib(int n)
{
    if (n < 0)
        return -1;   /* error: n must be non-negative */
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    return fib(n - 1) + fib(n - 2);
}
In the above code, we have used the recursive approach to find the nth Fibonacci number. As the value of n increases, the number of function calls increases, and the amount of computation increases with it; the time complexity of this recursive function grows exponentially, as O(2^n).
One solution to this problem is the dynamic programming approach: rather than generating the recursion tree again and again, we reuse the previously calculated values. With the dynamic programming approach, the time complexity becomes O(n).
For n = 4, the memo array F (indices 0 through 4, initialized to -1 meaning "not yet computed") fills in as follows:

F:  -1  -1  -1  -1  -1     (initially)
F:  -1   1  -1  -1  -1     (F[1] = 1 stored)
F:   0   1  -1  -1  -1     (F[0] = 0 stored)
F:   0   1   1  -1  -1     (F[2] = F[1] + F[0] = 1)
F:   0   1   1   2  -1     (F[3] = F[2] + F[1] = 2)
F:   0   1   1   2   3     (F[4] = F[3] + F[2] = 3)
     0   1   2   3   4      <- index

Total number of function calls that do real work = n + 1, so the time complexity is O(n).
When we apply the dynamic programming approach in the implementation of the Fibonacci series, the code would look like the following.
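A minimal memoized sketch (the array size 100 and the use of -1 as an "uncomputed" marker are illustrative choices for this sketch):

int F[100];              /* memo table; assume n < 100, all entries initialized to -1 */

int fib(int n)
{
    if (n <= 1)
        return n;        /* base cases: F(0) = 0, F(1) = 1 */
    if (F[n] != -1)
        return F[n];     /* reuse a previously computed value */
    return F[n] = fib(n - 1) + fib(n - 2);   /* compute once, then store */
}

/* before the first call: for (int i = 0; i < 100; i++) F[i] = -1; */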
We can use this method if we want, but generally we use the tabulation method instead, which is the iterative method.
Bottom-Up approach
The bottom-up approach is also one of the techniques which can be used to implement the dynamic
programming. It uses the tabulation technique to implement the dynamic programming approach. It solves the
same kind of problems but it removes the recursion. If we remove the recursion, there is no stack overflow issue
and no overhead of the recursive functions, thus saving the memory space. In this tabulation technique, we solve
the problems and store the results in a matrix.
The bottom-up is an algorithm that starts from the beginning, whereas the recursive algorithm starts from the end
and works backward. In the bottom-up approach, we start from the base case to find the answer for the end. As
we know, the base cases in the Fibonacci series are 0 and 1. Since the bottom approach starts from the base cases,
so we will start from 0 and 1.
Key points
• We solve all the smaller sub-problems that will be needed for the larger sub-problems, then move to the larger problems using the smaller ones.
• We use a for loop to iterate over the sub-problems.
• The bottom-up approach is also known as the tabulation or table filling method.
Suppose we have an array with the values 0 and 1 at positions a[0] and a[1], respectively. Since the bottom-up approach starts from the lower values, a[0] and a[1] are added to find the value of a[2] = 1. The value of a[3] is calculated by adding a[1] and a[2], giving 2. The value of a[4] is calculated by adding a[2] and a[3], giving 3. The value of a[5] is calculated by adding a[4] and a[3], giving 5.
The code for implementing the Fibonacci series using the bottom-up approach (iterative manner) is given below:
int fib(int n)
{
    int F[n + 1], i;     /* table of computed values */
    if (n <= 1)
        return n;
    F[0] = 0; F[1] = 1;  /* base cases */
    for (i = 2; i <= n; i++)
        F[i] = F[i - 1] + F[i - 2];
    return F[n];
}
In the above code, the base cases are 0 and 1, and then we have used a for loop to find the other values of the Fibonacci series.
Tracing fib(5), the table F fills from left to right:

F:   0   1                          (base cases, before the loop starts at i = 2)
F:   0   1   1   2                  (after i = 2 and i = 3)
F:   0   1   1   2   3   5          (after i = 4 and i = 5)
     0   1   2   3   4   5           <- index

Return 5: the value F[5] = 5 is returned by the function.
0-1 Knapsack example:
M = 8 (knapsack capacity)    P = {1, 2, 5, 6} (profits)
N = 4 (number of objects)    W = {2, 3, 4, 5} (weights)
Since an item is not divisible, it can either be selected entirely or not selected at all. So we have to take a decision for each object: select it (1) or not (0). For example:
0000 = no object
0001 = last object only
1000 = first object only
1010 = first and third objects
1111 = all objects
So in total there are 2^4 = 16 possibilities (candidate solutions); in general, for n objects there are 2^n possible solutions, so the brute-force time complexity is O(2^n) (a sketch of this enumeration is given below). Dynamic programming, however, provides an easier method.
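A brute-force sketch of the enumeration just described (the function name brute and its signature are illustrative, not from the notes): it tries both choices, skip and take, for each object, which is exactly the 2^n decision vectors above.

/* brute-force 0-1 knapsack: try x_i = 0 and x_i = 1 for each object i */
int brute(int i, int cap, int n, int p[], int w[])
{
    if (i == n)
        return 0;                                    /* no objects left */
    int best = brute(i + 1, cap, n, p, w);           /* x_i = 0: skip object i */
    if (w[i] <= cap) {                               /* x_i = 1: take it, if it fits */
        int take = p[i] + brute(i + 1, cap - w[i], n, p, w);
        if (take > best)
            best = take;
    }
    return best;
}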
Let's use the tabulation method to solve this problem (M is the total weight allowed).
M = 8    P = {1, 2, 5, 6}
N = 4    W = {2, 3, 4, 5}
Build a table F with one row per object i = 0, 1, ..., 4 and one column per capacity w = 0, 1, ..., 8. Fill the first row as if we are not considering any object, so the profit is 0 for every capacity; similarly, with capacity 0 nothing fits, so the weight and profit are both 0. Therefore, the first row and the first column contain only zeros.
P   w   i\w | 0  1  2  3  4  5  6  7  8
------------+---------------------------
-   -    0  | 0  0  0  0  0  0  0  0  0
1   2    1  | 0  0  1  1  1  1  1  1  1
2   3    2  | 0  0  1  2  2  3  3  3  3
5   4    3  | 0  0  1  2  5  5  6  7  7
6   5    4  | 0  0  1  2  5  .  .  .  .   (row 4 being filled)
Each cell is filled using the recurrence
F(i, w) = max{ F(i-1, w), F(i-1, w - w_i) + p_i }
where the second term is considered only when w_i <= w.

For object 4 (p_4 = 6, w_4 = 5), up to column 4 the object does not fit, so we get the same values as the previous row. For example:
F(4, 1) = F(3, 1) = 0    (since 1 - 5 < 0, object 4 cannot be taken)
F(4, 5) = max{ F(3, 5), F(3, 0) + 6 } = max{5, 0 + 6} = 6
F(4, 6) = max{ F(3, 6), F(3, 1) + 6 } = max{6, 0 + 6} = 6
F(4, 7) = max{ F(3, 7), F(3, 2) + 6 } = max{7, 1 + 6} = 7
F(4, 8) = max{ F(3, 8), F(3, 3) + 6 } = max{7, 2 + 6} = 8
Row 4 is now complete: 0 0 1 2 5 6 6 7 8, and the maximum profit is F(4, 8) = 8. A C sketch of this tabulation follows.
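A minimal C sketch of the tabulation (the function name knapsack and the array layout are illustrative; it assumes C99 variable-length arrays):

#include <stdio.h>

/* F[i][w] = best profit using the first i objects with capacity w */
int knapsack(int n, int M, int p[], int w[])
{
    int F[n + 1][M + 1];                     /* C99 variable-length array */
    for (int i = 0; i <= n; i++)
        for (int c = 0; c <= M; c++) {
            if (i == 0 || c == 0)
                F[i][c] = 0;                 /* first row and first column are 0 */
            else if (w[i - 1] > c)
                F[i][c] = F[i - 1][c];       /* object i does not fit */
            else {
                int skip = F[i - 1][c];
                int take = F[i - 1][c - w[i - 1]] + p[i - 1];
                F[i][c] = take > skip ? take : skip;
            }
        }
    return F[n][M];
}

int main(void)
{
    int p[] = {1, 2, 5, 6}, w[] = {2, 3, 4, 5};
    printf("%d\n", knapsack(4, 8, p, w));    /* prints 8 */
    return 0;
}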
All previous rows keep their values as they are. To find which objects earn the profit F(4, 8) = 8, trace back through the table.
Start at F(4, 8) = 8. It differs from F(3, 8) = 7 in the row above, so object 4 was needed to earn this profit: include it (x4 = 1), subtract its profit (8 - 6 = 2) and its weight (8 - 5 = 3), and move to F(3, 3) = 2: _, _, _, 1.
Now check whether profit 2 is already present in the row above (here we are checking the third object, i.e., whether it is required to earn that profit 2). It is: F(2, 3) = 2, meaning profit 2 was not earned due to object 3, so do not include it (x3 = 0): _, _, 0, 1.
Next check whether profit 2 is present in row 1 (checking the second object). It is not: F(1, 3) = 1, which means object 2 was involved in making this profit, so include it (x2 = 1) and subtract its profit (2 - 2 = 0) and weight (3 - 3 = 0): _, 1, 0, 1.
Finally, F(1, 0) = F(0, 0) = 0, so object 1 is not included (x1 = 0).
The solution vector is (0, 1, 0, 1): objects 2 and 4 are selected, with total profit 2 + 6 = 8 and total weight 3 + 5 = 8 <= M. A code sketch of this traceback follows the final table.
P   w   i\w | 0  1  2  3  4  5  6  7  8
------------+---------------------------
-   -    0  | 0  0  0  0  0  0  0  0  0
1   2    1  | 0  0  1  1  1  1  1  1  1
2   3    2  | 0  0  1  2  2  3  3  3  3
5   4    3  | 0  0  1  2  5  5  6  7  7
6   5    4  | 0  0  1  2  5  6  6  7  8
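A minimal traceback sketch matching the comparison described above (the function name and parameter layout are illustrative; F is the filled table from the tabulation sketch):

/* Recover the solution vector x[1..n] from the filled table F:
   object i was taken exactly when its row changed the table value. */
void traceback(int n, int M, int w[], int F[n + 1][M + 1], int x[])
{
    int c = M;                           /* remaining capacity */
    for (int i = n; i >= 1; i--) {
        if (F[i][c] != F[i - 1][c]) {    /* value not present in the row above: object i included */
            x[i] = 1;
            c -= w[i - 1];
        } else {
            x[i] = 0;                    /* same value above: object i not needed */
        }
    }
}

For the example above, this yields x = (0, 1, 0, 1), as derived by hand.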