Dynamic Programming (0-1 Knapsack)

Optimization Problem

• An optimization problem is one in which we are given a set of input values and an objective that must be
either maximized or minimized, possibly subject to constraints or conditions. A greedy algorithm always
makes the choice (the greedy criterion) that looks best at the moment in order to optimize the given objective.

• Minimization and maximization problems are collectively termed optimization problems.

• There are two strategies for solving an optimization problem:

➢ Greedy Algorithm
➢ Dynamic Programming
Dynamic Programming
Dynamic Programming (DP) is an algorithmic technique for solving optimization problems by breaking them
into simpler sub-problems and storing each sub-solution, so that each sub-problem is solved only once.
Dynamic programming is a good methodology for optimization problems that seek a maximal or minimal
solution subject to restrictions, as it searches through all possible sub-problems and never recomputes the
solution to any sub-problem.

In its bottom-up form, dynamic programming solves all possible small problems and then combines them
to obtain solutions for bigger problems.

This is particularly helpful when the number of overlapping subproblems is exponentially large. Dynamic
programming is frequently associated with optimization problems.

In the greedy method, we follow a predefined procedure to get the optimal result; for example, always select
the minimum distance when finding a shortest path.

In dynamic programming, we try out all possible solutions and then pick the best one. It is time
consuming compared to the greedy method.
• For any problem there may be many feasible solutions, so we try out all of them to find the
optimal one.

• Dynamic programming algorithms are generally described by recursive formulas (recurrences), but in the
implementation we generally avoid recursion.

• Dynamic programming follows the principle of optimality, which says that a problem can be solved by taking
a sequence of decisions to obtain the optimal solution.

• In the greedy method, we take a decision only once; in dynamic programming, we take decisions at every stage.

• Dynamic programming says that the same subproblem should not be computed more than once.
Dynamic programming is a technique that breaks a problem into sub-problems and saves their results for
future use, so that we do not need to compute a result again. The property that the overall solution can be
optimized by combining the optimal solutions of the subproblems is known as the optimal substructure
property. The main use of dynamic programming is to solve optimization problems, i.e., problems in which
we are trying to find the minimum or the maximum solution. Dynamic programming is guaranteed to find
an optimal solution if one exists.

The definition of dynamic programming says that it is a technique for solving a complex problem by first
breaking it into a collection of simpler subproblems, solving each subproblem just once, and then storing the
solutions to avoid repetitive computations.
Let's understand this approach through an example.

Consider the example of the Fibonacci series, shown below:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …

The numbers in the above series are not randomly chosen. Mathematically, each term can be written using
the formula:

F(n) = F(n-1) + F(n-2),

with the base values F(0) = 0 and F(1) = 1. To calculate the other numbers, we follow this
relationship. For example, F(2) is the sum of F(0) and F(1), which is equal to 1.

How can we calculate F(20)?

The value of F(20) is calculated by expanding the formula F(n) = F(n-1) + F(n-2) into a recursion
tree: F(20) is the sum of F(19) and F(18). In the dynamic programming approach, we try to divide the
problem into similar subproblems, and that is what happens here, where F(20) is split into the similar
subproblems F(19) and F(18). Recall that the definition of dynamic programming says that the same
subproblem should not be computed more than once. Still, in a plain recursive expansion, subproblems are
computed repeatedly: F(18) is calculated twice, and similarly F(17) is also calculated twice. The technique is
therefore only useful if we are careful to store each result once it has been computed; otherwise the repeated
work leads to a waste of resources.

In the above example, recalculating F(18) in the right subtree leads to tremendous usage of resources and
decreases the overall performance.

The solution to the above problem is to save the computed results in an array. First, we calculate F(16) and
F(17) and save their values in the array. F(18) is then calculated by summing the values of F(17) and F(16),
which are already saved, and the computed value of F(18) is saved as well. The value of F(19) is calculated
as the sum of F(18) and F(17), whose values are already saved, and F(19) is stored in turn. Finally, the value
of F(20) is calculated by adding the stored values of F(19) and F(18), and the final computed value of F(20)
is stored in the array.
How does the dynamic programming approach work?

The following are the steps that dynamic programming follows:
• It breaks the complex problem down into simpler subproblems.
• It finds the optimal solutions to these sub-problems.
• It stores the results of the subproblems. The process of storing the results of subproblems is
known as memoization.
• It reuses those results so that the same sub-problem is not calculated more than once.
• Finally, it calculates the result of the complex problem.

The above five steps are the basic steps of dynamic programming. Dynamic programming is applicable to
problems having properties such as:
• Overlapping subproblems and optimal substructure. Here, optimal substructure means that the solution
of the optimization problem can be obtained by simply combining the optimal solutions of all the
subproblems.
• In the case of dynamic programming, the space complexity is increased because we store the
intermediate results, but the time complexity is decreased.
Approaches of dynamic programming
There are two approaches to dynamic programming:

• Top-down approach
• Bottom-up approach

Top-down approach

The top-down approach follows the memoization technique, while the bottom-up approach follows the
tabulation method. Here memoization is equal to recursion plus caching: recursion means the function
calling itself, while caching means storing the intermediate results.

Advantages
• It is very easy to understand and implement.
• It solves the subproblems only when it is required.
• It is easy to debug.
Disadvantages
• It uses recursion, which occupies more memory in the call stack. When the recursion is too deep,
a stack overflow condition may occur.
• It occupies more memory, which degrades the overall performance.
There are two different ways to store the values so that the value of a sub-problem can be reused. Here, we
will discuss the two patterns of solving a dynamic programming (DP) problem:

• Tabulation: Bottom Up
• Memoization: Top Down
Let's understand dynamic programming through an example.

int fib(int n)
{
    if (n < 0)
        return -1;      /* error: invalid input */
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    return fib(n - 1) + fib(n - 2);
}

In the above code, we have used the recursive approach to find the Fibonacci numbers. As the value of 'n'
increases, the number of function calls and computations also increases. In this case, the time complexity
grows exponentially: it is O(2^n).

One solution to this problem is to use the dynamic programming approach. Rather than generating the
recursive tree again and again, we can reuse previously calculated values. If we use the dynamic
programming approach, the time complexity becomes O(n).
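To see where the exponential bound comes from, note that each call to fib(n) spawns two further calls, so the
number of calls T(n) satisfies roughly the same recurrence as the series itself (a back-of-the-envelope sketch,
not from the original notes):

T(n) = T(n-1) + T(n-2) + c,  with T(0) = T(1) = 1

This grows like the Fibonacci numbers themselves, i.e. proportionally to φ^n where φ = (1+√5)/2 ≈ 1.618,
which is loosely upper-bounded by the O(2^n) quoted above.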

We may observe that the same function calls are made again and again: for example, in computing fib(5),
F(2) is called 3 times, F(1) is called 5 times, and F(0) is called 3 times.

If we store their results somewhere, we won't need to compute them again and again, and thus we can reduce
the running time.
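One way to verify these counts is to instrument the naive recursive function with a per-value call counter. A
minimal sketch (the calls array and the MAXN bound are ours, added only for illustration):

#include <stdio.h>

#define MAXN 50
static int calls[MAXN];   /* calls[k] = how many times fib(k) was invoked */

int fib(int n)            /* naive recursion, assumes 0 <= n < MAXN */
{
    calls[n]++;
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

int main(void)
{
    fib(5);
    for (int k = 0; k <= 5; k++)
        printf("fib(%d) called %d time(s)\n", k, calls[k]);
    return 0;
}

Running this prints exactly the counts quoted above: fib(2) is invoked 3 times, fib(1) 5 times, and fib(0) 3 times.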
Suppose we take an array F to store the intermediate results of the function calls, computing fib(4) as an
example. Initially, put -1 in all slots.

index:   0    1    2    3    4
F:      -1   -1   -1   -1   -1    (initially)
F:      -1    1   -1   -1   -1
F:       0    1   -1   -1   -1
F:       0    1    1   -1   -1
F:       0    1    1    2   -1
F:       0    1    1    2    3    (final state)

Total number of distinct function evaluations = n + 1, i.e. O(n).
When we apply the dynamic programming approach to the implementation of the Fibonacci series, the code
looks like this:

static int count = 0;
static int memo[50];     /* must be set to -1 in every slot before the first call */

int fib(int n)
{
    if (n < 0)
        return -1;       /* error: invalid input */
    if (memo[n] != -1)
        return memo[n];  /* already computed: reuse the stored value */
    count++;
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

In this code, we have used the memoization technique: we store the results in an array so that the values can
be reused. This is known as the top-down approach, in which we start from the top (the full problem) and
break it into sub-problems. Memoization follows the top-down approach.

We can use this method if we want, but generally we don't; we use the tabulation method instead, which is
the iterative method.
Bottom-Up approach

The bottom-up approach is another technique that can be used to implement dynamic programming; it uses
the tabulation technique. It solves the same kind of problems, but it removes the recursion. Removing the
recursion avoids stack overflow issues and the overhead of recursive function calls, thus saving memory.
In the tabulation technique, we solve the problems and store the results in a table.

Bottom-up is an approach that starts from the beginning, whereas the recursive (top-down) approach starts
from the end and works backward. In the bottom-up approach, we start from the base cases and build up to
the answer. As we know, the base cases of the Fibonacci series are 0 and 1, so we start from 0 and 1.
Key points

• We solve all the smaller sub-problems that will be needed to solve the larger sub-problems, then move to
the larger problems using the smaller sub-problems.
• We use a for loop to iterate over the sub-problems.
• The bottom-up approach is also known as the tabulation or table-filling method.

Let's understand through an example.

Suppose we have an array a that has the values 0 and 1 at positions a[0] and a[1], respectively.

Since the bottom-up approach starts from the lower values, the values at a[0] and a[1] are added to find the
value of a[2], which is 1. The value of a[3] is calculated by adding a[1] and a[2], and it becomes 2. The
value of a[4] is calculated by adding a[2] and a[3], and it becomes 3. Finally, the value of a[5] is calculated
by adding the values of a[4] and a[3], and it becomes 5.

The code for implementing the Fibonacci series using the bottom-up approach (iterative manner) is given below:

int fib(int n)
{
    if (n <= 1)
        return n;
    int F[n + 1];                  /* table of sub-problem results */
    F[0] = 0; F[1] = 1;
    for (int i = 2; i <= n; i++)
        F[i] = F[i - 1] + F[i - 2];
    return F[n];
}

In the above code, the base cases are 0 and 1, and then we have used a for loop to find the other values of the
Fibonacci series. Tracing fib(5), the table F fills from the smaller indices upward:

index:  0  1  2  3  4  5
F:      0  1                      (base cases)
F:      0  1  1                   (i = 2)
F:      0  1  1  2                (i = 3)
F:      0  1  1  2  3             (i = 4)
F:      0  1  1  2  3  5          (i = 5)

Return 5: the value 5 is returned by the function. Here the table is generated by an iterative function and
filled from the smaller values upward, so it is a bottom-up approach. It starts with F(0), unlike the previous
(top-down) approach, which started from the top value n.
0/1 Knapsack Problem
In this problem an item cannot be broken, which means the thief should take an item as a whole or leave it.
That is why it is called the 0/1 knapsack problem.

• Each item is either taken or not taken.
• We cannot take a fractional amount of an item, nor take an item more than once.
• It cannot be solved by the greedy approach, because the greedy approach is unable to guarantee filling the
knapsack to capacity.
• The greedy approach doesn't ensure an optimal solution.
Consider the following example

M = 8 (capacity)            P = {1, 2, 5, 6} (profits)
N = 4 (number of objects)   W = {2, 3, 4, 5} (weights)

Since an item is not divisible, it can either be selected entirely or not selected at all. So we have to take a
decision at each step: whether to select the object (1) or not (0).

The set of possibilities looks like:

0000 = no object
0001 = last object only
1000 = first object only
1010 = first and third objects
1111 = all objects
...

So in total there are 2^4 = 16 possibilities (solutions). In general, for n objects there will be 2^n possible
solutions, so the time complexity of trying them all is 2^n. The dynamic programming method, however,
provides an easier way to do this.
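To make the 2^n count concrete, here is a brute-force sketch that enumerates every subset with a bitmask
(the name brute_knapsack and the 0-based arrays are ours, added only for illustration):

/* Try all 2^n subsets of the objects; return the best profit
   whose total weight fits within capacity M. */
int brute_knapsack(int n, int M, const int P[], const int W[])
{
    int best = 0;
    for (int mask = 0; mask < (1 << n); mask++) {   /* 2^n subsets */
        int weight = 0, profit = 0;
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i)) {                  /* bit i set: take object i */
                weight += W[i];
                profit += P[i];
            }
        }
        if (weight <= M && profit > best)
            best = profit;
    }
    return best;
}

For P = {1, 2, 5, 6}, W = {2, 3, 4, 5} and M = 8 this returns 8 (objects 2 and 4). Incidentally, a greedy
choice by profit/weight ratio would take object 3 first (ratio 5/4) and then object 2, for a profit of only 7,
which illustrates why the greedy approach fails here.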
Let's use the tabulation method to solve this problem. Here M is the total weight allowed (the knapsack
capacity):

M = 8    P = {1, 2, 5, 6}
N = 4    W = {2, 3, 4, 5}

We build a table F(i, w) with one row per object i and one column per weight w from 0 to M. Fill the first
row as if we are not considering any object, so both the profit and the weight are 0; therefore the first row
and the first column contain zeros only.

We will fill profits according to the weights in this matrix.

Now consider the first object only, and write its profit and weight values in the table, ignoring all remaining
objects.

M = 8    P = {1, 2, 5, 6}
N = 4    W = {2, 3, 4, 5}

Since the first object has weight 2, its profit is filled in starting at the column for weight 2, so F(1, 2) = 1.
Since we can put only this one object in the bag, the profit stays 1, so all columns from weight 2 onward in
this row contain 1.

Now consider the second object (together with the first), ignoring all remaining objects.

Columns 1 and 2 in this row stay as they are (taken from the row above).

For the higher columns, as mentioned before, we have to consider pairing with all previous objects, object 1
in this case. Taking both objects gives total weight 2 + 3 = 5 and total profit 1 + 2 = 3, so fill 3 at column 5.

All the following columns from 6 to 8 are also filled with 3, since with these two objects the total weight
stays below those column weights.

Now what about column 4? Here we compare leaving object 2, which gives F(1, 4) = 1, with taking it,
which gives P2 + F(1, 4 - 3) = 2 + 0 = 2, and fill in whichever is greater. Since 2 is greater, column 4 gets 2
(and column 3, where object 2 alone fits, gets 2 as well).
Now consider the third object, ignoring the remaining object but still considering the first and second
objects as well. Filling its row in the same way gives:

M = 8    P = {1, 2, 5, 6}
N = 4    W = {2, 3, 4, 5}

                 w:  0  1  2  3  4  5  6  7  8
p_i  w_i   i = 0:    0  0  0  0  0  0  0  0  0
 1    2    i = 1:    0  0  1  1  1  1  1  1  1
 2    3    i = 2:    0  0  1  2  2  3  3  3  3
 5    4    i = 3:    0  0  1  2  5  5  6  7  7
 6    5    i = 4:    0
F(i, w) = max{ F(i-1, w), F(i-1, w - w_i) + p_i }
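This recurrence translates directly into a table-filling routine. A minimal C sketch (the function name
fill_table, the fixed N and M, and the 1-based P and W arrays are our choices, made to match the example):

#include <stdio.h>

#define N 4   /* number of objects */
#define M 8   /* knapsack capacity */

int F[N + 1][M + 1];

/* P[i], W[i] are the profit and weight of object i (1-based, index 0 unused). */
void fill_table(const int P[], const int W[])
{
    for (int w = 0; w <= M; w++)
        F[0][w] = 0;                          /* row 0: no objects considered */
    for (int i = 1; i <= N; i++) {
        for (int w = 0; w <= M; w++) {
            F[i][w] = F[i - 1][w];            /* option 1: leave object i */
            if (w >= W[i]) {                  /* option 2: take object i if it fits */
                int take = F[i - 1][w - W[i]] + P[i];
                if (take > F[i][w])
                    F[i][w] = take;
            }
        }
    }
}

int main(void)
{
    int P[] = {0, 1, 2, 5, 6};   /* profits, 1-based */
    int W[] = {0, 2, 3, 4, 5};   /* weights, 1-based */
    fill_table(P, W);
    printf("maximum profit = %d\n", F[N][M]);  /* prints 8 */
    return 0;
}

Here F[N][M] comes out to 8, as the hand computation below confirms.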


For the last row (object 4, with weight 5 and profit 6), up to column 4 we get the same values as in the
previous row, because object 4 does not fit there. For instance,

F(4, 1) = F(3, 1) = 0    (the second term is undefined, since 1 - 5 < 0)

For the remaining columns we apply the recurrence:

F(4, 5) = max{ F(3, 5), F(3, 0) + 6 } = max{ 5, 0 + 6 } = 6
F(4, 6) = max{ F(3, 6), F(3, 1) + 6 } = max{ 6, 0 + 6 } = 6
F(4, 7) = max{ F(3, 7), F(3, 2) + 6 } = max{ 7, 1 + 6 } = 7
F(4, 8) = max{ F(3, 8), F(3, 3) + 6 } = max{ 7, 2 + 6 } = 8

The completed table:

                 w:  0  1  2  3  4  5  6  7  8
p_i  w_i   i = 0:    0  0  0  0  0  0  0  0  0
 1    2    i = 1:    0  0  1  1  1  1  1  1  1
 2    3    i = 2:    0  0  1  2  2  3  3  3  3
 5    4    i = 3:    0  0  1  2  5  5  6  7  7
 6    5    i = 4:    0  0  1  2  5  6  6  7  8
A row can also be filled directly, without the formula: keep all previous values as they are where the current
object does not fit, and otherwise consider pairing the current object with the previous objects.
Now we have to recover the sequence of decisions (x1, x2, x3, x4).

Start with the maximum profit, i.e. 8, in the last row. Since 8 is not present in the second-last row (row 3), it
must have come from including the last object; the last object contributes to the maximum profit, so:

_, _, _, 1

Now calculate the profit that must come from the remaining objects:

total - profit of the fourth object = 8 - 6 = 2

Now check whether 2 is present in the row above object 3's row (row 2). Here we are checking whether
object 3 is required to earn that profit of 2. It is present, meaning the profit of 2 was not earned due to object
3, so we do not include it:

_, _, 0, 1

Now check whether 2 is present in the row above object 2's row (row 1). Here we are checking whether
object 2 is required to earn that profit of 2. It is not present, meaning object 2 was involved in making this
profit, so we include it:

_, 1, 0, 1

Now calculate the profit remaining for the other objects:

2 - profit of the second object = 2 - 2 = 0

We can directly write 0 for the first object, as no profit is remaining:

0, 1, 0, 1
The completed table once more, for reference:

                 w:  0  1  2  3  4  5  6  7  8
p_i  w_i   i = 0:    0  0  0  0  0  0  0  0  0
 1    2    i = 1:    0  0  1  1  1  1  1  1  1
 2    3    i = 2:    0  0  1  2  2  3  3  3  3
 5    4    i = 3:    0  0  1  2  5  5  6  7  7
 6    5    i = 4:    0  0  1  2  5  6  6  7  8