
Dynamic Programming

By
Dr.V.Venkateswara Rao
What is Dynamic Programming?

Dynamic Programming (DP) is an algorithmic technique for solving an optimization problem by breaking it down into simpler subproblems and exploiting the fact that the optimal solution to the overall problem depends upon the optimal solutions to its subproblems.

Dynamic Programming is a technique in computer programming that efficiently solves a class of problems that have overlapping subproblems and the optimal-substructure property. Such problems involve repeatedly calculating the value of the same subproblems to find the optimum solution.

Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems.

1. Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem.

2. If there are overlapping subproblems (i.e. the same subproblem occurs a number of times), divide and conquer computes each occurrence independently, again and again. This is a time-consuming process.

3. Divide and conquer is best when the subproblems are different and independent, with no overlap between subproblems.

4. In contrast, dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. In this context, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subsubproblems.

5. A dynamic-programming algorithm solves every subsubproblem just once, saves its answer in memory, and reuses it whenever the subproblem recurs.
Dynamic Programming is the most powerful design technique for solving optimization problems.

Divide & Conquer algorithms partition the problem into disjoint subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem.

Dynamic Programming is used when the subproblems are not independent, e.g. when they share the same subsubproblems. In this case, divide and conquer may do more work than necessary, because it solves the same subproblem multiple times.

Dynamic Programming solves each subproblem just once and stores the result in a table (memory) so that it can be retrieved whenever it is needed again.

Dynamic Programming is a bottom-up approach: we solve all possible small problems and then combine them to obtain solutions for bigger problems.

Dynamic Programming is a paradigm of algorithm design in which an optimization problem is solved by combining subproblem solutions and appealing to the "principle of optimality".
Overlapping Subproblems
A problem has overlapping subproblems if finding its solution involves solving the same subproblem multiple times. Take the example of the Fibonacci numbers: to find fib(4), we break it down into subproblems, and several of them (such as fib(2) and fib(1)) are solved more than once.
Characteristics of Dynamic Programming:
Dynamic Programming works when a problem has the following features:

Optimal Substructure: If an optimal solution contains optimal solutions to its subproblems, then the problem exhibits optimal substructure.

Overlapping Subproblems: When a recursive algorithm would visit the same subproblems repeatedly, the problem has overlapping subproblems.
Elements of Dynamic Programming
There are basically three elements that characterize a dynamic programming algorithm:

Substructure: Decompose the given problem into smaller subproblems. Express the solution of the original problem in terms of the solutions for the smaller problems.

Table Structure: After solving the subproblems, store their results in a table. This is done because subproblem solutions are reused many times, and we do not want to repeatedly solve the same problem over and over again.

Bottom-up Computation: Using the table, combine the solutions of smaller subproblems to solve larger subproblems, eventually arriving at a solution to the complete problem.

Bottom-up means:
Start with the smallest subproblems.
Combining their solutions, obtain the solutions to subproblems of increasing size.
Continue until arriving at the solution of the original problem.
It can be broken into four steps:

1. Characterize the structure of an optimal solution, or a mathematical notation for the solution of the problem.

2. Recursively define the value of the optimal solution. Like Divide and Conquer, divide the problem into two or more optimal parts recursively. This helps to determine what the solution will look like.

3. Compute the value of the optimal solution from the bottom up (starting with the smallest subproblems).

4. Construct the optimal solution for the entire problem from the computed values of smaller subproblems.
Applications of dynamic programming
It can be used to solve:
1. Decision problems
2. Complex optimization problems
3. Combinatory (enumeration) problems
0/1 knapsack problem
Mathematical optimization problems
All-pairs shortest path problem
Reliability design problem
Longest common subsequence (LCS)
Flight control and robotics control
Time-sharing: scheduling jobs to maximize CPU usage
Matrix chain multiplication
Optimal binary search tree
Travelling salesman problem
Count of ways to reach the nth stair
Count of ways to reach a destination
Differentiate between Divide & Conquer Method vs Dynamic Programming.

1. Divide & Conquer involves three steps at each level of recursion: divide the problem into a number of subproblems; conquer the subproblems by solving them recursively; combine the solutions to the subproblems into the solution for the original problem. Dynamic Programming involves a sequence of four steps: characterize the structure of optimal solutions; recursively define the values of optimal solutions; compute the values of optimal solutions in a bottom-up manner; construct an optimal solution from the computed information.

2. Divide & Conquer is recursive. Dynamic Programming is non-recursive.

3. Divide & Conquer does more work on subproblems and hence has more time consumption. Dynamic Programming solves each subproblem only once and then stores the result in a table.

4. Divide & Conquer is a top-down approach. Dynamic Programming is a bottom-up approach.

5. In Divide & Conquer, the subproblems are independent of each other. In Dynamic Programming, the subproblems are interdependent.

6. Examples of Divide & Conquer: Merge Sort, Binary Search, etc. Example of Dynamic Programming: Matrix Chain Multiplication.
Dynamic Programming Methods
DP offers two methods to solve a problem. There are the following two different ways to store the values of overlapping subproblems so that these values can be reused:
a) Memoization (Top Down)
b) Tabulation (Bottom Up)

1. Top-down with Memoization
In this approach, we try to solve the bigger problem by recursively finding the solution to smaller sub-problems. Whenever we solve a sub-problem, we cache (store) its result in a temporary one-dimensional (1D) memory so that we don't end up solving it repeatedly if it's called multiple times. Instead, we can just return the saved result. This technique of storing the results of already solved subproblems is called Memoization.

The 1D array is also called a lookup array.


2. Bottom-up with Tabulation
Tabulation is the opposite of the top-down approach and avoids recursion. In this approach, we solve the problem "bottom-up" (i.e. by solving all the related sub-problems first). This is typically done by filling up an n-dimensional table. Based on the results in the table, the solution to the top/original problem is then computed. Tabulation is the opposite of Memoization: in Memoization we solve the problem top-down and maintain a map of already solved sub-problems.

The tabulated program for a given problem builds a table in bottom-up fashion and returns the last entry of the table.
#include <stdio.h>

int fib(int n)
{
  int f[n + 1];   /* table of subproblem results */
  int i;
  if (n < 2)
      return n;   /* base cases: fib(0) = 0, fib(1) = 1 */
  f[0] = 0;   f[1] = 1;
  for (i = 2; i <= n; i++)
      f[i] = f[i - 1] + f[i - 2];   /* build up from smaller subproblems */

  return f[n];
}

int main()
{
  int n = 9;
  printf("Fibonacci number is %d ", fib(n));
  return 0;
}
Count ways to reach the n’th stair
There are n stairs, and a person standing at the bottom wants to reach the top.
The person can climb either 1 stair or 2 stairs at a time.
Count the number of ways the person can reach the top.
 
int countWays(int m)
{
    int fib[m + 1];
    if (m == 1)
        return 1;
    fib[0] = 0; fib[1] = 1; fib[2] = 2;   /* 1 way to reach stair 1, 2 ways to reach stair 2 */
    for (int i = 3; i <= m; i++)
    {
        fib[i] = fib[i - 1] + fib[i - 2];
    }
    return fib[m];
}
/* Same as above, but the count is taken modulo 1000000007 to avoid overflow */
int countWays(int m)
{
    int fib[m + 1];
    if (m == 1)
        return 1;

    fib[0] = 0; fib[1] = 1; fib[2] = 2;

    for (int i = 3; i <= m; i++)
    {
        fib[i] = ((fib[i - 1] % 1000000007) + (fib[i - 2] % 1000000007)) % 1000000007;
    }
    return fib[m];
}
#include <iostream>
using namespace std;

/* Variant: the person can climb 1, 2, or 3 stairs at a time */
int countWays(int n)
{
    int res[n + 1];
    if (n == 1)
        return 1;
    if (n == 2)
        return 2;
    res[0] = 1;
    res[1] = 1;
    res[2] = 2;
    for (int i = 3; i <= n; i++)
    {
        res[i] = res[i - 1] + res[i - 2] + res[i - 3];
    }
    return res[n];
}

// Driver program to test the above function
int main()
{
    int n;
    cin >> n;
    cout << countWays(n);
    return 0;
}
#include <iostream>
using namespace std;

/* Generalization: the person can climb anywhere from 1 to m stairs at a time */
int countWaysUtil(int n, int m)
{
    int res[n];
    if (n == 1)
        return 1;

    res[0] = 1;
    res[1] = 1;
    for (int i = 2; i < n; i++)
    {
        res[i] = 0;
        for (int j = 1; j <= m && j <= i; j++)
        {
            res[i] += res[i - j];
        }
    }
    return res[n - 1];
}

// Returns number of ways to reach the s'th stair
int countWays(int s, int m)
{
    return countWaysUtil(s + 1, m);
}

int main()
{
    int s = 4, m = 2;
    cout << "Number of ways = " << countWays(s, m);
    return 0;
}
  
Longest Common Subsequence (LCS)

#include <iostream>
#include <cstring>
#include <algorithm>
using namespace std;

/* Returns length of LCS for X[0..m-1], Y[0..n-1] */
int lcs(char *X, char *Y, int m, int n)
{
    int L[m + 1][n + 1];
    int i, j;
    /* first row and first column are 0: the LCS with an empty string is empty */
    for (j = 0; j <= n; j++)
        L[0][j] = 0;
    for (i = 0; i <= m; i++)
        L[i][0] = 0;
    for (i = 1; i <= m; i++)
    {
        for (j = 1; j <= n; j++)
        {
            if (X[i - 1] == Y[j - 1])
                L[i][j] = L[i - 1][j - 1] + 1;
            else
                L[i][j] = max(L[i - 1][j], L[i][j - 1]);
        }
    }
    /* L[m][n] contains the length of the LCS of X[0..m-1] and Y[0..n-1] */
    return L[m][n];
}

// Driver Code
int main()
{
    char X[] = "AGGTAB";
    char Y[] = "GXTXAYB";

    int m = strlen(X);
    int n = strlen(Y);

    cout << "Length of LCS is "
         << lcs(X, Y, m, n);

    return 0;
}

  
