Dynamic Programming

Dynamic programming is an algorithm design technique that solves problems by breaking them down into smaller subproblems and storing the results of subproblems to avoid recomputing them. It is applicable to problems exhibiting optimal substructure, where optimal solutions to subproblems can be used to arrive at an optimal solution to the original problem, and overlapping subproblems, where the same subproblems are solved repeatedly. The main steps of a dynamic programming algorithm are to characterize the optimal substructure, recursively define the value of an optimal solution, compute the value bottom-up using stored results, and optionally construct an optimal solution. Examples where dynamic programming is used include the knapsack problem, traveling salesperson problem, and longest common subsequence problem.

Well-known algorithm design techniques:

– Divide-and-conquer algorithms

• Another strategy for designing algorithms is dynamic programming.

– Used when the problem breaks down into recurring small subproblems

• Dynamic programming is typically applied to optimization problems. In such problems there can be
many solutions. Each solution has a value, and we wish to find a solution with the optimal value.

Dynamic Programming is a general algorithm design technique for solving problems defined by or
formulated as recurrences with overlapping subinstances.

It was invented by the American mathematician Richard Bellman in the 1950s to solve optimization
problems.

Main idea:
- set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract the solution to the initial instance from that table
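As a minimal sketch of this table-based idea, consider the Fibonacci numbers (an illustration chosen
for brevity, not one of the examples in these notes). The recurrence relates a larger instance to
smaller ones, each smaller instance is solved once, and the answer is read out of the table:

# Bottom-up dynamic programming for Fibonacci numbers (illustrative sketch).
# The recurrence F(n) = F(n-1) + F(n-2) relates a larger instance to smaller ones;
# each smaller instance is solved once and recorded in the table `fib`.
def fib_bottom_up(n):
    if n < 2:
        return n
    fib = [0] * (n + 1)          # table of solutions to smaller instances
    fib[1] = 1
    for i in range(2, n + 1):    # solve instances in increasing order of size
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib[n]                # extract the solution to the initial instance

print(fib_bottom_up(10))         # prints 55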

Dynamic programming is a way of improving on inefficient divide-and-conquer algorithms.

• By “inefficient”, we mean that the same recursive call is made over and over.

• If the same subproblem is solved several times, we can use a table to store the result of a
subproblem the first time it is computed and thus never have to recompute it again.

• Dynamic programming is applicable when the subproblems are dependent, that is, when
subproblems share subsubproblems.

• “Programming” refers to a tabular method.
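As a hedged sketch of this idea in the top-down direction (memoization), again using Fibonacci
purely for brevity; the function name and the dictionary used as the table are assumptions of this
illustration, not part of the notes:

# Top-down dynamic programming (memoization) sketch for Fibonacci.
# The table `memo` stores the result of each subproblem the first time it is
# computed, so the same recursive call is never evaluated twice.
def fib_memo(n, memo=None):
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:            # subproblem not solved yet
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]               # reuse the stored result

print(fib_memo(40))              # prints 102334155, using O(n) recursive calls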

Elements of Dynamic Programming:

DP is used to solve problems with the following characteristics:

• Simple subproblems
– We should be able to break the original problem into smaller subproblems that have the same
structure.

• Optimal substructure of the problem
– The optimal solution to the problem contains within it optimal solutions to its subproblems.

• Overlapping subproblems
– There exist some places where we solve the same subproblem more than once.
Steps to Designing a Dynamic Programming Algorithm

1. Characterize the optimal substructure

2. Recursively define the value of an optimal solution

3. Compute the value bottom-up

4. (if needed) Construct an optimal solution
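To make the four steps concrete, here is a hedged sketch of the 0/1 knapsack problem (the first
example listed later in these notes); the function and variable names are illustrative choices, not
part of the original notes:

# 0/1 knapsack via the four steps (illustrative sketch).
# Step 1 (optimal substructure): an optimal packing of the first i items with
#   capacity w either skips item i or takes it and packs the rest optimally.
# Step 2 (recurrence): K[i][w] = max(K[i-1][w], K[i-1][w - wt[i-1]] + val[i-1])
#   when item i fits, else K[i][w] = K[i-1][w].
def knapsack_01(values, weights, capacity):
    n = len(values)
    # Step 3: compute the table bottom-up.
    K = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            K[i][w] = K[i - 1][w]                    # skip item i
            if weights[i - 1] <= w:
                K[i][w] = max(K[i][w],
                              K[i - 1][w - weights[i - 1]] + values[i - 1])
    # Step 4: construct an optimal solution by walking the table backwards.
    chosen, w = [], capacity
    for i in range(n, 0, -1):
        if K[i][w] != K[i - 1][w]:                   # item i was taken
            chosen.append(i - 1)
            w -= weights[i - 1]
    return K[n][capacity], sorted(chosen)

# Example: the best value is 220, obtained by taking items 1 and 2.
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # (220, [1, 2])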

Principle of Optimality

Dynamic programming works on the principle of optimality.

• The principle of optimality states that in an optimal sequence of decisions or choices, each
subsequence must also be optimal. For example, if the shortest path from A to C passes through B,
then the portion of that path from A to B must itself be a shortest path from A to B.

Dynamic Programming vs. Greedy Method

1. Dynamic Programming is used to obtain the optimal solution. The Greedy Method is also used to
   get the optimal solution.

2. In Dynamic Programming, we make a choice at each step, but the choice may depend on the
   solutions to sub-problems. In a greedy algorithm, we make whatever choice seems best at the
   moment and then solve the sub-problems arising after the choice is made.

3. Dynamic Programming is less efficient as compared to a greedy approach; the Greedy Method is
   more efficient.

4. Example: 0/1 Knapsack (Dynamic Programming) vs. Fractional Knapsack (Greedy Method).

5. It is guaranteed that Dynamic Programming will generate an optimal solution using the Principle
   of Optimality. In the Greedy Method, there is no such guarantee of getting an optimal solution.
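As a hedged illustration of point 4, here is a greedy sketch of the fractional knapsack (the item
data and the value-per-weight sorting key are illustrative assumptions), which can be contrasted
with the 0/1 knapsack table shown earlier:

# Greedy sketch for the fractional knapsack: take items in decreasing order of
# value per unit weight, splitting the last item if it does not fully fit.
# This greedy choice is optimal for the fractional version, but not for 0/1.
def fractional_knapsack(values, weights, capacity):
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)     # take as much of this item as fits
        total += value * (take / weight)
        capacity -= take
    return total

# Same data as the 0/1 sketch above: the fractional optimum is 240.0,
# higher than the 0/1 optimum of 220, because item 2 can be split.
print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))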

Examples:

1. 0/1 Knapsack Problem
2. Traveling Salesperson Problem
3. Optimal Binary Search Tree
4. Matrix Chain Multiplication
5. Longest Common Subsequence
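As one more hedged sketch, example 5 (longest common subsequence) with a standard bottom-up table;
the function name and the test strings are illustrative, not taken from the notes:

# Bottom-up table for the longest common subsequence (LCS) of two strings.
# L[i][j] holds the LCS length of x[:i] and y[:j]; the recurrence extends the
# LCS when the characters match, otherwise keeps the better of two subproblems.
def lcs_length(x, y):
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1        # characters match
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))               # prints 4 (e.g. "BCBA")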
