Lecture 4 Dynamic Programming

The document discusses dynamic programming and provides examples to illustrate the technique. It begins with an overview of problem solving approaches like brute force, divide and conquer, and dynamic programming. Dynamic programming is characterized as a special case of divide and conquer that applies when subproblems overlap. The document then provides examples of problems solved using dynamic programming, including the rod cutting problem and matrix chain multiplication. It explains how dynamic programming problems can be solved recursively in a top-down manner or iteratively in a bottom-up manner to improve time complexity over naive recursive solutions.


Data Structures and Algorithms

Dynamic Programming

Dr. Muhammad Safyan


Department of Computer Science
Government College University, Lahore
Today’s Agenda

Problem-Solving Approaches


Dynamic programming
Approaches to Solve a Problem
Brute Force
A brute-force algorithm blindly iterates over the entire domain of
possible solutions in search of one or more solutions that satisfy
a condition.

Divide and Conquer:


 Divide a large problem into sub-problems of the original kind,
and divide each sub-problem into its own sub-problems, until
the problem becomes very simple.
 The very simple case becomes the base case.
 Usually applies when the sub-problems are disjoint,
e.g. Merge Sort.
Approaches to Solve a Problem

Dynamic Programming: a special case of the divide-and-conquer
approach that applies when the subproblems overlap,
e.g. the Fibonacci problem.

Greedy Approach: ?
Dynamic Programming

• Dynamic programming is a technique used for optimization.

• Goal: either get the maximum or the minimum result.

• In dynamic programming the procedure is not fixed in advance.
Instead:
it finds all possible solutions and then picks the one that is
optimal --> choose the best from multiple solutions.

• Dynamic programming may consume more memory than a normal
solution.
Dynamic Programming (DP)

• Uses a recursive formulation; the recursion can be evaluated
either recursively or iteratively.

• DP follows the principle of optimality: an optimal sequence of
decisions is built from optimal decisions on its sub-sequences.

• The following conditions must be met for dynamic programming:

 Recursive equation

 Optimal substructure

 Overlapping subproblems


Dynamic Programming

Recursive equation:
a function that calls itself.
Optimal substructure:
an optimal solution to the problem contains optimal solutions to its
sub-problems.
Overlapping subproblems: repeating sub-problems;
one call solves sub-problems that are also part of other calls
on the problem.
Advantage
+ Dynamic programming reduces time complexity.
Dynamic Programming

Recursion Methodologies
There are two ways to evaluate a recursive solution:
• Top-down: use recursion, with memoization
• Bottom-up: use tabulation,
  i.e. the recursive equation and a for loop
Fibonacci Series
Fib(n) = 0                     if n = 0
         1                     if n = 1
         fib(n-2) + fib(n-1)   otherwise

int fib(int n)
{
    if (n <= 1)
        return n;
    else
        return fib(n-2) + fib(n-1);
}

Time complexity: T(n) = O(2^n).


How can we reduce it?
Top-Down: Fibonacci Series

What's the problem? The naive recursion recomputes the same
values again and again.

Top-down memoized method:
Fib(n)
{
    if (n == 0)
        return M[0];
    if (n == 1)
        return M[1];

    if (Fib(n-2) is not already calculated)
        call Fib(n-2);

    if (Fib(n-1) is not already calculated)
        call Fib(n-1);

    // Store the nth Fibonacci number in memory, using the previous results.
    M[n] = M[n-1] + M[n-2];

    return M[n];
}
Fibonacci Series
Store the results in a global array.

Total calls = 6
T(n) = n + 1
T(n) = O(n)
This is called memoization.
Memoization makes a big difference.
When the table is instead filled iteratively, from the smallest
values up, this is the bottom-up approach, and the iterative
method is called the tabulation method.
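The two approaches above can be sketched in Python (a minimal sketch; the names fib_memo and fib_tab are ours):

```python
from functools import lru_cache

# Top-down: recursion plus a cache (memoization).
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 2) + fib_memo(n - 1)

# Bottom-up: fill the table iteratively (tabulation).
def fib_tab(n):
    if n <= 1:
        return n
    m = [0] * (n + 1)
    m[1] = 1
    for i in range(2, n + 1):
        m[i] = m[i - 1] + m[i - 2]
    return m[n]

print(fib_memo(10), fib_tab(10))  # both 55
```

Both versions compute each value exactly once, so both run in O(n) time; tabulation additionally avoids the recursion stack.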
Dynamic Programming
Dynamic programming is a method of solving optimization
problems by combining the solutions of subproblems.
Developing these algorithms follows four steps:
• Characterize the optimality - formally state what properties an
optimal solution exhibits
• Recursively define an optimal solution - analyze the problem in
a top-down fashion to determine how subproblems relate to the
original
• Solve the subproblems - start with a base case and solve the sub-
problems in a bottom-up manner to find the optimal value
• Reconstruct the optimal solution - (optionally) determine the
solution that produces the optimal value
Rod Cutting Problem

Assume a company buys long steel rods and cuts them into shorter
rods for sale to its customers. If each cut is free and rods of different
lengths can be sold for different amounts, we wish to determine how
to best cut the original rods to maximize the revenue.

Brute Force Solution:

• Let the length of the rod be n inches. There are 2^(n-1) different ways to
cut the rod.

• At each of the n-1 positions along the rod we make a binary decision of
whether or not to make a cut, so the number of cutting patterns equals
the number of binary patterns of n-1 bits, of which there are 2^(n-1).
Rod Cutting Problem

Eight possible ways to cut a rod of length 4
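These cuttings can be enumerated directly: one bit per possible cut position. A minimal Python sketch (the name cuttings is ours):

```python
from itertools import product

def cuttings(n):
    """Enumerate all ways to cut a rod of length n: one binary
    decision (cut / no cut) at each of the n-1 positions."""
    results = []
    for bits in product([0, 1], repeat=n - 1):
        piece, pieces = 1, []
        for cut in bits:
            if cut:                # cut here: close the current piece
                pieces.append(piece)
                piece = 1
            else:                  # no cut: current piece grows
                piece += 1
        pieces.append(piece)
        results.append(tuple(pieces))
    return results

ways = cuttings(4)
print(len(ways))  # 2**(4-1) = 8
```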


Rod Cutting Problem

To find the optimal value we simply add up the prices for all the
pieces of each permutation and select the highest value.

Dynamic Programming Solution:

Formalize the problem by assuming that a piece of length i has
price pi. An optimal solution cuts the rod into k pieces of
lengths i1, i2, ..., ik such that n = i1 + i2 + ... + ik; then the revenue
for a rod of length n is

rn = pi1 + pi2 + ... + pik
Rod Cutting Problem

Optimal Substructure:
Rod Cutting Problem

Recursive Equation: Complexity

T(n) = 1 + Σ_{j=0}^{n-1} T(j)

where T(j) is the number of times the recursion occurs for each iteration
of the for loop with j = n-i. The solution to this recurrence can be shown
to be T(n) = 2^n, which is still exponential behavior.
The problem with the top-down naive solution is that we recompute
all possible cuts, thus producing the same run time as brute force (only
in a recursive fashion).
Rod Cutting Problem: Bottom-Up

We can store the solutions to the smaller problems in a bottom-up
manner rather than recomputing them.

The run time can be drastically improved (at the cost of additional
memory usage).

To implement this approach we simply solve the problems
starting from the smaller lengths and store these optimal revenues
in an array (of size n+1). Then, when evaluating longer lengths, we
simply look up these values to determine the optimal revenue for
the larger piece. We can formulate this recursively as follows:

r0 = 0
rn = max over 1 ≤ i ≤ n of ( pi + r(n-i) )
Rod Cutting Problem: Bottom-Up

Length of   Profit             Total Length
Pieces      per Piece    0   1   2   3   4   5
(none)          -        0   0   0   0   0   0
  1             2        0   2   4   6   8  10
  2             5        0   2   5   7  10  12
  3             9        0   2   5   9  11  14
  4             6        0   2   5   9  11  14

Each entry = Max(profit excluding the new piece,
                 profit including the new piece)

Rod Cutting Problem: Bottom-Up

Note that to compute any rj we only need the values r0 to r(j-1), which
we store in an array.
Hence we compute each new element using only previously
computed values. The implementation of this approach is
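A minimal Python sketch of the bottom-up implementation, using the piece prices from the table above (length 1 -> 2, 2 -> 5, 3 -> 9, 4 -> 6; the name cut_rod is ours):

```python
def cut_rod(prices, n):
    """Bottom-up rod cutting: prices[i] is the price of a piece of
    length i (prices[0] unused). Returns the max revenue r[n]."""
    r = [0] * (n + 1)
    for j in range(1, n + 1):          # solve lengths 1..n in order
        best = 0
        for i in range(1, j + 1):      # first piece has length i
            if i < len(prices):
                best = max(best, prices[i] + r[j - i])
        r[j] = best                    # r[j] uses only r[0..j-1]
    return r[n]

prices = [0, 2, 5, 9, 6]
print([cut_rod(prices, n) for n in range(6)])  # [0, 2, 5, 9, 11, 14]
```

The printed revenues match the last row of the table above; the two nested loops give O(n^2) time.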
Rod Cutting Problem: Bottom-Up
Rod Cutting Problem: Extended Bottom-Up
Matrix Multiplication: Dynamic Programming

Recalling Matrix Multiplication


Matrix-Chain multiplication (cont.)

Cost of the matrix multiplication: multiplying a p × q matrix by a
q × r matrix takes p · q · r scalar multiplications.

An example: A1 A2 A3
A1 : 10 × 100
A2 : 100 × 5
A3 : 5 × 50
Matrix-Chain multiplication (cont.)

If we multiply ((A1 A2) A3) we perform 10 · 100 · 5 = 5000
scalar multiplications to compute the 10 × 5 matrix product A1 A2,
plus another 10 · 5 · 50 = 2500 scalar multiplications to multiply
this matrix by A3, for a total of 7500 scalar multiplications.

If we multiply (A1 (A2 A3)) we perform 100 · 5 · 50 = 25,000
scalar multiplications to compute the 100 × 50 matrix product A2 A3,
plus another 10 · 100 · 50 = 50,000 scalar multiplications to multiply
A1 by this matrix, for a total of 75,000 scalar multiplications.
Matrix-Chain multiplication (cont.)

The problem:
Given a chain A1, A2, ..., An of n matrices, where matrix Ai has
dimension p(i-1) × pi, fully parenthesize the product A1 A2 ... An
in a way that minimizes the number of scalar multiplications.
Elements of dynamic programming (cont.)
Overlapping subproblems: (cont.)

[Figure: the recursion tree of RECURSIVE-MATRIX-CHAIN(p, 1, 4).
Each node is labeled with its (i, j) parameters; the same subproblems,
e.g. (1,2), (2,3), (3,4), appear many times. The computations
performed in a shaded subtree are replaced by a single table
lookup in MEMOIZED-MATRIX-CHAIN(p, 1, 4).]
Matrix-Chain multiplication (Contd.)

RECURSIVE-MATRIX-CHAIN (p, i, j)
1  if i = j
2      then return 0
3  m[i,j] ← ∞
4  for k ← i to j-1
5      do q ← RECURSIVE-MATRIX-CHAIN(p, i, k)
              + RECURSIVE-MATRIX-CHAIN(p, k+1, j) + p(i-1) pk pj
6         if q < m[i,j]
7            then m[i,j] ← q
8  return m[i,j]
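A memoized top-down version, in the spirit of MEMOIZED-MATRIX-CHAIN, replaces repeated subtrees with a table lookup. A minimal Python sketch (the function names are ours), using 1-based indices as in the pseudocode:

```python
def memoized_matrix_chain(p):
    """Top-down matrix-chain cost with a memo table: same recurrence
    as the recursive version, but each m[i][j] is computed once."""
    n = len(p) - 1
    m = [[None] * (n + 1) for _ in range(n + 1)]

    def lookup(i, j):
        if m[i][j] is not None:        # already calculated: table lookup
            return m[i][j]
        if i == j:
            m[i][j] = 0
        else:
            m[i][j] = min(lookup(i, k) + lookup(k + 1, j)
                          + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
        return m[i][j]

    return lookup(1, n)

# The three-matrix example from earlier: 10x100, 100x5, 5x50.
print(memoized_matrix_chain([10, 100, 5, 50]))  # 7500
```

Memoization cuts the running time from exponential to O(n^3), since each of the O(n^2) table entries is filled once with O(n) work.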
Elements of dynamic programming (cont.)

Overlapping subproblems: (cont.)

We guess that T(n) = Ω(2^n).

Using the substitution method with the inductive hypothesis
T(i) ≥ 2^(i-1):

T(n) ≥ 2 · Σ_{i=1}^{n-1} 2^(i-1) + n
     = 2 · Σ_{i=0}^{n-2} 2^i + n
     = 2 (2^(n-1) − 1) + n
     = 2^n − 2 + n
     ≥ 2^(n-1)
Matrix-Chain multiplication (cont.)

Counting the number of alternative parenthesizations: bn

bn = 1                              if n = 1 (there is only one matrix)
bn = Σ_{k=1}^{n-1} bk b(n-k)        if n ≥ 2

bn = Ω(2^n)
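The recurrence above can be evaluated directly. A minimal Python sketch (the name num_parens is ours):

```python
def num_parens(n):
    """b_n from the recurrence above: the number of alternative
    parenthesizations of a chain of n matrices."""
    b = [0] * (n + 1)
    b[1] = 1
    for m in range(2, n + 1):
        # split between the k-th and (k+1)-th matrix, for each k
        b[m] = sum(b[k] * b[m - k] for k in range(1, m))
    return b[n]

print([num_parens(n) for n in range(1, 7)])  # [1, 1, 2, 5, 14, 42]
```

These are the Catalan numbers (shifted by one), which indeed grow exponentially in n.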
Matrix-Chain multiplication (cont.)

Step 1: The structure of an optimal parenthesization (op)

Find the optimal substructure and then use it to
construct an optimal solution to the problem from
optimal solutions to subproblems.
Let Ai...j, where i ≤ j, denote the matrix product
Ai Ai+1 ... Aj.
Any parenthesization of Ai Ai+1 ... Aj must split the product
between Ak and Ak+1 for some i ≤ k < j.
Matrix-Chain multiplication (cont.)

The optimal substructure of the problem:

Suppose that an op of Ai Ai+1 ... Aj splits the product
between Ak and Ak+1; then the parenthesization of the
subchain Ai Ai+1 ... Ak within this parenthesization of
Ai Ai+1 ... Aj must be an op of Ai Ai+1 ... Ak.
Matrix-Chain multiplication (cont.)

Step 2: A recursive solution:

Let m[i,j] be the minimum number of scalar multiplications
needed to compute the matrix Ai...j, where 1 ≤ i ≤ j ≤ n.
Thus, the cost of a cheapest way to compute A1...n would
be m[1,n].
Assume that the op splits the product Ai...j between Ak and
Ak+1, where i ≤ k < j.
Then m[i,j] = the minimum cost of computing Ai...k and
Ak+1...j, plus the cost of multiplying these two matrices.
Matrix-Chain multiplication (cont.)

Recursive definition for the minimum cost of parenthesization:

m[i,j] = 0                                                     if i = j
m[i,j] = min over i ≤ k < j of { m[i,k] + m[k+1,j] + p(i-1) pk pj }   if i < j
Matrix-Chain multiplication (cont.)

To help us keep track of how to construct an optimal solution,
we define s[i,j] to be a value of k at which we can split the product
Ai...j to obtain an optimal parenthesization.

That is, s[i,j] equals a value k such that

m[i,j] = m[i,k] + m[k+1,j] + p(i-1) pk pj,   i.e. s[i,j] = k.
Matrix-Chain multiplication (cont.)

Step 3: Computing the optimal costs

It is easy to write a recursive algorithm based on the
recurrence for computing m[i,j].

But the running time will be exponential!
Matrix-Chain multiplication (cont.)

Step 3: Computing the optimal costs

We compute the optimal cost by using a tabular, bottom-up
approach.
Matrix-Chain multiplication (Contd.)

MATRIX-CHAIN-ORDER(p)
    n ← length[p] - 1
    for i ← 1 to n
        do m[i,i] ← 0
    for l ← 2 to n
        do for i ← 1 to n-l+1
               do j ← i+l-1
                  m[i,j] ← ∞
                  for k ← i to j-1
                      do q ← m[i,k] + m[k+1,j] + p(i-1) pk pj
                         if q < m[i,j]
                             then m[i,j] ← q
                                  s[i,j] ← k
    return m and s
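A minimal Python sketch of the same bottom-up procedure (the function name is ours), using 1-based tables to match the pseudocode:

```python
def matrix_chain_order(p):
    """Bottom-up MATRIX-CHAIN-ORDER: p holds the n+1 dimensions of
    the chain. Returns the cost table m and split table s (1-indexed)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):            # l = length of the subchain
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float("inf")
            for k in range(i, j):        # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

# Dimensions of the six-matrix example: 30x35, 35x15, 15x5, 5x10, 10x20, 20x25.
m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6])  # 15125
```

The three nested loops give the O(n^3) running time stated at the end of this lecture.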
Matrix-Chain multiplication (cont.)
An example: matrix dimension
A1 30 x 35
A2 35 x 15
A3 15 x 5
A4 5 x 10
A5 10 x 20
A6 20 x 25

m[2,5] = min {
    m[2,2] + m[3,5] + p1 p2 p5 = 0 + 2500 + 35·15·20   = 13000,
    m[2,3] + m[4,5] + p1 p3 p5 = 2625 + 1000 + 35·5·20 = 7125,
    m[2,4] + m[5,5] + p1 p4 p5 = 4375 + 0 + 35·10·20   = 11375
} = 7125
Matrix-Chain multiplication (cont.)

The m and s tables for the six-matrix example (m[i,i] = 0 for all i):

m[1,6]=15125
m[1,5]=11875  m[2,6]=10500
m[1,4]=9375   m[2,5]=7125   m[3,6]=5375
m[1,3]=7875   m[2,4]=4375   m[3,5]=2500  m[4,6]=3500
m[1,2]=15750  m[2,3]=2625   m[3,4]=750   m[4,5]=1000  m[5,6]=5000

s[1,6]=3
s[1,5]=3  s[2,6]=3
s[1,4]=3  s[2,5]=3  s[3,6]=3
s[1,3]=1  s[2,4]=3  s[3,5]=3  s[4,6]=5
s[1,2]=1  s[2,3]=2  s[3,4]=3  s[4,5]=4  s[5,6]=5

A1  A2  A3  A4  A5  A6
Matrix-Chain multiplication (cont.)

Step 4: Constructing an optimal solution

An optimal solution can be constructed from the computed
information stored in the table s[1...n, 1...n].
We know that the final matrix multiplication is

A1...s[1,n] · A(s[1,n]+1)...n

The earlier matrix multiplications can be computed recursively.
Matrix-Chain multiplication (Contd.)

PRINT-OPTIMAL-PARENS (s, i, j)
1  if i = j
2      then print "Ai"
3      else print "("
4           PRINT-OPTIMAL-PARENS (s, i, s[i,j])
5           PRINT-OPTIMAL-PARENS (s, s[i,j]+1, j)
6           print ")"
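Combining the table construction with this printing routine, a self-contained Python sketch (the function names are ours) recovers the optimal parenthesization of the six-matrix example:

```python
def optimal_parens(p):
    """Compute the split table s bottom-up, then build the optimal
    parenthesization string the way PRINT-OPTIMAL-PARENS does."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k

    def parens(i, j):
        if i == j:
            return "A%d" % i                 # single matrix: no parens
        return "(" + parens(i, s[i][j]) + parens(s[i][j] + 1, j) + ")"

    return parens(1, n)

print(optimal_parens([30, 35, 15, 5, 10, 20, 25]))
# ((A1(A2A3))((A4A5)A6))
```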
Matrix-Chain multiplication (Contd.)

RUNNING TIME:

The recursive solution takes exponential time.

MATRIX-CHAIN-ORDER yields a running time of O(n^3).
