
Chapter 13 - Dynamic Programming
BASICS OF DYNAMIC PROGRAMMING
Useful for solving multistage optimization problems.
An optimization problem deals with the maximization or minimization
of the objective function as per problem requirements.
In multistage optimization problems, decisions are made at successive
stages to obtain a global solution for the given problem.
Dynamic programming divides a problem into subproblems and
establishes a recursive relationship between the original problem and
its subproblems.
BASICS OF DYNAMIC PROGRAMMING
A subproblem representing a small part of the original problem is
solved to obtain the optimal solution.
Then the scope of this subproblem is enlarged to find the optimal
solution for a new subproblem, and this enlargement continues until the subproblem
encompasses the original problem.


After that, a solution for the whole problem is obtained by combining
the optimal solutions of its subproblems.
Dynamic programming vs divide-and-conquer
approaches
The difference between dynamic programming and the top-down or divide-and-conquer approach is that the
subproblems overlap in the dynamic programming approach, whereas in the divide-and-conquer
approach the subproblems are independent.
Dynamic programming approach vs greedy
approach
Dynamic programming can also be compared with the greedy approach.
Unlike dynamic programming, the greedy approach fails on many
problems.
Consider the shortest-path problem for the distance between s and t in
the graph shown in the figure. A greedy algorithm such as Dijkstra's would take
the route from s to vertex 2, as its path length is shorter than that of the
route to vertex 1.
After that, the path length increases, resulting in a total path length of
1002.
However, dynamic programming would not make such a mistake, as it
treats the problem in stages.
For this problem, there are three stages with vertices {s}, {1, 2}, and {t}.
The problem would find the shortest path from stage 2, {1, 2}, to {t} first,
then calculate the final path. Hence, dynamic programming would result
in a path from s to 1 and from 1 to t.
Dynamic programming problems, therefore, yield a globally optimal
result compared to the greedy approach, whose solutions are just locally
optimal.
Dynamic programming approach vs greedy approach

Advantages and Disadvantages of dynamic programming


Components of Dynamic Programming
The guiding principle of dynamic programming is the principle of optimality. In
simple words, a given problem is split into subproblems.
Then, this principle helps us solve each subproblem optimally, ultimately leading
to the optimal solution of the given problem. It states that an optimal sequence
of decisions in a multistage decision problem is feasible if its sub-sequences are
optimal.

In other words, an optimal policy (a sequence of decisions) has the property that, whatever the initial state and
initial decision are, the remaining decisions must constitute an optimal policy
with regard to the state resulting from the first decision.

The solution to a problem can be achieved by breaking the problem into
subproblems. The subproblems can then be solved optimally. If so, the original
problem can be solved optimally by combining the optimal solutions of its
subproblems.
Components of Dynamic Programming
Dynamic programs possess two essential properties:
overlapping subproblems
optimal substructures.
Overlapping subproblems One of the main characteristics of dynamic
programming is to split the problem into subproblems, similar to the
divide-and-conquer approach. The sub-problems are further divided into
smaller problems. However, unlike divide and conquer, here, many
subproblems overlap and cannot be treated distinctly.
This feature is the primary characteristic of dynamic programming. There
are two ways of handling this overlapping problem:
The memoization technique
The tabulation method.
overlapping subproblems
Consider the Fibonacci sequence as an example. The following recurrence equation generates this sequence:
F(n) = F(n - 1) + F(n - 2), with F(0) = 0 and F(1) = 1
A straightforward pseudo-code for implementing this recursive
equation is as follows:
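In Python, such a direct recursive implementation might look like this (a sketch; the function name is illustrative):

# Naive recursive Fibonacci: the same subproblems are recomputed many times.
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)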
Consider a Fibonacci recurrence tree for n = 5, as shown in Fig. 13.2.
It can be observed from Fig. 13.2 that there are multiple overlapping
subproblems.
As n becomes large, the number of subproblems increases exponentially, and
repeated calculations make the algorithm inefficient. In general, to
compute Fibonacci(n), one requires two terms:
Fibonacci (n-1) and Fibonacci (n-2)
Even for computing Fibonacci(50), one must have all the previous 49
terms, which is tedious.
This exponential complexity of Fibonacci computation is because many
of the subproblems are repetitive, and hence there are chances of
multiple recomputations.
The complexity analysis of this algorithm yields O(φ^n). Here, φ
is called the golden ratio, whose value is approximately 1.618.
Optimal substructure
The optimal solution of a problem can be expressed in terms of optimal
solutions to its subproblems.
In dynamic programming, a problem can be divided into subproblems.
Then, each subproblem can be solved optimally. If so, the optimal
solutions of the subproblems can be combined to obtain the optimal solution of the
original problem.
If the solution to the original problem proceeds in stages, then the decision taken
at the current stage depends on the decisions taken in the previous stages.
Therefore, there is no need to consider all possible decisions and their
consequences, as the optimal solution of the given problem is built on the
optimal solutions of its subproblems.
Optimal substructure
Consider the graph shown in Fig. 13.3. Consider the best route to visit the
city v from city u.
This problem can be broken into the problems of finding a route from u to x
and from x to v. If <u, x> is the optimal route from u to x, and <x, v> is the best
route from x to v, then the best route from u to v can be built from
the optimal solutions of these two subproblems. In other words, the route
constructed from the optimal routes of the subproblems must also be optimal.
The general template for solving a problem using dynamic programming looks like this:
Step 1: The given problem is divided into many subproblems, as in the case
of the divide-and-conquer strategy. In divide and conquer, the subproblems
are independent of each other; however, in the dynamic programming
case, the subproblems are not independent of each other, but they are
interrelated and hence are overlapping subproblems.
Optimal substructure
Step 2: A table is created to avoid the repeated recomputation of
overlapping subproblems. Whenever a
subproblem is solved, its solution is stored in the table so that it
can be reused.
Step 3: The solutions of the subproblems are combined in a bottom-up
manner to obtain the final solution of the given problem.
The essential steps are thus:
1. Breaking the problem into its subproblems.
2. Creating a lookup table that contains the solutions of the subproblems.
3. Reusing the solutions of the subproblems stored in the table to construct
the solution of the given problem.
Optimal substructure
Dynamic programming uses the lookup table in both a top-down manner
and a bottom-up manner. The top-down approach is called the
memoization technique, and the bottom-up approach is called the tabulation
method.
1. Memoization technique: This method looks into a table to check whether
the required value has already been computed. If so, the previously
computed value is reused. In other words, computation follows a top-down
method similar to the recursion approach.
2. Tabulation method: Here, the problem is solved from scratch. The
smallest subproblem is solved, and its value is stored in the table. Its value is
used later for solving larger problems. In other words, computation follows a
bottom-up method.
FIBONACCI PROBLEM

The Fibonacci sequence is 0, 1, 1, 2, 3, 5, 8, ... The following recurrence equation generates this sequence:
F(n) = F(n - 1) + F(n - 2), with F(0) = 0 and F(1) = 1

The best way to solve this problem is to use the dynamic programming
approach, in which the results of the intermediate problems are stored.
Consequently, results of the previously computed subproblems can be used
instead of recomputing the subproblems repeatedly. Thus, a subproblem is
computed only once, and the exponential algorithm is reduced to a
polynomial algorithm.
A table is created to store the intermediate results, and
its values are reused.
Bottom-up approach
An iterative loop can be used to modify this Fibonacci number computation
effectively, and the resulting dynamic programming algorithm can be used to
compute the nth Fibonacci number.

Step 1: Read a positive integer n.


Step 2: Initialize first to 0.
Step 3: Initialize second to 1.
Step 4: Repeat Steps 4a to 4c (n - 1) times:
4a: Compute current as first + second.
4b: Update first = second.
4c: Update second = current.
Step 5: Return(current).
Step 6: End.
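A short Python sketch of this bottom-up computation (the function name is illustrative):

# Bottom-up (tabulation-style) Fibonacci following the steps above.
def fibonacci_bottom_up(n):
    if n <= 1:
        return n
    first, second = 0, 1           # Steps 2 and 3: F(0) and F(1)
    for _ in range(2, n + 1):      # Step 4: repeat 4a-4c
        current = first + second   # 4a
        first = second             # 4b
        second = current           # 4c
    return current                 # Step 5

Only the last two values are kept, so a full table of n entries is not even needed here.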
Fibonacci Algorithm using dynamic programming
Top-down approach and Memoization
Dynamic programming can use the top-down approach for populating and
manipulating the table.
Step 1: Read a positive integer n
Step 2: Create a table A with n entries
Step 3: Initialize every entry of A to a sentinel value indicating that it is not yet computed.
Step 4: For n > 1, if A[n] has not been computed,
4a: compute recursively
mem_Fib(A, n) = mem_Fib(A, n - 1) + mem_Fib(A, n - 2) and store the result in A[n]
else
return the stored table value A[n]
Step 5: Return(mem_Fibonacci(A, n))
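A Python sketch of the memoization technique, assuming the table uses -1 as the "not yet computed" sentinel (names are illustrative):

# Top-down Fibonacci with memoization: table A caches computed values.
def mem_fibonacci(n, A=None):
    if A is None:
        A = [-1] * (n + 1)         # table for entries 0..n, all marked "not computed"
    if n <= 1:
        return n
    if A[n] != -1:                 # value already in the table: reuse it
        return A[n]
    A[n] = mem_fibonacci(n - 1, A) + mem_fibonacci(n - 2, A)   # Step 4a
    return A[n]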
Memoization technique algorithm
MULTISTAGE GRAPH PROBLEM (stagecoach
problem)
The multistage graph problem is the problem of finding the shortest path from the source to the destination, where the vertices are partitioned into stages and every edge connects two nodes from consecutive partitions. The initial node is called the source (it has no indegree), and the last node is called the sink (it has no outdegree).
Forward computation procedure
Step 1: Read the directed graph G = <V, E> with k stages.
Step 2: Let n be the number of nodes, and let cost[] and dist[] be arrays of size n.
Step 3: Set the cost of the sink to zero.
Step 4: For each remaining vertex j, proceeding from the stage nearest the sink back towards the source:
4a: Find a vertex v from the next stage such that c(j, v) + cost(v) is minimum, where c(j, v) is the cost of the edge connecting the current
stage and the next stage.
4b: Update cost[j] with this minimum and store v in another array dist[].
Step 5: Return cost.
Step 6: End.
The shortest path can finally be reconstructed by tracking the dist[] array.
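A Python sketch of this forward computation, assuming the vertices are numbered 0 (source) to n - 1 (sink) so that every edge goes from a lower-numbered to a higher-numbered vertex, and the graph is given as a dictionary mapping each vertex to its outgoing edges and costs (all names are illustrative):

# Forward approach for the multistage graph shortest-path problem.
def multistage_forward(n, edges):
    INF = float("inf")
    cost = [INF] * n
    dist = [0] * n                   # dist[j]: next vertex on the best path from j
    cost[n - 1] = 0                  # cost of the sink is zero
    for j in range(n - 2, -1, -1):   # proceed stage by stage back towards the source
        for v, w in edges.get(j, {}).items():
            if w + cost[v] < cost[j]:    # pick v minimizing c(j, v) + cost(v)
                cost[j] = w + cost[v]
                dist[j] = v
    path, v = [0], 0                 # reconstruct the path by tracking dist[]
    while v != n - 1:
        v = dist[v]
        path.append(v)
    return cost[0], path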
MULTISTAGE GRAPH Algorithm
MULTISTAGE GRAPH Algorithm
Consider the graph shown in Fig. 13.7. Find the
shortest path from the source to the sink using
the forward approach of dynamic programming.
Backward computation procedure
Consider the graph shown in Fig. 13.8. Find the shortest path from
the source to the sink using the backward approach of dynamic
programming.
Unlike the forward approach, the computation starts from stage 1 to
stage 5. The cost of the edges that connect Stage 2 to Stage 1 is
calculated.
Floyd-Warshall all-pairs shortest algorithm
The Floyd-Warshall algorithm finds the shortest path between every pair of nodes. The input to
the algorithm is the adjacency (cost) matrix of the graph G. The distance matrix D[i, j]
is computed as follows:
Step 1: Read the weighted graph G = <V, E>.
Step 2: Initialize D[i, j] as follows: D[i, j] = 0 if i = j; D[i, j] = c(i, j) if there is an edge from i to j; and D[i, j] = ∞ otherwise.
Step 3: For each intermediate node k, recursively compute
D[i, j] = min{ D[i, j], D[i, k] + D[k, j] }
Step 4: Return matrix D.


Step 5: End.
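A Python sketch of this computation, assuming D is an n × n matrix already initialized as in Step 2 (0 on the diagonal, edge costs where edges exist, and float("inf") elsewhere):

# Floyd-Warshall all-pairs shortest paths; D is updated in place.
def floyd_warshall(D):
    n = len(D)
    for k in range(n):                       # Step 3: intermediate vertex k
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D                                 # Step 4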
Floyd-Warshall all-pairs shortest algorithm
Floyd-Warshall all-pairs shortest algorithm
Floyd-Warshall all-pairs shortest algorithm
Apply the Warshall algorithm and find the transitive closure for the graph shown in Fig. 13.10.
Consider the graph shown in Fig. 13.11 and find the shortest path using the Floyd-Warshall algorithm.
Bellman-Ford Algorithm
Bellman-Ford Algorithm
Bellman-Ford Algorithm

Step 1: Read the weighted graph G = <V, E> and label the source vertex as 0.
Step 2: Label all vertices except the source as ∞.
Step 3: Repeat n - 1 times, where n is the number of vertices:
3a: If the label of v is larger than the label of u + cost of the edge (u, v), then
relax the edge (u, v).
3b: Update the label of v as the label of u + cost of the edge (u, v).
Step 4: Check for the presence of a negative-weight cycle by repeating the
iteration once more; if any edge still relaxes, report the presence of a negative-weight cycle.
Step 5: Exit
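A Python sketch of these steps, assuming the graph is given as a list of (u, v, w) edge triples and vertices are numbered 0 to n - 1 (names are illustrative):

# Bellman-Ford single-source shortest paths with negative-cycle detection.
def bellman_ford(n, edges, source):
    INF = float("inf")
    label = [INF] * n                # Step 2: all labels are infinity...
    label[source] = 0                # ...except the source
    for _ in range(n - 1):           # Step 3: repeat n - 1 times
        for u, v, w in edges:
            if label[u] + w < label[v]:      # 3a: edge (u, v) can be relaxed
                label[v] = label[u] + w      # 3b: update the label of v
    for u, v, w in edges:            # Step 4: one more pass over the edges
        if label[u] + w < label[v]:
            raise ValueError("negative weight cycle detected")
    return label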
Bellman-Ford Algorithm
Apply the Bellman-Ford algorithm to the graph shown in Fig. 13.14.
Travelling salesman problem
Step 1: Read the weighted graph G = <V, E>.
Step 2: Initialize d[i, j] as follows: d[i, j] = c(i, j) if there is an edge between i and j, and d[i, j] = ∞ otherwise.
Step 3: Compute a function g(i, S), which gives the length of the shortest path starting from
vertex i, traversing through all the vertices in set S, and terminating at the start vertex 1, as follows:
3a: Set g(i, ∅) = d[i, 1]. For a non-empty set S,
compute recursively
g(i, S) = min over j in S of { d[i, j] + g(j, S - {j}) }
and store the value.
Step 4: Compute the minimum cost of the travelling salesperson tour as g(1, V - {1}), using the values
computed in Step 3.
Step 5: Return the value of Step 4.
Step 6: End.
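A Python sketch of this computation (the Held-Karp recurrence), assuming vertices are numbered 0 to n - 1 with the tour starting and ending at vertex 0, and subsets S encoded as bitmasks (names are illustrative):

# Dynamic programming for the travelling salesman problem.
from functools import lru_cache

def tsp(d):
    n = len(d)

    @lru_cache(maxsize=None)
    def g(i, S):
        # shortest path from i through every vertex in S and back to vertex 0
        if S == 0:
            return d[i][0]                       # base case: g(i, empty set)
        best = float("inf")
        for j in range(1, n):
            if S & (1 << j):                     # vertex j belongs to S
                best = min(best, d[i][j] + g(j, S & ~(1 << j)))
        return best

    full = (1 << n) - 2                          # every vertex except vertex 0
    return g(0, full)                            # Step 4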
Travelling salesman problem
Solve the TSP for the graph shown in Fig. 13.17 using dynamic programming.
cost(2, ∅) = d[2, 1] = 2; cost(3, ∅) = d[3, 1] = 4; cost(4, ∅) = d[4, 1] = 6
This indicates the distance from vertices 2, 3, and 4 to vertex 1. When |S| = 1, the cost(i, S) function
can be solved using the recursive relation as follows:

cost(2, {3}) = d[2, 3] + cost(3, ∅) = 5 + 4 = 9
cost(2, {4}) = d[2, 4] + cost(4, ∅) = 7 + 6 = 13
cost(3, {2}) = d[3, 2] + cost(2, ∅) = 3 + 2 = 5
cost(3, {4}) = d[3, 4] + cost(4, ∅) = 8 + 6 = 14
cost(4, {2}) = d[4, 2] + cost(2, ∅) = 5 + 2 = 7
cost(4, {3}) = d[4, 3] + cost(3, ∅) = 9 + 4 = 13


Now, cost(i, S) is computed with |S| = 2, where i ∉ S and i ≠ 1; that is, set S involves two intermediate nodes.

cost(2,{3,4}) = min{d[2, 3] + cost(3, {4}), d[2, 4] + cost(4, {3})} = min{5 + 14 , 7 + 13} = min{19, 20} = 19

cost(3,{2,4}) = min{d[3, 2] + cost(2, {4}), d[3, 4] + cost(4, {2})} = min{3 + 13, 8 + 7} = min{16, 15} = 15

cost(4,{2,3}) = min{d[4, 2] + cost(2, {3}), d[4, 3] + cost(3, {2})} = min{5 + 9, 9 + 5} = min{14, 14} = 14

Finally, the total cost of the tour, which involves three intermediate nodes (|S| = 3), is calculated. As |S| = 3,
cost(1, {2, 3, 4}) is computed as follows:

cost(1,{2, 3, 4}) = min{d[1, 2] + cost(2,{3, 4}), d[1, 3] + cost(3, {2, 4}),d[1, 4] + cost(4, {2, 3})} = min{5 + 19, 3 +
15, 10 + 14} = min {24,18, 24} = 18

Hence, the minimum cost of the tour is 18. The minimum in the last step is attained at vertex 3, so P(1, {2, 3, 4}) = 3.
Tracing back through the stored decisions gives P(3, {2, 4}) = 4 and P(4, {2}) = 2; thus, the optimal tour is 1 → 3 → 4 → 2 → 1.
Chain Matrix Multiplication
Step 1: Read the chain of n matrices A1, A2, ..., An with dimensions p[0], p[1], ..., p[n], where matrix Ai has order p[i - 1] × p[i].
Step 2: Let i and j represent the start and end matrices, and let M[i, j]
represent the minimum number of scalar multiplications required to compute the product Ai ... Aj.
Step 3: Compute recursively.
3a: If (i = j), then M[i, j] = 0
else
if (i < j), then
split the product Ai ... Aj as (Ai ... Ak) × (Ak+1 ... Aj), where i ≤ k < j, and
compute M[i, j] = min over i ≤ k < j of { M[i, k] + M[k + 1, j] + p[i - 1] × p[k] × p[j] }
Step 4: Return M[1, n] as the minimum cost of chain matrix multiplication.
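A Python sketch of this procedure, assuming the dimensions are supplied as a list p = [p0, p1, ..., pn] so that matrix Ai has order p[i - 1] × p[i] (names are illustrative):

# Chain matrix multiplication: M[i][j] is the minimum number of scalar
# multiplications for A_i ... A_j; R[i][j] records the best split point k.
def matrix_chain_order(p):
    n = len(p) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]
    R = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):               # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            M[i][j] = float("inf")
            for k in range(i, j):                # split as (A_i..A_k)(A_k+1..A_j)
                cost = M[i][k] + M[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < M[i][j]:
                    M[i][j], R[i][j] = cost, k
    return M[1][n], R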
Chain Matrix Multiplication
The extraction of the optimal order is explained as follows:
Step 1: Read the trace matrix R, whose entry R[i, j] stores the split point k yielding the minimum
cost.
Step 2: Recursively extract the order for the range (i, j) as follows:
If (i < j), then
k = R[i, j]
return the parenthesization (order for (i, k)) × (order for (k + 1, j))
else
return matrix Ai
Step 3: End.
Chain Matrix Multiplication
Chain Matrix Multiplication
Consider the following four matrices whose orders are given and
perform chain matrix multiplication using the dynamic programming
approach:
Knapsack Problem
Step 1: Let n be the number of objects.
Step 2: Let W be the capacity of the knapsack.
Step 3: Construct the matrix V[i, j] that stores the maximum profit attainable. Index i
tracks the items considered for the knapsack (i.e., 1 to n), and index j tracks the allowed weight
(i.e., 0 to W).
Step 4: Initialize V[0, j] = 0 for all j and V[i, 0] = 0 for all i, corresponding to an empty
knapsack.
Step 5: Recursively compute the following steps:
5a: Leave the object i if its weight wi > j. This leaves the knapsack
with profit V[i, j] = V[i - 1, j].
5b: Otherwise (wi ≤ j), consider adding item i. In this case, the addition of the
item results in a profit V[i, j] = max{V[i - 1, j], pi + V[i - 1, j - wi]}, where pi is the
profit of the current item.
Step 6: Return the maximum profit of adding feasible items to the knapsack, V[n, W].
Step 7: End.
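A Python sketch of this tabulation, assuming the items are given as parallel lists of weights and profits and W is the knapsack capacity (names are illustrative):

# 0/1 knapsack by dynamic programming; V[i][j] is the best profit using
# the first i items with capacity j.
def knapsack(weights, profits, W):
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]    # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if weights[i - 1] > j:               # Step 5a: item i does not fit
                V[i][j] = V[i - 1][j]
            else:                                # Step 5b: leave it or add it
                V[i][j] = max(V[i - 1][j],
                              profits[i - 1] + V[i - 1][j - weights[i - 1]])
    return V[n][W]                               # Step 6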
Knapsack Problem
Knapsack Problem
Apply the dynamic programming algorithm to the instance of the
knapsack problem shown in Table 13.28. Assume that the knapsack
capacity is w = 3.
Computation of first row:
V[1, 2] = max{V[0, 2], 1 + V[0, 1]} = 1
V[1, 3] = max{V[0, 3], 1 + V[0, 2]} = 1
Computation of second row:
V[2, 1] = 1
V[2, 2] = max{V[1, 2], 6 + V[1, 0]} = 6
V[2, 3] = max{V[1, 3], 6 + V[1, 1]} = 7
Computation of third row
V[3, 1] = V[2, 1] = 1
V[3, 2] = V[2, 2] = 6
V[3, 3] = V[2, 3] = 7
Optimal Binary Search Trees
1. Each node of a BST has one key.
2. All the keys in the left subtree of the root are less than or equal to the key
stored in the root.
3. All the keys in the right subtree of the root are greater than or equal to the
key stored in the root.
Optimal Binary Search Trees
It can be observed that the number of possible resulting trees is a Catalan number. As discussed earlier in Chapter 6, the nth Catalan number Cn is given as follows:
Cn = (1 / (n + 1)) × C(2n, n)
This expression can be checked for, say, three nodes: C3 = (1/4) × C(6, 3) = 5.
Dynamic programming approach for
constructing optimal BST
Let k1, k2, ..., kn be the distinct keys of a BST. Let us assume these keys
are arranged in ascending order, and let p1, p2, ..., pn be the probabilities of
searching for these items. The idea is to construct a table C, where each entry C[i, j]
represents the cost, that is, the average number of comparisons required
for a successful search in the BST built on keys ki, ..., kj.
Step 1: Read n keys with probabilities pi.
Step 2: Create the table C[i, j] for 1 ≤ i ≤ j ≤ n.
Step 3: Set C[i, i] = pi and C[i, i - 1] = 0 for all i in [n].
Step 4: Recursively compute the following relation, for all i and j:
C[i, j] = min over i ≤ k ≤ j of { C[i, k - 1] + C[k + 1, j] } + (pi + ... + pj)
Step 5: Return C[1, n].
Step 6: End.
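A Python sketch of this table construction, assuming the probabilities are given as a list p with p[1], ..., p[n] for the keys in sorted order (p[0] is unused; names are illustrative):

# Optimal BST: C[i][j] is the minimum average search cost for keys i..j,
# and root[i][j] records the key chosen as the root of that subtree.
def optimal_bst(p):
    n = len(p) - 1
    C = [[0.0] * (n + 2) for _ in range(n + 2)]      # C[i][i - 1] = 0 boundaries
    root = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]
        root[i][i] = i
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            weight = sum(p[i:j + 1])                 # p_i + ... + p_j
            best, best_k = float("inf"), i
            for k in range(i, j + 1):                # try each key k as the root
                cost = C[i][k - 1] + C[k + 1][j] + weight
                if cost < best:
                    best, best_k = cost, k
            C[i][j], root[i][j] = best, best_k
    return C[1][n], root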
BST Algorithm
Using the dynamic programming approach, construct an optimal BST
for the following keys:
Key 1: Danny, p1 = 2/7
Key 2: Ian, p2 = 1/7
Key 3: Radha, p3 = 3/7
Key 4: Zee, p4 = 1/7
C[1, 1] = p1 = 2/7
C[2, 2] = p2 = 1/7
C[3, 3] = p3 = 3/7
C[4, 4] = p4 = 1/7
C[1, 2] can be computed as follows (here, i = 1; j = 2; k = 1, 2):
C[1, 2] = min{C[1, 0] + C[2, 2] + (p1 + p2), C[1, 1] + C[3, 2] + (p1 + p2)}
= min{0 + 1/7 + 3/7, 2/7 + 0 + 3/7} = min{4/7, 5/7} = 4/7
The minimum is when k = 1.
C[2, 3] can be computed as follows (here, i = 2; j = 3; k = 2, 3):
C[2, 3] = min{C[2, 1] + C[3, 3] + (p2 + p3), C[2, 2] + C[4, 3] + (p2 + p3)}
= min{0 + 3/7 + (1/7 + 3/7), 1/7 + 0 + (1/7 + 3/7)} = min{7/7, 5/7} = 5/7
The minimum is when k = 3.
C[3, 4] can be computed as follows (i = 3; j = 4; k = 3, 4):
C[3, 4] = min{C[3, 2] + C[4, 4] + (p3 + p4), C[3, 3] + C[5, 4] + (p3 + p4)}
= min{0 + 1/7 + (3/7 + 1/7), 3/7 + 0 + (3/7 + 1/7)} = min{5/7, 7/7} = 5/7
The minimum is when k = 3.
C[1, 3] can be computed as follows (i = 1; j = 3; k = 1, 2, 3):
C[1, 3] = min{C[1, 0] + C[2, 3] + (p1 + p2 + p3), C[1, 1] + C[3, 3] + (p1 + p2 + p3), C[1, 2] + C[4, 3] + (p1 + p2 + p3)}
= min{0 + 5/7 + (2/7 + 1/7 + 3/7), 2/7 + 3/7 + (2/7 + 1/7 + 3/7), 4/7 + 0 + (2/7 + 1/7 + 3/7)}
= min{11/7, 11/7, 10/7} = 10/7
The minimum is when k = 3.
Similarly, C[2, 4] can be computed as follows (i = 2; j = 4; k = 2, 3, 4):
C[2, 4] = min{C[2, 1] + C[3, 4] + (p2 + p3 + p4), C[2, 2] + C[4, 4] + (p2 + p3 + p4), C[2, 3] + C[5, 4] + (p2 + p3 + p4)}
= min{0 + 5/7 + (1/7 + 3/7 + 1/7), 1/7 + 1/7 + (1/7 + 3/7 + 1/7), 5/7 + 0 + (1/7 + 3/7 + 1/7)}
= min{10/7, 7/7, 10/7} = 7/7
The minimum is when k = 3.
Finally, the last entry C[1, 4] is computed with i = 1; j = 4; k = 1, 2, 3, 4:
C[1, 4] = min{C[1, 0] + C[2, 4] + (7/7), C[1, 1] + C[3, 4] + (7/7), C[1, 2] + C[4, 4] + (7/7), C[1, 3] + C[5, 4] + (7/7)}
= min{0 + 7/7 + 7/7, 2/7 + 5/7 + 7/7, 4/7 + 1/7 + 7/7, 10/7 + 0 + 7/7}
= min{14/7, 14/7, 12/7, 17/7} = 12/7
The minimum is when k = 3.
It can be observed that the minimum cost is 12/7. The same value is obtained by weighting the probability of each key by its depth in the resulting tree:
1 × p3 + 2 × (p1 + p4) + 3 × p2 = 3/7 + 6/7 + 3/7 = 12/7
FLOW-SHOP SCHEDULING PROBLEM
Single-machine Sequencing Problem
Finishing time or completion time: The finishing time fi(s) of job i is the
time by which all the tasks of job i are completed in schedule s. The finishing time F(s)
of the schedule s is defined as the maximum finishing time over all the jobs:
F(s) = max{ fi(s) : 1 ≤ i ≤ n }
Mean flow time: The mean flow time of a schedule s is the average of the finishing times of the jobs:
MFT(s) = (1/n) × Σ fi(s)
FLOW-SHOP SCHEDULING PROBLEM
Two-machine Sequencing Problem
Step 1: Find the smallest processing time among all the values Ai and Bi.
Step 2: If the smallest value is some Ai, then schedule job i as early as possible (it is processed on machine A first).
If the smallest value is some Bi, then schedule job i as late as possible (at the end of the sequence).
In case of a tie, that is, Ai = Bi, choose either of the jobs.
Step 3: Remove the scheduled job and repeat Steps 1 and 2 till all jobs are scheduled.
Step 4: Calculate the minimum elapsed time and exit.
The complexity of this algorithm is O(n log n).
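A compact Python sketch of this rule (an equivalent restatement of Steps 1 to 4; names are illustrative): jobs whose smaller time is on machine A are placed at the front in increasing order of Ai, and the remaining jobs at the back in decreasing order of Bi.

# Johnson's rule for the two-machine flow-shop sequencing problem.
def johnson_order(jobs):
    # jobs: list of (A_i, B_i) processing times; returns job indices in
    # the order in which they should be processed.
    front = sorted((i for i, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda i: jobs[i][0])               # increasing A_i
    back = sorted((i for i, (a, b) in enumerate(jobs) if a > b),
                  key=lambda i: jobs[i][1], reverse=True)  # decreasing B_i
    return front + back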
Let there be five jobs. All these jobs have to go through machines A
and B in the order A followed by B. The processing times of the jobs
are given in Table 13.41. Find the optimal order for these
tasks using the Johnson-Bellman algorithm.
