
Design and Analysis of Algorithms

Unit V - Dynamic Programming


Dynamic programming
• Dynamic programming is an algorithm design technique invented by a prominent U.S. mathematician, Richard Bellman, in the 1950s as a general method for optimizing multistage decision processes.
• The word “programming” in the name of this technique stands for “planning” and does not refer to computer programming.
• Dynamic programming is a technique for solving problems with overlapping subproblems.
• Rather than solving overlapping subproblems again and again, dynamic programming suggests solving each of the smaller subproblems only once and recording the results in a table from which a solution to the original problem can then be obtained.
Knapsack Problem
Given n items of known weights w1, . . . , wn and values v1, . . . ,
vn and a knapsack of capacity W, find the most valuable subset
of the items that fit into the knapsack.

• To design a dynamic programming algorithm, we need to derive a recurrence relation that expresses a solution to an instance of the knapsack problem in terms of solutions to its smaller subinstances.

• Let us consider an instance defined by the first i items, 1 ≤ i ≤ n, with weights w1, . . . , wi, values v1, . . . , vi, and knapsack capacity j, 1 ≤ j ≤ W.

• Let F(i, j) be the value of an optimal solution to this instance, i.e., the value of the most valuable subset of the first i items that fit into the knapsack of capacity j.

• We can divide all the subsets of the first i items that fit the knapsack of capacity j into two categories: those that do not include the ith item and those that do.
Knapsack Problem

1. Among the subsets that do not include the ith item, the value of an optimal subset is, by definition, F(i − 1, j).

2. Among the subsets that do include the ith item (hence, wi ≤ j), an optimal subset is made up of this item and an optimal subset of the first i − 1 items that fits into the knapsack of capacity j − wi. The value of such an optimal subset is vi + F(i − 1, j − wi).

Therefore, the value of an optimal solution among all feasible subsets of the first i items is

F(i, j) = max{vi + F(i − 1, j − wi), F(i − 1, j)} if wi ≤ j
F(i, j) = F(i − 1, j) if wi > j

with the initial conditions

F(1, j) = v1 if w1 ≤ j
F(1, j) = 0 if w1 > j
Knapsack Problem: consider the instance given by the following data:

Solution

F(i, j) = max{vi + F(i − 1, j − wi), F(i − 1, j)} if wi ≤ j
F(i, j) = F(i − 1, j) if wi > j
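Since the instance data from the slide is not reproduced above, here is a minimal bottom-up Python sketch of this recurrence, run on a small hypothetical instance (the weights, values, and capacity below are illustrative only):

def knapsack_bottom_up(weights, values, capacity):
    # F[i][j] = value of the most valuable subset of the first i
    # items that fits into a knapsack of capacity j
    n = len(weights)
    F = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, capacity + 1):
            if weights[i - 1] <= j:   # item i fits: take the better choice
                F[i][j] = max(values[i - 1] + F[i - 1][j - weights[i - 1]],
                              F[i - 1][j])
            else:                     # item i does not fit
                F[i][j] = F[i - 1][j]
    return F[n][capacity]

# Hypothetical instance: 4 items, capacity 5
print(knapsack_bottom_up([2, 1, 3, 2], [12, 10, 20, 15], 5))  # prints 37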
Memory Functions
• The direct top-down approach to finding a solution to such a recurrence leads to an algorithm that solves common subproblems more than once and hence is very inefficient.

• The classic dynamic programming approach, on the other hand, works bottom up: it fills a table with solutions to all smaller subproblems, but each of them is solved only once.

• An unsatisfying aspect of this approach is that solutions to some of these smaller subproblems are often not necessary for getting a solution to the problem given.

• Since this drawback is not present in the top-down approach, it is natural to try to combine the strengths of the top-down and bottom-up approaches. The goal is to get a method that solves only subproblems that are necessary and does so only once. Such a method exists; it is based on using memory functions.
• This method solves a given problem in the top-down manner but, in addition, maintains a table of the kind that would have been used by a bottom-up dynamic programming algorithm.

• Initially, all the table’s entries are initialized with a special “null” symbol to indicate that they have not yet been calculated.

• Thereafter, whenever a new value needs to be calculated, the method checks the corresponding entry in the table first: if this entry is not “null,” it is simply retrieved from the table; otherwise, it is computed by the recursive call whose result is then recorded in the table.
Using Memory Functions: Knapsack Problem
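The worked example on this slide is not reproduced in this export; as a stand-in, a minimal Python sketch of the memory-function scheme just described (a table initialized with a “null” value, here -1, and filled only on demand) might look like this:

def knapsack_memo(weights, values, capacity):
    # Table entries start as -1 ("null"); each needed entry is
    # computed exactly once and then retrieved on later calls.
    n = len(weights)
    F = [[-1] * (capacity + 1) for _ in range(n + 1)]

    def mf(i, j):
        if i == 0 or j == 0:      # no items or no remaining capacity
            return 0
        if F[i][j] != -1:         # already computed: retrieve it
            return F[i][j]
        if weights[i - 1] > j:    # item i does not fit
            F[i][j] = mf(i - 1, j)
        else:
            F[i][j] = max(values[i - 1] + mf(i - 1, j - weights[i - 1]),
                          mf(i - 1, j))
        return F[i][j]

    return mf(n, capacity)

Only the subproblems actually reached by the recursion are solved, which is exactly the advantage over the bottom-up table fill.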
Warshall’s Algorithm: computes the transitive closure of a directed graph

• The adjacency matrix A = {aij} of a directed graph is the boolean matrix that has 1 in its ith row and jth column if and only if there is a directed edge from the ith vertex to the jth vertex.

• The transitive closure of the digraph is the matrix containing the information about the existence of directed paths of arbitrary lengths between vertices of a given graph. It would allow us to determine in constant time whether the jth vertex is reachable from the ith vertex.

• Definition: The transitive closure of a directed graph with n vertices can be defined as the n × n boolean matrix T, in which the element in the ith row and the jth column, tij, is 1 if there exists a nontrivial path (i.e., a directed path of positive length) from the ith vertex to the jth vertex; otherwise, tij is 0.
Using DFS to generate the transitive closure

The transitive closure of a digraph can be generated with the help of depth-first search or breadth-first search. Performing either traversal starting at the ith vertex gives the information about the vertices reachable from it and hence the columns that contain 1’s in the ith row of the transitive closure. Thus, doing such a traversal for every vertex as a starting point yields the transitive closure in its entirety. The drawback is that this method traverses the same digraph several times.
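A rough Python sketch of this traversal-based method (the function names and the adjacency-matrix representation are illustrative) makes the repeated traversals explicit:

def transitive_closure_dfs(A):
    # One DFS per starting vertex: row i of T marks every vertex
    # reachable from vertex i by a path of positive length.
    n = len(A)
    T = [[0] * n for _ in range(n)]

    def dfs(start, v):
        for u in range(n):
            if A[v][u] and not T[start][u]:
                T[start][u] = 1
                dfs(start, u)

    for i in range(n):
        dfs(i, i)       # the whole digraph may be traversed n times
    return T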

Warshall’s algorithm: named after Stephen Warshall.

Warshall’s algorithm constructs the transitive closure through a series of n × n boolean matrices: R(0), . . . , R(k−1), R(k), . . . , R(n).

Each of these matrices provides certain information about directed paths in the digraph. For example, the element r_ij^(k) in the ith row and jth column of matrix R(k) (i, j = 1, 2, . . . , n, k = 0, 1, . . . , n) is equal to 1 if and only if there exists a directed path of a positive length from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
• The series starts with R(0), which does not allow any intermediate vertices in its paths; hence, R(0) is nothing other than the adjacency matrix of the digraph.

• R(1) contains the information about paths that can use the first vertex as intermediate; thus, with more freedom, so to speak, it may contain more 1’s than R(0).

• In general, each subsequent matrix in the series has one more vertex to use as intermediate for its paths than its predecessor and hence may contain more 1’s.

• The last matrix in the series, R(n), reflects paths that can use all n vertices of the digraph as intermediate and hence is nothing other than the digraph’s transitive closure.
Working of the Algorithm
We can compute all the elements of each matrix R(k) from its immediate predecessor R(k−1) in the series.

Let r_ij^(k), the element in the ith row and jth column of matrix R(k), be equal to 1. This means that there exists a path from the ith vertex vi to the jth vertex vj with each intermediate vertex numbered not higher than k:

vi, a list of intermediate vertices each numbered not higher than k, vj.

The formula for generating the elements of matrix R(k) from the elements of matrix R(k−1) is:

r_ij^(k) = r_ij^(k−1) or (r_ik^(k−1) and r_kj^(k−1))
Pseudo Code of Warshall’s Algorithm
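The pseudocode itself is not reproduced in this export; a minimal Python rendering of Warshall’s algorithm, applying the formula above for k = 1, . . . , n, might be:

def warshall(A):
    # R starts as the adjacency matrix R(0); after pass k it equals
    # R(k), where paths may use vertices 1..k as intermediates.
    n = len(A)
    R = [row[:] for row in A]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# Hypothetical digraph with edges 1->2, 2->4, 4->1, 4->3:
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
print(warshall(A))  # rows 1, 2, 4 become all 1's; row 3 stays all 0's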
Floyd’s Algorithm, or the All-Pairs Shortest Paths Algorithm

Given a weighted connected graph (undirected or directed), the all-pairs shortest paths problem asks to find the distances (i.e., the lengths of the shortest paths) from each vertex to all other vertices.
Distance Matrix

It is convenient to record the lengths of shortest paths in an n × n matrix D called the distance matrix: the element dij in the ith row and the jth column of this matrix indicates the length of the shortest path from the ith vertex to the jth vertex.

We can generate the distance matrix with an algorithm that is very similar to Warshall’s algorithm. It is called Floyd’s algorithm.

It is applicable to both undirected and directed weighted graphs provided that they do not contain a cycle of a negative length.

Floyd’s algorithm computes the distance matrix of a weighted graph with n vertices through a series of n × n matrices: D(0), . . . , D(k−1), D(k), . . . , D(n).
Distance Matrix

Each of these matrices contains the lengths of shortest paths with certain constraints on the paths considered for the matrix in question.

Specifically, the element d_ij^(k) in the ith row and the jth column of matrix D(k) (i, j = 1, 2, . . . , n, k = 0, 1, . . . , n) is equal to the length of the shortest path among all paths from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.

We can compute all the elements of each matrix D(k) from its immediate predecessor D(k−1).

Let d_ij^(k) be the element in the ith row and the jth column of matrix D(k). This means that d_ij^(k) is equal to the length of the shortest path among all paths from the ith vertex vi to the jth vertex vj with their intermediate vertices numbered not higher than k:

vi, a list of intermediate vertices each numbered not higher than k, vj.

This yields the recurrence

d_ij^(k) = min{d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1)} for k ≥ 1, with d_ij^(0) = wij.
Floyd’s Algorithm Pseudo Code
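The pseudocode itself is not reproduced in this export; a minimal Python sketch, with float('inf') marking the absence of an edge, might be:

def floyd(W):
    # D starts as D(0) = W (the weight matrix); after pass k it
    # equals D(k), where paths may use vertices 1..k as intermediates.
    # Assumes the graph has no cycle of negative length.
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D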
Example
• The Fibonacci numbers are the elements of the sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, . . . , which can be defined by the simple recurrence F(n) = F(n − 1) + F(n − 2) for n > 2 and the two initial conditions F(1) = 1, F(2) = 1.

• If we try to use the recurrence directly to compute the nth Fibonacci number F(n), we would have to recompute the same values of this function many times.

• The problem of computing F(n) is expressed in terms of its smaller and overlapping subproblems of computing F(n − 1) and F(n − 2).
Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, 21, …
F(n) = F(n-1) + F(n-2)
F(1) = F(2) = 1

Computing the nth Fibonacci number recursively (top-down), each call expands into two smaller calls:

F(n)
= F(n-1) + F(n-2)
= (F(n-2) + F(n-3)) + (F(n-3) + F(n-4))
= ...
Algorithm F(n)
//Computes nth Fibonacci Number recursively
//Input: positive integer n
//Output: nth Fibonacci Number

if(n = 1 OR n = 2) return 1
return F(n-1) + F(n-2)
Example
• So we can simply fill the elements of a one-dimensional array with the consecutive values of F by starting, in view of the initial conditions, with 1 and 1 and using the recurrence above as the rule for producing all the other elements.

• The last element of this array will contain F(n).

Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, 21, …
F(n) = F(n-1) + F(n-2)
F(1) = F(2) = 1

Computing the 5th Fibonacci number recursively (top-down):

F(5)
= F(4) + F(3)
= (F(3) + F(2)) + (F(2) + F(1))
= ((F(2) + F(1)) + F(2)) + (F(2) + F(1))
= ((1 + 1) + 1) + (1 + 1) = 5

Note that F(3) is computed twice and F(2) three times.
Algorithm Fibonacci_DP_BottomUp(n)
//Computes nth Fibonacci Number using
// bottom-up approach of Dynamic Programming
//Input: positive integer n
//Output: nth Fibonacci Number

F[1] ← F[2] ← 1
for i ← 3 to n
F[i] ← F[i-1] + F[i-2]
return F[n]
Computing the 6th Fibonacci number (top-down with a table):

Recursive calls:        Table after this stage:
F(6) = F(5) + F(4)      F(1)  1
F(5) = F(4) + F(3)      F(2)  1
F(4) = F(3) + F(2)      F(3)  -1
F(3) = F(2) + F(1)      F(4)  -1
F(2) = 1                F(5)  -1
F(1) = 1                F(6)  -1
Computing the 6th Fibonacci number (F(3) now computed and recorded):

Recursive calls:        Table after this stage:
F(6) = F(5) + F(4)      F(1)  1
F(5) = F(4) + F(3)      F(2)  1
F(4) = F(3) + F(2)      F(3)  2
F(3) = 1 + 1 = 2        F(4)  -1
F(2) = 1                F(5)  -1
F(1) = 1                F(6)  -1
Computing the 6th Fibonacci number (all entries computed):

Recursive calls:        Table after this stage:
F(6) = 5 + 3 = 8        F(1)  1
F(5) = 3 + 2 = 5        F(2)  1
F(4) = 2 + 1 = 3        F(3)  2
F(3) = 1 + 1 = 2        F(4)  3
F(2) = 1                F(5)  5
F(1) = 1                F(6)  8
Algorithm Fibonacci_DP_TopDown(n, F)
//Computes nth Fibonacci Number using a table
// to avoid recomputing subproblems
//Input: positive integer n and array F where
// F[i] is either the ith Fibonacci Number or -1,
// indicating it is not yet computed
// (initially F[1] = F[2] = 1 and all other entries are -1)
//Output: nth Fibonacci Number

if(F[n] ≠ -1) return F[n]
F[n] ← Fibonacci_DP_TopDown(n-1, F) + Fibonacci_DP_TopDown(n-2, F)
return F[n]
Q: How many bit strings of length n do not have two consecutive zeros?

E.g., if n = 8:
(10110110 is one of them, but 11100111 is not.)
Q: How many bit strings of length 8 do not have two consecutive zeros?
(10110110 is one of them, but 11100111 is not.)

Soln: f(n) = f(n-1) + f(n-2),
where f(1) = 2, f(2) = 3
Q: How many bit strings of length 8 do not have three consecutive zeros?
(10010110 is one of them, but 11000111 is not.)

Soln: f(n) = f(n-1) + f(n-2) + f(n-3),
where f(1) = 2, f(2) = 4, f(3) = 7
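Both counts are easy to compute bottom-up; a short Python sketch (the function names are illustrative):

def no_two_zeros(n):
    # f(n) = f(n-1) + f(n-2), f(1) = 2, f(2) = 3
    f = [0, 2, 3] + [0] * max(0, n - 2)
    for i in range(3, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]

def no_three_zeros(n):
    # f(n) = f(n-1) + f(n-2) + f(n-3), f(1) = 2, f(2) = 4, f(3) = 7
    f = [0, 2, 4, 7] + [0] * max(0, n - 3)
    for i in range(4, n + 1):
        f[i] = f[i - 1] + f[i - 2] + f[i - 3]
    return f[n]

print(no_two_zeros(8))    # prints 55
print(no_three_zeros(8))  # prints 149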
Principle of optimality
It says that an optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances.
Coin-row problem
There is a row of n coins whose values are some
positive integers c1, c2, …, cn, not necessarily
distinct. The goal is to pick up the maximum
amount of money subject to the constraint that
no two coins adjacent in the initial row can be
picked up.
Find the Recurrence
E.g.: Coin-row problem
There is a row of n coins whose values are some
positive integers c1, c2, …, cn, not necessarily
distinct. The goal is to pick up the maximum
amount of money subject to the constraint that
no two coins adjacent in the initial row can be
picked up.

F(n) = max{ F(n-1), cn + F(n-2) } for n > 1,
where F(0) = 0, F(1) = c1
Coin-row problem. There is a row of n coins whose values are some
positive integers c1, c2, . . . , cn, not necessarily distinct. The goal is to pick
up the maximum amount of money subject to the constraint that no two
coins adjacent in the initial row can be picked up.
Let F(n) be the maximum amount that can be picked up from the row of n
coins. To derive a recurrence for F(n), we partition all the allowed coin
selections into two groups: those that include the last coin and those
without it. The largest amount we can get from the first group is equal to cn
+ F(n − 2)—the value of the nth coin plus the maximum amount we can pick
up from the first n − 2 coins. The maximum amount we can get from the
second group is equal to F(n − 1) by the definition of F(n). Thus, we have
the following recurrence subject to the obvious initial conditions:

F(n) = max{cn + F(n − 2), F(n − 1)} for n > 1,
F(0) = 0, F(1) = c1.
Pseudo code
Solving the coin-row problem by dynamic programming for the coin row
5, 1, 2, 10, 6, 2.
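The pseudocode itself is not reproduced here; a minimal Python sketch of the recurrence, traced on this row, might be:

def coin_row(coins):
    # F[i] = the most money obtainable from the first i coins
    # with no two adjacent coins picked up
    n = len(coins)
    F = [0] * (n + 1)
    if n > 0:
        F[1] = coins[0]
    for i in range(2, n + 1):
        F[i] = max(coins[i - 1] + F[i - 2], F[i - 1])
    return F[n]

print(coin_row([5, 1, 2, 10, 6, 2]))  # table F = 0, 5, 5, 7, 15, 15, 17

For this row the answer is 17, obtained by picking the coins 5, 10, and 2 (positions 1, 4, and 6).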
