Dynamic Programming

Dynamic programming solves optimization problems by breaking them down into overlapping subproblems. It differs from divide-and-conquer in that subproblems are not independent and may be solved multiple times. Dynamic programming involves defining subproblems, storing solutions to computed subproblems in a table for lookup, and building up the overall solution by combining optimal solutions to subproblems. An example is the matrix chain multiplication problem, which can be solved optimally using dynamic programming by defining the cost of multiplying sub-matrices as a recursive function and storing intermediate results in a cost table.


Dynamic Programming

What is Dynamic Programming?

• Dynamic programming solves optimization problems by combining solutions to subproblems.
What is Dynamic Programming? …contd

Recall the divide-and-conquer approach:
• Partition the problem into independent subproblems
• Solve the subproblems recursively
• Combine solutions of subproblems

This contrasts with the dynamic programming approach.
What is Dynamic Programming? …contd

Dynamic programming is applicable when subproblems are not independent, i.e., subproblems share subsubproblems.
• Solve every subsubproblem only once and store the answer for use when it reappears.
Divide and conquer vs. dynamic programming

Elements of DP Algorithms
• Sub-structure: decompose the problem into smaller sub-problems. Express the solution of the original problem in terms of solutions for smaller problems.
• Table-structure: store the answers to the sub-problems in a table, because sub-problem solutions may be used many times.
• Bottom-up computation: combine solutions of smaller sub-problems to solve larger sub-problems, and eventually arrive at a solution to the complete problem.
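All three elements show up in even the smallest DP. The following sketch (Fibonacci numbers, an illustrative example not taken from these notes) fills a table bottom-up so that each subproblem is solved exactly once:

```python
def fib(n):
    """Fibonacci via DP: table-structure plus bottom-up computation."""
    if n < 2:
        return n
    table = [0] * (n + 1)   # the table stores the answer to every subproblem
    table[1] = 1
    for i in range(2, n + 1):
        # sub-structure: combine solutions of the two smaller subproblems
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(10))  # -> 55
```

Without the table, the plain recursion would recompute shared subsubproblems exponentially many times.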
The shortest path

To find a shortest path in a multi-stage graph:

[Figure omitted: a small multistage graph from S to T.]

Apply the greedy method:
the shortest path from S to T: 1 + 2 + 5 = 8
The shortest path in multistage graphs

[Figure omitted: a multistage graph with vertices S; A, B, C; D, E, F; T. Edge weights: S–A = 1, S–B = 2, S–C = 5; A–D = 4, A–E = 11; B–D = 9, B–E = 5, B–F = 16; C–F = 2; D–T = 18, E–T = 13, F–T = 2.]

The greedy method cannot be applied to this case:
• greedy chooses (S, A, D, T): 1 + 4 + 18 = 23.
• The real shortest path is (S, C, F, T): 5 + 2 + 2 = 9.
Dynamic programming approach

Dynamic programming approach (forward approach):

[Figure omitted: S connects to A, B, C with edge weights 1, 2, 5; d(A, T), d(B, T), d(C, T) denote the remaining distances to T.]

d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)}

[Figure omitted: A connects to D and E with edge weights 4 and 11.]

d(A, T) = min{4 + d(D, T), 11 + d(E, T)}
Dynamic programming

d(A, T) = min{4 + d(D, T), 11 + d(E, T)} = min{4 + 18, 11 + 13} = 22
d(B, T) = min{9 + d(D, T), 5 + d(E, T), 16 + d(F, T)} = min{9 + 18, 5 + 13, 16 + 2} = 18
d(C, T) = min{2 + d(F, T)} = 2 + 2 = 4
d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)} = min{1 + 22, 2 + 18, 5 + 4} = 9

The above way of reasoning is called backward reasoning.
Backward approach (forward reasoning)

d(S, A) = 1
d(S, B) = 2
d(S, C) = 5
d(S, D) = min{d(S, A) + d(A, D), d(S, B) + d(B, D)} = min{1 + 4, 2 + 9} = 5
d(S, E) = min{d(S, A) + d(A, E), d(S, B) + d(B, E)} = min{1 + 11, 2 + 5} = 7
d(S, F) = min{d(S, B) + d(B, F), d(S, C) + d(C, F)} = min{2 + 16, 5 + 2} = 7
d(S, T) = min{d(S, D) + d(D, T), d(S, E) + d(E, T), d(S, F) + d(F, T)}
        = min{5 + 18, 7 + 13, 7 + 2} = 9
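The backward reasoning above can be sketched in a few lines of Python. The edge list is transcribed from the example graph; vertices are processed in reverse topological order so every successor's distance is already known:

```python
# Successor lists with edge weights, taken from the multistage graph example.
succ = {
    'S': [('A', 1), ('B', 2), ('C', 5)],
    'A': [('D', 4), ('E', 11)],
    'B': [('D', 9), ('E', 5), ('F', 16)],
    'C': [('F', 2)],
    'D': [('T', 18)],
    'E': [('T', 13)],
    'F': [('T', 2)],
}

d = {'T': 0}  # d[v] = length of a shortest path from v to T
# Reverse topological order: all successors of v are settled before v.
for v in ['D', 'E', 'F', 'C', 'B', 'A', 'S']:
    d[v] = min(w + d[u] for u, w in succ[v])

print(d['S'])  # -> 9
```

Each vertex is settled once, matching the values computed above (d(B, T) = 18, d(A, T) = 22, d(S, T) = 9).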
Principle of optimality

Principle of optimality: Suppose that in solving a problem, we have to make a sequence of decisions D1, D2, …, Dn. If this sequence is optimal, then the last k decisions, 1 ≤ k ≤ n, must be optimal.

e.g. the shortest path problem:
If i, i1, i2, …, j is a shortest path from i to j, then i1, i2, …, j must be a shortest path from i1 to j.
1. Matrix Chain Multiplication

• Matrix-chain multiplication problem
  – Given a chain A1, A2, …, An of n matrices, with sizes p0 × p1, p1 × p2, p2 × p3, …, pn-1 × pn
  – Parenthesize the product A1A2…An such that the total number of scalar multiplications is minimized.

Matrix Multiplication

Multiplying a p × q matrix by a q × r matrix yields a p × r matrix.

Number of scalar multiplications = pqr


Example

Matrix   Dimensions
A1       13 × 5
A2        5 × 89
A3       89 × 3
A4        3 × 34

    Parenthesization       Scalar multiplications
1   ((A1 A2) A3) A4        10,582
2   (A1 A2) (A3 A4)        54,201
3   (A1 (A2 A3)) A4         2,856
4   A1 ((A2 A3) A4)         4,055
5   A1 (A2 (A3 A4))        26,418

For parenthesization 1:
• 13 × 5 × 89 scalar multiplications to get (A1 A2), a 13 × 89 result
• 13 × 89 × 3 scalar multiplications to get ((A1 A2) A3), a 13 × 3 result
• 13 × 3 × 34 scalar multiplications to get (((A1 A2) A3) A4), a 13 × 34 result
Dynamic Programming Approach

• The structure of an optimal solution
  – Let us use the notation Ai..j for the matrix that results from the product Ai Ai+1 … Aj
  – An optimal parenthesization of the product A1A2…An splits the product between Ak and Ak+1 for some integer k where 1 ≤ k < n
  – First compute matrices A1..k and Ak+1..n; then multiply them to get the final matrix A1..n
Dynamic Programming Approach …contd

– Key observation: parenthesizations of the subchains A1A2…Ak and Ak+1Ak+2…An must also be optimal if the parenthesization of the chain A1A2…An is optimal.
– That is, the optimal solution to the problem contains within it the optimal solutions to subproblems.
Dynamic Programming Approach …contd

• Recursive definition of the value of an optimal solution.
  – Let m[i, j] be the minimum number of scalar multiplications necessary to compute Ai..j
  – Minimum cost to compute A1..n is m[1, n]
  – Suppose the optimal parenthesization of Ai..j splits the product between Ak and Ak+1 for some integer k where i ≤ k < j
Dynamic Programming Approach …contd

– Ai..j = (Ai Ai+1 … Ak) · (Ak+1 Ak+2 … Aj) = Ai..k · Ak+1..j
– Cost of computing Ai..j = cost of computing Ai..k + cost of computing Ak+1..j + cost of multiplying Ai..k and Ak+1..j
– Cost of multiplying Ai..k and Ak+1..j is pi-1 pk pj
– m[i, j] = m[i, k] + m[k+1, j] + pi-1 pk pj for i ≤ k < j
– m[i, i] = 0 for i = 1, 2, …, n
Dynamic Programming Approach …contd

– But the optimal parenthesization occurs at one value of k among all possible i ≤ k < j
– Check all of these and select the best one

            0                                            if i = j
m[i, j] =
            min { m[i, k] + m[k+1, j] + pi-1 pk pj }     if i < j
           i ≤ k < j
Dynamic Programming Approach …contd

• To keep track of how to construct an optimal solution, we use a table s
• s[i, j] = value of k at which Ai Ai+1 … Aj is split for optimal parenthesization.

Ex: [A1]5×4 [A2]4×6 [A3]6×2 [A4]2×7
p0 = 5, p1 = 4, p2 = 6, p3 = 2, p4 = 7

Computation sequence (diagonal by diagonal): m(1,2), m(2,3), m(3,4); then m(1,3), m(2,4); then m(1,4).

j-i = 0:  m11 = 0    m22 = 0    m33 = 0    m44 = 0
j-i = 1:  m12 = 120  m23 = 48   m34 = 84
          s12 = 1    s23 = 2    s34 = 3
j-i = 2:  m13 = 88   m24 = 104
          s13 = 1    s24 = 3
j-i = 3:  m14 = 158
          s14 = 3

Optimal parenthesization: (A1 (A2 A3)) A4
Matrix Chain Multiplication Algorithm

– First computes costs for chains of length l = 1
– Then for chains of length l = 2, 3, … and so on
– Computes the optimal cost bottom-up.

Input: Array p[0…n] containing matrix dimensions, and n
Result: Minimum-cost table m and split table s

Algorithm Matrix_Chain_Mul(p[], n)
{
    for i := 1 to n do
        m[i, i] := 0;
    for len := 2 to n do   // for chain lengths 2, 3, and so on
    {
        for i := 1 to (n - len + 1) do
        {
            j := i + len - 1;
            m[i, j] := ∞;
            for k := i to j - 1 do
            {
                q := m[i, k] + m[k+1, j] + p[i-1] p[k] p[j];
                if q < m[i, j] then
                {
                    m[i, j] := q;
                    s[i, j] := k;
                }
            }
        }
    }
    return m and s;
}
Time complexity of the above algorithm is O(n³).
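The pseudocode translates directly to Python. A sketch, run on the example dimensions p = (5, 4, 6, 2, 7) (tables are 1-indexed to match the algorithm; index 0 is unused):

```python
def matrix_chain_mul(p):
    """Bottom-up matrix-chain DP; returns cost table m and split table s."""
    n = len(p) - 1  # number of matrices in the chain
    INF = float('inf')
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain lengths 2, 3, ..., n
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = INF
            for k in range(i, j):           # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

m, s = matrix_chain_mul([5, 4, 6, 2, 7])
print(m[1][4], s[1][4])  # -> 158 3
```

The result matches the worked table: m(1,4) = 158 with the final split after A3.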
Constructing Optimal Solution

• Our algorithm computes the minimum-cost table m and the split table s
• The optimal solution can be constructed from the split table s
  – Each entry s[i, j] = k shows where to split the product Ai Ai+1 … Aj for the minimum cost.

Example
• Copy the table of the previous example and then construct the optimal parenthesization.
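Recovering the parenthesization from s is a short recursion. In this sketch the split table is hard-coded from the worked example above (s12 = 1, s23 = 2, s34 = 3, s13 = 1, s24 = 3, s14 = 3):

```python
def parenthesize(s, i, j):
    """Return the optimal parenthesization of Ai..Aj using split table s."""
    if i == j:
        return f"A{i}"
    k = s[(i, j)]  # optimal split: (Ai..Ak)(Ak+1..Aj)
    return "(" + parenthesize(s, i, k) + parenthesize(s, k + 1, j) + ")"

# Split table from the example [A1]5x4 [A2]4x6 [A3]6x2 [A4]2x7.
s = {(1, 2): 1, (2, 3): 2, (3, 4): 3, (1, 3): 1, (2, 4): 3, (1, 4): 3}
print(parenthesize(s, 1, 4))  # -> ((A1(A2A3))A4)
```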
0/1 knapsack problem

n objects, weights W1, W2, …, Wn
profits P1, P2, …, Pn
knapsack capacity M

maximize   Σ (1 ≤ i ≤ n) Pi xi
subject to Σ (1 ≤ i ≤ n) Wi xi ≤ M
           xi = 0 or 1, 1 ≤ i ≤ n

e.g.   i   Wi   Pi
       1   10   40       M = 10
       2    3   20
       3    –   30
The multistage graph solution

The 0/1 knapsack problem can be described by a multistage graph: stage i corresponds to the decision xi = 0 or xi = 1, and the edge taken when xi = 1 carries profit Pi.

[Figure omitted: multistage graph from S to T with one stage per decision; profit edges 40 for x1 = 1, 20 for x2 = 1, 30 for x3 = 1, and 0 for each xi = 0.]
The dynamic programming approach

The longest path represents the optimal solution:
x1 = 0, x2 = 1, x3 = 1
Σ Pi xi = 20 + 30 = 50

Let fi(Q) be the value of an optimal solution to objects 1, 2, 3, …, i with capacity Q.

fi(Q) = max{ fi-1(Q), fi-1(Q - Wi) + Pi }

The optimal solution is fn(M).
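The recurrence fi(Q) = max{ fi-1(Q), fi-1(Q - Wi) + Pi } tabulates directly. In this sketch the data are assumed values chosen to be consistent with the slide's optimum of 50; in particular W3 = 5 is an assumption, since the third object's row is garbled in the source:

```python
def knapsack(W, P, M):
    """0/1 knapsack: f[i][Q] = best profit using objects 1..i, capacity Q."""
    n = len(W)
    f = [[0] * (M + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for Q in range(M + 1):
            f[i][Q] = f[i - 1][Q]                    # choose x_i = 0
            if W[i - 1] <= Q:                        # choose x_i = 1 if it fits
                f[i][Q] = max(f[i][Q], f[i - 1][Q - W[i - 1]] + P[i - 1])
    return f[n][M]

# Assumed data: W = (10, 3, 5), P = (40, 20, 30), M = 10.
print(knapsack([10, 3, 5], [40, 20, 30], 10))  # -> 50
```

With these values the best choice is x1 = 0, x2 = 1, x3 = 1, as in the slide.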
Optimal binary search trees

e.g. binary search trees for the keys 3, 7, 9, 12:

[Figure omitted: four different binary search trees (a)–(d) on the keys 3, 7, 9, 12.]
Optimal binary search trees

n identifiers: a1 < a2 < a3 < … < an
Pi, 1 ≤ i ≤ n: the probability that ai is searched.
Qi, 0 ≤ i ≤ n: the probability that x is searched, where ai < x < ai+1 (a0 = -∞, an+1 = ∞).

Σ (i = 1 .. n) Pi + Σ (i = 0 .. n) Qi = 1
[Figure omitted: a binary search tree on the identifiers 4, 5, 8, 10, 11, 12, 14; internal nodes represent successful searches (Pi) and external nodes E0, …, E7 represent unsuccessful searches (Qi).]

The expected cost of a binary search tree:

Σ (i = 1 .. n) Pi · level(ai) + Σ (i = 0 .. n) Qi · (level(Ei) - 1)
The dynamic programming approach

Let C(i, j) denote the cost of an optimal binary search tree containing ai, …, aj.
The cost of the optimal binary search tree with ak as its root:

C(1, n) = min (1 ≤ k ≤ n) { Pk + Q0 + Σ (i = 1 .. k-1) (Pi + Qi) + C(1, k-1)
                               + Qk + Σ (i = k+1 .. n) (Pi + Qi) + C(k+1, n) }

[Figure omitted: root ak; the left subtree holds a1…ak-1 with weights P1…Pk-1, Q0…Qk-1 and cost C(1, k-1); the right subtree holds ak+1…an with weights Pk+1…Pn, Qk…Qn and cost C(k+1, n).]
General formula

C(i, j) = min (i ≤ k ≤ j) { Pk + Qi-1 + Σ (m = i .. k-1) (Pm + Qm) + C(i, k-1)
                               + Qk + Σ (m = k+1 .. j) (Pm + Qm) + C(k+1, j) }
        = min (i ≤ k ≤ j) { C(i, k-1) + C(k+1, j) } + Qi-1 + Σ (m = i .. j) (Pm + Qm)

[Figure omitted: the same decomposition as above, with root ak, left subtree a1…ak-1 and right subtree ak+1…an.]


Computation relationships of subtrees

e.g. n = 4:
C(1,4) depends on C(1,3) and C(2,4), which in turn depend on C(1,2), C(2,3), and C(3,4).

Time complexity: O(n³)
• When j - i = m, there are (n - m) C(i, j)'s to compute.
• Each C(i, j) with j - i = m can be computed in O(m) time.

O( Σ (1 ≤ m ≤ n-1) m(n - m) ) = O(n³)
Optimal Binary Search Tree (OBST)

• Problem
  – n identifiers: a1 < a2 < a3 < … < an
  – p(i), 1 ≤ i ≤ n: the probability that ai is searched.
  – q(i), 0 ≤ i ≤ n: the probability that x is searched, where ai < x < ai+1 (a0 = -∞, an+1 = ∞).
• Build a binary search tree (BST) with minimum search cost.
• Ex: (a1, a2, a3) = (do, if, while), p(i) = q(i) = 1/7 for all i
• The number of possible binary search trees = (1/(n+1)) · C(2n, n) = (1/4) · C(6, 3) = 5

[Figure omitted: the five possible binary search trees (a)–(e) on the keys do, if, while.]
For the above example:
cost( tree a ) = 15/7
cost( tree b ) = 13/7
cost( tree c ) = 15/7
cost( tree d ) = 15/7
cost( tree e ) = 15/7
Therefore, tree b is optimal.

Algorithm search(x)
{
    found := false; t := tree;
    while ( (t ≠ 0) and not found ) do
    {
        if ( x = t->data ) then found := true;
        else if ( x < t->data ) then t := t->lchild;
        else t := t->rchild;
    }
    if ( not found ) then return 0; else return 1;
}

Ex. 2
p(1) = 0.5, p(2) = 0.1, p(3) = 0.05
q(0) = 0.15, q(1) = 0.1, q(2) = 0.05, q(3) = 0.05

Cost of searching a successful identifier = frequency × level
Cost of searching an unsuccessful identifier = frequency × (level - 1)

cost( tree a ) = 2.65
cost( tree b ) = 1.9
cost( tree c ) = 1.5
cost( tree d ) = 2.05
cost( tree e ) = 1.6
Therefore, tree c is optimal.
• Identifiers: stop, if, do

[Figure omitted: a binary search tree with root "stop"; internal nodes represent successful searches (p(i)) and external nodes E0–E3 represent unsuccessful searches (q(i)).]

The expected cost of a binary search tree:

Σ (i = 1 .. n) p(i) · level(ai) + Σ (i = 0 .. n) q(i) · (level(Ei) - 1)
The dynamic programming approach

• Make a decision as to which of the ai's should be assigned to the root node of the tree.
• If we choose ak, then it is clear that the internal nodes for a1, a2, …, ak-1 as well as the external nodes for the classes E0, E1, …, Ek-1 will lie in the left subtree l of the root. The remaining nodes will be in the right subtree r.

[Figure omitted: root ak with left subtree (a1…ak-1; weights P1…Pk-1, Q0…Qk-1; cost C(1, k-1)) and right subtree (ak+1…an; weights Pk+1…Pn, Qk…Qn).]

cost(l) = Σ (1 ≤ i < k) p(i) · level(ai) + Σ (0 ≤ i < k) q(i) · (level(Ei) - 1)
cost(r) = Σ (k < i ≤ n) p(i) · level(ai) + Σ (k ≤ i ≤ n) q(i) · (level(Ei) - 1)

• In both cases the level is measured by considering the root of the respective subtree to be at level 1.

• Using w(i, j) to represent the sum q(i) + Σ (l = i+1 .. j) ( q(l) + p(l) ), we obtain the following as the expected cost of the above search tree:

p(k) + cost(l) + cost(r) + w(0, k-1) + w(k, n)

• If we use c(i, j) to represent the cost of an optimal binary search tree tij containing ai+1, …, aj and Ei, …, Ej, then cost(l) = c(0, k-1) and cost(r) = c(k, n).

• For the tree to be optimal, we must choose k such that p(k) + c(0, k-1) + c(k, n) + w(0, k-1) + w(k, n) is minimum.

Hence, for c(0, n) we obtain

c(0, n) = min (0 < k ≤ n) { c(0, k-1) + c(k, n) + p(k) + w(0, k-1) + w(k, n) }

We can generalize the above formula for any c(i, j) as shown below:

c(i, j) = min (i < k ≤ j) { c(i, k-1) + c(k, j) + p(k) + w(i, k-1) + w(k, j) }
c(i, j) = min (i < k ≤ j) { c(i, k-1) + c(k, j) } + w(i, j)

– Therefore, c(0, n) can be solved by first computing all c(i, j) such that j - i = 1; next we compute all c(i, j) such that j - i = 2, then all c(i, j) with j - i = 3, and so on.
– During this computation we record the root r(i, j) of each tree tij; an optimal binary search tree can then be constructed from these r(i, j).
– r(i, j) is the value of k that minimizes the cost value.

Note: 1. c(i, i) = 0, w(i, i) = q(i), and r(i, i) = 0 for all 0 ≤ i ≤ n
      2. w(i, j) = p(j) + q(j) + w(i, j-1)
Ex 1: Let n = 4 and (a1, a2, a3, a4) = (do, if, int, while).
Let p(1:4) = (3, 3, 1, 1) and q(0:4) = (2, 3, 1, 1, 1).
(p's and q's have been multiplied by 16 for convenience.)

Then we get:

j-i = 0:  w00=2   w11=3   w22=1   w33=1   w44=1
          c00=0   c11=0   c22=0   c33=0   c44=0
          r00=0   r11=0   r22=0   r33=0   r44=0
j-i = 1:  w01=8   w12=7   w23=3   w34=3
          c01=8   c12=7   c23=3   c34=3
          r01=1   r12=2   r23=3   r34=4
j-i = 2:  w02=12  w13=9   w24=5
          c02=19  c13=12  c24=8
          r02=1   r13=2   r24=3
j-i = 3:  w03=14  w14=11
          c03=25  c14=19
          r03=2   r14=2
j-i = 4:  w04=16
          c04=32
          r04=2

Computation of c(0,4), w(0,4), and r(0,4)

• From the table we can see that c(0,4) = 32 is the minimum cost of a binary search tree for (a1, a2, a3, a4).
• The root of tree t04 is a2.
• The left subtree is t01 and the right subtree is t24.
• Tree t01 has root a1; its left subtree is t00 and its right subtree is t11.
• Tree t24 has root a3; its left subtree is t22 and its right subtree is t34.
• Thus we can construct the OBST: "if" at the root, with left child "do", right child "int", and "while" as the right child of "int".
Ex 2: Let n = 4 and (a1, a2, a3, a4) = (count, float, int, while).
Let p(1:4) = (1/20, 1/5, 1/10, 1/20) and q(0:4) = (1/5, 1/10, 1/5, 1/20, 1/20).

Using the r(i, j)'s, construct an optimal binary search tree.

Time complexity of the above procedure to evaluate the c's and r's

• The above procedure requires computing c(i, j) for (j - i) = 1, 2, …, n.
• When j - i = m, there are n - m + 1 c(i, j)'s to compute.
• The computation of each of these c(i, j)'s requires finding the minimum of m quantities.
• Hence, each such c(i, j) can be computed in time O(m).
• The total time for all c(i, j)'s with j - i = m is m(n - m + 1) = mn - m² + m = O(mn - m²).
• Therefore, the total time to evaluate all the c(i, j)'s and r(i, j)'s is

Σ (1 ≤ m ≤ n) (mn - m²) = O(n³)
• We can reduce the time complexity by using an observation of D. E. Knuth.

• Observation: the optimal k can be found by limiting the search to the range r(i, j-1) ≤ k ≤ r(i+1, j).

• In this case the computing time is O(n²).


OBST Algorithm

Algorithm OBST(p, q, n)
{
    for i := 0 to n-1 do
    {   // Initialize.
        w[i, i] := q[i]; r[i, i] := 0; c[i, i] := 0;
        // Optimal trees with one node.
        w[i, i+1] := p[i+1] + q[i+1] + q[i];
        c[i, i+1] := p[i+1] + q[i+1] + q[i];
        r[i, i+1] := i + 1;
    }
    w[n, n] := q[n]; r[n, n] := 0; c[n, n] := 0;
    // Find optimal trees with m nodes.
    for m := 2 to n do
    {
        for i := 0 to n - m do
        {
            j := i + m;
            w[i, j] := p[j] + q[j] + w[i, j-1];
            // Solve using Knuth's result.
            x := Find(c, r, i, j);
            c[i, j] := w[i, j] + c[i, x-1] + c[x, j];
            r[i, j] := x;
        }
    }
}

Algorithm Find(c, r, i, j)
{
    min := ∞;
    for k := r[i, j-1] to r[i+1, j] do
    {
        if ( c[i, k-1] + c[k, j] < min ) then
        {
            min := c[i, k-1] + c[k, j]; y := k;
        }
    }
    return y;
}
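The recurrence can be checked against Ex 1 with a short script. This sketch computes the w, c, r tables bottom-up exactly as in the worked table; for clarity it searches all k in i < k ≤ j rather than using Knuth's narrowed range:

```python
def obst(p, q):
    """Tables w, c, r for an optimal BST; p is 1-indexed via a dummy p[0]."""
    n = len(p) - 1
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                        # c[i][i] = 0, r[i][i] = 0
    for m in range(1, n + 1):                 # trees with m nodes
        for i in range(n - m + 1):
            j = i + m
            w[i][j] = p[j] + q[j] + w[i][j - 1]
            # choose root k (i < k <= j) minimizing c[i][k-1] + c[k][j]
            best = min(range(i + 1, j + 1), key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = w[i][j] + c[i][best - 1] + c[best][j]
            r[i][j] = best
    return w, c, r

# Ex 1: (do, if, int, while), p(1:4) = (3, 3, 1, 1), q(0:4) = (2, 3, 1, 1, 1).
w, c, r = obst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1])
print(c[0][4], r[0][4])  # -> 32 2
```

The output agrees with the table: minimum cost 32 with root a2 = "if".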
Traveling Salesperson Problem (TSP)

Problem:
• You are given a set of n cities.
• You are given the distances between the cities.
• You start and terminate your tour at your home city.
• You must visit each other city exactly once.
• Your mission is to determine the shortest tour, i.e., to minimize the total distance traveled.

e.g. a directed graph:

[Figure omitted: a directed graph on vertices 1–4 with the edge costs given by the matrix below.]

• Cost matrix:
        1   2   3   4
   1    0   2  10   5
   2    2   0   9   ∞
   3    4   3   0   4
   4    6   8   7   0
The dynamic programming approach

• Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1.

• The length of an optimal tour:

  g(1, V - {1}) = min (2 ≤ k ≤ n) { c1k + g(k, V - {1, k}) }     …(1)

• The general form:

  g(i, S) = min (j ∈ S) { cij + g(j, S - {j}) }     …(2)

• Equation 1 can be solved for g(1, V - {1}) if we know g(k, V - {1, k}) for all choices of k.
• The g values can be obtained by using equation 2. Clearly,
  g(i, Ø) = ci1, 1 ≤ i ≤ n.
• Hence we can use eq. 2 to obtain g(i, S) for all S of size 1; then we can obtain g(i, S) for all S of size 2, and so on.

Thus,
g(2, Ø) = c21 = 2,  g(3, Ø) = c31 = 4,  g(4, Ø) = c41 = 6

We can obtain:
g(2, {3}) = c23 + g(3, Ø) = 9 + 4 = 13
g(2, {4}) = c24 + g(4, Ø) = ∞
g(3, {2}) = c32 + g(2, Ø) = 3 + 2 = 5
g(3, {4}) = c34 + g(4, Ø) = 4 + 6 = 10
g(4, {2}) = c42 + g(2, Ø) = 8 + 2 = 10
g(4, {3}) = c43 + g(3, Ø) = 7 + 4 = 11

Next, we compute g(i, S) with |S| = 2:
g(2, {3, 4}) = min{ c23 + g(3, {4}), c24 + g(4, {3}) } = min{19, ∞} = 19
g(3, {2, 4}) = min{ c32 + g(2, {4}), c34 + g(4, {2}) } = min{∞, 14} = 14
g(4, {2, 3}) = min{ c42 + g(2, {3}), c43 + g(3, {2}) } = min{21, 12} = 12

Finally, we obtain:
g(1, {2, 3, 4}) = min{ c12 + g(2, {3, 4}), c13 + g(3, {2, 4}), c14 + g(4, {2, 3}) }
                = min{ 2 + 19, 10 + 14, 5 + 12 }
                = min{ 21, 24, 17 }
                = 17

• A tour can be constructed if we retain, with each g(i, S), the value of j that minimizes the tour distance.
• Let J(i, S) be this value; then J(1, {2, 3, 4}) = 4.
• Thus the tour starts from 1 and goes to 4.
• The remaining tour can be obtained from g(4, {2, 3}). So J(4, {2, 3}) = 3.
• Thus the next edge is <4, 3>. The remaining tour is g(3, {2}). So J(3, {2}) = 2.

The optimal tour is (1, 4, 3, 2, 1).
Tour distance is 5 + 7 + 3 + 2 = 17.
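The computation above is the Held–Karp dynamic program. A Python sketch over the example's cost matrix (0-indexed, so city 1 is index 0):

```python
from itertools import combinations

INF = float('inf')
# Cost matrix of the example (city 1 is index 0).
c = [[0, 2, 10, 5],
     [2, 0, 9, INF],
     [4, 3, 0, 4],
     [6, 8, 7, 0]]
n = len(c)

# g[(i, S)] = shortest path from i through every vertex of S, ending at city 0.
g = {(i, frozenset()): c[i][0] for i in range(1, n)}
for size in range(1, n - 1):                 # subsets of increasing size
    for S in combinations(range(1, n), size):
        S = frozenset(S)
        for i in range(1, n):
            if i not in S:
                g[(i, S)] = min(c[i][j] + g[(j, S - {j})] for j in S)

rest = frozenset(range(1, n))
tour = min(c[0][j] + g[(j, rest - {j})] for j in range(1, n))
print(tour)  # -> 17
```

The intermediate values match the slides: g(2, {3,4}) = 19, g(3, {2,4}) = 14, g(4, {2,3}) = 12.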
All Pairs Shortest Path Problem

Floyd-Warshall Algorithm

• Let G = (V, E) be a directed graph consisting of n vertices.
• A weight is associated with each edge.
• The problem is to find a shortest path between every pair of nodes.

Ex:

[Figure omitted: a directed graph on vertices v1–v5 with the edge costs given by the matrix below.]

• Cost matrix:
        1   2   3   4   5
   1    0   1   ∞   1   5
   2    9   0   3   2   ∞
   3    ∞   ∞   0   4   ∞
   4    ∞   ∞   2   0   3
   5    3   ∞   ∞   ∞   0
Idea of Floyd-Warshall Algorithm

• Assume the vertices are {1, 2, …, n}.
• Let dk(i, j) be the length of a shortest path from i to j with intermediate vertices numbered not higher than k, where 0 ≤ k ≤ n. Then:
• d0(i, j) = c(i, j) (no intermediate vertices at all)
• dk(i, j) = min{ dk-1(i, j), dk-1(i, k) + dk-1(k, j) }
  – and dn(i, j) is the length of a shortest path from i to j.
• In summary, we need to find dn, starting with d0 = cost matrix.

• General formula:

  dk[i, j] = min{ dk-1[i, j], dk-1[i, k] + dk-1[k, j] }

[Figure omitted: a shortest path from vi to vj with intermediate vertices {v1, …, vk} either avoids vk (length dk-1[i, j]) or passes through vk (length dk-1[i, k] + dk-1[k, j], each piece using intermediate vertices {v1, …, vk-1} only).]
[Matrices d0 through d5 omitted: each successive matrix is obtained from the previous one with the formula above.]
Algorithm

Algorithm AllPaths(c, d, n)
// c[1:n, 1:n] is the cost matrix.
// d[i, j] is the length of a shortest path from i to j.
{
    for i := 1 to n do
        for j := 1 to n do
            d[i, j] := c[i, j];   // copy c into d
    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                d[i, j] := min( d[i, j], d[i, k] + d[k, j] );
}
Time complexity is O(n³).
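The same triple loop in Python, run on the cost matrix of the example (with ∞ as float('inf')):

```python
INF = float('inf')

def all_paths(c):
    """Floyd-Warshall: return the all-pairs shortest-path matrix."""
    n = len(c)
    d = [row[:] for row in c]                 # copy c into d
    for k in range(n):                        # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

c = [[0, 1, INF, 1, 5],
     [9, 0, 3, 2, INF],
     [INF, INF, 0, 4, INF],
     [INF, INF, 2, 0, 3],
     [3, INF, INF, INF, 0]]
d = all_paths(c)
print(d[0])  # distances from v1 -> [0, 1, 3, 1, 4]
```

For instance d[0][2] = 3 comes from the path v1 → v4 → v3 (1 + 2), shorter than any direct route.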
0/1 Knapsack Problem

Let xi = 1 when item i is selected and let xi = 0 when item i is not selected.

maximize   Σ (i = 1 .. n) pi xi
subject to Σ (i = 1 .. n) wi xi ≤ c
and xi = 0 or 1 for all i

All profits and weights are positive.
Sequence Of Decisions

• Decide the xi values in the order x1, x2, x3,


…, xn.
OR
• Decide the xi values in the order xn, xn-1, xn-2,
…, x1.
Problem State
• The state of the 0/1 knapsack problem is given by
  – the weights and profits of the available items
  – the capacity of the knapsack
• When a decision on one of the xi values is made, the problem state changes.
  – item i is no longer available
  – the remaining knapsack capacity may be less

Problem State
• Suppose that decisions are made in the order x1, x2, x3, …, xn.
• The initial state of the problem is described by the pair (1, m).
  – Items 1 through n are available.
  – The available knapsack capacity is m.
• Following the first decision the state becomes one of the following:
  – (2, m) … when the decision is to set x1 = 0.
  – (2, m - w1) … when the decision is to set x1 = 1.

Problem State
• Suppose that decisions are made in the order xn, xn-1, xn-2, …, x1.
• The initial state of the problem is described by the pair (n, m).
  – Items 1 through n are available.
  – The available knapsack capacity is m.
• Following the first decision the state becomes one of the following:
  – (n-1, m) … when the decision is to set xn = 0.
  – (n-1, m - wn) … when the decision is to set xn = 1.
Dynamic programming approach

• Let fn(m) be the value of an optimal solution; then

  fn(m) = max{ fn-1(m), fn-1(m - wn) + pn }

• General formula:

  fi(y) = max{ fi-1(y), fi-1(y - wi) + pi }

Recursion Tree (levels of the call tree for f(1, c)):

level 1: f(1,c)
level 2: f(2,c)  f(2,c-w1)
level 3: f(3,c)  f(3,c-w2)  f(3,c-w1)  f(3,c-w1-w2)
level 4: f(4,c)  f(4,c-w3)  f(4,c-w2)  …  f(4,c-w1-w3)  …
level 5: f(5,c)  …  f(5,c-w1-w3-w4)  …
• We use sets si whose elements are pairs (P, W), where P = fi(y) and W = y.
• Note that s0 = { (0, 0) }.
• We can compute si+1 from si by first computing

  s1i+1 = { (P, W) | (P - pi+1, W - wi+1) ∈ si }

  i.e., s1i+1 = si + (pi+1, wi+1): add (pi+1, wi+1) to each pair of si.

Merging: si+1 can be computed by merging the pairs in si and s1i+1.

Purging (dominance rule): if si+1 contains two pairs (pj, wj) and (pk, wk) with the property that pj ≤ pk and wj ≥ wk, then (pj, wj) is purged.

• When generating the si's, we can also purge all pairs (p, w) with w > m, as these pairs determine the value of fn(x) only for x > m.
• The optimal solution fn(m) is given by the highest-profit pair.

Set of 0/1 values for the xi's

• The set of 0/1 values for the xi's can be determined by a search through the si's.
  – Let (p, w) be the highest-profit tuple in sn.
  Step 1: if (p, w) ∈ sn and (p, w) ∉ sn-1, then xn = 1; otherwise xn = 0.
  This leaves us to determine how either (p, w) or (p - pn, w - wn) was obtained in sn-1.
  This can be done recursively (repeat Step 1).
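The merge-and-purge step can be sketched as follows. The item data here (p = (1, 2, 5), w = (2, 3, 4), m = 6) is an assumed illustration, not taken from these slides:

```python
def next_set(s, p, w, m):
    """Compute s_{i+1} from s_i: shift by (p, w), merge, purge."""
    s1 = [(P + p, W + w) for (P, W) in s if W + w <= m]   # capacity-purged s1
    # Merge by weight; on equal weight keep the higher profit first.
    merged = sorted(s + s1, key=lambda t: (t[1], -t[0]))
    out = []
    for P, W in merged:
        # Dominance purge: keep a pair only if its profit strictly exceeds
        # every pair of no-larger weight already kept.
        if not out or P > out[-1][0]:
            out.append((P, W))
    return out

s = [(0, 0)]                               # s0
for p, w in [(1, 2), (2, 3), (5, 4)]:      # assumed items (p_i, w_i), m = 6
    s = next_set(s, p, w, 6)
print(s)         # -> [(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)]
print(s[-1][0])  # f_3(6) = 6, the highest-profit pair
```

Sorting by weight and keeping only strictly increasing profits implements exactly the purging rule above.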
Reliability Design

• The problem is to design a system that is composed of several devices connected in series.

[Figure omitted: devices D1, D2, D3, …, Dn connected in series.]

• Let ri be the reliability of device Di (that is, ri is the probability that device i will function properly).
• Then the reliability of the entire system is Π ri.
• Even if the individual devices are very reliable, the reliability of the entire system may not be very good.
• Ex: if n = 10 and ri = 0.99, 1 ≤ i ≤ 10, then Π ri = 0.904.
• Hence, it is desirable to duplicate devices.
• Multiple copies of the same device type are connected in parallel, as shown below.

[Figure omitted: multiple devices connected in parallel in each stage; the stages are connected in series.]

• If stage i contains mi copies of device Di, then the probability that all mi malfunction is (1 - ri)^mi. Hence the reliability of stage i becomes 1 - (1 - ri)^mi.
• Ex: if ri = 0.99 and mi = 2, the stage reliability becomes 0.9999.
• Let Φi(mi) be the reliability of stage i, 1 ≤ i ≤ n.
• Then the reliability of the system of n stages is Π (1 ≤ i ≤ n) Φi(mi).
• Our problem is to use device duplication to maximize reliability. This maximization is to be carried out under a cost constraint.
• Let ci be the cost of each device i and c be the maximum allowable cost of the system being designed.

• We wish to solve the following maximization problem:

  maximize   Π (1 ≤ i ≤ n) Φi(mi)
  subject to Σ (1 ≤ i ≤ n) ci mi ≤ c
             mi ≥ 1 and integer, 1 ≤ i ≤ n
Dynamic programming approach

• Since each ci > 0, each mi must be in the range 1 ≤ mi ≤ ui, where

  ui = ⌊ (c + ci - Σ (1 ≤ j ≤ n) cj) / ci ⌋

• The upper bound ui follows from the observation that every mj ≥ 1.
• The optimal solution m1, m2, …, mn is the result of a sequence of decisions, one decision for each mi.
• Let fn(c) be the reliability of an optimal solution; then

  fn(c) = max (1 ≤ mn ≤ un) { Φn(mn) · fn-1(c - cn mn) }

• General formula:

  fi(x) = max (1 ≤ mi ≤ ui) { Φi(mi) · fi-1(x - ci mi) }

• Clearly, f0(x) = 1 for all x, 0 ≤ x ≤ c.


• Let si consist of tuples of the form (f, x), where f = fi(x).

• Purging rule: if si+1 contains two pairs (fj, xj) and (fk, xk) with the property that fj ≤ fk and xj ≥ xk, then we can purge (fj, xj).

• When generating the si's, we can also purge all pairs (f, x) with c - x < Σ (i+1 ≤ k ≤ n) ck, as such pairs will not leave sufficient funds to complete the system.

• The optimal solution fn(c) is given by the highest-reliability pair.

• Start with s0 = { (1, 0) }.
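A sketch of the whole procedure. The device data (costs (30, 15, 20), reliabilities (0.9, 0.8, 0.5), c = 105) is an assumed example, not taken from these slides:

```python
def design(costs, rels, c):
    """Maximize the product of stage reliabilities subject to total cost <= c."""
    n = len(costs)
    total = sum(costs)
    # Upper bound u_i = floor((c + c_i - sum c_j) / c_i), since every m_j >= 1.
    u = [(c + costs[i] - total) // costs[i] for i in range(n)]
    f = {0: 1.0}   # f_0: reliability 1 at cost 0; keyed by cost spent so far
    for i in range(n):
        nf = {}
        for x, rel in f.items():
            for m in range(1, u[i] + 1):    # m_i copies of device i
                cost = x + costs[i] * m
                if cost > c:
                    break
                phi = 1 - (1 - rels[i]) ** m      # stage reliability
                nf[cost] = max(nf.get(cost, 0.0), rel * phi)
        f = nf    # every surviving entry has m >= 1 copies in each stage so far
    return max(f.values())

# Assumed data: costs (30, 15, 20), reliabilities (0.9, 0.8, 0.5), c = 105.
print(design([30, 15, 20], [0.9, 0.8, 0.5], 105))  # best reliability, 0.648 here
```

For this data the optimum uses m = (1, 2, 2), giving 0.9 × 0.96 × 0.75 = 0.648. A dominance purge over (f, x) pairs, as in the slides, would shrink the dictionaries further but does not change the answer.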
