DYNAMIC PROGRAMMING

 Dynamic programming is an algorithm design method that can be used when the solution to a problem can be viewed as the result of a sequence of decisions.
 The solution to the knapsack problem can be viewed as the result of a sequence of decisions. We have to decide the values of x_i, 1 ≤ i ≤ n. First we make a decision on x_1, then on x_2, then on x_3, and so on. An optimal sequence of decisions maximizes the objective function Σ p_i·x_i subject to the constraints Σ w_i·x_i ≤ m and 0 ≤ x_i ≤ 1.
 One way to find a shortest path from vertex i to vertex j in a directed graph G is to decide which vertex should be the second vertex, which the third, which the fourth, and so on, until vertex j is reached. An optimal sequence of decisions is one that results in a path of least length.
 For some problems, an optimal sequence of decisions can be found by making the decisions one at a time and never making an erroneous decision. This is true for all problems solvable by the greedy method.
 For many problems, it is not possible to make stepwise
decisions (based only on local information) in such a
manner that the sequence of decisions made is
optimal.
 Suppose we wish to find a shortest path from vertex i to vertex j in a directed graph G. Let A_i be the set of vertices adjacent from vertex i. Which of the vertices in A_i should be the second vertex on the path? There is no way to make a decision at this time and guarantee that future decisions leading to an optimal sequence can be made. If, on the other hand, we wish to find a shortest path from vertex i to all other vertices in G, then at each step a correct decision can be made.
 One way to solve problems for which it is not possible to make a sequence of stepwise decisions leading to an optimal decision sequence is to try all possible decision sequences.
 We could enumerate all decision sequences and then
pick out the best. But the time and space
requirements may be prohibitive.
 Dynamic programming reduces the amount of
enumeration by avoiding the enumeration of some
decision sequences that cannot possibly be optimal.
 In dynamic programming an optimal sequence of
decisions is obtained by making explicit appeal to the
principle of optimality.
 Principle of Optimality: The principle of optimality
states that an optimal sequence of decisions has the
property that whatever the initial state and decision
are, the remaining decisions must constitute an
optimal decision sequence with regard to the state
resulting from the first decision.
Divide and Conquer vs. Dynamic Programming
Divide and Conquer Method:
 It involves three steps:
o Divide the problem into a number of sub problems.
o Conquer the sub problems.
o Combine the solutions to the sub problems into the solution for the original problem.
 The sub problems are independent, so we can solve them in any order.
 Divide and conquer splits its input at pre-specified deterministic points (e.g., always in the middle).

Dynamic Programming:
 It involves a sequence of four steps:
o Characterize the structure of optimal solutions by decomposing the problem into optimal sub problems.
o Recursively define the value of an optimal solution by expressing it in terms of optimal solutions for smaller problems (usually using min/max).
o Bottom-up computation: compute the value of an optimal solution in a bottom-up fashion.
o Construct an optimal solution from computed information. (This last step can be omitted if only the value of an optimal solution is required.)
 Dynamic programming is a technique for solving problems with overlapping sub problems. Each sub problem is solved only once, and the result of each sub problem is stored in a table. The technique of storing the sub problem solutions is known as memoization.
 Dynamic programming splits its input at every possible split point rather than at pre-specified deterministic points. After trying all split points, it determines which split point is optimal.

Dynamic Programming vs. Greedy Method


 A greedy method computes its solution by making its
choices in a serial forward fashion, never looking back or
revising previous choices.
o At each step, the algorithm makes a choice, based
on some heuristic, that achieves the most obvious
and beneficial profit. The algorithm hopes to achieve
an optimal solution, even though it’s not always
achievable.
 In a greedy method, we make whatever choice seems best at the moment in the hope that it will lead to a globally optimal solution.
 Dynamic programming computes its solution bottom up by synthesizing it from smaller optimal sub solutions.
 There is no a priori litmus test by which one can tell if
the Greedy method will lead to an optimal solution. By
contrast, there is a litmus test for Dynamic
Programming, called “The Principle of Optimality”.
OPTIMAL BINARY SEARCH TREES

 Given a fixed set of identifiers, we wish to create a binary search tree.
 We may expect different binary search trees for the same
identifier set to have different performance characteristics.

 In the worst case, the tree of Figure 5.12 (a) requires four
comparisons to find an identifier, whereas the tree of Figure
5.12 (b) requires only three.
 On average, the two trees need 12/5 and 11/5 comparisons, respectively.
 In the case of tree (a), it takes 1, 2, 2, 3, and 4 comparisons, respectively, to find the identifiers for, do, while, int and if. Thus the average number of comparisons is (1+2+2+3+4)/5 = 12/5.
 In the case of tree (b), it takes 1, 2, 3, 2, and 3 comparisons, respectively, to find the identifiers for, do, while, int and if. Thus the average number of comparisons is (1+2+3+2+3)/5 = 11/5.
 This calculation assumes that each identifier is searched for
with equal probability and that no unsuccessful searches
(i.e., searches for identifiers not in the tree) are made.
 Generally, different identifiers are searched for with
different frequencies (or probabilities).
 In addition, unsuccessful searches also are made.
 Let us assume that the given set of identifiers is {a_1, a_2, …, a_n} with a_1 < a_2 < … < a_n. Let p(i) be the probability with which we search for a_i. Let q(i) be the probability that the identifier x being searched for is such that a_i < x < a_{i+1}, 0 ≤ i ≤ n (assume a_0 = −∞ and a_{n+1} = +∞).
 Σ_{0≤i≤n} q(i) is the probability of an unsuccessful search, and Σ_{1≤i≤n} p(i) is the probability of a successful search.
 Clearly, Σ_{1≤i≤n} p(i) + Σ_{0≤i≤n} q(i) = 1.

 In obtaining a cost function for binary search trees, it is useful to add a fictitious node in place of every empty sub tree in the search tree. Such nodes, called external nodes, are drawn as squares in Figure 5.13. If a binary search tree represents n identifiers, then there will be exactly n internal nodes and n+1 (fictitious) external nodes. Every internal node represents a point where a successful search may terminate, and every external node represents a point where an unsuccessful search may terminate.
Figure 5.13 Binary search trees of Figure 5.12 with external
nodes added
 If a successful search terminates at an internal node at level l, then l comparisons are needed. Hence, the expected cost contribution from the internal node for a_i is p(i)·level(a_i).
 An unsuccessful search terminates at an external node. The identifiers not in the binary search tree can be partitioned into n+1 equivalence classes E_i, 0 ≤ i ≤ n. The class E_0 contains all identifiers x such that x < a_1. The class E_i contains all identifiers x such that a_i < x < a_{i+1}, 1 ≤ i < n. The class E_n contains all identifiers x such that x > a_n.
 For all identifiers in the same class E_i, the search terminates at the same external node. For identifiers in different classes, the search terminates at different external nodes.
 If the failure node for E_i is at level l, then l comparisons are needed. Hence, the expected cost contribution from the external node is q(i)·level(E_i).
 Therefore, the expected cost of a binary search tree is:

   Σ_{1≤i≤n} p(i)·level(a_i) + Σ_{0≤i≤n} q(i)·level(E_i)
   = Σ_{1≤i≤n} p(i)·(depth(a_i) + 1) + Σ_{0≤i≤n} q(i)·(depth(E_i) + 1)
   = 1 + Σ_{1≤i≤n} p(i)·depth(a_i) + Σ_{0≤i≤n} q(i)·depth(E_i)
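As a quick check of this formula, the sketch below recomputes the average costs quoted above for the trees of Figure 5.12. The tree shapes are assumed only through the per-identifier comparison counts (levels) listed earlier; with p(i) = 1/5 and q(i) = 0, the expected cost reduces to the average number of comparisons.

```python
# Minimal check of the expected-cost formula, assuming the node levels
# of Figure 5.12: cost = sum p(i)*level(a_i) + sum q(i)*level(E_i).

def expected_cost(p, internal_levels, q=(), external_levels=()):
    return (sum(pi * l for pi, l in zip(p, internal_levels)) +
            sum(qi * l for qi, l in zip(q, external_levels)))

p = [1 / 5] * 5                               # equal success probabilities, q = 0
print(expected_cost(p, [1, 2, 2, 3, 4]))      # tree (a): 2.4 = 12/5
print(expected_cost(p, [1, 2, 3, 2, 3]))      # tree (b): 2.2 = 11/5
```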
 To apply dynamic programming to the problem of obtaining an optimal binary search tree, we need to view the construction of such a tree as the result of a sequence of decisions and then observe that the principle of optimality holds when applied to the problem state resulting from a decision.
 A possible approach would be to make a decision as to which of the a_i's should be assigned to the root node of the tree.
 If we choose a_k, then the internal nodes a_1, a_2, …, a_{k−1} as well as the external nodes for the classes E_0, E_1, …, E_{k−1} will lie in the left sub tree, and the remaining nodes a_{k+1}, a_{k+2}, …, a_n and E_k, E_{k+1}, …, E_n will lie in the right sub tree.
 We define

   cost(l) = Σ_{1≤i<k} p(i)·(depth(a_i) + 1) + Σ_{0≤i<k} q(i)·(depth(E_i) + 1)
   cost(r) = Σ_{k<i≤n} p(i)·(depth(a_i) + 1) + Σ_{k≤i≤n} q(i)·(depth(E_i) + 1)

 Let

   w(i, j) = Σ_{i<l≤j} p(l) + Σ_{i≤l≤j} q(l)

 If c(i, j) represents the cost of an optimal binary search tree containing a_{i+1}, a_{i+2}, …, a_j and E_i, E_{i+1}, …, E_j, then cost(l) = c(0, k−1) and cost(r) = c(k, n).

 Thus, if a_k is the root of the tree, the cost of the resulting tree is:

   c(0, n) = p(k) + c(0, k−1) + c(k, n) + w(0, k−1) + w(k, n)

 In order to construct an optimal binary search tree, we must choose k such that c(0, n) is minimum. That is,

   c(0, n) = min_{1≤k≤n} { p(k) + c(0, k−1) + c(k, n) + w(0, k−1) + w(k, n) }

 More generally,

   c(i, j) = min_{i<k≤j} { c(i, k−1) + c(k, j) + p(k) + w(i, k−1) + w(k, j) }
           = min_{i<k≤j} { c(i, k−1) + c(k, j) + w(i, j) }

 c(0, n) can be solved by first computing all c(i, j) such that j−i = 1 (note that c(i, i) = 0 and w(i, i) = q(i) for 0 ≤ i ≤ n). Next, we compute all c(i, j) such that j−i = 2, then all c(i, j) such that j−i = 3, and so on.

 During this computation, we record the root r(i, j) of each sub tree, that is, the k that minimizes c(i, j), in order to construct the optimal binary search tree. A sketch of the whole computation follows.
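Before tracing Example 1 by hand, here is a minimal Python sketch of this tabulation. It assumes p(1..n) and q(0..n) are supplied as lists (with p[0] a placeholder so the indexing matches the text); ties in the minimum are broken in favor of the smallest k.

```python
# A sketch of the optimal-BST tabulation: compute w(i,j), c(i,j), r(i,j)
# for j - i = 0, 1, ..., n, exactly in the order described above.

def obst(p, q, n):
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                       # w(i,i) = q(i), c(i,i) = 0
    for m in range(1, n + 1):                # subtrees with j - i = m
        for i in range(n - m + 1):
            j = i + m
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            # try every a_k, i < k <= j, as the root of this subtree
            k = min(range(i + 1, j + 1),
                    key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = w[i][j] + c[i][k - 1] + c[k][j]
            r[i][j] = k
    return w, c, r

# Example 1 below: identifiers (do, if, int, while)
p = [0, 3, 3, 1, 1]                          # p(1..4); p[0] unused
q = [2, 3, 1, 1, 1]                          # q(0..4)
w, c, r = obst(p, q, 4)
print(c[0][4], r[0][4])                      # 32 2, i.e., cost 32, root a_2
```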
EXAMPLE 1: Let n = 4 and (a_1, a_2, a_3, a_4) = (do, if, int, while). Let p(1:4) = (3, 3, 1, 1) and q(0:4) = (2, 3, 1, 1, 1).

Initially,

w(0,0)=2   w(1,1)=3   w(2,2)=1   w(3,3)=1   w(4,4)=1
c(0,0)=0   c(1,1)=0   c(2,2)=0   c(3,3)=0   c(4,4)=0
r(0,0)=0   r(1,1)=0   r(2,2)=0   r(3,3)=0   r(4,4)=0

Next we tabulate w(i, j), c(i, j), and r(i, j) for j−i = 1, that is:

w(0,1) = p(1) + q(1) + w(0,0) = 8
c(0,1) = w(0,1) + min{c(0,0) + c(1,1)} = 8
r(0,1) = 1

w(1,2) = p(2) + q(2) + w(1,1) = 7
c(1,2) = w(1,2) + min{c(1,1) + c(2,2)} = 7
r(1,2) = 2

w(2,3) = p(3) + q(3) + w(2,2) = 3
c(2,3) = w(2,3) + min{c(2,2) + c(3,3)} = 3
r(2,3) = 3

w(3,4) = p(4) + q(4) + w(3,3) = 3
c(3,4) = w(3,4) + min{c(3,3) + c(4,4)} = 3
r(3,4) = 4

w(0,1)=8   w(1,2)=7   w(2,3)=3   w(3,4)=3
c(0,1)=8   c(1,2)=7   c(2,3)=3   c(3,4)=3
r(0,1)=1   r(1,2)=2   r(2,3)=3   r(3,4)=4
Next we tabulate w(i, j), c(i, j), and r(i, j) for j−i = 2, that is:

w(0,2) = p(2) + q(2) + w(0,1) = 12
c(0,2) = w(0,2) + min{c(0,0) + c(1,2), c(0,1) + c(2,2)} = 12 + min{7, 8} = 19
r(0,2) = 1

w(1,3) = p(3) + q(3) + w(1,2) = 9
c(1,3) = w(1,3) + min{c(1,1) + c(2,3), c(1,2) + c(3,3)} = 9 + min{3, 7} = 12
r(1,3) = 2

w(2,4) = p(4) + q(4) + w(2,3) = 5
c(2,4) = w(2,4) + min{c(2,2) + c(3,4), c(2,3) + c(4,4)} = 5 + min{3, 3} = 8
r(2,4) = 3

w(0,2)=12   w(1,3)=9    w(2,4)=5
c(0,2)=19   c(1,3)=12   c(2,4)=8
r(0,2)=1    r(1,3)=2    r(2,4)=3

Next we tabulate w(i, j), c(i, j), and r(i, j) for j−i = 3, that is:

w(0,3) = p(3) + q(3) + w(0,2) = 14
c(0,3) = w(0,3) + min{c(0,0) + c(1,3), c(0,1) + c(2,3), c(0,2) + c(3,3)} = 14 + min{12, 11, 19} = 25
r(0,3) = 2

w(1,4) = p(4) + q(4) + w(1,3) = 11
c(1,4) = w(1,4) + min{c(1,1) + c(2,4), c(1,2) + c(3,4), c(1,3) + c(4,4)} = 11 + min{8, 10, 12} = 19
r(1,4) = 2

w(0,3)=14   w(1,4)=11
c(0,3)=25   c(1,4)=19
r(0,3)=2    r(1,4)=2

Next we tabulate w(i, j), c(i, j), and r(i, j) for j−i = 4, that is:

w(0,4) = p(4) + q(4) + w(0,3) = 16
c(0,4) = w(0,4) + min{c(0,0) + c(1,4), c(0,1) + c(2,4), c(0,2) + c(3,4), c(0,3) + c(4,4)} = 16 + min{19, 16, 22, 25} = 32
r(0,4) = 2

w(0,4)=16
c(0,4)=32
r(0,4)=2

The full table of values is:

w(0,0)=2   w(1,1)=3   w(2,2)=1   w(3,3)=1   w(4,4)=1
c(0,0)=0   c(1,1)=0   c(2,2)=0   c(3,3)=0   c(4,4)=0
r(0,0)=0   r(1,1)=0   r(2,2)=0   r(3,3)=0   r(4,4)=0

w(0,1)=8   w(1,2)=7   w(2,3)=3   w(3,4)=3
c(0,1)=8   c(1,2)=7   c(2,3)=3   c(3,4)=3
r(0,1)=1   r(1,2)=2   r(2,3)=3   r(3,4)=4

w(0,2)=12   w(1,3)=9    w(2,4)=5
c(0,2)=19   c(1,3)=12   c(2,4)=8
r(0,2)=1    r(1,3)=2    r(2,4)=3

w(0,3)=14   w(1,4)=11
c(0,3)=25   c(1,4)=19
r(0,3)=2    r(1,4)=2

w(0,4)=16
c(0,4)=32
r(0,4)=2

Figure 5.16 Computation of w(0,4), c(0,4), and r(0,4)


Algorithm 5.5 Finding a minimum-cost binary search tree
 Minimum-cost binary search tree construction requires us to compute c(i, j) for j−i = 1, 2, …, n, in that order.
 When j−i = m, the number of c(i, j)'s to be computed is n−m+1.
 Each of these c(i, j)'s requires us to find the minimum of m quantities. Hence each c(i, j) can be computed in O(m) time.
 The total time for all c(i, j)'s with j−i = m is therefore O(nm − m²).
 The total time to evaluate all the c(i, j)'s and r(i, j)'s is therefore

   Σ_{1≤m≤n} (nm − m²) = O(n³)
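The bound can be checked by substituting the standard closed forms for Σm and Σm²:

```latex
\sum_{m=1}^{n}\left(nm - m^{2}\right)
  = n\cdot\frac{n(n+1)}{2} - \frac{n(n+1)(2n+1)}{6}
  = \frac{n(n+1)(n-1)}{6}
  = O(n^{3})
```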
0/1 KNAPSACK
 A solution to the knapsack problem can be obtained by making a sequence of decisions on the variables x_1, x_2, …, x_n.
 A decision on variable x_i involves determining which of the values 0 or 1 is to be assigned to it.
 Let us assume that the decisions on the x_i are made in the order x_n, x_{n−1}, …, x_1.
 Following a decision on x_n, we may be in one of two possible states: the capacity remaining in the knapsack is m and no profit has accrued, or the capacity remaining in the knapsack is m − w_n and a profit of p_n has accrued.
 The remaining decisions x_{n−1}, …, x_1 must be optimal with respect to the problem state resulting from the decision on x_n. Otherwise, x_n, x_{n−1}, …, x_1 will not be optimal. Hence, the principle of optimality holds.
 Since the principle of optimality holds, we obtain

   f_n(m) = max{ f_{n−1}(m), f_{n−1}(m − w_n) + p_n }     (1)

 For arbitrary f_i(y), i > 0, Equation (1) generalizes to

   f_i(y) = max{ f_{i−1}(y), f_{i−1}(y − w_i) + p_i }     (2)

 where f_0(y) = 0 for all y and f_i(y) = −∞ for y < 0.


 Whenw 'i sare integer, we need to compute f i ( y ) for integer
y , 0 ≤ y ≤ m. Since each f ican be computed from f i−1in Θ ( m )time,
it takes Θ ( mn )to compute f n.
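This is a minimal sketch of the integer-weight tabulation of Equation (2), assuming profits and weights are given as 0-indexed Python lists. A single array is updated in place; iterating y downward ensures f[y − w_i] still holds the f_{i−1} value.

```python
# Tabular 0/1 knapsack: compute f_n(m) for integer weights via Equation (2).

def knapsack_table(p, w, m):
    f = [0] * (m + 1)                    # f_0(y) = 0 for all y >= 0
    for pi, wi in zip(p, w):             # build f_i from f_{i-1}
        for y in range(m, wi - 1, -1):   # descend so f[y - wi] is f_{i-1}(y - wi)
            f[y] = max(f[y], f[y - wi] + pi)
    return f[m]

# The instance of Example 2 below: w = (2, 3, 4), p = (1, 2, 5), m = 6
print(knapsack_table([1, 2, 5], [2, 3, 4], 6))   # 6, i.e., f_3(6) = 6
```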
 When the w_i's are real numbers, we would need to compute f_i(y) for every real y, 0 ≤ y ≤ m. Therefore, f_i cannot be explicitly computed for all y in this range.
 Even when the w_i's are integer, the explicit Θ(mn) computation of f_n may not be the most efficient computation.
 So we explore an alternative method that works for both cases (integer and real).
Alternative Method for 0/1 Knapsack using Dynamic
Programming
 Notice that there are a finite number of y's, 0 = y_1 < y_2 < … < y_k, such that f_i(y_1) < f_i(y_2) < … < f_i(y_k).
 Furthermore, f_i(y) = −∞ for y < y_1, and f_i(y) = f_i(y_k) for y ≥ y_k.
 We need to compute only f_i(y_j) for 1 ≤ j ≤ k.
 We use the ordered set S^i = {(f_i(y_j), y_j) | 1 ≤ j ≤ k} to represent f_i(y); each member of S^i is a pair (P, W), where P = f_i(y_j) and W = y_j.
 Notice that S^0 = {(0, 0)}.

 We can compute S^{i+1} from S^i by first computing

   S^i_1 = { (P, W) | (P − p_{i+1}, W − w_{i+1}) ∈ S^i }     (3)

 Now, S^{i+1} can be computed by merging the pairs in S^i and S^i_1 together.
 Note that if S^{i+1} contains two pairs (P_j, W_j) and (P_k, W_k) such that P_j ≤ P_k and W_j ≥ W_k, then the pair (P_j, W_j) can be discarded.
 Discarding or purging rules such as this one are also known as dominance rules. Dominated tuples get purged; in the above, (P_k, W_k) dominates (P_j, W_j). A sketch of this merge-and-purge step follows.
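Below is a minimal Python sketch of one merge-and-purge step, assuming each S^i is kept as a list of (P, W) pairs in increasing W order. Sorting ties by larger P first makes the purge loop keep the dominating pair.

```python
# One step of the tuple method: build S^{i+1} from S^i for item (p_i, w_i),
# merging S^i with its shifted copy S^i_1 and purging dominated pairs.

def next_set(s, p_i, w_i):
    shifted = [(P + p_i, W + w_i) for (P, W) in s]       # S^i_1
    merged = sorted(s + shifted, key=lambda t: (t[1], -t[0]))
    result = []
    for P, W in merged:
        if not result or P > result[-1][0]:              # purge dominated pairs
            result.append((P, W))
    return result

# Example 2 below: n = 3, w = (2, 3, 4), p = (1, 2, 5)
s = [(0, 0)]                                             # S^0
for p_i, w_i in zip((1, 2, 5), (2, 3, 4)):
    s = next_set(s, p_i, w_i)
print(s)   # [(0,0), (1,2), (2,3), (5,4), (6,6), (7,7), (8,9)] = S^3
```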
Example 2: Consider the knapsack instance n = 3, (w_1, w_2, w_3) = (2, 3, 4), (p_1, p_2, p_3) = (1, 2, 5), and m = 6.

S^0 = {(0, 0)};  S^0_1 = {(1, 2)}
S^1 = {(0, 0), (1, 2)};  S^1_1 = {(2, 3), (3, 5)}
S^2 = {(0, 0), (1, 2), (2, 3), (3, 5)};  S^2_1 = {(5, 4), (6, 6), (7, 7), (8, 9)}
S^3 = {(0, 0), (1, 2), (2, 3), (5, 4), (6, 6), (7, 7), (8, 9)}

 Note that the pair (3, 5) has been eliminated from S^3, as (5, 4) dominates (3, 5).
 With m = 6, the value of f_3(6) is given by the tuple (6, 6) in S^3. The tuple (6, 6) ∉ S^2, so we must set x_3 = 1. The pair (6, 6) came from the pair (6 − p_3, 6 − w_3) = (1, 2); note that (1, 2) ∈ S^2, (1, 2) ∈ S^1, and (1, 2) ∉ S^0; hence x_2 = 0 and x_1 = 1. Finally, (x_1, x_2, x_3) = (1, 0, 1).

Example 3: Generate the sets S^i, 0 ≤ i ≤ 4, for the knapsack instance (w_1, w_2, w_3, w_4) = (10, 15, 6, 9) and (p_1, p_2, p_3, p_4) = (2, 5, 8, 1).

S^0 = {(0, 0)};  S^0_1 = {(2, 10)}
S^1 = {(0, 0), (2, 10)};  S^1_1 = {(5, 15), (7, 25)}
S^2 = {(0, 0), (2, 10), (5, 15), (7, 25)};  S^2_1 = {(8, 6), (10, 16), (13, 21), (15, 31)}
S^3 = {(0, 0), (8, 6), (10, 16), (13, 21), (15, 31)};  S^3_1 = {(1, 9), (9, 15), (11, 25), (14, 30), (16, 40)}
S^4 = {(0, 0), (8, 6), (9, 15), (10, 16), (13, 21), (14, 30), (15, 31), (16, 40)}
RELIABILITY DESIGN
 In this section, we look at an example of how to use dynamic
programming to solve a problem with a multiplicative
optimization function.
 The problem is to design a system that is composed of several
devices connected in series (Figure 5.19).

 Let r_i be the reliability of device D_i (that is, r_i is the probability that device D_i will function properly). Then the reliability of the entire system is ∏ r_i.
 Even if the individual devices are very reliable (the r_i's are close to one), the reliability of the system may not be very good. Hence, it is desirable to duplicate devices. Multiple copies of the same device type are connected in parallel (Figure 5.20).

 If stage i contains m_i copies of device D_i, then the probability that all m_i have a malfunction is (1 − r_i)^{m_i}.
 Hence the reliability of stage i becomes 1 − (1 − r_i)^{m_i}.
 Let us assume that the reliability of stage i is given by a function φ_i(m_i), 1 ≤ i ≤ n. The reliability of the system of n stages is ∏_{1≤i≤n} φ_i(m_i).

 Our problem is to use device duplication to maximize reliability. This maximization is to be carried out under a cost constraint.
 Let c_i be the cost of each unit of device i, and let c be the maximum allowable cost of the system being designed. We wish to solve the following maximization problem:

   maximize  ∏_{1≤i≤n} φ_i(m_i)
   subject to  Σ_{1≤i≤n} c_i·m_i ≤ c,  m_i ≥ 1, 1 ≤ i ≤ n

 For each c_i > 0, m_i must be in the range 1 ≤ m_i ≤ u_i, where

   u_i = ⌊ (c + c_i − Σ_{1≤j≤n} c_j) / c_i ⌋

 An optimal solution is the result of a sequence of decisions, one decision for each m_i.

 Let f_i(x) represent the maximum value of ∏_{1≤j≤i} φ_j(m_j) subject to the constraints Σ_{1≤j≤i} c_j·m_j ≤ x and 1 ≤ m_j ≤ u_j, 1 ≤ j ≤ i.

 The value of an optimal solution is f_n(c). The last decision made requires one to choose m_n from {1, 2, 3, …, u_n}. Then the remaining decisions must be such as to use the remaining funds c − c_n·m_n in an optimal way. The principle of optimality holds and

   f_n(c) = max_{1≤m_n≤u_n} { φ_n(m_n)·f_{n−1}(c − c_n·m_n) }

 For any f_i(x), i ≥ 1, this equation generalizes to

   f_i(x) = max_{1≤m_i≤u_i} { φ_i(m_i)·f_{i−1}(x − c_i·m_i) }

 Clearly, f_0(x) = 1 for all x, 0 ≤ x ≤ c.
 f_i(x) can be solved using an approach similar to that used for the knapsack problem. Let S^i consist of tuples of the form (f, x), where f = f_i(x). There is at most one tuple for each different x that results from a sequence of decisions on m_1, m_2, …, m_i. The dominance rule ((f_1, x_1) dominates (f_2, x_2) iff f_1 ≥ f_2 and x_1 ≤ x_2) holds for this problem also. A sketch of this computation follows.
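Here is a minimal Python sketch of the tuple method for this problem. It assumes the dominance rule above plus one pruning step used in the worked example below: a tuple is dropped if its cost leaves less than the price of one copy of each remaining device.

```python
# Reliability design by the tuple method: S^i holds (f, x) pairs, where
# f is the system reliability so far and x is the money spent so far.

def phi(r, m):
    return 1 - (1 - r) ** m                  # reliability of a stage with m copies

def reliability_design(r, c, budget):
    total = sum(c)
    s = [(1.0, 0)]                           # S^0 = {(1, 0)}
    for i in range(len(r)):
        u = (budget + c[i] - total) // c[i]  # upper bound u_i from the text
        later = sum(c[i + 1:])               # one copy of each remaining device
        cand = [(f * phi(r[i], m), x + m * c[i])
                for (f, x) in s
                for m in range(1, u + 1)
                if x + m * c[i] + later <= budget]
        s = []
        for f, x in sorted(cand, key=lambda t: (t[1], -t[0])):
            if not s or f > s[-1][0]:        # purge dominated tuples
                s.append((f, x))
    return max(s)                            # best (reliability, cost)

# The example below: r = (0.9, 0.8, 0.5), c = (30, 15, 20), budget 105
print(reliability_design([0.9, 0.8, 0.5], [30, 15, 20], 105))  # (0.648, 100)
```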
Example: We are to design a three-stage system with device types D_1, D_2, and D_3. The costs are $30, $15, and $20, respectively. The cost of the system is to be no more than $105. The reliability of each device type is 0.9, 0.8, and 0.5, respectively.

c_1 = 30, c_2 = 15, c_3 = 20, c = 105
r_1 = 0.9, r_2 = 0.8, r_3 = 0.5

u_1 = ⌊(105 + 30 − 65)/30⌋ = 2
u_2 = ⌊(105 + 15 − 65)/15⌋ = 3
u_3 = ⌊(105 + 20 − 65)/20⌋ = 3

φ_1(1) = 0.9,  φ_1(2) = 1 − (1 − 0.9)² = 0.99
φ_2(1) = 0.8,  φ_2(2) = 1 − (1 − 0.8)² = 0.96,  φ_2(3) = 1 − (1 − 0.8)³ = 0.992
φ_3(1) = 0.5,  φ_3(2) = 1 − (1 − 0.5)² = 0.75,  φ_3(3) = 1 − (1 − 0.5)³ = 0.875

We can obtain S^i from S^{i−1} by trying all possible values for m_i and combining the resulting tuples together. Let S^i_j denote the tuples obtained with m_i = j.

S^0 = {(1, 0)}

S^1_1 = {(0.9, 30)};  S^1_2 = {(0.99, 60)}
S^1 = {(0.9, 30), (0.99, 60)}

S^2_1 = {(0.72, 45), (0.792, 75)};  S^2_2 = {(0.864, 60), (0.95, 90)};  S^2_3 = {(0.893, 75), (0.982, 105)}
S^2 = {(0.72, 45), (0.864, 60), (0.893, 75)}

S^3_1 = {(0.36, 65), (0.432, 80), (0.447, 95)}
S^3_2 = {(0.54, 85), (0.648, 100), (0.670, 115)}
S^3_3 = {(0.63, 105), (0.756, 120), (0.781, 135)}
S^3 = {(0.36, 65), (0.432, 80), (0.54, 85), (0.648, 100)}
Tracing back from the best tuple (0.648, 100), we have m_1 = 1, m_2 = 2, m_3 = 2.

THE TRAVELING SALESPERSON PROBLEM


 0/1 knapsack is a subset selection problem.
 The traveling salesperson problem is a permutation problem. Usually, permutation problems are harder to solve than subset problems, as there are n! different permutations of n objects whereas there are only 2^n different subsets of n objects (n! > 2^n).
 Let G = (V, E) be a directed graph with edge costs c_{ij}. The variable c_{ij} is defined such that c_{ij} > 0 for all i and j, and c_{ij} = ∞ if (i, j) ∉ E.
 Let |V| = n and assume n > 1. A tour of G is a directed simple cycle that includes every vertex in V. The cost of a tour is the sum of the costs of the edges on the tour. The traveling salesperson problem is to find a tour of minimum cost.
 In the following discussion, a tour is a simple path that starts and ends at vertex 1. Every tour consists of an edge (1, k) for some k ∈ V − {1} and a path from vertex k to vertex 1. The path from vertex k to vertex 1 goes through each vertex in V − {1, k} exactly once. If the tour is optimal, then the path from k to 1 must be a shortest path from k to 1 going through all vertices in V − {1, k}. Hence the principle of optimality holds.
 Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1.
 The function g(1, V − {1}) is the length of an optimal salesperson tour. From the principle of optimality it follows that:

   g(1, V − {1}) = min_{2≤k≤n} { c_{1k} + g(k, V − {1, k}) }

 Generalizing, we obtain (for i ∉ S):

   g(i, S) = min_{j∈S} { c_{ij} + g(j, S − {j}) }

 Clearly, g(i, ∅) = c_{i1}, 1 ≤ i ≤ n. We first obtain g(i, S) for all S of size 1. Then we can obtain g(i, S) for S with |S| = 2, and so on. When |S| < n − 1, the values of i and S for which g(i, S) is needed are such that i ≠ 1, 1 ∉ S, and i ∉ S. A sketch of this computation follows.
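This is a minimal Python sketch of the g(i, S) recurrence (the Held-Karp algorithm). Vertices are 0-indexed, with vertex 0 playing the role of vertex 1 in the text; the sample cost matrix is an arbitrary 4-vertex instance, not one taken from the text.

```python
# TSP by dynamic programming: g(i, S) = min over j in S of c_ij + g(j, S - {j}).
from itertools import combinations

def tsp(cost):
    n = len(cost)
    g = {(i, frozenset()): cost[i][0] for i in range(1, n)}  # g(i, {}) = c_i1
    for size in range(1, n - 1):                             # |S| = 1, ..., n-2
        for S in map(frozenset, combinations(range(1, n), size)):
            for i in range(1, n):
                if i not in S:
                    g[(i, S)] = min(cost[i][j] + g[(j, S - {j})] for j in S)
    full = frozenset(range(1, n))
    return min(cost[0][k] + g[(k, full - {k})] for k in full)

cost = [[0, 10, 15, 20],        # a hypothetical 4-vertex cost matrix
        [5, 0, 9, 10],
        [6, 13, 0, 12],
        [8, 8, 9, 0]]
print(tsp(cost))                # 35, the length of an optimal tour
```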
ALL PAIRS SHORTEST PATHS
 Let G = (V, E) be a directed graph with n vertices. Let cost be a cost adjacency matrix for G such that cost(i, i) = 0, 1 ≤ i ≤ n. Then cost(i, j) is the length (or cost) of edge (i, j) if (i, j) ∈ E(G), and cost(i, j) = ∞ if i ≠ j and (i, j) ∉ E(G).
 The all-pairs shortest-path problem is to determine a matrix A such that A(i, j) is the length of a shortest path from i to j.

 The matrix A can be obtained by solving n single-source problems using the greedy method. Since each application of this procedure requires O(n²) time, the matrix A can be obtained in O(n³) time.
 We obtain an alternate O ( n3 )solution to this problem using the
principle of optimality.
 Let us examine a shortest i to j path in G, i ≠ j. This path originates at vertex i, goes through some intermediate vertices (possibly none), and terminates at vertex j.
o We assume that this path contains no cycles; if there is a cycle, it can be deleted without increasing the path length.
o If k is an intermediate vertex on this shortest path, then the sub paths from i to k and from k to j must be the shortest paths from i to k and from k to j, respectively.
o Otherwise, the i to j path is not of minimum length. So, the principle of optimality holds.
 If k is the intermediate vertex with highest index, then the i to k path is a shortest i to k path in G going through no vertex with index greater than k−1. Similarly, the k to j path is a shortest k to j path in G going through no vertex of index greater than k−1.
 We can regard the construction of a shortest i to j path as requiring a decision as to which is the highest-indexed intermediate vertex k. Once this decision has been made, we need to find two shortest paths: one from i to k and the other from k to j. Neither of these may go through a vertex with index greater than k−1.
 Using A^k(i, j) to represent the length of a shortest path from i to j going through no vertex of index greater than k, we obtain:

   A(i, j) = min{ min_{1≤k≤n} { A^{k−1}(i, k) + A^{k−1}(k, j) }, cost(i, j) }

 We can obtain a recurrence for A^k(i, j) as follows. A shortest path from i to j going through no vertex higher than k either goes through vertex k or it does not. If it does, A^k(i, j) = A^{k−1}(i, k) + A^{k−1}(k, j). If it does not, then no intermediate vertex has index greater than k−1; hence A^k(i, j) = A^{k−1}(i, j). Combining, we get:

   A^k(i, j) = min{ A^{k−1}(i, j), A^{k−1}(i, k) + A^{k−1}(k, j) },  k ≥ 1

 Clearly, A^0(i, j) = cost(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ n. The above recurrence can be solved for A^n by first computing A^1, then A^2, then A^3, and so on. Since there is no vertex in G with index greater than n, A(i, j) = A^n(i, j). A sketch of this computation follows.
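This is a minimal Python sketch of the recurrence (Floyd's algorithm), run on the graph of the example below. Updating the matrix in place is safe because A^k(i, k) = A^{k−1}(i, k) and A^k(k, j) = A^{k−1}(k, j).

```python
# All-pairs shortest paths:
# A^k(i,j) = min(A^{k-1}(i,j), A^{k-1}(i,k) + A^{k-1}(k,j)).

def all_pairs(cost):
    n = len(cost)
    a = [row[:] for row in cost]        # A^0 = cost
    for k in range(n):                  # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                a[i][j] = min(a[i][j], a[i][k] + a[k][j])
    return a

INF = float('inf')
cost = [[0, 5, INF, INF],               # the cost matrix A^0 of the example below
        [INF, 0, 1, INF],
        [8, INF, 0, 3],
        [2, INF, INF, 0]]
for row in all_pairs(cost):
    print(row)                          # A^4: [0,5,6,9], [6,0,1,4], [5,10,0,3], [2,7,8,0]
```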
EXAMPLE

A^0:      1  2  3  4        A^1:      1  2  3  4        A^2:      1  2  3  4
     1    0  5  ∞  ∞             1    0  5  ∞  ∞             1    0  5  6  ∞
     2    ∞  0  1  ∞             2    ∞  0  1  ∞             2    ∞  0  1  ∞
     3    8  ∞  0  3             3    8 13  0  3             3    8 13  0  3
     4    2  ∞  ∞  0             4    2  7  ∞  0             4    2  7  8  0

A^3:      1  2  3  4        A^4:      1  2  3  4
     1    0  5  6  9             1    0  5  6  9
     2    9  0  1  4             2    6  0  1  4
     3    8 13  0  3             3    5 10  0  3
     4    2  7  8  0             4    2  7  8  0
 Line 11 of the algorithm is iterated n³ times, and so the time for the all-pairs shortest-paths algorithm using dynamic programming is O(n³).
