
DYNAMIC PROGRAMMING

INTRODUCTION
The drawback of the greedy method is that it commits to one decision at a time and never reconsiders it. Dynamic programming overcomes this: instead of committing to a single decision at each step, it considers all possible sequences of decisions and picks an optimal one.

Dynamic programming is an algorithm design method that can be used when the solution to a
problem can be viewed as the result of a sequence of decisions. Dynamic programming is
applicable when the sub-problems are not independent, that is when sub-problems share sub-
sub-problems. A dynamic programming algorithm solves every sub-sub-problem just once
and then saves its answer in a table, thereby avoiding the work of re-computing the answer
every time the sub-problem is encountered.
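
To make the idea of storing sub-problem answers concrete, here is a minimal Python sketch (Fibonacci numbers are used only as a stand-in illustration; they are not one of the applications discussed below). The table-based version solves each sub-problem exactly once.

    def fib_table(n):
        # Bottom-up dynamic programming: each sub-problem F(i) is solved once
        # and its answer is saved in a table, so later stages simply reuse it.
        if n < 2:
            return n
        table = [0] * (n + 1)
        table[1] = 1
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    print(fib_table(10))   # 55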

Definition:
Dynamic programming is a programming technique in which an optimal solution is obtained from a sequence of decisions.

A dynamic programming problem can be divided into a number of stages, with an optimal decision required at each stage. The decision made at each stage must take into account its effect not only on the next stage but on all subsequent stages. Dynamic programming provides a systematic procedure whereby, starting with the last stage of the problem and working backwards, one makes an optimal decision for each stage of the problem. The information needed at each stage is derived from the stages already solved.

Dynamic programming design involves 4 major steps.

1) Characterize the structure of an optimal solution.
2) Recursively define the value of an optimal solution.
3) Compute the value of an optimal solution in a bottom-up fashion.
4) Construct an optimal solution from the computed information.

The Dynamic programming technique was developed by Bellman based upon his principle
known as principle of optimality. This principle states that “An optimal policy has the
property that, whatever the initial decisions are, the remaining decisions must constitute an
optimal policy with regard to the state resulting from the first decision”.
General Characteristics of Dynamic Programming:
The general characteristics of dynamic programming are:
1) The problem can be divided into stages, with a policy decision required at each stage.
2) Each stage has a number of states associated with it.
3) Given the current stage, an optimal policy for the remaining stages is independent of the policies adopted in previous stages.
4) The solution procedure begins by finding the optimal policy for each state of the last stage.
5) A recursive relation is available which identifies the optimal policy for each state with n stages remaining, given the optimal policy for each state with (n-1) stages remaining.

APPLICATIONS OF DYNAMIC PROGRAMMING


1) Matrix Chain Multiplication
2) Longest common subsequence
3) Traveling sales person problem
4) All pair shortest path problem
1. Matrix Chain Multiplication

Input: n matrices A1, A2, ..., An of dimensions P0×P1, P1×P2, ..., Pn-1×Pn respectively (so Ai is of dimension Pi-1 × Pi).
Goal: To compute the matrix product A1A2...An.
Problem: In what order should A1A2...An be multiplied so that it takes the minimum number of computations to derive the product?
Let A and B be two matrices of dimensions p×q and q×r, and let C = AB, which is of dimension p×r. Computing each entry Cij takes q scalar multiplications and q-1 scalar additions, so computing C takes p·q·r scalar multiplications in all.

Consider an example of the best way of multiplying 3 matrices.


Let A1 be of dimensions 5 × 4
    A2 of dimensions 4 × 6
    A3 of dimensions 6 × 2
(A1 A2) A3 takes (5 × 4 × 6) + (5 × 6 × 2) = 120 + 60 = 180 multiplications
A1 (A2 A3) takes (5 × 4 × 2) + (4 × 6 × 2) = 40 + 48 = 88 multiplications
Thus A1 (A2 A3) is much cheaper to compute than (A1 A2) A3, although both lead to the same final answer.
Hence the optimal cost is 88.
To solve this problem using the dynamic programming method, we perform the following steps.
Step 1: Let m[i, j] denote the cost of multiplying Ai...Aj, where the cost is measured in the number of scalar multiplications. Here m[i, i] = 0 for all i, and m[1, n] is the required solution.

Step 2: The sequence of decisions can be built using the principle of optimality. Consider the process of matrix chain multiplication. Let T be the tree corresponding to the optimal way of multiplying Ai...Aj. T has a left sub-tree L and a right sub-tree R. L corresponds to multiplying Ai...Ak and R to multiplying Ak+1...Aj, for some integer k such that i ≤ k ≤ j-1.
Thus an optimal solution is built from optimal solutions to sub-chains of matrices, which shows that matrix chain multiplication satisfies the principle of optimality.
Step 3: We apply the following formula for computing each sequence:

m[i, j] = min { m[i, k] + m[k+1, j] + p[i-1]·p[k]·p[j] : i ≤ k < j }

The recursive definition for the minimum cost of parenthesizing the product Ai Ai+1 ... Aj is therefore

m[i, j] = 0                                                        if i = j
m[i, j] = min { m[i, k] + m[k+1, j] + p[i-1]·p[k]·p[j] : i ≤ k < j }   if i < j

The value of k for which the m[i, j] entry is minimum is recorded in the s[i, j] entry.
Algorithm for computing the optimal cost

MATRIX-CHAIN-ORDER(p)
    n ← length[p] - 1
    for i ← 1 to n
        do m[i, i] ← 0
    for l ← 2 to n                        // l is the chain length
        do for i ← 1 to n - l + 1
               do j ← i + l - 1
                  m[i, j] ← ∞
                  for k ← i to j - 1
                      do q ← m[i, k] + m[k + 1, j] + p[i-1]·p[k]·p[j]
                         if q < m[i, j]
                            then m[i, j] ← q
                                 s[i, j] ← k
    return m and s
Algorithm for optimal parenthesization

PRINT-OPTIMAL-PARENS(s, i, j)
    if i = j
       then print "A"i
       else print "("
            PRINT-OPTIMAL-PARENS(s, i, s[i, j])
            PRINT-OPTIMAL-PARENS(s, s[i, j] + 1, j)
            print ")"

Example:
Find the minimum number of scalar multiplications required to multiply A [1 × 5], B [5 × 4], C [4 × 3], D [3 × 2], and E [2 × 1]. Also give the optimal parenthesization.
Solution:
Here d = {d0, d1, d2, d3, d4, d5} = {1, 5, 4, 3, 2, 1}

Using the optimal substructure of matrix chain multiplication, the entries for chains of length 1 and 2 are:

m[i, i] = 0 for i = 1 to 5, i.e. m[1, 1] = m[2, 2] = m[3, 3] = m[4, 4] = m[5, 5] = 0

m[1, 2] = m[1, 1] + m[2, 2] + d0 × d1 × d2 = 0 + 0 + 1 × 5 × 4 = 20
m[2, 3] = m[2, 2] + m[3, 3] + d1 × d2 × d3 = 0 + 0 + 5 × 4 × 3 = 60
m[3, 4] = m[3, 3] + m[4, 4] + d2 × d3 × d4 = 0 + 0 + 4 × 3 × 2 = 24
m[4, 5] = m[4, 4] + m[5, 5] + d3 × d4 × d5 = 0 + 0 + 3 × 2 × 1 = 6

The remaining entries, up to m[1, 5], are computed in the same way for chain lengths 3, 4 and 5.
Once the m and s tables are filled in, the parenthesization is recovered by checking the s table for the k values:

s[1, 5] = 4  →  ((ABCD)E)
s[1, 4] = 3  →  (((ABC)D)E)
s[1, 3] = 2  →  ((((AB)C)D)E)
s[1, 2] = 1  →  nothing further to split

[refer to class notes for detail]

The optimal parenthesization is therefore ((((AB)C)D)E), as produced by the above algorithm PRINT-OPTIMAL-PARENS(s, 1, 5) and the s table. In the above example the call MATRIX-CHAIN-MULTIPLY(A, s, 1, 5) computes the matrix-chain product according to this parenthesization.
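
As a quick check, running the Python sketch given earlier on this example with p = [1, 5, 4, 3, 2, 1] yields a minimum cost of 40 scalar multiplications and the same parenthesization:

    m, s = matrix_chain_order([1, 5, 4, 3, 2, 1])
    print(m[1][5])                         # minimum number of scalar multiplications: 40
    print(print_optimal_parens(s, 1, 5))   # ((((A1A2)A3)A4)A5), i.e. ((((AB)C)D)E)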

2. LCS (Longest Common Subsequence)

Definition
Given two sequences X = x1x2x3...xm and Y = y1y2y3...yn, find a longest subsequence Z = z1z2z3...zk that is common to both X and Y.
Brute-force approach to solve the LCS problem
Check every subsequence of X[1..m] to see whether it is also a subsequence of Y[1..n], keeping track of the longest common subsequence found.

Example
Determine an LCS of < A, B, C > and < B, A, C >
Solution
Let X=< A, B, C > and Y=< B, A, C >
Find all possible subsequences of < A, B, C >
They are
A
B
C
AB
BC
AC
ABC
EMPTY
Now take every subsequence of X and check whether it is also a subsequence of Y.

Take a subsequence of X Check if it is a subsequence of Y Length

A √ 1
B √ 1
C √ 1
AB × -
BC √ 2
AC √ 2
ABC × -

So the longest common subsequences here are BC and AC, each with length 2.

Analysis
i) There are 2^m possible subsequences of X.

ii) Checking whether a subsequence of X is also a subsequence of Y takes O(n) time, since the length of Y is n.

iii) So the total time taken by the brute-force method in the worst case is O(n · 2^m).

iv) This approach requires exponential time, making it impractical for long sequences.
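
As an illustration of why the brute-force approach blows up, here is a small Python sketch (the names is_subsequence and lcs_brute_force are our own) that literally enumerates all 2^m subsequences of X:

    import itertools

    def is_subsequence(z, y):
        # Check whether z occurs in y as a (not necessarily contiguous) subsequence.
        it = iter(y)
        return all(ch in it for ch in z)

    def lcs_brute_force(x, y):
        for r in range(len(x), 0, -1):                 # try longer subsequences first
            for z in itertools.combinations(x, r):     # all subsequences of x of length r
                if is_subsequence(z, y):
                    return "".join(z)
        return ""

    print(lcs_brute_force("ABC", "BAC"))   # AC (an LCS of length 2; BC is another)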

Dynamic Programming approach to solve the LCS problem

The LCS problem can be solved efficiently using dynamic programming. Implementing DP for LCS takes four steps.

Step 1: Characterizing a longest common subsequence (Optimal substructure of an LCS)

The LCS problem has an Optimal substructure property.

Let X =< x1, x2, ..., xm> and Y = <y1, y2, ..., yn> be sequences, and let Z = <z1, z2, ..., zk> be any
LCS of X and Y.
1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
2. If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm-1 and Y.
3. If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn-1.

Step 2: A recursive solution (overlapping subproblems)

Let c[i, j] be the length of an LCS of the prefixes Xi = <x1, ..., xi> and Yj = <y1, ..., yj>. Then

c[i, j] = 0                              if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1                if i, j > 0 and xi = yj
c[i, j] = max(c[i, j-1], c[i-1, j])      if i, j > 0 and xi ≠ yj


Step 3: Computing the length of an LCS
Based on the above equation for c[i, j], we could write an exponential recursive algorithm, but there are only m·n distinct subproblems. We therefore use dynamic programming to compute the solution to the distinct subproblems in a bottom-up fashion.

LCS-LENGTH(X, Y)
    m ← length[X]
    n ← length[Y]
    for i ← 1 to m
        do c[i, 0] ← 0
    for j ← 0 to n
        do c[0, j] ← 0
    for i ← 1 to m
        do for j ← 1 to n
               do if xi = yj
                     then c[i, j] ← c[i - 1, j - 1] + 1
                          b[i, j] ← "↖"
                     else if c[i - 1, j] ≥ c[i, j - 1]
                             then c[i, j] ← c[i - 1, j]
                                  b[i, j] ← "↑"
                             else c[i, j] ← c[i, j - 1]
                                  b[i, j] ← "←"
    return c and b
Running time of this algorithm (LCS-LENGTH) = O(mn)

Step 4: Constructing an LCS

PRINT-LCS(b, X, i, j)
    if i = 0 or j = 0
       then return
    if b[i, j] = "↖"
       then PRINT-LCS(b, X, i - 1, j - 1)
            print xi
    elseif b[i, j] = "↑"
       then PRINT-LCS(b, X, i - 1, j)
       else PRINT-LCS(b, X, i, j - 1)

The initial invocation is PRINT-LCS(b, X, length[X], length[Y]).


Running time of this algorithm (PRINT-LCS) = O(m + n)
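
Here is a minimal Python sketch of the same computation (the names lcs_length and backtrack_lcs are our own); instead of storing a separate b table it re-derives the direction of each step from the c table while backtracking.

    def lcs_length(X, Y):
        m, n = len(X), len(Y)
        # c[i][j] = length of an LCS of X[:i] and Y[:j]
        c = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                else:
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])
        return c

    def backtrack_lcs(c, X, Y):
        # Walk from the lower right-hand corner back to (0, 0),
        # collecting matched symbols along the way.
        i, j, out = len(X), len(Y), []
        while i > 0 and j > 0:
            if X[i - 1] == Y[j - 1]:
                out.append(X[i - 1]); i -= 1; j -= 1
            elif c[i - 1][j] >= c[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return "".join(reversed(out))

    X, Y = "ABCBDAB", "BDCABA"
    c = lcs_length(X, Y)
    print(c[len(X)][len(Y)], backtrack_lcs(c, X, Y))   # 4 BCBA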

Example
Determine an LCS of X = <A, B, C, B, D, A, B> and Y = <B, D, C, A, B, A>

Solution
i. The lengths of X and Y are m = 7 and n = 6 respectively.

ii. Draw an 8 × 7 table: since the length of X is m, there are m + 1 = 8 rows, and since the length of Y is n, there are n + 1 = 7 columns.

iii. Number the rows as i from 0 to m and the columns as j from 0 to n. In this case i varies from 0 to 7 and j varies from 0 to 6.

iv. Write the symbols of X down the left side, one symbol per row, and the symbols of Y across the top, one symbol per column.

v. Use one table to serve as both the c table and the b table: the c entries contain numeric values and the b entries contain arrow symbols.

vi. Initially fill row 0 and column 0 with zeroes.

vii. The c[i, j] entries of the c table are computed in row-major order. (That is, the first row of c is filled in from left to right, then the second row, and so on.)

viii. For how to fill in the remaining entries, refer to the class notes and the classroom discussion.

ix. To reconstruct the elements of an LCS, follow the b[i, j] arrows from the lower right-hand corner of the table.

The LCS found is BCBA, with length 4.


3. TRAVELLING SALESPERSON

A tour of a graph G is a directed simple cycle that includes every vertex in V. The cost of the tour is the sum of the costs of the edges on the tour. The travelling salesperson problem is to find a tour of minimum cost.
Applications: Suppose we have to route a postal van to pick up mail from mailboxes located at n different sites. An (n+1)-vertex graph can be used to represent the situation. One vertex represents the post office from which the postal van starts and to which it must return. Edge <i, j> is assigned a cost equal to the distance from site i to site j. A route taken by the postal van is a tour, and we are interested in finding a tour of minimum length.

Every tour consists of an edge <1, k> for some k ∈ V - {1} and a path from vertex k to vertex 1 which goes through each vertex in V - {1, k} exactly once. It is easy to see that if the tour is optimal, then the path from k to 1 must be a shortest k-to-1 path going through all vertices in V - {1, k}. Hence, the principle of optimality holds.

Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1. The function g(1, V - {1}) is the length of an optimal salesperson tour. From the principle of optimality,

    g(1, V - {1}) = min { c1k + g(k, V - {1, k}) : 2 ≤ k ≤ n }

Generalizing the above equation, we obtain (for i ∉ S)

    g(i, S) = min { cij + g(j, S - {j}) : j ∈ S }

Example 1) Find g(1, {2, 3, 4}) for V = {1, 2, 3, 4}, where cij is given by the cost matrix

        | 0  10  15  20 |
        | 5   0   9  10 |
        | 6  13   0  12 |
        | 8   8   9   0 |


Solution:
Stage 1) Compute g(i, S) where |S| = 0 (no intermediate vertices), for i ∈ V - {1} with 1 ∉ S and i ∉ S, i.e. i = 2, 3, 4:

g(2, ø) = c21 = 5
g(3, ø) = c31 = 6
g(4, ø) = c41 = 8

Stage 2) Compute g(i, S) where |S| = 1 (one intermediate vertex), for i ∈ V - {1}, 1 ∉ S, i ∉ S, using
g(i, S) = min { cij + g(j, S - {j}) : j ∈ S }:

g(2, {3}) = c23 + g(3, ø) = 9 + 6 = 15
g(2, {4}) = c24 + g(4, ø) = 10 + 8 = 18
g(3, {2}) = c32 + g(2, ø) = 13 + 5 = 18
g(3, {4}) = c34 + g(4, ø) = 12 + 8 = 20
g(4, {2}) = c42 + g(2, ø) = 8 + 5 = 13
g(4, {3}) = c43 + g(3, ø) = 9 + 6 = 15

Stage 3) Compute g(i, S) where |S| = 2 (two intermediate vertices), for i ∈ V - {1}, 1 ∉ S, i ∉ S:

g(2, {3, 4}) = min { c23 + g(3, {4}), c24 + g(4, {3}) } = min { 9 + 20, 10 + 15 } = min { 29, 25 } = 25
g(3, {2, 4}) = min { c32 + g(2, {4}), c34 + g(4, {2}) } = min { 13 + 18, 12 + 13 } = min { 31, 25 } = 25
g(4, {2, 3}) = min { c42 + g(2, {3}), c43 + g(3, {2}) } = min { 8 + 15, 9 + 18 } = min { 23, 27 } = 23

Stage 4) Compute g(i, S) where |S| = 3 (three intermediate vertices); here i = 1 and S = {2, 3, 4}:

g(1, {2, 3, 4}) = min { c12 + g(2, {3, 4}), c13 + g(3, {2, 4}), c14 + g(4, {2, 3}) }
                = min { 10 + 25, 15 + 25, 20 + 23 }
                = min { 35, 40, 43 } = 35

The optimal cost of the tour is 35.

g(1, {2, 3, 4}) = c12 + g(2, {3, 4}), so the tour starts from 1 and goes to 2
g(2, {3, 4}) = c24 + g(4, {3}), then from 2 to 4
g(4, {3}) = c43 + g(3, ø), then from 4 to 3
g(3, ø) = c31, and from 3 to 1

The tour is 1 → 2 → 4 → 3 → 1 with cost c12 + c24 + c43 + c31 = 10 + 10 + 9 + 6 = 35.
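
For completeness, here is a minimal Python sketch of the same g(i, S) computation (the names tsp_dp, g and choice are our own; vertices are numbered 0..n-1 with the tour starting and ending at vertex 0). Run on the cost matrix of this example it reports the optimal cost 35 and the tour 1-2-4-3-1.

    import itertools

    def tsp_dp(c):
        # g[(i, S)] = length of a shortest path that starts at i, visits every
        # vertex of the frozenset S exactly once, and ends at vertex 0.
        n = len(c)
        g, choice = {}, {}
        for i in range(1, n):
            g[(i, frozenset())] = c[i][0]
        for size in range(1, n - 1):
            for S in map(frozenset, itertools.combinations(range(1, n), size)):
                for i in range(1, n):
                    if i in S:
                        continue
                    g[(i, S)], choice[(i, S)] = min(
                        (c[i][j] + g[(j, S - {j})], j) for j in S)
        full = frozenset(range(1, n))
        cost, j = min((c[0][j] + g[(j, full - {j})], j) for j in full)
        # Reconstruct the tour by following the recorded choices.
        tour, S = [0, j], full - {j}
        while S:
            j = choice[(j, S)]
            tour.append(j)
            S = S - {j}
        tour.append(0)
        return cost, tour

    c = [[0, 10, 15, 20],
         [5,  0,  9, 10],
         [6, 13,  0, 12],
         [8,  8,  9,  0]]
    cost, tour = tsp_dp(c)
    print(cost, [v + 1 for v in tour])   # 35 [1, 2, 4, 3, 1]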
Example 2) Travelling Salesperson

Compute g(1, {2, 3, 4, 5}) for V = {1, 2, 3, 4, 5}, where cij is given by the cost matrix

        | 0  2  1  2  1 |
        | 1  0  3  2  5 |
        | 6  2  0  2  1 |
        | 1  1  2  0  2 |
        | 3  1  2  1  0 |

Stage 1) Compute g(i, S) where |S| = 0 (no intermediate vertices), for i ∈ V - {1}, i.e. i = 2, 3, 4, 5:

g(2, ø) = c21 = 1
g(3, ø) = c31 = 6
g(4, ø) = c41 = 1
g(5, ø) = c51 = 3

Stage 2) Compute g(i, S) where |S| = 1 (one intermediate vertex), for i ∈ V - {1}, 1 ∉ S, i ∉ S, using
g(i, S) = min { cij + g(j, S - {j}) : j ∈ S }:

g(2, {3}) = c23 + g(3, ø) = 3 + 6 = 9
g(2, {4}) = c24 + g(4, ø) = 2 + 1 = 3
g(2, {5}) = c25 + g(5, ø) = 5 + 3 = 8

g(3, {2}) = c32 + g(2, ø) = 2 + 1 = 3
g(3, {4}) = c34 + g(4, ø) = 2 + 1 = 3
g(3, {5}) = c35 + g(5, ø) = 1 + 3 = 4

g(4, {2}) = c42 + g(2, ø) = 1 + 1 = 2
g(4, {3}) = c43 + g(3, ø) = 2 + 6 = 8
g(4, {5}) = c45 + g(5, ø) = 2 + 3 = 5

g(5, {2}) = c52 + g(2, ø) = 1 + 1 = 2
g(5, {3}) = c53 + g(3, ø) = 2 + 6 = 8
g(5, {4}) = c54 + g(4, ø) = 1 + 1 = 2
Stage 3) Compute g(i, S) where |S| = 2 (two intermediate vertices), for i ∈ V - {1}, 1 ∉ S, i ∉ S:

g(2, {3, 4}) = min { c23 + g(3, {4}), c24 + g(4, {3}) } = min { 3 + 3, 2 + 8 } = min { 6, 10 } = 6
g(2, {3, 5}) = min { c23 + g(3, {5}), c25 + g(5, {3}) } = min { 3 + 4, 5 + 8 } = min { 7, 13 } = 7
g(2, {4, 5}) = min { c24 + g(4, {5}), c25 + g(5, {4}) } = min { 2 + 5, 5 + 2 } = min { 7, 7 } = 7

g(3, {2, 4}) = min { c32 + g(2, {4}), c34 + g(4, {2}) } = min { 2 + 3, 2 + 2 } = min { 5, 4 } = 4
g(3, {2, 5}) = min { c32 + g(2, {5}), c35 + g(5, {2}) } = min { 2 + 8, 1 + 2 } = min { 10, 3 } = 3
g(3, {4, 5}) = min { c34 + g(4, {5}), c35 + g(5, {4}) } = min { 2 + 5, 1 + 2 } = min { 7, 3 } = 3

g(4, {2, 3}) = min { c42 + g(2, {3}), c43 + g(3, {2}) } = min { 1 + 9, 2 + 3 } = min { 10, 5 } = 5
g(4, {2, 5}) = min { c42 + g(2, {5}), c45 + g(5, {2}) } = min { 1 + 8, 2 + 2 } = min { 9, 4 } = 4
g(4, {3, 5}) = min { c43 + g(3, {5}), c45 + g(5, {3}) } = min { 2 + 4, 2 + 8 } = min { 6, 10 } = 6

g(5, {2, 3}) = min { c52 + g(2, {3}), c53 + g(3, {2}) } = min { 1 + 9, 2 + 3 } = min { 10, 5 } = 5
g(5, {2, 4}) = min { c52 + g(2, {4}), c54 + g(4, {2}) } = min { 1 + 3, 1 + 2 } = min { 4, 3 } = 3
g(5, {3, 4}) = min { c53 + g(3, {4}), c54 + g(4, {3}) } = min { 2 + 3, 1 + 8 } = min { 5, 9 } = 5

Stage 4) Compute g(i, S) where |S| = 3 (three intermediate vertices), for i ∈ V - {1}, 1 ∉ S, i ∉ S:

g(2, {3, 4, 5}) = min { c23 + g(3, {4, 5}), c24 + g(4, {3, 5}), c25 + g(5, {3, 4}) }
                = min { 3 + 3, 2 + 6, 5 + 5 } = min { 6, 8, 10 } = 6
g(3, {2, 4, 5}) = min { c32 + g(2, {4, 5}), c34 + g(4, {2, 5}), c35 + g(5, {2, 4}) }
                = min { 2 + 7, 2 + 4, 1 + 3 } = min { 9, 6, 4 } = 4
g(4, {2, 3, 5}) = min { c42 + g(2, {3, 5}), c43 + g(3, {2, 5}), c45 + g(5, {2, 3}) }
                = min { 1 + 7, 2 + 3, 2 + 5 } = min { 8, 5, 7 } = 5
g(5, {2, 3, 4}) = min { c52 + g(2, {3, 4}), c53 + g(3, {2, 4}), c54 + g(4, {2, 3}) }
                = min { 1 + 6, 2 + 4, 1 + 5 } = min { 7, 6, 6 } = 6

Stage 5) Compute g(i, S) where |S| = 4 (four intermediate vertices); here i = 1 and S = {2, 3, 4, 5}:

g(1, {2, 3, 4, 5}) = min { c12 + g(2, {3, 4, 5}), c13 + g(3, {2, 4, 5}), c14 + g(4, {2, 3, 5}), c15 + g(5, {2, 3, 4}) }
                   = min { 2 + 6, 1 + 4, 2 + 5, 1 + 6 }
                   = min { 8, 5, 7, 7 } = 5
The optimal cost of the tour is 5.

g(1, {2, 3, 4, 5}) = c13 + g(3, {2, 4, 5}), so the tour starts from 1 and goes to 3
g(3, {2, 4, 5}) = c35 + g(5, {2, 4}), then from 3 to 5
g(5, {2, 4}) = c54 + g(4, {2}), then from 5 to 4
g(4, {2}) = c42 + g(2, ø), then from 4 to 2
g(2, ø) = c21, and from 2 to 1

The optimal tour is 1 → 3 → 5 → 4 → 2 → 1 with cost c13 + c35 + c54 + c42 + c21 = 1 + 1 + 1 + 1 + 1 = 5.
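
The same tsp_dp sketch from Example 1 can be used to double-check this result:

    c2 = [[0, 2, 1, 2, 1],
          [1, 0, 3, 2, 5],
          [6, 2, 0, 2, 1],
          [1, 1, 2, 0, 2],
          [3, 1, 2, 1, 0]]
    cost, tour = tsp_dp(c2)
    print(cost, [v + 1 for v in tour])   # 5 [1, 3, 5, 4, 2, 1]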


4. All pairs shortest path problem

Let G = (V, E) be a directed graph consisting of n vertices, where each edge is associated with a weight. The problem of finding the shortest path between all pairs of vertices in a graph is called the all pairs shortest path problem. This problem can be solved using the dynamic programming technique.

The all pairs shortest path problem is to determine a matrix A such that A(i, j) is the length of a shortest path from vertex i to j. Assume that this path contains no cycles. If k is an intermediate vertex on this path, then the sub-paths from i to k and from k to j must themselves be shortest paths from i to k and from k to j respectively; otherwise the path from i to j would not be a shortest path. If k is the intermediate vertex with the highest index, then the path from i to k is a shortest path going through no vertex with index greater than k-1. Similarly, the path from k to j is a shortest path going through no vertex with index greater than k-1.

The shortest path can be computed using the following recurrence:


Ak (i, j) W(i, j) if k  0
Ak (i, j) min{Ak1(i, j), Ak1(i, k)  Ak1(k, j)} if k 1

Example: Compute all pairs shortest paths for the following graph.

[Graph G: three vertices 1, 2, 3 with directed edges 1→2 (weight 4), 1→3 (weight 11), 2→1 (weight 6), 2→3 (weight 2) and 3→1 (weight 3).]

Cost adjacency matrix A0(i, j) = W(i, j):

        | 0   4   11 |
        | 6   0    2 |
        | 3   ∞    0 |
Step 1
For k = 1, i.e. going from i to j through intermediate vertex 1.
When i = 1, j = 1/2/3:
A1(1, 1) = min{ A0(1, 1), A0(1, 1) + A0(1, 1) } = min{ 0, 0 + 0 } = 0
A1(1, 2) = min{ A0(1, 2), A0(1, 1) + A0(1, 2) } = min{ 4, 0 + 4 } = 4
A1(1, 3) = min{ A0(1, 3), A0(1, 1) + A0(1, 3) } = min{ 11, 0 + 11 } = 11
When i = 2, j = 1/2/3:
A1(2, 1) = min{ A0(2, 1), A0(2, 1) + A0(1, 1) } = min{ 6, 6 + 0 } = 6
A1(2, 2) = min{ A0(2, 2), A0(2, 1) + A0(1, 2) } = min{ 0, 6 + 4 } = 0
A1(2, 3) = min{ A0(2, 3), A0(2, 1) + A0(1, 3) } = min{ 2, 6 + 11 } = 2
When i = 3, j = 1/2/3:
A1(3, 1) = min{ A0(3, 1), A0(3, 1) + A0(1, 1) } = min{ 3, 3 + 0 } = 3
A1(3, 2) = min{ A0(3, 2), A0(3, 1) + A0(1, 2) } = min{ ∞, 3 + 4 } = 7
A1(3, 3) = min{ A0(3, 3), A0(3, 1) + A0(1, 3) } = min{ 0, 3 + 11 } = 0

             | 0   4   11 |
    A1   =   | 6   0    2 |
             | 3   7    0 |

Step 2
For k = 2, i.e. going from i to j through intermediate vertex 2, using Ak(i, j) = min{ Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j) }.
When i = 1, j = 1/2/3:
A2(1, 1) = min{ A1(1, 1), A1(1, 2) + A1(2, 1) } = min{ 0, 4 + 6 } = 0
A2(1, 2) = min{ A1(1, 2), A1(1, 2) + A1(2, 2) } = min{ 4, 4 + 0 } = 4
A2(1, 3) = min{ A1(1, 3), A1(1, 2) + A1(2, 3) } = min{ 11, 4 + 2 } = 6
When i = 2, j = 1/2/3:
A2(2, 1) = min{ A1(2, 1), A1(2, 2) + A1(2, 1) } = min{ 6, 0 + 6 } = 6
A2(2, 2) = min{ A1(2, 2), A1(2, 2) + A1(2, 2) } = min{ 0, 0 + 0 } = 0
A2(2, 3) = min{ A1(2, 3), A1(2, 2) + A1(2, 3) } = min{ 2, 0 + 2 } = 2
When i = 3, j = 1/2/3:
A2(3, 1) = min{ A1(3, 1), A1(3, 2) + A1(2, 1) } = min{ 3, 7 + 6 } = 3
A2(3, 2) = min{ A1(3, 2), A1(3, 2) + A1(2, 2) } = min{ 7, 7 + 0 } = 7
A2(3, 3) = min{ A1(3, 3), A1(3, 2) + A1(2, 3) } = min{ 0, 7 + 2 } = 0

             | 0   4   6 |
    A2   =   | 6   0   2 |
             | 3   7   0 |

Step 3
For k = 3, i.e. going from i to j through intermediate vertex 3.
When i = 1, j = 1/2/3:
A3(1, 1) = min{ A2(1, 1), A2(1, 3) + A2(3, 1) } = min{ 0, 6 + 3 } = 0
A3(1, 2) = min{ A2(1, 2), A2(1, 3) + A2(3, 2) } = min{ 4, 6 + 7 } = 4
A3(1, 3) = min{ A2(1, 3), A2(1, 3) + A2(3, 3) } = min{ 6, 6 + 0 } = 6
When i = 2, j = 1/2/3:
A3(2, 1) = min{ A2(2, 1), A2(2, 3) + A2(3, 1) } = min{ 6, 2 + 3 } = 5
A3(2, 2) = min{ A2(2, 2), A2(2, 3) + A2(3, 2) } = min{ 0, 2 + 7 } = 0
A3(2, 3) = min{ A2(2, 3), A2(2, 3) + A2(3, 3) } = min{ 2, 2 + 0 } = 2
When i = 3, j = 1/2/3:
A3(3, 1) = min{ A2(3, 1), A2(3, 3) + A2(3, 1) } = min{ 3, 0 + 3 } = 3
A3(3, 2) = min{ A2(3, 2), A2(3, 3) + A2(3, 2) } = min{ 7, 0 + 7 } = 7
A3(3, 3) = min{ A2(3, 3), A2(3, 3) + A2(3, 3) } = min{ 0, 0 + 0 } = 0

             | 0   4   6 |
    A3   =   | 5   0   2 |
             | 3   7   0 |

This matrix gives the all pairs shortest path solution.

Algorithm all_pairs_shortest_path(W, A, n)
// W is the weighted adjacency matrix, n is the number of vertices,
// A[i, j] is the cost of a shortest path from vertex i to j.
{
    for i := 1 to n do
        for j := 1 to n do
            A[i, j] := W[i, j]

    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                A[i, j] := min(A[i, j], A[i, k] + A[k, j])
}

The time complexity of this method is O(n³).
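
A short Python sketch of the same procedure is given below (the name floyd_warshall and the use of float('inf') for the missing edge are our own choices); run on the 3-vertex example above it reproduces the final matrix A3.

    INF = float('inf')

    def floyd_warshall(W):
        # W is an n x n weight matrix with INF for missing edges.
        # Returns the matrix of shortest path lengths between all pairs.
        n = len(W)
        A = [row[:] for row in W]           # A starts as a copy of W
        for k in range(n):                   # allow vertex k as an intermediate
            for i in range(n):
                for j in range(n):
                    A[i][j] = min(A[i][j], A[i][k] + A[k][j])
        return A

    W = [[0, 4, 11],
         [6, 0, 2],
         [3, INF, 0]]
    for row in floyd_warshall(W):
        print(row)      # [0, 4, 6], [5, 0, 2], [3, 7, 0]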
