Dynamic Programming 2

The dynamic programming approach stores the minimum cost of multiplying the chain of matrices from index i to j in a 2D table m[i,j]. It fills the table in order of increasing distance between the indices i and j, so each entry can be computed by considering all splitting points k. Dividing the problem into overlapping subproblems and storing their results in a table avoids recomputing the same subproblems. This leads to an overall O(n^3) running time, far faster than the naive recursive and divide-and-conquer approaches, both of which take exponential time.
Algorithm Course Notes

Dynamic programming 2
Summary
 Dynamic programming applied to
◦ Matrix product problem

Matrix Product
◦ Multiplying together n rectangular matrices
M = M1· M2 ··· Mn, where each Mi has ri-1 rows and ri columns.
The naïve matrix multiplication algorithm takes pqr operations to multiply
a p x q matrix by a q x r matrix.
We know matrix multiplication is associative, so we can evaluate the
product in any order. However, some orders are cheaper than others.
The objective here is to find the cost of the cheapest one.
(Note: Matrix product is not commutative, so the order of the factors
themselves cannot change.)
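As a concrete illustration of the pqr operation count, here is a minimal Python sketch of the naïve algorithm (illustrative only; the function name is ours):

```python
def mat_mult(A, B):
    """Multiply a p x q matrix A by a q x r matrix B (lists of rows).

    The three nested loops perform exactly p*q*r scalar multiplications.
    """
    p, q, r = len(A), len(B), len(B[0])
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

For a 2 x 2 pair this performs 2·2·2 = 8 scalar multiplications.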
Matrix Product
 Example
◦ If we multiply from right to left the following 4 matrices

M = M1· (M2 ·(M3· M4))

10 x 20 20 x 50 50 x 1 1 x 100

M3· M4 gives a 50 x 100 matrix, cost = 50·1·100 = 5,000

M2 ·(M3· M4) gives a 20 x 100 matrix, cost = 20·50·100 = 100,000

M1· (M2 ·(M3· M4)) gives a 10 x 100 matrix, cost = 10·20·100 = 20,000


Total cost is 5,000 + 100,000 + 20,000 = 125,000
However, if we begin in the middle:
M = (M1· (M2 ·M3) )· M4
Matrix Product
10 x 20 20 x 50 50 x 1 1 x 100

M2 ·M3 gives a 20 x 1 matrix, cost = 20·50·1 = 1,000

M1· (M2 ·M3) gives a 10 x 1 matrix, cost = 10·20·1 = 200

(M1· (M2 ·M3))· M4 gives a 10 x 100 matrix, cost = 10·1·100 = 1,000


Total cost is 1000 + 200 + 1000 = 2200
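Both totals can be checked with a short Python sketch; the helper chain_cost and the (i, k, j) triple encoding of each multiplication step are our own conventions, purely for illustration:

```python
# Dimensions: M1 is 10 x 20, M2 is 20 x 50, M3 is 50 x 1, M4 is 1 x 100,
# so Mi is r[i-1] x r[i].
r = [10, 20, 50, 1, 100]

def chain_cost(r, steps):
    """Each step (i, k, j) multiplies an r[i-1] x r[k] block by an
    r[k] x r[j] block, costing r[i-1]*r[k]*r[j] operations."""
    return sum(r[i - 1] * r[k] * r[j] for (i, k, j) in steps)

# Right to left: M1 . (M2 . (M3 . M4))
right_to_left = [(3, 3, 4), (2, 2, 4), (1, 1, 4)]
# Middle first: (M1 . (M2 . M3)) . M4
middle_first = [(2, 2, 3), (1, 1, 3), (1, 3, 4)]
```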

The Naïve algorithm


One way is to try all possible ways of parenthesizing the n matrices.
But this takes exponential time, since there are exponentially many ways
of parenthesizing them.
Matrix Product
 Let X(n) be the number of ways of parenthesizing the n matrices. It
is easy to see that X(1) = X(2) = 1, and for n > 2

X(n) = Σ_{k=1}^{n-1} X(k)·X(n-k)

To see this, consider breaking the product into two pieces at position k:

(M1· M2 ··· Mk)·(Mk+1 ··· Mn)

The left piece can be parenthesized in X(k) ways and the right piece in
X(n-k) ways.

Therefore,

X(3) = Σ_{k=1}^{2} X(k) X(3-k) = X(1)·X(2) + X(2)·X(1) = 2

This is easy to verify: x(xx), (xx)x

X(4) = Σ_{k=1}^{3} X(k) X(4-k) = X(1)·X(3) + X(2)·X(2) + X(3)·X(1) = 5

This is easy to verify: x((xx)x), x(x(xx)), (xx)(xx), ((xx)x)x, (x(xx))x
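The recurrence is easy to check with a short memoized Python sketch (the values X(1), X(2), ... run 1, 1, 2, 5, 14, ... , the Catalan numbers):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def X(n):
    """Number of ways to parenthesize a product of n matrices."""
    if n <= 2:
        return 1
    # Sum over the position k of the outermost split.
    return sum(X(k) * X(n - k) for k in range(1, n))
```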


Matrix Product
 Claim: X(n) ≥ 2^{n-2}.

Proof: By induction on n. The claim is certainly true for n ≤ 4,
by the values computed above. Now suppose n ≥ 5. Then

X(n) = Σ_{k=1}^{n-1} X(k) X(n-k)
     ≥ Σ_{k=1}^{n-1} 2^{k-2} 2^{n-k-2}   (by the induction hypothesis)
     = Σ_{k=1}^{n-1} 2^{n-4}
     = (n-1) 2^{n-4}
     ≥ 2^{n-2}   (since n-1 ≥ 4)
Matrix Product
 Divide and Conquer

Let cost(i,j) be the minimum cost of computing

Mi· Mi+1 ··· Mj

What is the cost of breaking the product at Mk?

(Mi· Mi+1 ··· Mk)·(Mk+1 ··· Mj)

It is cost(i,k), plus cost(k+1,j), plus the cost of multiplying an
ri-1 x rk matrix by an rk x rj matrix.

Therefore, the cost of breaking the product at Mk is

cost(i, k) + cost(k+1, j) + ri-1 rk rj


Matrix Product
 A Divide and Conquer Algorithm

function cost(i, j)

if i = j then return(0) else

return (mini≤k<j(cost(i, k) + cost(k+1, j) + ri-1rk rj ))

Correctness Proof: A simple induction
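The divide and conquer pseudocode translates directly into Python; passing the dimension list r explicitly is our own convention:

```python
def cost(r, i, j):
    """Minimum cost of computing Mi ... Mj, where Mi is r[i-1] x r[i].

    Pure divide and conquer: exponential time, because the same
    subproblems are recomputed many times.
    """
    if i == j:
        return 0
    return min(cost(r, i, k) + cost(r, k + 1, j) + r[i - 1] * r[k] * r[j]
               for k in range(i, j))
```

On the running example, cost([10, 20, 50, 1, 100], 1, 4) returns 2200.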


Analysis: Let T(n) be the worst-case running time
of cost(i,j) when j-i+1 = n. Then T(0) = c and, for n > 0,

T(n) = Σ_{m=1}^{n-1} (T(m) + T(n-m)) + dn

Hence, for n > 0,

T(n) = 2 Σ_{m=1}^{n-1} T(m) + dn
Divide and Conquer algorithm
Therefore,

T(n) - T(n-1) = (2 Σ_{m=1}^{n-1} T(m) + dn) - (2 Σ_{m=1}^{n-2} T(m) + d(n-1))
              = 2T(n-1) + d

So, T(n) = 3T(n-1) + d. Unwinding the recurrence:

T(n) = 3T(n-1) + d
     = 3(3T(n-2) + d) + d = 9T(n-2) + 3d + d
     = 9(3T(n-3) + d) + 3d + d = 27T(n-3) + 9d + 3d + d
     = 3^m T(n-m) + d Σ_{l=0}^{m-1} 3^l
     = ...
     = 3^n T(0) + d Σ_{l=0}^{n-1} 3^l
     = c·3^n + d(3^n - 1)/2
     ≤ (c + d/2)·3^n

Hence, T(n) = Θ(3^n)
Dynamic Programming
This is a job for

Dynamic Programming !!!

Faster than a speeding divide and conquer!

Able to leap tall recursions in a single bound!


Details
Store cost(i, j) in m[i, j]. Therefore, m[i, j] = 0 if i = j, and

m[i, j] = min i ≤ k<j (m[i,k] + m[k+1,j] + ri-1 rk rj)

otherwise.

When we fill in the table, we had better make sure that m[i, k]
and m[k+1, j] are filled in before we compute m[i, j]. So we compute the
entries of m[] in increasing order of the difference j - i between the
parameters.
Details
r0 = 10 , r1 = 20, r2 = 50, r3 = 1, r4 = 100 .

m[1,2] = min 1≤k<2 (m[1,k] + m[k+1,2] + r0rkr2)

= r0r1r2 = 10000

m[2,3] = r1r2r3 = 1000

m[3,4] = r2r3r4 = 5000


m[1,3] = min 1≤k<3 (m[1,k] + m[k+1,3] + r0rkr3)
       = min (m[1,1] + m[2,3] + r0r1r3, m[1,2] + m[3,3] + r0r2r3)
       = min (0 + 1000 + 200, 10000 + 0 + 500)
       = 1200

m[2,4] = min 2≤k<4 (m[2,k] + m[k+1,4] + r1rkr4)
       = min (m[2,2] + m[3,4] + r1r2r4, m[2,3] + m[4,4] + r1r3r4)
       = min (0 + 5000 + 100000, 1000 + 0 + 2000)
       = 3000

m[1,4] = min 1≤k<4 (m[1,k] + m[k+1,4] + r0rkr4)
       = min (m[1,1] + m[2,4] + r0r1r4, m[1,2] + m[3,4] + r0r2r4, m[1,3] + m[4,4] + r0r3r4)
       = min (0 + 3000 + 20000, 10000 + 5000 + 50000, 1200 + 0 + 1000)
       = 2200
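The whole table can be reproduced by filling m[] one superdiagonal at a time; a sketch in Python, using 1-based indices into oversized lists to match the notation above:

```python
r = [10, 20, 50, 1, 100]   # Mi is r[i-1] x r[i]
n = 4
# Row/column 0 is unused so indices match m[1..n, 1..n];
# the diagonal m[i][i] starts out 0, as required.
m = [[0] * (n + 1) for _ in range(n + 1)]
for d in range(1, n):              # distance d = j - i
    for i in range(1, n - d + 1):
        j = i + d
        m[i][j] = min(m[i][k] + m[k + 1][j] + r[i - 1] * r[k] * r[j]
                      for k in range(i, j))
```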
Filling in the Diagonal
for i:= 1 to n do m[i,i] : = 0
Filling the first superdiagonal
for i := 1 to n-1 do

j := i + 1

m[i, j ] := …
Filling the second
superdiagonal
for i:= 1 to n – 2 do

j := i + 2

m[i, j] := …
Filling the dth superdiagonal
for i:= 1 to n – d do

j:= i + d

m[i , j] := …..
The algorithm
function matrix(n)

1. for i:=1 to n do m[i,i]:=0

2. for d:=1 to n-1 do

3. for i:=1 to n-d do

4. j := i + d

5. m[i,j] := min i ≤ k <j (m[i,k] + m[k+1,j] + r i-1 r k r j )

6. return (m[1,n])
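A Python sketch of the algorithm; note that the outer loop must range over the superdiagonal distance d = j - i, and we pass the dimensions r in explicitly rather than treating them as globals:

```python
def matrix_chain(r):
    """O(n^3) dynamic programming solution to the matrix chain problem.

    r has length n + 1 and Mi is r[i-1] x r[i]; returns the minimum
    total multiplication cost m[1, n].
    """
    n = len(r) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]       # line 1: m[i][i] = 0
    for d in range(1, n):                           # line 2: superdiagonals
        for i in range(1, n - d + 1):               # line 3
            j = i + d                               # line 4
            m[i][j] = min(m[i][k] + m[k + 1][j] + r[i - 1] * r[k] * r[j]
                          for k in range(i, j))     # line 5
    return m[1][n]                                  # line 6
```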
Analysis
 Line 1 costs O(n)
 Line 5 costs O(n) (it can be computed with a single for-loop over k)
 Lines 4 and 6 cost O(1)
 The for-loop on lines 3-5 costs O(n2)
 The for-loop on lines 2-5 costs O(n3)
Therefore, the algorithm runs in time O(n3). This is
characteristic of many dynamic programming algorithms.
Reading: Cormen (2nd ed.), Sections 15.2, 15.3
Assignment #4 Due Date: 30.9.13

 Implement the divide and conquer and dynamic
programming algorithms for the matrix chain product.
Take the cases of:

 5 matrices

 10 matrices

 25 matrices

 Give a table comparing the running times.
