
Lecture 24.

Dynamic programming

Recap

• Greedy algorithms are used to find an optimal solution among all possible solutions.
• Huffman coding and other real-world problems use the greedy approach to find an optimal solution.
• Some graph algorithms also use the greedy approach: Prim's and Kruskal's algorithms find a minimum spanning tree (MST), and Dijkstra's algorithm finds shortest paths.

Dynamic Programming

Dynamic programming is a general algorithm design technique for solving problems defined by or formulated as recurrences with overlapping subproblems.
• Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems and later assimilated by computer science
• “Programming” here means “planning”
• Main idea:
- set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
- solve each smaller instance once
- record solutions in a table
- extract the solution to the initial instance from that table
Dynamic programming

 Dynamic programming is typically applied to optimization problems. In such problems there can be many solutions. Each solution has a value, and we wish to find a solution with the optimal value.

The development of a dynamic programming algorithm

1. Characterize the structure of an optimal solution.


2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.

Rod cutting

 Input: A length n and a table of prices pi, for i = 1, 2, …, n.

 Output: The maximum revenue obtainable for rods whose lengths sum to n, computed as the sum of the prices of the individual rods.

Example: a rod of length 4
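The four-step recipe above can be sketched for rod cutting in C. This is a minimal bottom-up version: each subproblem (best revenue for a shorter rod) is solved once and recorded in the table r. The price table used in the usage note is an illustrative assumption, not data from the slides.

```c
/* Bottom-up rod cutting.
 * p[1..n] is the price table (p[0] is unused); r[j] records the
 * maximum revenue obtainable for a rod of length j. */
int cut_rod(const int p[], int n) {
    int r[n + 1];
    r[0] = 0;                                /* a rod of length 0 earns nothing */
    for (int j = 1; j <= n; j++) {
        int best = p[j] + r[0];              /* sell the rod uncut */
        for (int i = 1; i < j; i++)          /* cut off a first piece of length i */
            if (p[i] + r[j - i] > best)
                best = p[i] + r[j - i];
        r[j] = best;                         /* record the subproblem's solution */
    }
    return r[n];
}
```

For example, with the hypothetical prices 1, 5, 8, 9 for lengths 1 through 4, `cut_rod` returns 10 for a rod of length 4: cutting it into two pieces of length 2 earns 5 + 5, beating every other decomposition.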
Example: Fibonacci numbers
• Recall definition of Fibonacci numbers:

F(n) = F(n-1) + F(n-2)


F(0) = 0
F(1) = 1
• Computing the nth Fibonacci number recursively (top-down) expands a tree in which the same subproblems appear over and over:

                      F(n)
             F(n-1)    +    F(n-2)
    F(n-2) + F(n-3)    F(n-3) + F(n-4)
                      . . .
Example: Fibonacci numbers (cont.)

Computing the nth Fibonacci number using bottom-up iteration and recording
results:

F(0) = 0
F(1) = 1
F(2) = 1 + 0 = 1
. . .
F(n-2) = F(n-3) + F(n-4)
F(n-1) = F(n-2) + F(n-3)
F(n) = F(n-1) + F(n-2)

The table 0, 1, 1, . . ., F(n-2), F(n-1), F(n) is filled left to right.
Efficiency: Θ(n) time, Θ(n) space (Θ(1) if only the last two values are kept).
What if we solve it recursively? The running time becomes exponential in n, because the same subproblems are recomputed many times.
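The bottom-up iteration can be written as a short C function. This sketch keeps only the last two table entries, which brings the space down to Θ(1) while preserving the Θ(n) time:

```c
/* Bottom-up Fibonacci: fills the table F(0), F(1), ..., F(n) from
 * left to right, keeping only the last two values.
 * Theta(n) time, Theta(1) space. */
long fib(int n) {
    if (n == 0) return 0;
    long prev = 0, curr = 1;           /* F(0) and F(1) */
    for (int i = 2; i <= n; i++) {
        long next = prev + curr;       /* F(i) = F(i-1) + F(i-2) */
        prev = curr;
        curr = next;
    }
    return curr;
}
```

Compare this with the naive recursive version, which recomputes F(n-2) in both branches of the recursion tree and therefore takes exponential time.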
Examples of DP algorithms
• Computing a binomial coefficient

• Longest common subsequence

• Warshall’s algorithm for transitive closure

• Floyd’s algorithm for all-pairs shortest paths

• Constructing an optimal binary search tree

• Some instances of difficult discrete optimization problems:


- traveling salesman
- knapsack
Computing a binomial coefficient by DP
• Binomial coefficients are the coefficients in the binomial formula:
(a + b)^n = C(n,0)a^n b^0 + . . . + C(n,k)a^(n-k) b^k + . . . + C(n,n)a^0 b^n
• Recurrence: C(n,k) = C(n-1,k) + C(n-1,k-1) for n > k > 0
C(n,0) = 1, C(n,n) = 1 for n ≥ 0

The value of C(n,k) can be computed by filling a table row by row:

        0    1    2   . . .   k-1          k
  0     1
  1     1    1
  .
  .
  .
  n-1                       C(n-1,k-1)   C(n-1,k)
  n                                      C(n,k)
Computing C(n,k): pseudocode and analysis

Time efficiency: Θ(nk)
Space efficiency: Θ(nk)
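A sketch of the table-filling computation in C, using the recurrence above. As a small space-saving variant (an optimization over the full Θ(nk) table), it keeps a single row and updates it from right to left, so c[j] always holds C(i,j) for the current row i:

```c
/* Computes C(n,k) by filling Pascal's triangle row by row.
 * A single row of the table is reused: updating j from right to left
 * means c[j-1] still holds C(i-1,j-1) when c[j] is updated. */
long binomial(int n, int k) {
    long c[k + 1];
    for (int j = 0; j <= k; j++) c[j] = 0;
    c[0] = 1;                               /* C(i,0) = 1 for every row */
    for (int i = 1; i <= n; i++)
        for (int j = (i < k ? i : k); j >= 1; j--)
            c[j] = c[j] + c[j - 1];         /* C(i,j) = C(i-1,j) + C(i-1,j-1) */
    return c[k];
}
```

The time efficiency is still Θ(nk); only the space drops to Θ(k).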
Strassen's Matrix Multiplication

Suppose we want to multiply two matrices of size N × N: for example, A × B = C.

C11 = a11b11 + a12b21

C12 = a11b12 + a12b22

C21 = a21b11 + a22b21

C22 = a21b12 + a22b22

2×2 matrix multiplication can thus be accomplished in 8 multiplications, which gives the recursive algorithm a running time of O(n^log2 8) = O(n^3).
Basic Matrix Multiplication

void matrix_mult() {
    for (i = 1; i <= N; i++) {
        for (j = 1; j <= N; j++) {
            compute C[i][j];    /* inner product of row i of A and column j of B */
        }
    }
}

where  C(i,j) = Σ (k = 1..N) a(i,k) · b(k,j)

Time analysis:  T(N) = Σ (i = 1..N) Σ (j = 1..N) Σ (k = 1..N) c = cN^3 = O(N^3)
Algorithm to Multiply 2 Matrices

Input: Matrices Ap×q and Bq×r (with dimensions p×q and q×r)
Result: Matrix Cp×r resulting from the product A·B

MATRIX-MULTIPLY(Ap×q , Bq×r)
1. for i ← 1 to p
2. for j ← 1 to r
3. C[i, j] ← 0
4. for k ← 1 to q
5. C[i, j] ← C[i, j] + A[i, k] · B[k, j]
6. return C
Strassen's Matrix Multiplication

 Strassen showed that 2×2 matrix multiplication can be accomplished in 7 multiplications and 18 additions or subtractions, which reduces the running time to O(n^log2 7) ≈ O(n^2.807).

 This reduction is achieved with a divide-and-conquer approach.
Divide-and-Conquer

 Divide-and-conquer is a general algorithm design paradigm:
– Divide: divide the input data S into two or more disjoint subsets S1, S2, …
– Recur: solve the subproblems recursively
– Conquer: combine the solutions for S1, S2, …, into a solution for S
 The base cases for the recursion are subproblems of constant size
 Analysis can be done using recurrence equations
Divide and Conquer Matrix Multiply
A  B = R
A0 A1 B0 B1 A0B0+A1B2 A0B1+A1B3
 =
A2 A3 B2 B3 A2B0+A3B2 A2B1+A3B3

•Divide matrices into sub-matrices: A0 , A1, A2 etc


•Use blocked matrix multiply equations
•Recursively multiply sub-matrices

18
Divide and Conquer Matrix Multiply

A  B = R

a0  b0 = a0  b0

• Terminate recursion with a simple base case

19
Strassen's Matrix Multiplication

P1 = (A11 + A22)(B11 + B22)        C11 = P1 + P4 - P5 + P7
P2 = (A21 + A22) * B11             C12 = P3 + P5
P3 = A11 * (B12 - B22)             C21 = P2 + P4
P4 = A22 * (B21 - B11)             C22 = P1 + P3 - P2 + P6
P5 = (A11 + A12) * B22
P6 = (A21 - A11) * (B11 + B12)
P7 = (A12 - A22) * (B21 + B22)
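As a concrete check of these formulas, the seven products can be coded directly for the 2×2 base case. The row-major array layout (a[0..3] holding A11, A12, A21, A22) is an assumption made for this illustration:

```c
/* Strassen's 2x2 multiply: 7 multiplications instead of 8.
 * a, b, c are 2x2 matrices stored row-major: {m11, m12, m21, m22}. */
void strassen2x2(const int a[4], const int b[4], int c[4]) {
    int p1 = (a[0] + a[3]) * (b[0] + b[3]);  /* (A11+A22)(B11+B22) */
    int p2 = (a[2] + a[3]) * b[0];           /* (A21+A22)B11       */
    int p3 = a[0] * (b[1] - b[3]);           /* A11(B12-B22)       */
    int p4 = a[3] * (b[2] - b[0]);           /* A22(B21-B11)       */
    int p5 = (a[0] + a[1]) * b[3];           /* (A11+A12)B22       */
    int p6 = (a[2] - a[0]) * (b[0] + b[1]);  /* (A21-A11)(B11+B12) */
    int p7 = (a[1] - a[3]) * (b[2] + b[3]);  /* (A12-A22)(B21+B22) */
    c[0] = p1 + p4 - p5 + p7;                /* C11 */
    c[1] = p3 + p5;                          /* C12 */
    c[2] = p2 + p4;                          /* C21 */
    c[3] = p1 + p3 - p2 + p6;                /* C22 */
}
```

Multiplying [[1,2],[3,4]] by [[5,6],[7,8]] this way gives [[19,22],[43,50]], the same result as the classical 8-multiplication formulas.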
Comparison

C11 = P1 + P4 - P5 + P7
    = (A11 + A22)(B11 + B22) + A22(B21 - B11) - (A11 + A12)B22 + (A12 - A22)(B21 + B22)
    = A11B11 + A11B22 + A22B11 + A22B22 + A22B21 - A22B11 - A11B22 - A12B22 + A12B21 + A12B22 - A22B21 - A22B22
    = A11B11 + A12B21
Divide and Conquer Matrix Multiply: Code

// Divide matrices into sub-matrices and recursively multiply the
// sub-matrices (the plain 8-multiplication divide-and-conquer scheme).
// Assumes each matrix is stored with its four quadrants contiguous
// (block order) and n is the number of elements in the matrix, so
// each quadrant holds n/4 elements.
void matmul(int *A, int *B, int *R, int n) {
  if (n == 1) {
    (*R) += (*A) * (*B);
  } else {
    matmul(A,          B,          R,          n/4);  /* A0*B0 -> R0 */
    matmul(A,          B+(n/4),    R+(n/4),    n/4);  /* A0*B1 -> R1 */
    matmul(A+2*(n/4),  B,          R+2*(n/4),  n/4);  /* A2*B0 -> R2 */
    matmul(A+2*(n/4),  B+(n/4),    R+3*(n/4),  n/4);  /* A2*B1 -> R3 */
    matmul(A+(n/4),    B+2*(n/4),  R,          n/4);  /* A1*B2 -> R0 */
    matmul(A+(n/4),    B+3*(n/4),  R+(n/4),    n/4);  /* A1*B3 -> R1 */
    matmul(A+3*(n/4),  B+2*(n/4),  R+2*(n/4),  n/4);  /* A3*B2 -> R2 */
    matmul(A+3*(n/4),  B+3*(n/4),  R+3*(n/4),  n/4);  /* A3*B3 -> R3 */
  }
}
Time Analysis

Strassen's algorithm performs 7 recursive multiplications on matrices of half the size, plus a constant number of matrix additions, giving the recurrence T(n) = 7T(n/2) + O(n^2), which solves to T(n) = O(n^log2 7) ≈ O(n^2.807).
Matrix-chain Multiplication

 Suppose we have a sequence, or chain, A1, A2, …, An of n matrices to be multiplied
– That is, we want to compute the product A1A2…An

 There are many possible ways (parenthesizations) to compute the product
Matrix-chain Multiplication

 Example: consider the chain A1, A2, A3, A4 of 4 matrices


– Let us compute the product A1A2A3A4
 There are 5 possible ways:
1. (A1(A2(A3A4)))
2. (A1((A2A3)A4))
3. ((A1A2)(A3A4))
4. ((A1(A2A3))A4)
5. (((A1A2)A3)A4)

Matrix-chain Multiplication
 To compute the number of scalar multiplications necessary, we
must know:
– Algorithm to multiply two matrices
– Matrix dimensions

Matrix-chain Multiplication

 Example: Consider three matrices A (10×100), B (100×5), and C (5×50)

 There are 2 ways to parenthesize:
– ((AB)C) = D (10×5) · C (5×50)
   AB: 10·100·5 = 5,000 scalar multiplications
   DC: 10·5·50 = 2,500 scalar multiplications     Total: 7,500
– (A(BC)) = A (10×100) · E (100×50)
   BC: 100·5·50 = 25,000 scalar multiplications
   AE: 10·100·50 = 50,000 scalar multiplications  Total: 75,000
Matrix-chain Multiplication
 Matrix-chain multiplication problem
– Given a chain A1, A2, …, An of n matrices, where for i = 1, 2, …, n, matrix Ai has dimension p(i-1) × p(i)
– Parenthesize the product A1A2…An such that the total number of scalar multiplications is minimized
 The brute-force method of exhaustive search takes time exponential in n
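The standard bottom-up DP for this problem can be sketched in C. The table entry m[i][j] holds the minimum cost of computing the subchain Ai..Aj; the dimension array p follows the convention above, with matrix Ai of size p[i-1] × p[i]:

```c
#include <limits.h>

/* Minimum number of scalar multiplications needed to compute
 * A1*A2*...*An, where Ai has dimensions p[i-1] x p[i].
 * m[i][j] = cost of an optimal parenthesization of Ai..Aj.
 * Theta(n^3) time, Theta(n^2) space. */
long matrix_chain(const int p[], int n) {
    long m[n + 1][n + 1];
    for (int i = 1; i <= n; i++)
        m[i][i] = 0;                            /* a single matrix costs nothing */
    for (int len = 2; len <= n; len++) {        /* subchain length */
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = LONG_MAX;
            for (int k = i; k < j; k++) {       /* split between Ak and Ak+1 */
                long q = m[i][k] + m[k + 1][j]
                       + (long)p[i - 1] * p[k] * p[j];
                if (q < m[i][j]) m[i][j] = q;
            }
        }
    }
    return m[1][n];
}
```

With the dimensions from the earlier example, p = {10, 100, 5, 50}, this returns 7,500, matching the ((AB)C) parenthesization.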

Dynamic Programming Approach
 The structure of an optimal solution
– Let us use the notation Ai..j for the matrix that results from the product
Ai Ai+1 … Aj
– An optimal parenthesization of the product A1A2…An splits the product between Ak and Ak+1 for some integer k where 1 ≤ k < n
– First compute matrices A1..k and Ak+1..n; then multiply them to get the final matrix A1..n

Dynamic Programming Approach

– Key observation: the parenthesizations of the subchains A1A2…Ak and Ak+1Ak+2…An must also be optimal if the parenthesization of the chain A1A2…An is optimal (why?)

– That is, the optimal solution to the problem contains within it the optimal solutions to subproblems
Summary

 Dynamic programming is typically applied to optimization problems. In such problems there can be many solutions.
 Each solution has a value, and we wish to find a solution with the optimal value.
 Many problems can be solved with the dynamic programming approach, such as computing Fibonacci numbers and binomial coefficients.
 The complexity of multiplying two n×n matrices directly is O(n^3).
 The complexity of multiplying two matrices with Strassen's approach is O(n^log2 7) ≈ O(n^2.807).
In Next Lecture

 In the next lecture, we will discuss the knapsack problem using the dynamic programming approach.