Design and Analysis of Algorithms - Dynamic Programming

Module 2

Dynamic Programming
1. All pairs shortest path problem
2. Travelling salesperson problem
3. Single source shortest path problem
• Dynamic programming was invented by Richard Bellman in the 1950s.
• The dynamic programming technique follows a bottom-up approach: the solution of the smallest subproblem is stored in a table and reused whenever required.
Characteristics of dynamic programming

• Optimal substructure
A problem exhibits optimal substructure if an optimal solution contains optimal solutions to its subproblems.

• Overlapping subproblems
When a recursive algorithm revisits the same subproblem over and over, the problem is said to have overlapping subproblems.
Example: Optimal substructure

[Figure: a weighted graph on vertices A-F]

If the shortest path from A to F passes through E, then:
Shortest path A->F = Shortest path A->E + Shortest path E->F
Example: Overlapping subproblems

/* simple recursive program for Fibonacci numbers */
int fib(int n)
{
    if (n <= 1)
        return n;
    return fib(n-1) + fib(n-2);
}

Each call recomputes the same Fibonacci values many times: fib(n) calls fib(n-2) directly and again inside fib(n-1), so the number of calls grows exponentially.
Principle of Optimality

The principle of optimality states that an optimal sequence of decisions has the property that, whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.

In other words, the optimal solution to a problem is a combination of optimal solutions to some of its subproblems.
Components of Dynamic Programming

1. Characterize the optimal substructure

In this step we characterize the structure of an optimal solution by showing that it can be decomposed into optimal subproblems and that the subproblems overlap.

2. Define the recursive solution

We can define a recursive algorithm if the problem has optimal substructure. The recursive solution is expressed as a recurrence relation that relates the value of the problem's solution to the values of the subproblems' solutions.

3. Compute the optimal value

Compute the value of an optimal solution in a bottom-up fashion, starting with the solution to the smallest subproblem. Store the subproblem solution values in a table, because each subproblem's solution may be used many times.

4. Construct the optimal solution

The optimal solution is constructed by combining the solutions of the subproblems.
Divide & Conquer vs. Dynamic Programming

Similarities
1. Both techniques view the original problem as a collection of subproblems.
2. Both are recursive in nature.

Dissimilarities

Dynamic Programming                          | Divide and Conquer
---------------------------------------------|---------------------------------------------
Generally used to solve optimization         | Generally used to solve decision
problems                                     | problems
Solves the problem using a bottom-up         | Solves the problem using a top-down
approach                                     | approach
Uses four steps to solve a problem:          | Uses three steps to solve a problem:
characterization, recursive relation,        | divide, conquer, combine
optimal value and optimal solution           |
Guarantees an optimal solution               | May or may not produce an optimal
                                             | solution
Subproblems may not be independent           | Subproblems are independent
Solutions to the subproblems are computed    | Solutions to the subproblems may be
once and stored in a table for later use     | computed recursively more than once
Many decision sequences may be generated     | Only one decision sequence is generated
Applications of dynamic programming
1. 0/1 knapsack problem
2. All pairs shortest path problem
3. Travelling salesperson problem
4. Fibonacci sequence
All Pairs Shortest Path Problem

Given a weighted connected graph (directed or undirected), the all pairs shortest path problem finds the length of the shortest path from each vertex to all other vertices. The distances are recorded in an n × n matrix D, called the distance matrix.
Floyd's Algorithm – All Pairs Shortest Path

• Floyd's algorithm is applicable to both directed and undirected graphs, provided that they do not contain a cycle of negative length.

Algorithm FloydAllpath(W[1..n, 1..n], n)
/* W is the weight matrix of a graph with no negative
   length cycle; n is the number of vertices; W[i,i] = 0 for
   1 ≤ i ≤ n. Output is the distance matrix D of the
   shortest path lengths. */
{
    D = W    // copy the costs to D
    for k = 1 to n
        for i = 1 to n
            for j = 1 to n
                D[i,j] = min(D[i,j], D[i,k] + D[k,j])
    return D
}
Time Analysis – All Pairs Shortest Path Algorithm

Algorithm FloydAllpath(W[1..n, 1..n], n)
{
    D = W                                            // n²
    for k = 1 to n                                   // n
        for i = 1 to n                               // n × n
            for j = 1 to n                           // n × n × n
                D[i,j] = min(D[i,j], D[i,k] + D[k,j])  // n³
    return D                                         // 1
}

Time complexity of the algorithm is O(n³).


Travelling Salesperson Problem

• Let G = (V, E) be a directed graph with edge costs cij.
• cij > 0 for all i and j, and cij = ∞ if (i, j) ∉ E.
• Let |V| = n and assume that n > 1.
• A tour of G is a directed simple cycle that includes every vertex in V.
• The cost of a tour is the sum of the costs of the edges on the tour.
• The travelling salesperson problem is to find a tour of minimum cost.
• Consider a tour that starts and ends at vertex 1.
• Every tour consists of an edge (1, k) for some k ∈ V − {1} and a path from vertex k to vertex 1.
• The path from vertex k to vertex 1 goes through each vertex in V − {1, k}.
• If the tour is optimal, then the path from k to 1 must be a shortest k-to-1 path going through all vertices in V − {1, k}. Hence the principle of optimality holds.
• Let g(i, S) be the length of the shortest path starting at vertex i, going through all the vertices in S, and terminating at vertex 1.
• The value g(1, V − {1}) is the length of an optimal salesperson tour.
• From the principle of optimality it follows that

    g(1, V − {1}) = min {c1k + g(k, V − {1, k})}    ---(1)
                    2≤k≤n

Generalizing, we obtain (for i ∉ S):

    g(i, S) = min {cij + g(j, S − {j})}             ---(a)
              j∈S

    g(i, Φ) = ci1,  1 ≤ i ≤ n

Hence, we can use equation (a) to obtain g(i, S) for all S of size 1, then for all S with |S| = 2, |S| = 3, and so on. When |S| < n − 1, the values of i and S for which g(i, S) is needed are such that i ≠ 1, 1 ∉ S and i ∉ S.

Time complexity of TSP = O(n²·2ⁿ)
Time Analysis of TSP

Let the number of vertices be n. Before calculating the final g value g(1, V − {1}), we have to calculate the g values for vertices 2 to n (i.e., for n − 1 vertices). Each of these vertices can be paired with 2^(n−2) subsets S, so we must calculate (n − 1)·2^(n−2) g values before the final one. A subset S with k elements needs k − 1 comparisons to compute its g value, so each g value takes at most n − 1 comparisons; in particular, n − 1 comparisons are needed to find the minimum in the final g value. Therefore, the time complexity is O(n²·2ⁿ).
• Consider the directed graph on vertices 1, 2, 3, 4 whose edge lengths are given in the matrix:

        0  10  15  20
        5   0   9  10
        6  13   0  12
        8   8   9   0

[Figure: complete directed graph on vertices 1, 2, 3, 4]
Solution

We have:
g(2, Φ) = c21 = 5
g(3, Φ) = c31 = 6
g(4, Φ) = c41 = 8

We start the tour from vertex 1; the base case g(i, Φ) is simply the cost of the direct edge from vertex i back to the start vertex 1.
Compute g(i, S) with |S| = 1:
g(2, {3}) = c23 + g(3, Φ) = 9 + 6 = 15
g(2, {4}) = c24 + g(4, Φ) = 10 + 8 = 18
g(3, {2}) = c32 + g(2, Φ) = 13 + 5 = 18
g(3, {4}) = c34 + g(4, Φ) = 12 + 8 = 20
g(4, {2}) = c42 + g(2, Φ) = 8 + 5 = 13
g(4, {3}) = c43 + g(3, Φ) = 9 + 6 = 15
Next compute g(i, S) with |S| = 2:
g(2, {3,4}) = min(c23 + g(3,{4}), c24 + g(4,{3}))
            = min(9 + 20, 10 + 15) = 25

g(3, {2,4}) = min(c32 + g(2,{4}), c34 + g(4,{2}))
            = min(13 + 18, 12 + 13) = 25

g(4, {2,3}) = min(c42 + g(2,{3}), c43 + g(3,{2}))
            = min(8 + 15, 9 + 18) = 23
Finally, from equation (1) we get:
g(1, {2,3,4}) = min {c12 + g(2,{3,4}), c13 + g(3,{2,4}), c14 + g(4,{2,3})}
              = min(10 + 25, 15 + 25, 20 + 23)
              = min(35, 40, 43)
              = 35

The length of the optimum tour is 35.

A tour of this length can be constructed by recording, at each step, the j that minimizes the right-hand side:
The minimum of g(1, {2,3,4}) is obtained from c12 + g(2,{3,4}), so the path starts 1 -> 2.
The minimum of g(2, {3,4}) is obtained from c24 + g(4,{3}), so the tour is 1 -> 2 -> 4 -> 3 -> 1.
Bellman-Ford Algorithm

• The Bellman-Ford algorithm computes the shortest paths from a single source vertex to all other vertices in a weighted directed connected graph.
• The algorithm can be applied to graphs containing negative edge weights.
• It works only if the graph has no negative weight cycle.
• Bellman-Ford also allows us to detect the presence of a negative weight cycle.
• The total complexity of the algorithm is O(V·E), where V is the number of vertices and E is the number of edges.
Bellman-Ford Algorithm – Pseudocode

Algorithm Bellman_Ford(G, W, S)
{
    Initialize_Single_Source(G, S)
    for i = 1 to (|V| - 1)
        for each edge (U, V) in E(G)
            Relax(U, V, W)
    for each edge (U, V) in G
        if (d[V] > d[U] + W(U, V))
            return False    // error: negative weight cycle exists
    return True             // return d[], p[]
}

Initialize_Single_Source(G, S)
{
    for each vertex V in G
        d[V] = infinite
        p[V] = NIL
    d[S] = 0
}

Relax(U, V, W)
{
    if (d[V] > d[U] + W(U, V))
        d[V] = d[U] + W(U, V)
        p[V] = U
}
Time Analysis – Bellman-Ford Algorithm

Initialization touches every vertex, the main loop relaxes all E edges (|V| − 1) times, and the final check scans all E edges once:

Total count = 2V + 2E + 2EV + 2 = O(EV)
Bellman-Ford Algorithm – Example

Edges relaxed in order: (1,2), (1,4), (3,2), (4,3)

Initialization:
vertex   1     2     3     4
d[v]     0     ∞     ∞     ∞
P[v]     Nil   Nil   Nil   Nil

Iteration 1:
vertex   1     2     3     4
d[v]     0     4     8     5
P[v]     Nil   1     4     1

Iteration 2:
vertex   1     2     3     4
d[v]     0     -2    8     5
P[v]     Nil   3     4     1

Iteration 3 (no change):
vertex   1     2     3     4
d[v]     0     -2    8     5
P[v]     Nil   3     4     1


Bellman-Ford Algorithm – Relaxations and resulting paths

Edges: (1,2), (1,4), (3,2), (4,3)

vertex   d[v]   P[v]   relaxed edge   d[u] + w(u,v)
1        0      Nil    -              -
2        -2     3      1-2            0 + 4 = 4
                       3-2            8 + (-10) = -2
3        8      4      4-3            5 + 3 = 8
4        5      1      1-4            0 + 5 = 5

vertex   path       cost
1        -          -
2        1-4-3-2    -2
3        1-4-3      8
4        1-4        5

After the final pass, if (d[v] > d[u] + w(u,v)) for some edge, return False (i.e., a negative weight cycle exists). Here there is no negative weight cycle.


Bellman-Ford Algorithm – Detecting negative weight cycles

Finding negative weight cycles using Bellman-Ford is as simple as checking the following:
• If the n-edge path solution is the same as the (n − 1)-edge solution, there is no negative weight cycle.
• If the n-edge path solution is smaller than the (n − 1)-edge solution, there is a negative weight cycle.
