Design and Analysis of Algorithm chapter 4 G3 CS 2016

UNIT-4
DYNAMIC PROGRAMMING & TRAVERSAL TECHNIQUES
Dynamic programming is a computer programming technique in which an algorithmic problem is
first broken down into sub-problems, the results of those sub-problems are saved, and the saved
results are then reused to build the overall solution, which is usually the maximum or minimum
value the problem asks for.
When a more extensive problem is broken down into smaller sub-problems, the sub-problems are
said to overlap when the same smaller sub-problem is reused several times on the way to the
overall solution.
Optimal substructure, on the other hand, means that the best solution to an issue can be built
from the best solutions to its parts. When a vast issue is split into its constituent parts, the
algorithm finds the best solution to each part and then combines those solutions to obtain the
optimal solution to the initial, more involved issue.
This technique solves problems by breaking them into smaller, overlapping sub-problems. The
results are then stored in a table to be reused so the same problem will not have to be computed
again.
For example, when using the dynamic programming technique to figure out all possible results
from a set of numbers, the first time the results are calculated, they are saved and put into the
equation later instead of being calculated again. So, when dealing with long, complicated
equations and processes, it saves time and makes solutions faster by doing less work.
A dynamic programming algorithm tries to find the shortest way to a solution when solving a
problem. It does this by working either from the top down or from the bottom up. The top-down
method breaks the problem into smaller sub-problems and reuses their answers whenever they
are needed again. The bottom-up approach also breaks the problem into smaller sub-problems,
but it solves the smallest sub-problem first and works its way up to the largest.
Recursion vs. dynamic programming
In computer science, recursion is a crucial concept in which the solution to a problem depends
on solutions to its smaller sub-problems.
Meanwhile, dynamic programming is an optimization technique for recursive solutions. It is the
preferred technique for solving recursive functions that make repeated calls to the same inputs.
A function is known as recursive if it calls itself during execution. This process can repeat itself
several times before the solution is computed and can repeat forever if it lacks a base case to
enable it to fulfill its computation and stop the execution.
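For instance, here is a minimal recursive function with a base case, written in Python purely for illustration:

def factorial(n):
    # base case: stops the recursion
    if n <= 1:
        return 1
    # recursive call on a smaller sub-problem
    return n * factorial(n - 1)

print(factorial(5))   # prints 120

Without the n <= 1 test, the calls would never stop and execution would end in a stack overflow.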
However, not all problems that use recursion can be solved by dynamic programming. Unless
solutions to the sub-problems overlap, a recursion solution can only be arrived at using a divide-
and-conquer method.
For example, merge sort and quick sort are not considered dynamic programming problems,
because they combine the best answers to sub-problems that do not overlap.
Drawbacks of recursion
Recursion uses memory space less efficiently. Each function call creates entries for all the
variables and constants of the function on the call stack, and those entries are kept there until
the call returns. Since the system has only a limited amount of stack space, this makes less
efficient use of memory, and a stack overflow error occurs if the recursive function requires
more memory than the stack can provide.


Recursion is also relatively slow in comparison to iteration, which uses loops. Every time a
function is called, there is the overhead of allocating space for the function and all its data on
the function stack. This causes a slight delay in recursive functions.
Dynamic programming can be achieved using two approaches:
1. Top-down approach
In computer science, problems are resolved by recursively formulating solutions,
employing the answers to the problem's sub-problems. If the answers to the sub-
problems overlap, they may be memoized, i.e., kept in a table for later use. The top-down
approach follows the strategy of memoization. The memoization process is equivalent
to recursion plus caching: recursion makes the function calls directly, whereas caching
preserves the intermediate results.
The top-down strategy has many benefits, including the following:
 The top-down approach is easy to understand and implement. In this approach, problems
are broken down into smaller parts, which helps users identify what needs to be done.
With each step, larger, more complex problems become smaller, less complicated, and,
therefore, easier to solve. Some parts may even be reusable for the same problem.
 It allows for sub-problems to be solved upon request. The top-down approach will enable
problems to be broken down into smaller parts and their solutions stored for reuse. Users
can then query solutions for each part.
 It is also easier to debug. Segmenting problems into small parts allows users to follow the
solution quickly and determine where an error might have occurred.
Disadvantages of the top-down approach include:
 The top-down approach uses the recursion technique, which occupies more memory in
the call stack. This leads to reduced overall performance. Additionally, when the
recursion is too deep, a stack overflow occurs.
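As a sketch of the top-down approach, the following Python fragment memoizes the classic recursive Fibonacci function; the caching layer is simply added on top of the recursion, exactly as described above:

from functools import lru_cache

@lru_cache(maxsize=None)      # caching: preserves the intermediate results
def fib(n):
    if n <= 1:                # base cases
        return n
    return fib(n - 1) + fib(n - 2)   # recursion: direct function calls

print(fib(40))   # 102334155, computed with O(n) distinct calls instead of O(2^n)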
2. Bottom-up approach
In the bottom-up method, once a solution to a problem is written in terms of its sub-
problems in a way that loops back on itself, users can rewrite the problem by solving the
smaller sub-problems first and then using their solutions to solve the larger sub-
problems.
Unlike the top-down approach, the bottom-up approach removes the recursion. Thus, there is
neither stack overflow nor overhead from recursive function calls, which also saves memory
space. Removing the recursion also avoids the time lost to recalculating the same values.
The advantages of the bottom-up approach include the following:
 It makes decisions about small reusable sub-problems and then decides how they will be
put together to create a large problem.
 It removes recursion, thus promoting the efficient use of memory space. Additionally,
this also leads to a reduction in time complexity.
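The same Fibonacci computation written bottom-up removes the recursion entirely; a table is filled from the smallest sub-problem upward:

def fib(n):
    if n <= 1:
        return n
    table = [0, 1]            # solutions to the two smallest sub-problems
    for i in range(2, n + 1):
        # build each larger answer from the two smaller ones already stored
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib(40))   # 102334155, with no recursive calls and no stack growth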
Problems suited to dynamic programming have two main features: overlapping sub-problems
and optimal substructure.
 Overlapping sub-problems
When the answer to the same sub-problem is needed more than once to solve the main problem,
we say that the sub-problems overlap. With overlapping sub-problems, solutions are put into a
table so developers can reuse them repeatedly instead of recalculating them. The recursive
program for the Fibonacci numbers has several overlapping sub-problems, but a binary search
has no overlapping sub-problems.


A binary search is solved using the divide and conquer technique: each sub-problem searches a
distinct sub-array for the value. Thus, binary search lacks the overlapping property.
For example, when finding the nth Fibonacci number, the problem F(n) is broken down into
finding F(n-1) and F(n-2). F(n-1) can itself be broken down into a sub-problem involving
F(n-2). In this scenario, F(n-2) is reused, and thus the Fibonacci sequence can be said to
exhibit the overlapping property.
 Optimal substructure
Optimal substructure refers to a property of a problem where an optimal solution to the overall
problem can be constructed from optimal solutions to its sub-problems.
In simpler terms, if a problem exhibits optimal substructure, it means that you can break down the
problem into smaller sub-problems and solve each sub-problem optimally. Then, you can
combine the optimal solutions of the sub-problems to obtain the optimal solution for the original
problem.
The key idea is that the optimal solution to a larger problem can be built by making optimal
choices at each step based on the optimal solutions to smaller sub-problems.

General Method:
 Dynamic programming is typically applied to optimization problems. Here the word
"programming" stands for planning.
 Dynamic programming is a technique for solving problems with overlapping
subproblems.
 In this method, each subproblem is solved only once. The result of each subproblem is
recorded in a table from which we can obtain a solution to the original problem.
 For a given problem we may get any number of solutions; among them we seek an
optimal solution (a maximum-value or minimum-value solution), and such an optimal
solution becomes the solution of the given problem.
Steps of Dynamic Programming:
It involves 4 steps:

 Characterize the structure of an optimal solution. That means develop a mathematical
notation that can express any solution and sub-solution for the given problem.
 Recursively define the value of an optimal solution.
 Compute the value of an optimal solution using the bottom-up technique. For that you
have to develop a recurrence relation that relates a solution to its subproblems, using the
mathematical notation of step 1.
 Construct an optimal solution from the computed information.

Difference between divide and conquer and dynamic programming

Divide and conquer:
1. The problem is divided into subproblems, and these subproblems are solved independently.
Finally, all the solutions of the subproblems are collected together to get the solution to the
given problem.
2. Duplications in sub-solutions are neglected, i.e., the same sub-solution may be computed
more than once.
3. It is less efficient because of the rework on solutions.
4. It uses the top-down approach for problem solving.
5. It splits the input at specific deterministic points, usually in the middle.

Dynamic programming:
1. A sequence of many decisions is generated, and all the overlapping sub-instances are
considered.
2. Duplication in solutions is avoided totally.
3. It is more efficient.
4. It uses the bottom-up approach for problem solving.
5. It splits its input at every possible split point rather than at one particular point. After trying
all split points, it determines which split point is optimal.
Difference between Greedy Algorithm and dynamic programming

Greedy method:
1. It is used for obtaining an optimum solution.
2. A set of feasible solutions is generated and the optimum solution is picked up.
3. The optimum selection is made without revising previously generated solutions.
4. There is no guarantee of getting an optimum solution.

Dynamic programming:
1. It is also used for obtaining an optimum solution.
2. There is no special set of feasible solutions in this method.
3. It considers all possible sequences in order to obtain the optimum solution.
4. It is guaranteed that dynamic programming will generate an optimal solution, using the
principle of optimality.

Principle of optimality:

The dynamic programming algorithm obtains the solution using the principle of optimality.

The principle of optimality states that, “In an optimal sequence of decisions or choices, each
subsequence must also be optimal”.

When it is not possible to apply the principle of optimality, it is almost impossible to obtain the
solution using dynamic programming.

Applications of dynamic programming:

 All pairs shortest path problem
 Multistage graph
 0/1 knapsack problem
 Optimal binary search tree

Multi stage Graphs:


 A multistage graph G = (V, E) is a directed graph.
 In this graph all the vertices are partitioned into k stages, where k ≥ 2.
 In the multistage graph problem, we have to find the shortest path from source to sink.
The cost of a path is calculated using the weights given along its edges.
 The cost of a path from the source (denoted by S) to the sink (denoted by T) is the sum
of the costs of the edges on the path.
 In the multistage graph problem we have to find such a path from S to T. There is a set
of vertices in each stage. The multistage graph problem can be solved using a forward or
a backward approach.
Let us solve the multistage graph with both approaches with the help of an example.
Consider the graph G as shown in fig.
There is a single vertex in stage 1, then 3 vertices in stage 2, then 2 vertices in stage 3, and
only one vertex in stage 4 (this is the target stage).

Backward approach:
d(S,T) = min{1 + d(A,T), 2 + d(B,T), 7 + d(C,T)} ............ (1)
We will now compute d(A,T), d(B,T) and d(C,T):
d(A,T) = min{3 + d(D,T), 6 + d(E,T)} ........................ (2)
d(B,T) = min{4 + d(D,T), 10 + d(E,T)} ....................... (3)
d(C,T) = min{3 + d(E,T), 10} ................................ (4)
(the 10 in equation (4) is the cost of the direct edge from C to T)

Now let us compute d(D,T) and d(E,T):

d(D,T) = 8
d(E,T) = 2

Let us put these values in equations (2), (3) and (4):

d(A,T) = min{3+8, 6+2} = 8, via A-E-T
d(B,T) = min{4+8, 10+2} = 12, via B-D-T
d(C,T) = min{3+2, 10} = 5, via C-E-T
d(S,T) = min{1 + d(A,T), 2 + d(B,T), 7 + d(C,T)}
       = min{1+8, 2+12, 7+5}
       = min{9, 14, 12} = 9, via S-A-E-T
The path with minimum cost is S-A-E-T, with cost 9.

This method is called backward reasoning.


The algorithm for this method follows:
Algorithm Backward_Gr(G, stages, n, p)
// Problem description: this algorithm is the backward approach for the multistage graph.
// Input: the multistage graph G = (V,E); 'stages' is the total number of stages;
// n is the total number of vertices of G; c is the cost matrix;
// p is an array for storing the path.
// Output: the path with minimum cost.
{
    back_cost[0] ← 0
    for i ← 1 to n-1 do
    {
        r ← Get_min(i, n)                        // r is the predecessor of i on a minimum-cost edge
        back_cost[i] ← back_cost[r] + c[r][i]    // computing the backward cost of each vertex
        d[i] ← r                                 // recording the best predecessor in array d
    }
    // finding the minimum cost path
    p[0] ← 0
    p[stages-1] ← n-1
    for i ← stages-2 downto 1 do
        p[i] ← d[p[i+1]]
}
Analysis: clearly, the above algorithm has time complexity Θ(|V| + |E|).
Forward approach:
d(S,A) = 1
d(S,B) = 2
d(S,C) = 7
d(S,D) = min{1 + d(A,D), 2 + d(B,D)}
       = min{1+3, 2+4}
d(S,D) = 4
d(S,E) = min{1 + d(A,E), 2 + d(B,E), 7 + d(C,E)}
       = min{1+6, 2+10, 7+3}
       = min{7, 12, 10} = 7, i.e. the path S-A-E is chosen.
d(S,T) = min{d(S,D) + d(D,T), d(S,E) + d(E,T), d(S,C) + d(C,T)}
       = min{4+8, 7+2, 7+10} = 9; the path through E (S-A-E followed by E-T) is chosen.
The minimum cost is 9, with the path S-A-E-T.
This method is called forward reasoning.
The algorithm for the forward approach follows:
Algorithm Forward_Gr(G, stages, n, p)
// Problem description: this algorithm is the forward approach for the multistage graph.
// Input: the multistage graph G = (V,E); 'stages' is the number of stages;
// n is the total number of vertices of G; c is the cost matrix;
// p is an array for storing the path.
// Output: the path with minimum cost.
{
    cost[n-1] ← 0
    for i ← n-2 downto 0 do
    {
        r ← Get_min(i, n)             // r is the successor of i on a minimum-cost edge
        cost[i] ← c[i][r] + cost[r]   // computing the forward cost of each vertex
        d[i] ← r                      // recording the best successor in array d
    }
    // finding the minimum cost path
    p[0] ← 0
    p[stages-1] ← n-1
    for i ← 1 to stages-2 do
        p[i] ← d[p[i-1]]
}
Analysis: the algorithm has Θ(|V| + |E|) time complexity.
The multistage graph problem is solved using the dynamic programming strategy. This is
because in the multistage graph problem we obtain the minimum path at each current stage by
considering the path lengths of the vertices obtained in the earlier stage.
Thus, the sequence of decisions is taken by considering overlapping solutions. In dynamic
programming we may get any number of solutions for the given problem; from all these
solutions we seek the optimal solution, and that optimal solution finally becomes the solution
to the given problem. The multistage graph problem is solved using this same approach.
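As an illustration, here is a minimal Python sketch of the backward approach. The stage structure and edge costs below are reconstructed from the worked example above (the original figure is not reproduced here), so treat them as assumptions:

INF = float('inf')

# stages[k] lists the vertices in stage k; edge costs taken from the example
stages = [['S'], ['A', 'B', 'C'], ['D', 'E'], ['T']]
cost = {('S','A'): 1, ('S','B'): 2, ('S','C'): 7,
        ('A','D'): 3, ('A','E'): 6, ('B','D'): 4, ('B','E'): 10,
        ('C','E'): 3, ('C','T'): 10, ('D','T'): 8, ('E','T'): 2}

def multistage_shortest_path(stages, cost):
    sink = stages[-1][0]
    d = {sink: 0}      # d[v] = cost of the cheapest path from v to the sink
    succ = {}          # succ[v] = next vertex on that cheapest path
    # process the stages from the one before the sink back to the source
    for stage in reversed(stages[:-1]):
        for v in stage:
            d[v], succ[v] = min(((cost[(v, w)] + d[w], w)
                                 for w in list(d) if (v, w) in cost),
                                default=(INF, None))
    # recover the path from the source to the sink
    source = stages[0][0]
    path, v = [source], source
    while v != sink:
        v = succ[v]
        path.append(v)
    return d[source], path

print(multistage_shortest_path(stages, cost))   # (9, ['S', 'A', 'E', 'T'])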
ALL PAIRS SHORTEST PATH PROBLEM:

Problem description:
Given a weighted graph represented by its weight matrix W, the objective is to find the
distance between every pair of nodes.
We will apply dynamic programming to solve the all pairs shortest path problem.
Step 1: we decompose the given problem into sub-problems.
Let A^k(i,j) be the length of the shortest path from node i to node j such that the label of
every intermediate node is ≤ k.
We will compute A^k for k = 1, ..., n, where n is the number of nodes.
Step 2:
For solving all pairs shortest path, the principle of optimality is used: any sub-path of a
shortest path is itself a shortest path between its end nodes. Split the path from node i to
node j at an intermediate node, say k. Then two cases arise:
the path goes from i to j via k, or
the path does not go via k.
Select only the shorter of the two cases.
Step 3:
The shortest paths can be computed using the bottom-up computation method.
Following the recursive method,
Initially: A^0(i,j) = W[i,j]
Next computation: A^k(i,j) = min(A^(k-1)(i,j), A^(k-1)(i,k) + A^(k-1)(k,j)), where 1 ≤ k ≤ n
The all pairs shortest path algorithm was proposed by R. Floyd; hence this algorithm is
sometimes called Floyd's algorithm.

This algorithm allows negative edge weights but does not allow negative cycles.

Example:

Compute all pair shortest path for the following graph


Solution:

(The weight matrix and the intermediate matrices A^1, A^2, A^3 are not reproduced here.)
A^3 gives the shortest distances between any pair of vertices.

Algorithm:
Algorithm all_pair(W, A)
{
    for i = 1 to n do
        for j = 1 to n do
            A[i,j] = W[i,j];   // copy the weights as they are into matrix A
    for k = 1 to n do
        for i = 1 to n do
            for j = 1 to n do
                A[i,j] = min(A[i,j], A[i,k] + A[k,j]);   // try k as an intermediate vertex
}
Analysis:
The basic operation, finding the minimum distance between a pair of vertices, lies within
three nested loops, so T(n) = O(n^3).
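The same computation in runnable form: a minimal Python sketch of Floyd's algorithm, run on a small 3-vertex weight matrix chosen here purely for illustration (float('inf') marks a missing edge):

def floyd(W):
    n = len(W)
    A = [row[:] for row in W]       # A starts as a copy of the weight matrix
    for k in range(n):              # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                # keep the better of the old path and the path through k
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

INF = float('inf')
W = [[0, 4, 11],
     [6, 0, 2],
     [3, INF, 0]]
print(floyd(W))   # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]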
0/1 KNAPSACK PROBLEM:
Problem description:

We are given n objects and a knapsack or bag; object i has weight wi, and placing it in the
knapsack earns profit pi. The knapsack has a capacity W. The objective is to fill the knapsack
so that the profit earned is maximum:
maximize Σ pi xi subject to the constraint Σ wi xi ≤ W,
where 1 ≤ i ≤ n, n is the total number of objects, and xi = 0 or 1.


The greedy method does not work for this problem.


To solve this problem using the dynamic programming method we perform the following steps.
Step 1: the notations used are:
Let fi(y) be the value of an optimal solution considering the first i objects with capacity y.
Then S^i is a set of pairs (p, w), where p = fi(y) and w = y.
Initially S^0 = {(0,0)}.
We can compute S^(i+1) from S^i.
These computations of S^i are basically the sequence of decisions made for obtaining optimal
solutions.
Step 2:
We can generate the sequence of decisions in order to obtain the optimum selection for
solving the knapsack problem.
Let Xn be the optimum sequence. Then there are two instances, {Xn} and {Xn-1, Xn-2, ..., X1}.
From {Xn-1, Xn-2, ..., X1} we choose the optimum sequence with respect to Xn. The
selection of the sequence from the remaining set should be such that we are able to fulfill the
condition of filling the knapsack of capacity W with maximum profit; otherwise Xn, ..., X1 is
not optimal.
This proves that the 0/1 knapsack problem is solved using the principle of optimality.
Step 3:
The formulas used while solving the 0/1 knapsack problem:
Let fi(y) be the value of an optimal solution. Then
fi(y) = max(fi-1(y), fi-1(y - wi) + pi)
Initially compute S^0 = {(0,0)}; then
S1^i = {(P + p(i+1), W + w(i+1)) | (P, W) ∈ S^i}
and S^(i+1) can be computed by merging S^i and S1^i.
Purging rule:
If S^(i+1) contains two pairs (Pj, Wj) and (Pk, Wk) such that Pj ≤ Pk and Wj ≥ Wk, then
(Pj, Wj) can be eliminated. This purging rule is also called the dominance rule: the dominated
tuples get purged. In short, remove the pair with less profit and more weight.
Example:
Solve the knapsack instance with M = 6 and n = 3; the pi and wi are as shown below:

i    pi    wi
1    1     2
2    2     3
3    5     4

Solution: let us build the sequence of decisions S^0, S^1, S^2.
S^0 = {(0,0)}
S1^0 = {(1,2)}
That means while building S1^0 we select the next, i.e. the first, (P,W) pair, which is (1,2).


Now,
S^1 = [merge S^0 and S1^0] = {(0,0), (1,2)}
S1^1 = {add the next (P,W) pair, (2,3), to every pair of S^1} = {(2,3), (2+0,3+0), (2+1,3+1)}
     = {(2,3), (3,5)}
i.e. the repetition of (2,3) is avoided.
S^2 = {merge candidates from S^1 and S1^1} = {(0,0), (1,2), (2,3), (3,5)}
S1^2 = {add the next (P,W) pair, (5,4), to every pair of S^2} = {(5,4), (6,6), (7,7), (8,9)}
Now, S^3 = {merge candidates from S^2 and S1^2} = {(0,0), (1,2), (2,3), (5,4), (6,6), (7,7), (8,9)}
Note that the pair (3,5) is purged from S^3. To see this, let (Pj,Wj) = (3,5) and
(Pk,Wk) = (5,4); here Pj ≤ Pk and Wj ≥ Wk holds, hence we eliminate the pair (Pj,Wj),
i.e. (3,5), from S^3.
As M = 6, we look for the tuple denoting weight 6: it is (6,6) ∈ S^3. Hence we set X3 = 1.
Now (6 - p3, 6 - w3) = (6-5, 6-4) = (1,2). The pair (1,2) is present in S^2, but it originated in
S^1; hence we set X1 = 1, X2 = 0, X3 = 1,
i.e. the solution is {1, 0, 1}.
Analysis: the worst-case time complexity is O(2^n).
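A minimal Python sketch of this set-of-pairs method, run on the same instance (p = [1, 2, 5], w = [2, 3, 4], M = 6). Unlike the hand trace above, the sketch drops overweight pairs as soon as they are generated rather than purging them at the end:

def knapsack_pairs(p, w, M):
    S = [(0, 0)]                           # S^0 = {(0,0)}; pairs are (profit, weight)
    for pi, wi in zip(p, w):
        # S1: add the current (pi, wi) to every pair, dropping overweight ones
        S1 = [(P + pi, W + wi) for (P, W) in S if W + wi <= M]
        merged = set(S) | set(S1)          # merge S^i and S1^i
        # purging (dominance) rule: scanning by increasing weight, keep a pair
        # only if its profit beats every lighter pair already kept
        S, best = [], -1
        for P, W in sorted(merged, key=lambda t: (t[1], -t[0])):
            if P > best:
                S.append((P, W))
                best = P
    return max(P for P, W in S)            # the best profit within capacity M

print(knapsack_pairs([1, 2, 5], [2, 3, 4], 6))   # 6, matching X = {1, 0, 1}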

Introduction
A graph is a nonlinear data structure which is used in many applications. A graph is basically
a collection of nodes or vertices that are connected by links called edges.

Graphs:
Definition of Graph:
A graph is a collection of two sets V and E, where V is a finite nonempty set of
vertices and E is a finite set of edges.
Vertices are nothing but the nodes in the graph, and two adjacent vertices are joined by an
edge. The graph is thus a pair of two sets. Any graph is denoted by G = (V, E).

Properties of Graph:

Complete Graph: if an undirected graph of n vertices consists of n(n-1)/2 edges,
then it is called a complete graph.

Subgraph: a subgraph G' of a graph G is a graph such that the set of vertices and the set of
edges of G' are subsets of the set of vertices and the set of edges of G.


Connected Graph: an undirected graph is said to be connected if for every pair of distinct
vertices Vi and Vj in V(G) there is a path from Vi to Vj in G.

Representation of Graphs:

There are several representations of graphs, but we will discuss the two commonly used
representations, called the adjacency matrix and the adjacency list.

Adjacency matrix:
Consider a graph G of n vertices and an n x n matrix M. If there is an edge present between
vertices Vi and Vj then M[i][j] = 1, else M[i][j] = 0. Note that for an undirected graph, if
M[i][j] = 1 then M[j][i] is also 1.

Adjacency list:
Here the graph is represented with linked lists, called adjacency lists, so all the advantages of
linked lists apply to this representation of the graph. In particular, we need no prior knowledge
of the maximum number of nodes.
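As a small illustration, the following Python sketch builds both representations for an undirected 4-vertex graph chosen here as an example:

n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# adjacency matrix: M[i][j] = 1 iff there is an edge between vertices i and j
M = [[0] * n for _ in range(n)]
for i, j in edges:
    M[i][j] = M[j][i] = 1      # undirected graph: the matrix is symmetric

# adjacency list: one growable list of neighbours per vertex
adj = {v: [] for v in range(n)}
for i, j in edges:
    adj[i].append(j)
    adj[j].append(i)

print(M[1][2], adj[2])         # 1 [0, 1, 3]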
Applications of Graph:
 Graphs are used in the design of communication and transportation networks, VLSI
and other sorts of logic circuits.
 Graphs are used for shape description in computer-aided design and geographic
information systems.
 Graphs of precedence constraints are used in scheduling systems.


Graph Traversals:
Traversing a graph means visiting all the remaining nodes or vertices of the graph starting
from a given vertex. There are two commonly used techniques for traversing a graph:
1. Breadth first search
2. Depth first search
Breadth First search (BFS)
Some terminology of BFS, formally:
Breadth first forest:
The breadth first forest is a collection of trees in which the traversal's starting vertex
serves as the root of the first tree in the forest. Whenever a new unvisited vertex is visited,
it is attached as a child to the vertex from which it is reached.
Consider the following graph.

Applications of Breadth First Search:
 Finding the connected components of the graph.
 Checking whether any cycle exists in the given graph.
 Obtaining the shortest path between two vertices.
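A minimal Python sketch of BFS on an adjacency-list graph follows; the sample graph is chosen purely for illustration, and the queue drives the traversal:

from collections import deque

def bfs(adj, start):
    visited = {start}
    order = []
    queue = deque([start])            # BFS is driven by a FIFO queue
    while queue:
        v = queue.popleft()
        order.append(v)               # vertices leave the queue in BFS order
        for w in adj[v]:
            if w not in visited:      # an unvisited vertex becomes a child of v
                visited.add(w)
                queue.append(w)
    return order

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(bfs(adj, 0))                    # [0, 1, 2, 3]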
Depth First Search (DFS):
The depth first forest is a collection of trees in which the traversal's starting vertex serves as
the root of the first tree in the forest. Whenever a new unvisited vertex is visited, it is attached
as a child to the vertex from which it is reached.

Applications of Depth First Search:


DFS is used for checking connectivity of a graph.

Start traversing the graph using the depth first method; if, after the algorithm halts, all the
vertices of the graph have been visited, then the graph is said to be connected.

DFS is used for checking the acyclicity of a graph: if the DFS forest does not have a back
edge, then the graph is said to be acyclic.

The graph shown here contains a cycle, because its DFS forest contains a back edge.

DFS is used to find articulation points.

A vertex of a connected graph is said to be an articulation point if its removal, together with
all its incident edges, breaks the graph into disjoint pieces.

In the figure, the vertex d is an articulation point.
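A minimal Python sketch covering two of the applications above, checking connectivity and detecting a cycle (a back edge) in an undirected graph; the sample graph is chosen for illustration:

def dfs(adj, v, parent, visited):
    visited.add(v)
    found_cycle = False
    for w in adj[v]:
        if w not in visited:
            # tree edge: recurse deeper and keep any cycle found below
            found_cycle = dfs(adj, w, v, visited) or found_cycle
        elif w != parent:
            found_cycle = True        # back edge to an already-visited vertex
    return found_cycle

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
visited = set()
cyclic = dfs(adj, 0, None, visited)
connected = len(visited) == len(adj)
print(connected, cyclic)              # True True: connected and contains a cycle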


Comparison between BFS and DFS

Depth First Traversal:
1. The traversal is done with the help of a stack data structure.
2. It works using two orderings: the first is the order in which the vertices are reached for the
first time (i.e. the visited vertices are pushed onto the stack), and the second is the order in
which the vertices become dead ends (the vertices are popped off the stack).
3. The DFS sequence is composed of tree edges and back edges.
4. With the adjacency matrix representation the efficiency is Θ(V^2); with the adjacency list
representation it is Θ(V + E).
5. Applications: to check the connectivity and acyclicity of a graph and to find articulation
points, DFS is used.

Breadth First Traversal:
1. The traversal is done with the help of a queue data structure.
2. It works using one ordering: the vertices are reached in the same order as they get removed
from the queue.
3. The BFS sequence is composed of tree edges and cross edges.
4. With the adjacency matrix representation the efficiency is Θ(V^2); with the adjacency list
representation it is Θ(V + E).
5. Applications: to check the connectivity and acyclicity of a graph and to find the shortest
path between two vertices, BFS is used.

