m5 Daa Tier
1. Backtracking
Some problems can be solved by exhaustive search. The exhaustive-search technique suggests
generating all candidate solutions and then identifying the one (or the ones) with a desired
property.
Backtracking is a more intelligent variation of this approach. The principal idea is to construct
solutions one component at a time and evaluate such partially constructed candidates as
follows. If a partially constructed solution can be developed further without violating the
problem’s constraints, it is done by taking the first remaining legitimate option for the next
component. If there is no legitimate option for the next component, no alternatives for any
remaining component need to be considered. In this case, the algorithm backtracks to replace
the last component of the partially constructed solution with its next option.
It is convenient to implement this kind of processing by constructing a tree of choices being
made, called the state-space tree. Its root represents an initial state before the search for a
solution begins. The nodes of the first level in the tree represent the choices made for the first
component of a solution; the nodes of the second level represent the choices for the second
component, and so on. A node in a state-space tree is said to be promising if it corresponds to
a partially constructed solution that may still lead to a complete solution; otherwise, it is called
non-promising. Leaves represent either non-promising dead ends or complete solutions found
by the algorithm.
In the majority of cases, a state-space tree for a backtracking algorithm is constructed in the
manner of depth-first search. If the current node is promising, its child is generated by adding
the first remaining legitimate option for the next component of a solution, and the processing
moves to this child. If the current node turns out to be non-promising, the algorithm backtracks
to the node’s parent to consider the next possible option for its last component; if there is no
such option, it backtracks one more level up the tree, and so on. Finally, if the algorithm reaches
a complete solution to the problem, it either stops (if just one solution is required) or continues
searching for other possible solutions.
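The process just described can be condensed into a short program. The following Python sketch is our own illustration, not an algorithm from the text: `options(partial)` stands for "the legitimate options for the next component", and the small constraint used to exercise it (bit strings with no two adjacent 1s) is purely illustrative.

```python
def backtrack(partial, n, options):
    """Depth-first walk of the state-space tree.

    partial -- the partially constructed solution (a list of components)
    n       -- number of components in a complete solution
    options -- options(partial) yields the legitimate (promising)
               choices for the next component
    """
    if len(partial) == n:            # a leaf holding a complete solution
        yield tuple(partial)
        return
    for choice in options(partial):  # promising children, in order
        partial.append(choice)
        yield from backtrack(partial, n, options)
        partial.pop()                # backtrack: replace the last component

def no_adjacent_ones(partial):
    """Illustrative constraint: bit strings with no two adjacent 1s."""
    for bit in (0, 1):
        if bit == 1 and partial and partial[-1] == 1:
            continue                 # non-promising: prune this child
        yield bit

solutions = list(backtrack([], 3, no_adjacent_ones))
print(solutions)   # the five length-3 bit strings that survive the pruning
```

Note how the generator `yield`s every complete solution it reaches and then resumes the search, matching the "continues searching for other possible solutions" behaviour described above.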
SVIT CSE 2
18
We start with the empty board and then place queen 1 in the first possible position of its row,
which is in column 1 of row 1. Then we place queen 2, after trying unsuccessfully columns 1
and 2, in the first acceptable position for it, which is square (2, 3), the square in row 2 and
column 3. This proves to be a dead end because there is no acceptable position for queen 3. So,
the algorithm backtracks and puts queen 2 in the next possible position at (2, 4). Then queen 3
is placed at (3, 2), which proves to be another dead end. The algorithm then backtracks all the
way to queen 1 and moves it to (1, 2). Queen 2 then goes to (2, 4), queen 3 to (3, 1), and queen
4 to (4, 3), which is a solution to the problem. The state-space tree of this search is shown in
figure.
If other solutions need to be found, the algorithm can simply resume its operations at the leaf
at which it stopped. Alternatively, we can use the board’s symmetry for this purpose.
Finally, it should be pointed out that a single solution to the n-queens problem for any n ≥ 4
can be found in linear time.
Note: The algorithm NQueens() is not in the syllabus. It is given here for interested learners.
The algorithm is referred from textbook T2.
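For interested learners, here is a compact backtracking solver in Python. It is our own sketch, not the NQueens() algorithm from T2: it places one queen per row, and a column is promising only if no earlier queen shares that column or a diagonal.

```python
def queens(n):
    """Return all n-queens solutions as tuples of 1-based column numbers,
    one entry per row, found by depth-first backtracking."""
    solutions = []

    def place(cols):                       # cols[r] = column of queen in row r
        row = len(cols)
        if row == n:                       # all n queens placed: a solution
            solutions.append(tuple(c + 1 for c in cols))
            return
        for col in range(n):
            # promising iff no shared column and no shared diagonal
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                place(cols + [col])        # extend the partial solution
            # otherwise this child is a dead end; try the next column

    place([])
    return solutions

print(queens(4))   # [(2, 4, 1, 3), (3, 1, 4, 2)]
```

The first solution found, (2, 4, 1, 3), is exactly the board reached in the walkthrough above: queens at (1, 2), (2, 4), (3, 1), and (4, 3).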
The root of the tree represents the starting point, with no decisions about the given elements
made as yet. Its left and right children represent, respectively, inclusion and exclusion of a1 in
a set being sought.
Similarly, going to the left from a node of the first level corresponds to inclusion of a 2 while
going to the right corresponds to its exclusion, and so on. Thus, a path from the root to a node
on the ith level of the tree indicates which of the first i numbers have been included in the
subsets represented by that node.
We record the value of s, the sum of these numbers, in the node. If s is equal to d, we have a
solution to the problem. We can either report this result and stop or, if all the solutions need
to be found, continue by backtracking to the node’s parent. If s is not equal to d, we can
terminate the node as non-promising if either of the following two inequalities holds (the
elements are assumed to be sorted in increasing order):
s + a(i+1) > d (the sum s is too large even if only the smallest remaining element is added)
s + a(i+1) + . . . + a(n) < d (the sum s is too small even if all the remaining elements are added)
Example: Apply backtracking to solve the following instance of the subset sum problem: A
= {1, 3, 4, 5} and d = 11.
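The algorithm can be sketched in Python as follows. The pruning tests are the two inequalities above and assume the elements are sorted in increasing order. Since the walkthrough figures are not reproduced here, the sketch is exercised on a different, assumed instance, A = {3, 5, 6, 7} with d = 15.

```python
def subset_sum(a, d):
    """Backtracking for the subset-sum problem.

    A node with sum s after deciding the first i elements is terminated as
    non-promising if s + a[i] > d (even the smallest remaining element
    overshoots d) or s + a[i] + ... + a[n-1] < d (even adding all the
    remaining elements falls short of d)."""
    a = sorted(a)                            # pruning assumes ascending order
    suffix = [0] * (len(a) + 1)              # suffix[i] = a[i] + ... + a[n-1]
    for i in range(len(a) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + a[i]
    solutions = []

    def explore(i, s, chosen):
        if s == d:                           # a complete solution: record it
            solutions.append(list(chosen))
            return
        if i == len(a):
            return                           # all elements decided, s != d
        if s + a[i] > d or s + suffix[i] < d:
            return                           # non-promising: prune this node
        explore(i + 1, s + a[i], chosen + [a[i]])   # include a[i]
        explore(i + 1, s, chosen)                   # exclude a[i]

    explore(0, 0, [])
    return solutions

print(subset_sum([3, 5, 6, 7], 15))   # [[3, 5, 7]]
```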
Analysis
How can we find a lower bound on the cost of an optimal selection without actually solving
the problem?
We can do this by several methods. For example, it is clear that the cost of any solution,
including an optimal one, cannot be smaller than the sum of the smallest elements in each
of the matrix’s rows. For the instance here, this sum is 2 + 3 + 1 + 4 = 10. We can and will apply
the same thinking to partially constructed solutions. For example, for any legitimate selection
that selects 9 from the first row, the lower bound will be 9 + 3 + 1 + 4 = 17.
Rather than generating a single child of the last promising node as we did in backtracking, we
will generate all the children of the most promising node among non-terminated leaves in the
current tree. (Nonterminated, i.e., still promising, leaves are also called live.) How can we
tell which of the nodes is most promising? We can do this by comparing the lower bounds of
the live nodes. It is sensible to consider a node with the best bound as most promising, although
this does not, of course, preclude the possibility that an optimal solution will ultimately belong
to a different branch of the state-space tree. This variation of the strategy is called the best-first
branch-and-bound.
We start with the root that corresponds to no elements selected from the cost matrix. The lower-
bound value for the root, denoted lb, is 10. The nodes on the first level of the tree correspond
to selections of an element in the first row of the matrix, i.e., a job for person a. See the figure
given below.
Figure: Levels 0 and 1 of the state-space tree for the instance of the assignment
problem being solved with the best-first branch-and-bound algorithm. The number
above a node shows the order in which the node was generated. A node’s fields
indicate the job number assigned to person a and the lower bound value, lb, for this
node.
So we have four live leaves—nodes 1 through 4—that may contain an optimal solution. The
most promising of them is node 2 because it has the smallest lower bound value. Following our
best-first search strategy, we branch out from that node first by considering the three different
ways of selecting an element from the second row that is not in the second column, i.e., the three
different jobs that can be assigned to person b. See the figure given below (Fig12.7).
Of the six live leaves—nodes 1, 3, 4, 5, 6, and 7—that may contain an optimal solution, we
again choose the one with the smallest lower bound, node 5. First, we consider selecting the
third column’s element from c’s row (i.e., assigning person c to job 3); this leaves us with no
choice but to select the element from the fourth column of d’s row (assigning person d to job
4). This yields leaf 8 (Figure 12.7), which corresponds to the feasible solution {a→2, b→1,
c→3, d→4} with the total cost of 13. Its sibling, node 9, corresponds to the feasible solution
{a→2, b→1, c→4, d→3} with the total cost of 25. Since its cost is larger than the cost of the
solution represented by leaf 8, node 9 is simply terminated. (Of course, if its cost were smaller
than 13, we would have to replace the information about the best solution seen so far with the
data provided by this node.)
Now, as we inspect each of the live leaves of the last state-space tree—nodes 1, 3, 4, 6, and 7
in Figure 12.7—we discover that their lower-bound values are not smaller than 13, the value
of the best selection seen so far (leaf 8). Hence, we terminate all of them and recognize the
solution represented by leaf 8 as the optimal solution to the problem.
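The walkthrough can be reproduced with a short best-first branch-and-bound routine. The cost matrix below is inferred from the sums quoted above (row minima 2 + 3 + 1 + 4 = 10, root bound 17 when 9 is selected, optimal cost 13) and should be checked against the figure; the lower bound of a partial assignment adds, for every person not yet assigned, the smallest element of that person's row.

```python
import heapq

def assignment_bb(C):
    """Best-first branch-and-bound for the assignment problem.
    C[i][j] = cost of assigning person i to job j.
    Returns (optimal cost, tuple of job indices, one per person)."""
    n = len(C)
    row_min = [min(row) for row in C]          # used by the bounding function

    def lb(jobs):                              # jobs = jobs fixed so far
        fixed = sum(C[i][jobs[i]] for i in range(len(jobs)))
        return fixed + sum(row_min[len(jobs):])

    best_cost, best_jobs = float("inf"), None
    live = [(lb(()), ())]                      # the root: nothing selected yet
    while live:
        bound, jobs = heapq.heappop(live)      # most promising live node
        if bound >= best_cost:
            continue                           # terminated: cannot improve
        if len(jobs) == n:                     # complete: bound is exact cost
            best_cost, best_jobs = bound, jobs
            continue
        for j in range(n):                     # branch on the next person
            if j not in jobs:
                child = jobs + (j,)
                b = lb(child)
                if b < best_cost:              # keep only promising children
                    heapq.heappush(live, (b, child))
    return best_cost, best_jobs

C = [[9, 2, 7, 8],     # person a     (matrix inferred from the text)
     [6, 4, 3, 7],     # person b
     [5, 8, 1, 8],     # person c
     [7, 6, 9, 4]]     # person d
print(assignment_bb(C))   # (13, (1, 0, 2, 3)), i.e. a→2, b→1, c→3, d→4
```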
But there is a less obvious and more informative lower bound for instances with symmetric
matrix D, which does not require a lot of work to compute. We can compute a lower bound on
the length l of any tour as follows. For each city i, 1≤ i ≤ n, find the sum si of the distances from
city i to the two nearest cities; compute the sum s of these n numbers, divide the result by 2,
and, if all the distances are integers, round up the result to the nearest integer:
lb = ⌈s/2⌉... (1)
For example, for the instance in Figure 2.2a, formula (1) yields
Moreover, for any subset of tours that must include particular edges of a given graph, we can
modify lower bound (formula 1) accordingly. For example, for all the Hamiltonian circuits of
the graph in Figure 2.2a that must include edge (a, d), we get the following lower bound by
summing up the lengths of the two shortest edges incident with each of the vertices, with the
required inclusion of edges (a, d) and (d, a):
We now apply the branch-and-bound algorithm, with the bounding function given by formula
(1), to find the shortest Hamiltonian circuit for the graph in Figure 2.2a.
To reduce the amount of potential work, we take advantage of two observations.
1. First, without loss of generality, we can consider only tours that start at a.
2. Second, because our graph is undirected, we can generate only tours in which b is
visited before c. (Refer Note at the end of section 2.2 for more details)
In addition, after visiting n − 1 = 4 cities, a tour has no choice but to visit the remaining unvisited
city and return to the starting one. The state-space tree tracing the algorithm’s application is
given in Figure 2.2b.
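Formula (1) is straightforward to code. Since the edge weights of Figure 2.2a are not reproduced in this text, the sketch below uses a small hypothetical distance matrix of its own; an exhaustive-search check on the same instance shows the bound happens to be tight there.

```python
import math
from itertools import permutations

def tsp_lower_bound(D):
    """lb = ceil(s/2), where s adds up, for every city, the distances to
    its two nearest cities (formula 1; symmetric integer distances)."""
    n = len(D)
    s = 0
    for i in range(n):
        nearest = sorted(D[i][j] for j in range(n) if j != i)
        s += nearest[0] + nearest[1]       # two nearest cities of city i
    return math.ceil(s / 2)

def tsp_optimal(D):
    """Exhaustive search, only for checking the bound on tiny instances."""
    n = len(D)
    return min(sum(D[p[k]][p[(k + 1) % n]] for k in range(n))
               for p in permutations(range(n)))

D = [[0, 2, 5, 7],      # hypothetical symmetric distance matrix
     [2, 0, 8, 3],
     [5, 8, 0, 1],
     [7, 3, 1, 0]]
print(tsp_lower_bound(D), tsp_optimal(D))   # 11 11
```

For this matrix the two-nearest-cities sums are 7, 5, 6, and 4, so s = 22 and lb = ⌈22/2⌉ = 11, which the optimal tour a, b, d, c, a of length 11 actually attains.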
Note: An inspection of graph with 4 nodes (figure given below) reveals three pairs of tours that
differ only by their direction. Hence, we could cut the number of vertex permutations by half.
We could, for example, choose any two intermediate vertices, say, b and c, and then consider
only permutations in which b precedes c. (This trick implicitly defines a tour’s direction.)
Figure: Solution to a small instance of the traveling salesman problem by exhaustive search.
Figure 2.2: (a) Weighted graph. (b) State-space tree of the branch-and-bound algorithm to find
a shortest Hamiltonian circuit in this graph. The list of vertices in a node specifies a beginning
part of the Hamiltonian circuits represented by the node.
Discussion
The strengths and weaknesses of backtracking are applicable to branch-and-bound as well. The
state-space tree technique enables us to solve many large instances of difficult combinatorial
problems. As a rule, however, it is virtually impossible to predict which instances will be
solvable in a realistic amount of time and which will not.
In contrast to backtracking, solving a problem by branch-and-bound has both the challenge and
opportunity of choosing the order of node generation and finding a good bounding function.
Though the best-first rule we used above is a sensible approach, it may or may not lead to a
solution faster than other strategies. (Artificial intelligence researchers are particularly
interested in different strategies for developing state-space trees.)
Finding a good bounding function is usually not a simple task. On the one hand, we want this
function to be easy to compute. On the other hand, it cannot be too simplistic; otherwise, it
would fail in its principal task of pruning as many branches of the state-space tree as soon as
possible. Striking a proper balance between these two competing requirements may require
intensive experimentation with a wide variety of instances of the problem in question.
It is convenient to order the items of a given instance in descending order by their value-to-
weight ratios.
Each node on the ith level of state space tree, 0 ≤ i ≤ n, represents all the subsets of n items that
include a particular selection made from the first i ordered items. This particular selection is
uniquely determined by the path from the root to the node: a branch going to the left indicates
the inclusion of the next item, and a branch going to the right indicates its exclusion.
We record the total weight w and the total value v of this selection in the node, along with some
upper bound ub on the value of any subset that can be obtained by adding zero or more items
to this selection. A simple way to compute the upper bound ub is to add to v, the total value of
the items already selected, the product of the remaining capacity of the knapsack W − w and
the best per-unit payoff among the remaining items, which is v(i+1)/w(i+1):
ub = v + (W − w)(v(i+1)/w(i+1)).
Example: Consider the following problem. The items are already ordered in descending order
of their value-to-weight ratios.
Let us apply the branch-and-bound algorithm. At the root of the state-space tree (see Figure
12.8), no items have been selected as yet. Hence, both the total weight of the items already
selected w and their total value v are equal to 0. The value of the upper bound is 100.
Node 1, the left child of the root, represents the subsets that include item 1. The total weight
and value of the items already included are 4 and 40, respectively; the value of the upper bound
is 40 + (10 − 4) * 6 = 76.
Node 2 represents the subsets that do not include item 1. Accordingly, w = 0, v = 0, and ub =
0 + (10 − 0) * 6 = 60. Since node 1 has a larger upper bound than the upper bound of node 2,
it is more promising for this maximization problem, and we branch from node 1 first. Its
children—nodes 3 and 4—represent subsets with item 1 and with and without item 2,
respectively. Since the total weight w of every subset represented by node 3 exceeds the
knapsack’s capacity, node 3 can be terminated immediately.
Node 4 has the same values of w and v as its parent; the upper bound ub is equal to 40 + (10
− 4) * 5 = 70. Selecting node 4 over node 2 for the next branching (due to its larger upper bound), we get
nodes 5 and 6 by respectively including and excluding item 3. The total weights and values as
well as the upper bounds for these nodes are computed in the same way as for the preceding
nodes.
Branching from node 5 yields node 7, which represents no feasible solutions, and node 8, which
represents just a single subset {1, 3} of value 65. The remaining live nodes 2 and 6 have smaller
upper-bound values than the value of the solution represented by node 8. Hence, both can be
terminated, making the subset {1, 3} of node 8 the optimal solution to the problem.
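The whole computation can be reproduced with a short best-first routine. The instance below (weights 4, 7, 5, 3; values 40, 42, 25, 12; W = 10, already ordered by value-to-weight ratio) is inferred from the bounds quoted above: it gives ub = 100 at the root, 76 at node 1, 60 at node 2, and the optimal subset {1, 3} of value 65. Check it against the figure before relying on it.

```python
import heapq

def knapsack_bb(items, W):
    """Best-first branch-and-bound for the knapsack problem.
    items -- (weight, value) pairs, sorted by value-to-weight ratio descending
    Returns (best value, list of 0-based indices of the chosen items)."""
    n = len(items)

    def ub(i, w, v):
        # upper bound: fill the remaining capacity W - w at the best
        # remaining per-unit payoff v[i]/w[i]
        return v + (W - w) * items[i][1] / items[i][0] if i < n else v

    best_v, best_set = 0, []
    live = [(-ub(0, 0, 0), 0, 0, 0, [])]   # (-ub, level, weight, value, chosen)
    while live:
        neg, i, w, v, chosen = heapq.heappop(live)
        if -neg <= best_v:                 # bound cannot beat the best value
            continue                       # terminate this node
        if i == n:
            continue                       # a leaf: nothing left to branch on
        wi, vi = items[i]
        if w + wi <= W:                    # left child: include item i
            if v + vi > best_v:            # update the best subset seen so far
                best_v, best_set = v + vi, chosen + [i]
            heapq.heappush(live, (-ub(i + 1, w + wi, v + vi),
                                  i + 1, w + wi, v + vi, chosen + [i]))
        # right child: exclude item i
        heapq.heappush(live, (-ub(i + 1, w, v), i + 1, w, v, chosen))
    return best_v, best_set

items = [(4, 40), (7, 42), (5, 25), (3, 12)]   # inferred instance, W = 10
print(knapsack_bb(items, 10))   # (65, [0, 2]), i.e. the subset {1, 3}
```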
preceding subsection.) For the knapsack problem, however, every node of the tree represents
a subset of the items given. We can use this fact to update the information about the best subset
seen so far after generating each new node in the tree. If we had done this for the instance
investigated above, we could have terminated nodes 2 and 6 before node 8 was generated
because they both are inferior to the subset of value 65 of node 5.
Conclusion
The assignment X := choice(1:n) could result in X being assigned any value from the integer
range [1..n]. There is no rule specifying how this value is chosen.
procedure NSORT(A, n);
//sort n positive integers//
var integer A(n), B(n), n, i, j;
begin
B := 0; //B is initialized to zero//
for i := 1 to n do
begin
j := choice(1:n);
if B(j) <> 0 then failure;
B(j) := A(i); //place A(i) at the guessed position j//
end;
for i := 1 to n - 1 do //verify that the guessed placement is sorted//
if B(i) > B(i + 1) then failure;
print(B);
success
end;
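Because choice(1:n) has no deterministic counterpart, a real machine can only simulate it by systematically trying every possible value and backtracking out of the failure branches. The following Python sketch of that simulation is our own construction, not an algorithm from the textbook:

```python
def nsort(A):
    """Deterministic simulation of the nondeterministic NSORT: explore
    every possible outcome of choice(1:n) and return the placement that
    survives the sortedness check (or None if no computation succeeds)."""
    n = len(A)

    def place(i, B):
        if i == n:
            # verification phase: succeed only if B is sorted
            if all(B[k] <= B[k + 1] for k in range(n - 1)):
                return B[:]
            return None                    # this computation ends in failure
        for j in range(n):                 # each possible value of choice(1:n)
            if B[j] is None:               # the B(j) <> 0 test
                B[j] = A[i]                # B(j) := A(i)
                result = place(i + 1, B)
                if result is not None:
                    return result          # a successful computation exists
                B[j] = None                # undo and try the next choice
        return None

    return place(0, [None] * n)

print(nsort([3, 1, 2]))   # [1, 2, 3]
```

The nondeterministic algorithm "succeeds" if any sequence of choices leads to success; the simulation makes that precise by exhausting the choice tree, at an exponential deterministic cost.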
Now, one more concept: given decision problems P and Q, if instances of Q can be transformed
in polynomial time into instances of P (so that an algorithm for P can be used to answer Q),
then Q is said to be polynomial-time reducible (or just reducible) to P.
The most famous unsolved problem in computer science is whether P = NP or P ≠ NP.
NP-Complete problems have the property that any one of them can be solved in polynomial
time if all other NP-Complete problems can be solved in polynomial time; i.e., if anyone ever
finds a polynomial-time solution to one NP-Complete problem, they have automatically got one
for all the NP-Complete problems, and that will also mean that P = NP.
An example of an NP-Complete problem is the CNF-satisfiability problem, which deals with
boolean expressions; its NP-completeness was proved by Cook in 1971. The CNF-satisfiability
problem asks whether or not one can assign the values true and false to the variables of a given
boolean expression in its CNF form so as to make the entire expression true.
Over the years many problems in NP have been proved to be in P (like Primality Testing). Still,
there are many problems in NP not proved to be in P; i.e., the question remains whether
P = NP. NP-Complete problems help in attacking this question. They are a subset of NP
problems with the property that all other NP problems can be reduced to any of them in
polynomial time. So, they are the hardest problems in NP in terms of running time. If it can be
shown that any NP-Complete problem is in P, then all problems in NP will be in P (because of
the NP-Complete definition), and hence P = NP = NPC.
NP-Hard Problems - These problems need not have any bound on their running time. If any
NP-Complete problem is polynomial-time reducible to a problem X, that problem X belongs
to the NP-Hard class. Hence, all NP-Complete problems are also NP-Hard. In other words, if an
NP-Hard problem is nondeterministic polynomial-time solvable, it is an NP-Complete problem.
An example of an NP-Hard problem that is not NP-Complete is the Halting Problem.
If an NP-Hard problem can be solved in polynomial time, then all NP-Complete problems can
be solved in polynomial time.
“All NP-Complete problems are NP-Hard, but not all NP-Hard problems are NP-Complete.”
NP-Complete problems are a subclass of the NP-Hard problems.
The more conventional optimization version of the Traveling Salesman Problem, finding the
shortest route, is NP-Hard, not strictly NP-Complete.
*****