DESIGN AND ANALYSIS OF ALGORITHMS
[R18A0507]
LECTURE NOTES
DEPARTMENT OF
INFORMATION TECHNOLOGY
UNIT I:
Introduction-Algorithm definition, Algorithm Specification, Performance Analysis- Space complexity,
Time complexity, Randomized Algorithms.
Divide and conquer- General method, applications - Binary search, Merge sort, Quick sort, Strassen's
Matrix Multiplication.
UNIT II:
Disjoint set operations, union and find algorithms, AND/OR graphs, Connected Components and Spanning
trees, Bi-connected components
Backtracking- General method, applications- The 8-queens problem, sum of subsets problem, graph coloring,
Hamiltonian cycles.
UNIT III:
Greedy method- General method, applications- Knapsack problem, Job sequencing with deadlines,
Minimum cost spanning trees, Single source shortest path problem.
UNIT IV:
Dynamic Programming- General Method, applications- Chained matrix multiplication, All pairs shortest
path problem, Optimal binary search trees, 0/1 knapsack problem, Reliability design, Traveling salesperson
problem.
UNIT V:
Branch and Bound- General Method, applications-0/1 Knapsack problem, LC Branch and Bound solution,
FIFO Branch and Bound solution, Traveling sales person problem.
NP-Hard and NP-Complete problems- Basic concepts, Non-deterministic algorithms, NP-Hard and
NP-Complete classes, Cook's theorem.
TEXT BOOKS:
1. Fundamentals of Computer Algorithms, 2nd Edition, Ellis Horowitz, Sartaj Sahni and S. Rajasekharan, Universities Press.
2. Design and Analysis of Algorithms, P. H. Dave, H. B. Dave, 2nd edition, Pearson Education.
REFERENCES:
1. Algorithm Design: Foundations, Analysis and Internet Examples, M. T. Goodrich and R. Tamassia, John Wiley & Sons.
2. Design and Analysis of Algorithms, S. Sridhar, Oxford Univ. Press.
3. Design and Analysis of Algorithms, Aho, Ullman and Hopcroft, Pearson Education.
4. Foundations of Algorithms, R. Neapolitan and K. Naimipour, 4th edition, Jones and Bartlett Student edition.
5. Introduction to Algorithms, 3rd Edition, T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, PHI.
Outcomes:
Ability to analyze the performance of algorithms.
Ability to choose appropriate algorithm design techniques for solving problems.
Ability to understand how the choice of data structures and the algorithm design methods impact the performance of programs.
Algorithm:
An Algorithm is a finite sequence of instructions, each of which has a clear meaning and can be
performed with a finite amount of effort in a finite length of time. No matter what the input values may
be, an algorithm terminates after executing a finite number of instructions. In addition, every algorithm must satisfy the following criteria:
1. Input: zero or more quantities are externally supplied.
2. Output: at least one quantity is produced.
3. Definiteness: each instruction is clear and unambiguous.
4. Finiteness: if we trace out the instructions, then for all cases the algorithm terminates after a finite number of steps.
5. Effectiveness: every instruction must be basic enough to be carried out, in principle, by a person using only pencil and paper.
In formal computer science, one distinguishes between an algorithm, and a program. A program does not
necessarily satisfy the fourth condition. One important example of such a program for a computer is its
operating system, which never terminates (except for system crashes) but continues in a wait loop until
more jobs are entered.
An algorithm can be specified in several ways, for example:
1. Natural language like English.
2. Graphic representation called flowchart: This method works well when the algorithm is small & simple.
3. Pseudo-code, whose conventions are described next.
Pseudo-Code Conventions:
3. An identifier begins with a letter. The data types of variables are not explicitly declared.
4. Compound data types can be formed with records. Here is an example:
node = record
{
datatype_1 data_1;
.
.
.
datatype_n data_n;
node *link;
}
Here link is a pointer to the record type node. Individual data items of a record can
be accessed using the record name and a period.
While Loop:
while <condition> do
{
<statement-1>
.
.
.
<statement-n>
}
For Loop:
for variable := value-1 to value-2 step step do
{
<statement-1>
.
.
.
<statement-n>
}
repeat-until:
repeat
<statement-1>
.
.
.
<statement-n>
until <condition>
Case statement:
Case
{
: <condition-1>:<statement-1>
.
.
.
: <condition-n>:<statement-n>
: else : <statement-n+1>
}
9. Input and output are done using the instructions read & write.
As an example, the following algorithm finds & returns the maximum of 'n' given numbers:
Algorithm Max(A, n)
// A is an array of size n.
{
Result := A[1];
for i := 2 to n do
if A[i] > Result then
Result := A[i];
return Result;
}
In this algorithm (named Max), A & n are procedure parameters; Result & i are local variables.
Algorithm:
As a second example, the following algorithm sorts the array a[1 : n] into non-decreasing order by repeatedly selecting the smallest remaining element (selection sort):
Algorithm SelectionSort(a, n)
{
for i := 1 to n do
{
j := i;
for k := i + 1 to n do
if (a[k] < a[j]) then j := k;
t := a[i]; a[i] := a[j]; a[j] := t;
}
}
Performance Analysis:
The performance of a program is the amount of computer memory and time needed to
run a program. We use two approaches to determine the performance of a program: one is analytical, and the other experimental. In performance analysis we use analytical methods, while in performance measurement we conduct experiments.
Time Complexity:
The time needed by an algorithm expressed as a function of the size of a problem is
called the time complexity of the algorithm. The time complexity of a program is the
amount of computer time it needs to run to completion.
The limiting behavior of the complexity as size increases is called the asymptotic time
complexity. It is the asymptotic complexity of an algorithm, which ultimately determines
the size of problems that can be solved by the algorithm.
As an example, consider the step count of the following algorithm, where s/e is the number of steps per execution of a statement:

Statement                      s/e    Frequency    Total steps
1. Algorithm Sum(a, n)          0        -             0
2. {                            0        -             0
3.   s := 0.0;                  1        1             1
4.   for i := 1 to n do         1       n+1           n+1
5.     s := s + a[i];           1        n             n
6.   return s;                  1        1             1
7. }                            0        -             0
Total                                                2n + 3
Space Complexity:
The space complexity of a program is the amount of memory it needs to run to completion. The space needed by a program has the following components:
Instruction space: Instruction space is the space needed to store the compiled
version of the program instructions.
Data space: Data space is the space needed to store all constant and variable
values. Data space has two components:
Space needed by constants and simple variables in the program.
Space needed by dynamically allocated objects such as arrays and class instances.
Environment stack space: The environment stack is used to save information
needed to resume execution of partially completed functions.
Instruction Space: The amount of instruction space that is needed depends on factors such as:
The compiler used to compile the program into machine code.
The compiler options in effect at the time of compilation.
The target computer.
The space requirement S(P) of any algorithm P may therefore be written as
S(P) = c + Sp(instance characteristics),
where 'c' is a constant.
Example 2:
Algorithm sum(a,n)
{
s := 0.0;
for i := 1 to n do
s := s + a[i];
return s;
}
The problem instances for this algorithm are characterized by n, the number of elements to be summed. The space needed by 'n' is one word, since it is of type integer.
The space needed by 'a' is the space needed by variables of type array of floating point numbers. This is at least 'n' words, since 'a' must be large enough to hold the 'n' elements to be summed.
So, we obtain Ssum(n) >= (n + 3)
(n for a[], one word each for n, i and s).
Complexity of Algorithms
The complexity of an algorithm M is the function f(n) which gives the running time and/or storage space requirement of the algorithm in terms of the size 'n' of the input data. Mostly, the storage space required by an algorithm is simply a multiple of the data size 'n'. Here, complexity shall refer to the running time of the algorithm.
The function f(n), which gives the running time of an algorithm, depends not only on the size 'n' of the input data but also on the particular data. The complexity function f(n) for certain cases is:
1. Best Case : The minimum possible value of f(n) is called the best case.
2. Average Case : The expected value of f(n) over all possible inputs.
3. Worst Case : The maximum value of f(n) for any possible input.
Asymptotic Notations:
The following notations are commonly used in performance analysis to characterize the complexity of an algorithm:
1. Big–OH (O)
2. Big–OMEGA (Ω)
3. Big–THETA (Θ) and
4. Little–OH (o)
Big–OH O (Upper Bound)
f(n) = O(g(n)), (pronounced order of or big oh), says that the growth rate of f(n) is less than or equal to (<=) that of g(n).
Big–OMEGA Ω (Lower Bound)
f(n) = Ω(g(n)) (pronounced omega), says that the growth rate of f(n) is greater than or equal to (>=) that of g(n).
Big–THETA Θ (Same order)
f(n) = Θ(g(n)) (pronounced theta), says that the growth rate of f(n) equals (=) the growth rate of g(n) (that is, f(n) = O(g(n)) and f(n) = Ω(g(n))).
little-o notation
Definition: A theoretical measure of the execution of an algorithm, usually the time or memory needed,
given the problem size n, which is usually the number of items. Informally, saying some equation f(n) =
o(g(n)) means f(n) becomes insignificant relative to g(n) as n approaches infinity. The notation is read, "f
of n is little oh of g of n".
Formal Definition: f(n) = o(g(n)) means for all c > 0 there exists some k > 0 such that 0 ≤ f(n) < cg(n) for
all n ≥ k. The value of k must not depend on n, but may depend on c.
The most common computing times of algorithms are O(1), O(log2 n), O(n), O(n log2 n), O(n^2), O(n^3), O(2^n), O(n!) and O(n^n).
Classification of Algorithms
Let 'n' be the number of data items to be processed, the degree of a polynomial, the size of the file to be sorted or searched, the number of nodes in a graph, etc. The common running times are then characterized as follows:
1 - The instructions of most programs are executed once or at most only a few times. If all the instructions of a program have this property, we say that its running time is a constant.
Log n - When the running time of a program is logarithmic, the program gets slightly slower as n grows. This running time commonly occurs in programs that solve a big problem by transforming it into a smaller problem, cutting the size by some constant fraction. When n is a million, log n is about twenty. Whenever n doubles, log n increases by a constant, but log n does not double until n increases to n^2.
n - When the running time of a program is linear, it is generally the case that a small amount of processing is done on each input element. This is the optimal situation for an algorithm that must process n inputs.
n log n - This running time arises for algorithms that solve a problem by breaking it up into smaller sub-problems, solving them independently, and then combining the solutions. When n doubles, the running time more than doubles.
The execution time for six of the typical functions is given below:
n      log2 n    n log2 n    n^2       n^3           2^n
1        0          0          1          1             2
2        1          2          4          8             4
4        2          8          16         64            16
8        3          24         64         512           256
16       4          64         256        4,096         65,536
32       5          160        1,024      32,768        4,294,967,296
64       6          384        4,096      262,144       approx. 1.8 x 10^19
128      7          896        16,384     2,097,152     approx. 3.4 x 10^38
256      8          2,048      65,536     16,777,216    approx. 1.2 x 10^77
Randomized algorithms:
An algorithm that uses random numbers to decide what to do next anywhere in its logic is called
Randomized Algorithm. For example, in Randomized Quick Sort, we use random number to pick the next
pivot (or we randomly shuffle the array). Quicksort is a familiar, commonly used algorithm in which
randomness can be useful. Any deterministic version of this algorithm requires O(n2) time to
sort n numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the
specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if
the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing
in O(n log n) time regardless of the characteristics of the input. Typically, this randomness is used to
reduce time complexity or space complexity in other standardalgorithms.
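The idea can be made concrete with a minimal Python sketch (this illustration, and the name randomized_quicksort, are ours, not from the text):

import random

def randomized_quicksort(a):
    # Lists of length 0 or 1 are already sorted.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)              # pivot chosen uniformly at random
    smaller = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    larger  = [x for x in a if x > pivot]
    # Sort the two sides recursively and patch the pieces together.
    return randomized_quicksort(smaller) + equal + randomized_quicksort(larger)

print(randomized_quicksort([38, 8, 16, 6, 79, 57, 24]))   # [6, 8, 16, 24, 38, 57, 79]

Because the pivot is random, no fixed input can force the degenerate O(n^2) behaviour described above.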
Divide and Conquer
General Method:
Divide and conquer is a design strategy well known for breaking down efficiency barriers. When the method applies, it often leads to a large improvement in time complexity, for example from O(n^2) to O(n log n) for sorting the elements.
Divide and conquer strategy is as follows: divide the problem instance into two or
more smaller instances of the same problem, solve the smaller instances recursively,
and assemble the solutions to form a solution of the original instance. The recursion
stops when an instance is reached which is too small to divide. When dividing the
instance, one can either use whatever division comes most easily to hand or invest
time in making the division carefully so that the assembly is simplified.
Divide: Divide the problem into a number of sub problems. The sub problems are solved recursively.
Conquer: The solution to the original problem is then formed from the solutions to the sub problems (patching together the answers).
Traditionally, routines in which the text contains at least two recursive calls are called divide and conquer algorithms, while routines whose text contains only one recursive call are not. Divide and conquer is a very powerful use of recursion.
DANDC(P)
{
if SMALL(P) then return S(P);
else
{
divide P into smaller instances P1, P2, ..., Pk, k >= 1;
apply DANDC to each of these sub problems;
return COMBINE(DANDC(P1), DANDC(P2), ..., DANDC(Pk));
}
}
SMALL(P) is a Boolean valued function which determines whether the input size is small enough so that the answer can be computed without splitting. If this is so, the function S is invoked; otherwise the problem P is divided into smaller sub problems. These sub problems P1, P2, ..., Pk are solved by recursive application of DANDC.
If the sizes of the two sub problems are approximately equal then the computing
time of DANDC is:
T(n) = g(n)                n small
T(n) = 2 T(n/2) + f(n)     otherwise
Binary Search:
Suppose we have 'n' records which have been ordered by keys so that x1 < x2 < ... < xn. When we are given an element 'x', binary search is used to find the corresponding element from the list. In case 'x' is present, we have to determine a value 'j' such that a[j] = x (successful search). If 'x' is not in the list then j is set to zero (unsuccessful search).
In binary search we jump into the middle of the file, where we find key a[mid], and compare 'x' with a[mid]. If x = a[mid] then the desired record has been found. If x < a[mid] then 'x' must be in that portion of the file that precedes a[mid], if it is there at all. Similarly, if a[mid] < x, then further search is only necessary in that part of the file which follows a[mid]. If we use this recursive procedure of finding the middle key a[mid] of the un-searched portion of the file, then every un-successful comparison of 'x' with a[mid] will eliminate roughly half the un-searched portion from consideration.
Since the array size is roughly halved after each comparison between 'x' and a[mid], and since an array of length 'n' can be halved only about log2 n times before reaching a trivial length, the worst case complexity of binary search is about log2 n.
In the iterative algorithm, low and high are integer variables such that each time through the loop either 'x'
is found or low is increased by at least one or high is decreased by at least one. Thus
we have two sequences of integers approaching each other and eventually low will
become greater than high causing termination in a finite number of steps if „x‟ is not
present.
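The iterative algorithm is not reproduced in these notes, so the following Python sketch (our illustration) captures the low/high mechanism just described, using the document's 1-based position convention and returning 0 on failure:

def binary_search(a, x):
    # a is sorted ascending; returns the 1-based position j with a[j] = x, or 0.
    low, high = 1, len(a)
    while low <= high:
        mid = (low + high) // 2
        if x == a[mid - 1]:
            return mid           # successful search
        elif x < a[mid - 1]:
            high = mid - 1       # x, if present, precedes a[mid]
        else:
            low = mid + 1        # x, if present, follows a[mid]
    return 0                     # unsuccessful search

print(binary_search([-15, -6, 0, 7, 9, 23, 54, 82, 101], 23))   # 6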
Example for Binary Search
Index 1 2 3 4 5 6 7 8 9
Elements -15 -6 0 7 9 23 54 82 101
Searching for x = 101, for example, requires 4 comparisons.
Continuing in this manner the number of element comparisons needed to find each of
nine elements is:
Index 1 2 3 4 5 6 7 8 9
Elements -15 -6 0 7 9 23 54 82 101
Comparisons 3 2 3 4 1 3 2 3 4
There are ten possible ways that an un-successful search may terminate depending
upon the value of x.
If x < a[1], a[1] < x < a[2], a[2] < x < a[3], a[5] < x < a[6], a[6] < x < a[7] or a[7] < x < a[8], the algorithm requires 3 element comparisons to determine that 'x' is not present. For all of the remaining possibilities BINSRCH requires 4 element comparisons. Thus the average number of element comparisons for an unsuccessful search is:
(3 + 3 + 3 + 4 + 4 + 3 + 3 + 3 + 4 + 4) / 10 = 34/10 = 3.4
The time complexity for a successful search is O(log n) and for an unsuccessful
search is Θ(log n).
Therefore,
T(0) = 0
T(n) = 1                              if x = a[mid]
     = 1 + T([(n + 1)/2] - 1)         if x < a[mid]
     = 1 + T(n - [(n + 1)/2])         if x > a[mid]
With n = 2^k - 1 both halves have the same size: algebraically, [(n + 1)/2] - 1 = 2^(k-1) - 1 and n - [(n + 1)/2] = 2^(k-1) - 1 for k >= 1.
Giving,
T(0) = 0
T(2^k - 1) = 1                        if x = a[mid]
           = 1 + T(2^(k-1) - 1)       if x < a[mid]
           = 1 + T(2^(k-1) - 1)       if x > a[mid]
In the worst case the test x = a[mid] always fails, so
w(0) = 0
w(2^k - 1) = 1 + w(2^(k-1) - 1)
This is now solved by repeated substitution:
w(2^k - 1) = 1 + [1 + w(2^(k-2) - 1)]
           = 1 + [1 + [1 + w(2^(k-3) - 1)]]
           = . . . . . . . .
           = i + w(2^(k-i) - 1)
           = k + w(0) = k        (putting i = k)
Although it might seem that this restriction to values of 'n' of the form 2^k - 1 weakens the result, in practice it does not matter very much: w(n) is a monotonic increasing function of 'n', and hence the formula given is a good approximation even when 'n' is not of the form 2^k - 1.
Merge Sort:
Merge sort algorithm is a classic example of divide and conquer. To sort an array, recursively sort its left and right halves separately and then merge them. The time complexity of merge sort in the best case, worst case and average case is O(n log n), and the number of comparisons used is nearly optimal.
This strategy is simple and efficient, but the problem is that there seems to be no easy way to merge two adjacent sorted arrays together in place (the result must be built up in a separate array).
The fundamental operation in this algorithm is merging two sorted lists. Because the
lists are sorted, this can be done in one pass through the input, if the output is put in
a third list.
Algorithm
Algorithm MERGE (low, mid, high)
// a(low : high) is a global array containing two sorted subsets
// in a(low : mid) and in a(mid + 1 : high).
// The objective is to merge these sorted sets into a single sorted
// set residing in a(low : high). An auxiliary array b is used.
{
h := low; i := low; j := mid + 1;
while ((h <= mid) and (j <= high)) do
{
if (a[h] <= a[j]) then
{
b[i] := a[h]; h := h + 1;
}
else
{
b[i] := a[j]; j := j + 1;
}
i := i + 1;
}
if (h > mid) then
for k := j to high do
{
b[i] := a[k]; i := i + 1;
}
else
for k := h to mid do
{
b[i] := a[k]; i := i + 1;
}
for k := low to high do
a[k] := b[k];
}
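A direct Python rendering of this procedure, together with the recursive driver, may help; this is a sketch mirroring the pseudocode above (1-based, inclusive bounds):

def merge(a, b, low, mid, high):
    # Merge sorted a[low..mid] and a[mid+1..high] through the auxiliary array b.
    h, i, j = low, low, mid + 1
    while h <= mid and j <= high:
        if a[h - 1] <= a[j - 1]:
            b[i - 1] = a[h - 1]; h += 1
        else:
            b[i - 1] = a[j - 1]; j += 1
        i += 1
    if h > mid:                       # left half exhausted: copy the right half
        for k in range(j, high + 1):
            b[i - 1] = a[k - 1]; i += 1
    else:                             # right half exhausted: copy the left half
        for k in range(h, mid + 1):
            b[i - 1] = a[k - 1]; i += 1
    a[low - 1:high] = b[low - 1:high]

def merge_sort(a, low, high, b):
    # Recursively sort the two halves, then merge them.
    if low < high:
        mid = (low + high) // 2
        merge_sort(a, low, mid, b)
        merge_sort(a, mid + 1, high, b)
        merge(a, b, low, mid, high)

data = [7, 2, 9, 4, 3, 8, 6, 1]
merge_sort(data, 1, len(data), [0] * len(data))
print(data)   # [1, 2, 3, 4, 6, 7, 8, 9]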
Example
7, 2, 9, 4 | 3, 8, 6, 1   gives   1, 2, 3, 4, 6, 7, 8, 9
7, 2 | 9, 4  gives  2, 4, 7, 9        3, 8 | 6, 1  gives  1, 3, 6, 8
7 gives 7;  2 gives 2;  9 gives 9;  4 gives 4;  3 gives 3;  8 gives 8;  6 gives 6;  1 gives 1
Tree Calls of MERGESORT(1, 8)
The following figure represents the sequence of recursive calls that are produced by
MERGESORT when it is applied to 8 elements. The values in each node are the values
of the parameters low and high.
1, 8
1, 4 5, 8
1, 2 3, 4 5, 6 7, 8
1, 1 2, 2 3, 3 4, 4 5, 5 6, 6 7, 7 8, 8
1, 1, 2 3, 3, 4 5, 5, 6 7, 7, 8
1, 2, 4 5, 6, 8
1, 4, 8
We will assume that 'n' is a power of 2, so that we always split into even halves; so we solve for the case n = 2^k.
T(1) = 1
T(n) = 2 T(n/2) + n
This is a standard recurrence relation, which can be solved several ways. We will
solve by substituting recurrence relation continually on the right–hand side.
Since we can substitute n/2 into this main equation:
T(n/2) = 2 T(n/4) + n/2,
so that T(n) = 4 T(n/4) + 2n. Substituting again,
T(n/4) = 2 T(n/8) + n/4,
so that T(n) = 8 T(n/8) + 3n. Continuing in this manner,
T(n) = 2^k T(1) + kn
     = n T(1) + n log n
     = n log n + n
We have assumed that n = 2^k. The analysis can be refined to handle cases when 'n' is not a power of 2. The answer turns out to be almost identical.
Although merge sort's running time is O(n log n), it is hardly ever used for main memory sorts. The main problem is that merging two sorted lists requires linear extra memory, and the additional work spent copying to the temporary array and back, throughout the algorithm, has the effect of slowing down the sort considerably.
The best and worst case time complexity of merge sort is O(n log n).
Strassen's Matrix Multiplication:
The matrix multiplication algorithm due to Strassen (1969) is the most dramatic example of the divide and conquer technique.
The usual way to multiply two n x n matrices A and B, yielding result matrix C, is as follows:
for i := 1 to n do
  for j := 1 to n do
  {
    c[i, j] := 0;
    for k := 1 to n do
      c[i, j] := c[i, j] + a[i, k] * b[k, j];
  }
This algorithm requires n^3 scalar multiplications (i.e. multiplications of single numbers) and n^3 scalar additions, so at first glance it seems that this cannot be improved upon.
We apply divide and conquer to this problem. Partition each matrix into four (n/2) x (n/2) sub-matrices, so the product becomes:
| A11  A12 |   | B11  B12 |     | C11  C12 |
| A21  A22 | * | B21  B22 |  =  | C21  C22 |
Computing the Cij in the straightforward way requires eight (n/2) x (n/2) products, giving the recurrence (counting scalar multiplications):
T(1) = 1
T(n) = 8 T(n/2)
Strassen's insight was to find an alternative method for calculating the Cij, requiring seven (n/2) x (n/2) matrix multiplications and eighteen (n/2) x (n/2) matrix additions and subtractions:
P = (A11 + A22) (B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 - B22)
S = A22 (B21 - B11)
T = (A11 + A12) B22
U = (A21 - A11) (B11 + B12)
V = (A12 - A22) (B21 + B22)
C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U
This method is used recursively to perform the seven (n/2) x (n/2) matrix multiplications; the recurrence equation for the number of scalar multiplications performed is:
T(1) = 1
T(n) = 7 T(n/2)
T(2^k) = 7 T(2^(k-1))
       = 7^2 T(2^(k-2))
       = - - - - - -
       = 7^i T(2^(k-i))
Put i = k:
       = 7^k T(1) = 7^k
That is, T(n) = 7^(log2 n) = n^(log2 7) = O(n^2.81).
So we conclude that Strassen's algorithm is asymptotically more efficient than the standard algorithm. In practice, however, the overhead of managing the many small matrices does not pay off until 'n' reaches the hundreds.
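The seven-product scheme can be seen in executable form in the following Python sketch using NumPy (our illustration; it assumes the matrix order is a power of 2):

import numpy as np

def strassen(A, B):
    # Multiply two n x n matrices, n a power of 2, with seven recursive products.
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    P = strassen(A11 + A22, B11 + B22)
    Q = strassen(A21 + A22, B11)
    R = strassen(A11, B12 - B22)
    S = strassen(A22, B21 - B11)
    T = strassen(A11 + A12, B22)
    U = strassen(A21 - A11, B11 + B12)
    V = strassen(A12 - A22, B21 + B22)
    # Reassemble the quadrants of the product.
    return np.block([[P + S - T + V, R + T],
                     [Q + S, P + R - Q + U]])

A = np.random.randint(0, 10, (4, 4)); B = np.random.randint(0, 10, (4, 4))
print(np.array_equal(strassen(A, B), A @ B))   # True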
Quick Sort
The main reason for the slowness of algorithms like SIS is that all comparisons and exchanges between keys in a sequence w1, w2, ..., wn take place between adjacent pairs. In this way it takes a relatively long time for a key that is badly out of place to work its way into its proper position in the sorted sequence.
Hoare devised a very efficient way of implementing the partitioning idea in the early 1960's; it improves the O(n^2) behavior of the SIS algorithm to an expected performance of O(n log n).
In essence, the quick sort algorithm partitions the original array by rearranging it into two groups. The first group contains those elements less than some arbitrarily chosen value taken from the set, and the second group contains those elements greater than or equal to the chosen value.
The chosen value is known as the pivot element. Once the array has been rearranged
in this way with respect to the pivot, the very same partitioning is recursively applied
to each of the two subsets. When all the subsets have been partitioned and
rearranged, the original array is sorted.
The function partition() makes use of two pointers 'i' and 'j' which are moved toward each other in the following fashion:
1. Move the 'i' pointer from left to right until an element larger than the pivot is found.
2. Move the 'j' pointer from right to left until an element smaller than the pivot is found.
3. If j > i, interchange a[j] with a[i].
Repeat steps 1, 2 and 3 till the 'i' pointer crosses the 'j' pointer. When the 'i' pointer crosses the 'j' pointer, the position for the pivot is found, and the pivot element is placed at the 'j' pointer position.
The program uses a recursive function quicksort(). The quick sort function sorts all elements in an array 'a' between positions 'low' and 'high'.
It terminates when the condition low >= high is satisfied. This condition will be satisfied only when the array is completely sorted.
Here we choose the first element as the 'pivot'. So, pivot = x[low]. Quicksort then calls the partition function to find the proper position j of the element x[low], i.e. the pivot. Then we will have two sub-arrays x[low], x[low+1], ..., x[j-1] and x[j+1], x[j+2], ..., x[high].
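A minimal Python sketch of this scheme, with the first element as pivot as described above (the function names are ours):

def partition(a, low, high):
    # Move i right past elements < pivot and j left past elements > pivot,
    # swapping out-of-place pairs until the pointers cross.
    pivot, i, j = a[low], low + 1, high
    while True:
        while i <= high and a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
        i += 1; j -= 1
    a[low], a[j] = a[j], a[low]        # place the pivot at its final position j
    return j

def quicksort(a, low, high):
    if low < high:
        j = partition(a, low, high)
        quicksort(a, low, j - 1)
        quicksort(a, j + 1, high)

data = [38, 8, 16, 6, 79, 57, 24, 56, 2, 58, 4, 70, 45]
quicksort(data, 0, len(data) - 1)
print(data)   # [2, 4, 6, 8, 16, 24, 38, 45, 56, 57, 58, 70, 79]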
Example
Select the first element as the pivot element. Move the 'i' pointer from left to right in search of an element larger than the pivot. Move the 'j' pointer from right to left in search of an element smaller than the pivot. If such elements are found, the elements are swapped. This process continues till the 'i' pointer crosses the 'j' pointer. When the 'i' pointer crosses the 'j' pointer, the position for the pivot is found, and the pivot and the element at position 'j' are interchanged.
Let us consider the following example with 13 elements to analyze quick sort:
38  08  16  06  79  57  24  56  02  58  04  70  45      pivot = 38
    i stops at 79, j stops at 04: swap i & j
38  08  16  06  04  57  24  56  02  58  79  70  45
    i stops at 57, j stops at 02: swap i & j
38  08  16  06  04  02  24  56  57  58  79  70  45
    j crosses i at 24: swap pivot & j
(24  08  16  06  04  02)  38  (56  57  58  79  70  45)

Left sublist, pivot = 24: i runs off the sublist, j stops at 02: swap pivot & j
(02  08  16  06  04)  24
Pivot = 02: i and j meet at the pivot: swap pivot & j
02  (08  16  06  04)
Pivot = 08: i stops at 16, j stops at 04: swap i & j; then j crosses i: swap pivot & j
(06  04)  08  (16)
Pivot = 06: swap pivot & j gives (04) 06; the one-element sublists (04) and (16) are sorted.
The left part is now
(02  04  06  08  16  24)  38

Right sublist (56 57 58 79 70 45), pivot = 56: i stops at 57, j stops at 45: swap i & j;
then j crosses i: swap pivot & j
(45)  56  (58  79  70  57)
Pivot = 58: i stops at 79, j stops at 57: swap i & j; then j crosses i: swap pivot & j
(57)  58  (70  79)
Pivot = 70: swap pivot & j gives 70 (79).
The right part is now
(45  56  57  58  70  79)

02  04  06  08  16  24  38  45  56  57  58  70  79
Analysis of Quick Sort:
Like merge sort, quick sort is recursive, and hence its analysis requires solving a
recurrence formula. We will do the analysis for a quick sort, assuming a random pivot
(and no cut off for small files).
The running time of quick sort is equal to the running time of the two recursive calls plus the linear time spent in the partition (the pivot selection takes only constant time). This gives the basic quick sort relation:
T(n) = T(i) + T(n - i - 1) + cn                                    - (1)
where 'i' is the number of elements in the left partition.
Worst Case Analysis:
The pivot is the smallest element, all the time. Then i = 0 and, ignoring T(0) = 1 which is insignificant, the recurrence is:
T(n) = T(n - 1) + cn,   n > 1                                      - (2)
Telescoping this equation:
T(n - 1) = T(n - 2) + c(n - 1)
T(n - 2) = T(n - 3) + c(n - 2)
- - - - - - - -
T(2) = T(1) + c(2)
Adding up all these equations yields
T(n) = T(1) + c * (sum of i for i = 2 to n)
     = O(n^2)                                                      - (3)
Best and Average Case Analysis
The number of comparisons for the first call on partition: assume left_to_right moves over k smaller elements and thus makes k comparisons; when right_to_left crosses left_to_right it has made n - k + 1 comparisons. So the first call on partition makes n + 1 comparisons. The average case complexity of quicksort works out to
T(n) = 2(n + 1) [ sum of 1/k for k = 2 to n + 1 ]
     = 2(n + 1) [ log (n + 1) - log 2 ]
     = 2n log (n + 1) + 2 log (n + 1) - 2n log 2 - 2 log 2
T(n) = O(n log n)
UNIT II:
Disjoint set operations, union and find algorithms, AND/OR graphs, Connected Components and
Spanning trees, Bi-connected components
Backtracking-General method, applications- The 8-queen problem, sum of subsets problem, graph
coloring, Hamiltonian cycles.
Disjoint Set Operations:
[Figure: three trees representing the disjoint sets S1 = {1, 7, 8, 9}, S2 = {5, 2, 10} and S3 = {3, 4, 6}; the roots are 1, 5 and 3]
In this representation each set is represented as a tree. Nodes are linked from the child to the parent rather than the usual method of linking from parent to child.
UNION operation:
Union(i, j) requires the two trees with roots i and j to be joined. S1 U S2 is obtained by making any one of the sets a sub tree of the other.
[Figure: the trees for S1 = {1, 7, 8, 9} and S2 = {5, 2, 10}, and the two possible trees for S1 U S2, obtained by making 5 a child of 1 or 1 a child of 5]
Simple Algorithm for Union:
Algorithm Union(i, j)
{
// Replace the disjoint sets with roots i and j, i != j, by their union.
p[j] := i;
}
Example:
Implement the following sequence of operations: Union(1,3), Union(2,5), Union(1,2).
Solution:
Initially the parent array p contains zeros:
i    :  1  2  3  4  5  6
p[i] :  0  0  0  0  0  0
After Union(1,3), p[3] = 1:
p[i] :  0  0  1  0  0  0
After Union(2,5), p[5] = 2:
p[i] :  0  0  1  0  2  0
After Union(1,2), p[2] = 1:
p[i] :  0  1  1  0  2  0
[Figure: the resulting tree, with root 1, children 2 and 3, and 5 a child of 2]
Process the following sequence of union operations: Union(1,2), Union(2,3), ..., Union(n-1,n).
Degenerate Tree:
[Figure: the degenerate tree that results, a chain of n nodes with root 1 and node n at depth n - 1]
Algorithm Find(i)
{
j := i;
while (p[j] > 0) do
  j := p[j];
return j;
}
Find Operation: Find(i) finds the root node of the i-th node; in other words it returns the name of the set containing i.
Find(1) = 1, since 1 is a root node.
Find(3) = 1, since 3's parent is 1 (i.e., the root is 1).
Example:
Consider the tree with root 1, children 2 and 3, and 5 a child of 2. Its array representation is:
i    :  1  2  3  5
p[i] :  0  1  1  2
Find(5) = 1
Find(2) = 1
Find(3) = 1
The root node represents all the nodes in the tree. The time complexity of 'n' find operations is O(n^2) in the worst case.
To improve the performance of the union and find algorithms we must avoid the creation of degenerate trees. To accomplish this, we use the weighting rule for Union(i, j).
Weighting Rule for Union(i, j): If the number of nodes in the tree with root i is less than the number in the tree with root j, then make j the parent of i; otherwise make i the parent of j.
Using this rule on the sequence Union(1,2), Union(1,3), ..., Union(1,n), the tree stays shallow:
[Figure: after Union(1,2), root 1 has child 2; after Union(1,3), root 1 has children 2 and 3; ...; after Union(1,n), root 1 has children 2, 3, ..., n]
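A minimal Python sketch of the weighted union and simple find (our illustration; it uses the common convention that a root r stores the negative of its tree's node count in p[r]):

def weighted_union(p, i, j):
    # i and j are roots; hang the tree with fewer nodes under the other.
    temp = p[i] + p[j]             # -(combined node count)
    if p[i] > p[j]:                # tree i has fewer nodes (less negative count)
        p[i] = j
        p[j] = temp
    else:
        p[j] = i
        p[i] = temp

def find(p, i):
    while p[i] >= 0:               # follow parent links up to the root
        i = p[i]
    return i

p = [0] + [-1] * 6                 # elements 1..6, each its own set (index 0 unused)
weighted_union(p, 1, 3); weighted_union(p, 2, 5); weighted_union(p, 1, 2)
print(find(p, 5))                  # 1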
Spanning Trees:
A spanning tree is a subset of a graph G which has all the vertices covered with the minimum possible number of edges. Hence, a spanning tree does not have cycles and it cannot be disconnected.
By this definition, we can draw a conclusion that every connected and undirected graph G has at least one spanning tree. A disconnected graph does not have any spanning tree, as it cannot be spanned to all its vertices.
A complete undirected graph can have a maximum of n^(n-2) spanning trees, where n is the number of nodes. For example, with n = 3, 3^(3-2) = 3 spanning trees are possible.
All possible spanning trees of graph G have the same number of edges and vertices.
Removing one edge from the spanning tree will make the graph disconnected, i.e. the spanning tree is minimally connected.
Adding one edge to the spanning tree will create a circuit or loop, i.e. the spanning tree is maximally acyclic.
Mathematical Properties of Spanning Tree
A spanning tree has n-1 edges, where n is the number of nodes (vertices).
Thus, we can conclude that spanning trees are a subset of a connected graph G, and disconnected graphs do not have spanning trees.
Spanning trees are used in applications such as civil network planning and cluster analysis.
AND/OR GRAPHS:
An and/or graph is a specialization of a hypergraph, which connects nodes by sets of arcs rather than by single arcs. An ordinary graph is a special case of hypergraph in which all the sets of descendant nodes have a cardinality of 1.
Hyperarcs are also known as K-connectors, where K is the cardinality of the set of descendant nodes. If K = 1, the descendant may be thought of as an OR node. If K > 1, the elements of the set of descendants may be thought of as AND nodes. In this case the connector is drawn with individual edges from the parent node to each of the descendant nodes; these individual edges are then joined with a curved link. The and/or graph for the expression P and Q -> R is as follows:
[Figure: an and/or graph for P and Q -> R, with a 2-connector from R to P and Q]
The K-connector is represented as a fan of arrows with a single tie, as shown above. The and/or graph consists of nodes labelled by global databases. Nodes labelled by compound databases have sets of successor nodes. These successor nodes are called AND nodes; in order to process the compound database to termination, all the compound databases must be processed to termination.
For example, consider a boy who collects stamps (M). He has, for the purpose of exchange, a winning conker (C), a bat (B) and a small toy animal (A). In his class there are friends who are also keen collectors of different items and will make exchanges, among them:
1 small toy animal (A) for two bats (B, B) and a stamp (M).
Transformation rules:
a. If C then (D, S)
b. If C then (B, M)
c. If B then (M, M)
The figure shows that a lot of extra work is done by redoing many of the transformations. This repetition can be avoided by decomposing the problem into subproblems. There are two major ways to order the components:
1. The components can either be arranged in some fixed order at the time they are generated, or
2. The more flexible system is to reorder them dynamically as the processing unfolds.
This can be represented by an and/or graph. The solution to the exchange problem will be:
[Figure: the and/or graph solving the exchange problem]
Connected components
In graph theory, a connected component (or just component) of an undirected graph is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph. For example, a graph drawn as three separate pieces has three connected components. A vertex with no incident edges is itself a connected component. A graph that is itself connected has exactly one connected component, consisting of the whole graph.
Biconnected Components:
Let G = (V, E) be a connected undirected graph. Consider the following definitions:
Articulation Point: a vertex whose removal, together with all edges incident on it, leaves a graph with at least two connected components.
Biconnected Graph: a connected graph with no articulation points.
Biconnected Component: a maximal biconnected subgraph of G.
Bridge: an edge whose removal disconnects the graph.
Let us consider the typical case of a vertex v, where v is not a leaf and v is not the root. Let w1, w2, ..., wk be the children of v. For each child there is a subtree of the DFS tree rooted at this child. If for some child there is no back edge going to a proper ancestor of v, then if we remove v, this subtree becomes disconnected from the rest of the graph, and hence v is an articulation point.
L(u) = min {DFN(u), min {L(w) | w is a child of u}, min {DFN(w) | (u, w) is a back edge}}
L(u) is the lowest depth first number that can be reached from 'u' using a path of descendants followed by at most one back edge. It follows that, if 'u' is not the root, then 'u' is an articulation point iff 'u' has a child 'w' such that:
L(w) >= DFN(u)
Algorithm Art(u, v)
// u is the DFS start vertex; v is its parent, if any, in the DFS spanning tree. dfn and L are global.
{
dfn[u] := num; L[u] := num; num := num + 1;
for each vertex w adjacent from u do
{
if (dfn[w] = 0) then
{
Art(w, u); // w is unvisited.
L[u] := min(L[u], L[w]);
}
else if (w != v) then L[u] := min(L[u], dfn[w]);
}
}
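The same DFS computation can be rendered compactly in Python; this sketch (our illustration, adjacency-list input is our choice) collects the articulation points directly, handling the root as a special case:

def articulation_points(adj, n):
    # adj: dict vertex -> list of neighbours, vertices numbered 1..n.
    dfn = [0] * (n + 1)                # depth first numbers
    L = [0] * (n + 1)                  # L values
    points, num = set(), [1]

    def art(u, v):
        dfn[u] = L[u] = num[0]; num[0] += 1
        children = 0
        for w in adj[u]:
            if dfn[w] == 0:            # tree edge: w is unvisited
                children += 1
                art(w, u)
                L[u] = min(L[u], L[w])
                if v != 0 and L[w] >= dfn[u]:   # the condition L(w) >= DFN(u)
                    points.add(u)
            elif w != v:               # back edge
                L[u] = min(L[u], dfn[w])
        if v == 0 and children > 1:    # the root is an articulation point
            points.add(u)              # iff it has more than one child

    art(1, 0)
    return points

adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3, 5], 5: [4]}
print(articulation_points(adj, 5))     # {4}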
For the following graph identify the articulation points and Biconnected components:
[Figure: a 10-vertex graph and its depth first spanning tree, with the DFN and L values marked at each vertex]
L(u) = min {DFN(u), min {L(w) | w is a child of u}, min {DFN(w) | w is a vertex to which there is a back edge from u}}
L(1) = min {DFN(1), min {L(4)}} = min {1, L(4)} = min {1, 1} = 1
L(4) = min {DFN(4), min {L(3)}} = min {2, L(3)} = min {2, 1} = 1
Vertex 6, Vertex 8, Vertex 9 and Vertex 10 are leaf nodes.
Example:
For the following graph identify the articulation points and Biconnected components:
[Figure: an 8-vertex graph and its DFS spanning tree, with the DFN and L values for each vertex]
L(u) = min {DFN(u), min {L(w) | w is a child of u}, min {DFN(w) | w is a vertex to which there is a back edge from u}}
For each vertex u, check whether the condition L(w) >= DFN(u) holds for some child w; the vertices for which it holds are the articulation points, and they determine the biconnected components.
BACKTRACKING
General Method:
The solution is based on finding one or more vectors that maximize, minimize, or
satisfy a criterion function P (x1, , xn). Form a solution and check at everystep
if this has any chance of success. If the solution at any point seems not promising,
ignore it. All solutions requires a set of constraints divided into two categories: explicit
and implicit constraints.
Definition 1: Explicit constraints are rules that restrict each xi to take on values only from a given set. Explicit constraints depend on the particular instance I of the problem being solved. All tuples that satisfy the explicit constraints define a possible solution space for I.
Definition 2: Implicit constraints are rules that determine which of the tuples in the solution space of I satisfy the criterion function. Thus, implicit constraints describe the way in which the xi's must relate to each other.
For the 8-queens problem:
The explicit constraints, using the 8-tuple formulation, are S = {1, 2, 3, 4, 5, 6, 7, 8}.
The implicit constraints for this problem are that no two xi's can be the same (i.e., all queens must be on different columns) and no two queens can be on the same diagonal.
Backtracking is the procedure whereby, after determining that a node can lead to nothing but dead ends, we go back (backtrack) to the node's parent and proceed with the search on the next child.
A backtracking algorithm need not actually create a tree. Rather, it only needs to keep track of the values in the current branch being investigated. This is the way we implement backtracking algorithms. We say that the state space tree exists implicitly in the algorithm because it is not actually constructed.
State space is the set of paths from the root node to other nodes. A state space tree is the tree organization of the solution space. Some state space trees are called static trees; this terminology follows from the observation that the tree organizations are independent of the problem instance being solved. For some problems it is advantageous to use different tree organizations for different problem instances; in this case the tree organization is determined dynamically as the solution space is being searched. Tree organizations that are problem instance dependent are called dynamic trees.
Terminology:
Solution states are the problem states 'S' for which the path from the root node to 'S' defines a tuple in the solution space.
Answer states are those solution states for which the path from the root node to 'S' defines a tuple that is a member of the set of solutions.
Live node is a node that has been generated but whose children have not yet been
generated.
E-node is a live node whose children are currently being explored. In other words, an
E-node is a node currently being expanded.
Dead node is a generated node that is not to be expanded or explored any further. All children of a dead node have already been expanded.
Branch and Bound refers to all state space search methods in which all children of an E-node are generated before any other live node can become the E-node.
Depth first node generation with bounding functions is called backtracking. State generation methods in which the E-node remains the E-node until it is dead lead to branch and bound methods.
N-Queens Problem:
The explicit constraints using this formulation are Si = {1, 2, 3, 4, 5, 6, 7, 8}, 1 <= i <= 8. Therefore the solution space consists of 8^8 8-tuples.
The implicit constraints for this problem are that no two xi's can be the same (i.e., all queens must be on different columns) and no two queens can be on the same diagonal.
This realization reduces the size of the solution space from 8^8 tuples to 8! tuples.
The promising function must check whether two queens are in the same column or
diagonal:
Suppose two queens are placed at positions (i, j) and (k, l). Then:
Diag 45 conflict: two queens are on the same 45 degree diagonal if
i - j = k - l, which implies j - l = i - k.
Diag 135 conflict: they are on the same 135 degree diagonal if
i + j = k + l, which implies j - l = k - i.
Therefore, two queens lie on the same diagonal if and only if:
|j - l| = |i - k|
where j is the column of the queen in row i and l is the column of the queen in row k.
To check the diagonal clashes, let us take the following board configuration:
i  :  1  2  3  4  5  6  7  8
xi :  2  5  1  8  4  7  3  6
[Board: a queen is placed in column xi of row i, for i = 1 to 8]
Let us consider whether the queens on the 3rd row and the 8th row are conflicting or not. In this case (i, j) = (3, 1) and (k, l) = (8, 6). Therefore:
|j - l| = |1 - 6| = 5   and   |i - k| = |3 - 8| = 5
Since |j - l| = |i - k|, the two queens are attacking. This is not a solution.
Example:
Suppose we start with the feasible sequence 7, 5, 3, 1.
[Board: queens placed in columns 7, 5, 3, 1 of rows 1 to 4]
Step 1: Add to the sequence the next number in the sequence 1, 2, ..., 8 not yet used.
Step 2: If this new sequence is feasible and has length 8, then STOP with a solution. If the new sequence is feasible and has length less than 8, repeat Step 1.
Step 3: If the sequence is not feasible, then backtrack through the sequence until we find the most recent place at which we can exchange a value. Go back to Step 1.
The following trace shows how the solution is built; * marks the pair of queens that clash:
7  5  3  1
7  5  3  1* 2*              |j - l| = |1 - 2| = 1 = |4 - 5| = |i - k|: conflict
7  5  3  1  4
7* 5  3  1  4  2*           |j - l| = |7 - 2| = 5 = |1 - 6| = |i - k|: conflict
7  5  3* 1  4  6*           |j - l| = |3 - 6| = 3 = |3 - 6| = |i - k|: conflict
7  5  3  1  4  8
7  5  3  1  4* 8  2*        |j - l| = |4 - 2| = 2 = |5 - 7| = |i - k|: conflict
7  5  3  1  4* 8  6*        |j - l| = |4 - 6| = 2 = |5 - 7| = |i - k|: conflict
7  5  3  1  4  8            Backtrack
7  5  3  1  4               Backtrack
7  5  3  1  6
7* 5  3  1  6  2*           |j - l| = |7 - 2| = 5 = |1 - 6| = |i - k|: conflict
7  5  3  1  6  4
7  5  3  1  6  4  2
7  5  3* 1  6  4  2  8*     |j - l| = |3 - 8| = 5 = |3 - 8| = |i - k|: conflict
7  5  3  1  6  4  2         Backtrack
7  5  3  1  6  4            Backtrack
7  5  3  1  6  8
7  5  3  1  6  8  2
7  5  3  1  6  8  2  4      SOLUTION
[Board: the solution (7, 5, 3, 1, 6, 8, 2, 4)]
4 – Queens Problem:
Let us see how backtracking works on the 4-queens problem. We start with the root node as the only live node. This becomes the E-node. We generate one child; let us assume that the children are generated in ascending order. Thus node number 2 of the tree is generated and the path is now (1). This corresponds to placing queen 1 on column 1. Node 2 becomes the E-node. Node 3 is generated and immediately killed. The next node generated is node 8 and the path becomes (1, 3). Node 8 becomes the E-node. However, it gets killed as all its children represent board configurations that cannot lead to an answer node. We backtrack to node 2 and generate another child, node 13. The path is now (1, 4). The board configurations as backtracking proceeds are as follows:
[Boards (a) to (h): the 4-queens board configurations generated as backtracking proceeds; dots mark squares that were tried and rejected]
The above figure shows graphically the steps that the backtracking algorithm goes
through as it tries to find a solution. The dots indicate placements of a queen, which
were tried and rejected because another queen was attacking.
In board (b) the second queen is placed on columns 1 and 2 and finally settles on column 3. In board (c) the algorithm tries all four columns and is unable to place the next queen on a square. Backtracking now takes place. In board (d) the second queen is moved to the next possible column, column 4, and the third queen is placed on column 2. The boards (e), (f), (g) and (h) show the remaining steps that the algorithm goes through until a solution is found.
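The whole scheme fits in a short Python sketch (our illustration; place and nqueens are our names for the feasibility test and the recursive search):

def place(x, k, i):
    # Can queen k go in column i, given queens 1..k-1 at columns x[1..k-1]?
    for j in range(1, k):
        if x[j] == i or abs(x[j] - i) == abs(j - k):   # same column or diagonal
            return False
    return True

def nqueens(k, n, x, solutions):
    for i in range(1, n + 1):
        if place(x, k, i):
            x[k] = i
            if k == n:
                solutions.append(x[1:])
            else:
                nqueens(k + 1, n, x, solutions)

solutions = []
nqueens(1, 4, [0] * 5, solutions)
print(solutions)   # [[2, 4, 1, 3], [3, 1, 4, 2]]

The two lists printed are exactly the two solutions of the 4-queens problem reached by the backtracking just described.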
Complexity Analysis:
The state space tree for the n-queens problem contains
1 + n + n^2 + n^3 + ... + n^n = (n^(n+1) - 1) / (n - 1) nodes.
For the instance in which n = 8, the state space tree contains
(8^9 - 1) / (8 - 1) = 19,173,961 nodes.
Sum of Subsets:
Given positive numbers wi, 1 <= i <= n, and m, this problem requires finding all subsets of the wi whose sums are m. For example, if n = 4, w = (11, 13, 24, 7) and m = 31, the desired subsets are (11, 13, 7) and (24, 7).
A better formulation of the problem is one where the solution subset is represented by an n-tuple (x1, ..., xn) such that xi is in {0, 1}. The above solutions are then represented by (1, 1, 0, 1) and (0, 0, 1, 1).
The explicit constraints are xi in {0, 1}; the implicit constraint is that the chosen wi must sum to m.
The following figure shows a possible tree organization of the solution space for the case n = 4:
[Figure: a possible solution space organization for the sum of subsets problem, using the variable tuple size formulation; an edge from a level i node to a level i+1 node is labeled with the value it assigns to xi]
The tree corresponds to the variable tuple size formulation. The edges are labeled such that an edge from a level i node to a level i+1 node represents a value for xi. At each node, the solution space is partitioned into sub-solution spaces. All paths from the root node to any node in the tree define the solution space, since any such path corresponds to a subset satisfying the explicit constraints.
The possible paths are (1), (1, 2), (1, 2, 3), (1, 2, 3, 4), (1, 2, 4), (1, 3, 4), (2), (2, 3), and so on. Thus, the leftmost sub-tree defines all subsets containing w1, the next sub-tree defines all subsets containing w2 but not w1, and so on.
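A Python sketch of the backtracking search with the usual bounding tests (our illustration; it assumes the weights are considered in non-decreasing order):

def sum_of_subsets(w, m):
    n, result = len(w), []

    def backtrack(k, current, remaining, chosen):
        if current == m:                       # answer state reached
            result.append([w[i] for i in chosen])
            return
        if k == n:
            return
        # Include w[k] only if it does not overshoot m; exclude it only if
        # the remaining weights can still reach m.
        if current + w[k] <= m:
            backtrack(k + 1, current + w[k], remaining - w[k], chosen + [k])
        if current + remaining - w[k] >= m:
            backtrack(k + 1, current, remaining - w[k], chosen)

    backtrack(0, 0, sum(w), [])
    return result

print(sum_of_subsets([7, 11, 13, 24], 31))     # [[7, 11, 13], [7, 24]]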
Graph Coloring (for planar graphs):
Let G be a graph and m be a given positive integer. We want to discover whether the
nodes of G can be colored in such a way that no two adjacent nodes have the same
color, yet only m colors are used. This is termed the m-colorability decision problem.
The m-colorability optimization problem asks for the smallest integer m for which the
graph G can be colored.
Given any map, if the regions are to be colored in such a way that no two adjacent
regions have the same color, only four colors are needed.
For many years it was known that five colors were sufficient to color any map, but no
map that required more than four colors had ever been found. After several hundred
years, this problem was solved by a group of mathematicians with the help of a
computer. They showed that in fact four colors are sufficient for planar graphs.
The function m-coloring will begin by first assigning the graph to its adjacency matrix,
setting the array x [] to zero. The colors are represented by the integers 1, 2, . . . , m
and the solutions are given by the n-tuple (x1, x2, . . ., xn), where xi is the color of
node i.
A recursive backtracking algorithm for graph coloring is carried out by invoking the
statement mcoloring(1);
Algorithm NextValue(k)
// Assign to x[k] the next highest color while maintaining distinctness from the
// colors of the vertices adjacent to vertex k; x[k] becomes 0 if no color remains.
{
repeat
{
x[k] := (x[k] + 1) mod (m + 1); // Next highest color.
if (x[k] = 0) then return; // All colors have been used.
for j := 1 to n do // Check if this color is distinct from adjacent colors.
if ((G[k, j] != 0) and (x[k] = x[j])) then break;
if (j = n + 1) then return; // New color found.
} until (false); // Otherwise try to find another color.
}
Example:
[Figure: a 4-node cycle graph (edges 1-2, 2-3, 3-4, 4-1) and the state space tree of all possible 3-colorings; there are 18 of them]
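A Python sketch of the same backtracking search (our illustration; the graph below is the 4-node cycle of the example):

def m_coloring(G, n, m):
    # G is a 1-based adjacency matrix; x[i] is the colour of node i, in 1..m.
    x = [0] * (n + 1)
    colorings = []

    def mcolor(k):
        for c in range(1, m + 1):
            # Colour c is legal for node k if no adjacent node already has it.
            if all(not (G[k][j] and x[j] == c) for j in range(1, n + 1)):
                x[k] = c
                if k == n:
                    colorings.append(x[1:])
                else:
                    mcolor(k + 1)
                x[k] = 0               # undo the assignment before trying the next colour

    mcolor(1)
    return colorings

G = [[0]*5, [0,0,1,0,1], [0,1,0,1,0], [0,0,1,0,1], [0,1,0,1,0]]
print(len(m_coloring(G, 4, 3)))        # 18 possible 3-colourings, as in the figure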
Hamiltonian Cycles:
Let G = (V, E) be a connected graph with n vertices. A Hamiltonian cycle is a round-trip path along n edges of G that visits every vertex once and returns to its starting vertex.
[Figure: Graph G1, an 8-vertex graph which contains the Hamiltonian cycle 1, 2, 8, 7, 6, 5, 4, 3, 1, and Graph G2, a 5-vertex graph which contains no Hamiltonian cycle]
The cycle may begin at any vertex, so we fix vertex 1 as the start. The generating function NextValue(k) assigns to x[k] the next vertex (in numerical order) that is connected by an edge to x[k-1] and does not already appear in the path x[1 : k-1]; when k = n, the vertex x[n] can only be the one remaining vertex and it must be connected to both x[n-1] and x[1].
Algorithm Hamiltonian(k)
// This algorithm uses the recursive formulation of backtracking to find all the Hamiltonian
// cycles of a graph. The graph is stored as an adjacency matrix G[1 : n, 1 : n]. All cycles begin
// at node 1.
{
repeat
{ // Generate values for x[k].
NextValue(k); // Assign a legal next value to x[k].
if (x[k] = 0) then return;
if (k = n) then write (x[1 : n]);
else Hamiltonian(k + 1);
} until (false);
}
UNIT III:
Greedy method- General method, applications- Knapsack problem, Job sequencing with
deadlines, Minimum cost spanning trees, Single source shortest path problem.
Greedy Method
GENERAL METHOD
Greedy is the most straightforward design technique. Most of these problems have n inputs and require us to obtain a subset that satisfies some constraints. Any subset that satisfies these constraints is called a feasible solution. We need to find a feasible solution that either maximizes or minimizes a given objective function. A feasible solution that does this is called an optimal solution.
For problems that make decisions by considering the inputs in some order, each decision is made using an optimization criterion that can be computed using decisions already made. This version of the greedy method is the ordering paradigm. Some problems, like optimal storage on tapes, optimal merge patterns and single source shortest path, are based on the ordering paradigm.
CONTROL ABSTRACTION
Algorithm Greedy(a, n)
// a[1 : n] contains the n inputs.
{
solution := {}; // Initialize the solution.
for i := 1 to n do
{
x := Select(a);
if Feasible(solution, x) then
solution := Union(solution, x);
}
return solution;
}
Procedure Greedy describes the essential way that a greedy based algorithm will look, once a particular problem is chosen and the functions Select, Feasible and Union are properly implemented.
The function Select selects an input from 'a', removes it and assigns its value to 'x'. Feasible is a Boolean valued function which determines if 'x' can be included into the solution vector. The function Union combines 'x' with the solution and updates the objective function.
KNAPSACK PROBLEM:
Let us apply the greedy method to solve the knapsack problem. We are given 'n' objects and a knapsack. Object 'i' has a weight wi and the knapsack has a capacity 'm'. If a fraction xi, 0 <= xi <= 1, of object i is placed into the knapsack, then a profit of pi xi is earned. The objective is to fill the knapsack so as to maximize the total profit earned. Since the knapsack capacity is 'm', we require the total weight of all chosen objects to be at most 'm'. The problem is stated as:
maximize    (sum of pi xi for i = 1 to n)
subject to  (sum of wi xi for i = 1 to n) <= m,   0 <= xi <= 1,  1 <= i <= n
If the objects have already been sorted into non-increasing order of p[i] / w[i], then the algorithm given below obtains solutions corresponding to this strategy.
Running time: if the objects are already sorted by p[i] / w[i], the selection loop takes O(n) time; including the initial sort, the total time is O(n log n).
Example:
Consider the following instance of the knapsack problem: n = 3, m = 20, (p1, p2, p3) = (25, 24, 15) and (w1, w2, w3) = (18, 15, 10).
1. First, let us try to fill the knapsack by selecting the objects in some arbitrary order:
x1     x2     x3     sum wi xi                                  sum pi xi
1/2    1/3    1/4    18 x 1/2 + 15 x 1/3 + 10 x 1/4 = 16.5      25 x 1/2 + 24 x 1/3 + 15 x 1/4 = 24.25
2. Select the object with the maximum profit first (p = 25). So x1 = 1 and the profit earned is 25. Now only 2 units of space are left, so select the object with the next largest profit (p = 24). So x2 = 2/15.
x1     x2     x3     sum wi xi                     sum pi xi
1      2/15   0      18 x 1 + 15 x 2/15 = 20       25 x 1 + 24 x 2/15 = 28.2
3. Select the object with the minimum weight first (w = 10), so x3 = 1; the next smallest weight is w = 15, so x2 = 2/3.
x1     x2     x3     sum wi xi                     sum pi xi
0      2/3    1      15 x 2/3 + 10 x 1 = 20        24 x 2/3 + 15 x 1 = 31
4. Sort the objects in non-increasing order of the ratio pi / wi. Select the object with the maximum pi / wi ratio, so x2 = 1 and the profit earned is 24. Now only 5 units of space are left, so select the object with the next largest pi / wi ratio, so x3 = 1/2 and the profit earned is 7.5.
x1     x2     x3     sum wi xi                     sum pi xi
0      1      1/2    15 x 1 + 10 x 1/2 = 20        24 x 1 + 15 x 1/2 = 31.5
This last strategy gives the optimal solution.
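Strategy 4 is easily coded; here is a Python sketch of the greedy fractional knapsack (our illustration):

def greedy_knapsack(p, w, m):
    # Consider objects in non-increasing order of profit/weight ratio.
    order = sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True)
    x = [0.0] * len(p)
    remaining = m
    for i in order:
        if w[i] <= remaining:          # take the whole object
            x[i] = 1.0
            remaining -= w[i]
        else:                          # take the fraction that still fits
            x[i] = remaining / w[i]
            break
    return x, sum(p[i] * x[i] for i in range(len(p)))

x, profit = greedy_knapsack([25, 24, 15], [18, 15, 10], 20)
print(x, profit)                       # [0.0, 1.0, 0.5] 31.5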
JOB SEQUENCING WITH DEADLINES:
We are given a set of 'n' jobs. Associated with each job i is a deadline di > 0 and a profit pi > 0. For any job 'i' the profit pi is earned iff the job is completed by its deadline. Only one machine is available for processing jobs. An optimal solution is the feasible solution with maximum profit.
Sort the jobs in 'J' ordered by their deadlines. The array d[1 : n] is used to store the deadlines in the order of their p-values. The set of jobs J[1 : k] is such that J[r], 1 <= r <= k, are the jobs in 'J' and d(J[1]) <= d(J[2]) <= ... <= d(J[k]). To test whether J U {i} is feasible, we just insert i into J preserving the deadline ordering and then verify that d[J[r]] <= r, 1 <= r <= k + 1.
Example:
Let n = 4, (p1, p2, p3, p4) = (100, 10, 15, 27) and (d1, d2, d3, d4) = (2, 1, 2, 1). The feasible solutions and their values are:
Feasible solution    Processing sequence    Value
(1, 2)               2, 1                   110
(1, 3)               1, 3 or 3, 1           115
(1, 4)               4, 1                   127
(2, 3)               2, 3                   25
(3, 4)               4, 3                   42
(1)                  1                      100
(2)                  2                      10
(3)                  3                      15
(4)                  4                      27
The optimal solution is (1, 4) with a value of 127.
The algorithm constructs an optimal set J of jobs that can be processed by their deadlines.
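A Python sketch of the greedy procedure (our illustration; it implements the feasibility test by placing each job, taken in non-increasing profit order, into the latest free time slot on or before its deadline):

def job_sequencing(profits, deadlines):
    # Jobs are numbered from 1.
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i], reverse=True)
    slot = [None] * (max(deadlines) + 1)         # slot[t] = job scheduled at time t
    for i in order:
        for t in range(deadlines[i], 0, -1):     # latest free slot first
            if slot[t] is None:
                slot[t] = i + 1
                break
    scheduled = [j for j in slot if j is not None]
    return scheduled, sum(profits[j - 1] for j in scheduled)

print(job_sequencing([100, 10, 15, 27], [2, 1, 2, 1]))   # ([4, 1], 127)

The output matches the optimal solution (1, 4) of the example: job 4 is processed first, then job 1, for a total profit of 127.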
MINIMUM COST SPANNING TREES:
A spanning tree for a connected graph is a tree whose vertex set is the same as the vertex set of the given graph, and whose edge set is a subset of the edge set of the given graph; i.e., any connected graph will have a spanning tree.
The weight of a spanning tree w(T) is the sum of the weights of all edges in T. The minimum spanning tree (MST) is a spanning tree with the smallest possible weight.
[Figure: a graph G and three (of many possible) spanning trees from graph G]
To explain further upon the minimum spanning tree, and what it applies to, let's consider a real-world example: finding airline routes. The vertices of the graph would represent cities, and the edges would represent routes between the cities. Obviously, the further one has to travel, the more it will cost, so MST can be applied to optimize airline routes by finding the least costly paths with no cycles.
To explain how to find a minimum spanning tree, we will look at two algorithms: Kruskal's algorithm and Prim's algorithm. The algorithms differ in their methodology, but both eventually end up with the MST. Kruskal's algorithm uses edges, and Prim's algorithm uses vertex connections in determining the MST.
Kruskal’s Algorithm
This is a greedy algorithm. A greedy algorithm chooses some local optimum (i.e. picking the edge with the least weight in an MST).
Kruskal's algorithm works as follows: take a graph with 'n' vertices, keep on adding the shortest (least cost) edge, while avoiding the creation of cycles, until (n - 1) edges have been added. Sometimes two or more edges may have the same cost; the order in which such edges are chosen does not matter. Different MSTs may result, but they will all have the same total cost, which will always be the minimum cost.
The algorithm for finding the MST using Kruskal's method is as follows:
Running time:
The number of finds is at most 2e, and the number of unions is at most n - 1. Including the initialization time for the trees, this part of the algorithm has a complexity that is just slightly more than O(n + e).
We can add at most n - 1 edges to tree T. So, the total time for operations on T is O(n).
Example 1:
[Figure: a 6-vertex graph whose edges and costs are listed in the table below]
Arrange all the edges in the increasing order of their costs:
Cost   10     15     20     25     30     35     40     45     50     55
Edge   (1,2)  (3,6)  (4,6)  (2,6)  (1,4)  (3,5)  (2,5)  (1,5)  (2,3)  (5,6)
Considering the edges in this order, Kruskal's algorithm accepts (1,2), (3,6), (4,6) and (2,6), rejects (1,4) because it would create a cycle, accepts (3,5), and then stops with n - 1 = 5 edges; the minimum cost spanning tree has cost 10 + 15 + 20 + 25 + 35 = 105.
The edge set T together with the vertices of G define a graph that has up to n connected components. Let us represent each component by the set of vertices in it. These vertex sets are disjoint. To determine whether the edge (u, v) creates a cycle, we need to check whether u and v are in the same vertex set. If so, then a cycle is created; if not, then no cycle is created. Hence two Finds on the vertex sets suffice. When an edge is included in T, two components are combined into one and a union is performed on the two sets.
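A Python sketch of Kruskal's algorithm using exactly this two-Finds/one-Union cycle test (our illustration; the edge list is the example above):

def kruskal(n, edges):
    # edges: list of (cost, u, v) triples; vertices are numbered 1..n.
    parent = list(range(n + 1))

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    tree, total = [], 0
    for cost, u, v in sorted(edges):            # non-decreasing cost order
        ru, rv = find(u), find(v)
        if ru != rv:                            # (u, v) creates no cycle
            parent[ru] = rv                     # union of the two components
            tree.append((u, v)); total += cost
        if len(tree) == n - 1:
            break
    return tree, total

edges = [(10,1,2), (50,2,3), (30,1,4), (40,2,5), (35,3,5),
         (15,3,6), (20,4,6), (25,2,6), (45,1,5), (55,5,6)]
print(kruskal(6, edges))   # ([(1, 2), (3, 6), (4, 6), (2, 6), (3, 5)], 105)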
MINIMUM-COST SPANNING TREES: PRIM'S ALGORITHM
A given graph can have many spanning trees. From these many spanning trees, we have to select the cheapest one. This tree is called the minimal cost spanning tree.
A minimal cost spanning tree is built on a connected undirected graph G in which each edge is labeled with a number (edge labels may signify lengths or weights other than costs). A minimal cost spanning tree is a spanning tree for which the sum of the edge labels is as small as possible.
A slight modification of the spanning tree algorithm yields a very simple algorithm for finding an MST. In the spanning tree algorithm, any vertex not in the tree but connected to it by an edge can be added. To find a minimal cost spanning tree, we must be selective: we must always add a new vertex for which the cost of the new edge is as small as possible.
This simple modified algorithm of spanning tree is called Prim's algorithm for finding a minimal cost spanning tree.
Example:
Considering the following graph, find the minimal spanning tree using Prim's algorithm.
[Figure: a 5-vertex graph with edges (1,2) = 4, (1,3) = 9, (1,4) = 8, (2,4) = 1, (3,4) = 3, (3,5) = 3, (4,5) = 4]
The cost adjacency matrix is (inf marks a missing edge):
        1     2     3     4     5
1       0     4     9     8    inf
2       4     0    inf    1    inf
3       9    inf    0     3     3
4       8     1     3     0     4
5      inf   inf    3     4     0
The algorithm starts by selecting the minimum cost edge of the graph. The minimum cost edge is (2, 4):
k = 2, l = 4
mincost = cost(2, 4) = 1
T[1, 1] = 2; T[1, 2] = 4
The array near[] records, for each vertex not yet in the tree, the tree vertex nearest to it; near[j] = 0 marks vertices already in the tree. The trace proceeds as follows:

Initialization: for each vertex i, near[i] is set to whichever of 2 and 4 is closer; then near[2] and near[4] are set to 0.
near = [2, 0, 4, 0, 4]        (indices 1 to 5)
Edge added so far: (2, 4) of cost 1.

Iteration i = 2: among the vertices j with near[j] != 0, the costs are cost(1, 2) = 4, cost(3, 4) = 3 and cost(5, 4) = 4. The minimum is at j = 3, so the edge (3, 4) of cost 3 is added: T[2, 1] = 3, T[2, 2] = 4, and near[3] = 0. The near values are then updated; since cost(5, 3) = 3 < cost(5, 4) = 4, near[5] becomes 3.
near = [2, 0, 0, 0, 3]

Iteration i = 3: the candidate costs are cost(1, 2) = 4 and cost(5, 3) = 3. The minimum is at j = 5, so the edge (5, 3) of cost 3 is added: T[3, 1] = 5, T[3, 2] = 3, and near[5] = 0.
near = [2, 0, 0, 0, 0]

Iteration i = 4: the only remaining vertex is 1, with cost(1, 2) = 4, so the edge (1, 2) of cost 4 is added: T[4, 1] = 1, T[4, 2] = 2, and near[1] = 0.

The minimal spanning tree therefore consists of the edges (2, 4), (3, 4), (5, 3) and (1, 2), with total cost 1 + 3 + 3 + 4 = 11.
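A Python sketch of Prim's algorithm with the same near[] bookkeeping (our illustration; it grows the tree from vertex 1 rather than seeding with the overall minimum edge as the hand trace does, but on this graph it produces the same tree of cost 11):

INF = float('inf')

def prim(cost, n):
    # cost: (n+1) x (n+1) matrix, 1-based, with INF where there is no edge.
    in_tree = [False] * (n + 1)
    near = [1] * (n + 1)               # near[j] = tree vertex closest to j
    in_tree[1] = True
    tree, total = [], 0
    for _ in range(n - 1):
        # Pick the non-tree vertex j with the cheapest edge to the tree.
        j = min((v for v in range(1, n + 1) if not in_tree[v]),
                key=lambda v: cost[v][near[v]])
        tree.append((j, near[j])); total += cost[j][near[j]]
        in_tree[j] = True
        for v in range(1, n + 1):      # update the near values
            if not in_tree[v] and cost[v][j] < cost[v][near[v]]:
                near[v] = j
    return tree, total

cost = [[INF]*6,
        [INF, 0, 4, 9, 8, INF],
        [INF, 4, 0, INF, 1, INF],
        [INF, 9, INF, 0, 3, 3],
        [INF, 8, 1, 3, 0, 4],
        [INF, INF, INF, 3, 4, 0]]
print(prim(cost, 5))   # ([(2, 1), (4, 2), (3, 4), (5, 3)], 11)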
SINGLE SOURCE SHORTEST PATH PROBLEM:
In the previously studied graphs, the edge labels were called costs, but here we think of them as lengths. In a labeled graph, the length of a path is defined to be the sum of the lengths of its edges.
In the single source, all destinations, shortest path problem, we must find a shortest path from a given source vertex to each of the vertices (called destinations) in the graph to which there is a path.
Dijkstra's algorithm is similar to Prim's algorithm for finding minimal spanning trees. Dijkstra's algorithm takes a labeled graph and a pair of vertices P and Q, and finds the shortest path between them (or one of the shortest paths, if there is more than one). The principle of optimality is the basis for Dijkstra's algorithm.
The figure lists the shortest paths from vertex 1 for a five vertex weighted digraph.
[Figure: a five-vertex weighted digraph and the list of shortest paths from vertex 1]
Running time: with an adjacency matrix representation, Dijkstra's algorithm finalizes one vertex per step and scans all vertices at each step, so it runs in O(n^2) time.
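A Python sketch of the algorithm (our illustration; since the five-vertex digraph of the figure is not recoverable here, the matrix below is an assumed small instance):

INF = float('inf')

def dijkstra(cost, n, source):
    # cost: (n+1) x (n+1) adjacency matrix, 1-based, with INF for missing edges.
    dist = [INF] * (n + 1)
    dist[source] = 0
    visited = [False] * (n + 1)
    for _ in range(n):
        # Greedy choice: the closest unvisited vertex is finalized next.
        u = min((v for v in range(1, n + 1) if not visited[v]),
                key=lambda v: dist[v])
        visited[u] = True
        for v in range(1, n + 1):      # relax edges out of u
            if not visited[v] and dist[u] + cost[u][v] < dist[v]:
                dist[v] = dist[u] + cost[u][v]
    return dist[1:]

cost = [[INF]*5,
        [INF, 0, 2, 5, INF],
        [INF, INF, 0, 1, 6],
        [INF, INF, INF, 0, 2],
        [INF, INF, INF, INF, 0]]
print(dijkstra(cost, 4, 1))            # [0, 2, 3, 5]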
UNIT IV:
Dynamic Programming: General method, applications-Matrix chain multiplication, Optimal
binary search trees, 0/1 knapsack problem, All pairs shortest path problem, Travelling sales
person problem, Reliability design.
Dynamic Programming
Dynamic programming is a name coined by Richard Bellman in 1955. Dynamic programming, like the greedy method, is a powerful algorithm design technique that can be used when the solution to the problem may be viewed as the result of a sequence of decisions. In the greedy method we make irrevocable decisions one at a time, using a greedy criterion. However, in dynamic programming we examine the decision sequence to see whether an optimal decision sequence contains optimal decision subsequences.
Dynamic programming differs from the greedy method since the greedy method produces only one feasible solution, which may or may not be optimal, while dynamic programming solves all possible sub-problems at most once, and one of the resulting solutions is guaranteed to be optimal. Optimal solutions to sub-problems are retained in a table, thereby avoiding the work of recomputing the answer every time a sub-problem is encountered.
The divide and conquer principle solves a large problem by breaking it up into smaller problems which can be solved independently. In dynamic programming this principle is carried to an extreme: when we don't know exactly which smaller problems to solve, we simply solve them all, then store the answers away in a table to be used later in solving larger problems. Care is to be taken to avoid recomputing previously computed values, otherwise the recursive program will have prohibitive complexity. In some cases the solution can be improved, and in other cases the dynamic programming technique is the best approach.
Two difficulties may arise in any application of dynamic programming:

1. It may not always be possible to combine the solutions of smaller problems to form the solution of a larger one.
2. The number of small problems to solve may be unacceptably large.

No one has characterized precisely which problems can be effectively solved with dynamic programming; there are many hard problems for which it does not seem to be applicable, as well as many easy problems for which it is less efficient than standard algorithms.
When no edge has a negative length, the all-pairs shortest path problem may be solved by using Dijkstra's greedy single-source algorithm n times, once with each of the n vertices as the source vertex.

The all pairs shortest path problem is to determine a matrix A such that A(i, j) is the length of a shortest path from i to j. The matrix A can be obtained by solving n single-source problems using the algorithm ShortestPaths. Since each application of this procedure requires O(n²) time, the matrix A can be obtained in O(n³) time. The dynamic programming solution, called Floyd's algorithm, also runs in O(n³) time, but it works even when the graph has negative-length edges (provided there are no negative-length cycles).
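As a minimal sketch of Floyd's algorithm as just described (the function name and the list-of-lists matrix representation are assumptions of the sketch):

def floyd(cost):
    # All-pairs shortest paths. cost[i][j]: edge length, INF if absent,
    # 0 on the diagonal. Returns A with A[i][j] = shortest i-to-j length.
    n = len(cost)
    A = [row[:] for row in cost]                 # A^0 = cost adjacency matrix
    for k in range(n):                           # allow k as intermediate vertex
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A

On the three-vertex example that follows, floyd([[0, 4, 11], [6, 0, 2], [3, INF, 0]]) returns [[0, 4, 6], [5, 0, 2], [3, 7, 0]], matching A3.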
Example 1:
Given a weighted digraph G = (V, E), determine the lengths of the shortest paths between all pairs of vertices in G. Here we assume that there are no cycles with zero or negative cost.
[Figure: a three-vertex digraph with edge weights 4 (1→2), 11 (1→3), 6 (2→1), 2 (2→3) and 3 (3→1).]

                             0    4   11
Cost adjacency matrix A0 =   6    0    2
                             3    ∞    0

General formula (Ak(i, j) is the length of a shortest i-to-j path whose intermediate vertices come from {1, ..., k}):

    Ak(i, j) = min { Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j) },  1 ≤ k ≤ n,  with A0(i, j) = c(i, j).
          0    4   11
    A1 =  6    0    2
          3    7    0
A2(1, 1) = min {A1(1, 2) + A1(2, 1), A1(1, 1)} = min {4 + 6, 0} = 0
A2(1, 2) = min {A1(1, 2) + A1(2, 2), A1(1, 2)} = min {4 + 0, 4} = 4
A2(1, 3) = min {A1(1, 2) + A1(2, 3), A1(1, 3)} = min {4 + 2, 11} = 6
A2(2, 1) = min {A1(2, 2) + A1(2, 1), A1(2, 1)} = min {0 + 6, 6} = 6
A2(2, 2) = min {A1(2, 2) + A1(2, 2), A1(2, 2)} = min {0 + 0, 0} = 0
A2(2, 3) = min {A1(2, 2) + A1(2, 3), A1(2, 3)} = min {0 + 2, 2} = 2
A2(3, 1) = min {A1(3, 2) + A1(2, 1), A1(3, 1)} = min {7 + 6, 3} = 3
A2(3, 2) = min {A1(3, 2) + A1(2, 2), A1(3, 2)} = min {7 + 0, 7} = 7
A2(3, 3) = min {A1(3, 2) + A1(2, 3), A1(3, 3)} = min {7 + 2, 0} = 0
          0    4    6
    A2 =  6    0    2
          3    7    0
A3(1, 1) = min {A2(1, 3) + A2(3, 1), A2(1, 1)} = min {6 + 3, 0} = 0
A3(1, 2) = min {A2(1, 3) + A2(3, 2), A2(1, 2)} = min {6 + 7, 4} = 4
A3(1, 3) = min {A2(1, 3) + A2(3, 3), A2(1, 3)} = min {6 + 0, 6} = 6
A3(2, 1) = min {A2(2, 3) + A2(3, 1), A2(2, 1)} = min {2 + 3, 6} = 5
A3(2, 2) = min {A2(2, 3) + A2(3, 2), A2(2, 2)} = min {2 + 7, 0} = 0
A3(2, 3) = min {A2(2, 3) + A2(3, 3), A2(2, 3)} = min {2 + 0, 2} = 2
A3(3, 1) = min {A2(3, 3) + A2(3, 1), A2(3, 1)} = min {0 + 3, 3} = 3
A3(3, 2) = min {A2(3, 3) + A2(3, 2), A2(3, 2)} = min {0 + 7, 7} = 7
A3(3, 3) = min {A2(3, 3) + A2(3, 3), A2(3, 3)} = min {0 + 0, 0} = 0
          0    4    6
    A3 =  5    0    2
          3    7    0

A3 gives the lengths of the shortest paths between all pairs of vertices.
Traveling Salesperson Problem:

Let G = (V, E) be a directed graph with edge costs cij. The variable cij is defined such that cij > 0 for all i and j, and cij = ∞ if <i, j> ∉ E. Let |V| = n and assume n > 1. A tour of G is a directed simple cycle that includes every vertex in V. The cost of a tour is the sum of the costs of the edges on the tour. The traveling salesperson problem is to find a tour of minimum cost. The tour is to be a simple cycle that starts and ends at vertex 1.
Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1. The function g(1, V − {1}) is the length of an optimal salesperson tour. From the principle of optimality it follows that:

    g(i, S) = min over j in S of { cij + g(j, S − {j}) }        -- 2

The equation can be solved for g(1, V − {1}) if we know g(k, V − {1, k}) for all choices of k.
Example :
For the following graph find minimum cost tour for the traveling salesperson
problem:
[Figure: a complete directed graph on vertices 1, 2, 3, 4.]

                               0   10   15   20
                               5    0    9   10
The cost adjacency matrix =    6   13    0   12
                               8    8    9    0
Let us start the tour from vertex 1:

g(2, ∅) = c21 = 5
g(3, ∅) = c31 = 6
g(4, ∅) = c41 = 8

g(1, {2, 3, 4}) = min {c12 + g(2, {3, 4}), c13 + g(3, {2, 4}), c14 + g(4, {2, 3})}

First the one-element subsets:

g(2, {3}) = c23 + g(3, ∅) = 9 + 6 = 15        g(2, {4}) = c24 + g(4, ∅) = 10 + 8 = 18
g(3, {2}) = c32 + g(2, ∅) = 13 + 5 = 18       g(3, {4}) = c34 + g(4, ∅) = 12 + 8 = 20
g(4, {2}) = c42 + g(2, ∅) = 8 + 5 = 13        g(4, {3}) = c43 + g(3, ∅) = 9 + 6 = 15

Then the two-element subsets:

g(2, {3, 4}) = min {c23 + g(3, {4}), c24 + g(4, {3})} = min {9 + 20, 10 + 15} = min {29, 25} = 25
g(3, {2, 4}) = min {c32 + g(2, {4}), c34 + g(4, {2})} = min {13 + 18, 12 + 13} = min {31, 25} = 25
g(4, {2, 3}) = min {c42 + g(2, {3}), c43 + g(3, {2})} = min {8 + 15, 9 + 18} = min {23, 27} = 23

Therefore,

g(1, {2, 3, 4}) = min {10 + 25, 15 + 25, 20 + 23} = min {35, 40, 43} = 35

The minimum cost tour has length 35; tracing the minimizing choices back gives the tour 1 → 2 → 4 → 3 → 1.
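Equation 2 translates directly into a short memoized program (the Held-Karp method). The sketch below anchors tours at vertex 1, which becomes index 0; the function names are illustrative only.

from functools import lru_cache

def tsp(c):
    # g(i, S): length of a shortest path from i through all vertices of
    # the frozenset S, ending at vertex 0. Answer: g(0, {1, ..., n-1}).
    n = len(c)

    @lru_cache(maxsize=None)
    def g(i, S):
        if not S:
            return c[i][0]                       # g(i, empty) = c(i, 1)
        return min(c[i][j] + g(j, S - {j}) for j in S)

    return g(0, frozenset(range(1, n)))

For the cost matrix above, tsp([[0, 10, 15, 20], [5, 0, 9, 10], [6, 13, 0, 12], [8, 8, 9, 0]]) returns 35.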
0/1 – KNAPSACK:
We are given n objects and a knapsack. Each object i has a positive weight wi and a positive profit pi. The knapsack can carry a weight not exceeding m. Fill the knapsack so that the total profit of the objects in the knapsack is maximized.

Decisions on the xi are made in the order xn, xn−1, ..., x1. Following a decision on xn, we may be in one of two possible states: the capacity remaining is m (when xn = 0), or the capacity remaining is m − wn and a profit of pn has accrued (when xn = 1). It is clear that the remaining decisions xn−1, ..., x1 must be optimal with respect to the problem state resulting from the decision on xn. Otherwise, xn, ..., x1 will not be optimal. Hence, the principle of optimality holds.
Let fi(y) be the value of an optimal solution using objects 1, ..., i and capacity y; then (equation 2)

    fi(y) = max { fi-1(y), fi-1(y − wi) + pi }.

Equation 2 can be solved for fn(m) by beginning with the knowledge f0(y) = 0 for all y ≥ 0 and fi(y) = −∞ for y < 0. Then f1, f2, ..., fn can be successively computed using equation 2.

When the wi's are integer, we need to compute fi(y) for integer y, 0 ≤ y ≤ m. Since fi(y) = −∞ for y < 0, these function values need not be computed explicitly. Since each fi can be computed from fi−1 in Θ(m) time, it takes Θ(mn) time to compute fn. When the wi's are real numbers, fi(y) is needed for real numbers y such that 0 ≤ y ≤ m, so fi cannot be explicitly computed for all y in this range. Even when the wi's are integer, the explicit Θ(mn) computation of fn may not be the most efficient computation. So, we explore an alternative method that works for both cases.
Each fi(y) is an ascending step function; i.e., there are a finite number of y's, 0 = y1 < y2 < ... < yk, such that fi(y1) < fi(y2) < ... < fi(yk); fi(y) = −∞ for y < y1; fi(y) = fi(yk) for y ≥ yk; and fi(y) = fi(yj) for yj ≤ y < yj+1. So, we need to compute only fi(yj), 1 ≤ j ≤ k. We use the ordered set Si = {(fi(yj), yj) | 1 ≤ j ≤ k} to represent fi(y). Each member of Si is a pair (P, W), where P = fi(yj) and W = yj. Notice that S0 = {(0, 0)}. We can compute Si+1 from Si by first computing:

    Si1 = {(P + pi+1, W + wi+1) | (P, W) ∈ Si}

Now, Si+1 can be computed by merging the pairs in Si and Si1 together. Note that if Si+1 contains two pairs (Pj, Wj) and (Pk, Wk) with the property that Pj ≤ Pk and Wj ≥ Wk, then the pair (Pj, Wj) can be discarded because of equation 2. Discarding or purging rules such as this one are also known as dominance rules. Dominated tuples get purged; in the above, (Pk, Wk) dominates (Pj, Wj).
Example 1:
Consider the knapsack instance n = 3, (w1, w2, w3) = (2, 3, 4), (P1, P2, P3) = (1,2,
5) and M = 6.
Solution:

Using the pair-set method, with Si1 = {(P + pi+1, W + wi+1) | (P, W) ∈ Si}:

S0 = {(0, 0)};  S01 = {(1, 2)}
S1 = (S0 U S01) = {(0, 0), (1, 2)};  S11 = {(2, 3), (3, 5)}
S2 = (S1 U S11) = {(0, 0), (1, 2), (2, 3), (3, 5)};  S21 = {(5, 4), (6, 6), (7, 7), (8, 9)}
S3 = (S2 U S21) = {(0, 0), (1, 2), (2, 3), (3, 5), (5, 4), (6, 6), (7, 7), (8, 9)}

Purging the dominated pair (3, 5) (dominated by (5, 4)) and the pairs (7, 7) and (8, 9), whose weights exceed M = 6:

S3 = {(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)}

From (6, 6) we can infer that the maximum profit Σ pixi = 6, with weight Σ wixi = 6.
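The pair-set computation above can be sketched as follows (the sorting-based purge and the function name are choices of this sketch):

def knapsack_pairs(p, w, m):
    # 0/1 knapsack via the ordered pair sets S^i; a pair is (profit, weight).
    S = [(0, 0)]
    for pi, wi in zip(p, w):
        # S^i_1: each pair of S shifted by the current item (if it fits)
        S1 = [(P + pi, W + wi) for (P, W) in S if W + wi <= m]
        merged = sorted(S + S1, key=lambda t: (t[1], -t[0]))
        S = []
        best = -1
        for P, W in merged:                      # purge dominated pairs
            if P > best:                         # profits must strictly increase
                S.append((P, W))
                best = P
    return S[-1][0]                              # maximum attainable profit

knapsack_pairs([1, 2, 5], [2, 3, 4], 6) returns 6, and the intermediate sets agree with the S1, S2 and S3 computed above.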
Reliability Design:
If stage i contains mi copies of device Di, then the probability that all mi have a malfunction is (1 − ri)^mi. Hence the reliability of stage i becomes φi(mi) = 1 − (1 − ri)^mi.

Our problem is to use device duplication to maximize reliability. This maximization is to be carried out under a cost constraint. Let ci be the cost of each unit of device i and let c be the maximum allowable cost of the system being designed. We wish to solve:

    Maximize    Π (over 1 ≤ i ≤ n)  φi(mi)
    Subject to  Σ (over 1 ≤ i ≤ n)  ci mi ≤ c,   with mi ≥ 1 an integer, 1 ≤ i ≤ n.
Assume each ci > 0; then each mi must be in the range 1 ≤ mi ≤ ui, where

    ui = ⌊ (c + ci − Σ (over 1 ≤ j ≤ n) cj) / ci ⌋.

An optimal solution is built up one stage at a time; the pairs (f, x) retained after stage i satisfy

    x = Σ (over 1 ≤ j ≤ i) cj mj,   with 1 ≤ mj ≤ uj, 1 ≤ j ≤ i.
Example :
Design a three stage system with device types D1, D2 and D3. The costs are $30, $15
and $20 respectively. The Cost of the system is to be no more than $105. The
reliability of each device is 0.9, 0.8 and 0.5 respectively.
Solution:
We assume that if stage i has mi devices of type i in parallel, then φi(mi) = 1 − (1 − ri)^mi.

Since we can assume each ci > 0, each mi must be in the range 1 ≤ mi ≤ ui, where ui = ⌊ (c + ci − Σ cj) / ci ⌋:

    u1 = ⌊ (105 + 30 − 30 − 15 − 20) / 30 ⌋ = ⌊ 70 / 30 ⌋ = 2
    u2 = ⌊ (105 + 15 − 30 − 15 − 20) / 15 ⌋ = ⌊ 55 / 15 ⌋ = 3
    u3 = ⌊ (105 + 20 − 30 − 15 − 20) / 20 ⌋ = ⌊ 60 / 20 ⌋ = 3
We use Sij, where i is the stage number and j is the number of devices mi chosen for stage i; Si is the union of the Sij.

S0 = {(f0(x), x)} = {(1, 0)}

S1 depends on u1; as u1 = 2, S1 = S11 U S12.
S2 depends on u2; as u2 = 3, S2 = S21 U S22 U S23.
S3 depends on u3; as u3 = 3, S3 = S31 U S32 U S33.

Now find S11 = {(f1(x), x)} = {(0.9, 30)}, since one copy of D1 has reliability 0.9 and cost 30.
With two copies, φ1(2) = 1 − (1 − 0.9)² = 0.99, so S12 = {(0.99, 60)}.
Dominance Rule:

If Si contains two pairs (f1, x1) and (f2, x2) with the property that f1 ≥ f2 and x1 ≤ x2, then (f1, x1) dominates (f2, x2), and by the dominance rule (f2, x2) can be discarded. Discarding or purging rules such as the one above are known as dominance rules: dominating tuples remain in Si, while dominated tuples are discarded from Si.

Continuing the computation (0.72, 0.864 and 0.8928 are the reliabilities carried forward from S2, with costs 45, 60 and 75):

S31 = {(0.5 (0.72), 45 + 20), (0.5 (0.864), 60 + 20), (0.5 (0.8928), 75 + 20)}
    = {(0.36, 65), (0.432, 80), (0.4464, 95)}
S32 = {(0.75 (0.72), 45 + 40), (0.75 (0.864), 60 + 40)} = {(0.54, 85), (0.648, 100)},  since φ3(2) = 1 − 0.5² = 0.75
S33 = {(0.875 (0.72), 45 + 60), (0.875 (0.864), 60 + 60), (0.875 (0.8928), 75 + 60)};
      only (0.63, 105) is feasible, the other pairs exceeding the cost limit 105.

The best design has a reliability of 0.648 and a cost of 100. Tracing back for the solution through the Si's we can determine that m1 = 1, m2 = 2 and m3 = 2.
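The computation above can be sketched by brute-force enumeration of the feasible copy vectors (a simplification of the pair-set method, without dominance purging; names and representation are assumptions of this sketch):

def reliability_design(r, cost, c):
    # Maximize the product of stage reliabilities subject to total cost <= c.
    # r[i], cost[i]: reliability and unit cost of device i.
    n = len(r)
    # u[i]: most copies of device i that any feasible design can afford
    u = [(c + cost[i] - sum(cost)) // cost[i] for i in range(n)]
    states = [(1.0, 0, [])]                      # (reliability, cost, m's so far)
    for i in range(n):
        remaining = sum(cost[i + 1:])            # one copy of each later device
        nxt = []
        for rel, spent, ms in states:
            for m in range(1, u[i] + 1):
                stage = 1 - (1 - r[i]) ** m      # phi_i(m)
                total = spent + m * cost[i]
                if total + remaining <= c:       # keep only feasible states
                    nxt.append((rel * stage, total, ms + [m]))
        states = nxt
    best = max(states, key=lambda s: s[0])
    return best[0], best[2]

reliability_design([0.9, 0.8, 0.5], [30, 15, 20], 105) returns a reliability of 0.648 with m = [1, 2, 2], agreeing with the design found above.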
Optimal Binary Search Tree:
In computer science, an optimal binary search tree (optimal BST), sometimes called a weight-balanced binary tree, is a binary search tree which provides the smallest possible search time (or expected search time) for a given sequence of accesses (or access probabilities).
The cost C(i, j) of an optimal tree over keys i+1, ..., j can be computed as:

    C(i, j) = min over i < k ≤ j of { C(i, k − 1) + C(k, j) } + W(i, j),

with C(i, i) = 0, W(i, i) = q(i) and W(i, j) = W(i, j − 1) + p(j) + q(j); R(i, j) records the minimizing root k.

We solve the problem by knowing W(i, i+1), C(i, i+1) and R(i, i+1) for 0 ≤ i < 4; then W(i, i+2), C(i, i+2) and R(i, i+2) for 0 ≤ i < 3; and repeating until W(0, n), C(0, n) and R(0, n) are obtained.
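A sketch of the W/C/R computation just described (the names and 0..n array bounds are choices of this sketch; p is indexed 1..n with p[0] unused, q is indexed 0..n):

def obst(p, q, n):
    # Optimal binary search tree cost. Returns the C and R tables;
    # C[0][n] is the cost of an optimal tree, R holds the chosen roots.
    W = [[0] * (n + 1) for _ in range(n + 1)]
    C = [[0] * (n + 1) for _ in range(n + 1)]
    R = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        W[i][i] = q[i]                           # empty tree: W(i, i) = q(i)
    for length in range(1, n + 1):               # solve in order of j - i
        for i in range(n + 1 - length):
            j = i + length
            W[i][j] = W[i][j - 1] + p[j] + q[j]
            # try every root k in i+1..j and keep the best
            k = min(range(i + 1, j + 1),
                    key=lambda k: C[i][k - 1] + C[k][j])
            C[i][j] = W[i][j] + C[i][k - 1] + C[k][j]
            R[i][j] = k
    return C, R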
Matrix chain multiplication
The problem
Given a sequence of matrices A1, A2, A3, ..., An, find the best way (using the minimal number
of multiplications) to compute their product.
• Isn't there only one way? ((···((A1 · A2) · A3) ···) · An)
• No, matrix multiplication is associative.
  e.g. A1 · (A2 · (A3 · (··· (An−1 · An) ···))) yields the same matrix.
• Different multiplication orders do not cost the same:
  – Multiplying a p × q matrix A by a q × r matrix B takes p · q · r multiplications; the result is a p × r matrix.
  – Consider multiplying a 10 × 100 matrix A1 with a 100 × 5 matrix A2 and a 5 × 50 matrix A3.
  – (A1 · A2) · A3 takes 10 · 100 · 5 + 10 · 5 · 50 = 7500 multiplications.
  – A1 · (A2 · A3) takes 100 · 5 · 50 + 10 · 100 · 50 = 75000 multiplications.
Notation
• In general, let Ai be a pi−1 × pi matrix.
• Let m(i, j) denote the minimal number of multiplications needed to compute Ai · Ai+1 ··· Aj.
• We want to compute m(1, n).
Recursive algorithm
• Assume that someone tells us the position of the last product, say k. Then we have to compute recursively the best way to multiply the chain from i to k, and from k + 1 to j, and add the cost of the final product. This means that

    m(i, j) = 0 if i = j,  and otherwise  m(i, j) = min over i ≤ k < j of { m(i, k) + m(k + 1, j) + pi−1 · pk · pj }.
Matrix-chain(i, j)
    IF i = j THEN return 0
    m = ∞
    FOR k = i TO j − 1 DO
        q = Matrix-chain(i, k) + Matrix-chain(k + 1, j) + pi−1 · pk · pj
        IF q < m THEN m = q
    OD
    return m
END Matrix-chain

Return Matrix-chain(1, n)
• Running time:

    T(n) = Σ (k = 1 to n−1) [ T(k) + T(n − k) + O(1) ]
         = 2 · Σ (k = 1 to n−1) T(k) + O(n)
         ≥ 2 · T(n − 1)
         ≥ 2 · 2 · T(n − 2)
         ≥ 2 · 2 · 2 · ...
         = Ω(2^n)
• Exponential is ...
SLOW!
For example, we compute Matrix-chain(3, 4) twice.

• The solution is to "remember" the values we have already computed in a table. This is called memoization. We'll have a table T[1..n][1..n] such that T[i][j] stores the solution to problem Matrix-chain(i, j). Initially all entries are set to ∞:

    FOR i = 1 TO n DO
        FOR j = i TO n DO
            T[i][j] = ∞
        OD
    OD

• The code for Matrix-chain(i, j) stays the same, except that it now uses the table. The first thing Matrix-chain(i, j) does is to check the table to see if T[i][j] is already computed. If so, it returns it; otherwise, it computes it and writes it in the table. Below is the updated code.
Matrix-chain(i, j)
    IF T[i][j] < ∞ THEN return T[i][j]
    IF i = j THEN T[i][j] = 0, return 0
    m = ∞
    FOR k = i TO j − 1 DO
        q = Matrix-chain(i, k) + Matrix-chain(k + 1, j) + pi−1 · pk · pj
        IF q < m THEN m = q
    OD
    T[i][j] = m
    return m
END Matrix-chain

Return Matrix-chain(1, n)
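The same memoized computation in runnable form, with a memo decorator taking the place of the explicit table T (a sketch; the list p of dimensions is the only input):

from functools import lru_cache

def matrix_chain(p):
    # Minimal multiplication count for A1..An, where Ai is p[i-1] x p[i].
    n = len(p) - 1

    @lru_cache(maxsize=None)
    def m(i, j):
        if i == j:
            return 0
        return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return m(1, n)

matrix_chain([10, 100, 5, 50]) returns 7500, matching the example above.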
UNIT V:
Branch and Bound: General method, applications - Travelling sales person
problem,0/1 knapsack problem- LC Branch and Bound solution, FIFO Branch
and Bound solution.
NP-Hard and NP-Complete problems: Basic concepts, non-deterministic algorithms, NP-Hard and NP-Complete classes, Cook's theorem.
General method:
Branch and Bound is another method to systematically search a solution space. Just like backtracking, we will use bounding functions to avoid generating subtrees that do not contain an answer node. However, Branch and Bound differs from backtracking in two important manners:

1. It is not restricted to a depth-first exploration of the state space tree: all children of the E-node are generated before any other live node becomes the E-node.
2. It has a bounding function, which goes far beyond the feasibility test, as a means to prune the search tree efficiently.
Branch and Bound refers to all state space search methods in which all children of the E-node are generated before any other live node becomes the E-node. Branch and Bound is thus a generalization of both graph search strategies, BFS and D-search.

A BFS-like state space search is called FIFO (first in, first out) search, as the list of live nodes is a first-in-first-out list (a queue).

A D-search-like state space search is called LIFO (last in, first out) search, as the list of live nodes is a last-in-first-out list (a stack).
Definition 1: Live node is a node that has been generated but whose children have
not yet beengenerated.
Definition 2: E-node is a live node whose children are currently being explored. In
other words, an E-node is a node currently being expanded.
Definition 3: Dead node is a generated node that is not to be expanded or explored
any further. All children of a dead node have already been expanded.
Definition 4: Branch-an-bound refers to all state space search methods in which all
children of an E-node are generated before any other live node can
become the E-node.
Definition 5: The adjective "heuristic" means "related to improving problem-solving performance". As a noun it is also used in regard to "any method or trick used to improve the efficiency of a problem-solving program". But imperfect methods are not necessarily heuristic, nor vice versa. "A heuristic (heuristic rule, heuristic method) is a rule of thumb, strategy, trick, simplification, or any other kind of device which drastically limits search for solutions in large problem spaces. Heuristics do not guarantee optimal solutions; they do not guarantee any solution at all. A useful heuristic offers solutions which are good enough most of the time."
Least Cost (LC) search:
In both LIFO and FIFO Branch and Bound the selection rule for the next E-node is rigid and blind. The selection rule for the next E-node does not give any preference to a node that has a very good chance of leading the search to an answer node quickly.

The search for an answer node can be sped up by using an "intelligent" ranking function ĉ(·) for live nodes. The next E-node is selected on the basis of this ranking function. The node x is assigned the rank

    ĉ(x) = f(h(x)) + g(x)

where h(x) is the cost of reaching x from the root, f(·) is any non-decreasing function, and g(x) is an estimate of the additional effort needed to reach an answer node from x.

A search strategy that uses the cost function ĉ(x) = f(h(x)) + g(x) to select the next E-node, always choosing a live node with least ĉ(·), is called an LC-search (Least Cost search).
BFS and D-search are special cases of LC-search. If g(x) = 0 and f(h(x)) = level of node x, then an LC-search generates nodes by levels; this is essentially a BFS. If f(h(x)) = 0 and g(x) ≥ g(y) whenever y is a child of x, then the search is essentially a D-search.

We associate a cost c(x) with each node x in the state space tree t: c(x) is the minimum cost of any answer node in the subtree with root x, so c(t) is the cost of a minimum-cost answer node in t. It is usually not possible to compute c(x) easily, so we compute an estimate ĉ(x) of c(x). This heuristic ĉ(·) should be easy to compute and generally has the property that if x is either an answer node or a leaf node, then c(x) = ĉ(x).

LC-search uses ĉ to find an answer node. The algorithm uses two functions, Least() and Add(), to delete and add a live node from or to the list of live nodes, respectively. Least() finds a live node with least ĉ(·); this node is deleted from the list of live nodes and returned.
Add(x) adds the new live node x to the list of live nodes. The list of live nodes can be implemented as a min-heap.
Algorithm LCSearch outputs the path from the answer node it finds to the root node
t. This is easy to do if with each node x that becomes live, we associate a field parent
which gives the parent of node x. When the answer node g is found, the path from g
to t can be determined by following a sequence of parent values starting from the
current E-node (which is the parent of g) and ending at node t.
Listnode = record
{
    Listnode *next, *parent;
    float cost;
}

Algorithm LCSearch(t)
{ // Search t for an answer node.
    if *t is an answer node then output *t and return;
    E := t; // E-node.
    initialize the list of live nodes to be empty;
    repeat
    {
        for each child x of E do
        {
            if x is an answer node then output the path from x to t and return;
            Add(x); // x is a new live node.
            (x → parent) := E; // pointer for path to root
        }
        if there are no more live nodes then
        {
            write ("No answer node");
            return;
        }
        E := Least();
    } until (false);
}
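As a generic sketch of this control structure, with the live-node list kept in a min-heap ordered by ĉ. The four parameters stand in for the problem-specific pieces and are assumptions of the sketch (nodes are assumed hashable):

import heapq

def lc_search(root, children, c_hat, is_answer):
    # children(x): iterable of children of x; c_hat(x): ranking function;
    # is_answer(x): answer-node test. Returns root-to-answer path or None.
    if is_answer(root):
        return [root]
    parent = {root: None}
    live = []                                    # min-heap of live nodes
    count = 0                                    # tie-breaker for the heap
    E = root                                     # the first E-node is the root
    while True:
        for x in children(E):
            parent[x] = E                        # pointer for path to root
            if is_answer(x):                     # build the path back to root
                path = [x]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return list(reversed(path))
            count += 1
            heapq.heappush(live, (c_hat(x), count, x))
        if not live:
            return None                          # no answer node
        _, _, E = heapq.heappop(live)            # Least(): next E-node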
The root node is the first E-node. During the execution of LC-search, the list of live nodes contains all live nodes except the E-node; initially this list is empty. Examine all the children of the E-node: if one of the children x is an answer node, the algorithm outputs the path from x to t and terminates. Otherwise each child of E becomes a live node: it is added to the list of live nodes and its parent field is set to E. When all the children of E have been generated, E becomes a dead node; this happens only if none of E's children is an answer node. If no live nodes remain, the search fails; otherwise Least(), by definition, correctly chooses the next E-node and the search continues from there.

LC-search terminates only when either an answer node is found or the entire state space tree has been generated and searched.
Bounding:
A branch and bound method searches a state space tree using any search mechanism in which all the children of the E-node are generated before another node becomes the E-node. We assume that each answer node x has a cost c(x) associated with it and that a minimum-cost answer node is to be found. Three common search strategies are FIFO, LIFO, and LC. The three search methods differ only in the selection rule used to obtain the next E-node.
A good bounding function helps to prune the tree efficiently, leading to a faster exploration of the solution space.
A cost function ĉ(·) such that ĉ(x) ≤ c(x) is used to provide lower bounds on solutions obtainable from any node x. If upper is an upper bound on the cost of a minimum-cost solution, then all live nodes x with ĉ(x) > upper can be killed, since c(x) ≥ ĉ(x) > upper. The starting value for upper can be obtained by some heuristic or can be set to ∞.
As long as the initial value for upper is not less than the cost of a minimum-cost
answer node, the above rules to kill live nodes will not result in the killing of a live
node that can reach a minimum-cost answer node. Each time a new answer node is
found, the value of upper can be updated.
To formulate the search for an optimal solution for a least-cost answer node in a
state space tree, it is necessary to define the cost function c(.), such that c(x) is
minimum for all nodes representing an optimal solution. The easiest way to do this is
to use the objective function itself for c(·).
For nodes representing feasible solutions, c(x) is the value of the objective function for that feasible solution. For nodes representing partial solutions, c(x) is the cost of the minimum-cost node in the subtree with root x.
Since c(x) is generally hard to compute, the branch-and-bound algorithm will use an estimate ĉ(x) such that ĉ(x) ≤ c(x) for all x.

A FIFO branch-and-bound algorithm for the job sequencing problem can begin with upper = ∞ as an upper bound on the cost of a minimum-cost answer node.
Starting with node 1 as the E-node and using the variable tuple size formulation of
Figure 8.4, nodes 2, 3, 4, and 5 are generated. Then u(2) = 19, u(3) = 14, u(4) =
18, and u(5) = 21.
The variable upper is updated to 14 when node 3 is generated. Since ĉ(4) and ĉ(5) are greater than upper, nodes 4 and 5 get killed. Only nodes 2 and 3 remain alive.

Node 2 becomes the next E-node. Its children, nodes 6, 7 and 8, are generated. Then u(6) = 9 and so upper is updated to 9. The cost ĉ(7) = 10 > upper, and node 7 gets killed. Node 8 is infeasible, so it is killed as well.

Next, node 3 becomes the E-node. Nodes 9 and 10 are now generated. Then u(9) = 8 and so upper becomes 8. The cost ĉ(10) = 11 > upper, and this node is killed.
The next E-node is node 6. Both its children are infeasible. Node 9's only child is also infeasible. The minimum-cost answer node is node 9; it has a cost of 8.

An LC branch-and-bound search of the tree of Figure 8.4 will begin with upper = ∞ and node 1 as the first E-node.

Node 2 is the next E-node, as ĉ(2) = 0 and ĉ(3) = 5. Nodes 6, 7 and 8 are generated, and upper is updated to 9 when node 6 is generated. So node 7 is killed, as ĉ(7) = 10 > upper. Node 8 is infeasible and so killed. The only live nodes now are nodes 3 and 6.

Node 6 is the next E-node, as ĉ(6) = 0 < ĉ(3). Both its children are infeasible.

Node 3 becomes the next E-node. When node 9 is generated, upper is updated to 8, as u(9) = 8. So node 10, with ĉ(10) = 11, is killed on generation.

Node 9 becomes the next E-node. Its only child is infeasible. No live nodes remain. The search terminates with node 9 representing the minimum-cost answer node.

The answer path is 1 → 3 → 9, with cost 5 + 3 = 8.
By using the dynamic programming algorithm we can solve the traveling salesperson problem with a worst-case time complexity of O(n²2ⁿ). The problem can also be solved by the branch and bound technique using an efficient bounding function. The worst-case time complexity of the traveling salesperson problem using LC branch and bound is also O(n²2ⁿ), which shows that there is no reduction of worst-case complexity over the previous method.
We start at a particular node, visit all nodes exactly once, and come back to the initial node with minimum cost.
Let G = (V, E) be a connected graph. Let c(i, j) be the cost of edge <i, j>, with cij = ∞ if <i, j> ∉ E, and let |V| = n, the number of vertices. Every tour starts at vertex 1 and ends at the same vertex. So the solution space is given by S = {(1, π, 1) | π is a permutation of (2, 3, ..., n)}, and |S| = (n − 1)!. The size of S can be reduced by restricting S so that (1, i1, i2, ..., in−1, 1) ∈ S iff <ij, ij+1> ∈ E for 0 ≤ j ≤ n − 1, with i0 = in = 1.
1. Reduce the given cost matrix. A matrix is reduced if every row and column is reduced. A row (column) is said to be reduced if it contains at least one zero and all remaining entries are non-negative. This can be done as follows:

   a) Row reduction: take the minimum element of the first row and subtract it from all elements of the first row; next take the minimum element of the second row and subtract it from the second row. Apply the same procedure to all rows.
   b) Find the sum of the elements subtracted from the rows.
   c) Column reduction: apply the same procedure column by column.
   d) Find the sum of the elements subtracted from the columns.
   e) Obtain the cumulative sum of the row-wise and column-wise reductions; this is the lower bound of the root node.
2. Calculate the reduced cost matrix for every node R. Let A be the reduced cost matrix for node R, and let S be a child of R such that the tree edge (R, S) corresponds to including edge <i, j> in the tour. If S is not a leaf node, then the reduced cost matrix for S may be obtained as follows:

   a) Change all entries in row i and column j of A to ∞.
   b) Set A(j, 1) to ∞.
   c) Reduce all rows and columns in the resulting matrix, except for rows and columns containing only ∞. Let r be the total amount subtracted to reduce the matrix.
   d) Compute ĉ(S) = ĉ(R) + A(i, j) + r, where ĉ(R) is the lower bound of node R and ĉ(S) is the cost function (lower bound) for S.
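The reduction step in runnable form; a sketch assuming the matrix is a list of lists with INF marking forbidden entries (names are illustrative):

INF = float('inf')

def reduce_matrix(a):
    # Row- and column-reduce a copy of matrix a; return (matrix, r),
    # where r is the total amount subtracted (steps above).
    n = len(a)
    a = [row[:] for row in a]
    r = 0
    for i in range(n):                           # row reduction
        m = min(a[i])
        if 0 < m < INF:
            r += m
            a[i] = [x - m for x in a[i]]         # INF - m stays INF
    for j in range(n):                           # column reduction
        m = min(a[i][j] for i in range(n))
        if 0 < m < INF:
            r += m
            for i in range(n):
                a[i][j] -= m
    return a, r

Applied to the example cost matrix below, reduce_matrix subtracts 21 row-wise and 4 column-wise, so r = 25, the lower bound of the root node.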
Example:
Find the LC branch and bound solution for the traveling salesperson problem whose cost matrix is as follows:
                      ∞   20   30   10   11
                     15    ∞   16    4    2
The cost matrix is    3    5    ∞    2    4
                     19    6   18    ∞    3
                     16    4    7   16    ∞
Step 1: Find the reduced cost matrix.
                                               ∞   10   20    0    1
                                              13    ∞   14    2    0
The resulting row-wise reduced cost matrix =   1    3    ∞    0    2
                                              16    3   15    ∞    0
                                              12    0    3   12    ∞

(row reduction sum = 10 + 2 + 2 + 3 + 4 = 21)
Deduct 1 (the column minimum) from all values in the 1st column, and 3 (the column minimum) from all values in the 3rd column.
                                                      ∞   10   17    0    1
                                                     12    ∞   11    2    0
The resulting column-wise reduced cost matrix (A) =   0    3    ∞    0    2
                                                     15    3   12    ∞    0
                                                     11    0    0   12    ∞
Cumulative reduction = row-wise reduction + column-wise reduction = 21 + 4 = 25.

This is the lower bound ĉ(1) of the root, node 1, because this is the initially reduced cost matrix.
Starting from node 1, we can next visit vertices 2, 3, 4 and 5. So we consider exploring the paths (1, 2), (1, 3), (1, 4) and (1, 5).

[State space tree: node 1 branches to nodes 2, 3, 4 and 5 via i = 2, 3, 4, 5.]
Step 2:
Change all entries of row 1 and column 2 of A to ∞ and also set A(2, 1) to ∞:

     ∞    ∞    ∞    ∞    ∞
     ∞    ∞   11    2    0
     0    ∞    ∞    0    2
    15    ∞   12    ∞    0
    11    ∞    0   12    ∞

Apply row and column reduction to the rows and columns that are not completely ∞. The matrix is already reduced, so the resultant matrix is unchanged.

Row reduction sum = 0; column reduction sum = 0; cumulative reduction (r) = 0.

Therefore ĉ(2) = ĉ(1) + A(1, 2) + r = 25 + 10 + 0 = 35.
Change all entries of row 1 and column 3 of A to ∞ and also set A(3, 1) to ∞:

     ∞    ∞    ∞    ∞    ∞
    12    ∞    ∞    2    0
     ∞    3    ∞    0    2
    15    3    ∞    ∞    0
    11    0    ∞   12    ∞

Apply row and column reduction to the rows and columns that are not completely ∞: the 1st column is reduced by 11, so r = 11.

Therefore ĉ(3) = ĉ(1) + A(1, 3) + r = 25 + 17 + 11 = 53.
Change all entries of row 1 and column 4 of A to ∞ and also set A(4, 1) to ∞:

     ∞    ∞    ∞    ∞    ∞
    12    ∞   11    ∞    0
     0    3    ∞    ∞    2
     ∞    3   12    ∞    0
    11    0    0    ∞    ∞

Apply row and column reduction to the rows and columns that are not completely ∞. The matrix is already reduced, so the resultant matrix is unchanged and r = 0.

Therefore ĉ(4) = ĉ(1) + A(1, 4) + r = 25 + 0 + 0 = 25.
Change all entries of row 1 and column 5 of A to ∞ and also set A(5, 1) to ∞:

     ∞    ∞    ∞    ∞    ∞
    12    ∞   11    2    ∞
     0    3    ∞    0    ∞
    15    3   12    ∞    ∞
     ∞    0    0   12    ∞

Apply row and column reduction to the rows and columns that are not completely ∞: the 2nd row is reduced by 2 and the 4th row by 3, so r = 5.

                              ∞    ∞    ∞    ∞    ∞
                             10    ∞    9    0    ∞
Then the resultant matrix =   0    3    ∞    0    ∞
                             12    0    9    ∞    ∞
                              ∞    0    0   12    ∞

Therefore ĉ(5) = ĉ(1) + A(1, 5) + r = 25 + 1 + 5 = 31.
[State space tree: node 1 (ĉ = 25) has children node 2 (ĉ = 35), node 3 (ĉ = 53), node 4 (ĉ = 25) and node 5 (ĉ = 31); node 4 is expanded next, to nodes 6, 7, 8 via i = 2, 3, 5.]

The lower bounds for the paths (1, 2), (1, 3), (1, 4) and (1, 5) are 35, 53, 25 and 31 respectively. The bound for path (1, 4) is minimum, hence the matrix obtained for path (1, 4) is taken as the reduced cost matrix.
        ∞    ∞    ∞    ∞    ∞
       12    ∞   11    ∞    0
A =     0    3    ∞    ∞    2
        ∞    3   12    ∞    0
       11    0    0    ∞    ∞
The new possible paths are (4, 2), (4, 3) and (4, 5).
Change all entries of row 4 and column 2 of A to ∞ and also set A(2, 1) to ∞:

     ∞    ∞    ∞    ∞    ∞
     ∞    ∞   11    ∞    0
     0    ∞    ∞    ∞    2
     ∞    ∞    ∞    ∞    ∞
    11    ∞    0    ∞    ∞

The matrix is already reduced, so r = 0 and ĉ(6) = ĉ(4) + A(4, 2) + r = 25 + 3 + 0 = 28.
Change all entries of row 4 and column 3 of A to ∞ and also set A(3, 1) to ∞:

     ∞    ∞    ∞    ∞    ∞
    12    ∞    ∞    ∞    0
     ∞    3    ∞    ∞    2
     ∞    ∞    ∞    ∞    ∞
    11    0    ∞    ∞    ∞

Apply row and column reduction: the 3rd row is reduced by 2 and the 1st column by 11, so r = 13.

                             ∞    ∞    ∞    ∞    ∞
                             1    ∞    ∞    ∞    0
Then the resultant matrix =  ∞    1    ∞    ∞    0
                             ∞    ∞    ∞    ∞    ∞
                             0    0    ∞    ∞    ∞

Therefore ĉ(7) = ĉ(4) + A(4, 3) + r = 25 + 12 + 13 = 50.
Change all entries of row 4 and column 5 of A to ∞ and also set A(5, 1) to ∞:

     ∞    ∞    ∞    ∞    ∞
    12    ∞   11    ∞    ∞
     0    3    ∞    ∞    ∞
     ∞    ∞    ∞    ∞    ∞
     ∞    0    0    ∞    ∞

Apply row and column reduction: the 2nd row is reduced by 11, so r = 11.

                             ∞    ∞    ∞    ∞    ∞
                             1    ∞    0    ∞    ∞
Then the resultant matrix =  0    3    ∞    ∞    ∞
                             ∞    ∞    ∞    ∞    ∞
                             ∞    0    0    ∞    ∞

Row reduction sum = 11, column reduction sum = 0, cumulative reduction (r) = 11.

Therefore ĉ(8) = ĉ(4) + A(4, 5) + r = 25 + 0 + 11 = 36.
The tree organization up to this point is as follows:

[State space tree: node 1 (U = ∞, L = 25) with children 2 (35), 3 (53), 4 (25), 5 (31); node 4 expanded to 6 (ĉ = 28, i = 2), 7 (ĉ = 50, i = 3), 8 (ĉ = 36, i = 5); node 6 is expanded next, to nodes 9 (i = 3) and 10 (i = 5).]

The bounds for the paths (4, 2), (4, 3) and (4, 5) are 28, 50 and 36 respectively. The bound for path (4, 2) is minimum, hence the matrix obtained for path (4, 2) is taken as the reduced cost matrix.
        ∞    ∞    ∞    ∞    ∞
        ∞    ∞   11    ∞    0
A =     0    ∞    ∞    ∞    2
        ∞    ∞    ∞    ∞    ∞
       11    ∞    0    ∞    ∞
The new possible paths are (2, 3) and (2, 5).
Change all entries of row 2 and column 3 of A to ∞ and also set A(3, 1) to ∞:

     ∞    ∞    ∞    ∞    ∞
     ∞    ∞    ∞    ∞    ∞
     ∞    ∞    ∞    ∞    2
     ∞    ∞    ∞    ∞    ∞
    11    ∞    ∞    ∞    ∞

Apply row and column reduction: the 3rd row is reduced by 2 and the 5th row by 11, so the cumulative reduction is r = 2 + 11 = 13.

Therefore ĉ(9) = ĉ(6) + A(2, 3) + r = 28 + 11 + 13 = 52.
Change all entries of row 2 and column 5 of A to ∞ and also set A(5, 1) to ∞:

     ∞    ∞    ∞    ∞    ∞
     ∞    ∞    ∞    ∞    ∞
     0    ∞    ∞    ∞    ∞
     ∞    ∞    ∞    ∞    ∞
     ∞    ∞    0    ∞    ∞

The matrix is already reduced, so r = 0 and ĉ(10) = ĉ(6) + A(2, 5) + r = 28 + 0 + 0 = 28.
[State space tree: as before, with node 6 expanded to node 9 (ĉ = 52, i = 3) and node 10 (ĉ = 28, i = 5); node 10 is expanded next, to node 11 (i = 3).]

The bounds for the paths (2, 3) and (2, 5) are 52 and 28 respectively. The bound for path (2, 5) is minimum, hence the matrix obtained for path (2, 5) is taken as the reduced cost matrix:

        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
A =     0    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    0    ∞    ∞
The only remaining path is (5, 3). Change all entries of row 5 and column 3 of A to ∞ and also set A(3, 1) to ∞. The resultant matrix contains only ∞ entries, so the row and column reduction sums are 0 and r = 0.

Therefore ĉ(11) = ĉ(10) + A(5, 3) + r = 28 + 0 + 0 = 28.
[Final state space tree: node 11 (ĉ = 28, i = 3) is the answer node.]

The minimum cost tour is

    1 → 4 → 2 → 5 → 3 → 1

with cost 10 + 6 + 2 + 7 + 3 = 28.
The 0/1 knapsack problem can also be solved by using the branch and bound technique. In this method we calculate a lower bound and an upper bound for each node. (The computations below correspond to the instance n = 4, (p1, p2, p3, p4) = (10, 10, 12, 18), (w1, w2, w3, w4) = (2, 4, 6, 9) and m = 15.)

For the root node, the first three objects fit (weight 2 + 4 + 6 = 12), giving

    Profit = p1 + p2 + p3 = 10 + 10 + 12 = 32, so the upper bound U = 32.

To calculate the lower bound we can also place a fraction of w4 in the knapsack, since fractions are allowed in the calculation of the lower bound:

    Lower bound L = 10 + 10 + 12 + (3/9) × 18 = 32 + 6 = 38.
The knapsack problem is a maximization problem, but the branch and bound technique as described applies to minimization problems. In order to convert the maximization problem into a minimization problem we take negative signs for the upper bound and lower bound.

We choose the path which has the minimum difference between upper bound and lower bound. If the differences are equal, then we choose the path by comparing upper bounds, and we discard the node with the maximum upper bound.
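The bound computations used below can be sketched as follows. fixed holds the x-values already decided on the path from the root; the remaining objects are packed greedily, whole objects only for the bound U and a final fractional object for the bound L, and both are negated to convert the maximization into a minimization as just described. Function and parameter names are assumptions of this sketch.

def bounds(p, w, m, fixed):
    # Returns (U, L), both negated, for the node whose first len(fixed)
    # decisions are given in fixed (1 = placed, 0 = not placed).
    profit = sum(pi for pi, xi in zip(p, fixed) if xi == 1)
    cap = m - sum(wi for wi, xi in zip(w, fixed) if xi == 1)
    u = lb = profit
    for pi, wi in zip(p[len(fixed):], w[len(fixed):]):
        if wi <= cap:                            # whole object fits
            cap -= wi
            u += pi
            lb += pi
        else:
            lb += pi * cap / wi                  # fraction allowed in L only
            break
    return -u, -lb

With p = (10, 10, 12, 18), w = (2, 4, 6, 9) and m = 15, bounds(p, w, 15, []) gives (−32, −38) for the root, and bounds(p, w, 15, [1, 1, 0]) gives (−38, −38), matching node 7 below.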
[Tree: node 1 (U = −32, L = −38) branches on x1 to node 2 (x1 = 1) and node 3 (x1 = 0).]

Now we calculate the upper and lower bounds for nodes 2 and 3.

For node 2, x1 = 1 means we place the first item in the knapsack:

    U = 10 + 10 + 12 = 32, made −32;  L = 10 + 10 + 12 + (3/9) × 18 = 38, made −38.

For node 3, x1 = 0 means we do not place the first item in the knapsack; items 2 and 3 fit (weight 10), leaving capacity 5:

    U = 10 + 12 = 22, made −22;  L = 10 + 12 + (5/9) × 18 = 32, made −32.
Next, we calculate the difference of upper and lower bounds for nodes 2 and 3: for node 2 it is |−32 − (−38)| = 6, and for node 3 it is |−22 − (−32)| = 10. The difference is minimum for node 2, so we branch on x2 at node 2.

[Tree: node 2 branches to node 4 (x2 = 1: U = −32, L = −38) and node 5 (x2 = 0: U = −22, L = −36).]
Now we calculate the bounds and differences for nodes 4 and 5: node 4 has difference 6 and node 5 has difference 14, so node 4 is chosen and we branch on x3.

[Tree: node 4 branches to node 6 (x3 = 1: U = −32, L = −38) and node 7 (x3 = 0: U = −38, L = −38).]
Now we calculate the bounds and differences for nodes 6 and 7: node 6 has difference 6 and node 7 has difference 0, so node 7 is chosen and we branch on x4.

[Tree: node 7 branches to node 8 (x4 = 1: U = −38, L = −38) and node 9 (x4 = 0: U = −20, L = −20).]
Now we calculate the bounds and differences for nodes 8 and 9. Here the differences are the same (both 0), so we compare the upper bounds of nodes 8 and 9 and discard the node with the maximum upper bound: node 9 is discarded, since −20 > −38, and node 8 is chosen.

Tracing the path from the root to node 8 gives x1 = 1, x2 = 1, x3 = 0, x4 = 1.

The solution for the 0/1 knapsack problem is (x1, x2, x3, x4) = (1, 1, 0, 1), with

    Σ pixi = 10×1 + 10×1 + 12×0 + 18×1 = 38.
The portion of the state space tree generated by FIFO branch and bound for the above problem is similar. [Figure not reproduced.]
NP-Hard and NP-Complete problems
Deterministic and non-deterministic algorithms
Deterministic: An algorithm in which every operation is uniquely defined is called a deterministic algorithm.

Non-deterministic: An algorithm in which the operations are not uniquely defined but are limited to a specific set of possibilities for every operation is called a non-deterministic algorithm.
The non-deterministic algorithms use the following functions:
1. Choice: arbitrarily chooses one of the elements from a given set.
2. Failure: indicates an unsuccessful completion.
3. Success: indicates a successful completion.
A non-deterministic algorithm terminates unsuccessfully if and only if there exists no set of choices leading to a success signal. Whenever there is a set of choices that leads to a successful completion, one such set of choices is selected and the algorithm terminates successfully.

If a successful completion is not possible, the time required is taken to be O(1). If a successful completion is possible, the time required is the minimum number of steps needed to reach a successful completion.

The problems that are solved in polynomial time are called tractable problems, and the problems that require super-polynomial time are called non-tractable (intractable) problems. All deterministic polynomial-time algorithms are tractable; the problems solvable only by nondeterministic polynomial-time algorithms are believed to be intractable.
Satisfiability Problem:
The satisfiability problem asks whether a boolean formula is true for some assignment of truth values to its variables. The formula can be constructed using the following literals and operations:

1. A literal is either a variable or the negation of a variable.
2. The literals are connected with the operators ∨ (or), ∧ (and), ⇒ (implication) and ⇔ (equivalence).
3. Parentheses.

Example: (x1 ∨ ¬x2) ∧ (x3 ∨ ¬x1) is satisfiable, e.g. by x1 = true, x2 = false, x3 = true.
Reducibility:

A problem Q1 can be reduced to Q2 if any instance of Q1 can be easily rephrased as an instance of Q2. If the solution to the problem Q2 provides a solution to the problem Q1, then these are said to be reducible problems.

Let L1 and L2 be two problems. L1 reduces to L2 iff there is a way to solve L1 by a deterministic polynomial-time algorithm using a deterministic algorithm that solves L2 in polynomial time; this is denoted L1 ∝ L2.

If we have a polynomial-time algorithm for L2, then we can solve L1 in polynomial time. Two problems L1 and L2 are said to be polynomially equivalent iff L1 ∝ L2 and L2 ∝ L1.
Example: Let P1 be the problem of selection and P2 be the problem of sorting. Let the
input have n numbers. If the numbers are sorted in array A[ ] the ith smallest element of
the input can be obtained as A[i]. Thus P1 reduces to P2 in O(1) time.
Decision Problem: Any problem for which the answer is either yes or no is called a decision problem. An algorithm for a decision problem is called a decision algorithm.

Example: max clique problem, sum of subsets problem.

Optimization Problem: Any problem that involves the identification of an optimal value (maximum or minimum) is called an optimization problem.

Example: knapsack problem, traveling salesperson problem.
In a decision problem, the output statement is implicit and no explicit output statements are permitted. The output of a decision problem is uniquely defined by the input parameters and the algorithm specification.

Many optimization problems can be recast as decision problems with the property that the decision problem can be solved in polynomial time iff the corresponding optimization problem can be solved in polynomial time. Conversely, if the decision problem cannot be solved in polynomial time, then the optimization problem cannot be solved in polynomial time either.
Class P:

P is the class of decision problems that are solvable in O(p(n)) time, where p(n) is a polynomial of the problem's input size n.

Examples:
• searching
• element uniqueness
• graph connectivity
• graph acyclicity
• primality testing
Class NP:

NP (nondeterministic polynomial) is the class of decision problems whose proposed solutions can be verified in polynomial time, i.e. those solvable by a nondeterministic polynomial algorithm.

A nondeterministic polynomial algorithm is an abstract two-stage procedure that:
• generates a random string purported to solve the problem;
• checks whether this solution is correct in polynomial time.

By definition, it solves the problem if it is capable of generating and verifying a solution on one of its tries.

Example: CNF satisfiability. Is a boolean expression in conjunctive normal form (CNF) satisfiable, i.e., are there values of its variables that make it true? This problem is in NP. Nondeterministic algorithm:
• guess a truth assignment;
• substitute the values into the CNF formula to see if it evaluates to true.
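A small sketch separating the two stages: verify is the polynomial-time check of one guessed assignment, while a deterministic simulation must try all 2^n guesses (the clause representation is an assumption of this sketch):

from itertools import product

def satisfiable(clauses, n):
    # CNF formula over variables 1..n; a clause is a list of nonzero
    # ints, k meaning variable k and -k its negation.
    def verify(assign):                          # polynomial-time verification
        return all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses)
    for bits in product([False, True], repeat=n):
        assign = dict(enumerate(bits, start=1))  # guessed truth assignment
        if verify(assign):
            return True
    return False

For example, satisfiable([[1, -2], [2, 3]], 3) returns True.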
NP HARD AND NP COMPLETE CLASSES
Polynomial-time algorithms:
Problems whose solution times are bounded by polynomials of small degree are called polynomial-time problems.
Example: linear search, quick sort, all pairs shortest path, etc.

Non-polynomial-time algorithms:
Problems whose solution times are bounded by non-polynomial functions are called non-polynomial-time problems.
Examples: traveling salesman problem, 0/1 knapsack problem, etc.

No algorithms of polynomial time complexity are known for these problems; their best known computing times are super-polynomial. A problem that can be solved in polynomial time in one reasonable model of computation can also be solved in polynomial time in any other reasonable model.
NP-Hard and NP-Complete Problems:

Let P denote the set of all decision problems solvable by a deterministic algorithm in polynomial time. NP denotes the set of decision problems solvable by nondeterministic algorithms in polynomial time. Since deterministic algorithms are a special case of nondeterministic algorithms, P ⊆ NP. The hard nondeterministic polynomial-time problems can be classified into two classes:

1. NP-Hard, and
2. NP-Complete.

NP-Hard: A problem L is NP-hard iff satisfiability reduces to L (SAT ∝ L); since every NP problem reduces to satisfiability, every NP problem then reduces to L.
Example: halting problem, flow shop scheduling problem.

NP-Complete: A problem L is NP-complete iff L is NP-hard and L ∈ NP. A problem that is NP-complete has the property that it can be solved in polynomial time iff all other NP-complete problems can also be solved in polynomial time (NP = P).
If an NP-hard problem can be solved in polynomial time, then all NP-complete problems can be solved in polynomial time. All NP-complete problems are NP-hard, but some NP-hard problems are not known to be NP-complete.
Normally the decision versions of problems are NP-complete while the optimization versions are NP-hard. However, if L1 is a decision problem and L2 is an optimization problem, it is possible that L1 ∝ L2.

Example: the knapsack decision problem can be reduced to the knapsack optimization problem.
There are some NP-hard problems that are not NP-Complete.
Let P, NP, NP-hard and NP-complete be the sets of all decision problems that are, respectively, solvable in polynomial time by deterministic algorithms, solvable in polynomial time by nondeterministic algorithms, NP-hard, and NP-complete. The relationship between them can be expressed using a Venn diagram: P lies inside NP, and the NP-complete problems form the intersection of NP and the NP-hard problems.
Problem conversion
A decision problem D1 can be converted into a decision problem D2 if there is an
algorithm which takes as input an arbitrary instance I1 of D1 and delivers as output an
instance I2 of D2 such that I2 is a positive instance of D2 if and only if I1 is a positive
instance of D1. If D1 can be converted into D2, and we have an algorithm which solves
D2, then we thereby have an algorithm which solves D1. To solve an instance I of D1,
we first use the conversion algorithm to generate an instance I0 of D2, and then use the
algorithm for solving D2 to determine whether or not I0 is a positive instance of D2. If it
is, then we know that I is a positive instance of D1, and if it is not, then we know that I is
a negative instance of D1. Either way, we have solved D1 for that instance. Moreover, in
this case, we can say that the computational complexity of D1 is at most the sum of the
computational complexities of D2 and the conversion algorithm. If the conversion
algorithm has polynomial complexity, we say that D1 is at most polynomially harder than D2. It means that the amount of computational work we have to do to solve D1, over and above whatever is required to solve D2, is polynomial in the size of the problem instance.
In such a case the conversion algorithm provides us with a feasible way of solving D1,
given that we know how to solve D2.
Given a problem X, to prove it is NP-complete:
1. Prove X is in NP.
2. Select a problem Y that is known to be NP-complete.
3. Define a polynomial-time reduction from Y to X.
4. Prove that, given an instance of Y, Y has a solution iff X has a solution.
Cook's theorem:

Cook's Theorem implies that any NP problem is at most polynomially harder than SAT.
This means that if we find a way of solving SAT in polynomial time, we will then be in a
position to solve any NP problem in polynomial time. This would have huge practical
repercussions, since many frequently encountered problems which are so far believed to
be intractable are in NP. This special property of SAT is called NP-completeness. A
decision problem is NP-complete if it has the property that any NP problem can be
converted into it in polynomial time. SAT was the first NP-complete problem to be
recognized as such (the theory of NP-completeness having come into existence with the
proof of Cook‟s Theorem), but it is by no means the only one. There are now literally
thousands of problems, cropping up in many different areas of computing, which have
been proved to be NP-complete.
In order to prove that an NP problem is NP-complete, all that is needed is to show that
SAT can be converted into it in polynomial time. The reason for this is that the sequential
composition of two polynomial-time algorithms is itself a polynomial-time algorithm,
since the sum of two polynomials is itself a polynomial.
Suppose SAT can be converted to problem D in polynomial time. Now take any NP problem D0. We know we can convert it into SAT in polynomial time, and we know we can convert SAT into D in polynomial time. The result of these two conversions is a polynomial-time conversion of D0 into D. Since D0 was an arbitrary NP problem, it follows that D is NP-complete.