Design and Analysis of Algorithms (DAA) Notes


UNIT – 1

INTRODUCTION
1.1 Notion of Algorithm
1.2 Review of Asymptotic Notation

1.3 Mathematical Analysis of Non-Recursive and Recursive Algorithms


1.4 Brute Force Approaches: Introduction
1.5 Selection Sort and Bubble Sort
1.6 Sequential Search and Brute Force String Matching.

1.1 Notion of Algorithm


Need for studying algorithms:
The study of algorithms is the cornerstone of computer science; it can be recognized as the
core of the discipline. Computer programs would not exist without algorithms. With computers
becoming an essential part of our professional and personal lives, studying algorithms becomes a
necessity, more so for computer science engineers. Another reason for studying algorithms is that
knowing a standard set of important algorithms furthers our analytical skills and helps us develop
new algorithms for required applications.
Algorithm
An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In
addition, all algorithms must satisfy the following criteria:
1. Input. Zero or more quantities are externally supplied.
2. Output. At least one quantity is produced.
3. Definiteness. Each instruction is clear and unambiguous.
4. Finiteness. If we trace out the instructions of an algorithm, then for all cases, the
algorithm terminates after a finite number of steps.
5. Effectiveness. Every instruction must be very basic so that it can be carried out,
in principle, by a person using only pencil and paper. It is not enough that each
operation be definite as in criterion 3; it also must be feasible.

An algorithm is composed of a finite set of steps, each of which may require one or more op-
erations. The possibility of a computer carrying out these operations necessitates that certain
constraints be placed on the type of operations an algorithm can include. The fourth criterion
for algorithms we assume in this book is that they terminate after a finite number of opera-
tions.

Criterion 5 requires that each operation be effective; each step must be such that it can, at least
in principle, be done by a person using pencil and paper in a finite amount of time. Performing
arithmetic on integers is an example of an effective operation, but arithmetic with real numbers is
not, since some values may be expressible only by an infinitely long decimal expansion. Adding
two such numbers would violate the effectiveness property.

• Algorithms that are definite and effective are also called computational procedures.
• The same algorithm can be represented in several ways.
• Several algorithms may solve the same problem.
• Different ideas lead to different speeds.
Example:
Problem: GCD of two numbers m, n
Input specification: Two inputs, nonnegative, not both zero
Euclid's algorithm
gcd(m, n) = gcd(n, m mod n)
applied repeatedly until m mod n = 0, since gcd(m, 0) = m
Another way of representing the same algorithm:
Euclid's algorithm
Step 1: If n = 0, return the value of m and stop; else proceed to Step 2
Step 2: Divide m by n and assign the value of the remainder to r
Step 3: Assign the value of n to m and of r to n. Go to Step 1.
Another algorithm to solve the same problem (consecutive integer checking):
Step 1: Assign the value of min(m, n) to t
Step 2: Divide m by t. If the remainder is 0, go to Step 3; else go to Step 4
Step 3: Divide n by t. If the remainder is 0, return the value of t as the answer and
stop; otherwise proceed to Step 4
Step 4: Decrease the value of t by 1. Go to Step 2
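Both approaches can be sketched in runnable Python (an illustrative sketch; the function names are ours, and the consecutive-integer version assumes both inputs are positive):

def gcd_euclid(m, n):
    # Euclid's algorithm: gcd(m, n) = gcd(n, m mod n), repeated until the remainder is 0
    while n != 0:
        m, n = n, m % n
    return m

def gcd_consecutive_check(m, n):
    # Consecutive integer checking: try t = min(m, n), t - 1, ... until t divides both
    t = min(m, n)
    while m % t != 0 or n % t != 0:
        t -= 1
    return t

print(gcd_euclid(60, 24))            # 12
print(gcd_consecutive_check(60, 24)) # 12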

1.2 Review of Asymptotic Notation


Fundamentals of the analysis of algorithm efficiency
• Analysis of algorithms means to investigate an algorithm's efficiency with
respect to resources:
• running time ( time efficiency )
• memory space ( space efficiency )
Time being more critical than space, we concentrate on Time efficiency of algorithms.
The theory developed, holds good for space complexity also.
Experimental Studies: requires writing a program implementing the algorithm and
running the program with inputs of varying size and composition. It uses a function, like
the built-in clock() function, to get an accurate measure of the actual running time, then
analysis is done by plotting the results.
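As a minimal sketch of such an experiment in Python (here time.perf_counter stands in for the clock() function mentioned above, and sorted is just a stand-in algorithm):

import time
import random

def running_time(algorithm, n):
    # Measure the actual running time of `algorithm` on one random input of size n
    data = [random.randint(0, 10**6) for _ in range(n)]
    start = time.perf_counter()
    algorithm(data)
    return time.perf_counter() - start

for n in (1000, 2000, 4000, 8000):
    print(n, running_time(sorted, n))   # plot these points to study the growth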
Limitations of Experiments
• It is necessary to implement the algorithm, which may be difficult
• Results may not be indicative of the running time on other inputs not included in
the experiment.
• In order to compare two algorithms, the same hardware and software
environments must be used
Theoretical Analysis: It uses a high-level description of the algorithm instead of an
implementation. Analysis characterizes running time as a function of the input size, n,
and takes into account all possible inputs. This allows us to evaluate the speed of an
algorithm independent of the hardware/software environment. Therefore theoretical
analysis can be used for analyzing any algorithm

Framework for Analysis


We use a hypothetical model with following assumptions
• Total time taken by the algorithm is given as a function on its input size
• Logical units are identified as one step
• Every step require ONE unit of time
• Total time taken = Total Num. of steps executed
Input size: Time required by an algorithm is proportional to the size of the problem
instance. For e.g., more time is required to sort 20 elements than to sort 10 elements.
Units for Measuring Running Time: Count the number of times an algorithm's basic operation is
executed. (Basic operation: the most important operation of the algorithm, the operation contributing
the most to the total running time.) The basic operation is usually the most time-
consuming operation in the algorithm's innermost loop.
Consider the following example:

ALGORITHM sum_of_numbers ( A[0… n-1] )


// Functionality : Finds the sum
// Input : Array of n numbers
// Output : Sum of n numbers
i ← 0
sum ← 0
while i < n
    sum ← sum + A[i]
    i ← i + 1
return sum

Total number of steps for basic operation execution, C (n) = n


NOTE:
Constant of fastest growing term is insignificant: Complexity theory is an Approximation
theory. We are not interested in exact time required by an algorithm to solve the problem. Rather we
are interested in order of growth. i.e How much faster will algorithm run on computer that is twice as
fast? How much longer does it take to solve problem of double input size? We can crudely estimate
running time by
T(n) ≈ C_op × C(n)
Where,
T(n): running time as a function of n,
C_op: execution time of a single basic operation,
C(n): number of basic operations as a function of n.
Order of Growth: For order of growth, consider only the leading term of a formula and ignore the
constant coefficient. The following is the table of values of several functions important for analysis of
algorithms.

Worst-case, Best-case, Average case efficiencies


Algorithm efficiency depends on the input size n. And for some algorithms efficiency depends
on type of input. We have best, worst & average case efficiencies.

• Worst-case efficiency: Efficiency (number of times the basic operation will be executed) for
the worst case input of size n. i.e. The algorithm runs the longest among all possible inputs of
size n.

• Best-case efficiency: Efficiency (number of times the basic operation will be executed) for the
best case input of size n. i.e. The algorithm runs the fastest among all possible inputs of size n.

• Average-case efficiency: Average time taken (number of times the basic operation will be
executed) to solve all the possible instances (random) of the input. NOTE: NOT the average of
worst and best case
Asymptotic Notations
Asymptotic notation is a way of comparing functions that ignores constant factors and small input
sizes. Three notations used to compare orders of growth of an algorithm's basic operation count are:
O, Ω, Θ notations
Big Oh- O notation
Definition:
A function t(n) is said to be in O(g(n)), denoted t(n)=O(g(n)), if t(n) is bounded above by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≤ cg(n) for all n ≥ n0
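For example, 100n + 5 Є O(n²): since 100n + 5 ≤ 100n + n = 101n ≤ 101n² for all n ≥ 5, the definition is satisfied with c = 101 and n0 = 5.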

Big Omega- Ω notation


Definition:
A function t(n) is said to be in Ω(g(n)), denoted t(n) = Ω(g(n)), if t(n) is bounded below by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that

t(n) ≥ cg(n) for all n ≥ n0


Big Theta- Θ notation
Definition:
A function t(n) is said to be in Θ(g(n)), denoted t(n) = Θ(g(n)), if t(n) is bounded both above and
below by some constant multiples of g(n) for all large n, i.e., if there exist some positive constants
c1 and c2 and some nonnegative integer n0 such that

c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0


Basic Efficiency classes
The time efficiencies of a large number of algorithms fall into only a few classes.

High time efficiency (fast)
    1           constant
    log n       logarithmic
    n           linear
    n log n     n-log-n
    n²          quadratic
    n³          cubic
    2^n         exponential
    n!          factorial
Low time efficiency (slow)

1.3 Mathematical Analysis of Non-Recursive and Recursive Algorithms


Mathematical analysis (Time Efficiency) of Non-recursive Algorithms
General plan for analyzing efficiency of non-recursive algorithms:
1. Decide on a parameter n indicating input size
2. Identify the algorithm's basic operation
3. Check whether the number of times the basic operation is executed depends only on the input
size n. If it also depends on the type of input, investigate worst, average, and best case
efficiency separately.
4. Set up a summation for C(n) reflecting the number of times the algorithm's basic operation is
executed.
5. Simplify the summation using standard formulas
Example: Finding the largest element in a given array
ALGORITHM MaxElement(A[0..n-1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n-1] of real numbers
//Output: The value of the largest element in A
currentMax ← A[0]
for i ← 1 to n - 1 do
if A[i] > currentMax
currentMax ← A[i]
return currentMax
Analysis:
1. Input size: number of elements = n (size of the array)
2. Basic operation:
a) Comparison
b) Assignment
3. NO best, worst, average cases.
4.
Let C(n) denote the number of comparisons. The algorithm makes one comparison on
each execution of the loop, which is repeated for each value of the loop's variable
i within the bounds 1 and n - 1. Therefore,
C(n) = Σ (i = 1 to n-1) 1 = n - 1 Є Θ(n)

Example: Element uniqueness problem


Algorithm UniqueElements (A[0..n-1])
//Checks whether all the elements in a given array are distinct
//Input: An array A[0..n-1]
//Output: Returns true if all the elements in A are distinct and false otherwise
for i ← 0 to n - 2 do
    for j ← i + 1 to n - 1 do
        if A[i] == A[j]
            return false
return true
Analysis
1. Input size: number of elements = n (size of the array)
2. Basic operation: Comparison
3. Best, worst, average cases EXISTS.
Worst case input is an array giving largest comparisons.
• Array with no equal elements
• Array with last two elements are the only pair of equal elements
4. Let C(n) denote the number of comparisons in the worst case. The algorithm makes one
comparison for each repetition of the innermost loop, i.e., for each value of the
loop's variable j between its limits i + 1 and n - 1; and this is repeated for each value of the
outer loop, i.e., for each value of the loop's variable i between its limits 0 and n - 2. Therefore,
C(n) = Σ (i = 0 to n-2) Σ (j = i+1 to n-1) 1 = n(n-1)/2 Є Θ(n²)
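The worst-case count can be verified with a small Python sketch that instruments the basic operation (illustrative code; the counter is ours):

def unique_elements_comparisons(a):
    # Returns (all_distinct, number_of_key_comparisons)
    n = len(a)
    comparisons = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1
            if a[i] == a[j]:
                return False, comparisons
    return True, comparisons

# Worst case: all elements distinct -> n(n-1)/2 comparisons
print(unique_elements_comparisons([1, 2, 3, 4, 5]))   # (True, 10)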
Mathematical analysis (Time Efficiency) of recursive Algorithms
General plan for analyzing efficiency of recursive algorithms:
1. Decide on a parameter n indicating input size
2. Identify the algorithm's basic operation
3. Check whether the number of times the basic operation is executed depends only
on the input size n. If it also depends on the type of input, investigate worst,
average, and best case efficiency separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number
of times the algorithm's basic operation is executed.
5. Solve the recurrence.

Example: Factorial function


ALGORITHM Factorial (n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n == 0
return 1
else
return Factorial (n – 1) * n
Analysis:
1. Input size: given number = n
2. Basic operation: multiplication
3. NO best, worst, average cases.
4. Let M (n) denotes number of multiplications.
M (n) = M (n – 1) + 1 for n > 0
M (0) = 0 initial condition
Where: M (n – 1) : to compute Factorial (n – 1)
1 :to multiply Factorial (n – 1) by n
5. Solve the recurrence using the backward substitution method:
M(n) = M(n - 1) + 1
     = [ M(n - 2) + 1 ] + 1
     = M(n - 2) + 2
     = [ M(n - 3) + 1 ] + 2
     = M(n - 3) + 3
...
In the ith substitution, we have
     = M(n - i) + i
When i = n, we have
     = M(n - n) + n = M(0) + n
Since M(0) = 0,
M(n) = n

M(n) = Θ(n)
Example: Find the number of binary digits in the binary representation of a positive
decimal integer
ALGORITHM BinRec (n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
if n == 1
    return 1
else
    return BinRec(⌊n/2⌋) + 1

Analysis:
1. Input size: given number = n
2. Basic operation: addition
3. NO best, worst, average cases.
4. Let A (n) denotes number of additions.
A(n) = A(⌊n/2⌋) + 1 for n > 1
A(1) = 0 (initial condition)
Where: A(⌊n/2⌋) : to compute BinRec(⌊n/2⌋)
1 : to increase the returned value by 1
5. Solve the recurrence:
A(n) = A(⌊n/2⌋) + 1 for n > 1
Assume n = 2^k (smoothness rule)
A(2^k) = A(2^(k-1)) + 1 for k > 0; A(2^0) = 0
Solving using the backward substitution method:
A(2^k) = A(2^(k-1)) + 1
       = [ A(2^(k-2)) + 1 ] + 1
       = A(2^(k-2)) + 2
       = [ A(2^(k-3)) + 1 ] + 2
       = A(2^(k-3)) + 3
...
In the ith substitution, we have
       = A(2^(k-i)) + i
When i = k, we have
       = A(2^(k-k)) + k = A(2^0) + k
Since A(2^0) = 0,
A(2^k) = k
Since n = 2^k, hence k = log2 n
A(n) = log2 n
A(n) = Θ(log n)

1.4 Brute Force Approaches:


Introduction
Brute force is a straightforward approach to problem solving, usually directly based on the
problem's statement and definitions of the concepts involved. Though rarely a source of clever or
efficient algorithms, the brute-force approach should not be overlooked as an important algorithm
design strategy. Unlike some of the other strategies, brute force is applicable to a very wide variety of
problems. For some important problems (e.g., sorting, searching, string matching), the brute-force
approach yields reasonable algorithms of at least some practical value with no limitation on instance
size. Even if too inefficient in general, a brute-force algorithm can still be useful for solving small-size
instances of a problem. A brute-force algorithm can also serve an important theoretical or educational
purpose.
1.5 Selection Sort and Bubble Sort
Problem: Given a list of n orderable items (e.g., numbers, characters from some alphabet,
character strings), rearrange them in nondecreasing order.
Selection Sort
ALGORITHM SelectionSort(A [0..n - 1])
//The algorithm sorts a given array by selection sort
//Input: An array A[0..n - 1] of orderable elements
//Output: Array A[0..n - 1] sorted in ascending order
for i ← 0 to n - 2 do
    min ← i
    for j ← i + 1 to n - 1 do
        if A[j] < A[min]
            min ← j
    swap A[i] and A[min]

Performance Analysis of the selection sort algorithm:


The input's size is given by the number of elements n.
The algorithm's basic operation is the key comparison A[j] < A[min]. The number of times it is
executed depends only on the array's size and is given by
C(n) = Σ (i = 0 to n-2) Σ (j = i+1 to n-1) 1 = n(n-1)/2

Thus, selection sort is an O(n²) algorithm on all inputs. The number of key swaps is only O(n),
or, more precisely, n - 1 (one for each repetition of the i loop). This property distinguishes
selection sort positively from many other sorting algorithms.
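A runnable Python version of the same selection sort, for experimentation (a sketch, not part of the original notes):

def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        # find the index of the smallest element in a[i..n-1]
        m = i
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]   # one swap per pass: n - 1 swaps in total
    return a

print(selection_sort([89, 45, 68, 90, 29, 34, 17]))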
Bubble Sort
Compare adjacent elements of the list and exchange them if they are out of order. Then we
repeat the process. By doing it repeatedly, we end up 'bubbling up' the largest element to the last
position on the list.
ALGORITHM
BubbleSort(A [0..n - 1])
//The algorithm sorts array A[0..n - 1] by bubble sort
//Input: An array A[0..n - 1] of orderable elements
//Output: Array A[0..n - 1] sorted in ascending order
for i ← 0 to n - 2 do
    for j ← 0 to n - 2 - i do
        if A[j + 1] < A[j]
            swap A[j] and A[j + 1]
Example

The first 2 passes of bubble sort on the list 89, 45, 68, 90, 29, 34, 17. A new line is shown after
a swap of two elements is done. The elements to the right of the vertical bar are in their final positions
and are not considered in subsequent iterations of the algorithm

Bubble Sort analysis


Clearly, the outer loop runs n - 1 times. The only complexity in this analysis is the inner loop. If
we think about a single time the inner loop runs, we can get a simple bound by noting that it can
never loop more than n times. Since the outer loop makes the inner loop complete up to n times,
the comparison can't happen more than O(n²) times.


The number of key comparisons for the bubble sort version given above is the same for all
arrays of size n.

The number of key swaps depends on the input. For the worst case of decreasing arrays, it is
the same as the number of key comparisons.
Observation: if a pass through the list makes no exchanges, the list has been sorted and we can
stop the algorithm. Though the new version runs faster on some inputs, it is still in O(n²) in the
worst and average cases. Bubble sort is not very good for big sets of input; however, bubble sort
is very simple to code, as the sketch below shows.
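A Python sketch of that improved version, which stops as soon as a pass makes no exchanges:

def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j + 1] < a[j]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:        # no exchanges in this pass: the list is sorted
            break
    return a

print(bubble_sort([89, 45, 68, 90, 29, 34, 17]))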
General Lesson From Brute Force Approach
A first application of the brute-force approach often results in an algorithm that can be
improved with a modest amount of effort.
1.6 Sequential Search and Brute Force String Matching
Sequential Search
Sequential search compares successive elements of a given list with a given search key until
either a match is encountered (successful search) or the list is exhausted without finding a match
(unsuccessful search). The version below uses the key as a sentinel.
ALGORITHM SequentialSearch2(A [0..n], K)
//The algorithm implements sequential search with a search key as a sentinel
//Input: An array A of n elements and a search key K
//Output: The position of the first element in A[0..n - 1] whose value is
// equal to K or -1 if no such element is found
A[n] ← K
i ← 0
while A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1
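The same sentinel idea in Python (a sketch; the list is extended in place to hold the sentinel and restored afterwards):

def sequential_search_sentinel(a, key):
    n = len(a)
    a.append(key)          # the sentinel guarantees the loop terminates
    i = 0
    while a[i] != key:
        i += 1
    a.pop()                # remove the sentinel
    return i if i < n else -1

print(sequential_search_sentinel([9, 4, 7, 2], 7))   # 2
print(sequential_search_sentinel([9, 4, 7, 2], 5))   # -1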

Brute-Force String Matching


Given a string of n characters called the text and a string of m characters (m ≤ n) called the
pattern, find a substring of the text that matches the pattern. To put it more precisely, we want to
find i, the index of the leftmost character of the first matching substring in the text, such that
t_i = p_0, . . . , t_(i+j) = p_j , . . . , t_(i+m-1) = p_(m-1):

t_0 . . . t_i . . . t_(i+j) . . . t_(i+m-1) . . . t_(n-1)    text T
          p_0 . . . p_j . . . p_(m-1)                        pattern P
1. Pattern: 001011
Text: 10010101101001100101111010
2. Pattern: happy
Text: It is never too late to have a happy childho

The algorithm shifts the pattern almost always after a single character comparison. In the
worst case, the algorithm may have to make all m comparisons before shifting the pattern, and this
can happen for each of the n - m + 1 tries. Thus, in the worst case, the algorithm is in Θ(nm).
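The notes give no pseudocode for the matcher, so the following Python sketch follows the description above:

def brute_force_string_match(text, pattern):
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):        # n - m + 1 possible shifts of the pattern
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                    # all m characters matched at shift i
            return i
    return -1

print(brute_force_string_match("It is never too late to have a happy childhood", "happy"))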
UNIT - 2
DIVIDE & CONQUER

2.1 Divide and Conquer

2.2 General Method

2.3 Binary Search

2.4 Merge Sort

2.5 Quick Sort and its performance

2.1 Divide and Conquer


Definition:
Divide & conquer is a general algorithm design strategy with a general plan as follows:
1. DIVIDE:
A problem‘s instance is divided into several smaller instances of the same
problem, ideally of about the same size.
2. RECUR:
Solve the sub-problem recursively.
3. CONQUER:
If necessary, the solutions obtained for the smaller instances are combined to get a
solution to the original instance.

Diagram 1 shows the general divide & conquer plan: a problem of size n is divided into
two sub-problems of size n/2; a solution is obtained for each sub-problem, and the
sub-solutions are combined into a solution to the original problem.

NOTE:
The base case for the recursion is a sub-problem of constant size.

Advantages of Divide & Conquer technique:


• For solving conceptually difficult problems like Tower of Hanoi, divide &
conquer is a powerful tool
• Results in efficient algorithms
• Divide & conquer algorithms are adapted for execution in multi-processor
machines
• Results in algorithms that use memory cache efficiently.

Limitations of divide & conquer technique:


• Recursion is slow
• For very simple problems, it may be more complicated than an iterative
approach (for example, adding n numbers).

2.2 General Method


General divide & conquer recurrence:
An instance of size n can be divided into b instances of size n/b, with a of them
needing to be solved. [ a ≥ 1, b > 1 ].
Assume size n is a power of b. The recurrence for the running time T(n) is as follows:

T(n) = aT(n/b) + f(n)


where:
f(n) – a function that accounts for the time spent on dividing the problem into
smaller ones and on combining their solutions

Therefore, the order of growth of T(n) depends on the values of the constants a & b and
the order of growth of the function f(n).

Master theorem
Theorem: If f(n) Є Θ(n^d) with d ≥ 0 in the recurrence equation
T(n) = aT(n/b) + f(n),
then

        Θ(n^d)          if a < b^d
T(n) =  Θ(n^d log n)    if a = b^d
        Θ(n^(log_b a))  if a > b^d

Example:

Let T(n) = 2T(n/2) + 1, solve using master theorem.


Solution:
Here: a = 2
b = 2
f(n) = Θ(1)
d = 0
Therefore:
a > b^d, i.e., 2 > 2^0
Case 3 of the master theorem holds. Therefore:
T(n) Є Θ(n^(log_b a))
     Є Θ(n^(log_2 2))
     Є Θ(n)

2.3 Binary Search
Description:
Binary search is a dichotomic divide and conquer search algorithm. It inspects the middle
element of the sorted list. If it is equal to the sought value, then the position has been found.
Otherwise, if the key is less than the middle element, do a binary search on the first half,
else on the second half.
Algorithm:
Algorithm can be implemented as recursive or non-recursive algorithm.

ALGORITHM BinSrch ( A[0 … n-1], key)


//implements non-recursive binary search
//i/p: Array A in ascending order, key k
//o/p: Returns position of the key matched else -1

l ← 0
r ← n - 1

while l ≤ r do
    m ← ⌊( l + r ) / 2⌋
    if key == A[m]
        return m
    else
        if key < A[m]
            r ← m - 1
        else
            l ← m + 1
return -1
Analysis:
• Input size: Array size, n
• Basic operation: key comparison
• Depends on input:
Best – key matches the mid element
Worst – key not found, or the search continues until the list reduces to a single element
• Let C(n) denotes the number of times basic operation is executed. Then
Cworst(n) = Worst case efficiency. Since after each comparison the algorithm
divides the problem into half the size, we have
Cworst(n) = Cworst(n/2) + 1 for n > 1
C(1) = 1
• Solving the recurrence equation using master theorem, to give the number of
times the search key is compared with an element in the array, we have:
C(n) = C(n/2) + 1
a = 1

b = 2

f(n) = n^0; d = 0

Case 2 holds:
C(n) = Θ(n^d log n)
     = Θ(n^0 log n)
     = Θ(log n)
Applications of binary search:
• Number guessing game
• Word lists/search dictionary etc
Advantages:
• Efficient on very big list
• Can be implemented iteratively/recursively
Limitations:
• Interacts poorly with the memory hierarchy
• Requires given list to be sorted
• Due to random access of list element, needs arrays instead of linked list.
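A runnable Python version of the non-recursive binary search above (an illustrative sketch):

def binary_search(a, key):
    # a must be sorted in ascending order
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2
        if key == a[m]:
            return m
        elif key < a[m]:
            r = m - 1
        else:
            l = m + 1
    return -1

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))   # 5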
2.4 Merge Sort
Definition:
Merge sort is a sort algorithm that splits the items to be sorted into two groups,
recursively sorts each group, and merges them into a final sorted sequence.
Features:
• Is a comparison based algorithm
• Is a stable algorithm
• Is a perfect example of divide & conquer algorithm design strategy
• It was invented by John Von Neumann
Algorithm:

ALGORITHM Mergesort ( A[0… n-1] )


//sorts array A by recursive mergesort
//i/p: array A
//o/p: sorted array A in ascending order

if n > 1
    copy A[0… (n/2 - 1)] to B[0… (n/2 - 1)]
    copy A[n/2… (n - 1)] to C[0… (n/2 - 1)]
    Mergesort ( B[0… (n/2 - 1)] )
    Mergesort ( C[0… (n/2 - 1)] )
    Merge ( B, C, A )
ALGORITHM Merge ( B[0… p-1], C[0… q-1], A[0… p+q-1] )
//merges two sorted arrays into one sorted array
//i/p: arrays B, C, both sorted
//o/p: Sorted array A of elements from B & C

i → 0
j → 0
k → 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] → B[i]
        i → i + 1
    else
        A[k] → C[j]
        j → j + 1
    k → k + 1
if i == p
copy C [ j… q-1 ] to A [ k… (p+q-1) ]
else
copy B [ i… p-1 ] to A [ k… (p+q-1) ]
Example:
Apply merge sort for the following list of elements: 6, 3, 7, 8, 2, 4, 5, 1

        6 3 7 8 2 4 5 1
split:  6 3 7 8 | 2 4 5 1
split:  6 3 | 7 8 | 2 4 | 5 1
split:  6 | 3 | 7 | 8 | 2 | 4 | 5 | 1
merge:  3 6 | 7 8 | 2 4 | 1 5
merge:  3 6 7 8 | 1 2 4 5
merge:  1 2 3 4 5 6 7 8
Analysis:
• Input size: Array size, n
• Basic operation: key comparison
• Best, worst, average case exists:
Worst case: During key comparison, neither of the two arrays becomes empty
before the other one contains just one element.
• Let C(n) denote the number of times the basic operation is executed. Then
C(n) = 2C(n/2) + Cmerge(n) for n > 1
C(1) = 0
where Cmerge(n) is the number of key comparisons made during the merging stage.
In the worst case:
Cmerge(n) = n - 1 for n > 1
Cmerge(1) = 0
• Solving the recurrence equation using the master theorem:
C(n) = 2C(n/2) + n - 1 for n > 1
C(1) = 0
Here a = 2
b = 2
f(n) = n; d = 1
Therefore 2 = 2^1, case 2 holds:
C(n) = Θ(n^d log n)
     = Θ(n^1 log n)
     = Θ(n log n)
Advantages:
• Number of comparisons performed is nearly optimal.
• Mergesort will never degrade to O(n2)
• It can be applied to files of any size
Limitations:
• Uses O(n) additional memory.
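A compact Python sketch of the same scheme (it returns a new sorted list rather than sorting in place):

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    b = merge_sort(a[:mid])       # sort the left half
    c = merge_sort(a[mid:])       # sort the right half
    # merge the two sorted halves
    result, i, j = [], 0, 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            result.append(b[i]); i += 1
        else:
            result.append(c[j]); j += 1
    result.extend(b[i:])          # copy whatever remains
    result.extend(c[j:])
    return result

print(merge_sort([6, 3, 7, 8, 2, 4, 5, 1]))   # [1, 2, 3, 4, 5, 6, 7, 8]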
2.5 Quick Sort and its performance
Definition:
Quick sort is a well-known sorting algorithm, based on the divide & conquer approach. The
steps are:
1. Pick an element called pivot from the list
2. Reorder the list so that all elements which are less than the pivot come before the
pivot and all elements greater than pivot come after it. After this partitioning, the
pivot is in its final position. This is called the partition operation
3. Recursively sort the sub-list of lesser elements and sub-list of greater elements.
Features:
• Developed by C.A.R. Hoare
• Efficient algorithm
• NOT stable sort
• Significantly faster in practice, than other algorithms
ALGORITHM Quicksort (A[ l …r ])
//sorts by quick sort
//i/p: A sub-array A[l..r] of A[0..n-1],defined by its left and right indices l and r
//o/p: The sub-array A[l..r], sorted in ascending order
if l < r
    s ← Partition (A[l..r]) // s is a split position
    Quicksort(A[l..s - 1])
    Quicksort(A[s + 1..r])
ALGORITHM Partition (A[l..r])
//Partitions a sub-array by using its first element as a pivot
//i/p: A sub-array A[l..r] of A[0..n-1], defined by its left and right indices l and r (l < r)
//o/p: A partition of A[l..r], with the split position returned as this function's value
p ← A[l]
i ← l
j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p //left-right scan
    repeat j ← j - 1 until A[j] ≤ p //right-left scan
    if i < j //need to continue with the scan
        swap(A[i], A[j])
until i ≥ j //no need to scan
swap(A[l], A[j])
return j
Example: Sort by quick sort the following list: 5, 3, 1, 9, 8, 2, 4, 7, show recursion tree.
Analysis:
• Input size: Array size, n
• Basic operation: key comparison
• Best, worst, average case exists:
Best case: when partition happens in the middle of the array each time.
Worst case: When input is already sorted. During key comparison, one half is
empty, while remaining n-1 elements are on the other partition.
• Let C(n) denotes the number of times basic operation is executed in worst case:
Then
C(n) = C(n-1) + (n+1) for n > 1 (2 sub-problems of size 0 and n-1 respectively)
C(1) = 1

Best case:
C(n) = 2C(n/2) + Θ(n) (2 sub-problems of size n/2 each)

• Solving the recurrence equation using backward substitution/ master theorem,


we have:
C(n) = C(n-1) + (n+1) for n > 1; C(1) = 1
C(n) = Θ(n²)

C(n) = 2C(n/2) + Θ(n)


C(n) = Θ(n log n)

NOTE:
The quick sort efficiency in average case is Θ( n log n) on random input.
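A Python sketch of quick sort with the first element as the pivot, mirroring the Partition pseudocode above:

def partition(a, l, r):
    p = a[l]                          # pivot: first element of the sub-array
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:     # left-to-right scan
            i += 1
        j -= 1
        while a[j] > p:               # right-to-left scan
            j -= 1
        if i >= j:                    # scans have crossed: stop
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]           # put the pivot in its final position
    return j

def quicksort(a, l, r):
    if l < r:
        s = partition(a, l, r)        # s is the split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

lst = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(lst, 0, len(lst) - 1)
print(lst)   # [1, 2, 3, 4, 5, 7, 8, 9]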
UNIT - 3
THE GREEDY METHOD
3.1 The General Method
3.2 Knapsack Problem
3.3 Job Sequencing with Deadlines
3.4 Minimum-Cost Spanning Trees
3.5 Prim's Algorithm
3.6 Kruskal’s Algorithm
3.7 Single Source Shortest Paths.

3.1 The General Method


Definition:
Greedy technique is a general algorithm design strategy, built on the following elements:
• configurations: different choices or values to find
• objective function: a function over configurations, to be either maximized or minimized

The method:
• Applicable to optimization problems ONLY
• Constructs a solution through a sequence of steps
• Each step expands a partially constructed solution so far, until a complete solution
to the problem is reached.
On each step, the choice made must be
• Feasible: it has to satisfy the problem‘s constraints
• Locally optimal: it has to be the best local choice among all feasible choices
available on that step
• Irrevocable: Once made, it cannot be changed on subsequent steps of the
algorithm

NOTE:
• Greedy method works best when applied to problems with the greedy-choice
property
• A globally-optimal solution can always be found by a series of local
improvements from a starting configuration.

Greedy method vs. Dynamic programming method:


• LIKE dynamic programming, greedy method solves optimization problems.
• LIKE dynamic programming, greedy method problems exhibit optimal
substructure
• UNLIKE dynamic programming, greedy method problems exhibit the greedy
choice property, which avoids backtracking.

Applications of the Greedy Strategy:


• Optimal solutions:
Change making
Minimum Spanning Tree (MST)
Single-source shortest paths
Huffman codes
• Approximations:
Traveling Salesman Problem (TSP)
Fractional Knapsack problem

3.2 Knapsack problem


• One wants to pack n items in a piece of luggage
• The ith item is worth vi dollars and weighs wi pounds
• Maximize the value, but the total weight cannot exceed W pounds
• vi, wi, W are integers
• 0-1 knapsack: each item is taken or not taken
• Fractional knapsack: fractions of items can be taken
• Both exhibit the optimal-substructure property
• 0-1: If item j is removed from an optimal packing, the remaining packing is an optimal
packing with weight at most W-wj
• Fractional: If w pounds of item j is removed from an optimal packing, the remaining
packing is an optimal packing with weight at most W-w that can be taken from other n-1
items plus wj – w of item j
Greedy Algorithm for Fractional Knapsack problem
• Fractional knapsack can be solvable by the greedy strategy
• Compute the value per pound vi/wi for each item
• Obeying a greedy strategy, take as much as possible of the item with the greatest value per
pound.
• If the supply of that item is exhausted and there is still more room, take as much as
possible of the item with the next value per pound, and so forth until there is no more
room
• O(n lg n) (we need to sort the items by value per pound)
0-1 knapsack is harder
• The 0-1 knapsack problem cannot be solved by the greedy strategy
• The greedy choice may be unable to fill the knapsack to capacity, and the empty space lowers
the effective value per pound of the packing
• We must compare the solution to the sub-problem in which the item is included with the
solution to the sub-problem in which the item is excluded before we can make the choice
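A Python sketch of the greedy strategy for the fractional knapsack described above (the item values and weights in the usage example are ours):

def fractional_knapsack(items, W):
    # items: list of (value, weight) pairs; W: knapsack capacity
    # Greedy rule: take items in decreasing order of value per pound
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for v, w in items:
        if W <= 0:
            break
        take = min(w, W)              # the whole item if it fits, else a fraction
        total += v * (take / w)
        W -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0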

3.3 Job sequencing with deadlines


The problem is stated as below.
• There are n jobs to be processed on a machine.
• Each job i has a deadline di≥ 0 and profit pi≥0 .
• Pi is earned iff the job is completed by its deadline.
• The job is completed if it is processed on a machine for unit time.
• Only one machine is available for processing jobs.
• Only one job is processed at a time on the machine
• A feasible solution is a subset of jobs J such that each job is completed by its deadline.
• An optimal solution is a feasible solution with maximum profit value.
Example : Let n = 4, (p1,p2,p3,p4) = (100,10,15,27), (d1,d2,d3,d4) = (2,1,2,1)
• Consider the jobs in the non increasing order of profits subject to the constraint that the resulting
job sequence J is a feasible solution.
• In the example considered before, the non-increasing profit vector is
(p1 p4 p3 p2) = (100 27 15 10), with deadlines (d1 d4 d3 d2) = (2 1 2 1)
J = { 1} is a feasible one
J = { 1, 4} is a feasible one with processing sequence ( 4,1)
J = { 1, 3, 4} is not feasible
J = { 1, 2, 4} is not feasible
J = { 1, 4} is optimal

Theorem: Let J be a set of K jobs and


Σ = (i1, i2, …, ik) be a permutation of jobs in J such that di1 ≤ di2 ≤ … ≤ dik.
• J is a feasible solution iff the jobs in J can be processed in the order Σ without violating any
deadline.
Proof:
• By definition of the feasible solution if the jobs in J can be processed in the order without
violating any deadline then J is a feasible solution.
• So, we have only to prove that if J is a feasible one, then Σ represents a possible order in which
the jobs may be processed.
• Suppose J is a feasible solution. Then there exists Σ1 = (r1, r2, …, rk) such that
drj ≥ j, 1 ≤ j ≤ k,
i.e. dr1 ≥ 1, dr2 ≥ 2, …, drk ≥ k,
each job requiring a unit of time.

• Σ = (i1,i2,…,ik) and Σ1 = (r1,r2,…,rk)


• Assume Σ1 ≠ Σ. Then let a be the least index in which Σ1 and Σ differ, i.e., a is such that ra ≠ ia.
• Let rb = ia; then b > a (because for all indices j less than a, rj = ij).
• In Σ1, interchange ra and rb.
• Σ = (i1, i2, …, ia, …, ik) [rb occurs before ra in i1, i2, …, ik]
• Σ1 = (r1, r2, …, ra, …, rb, …, rk)
• i1 = r1, i2 = r2, …, i(a-1) = r(a-1), ia ≠ ra but ia = rb
• We know di1 ≤ di2 ≤ … ≤ dia ≤ dib ≤ … ≤ dik.
• Since ia = rb, we have drb ≤ dra.
• In the feasible solution, dra ≥ a and drb ≥ b.
• So if we interchange ra and rb, the resulting permutation Σ11 = (s1, …, sk) represents an order
in which the least index where Σ11 and Σ differ is incremented by one.

• Also the jobs in Σ11 may be processed without violating a deadline.


• Continuing in this way, Σ1 can be transformed into Σ without violating any deadline.
• Hence the theorem is proved
GREEDY ALGORITHM FOR JOB SEQUENCING WITH DEADLINES

Procedure greedy-job (D, J, n)
// J is the set of jobs that can be completed by their deadlines
J ← {1}
for i ← 2 to n do
    if all jobs in J ∪ {i} can be completed by their deadlines
        then J ← J ∪ {i}
    end if
repeat
end greedy-job

// J may be represented by a one-dimensional array J(1:K); the deadlines are kept ordered so
// that D(J(1)) ≤ D(J(2)) ≤ … ≤ D(J(K)). To test whether J ∪ {i} is feasible, we insert i into J
// and verify that D(J(r)) ≤ r for 1 ≤ r ≤ k + 1.
Procedure JS(D, J, n, k)
// D(i) ≥ 1, 1 ≤ i ≤ n are the deadlines //
// the jobs are ordered such that p1 ≥ p2 ≥ … ≥ pn //
// in the solution, D(J(i)) ≤ D(J(i+1)), 1 ≤ i < k //
integer D(0:n), J(0:n), i, k, n, r
D(0) ← 0; J(0) ← 0 // J(0) is a fictitious job with D(0) = 0 //
k ← 1; J(1) ← 1 // job one is inserted into J //
for i ← 2 to n do // consider jobs in non-increasing order of pi //
    // find the position of i and check feasibility of insertion //
    r ← k // r and k are indices for existing jobs in J //
    // find r such that i can be inserted after r //
    while D(J(r)) > D(i) and D(J(r)) ≠ r do
        // job J(r) can be processed after i and //
        // the deadline of job J(r) is not exactly r //
        r ← r - 1 // consider whether job i can come after job J(r-1) //
    repeat
    if D(J(r)) ≤ D(i) and D(i) > r then
        // the new job i can come after existing job J(r); insert i into J at position r + 1 //
        for l ← k to r + 1 by -1 do
            J(l + 1) ← J(l) // shift jobs J(r+1), …, J(k) right by one position //
        repeat
        J(r + 1) ← i; k ← k + 1
    end if
repeat
end JS
COMPLEXITY ANALYSIS OF JS ALGORITHM
• Let n be the number of jobs and s be the number of jobs included in the solution.
• The loop between lines 4-15 (the for-loop) is iterated (n-1)times.
• Each iteration takes O(k) where k is the number of existing jobs.

∴ The time needed by the algorithm is O(sn); since s ≤ n, the worst case time is O(n²).
If di = n - i + 1, 1 ≤ i ≤ n, JS takes Θ(n²) time.
D and J need Θ(s) amount of space.
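A Python sketch of the same greedy idea, using the common latest-free-slot formulation rather than the ordered-insertion bookkeeping of JS:

def job_sequencing(profits, deadlines):
    # Consider jobs in non-increasing order of profit; put each job in the
    # latest free time slot at or before its deadline, if one exists.
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i], reverse=True)
    max_d = max(deadlines)
    slot = [None] * (max_d + 1)       # slot[t] = job processed during unit time t
    for i in order:
        for t in range(min(deadlines[i], max_d), 0, -1):
            if slot[t] is None:
                slot[t] = i
                break
    scheduled = [j for j in slot if j is not None]
    return scheduled, sum(profits[j] for j in scheduled)

# Example from above: profits (100, 10, 15, 27), deadlines (2, 1, 2, 1)
print(job_sequencing([100, 10, 15, 27], [2, 1, 2, 1]))
# -> ([3, 0], 127): process job 4, then job 1 (0-based indices 3 and 0)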
3.4 Minimum-Cost Spanning Trees
Spanning Tree
Spanning tree is a connected acyclic sub-graph (tree) of the given graph (G) that includes
all of G's vertices.

Example: Consider the graph with vertices a, b, c, d and weighted edges
a–b = 1, b–d = 2, c–d = 3, a–c = 5.

The spanning trees for the above graph are:
T1 = { a–b, a–c, c–d }, Weight (T1) = 9
T2 = { a–b, a–c, b–d }, Weight (T2) = 8
T3 = { a–b, b–d, c–d }, Weight (T3) = 6
Minimum Spanning Tree (MST)

Definition:
MST of a weighted, connected graph G is defined as: A spanning tree of G with
minimum total weight.
Example: Consider the example of spanning tree:
For the given graph there are three possible spanning trees. Among them the spanning
tree with the minimum weight 6 is the MST for the given graph

Question: Why can't we use the BRUTE FORCE method in constructing MST?


Answer: If we use Brute force method-
• Exhaustive search approach has to be applied.
• Two serious obstacles faced:
1. The number of spanning trees grows exponentially with graph size.
2. Generating all spanning trees for the given graph is not easy.
MST Applications:
• Network design.
Telephone, electrical, hydraulic, TV cable, computer, road
• Approximation algorithms for NP-hard problems.
Traveling salesperson problem, Steiner tree
• Cluster analysis.
• Reducing data storage in sequencing amino acids in a protein
• Learning salient features for real-time face verification
• Auto config protocol for Ethernet bridging to avoid cycles in a network, etc
3.5 Prim’s Algorithm
Some useful definitions:
• Fringe edge: an edge with one vertex in the partially constructed tree Ti and
the other vertex not in it.
• Unseen edge: an edge with both vertices not in Ti

Algorithm:
ALGORITHM Prim (G)
//Prim's algorithm for constructing a MST
//Input: A weighted connected graph G = { V, E }
//Output: ET the set of edges composing a MST of G
// the set of tree vertices can be initialized with any vertex
VT → { v0}
ET → Ø
for i→ 1 to |V| - 1 do
Find a minimum-weight edge e* = (v*, u*) among all the edges (v, u) such
that v is in VT and u is in V - VT
VT → VT U { u*}
ET → ET U { e*}
return ET
STEP 1: Start with a tree, T0, consisting of one vertex
STEP 2: "Grow" the tree one vertex/edge at a time
• Construct a series of expanding sub-trees T1, T2, … Tn-1.
• At each stage, construct Ti+1 from Ti by adding the minimum-weight edge
connecting a vertex in the tree (Ti) to one vertex not yet in the tree, chosen from
the "fringe" edges (this is the "greedy" step!)
The algorithm stops when all vertices are included.

Example:
Apply Prim's algorithm to the following graph to find a MST.
Vertices: a, b, c, d, e, f. Edges: ab = 3, ae = 6, af = 5, bc = 1, bf = 4,
cd = 6, cf = 4, de = 8, df = 5, ef = 2.

Solution:

Tree vertex    Remaining fringe vertices (nearest tree vertex, weight)    Edge added
a ( -, - )     b(a,3)  c(-,∞)  d(-,∞)  e(a,6)  f(a,5)                     b(a,3)
b ( a, 3 )     c(b,1)  d(-,∞)  e(a,6)  f(b,4)                             c(b,1)
c ( b, 1 )     d(c,6)  e(a,6)  f(b,4)                                     f(b,4)
f ( b, 4 )     d(f,5)  e(f,2)                                             e(f,2)
e ( f, 2 )     d(f,5)                                                     d(f,5)
d ( f, 5 )     The algorithm stops since all vertices are included.
               The weight of the minimum spanning tree is 15.

Efficiency:
Efficiency of Prim's algorithm depends on the data structure used to store the priority queue.
• Unordered array: Θ(n²)
• Binary heap: Θ(m log n)
• Min-heap: for a graph with n nodes and m edges: O((n + m) log n)

Conclusion:
• Prim's algorithm is a "vertex-based algorithm"
• Prim's algorithm needs a priority queue for locating the nearest vertex.
The choice of priority queue matters in Prim implementation.
o Array - optimal for dense graphs
o Binary heap - better for sparse graphs
o Fibonacci heap - best in theory, but not in practice
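A Python sketch of Prim's algorithm with a binary heap as the priority queue (the graph encoding is ours):

import heapq

def prim(graph, start):
    # graph: dict vertex -> list of (weight, neighbor) pairs (undirected)
    visited = {start}
    fringe = list(graph[start])       # fringe edges, keyed by weight
    heapq.heapify(fringe)
    mst, total = [], 0
    while fringe and len(visited) < len(graph):
        w, u = heapq.heappop(fringe)  # minimum-weight fringe edge
        if u in visited:
            continue
        visited.add(u)
        mst.append((w, u))
        total += w
        for edge in graph[u]:
            if edge[1] not in visited:
                heapq.heappush(fringe, edge)
    return mst, total

g = {'a': [(3, 'b'), (5, 'f'), (6, 'e')],
     'b': [(3, 'a'), (1, 'c'), (4, 'f')],
     'c': [(1, 'b'), (4, 'f'), (6, 'd')],
     'd': [(6, 'c'), (5, 'f'), (8, 'e')],
     'e': [(6, 'a'), (2, 'f'), (8, 'd')],
     'f': [(5, 'a'), (4, 'b'), (4, 'c'), (5, 'd'), (2, 'e')]}
print(prim(g, 'a'))   # MST weight 15, as in the example above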
3.6 Kruskal’s Algorithm

Algorithm:

ALGORITHM Kruskal (G)


//Kruskal‘s algorithm for constructing a MST
//Input: A weighted connected graph G = { V, E }
//Output: ET the set of edges composing a MST of G

Sort E in ascending order of the edge weights

// initialize the set of tree edges and its size


ET → Ø
edge_counter → 0

//initialize the number of processed edges


k → 0
while edge_counter < |V| - 1
    k → k + 1
    if ET U { e_ik } is acyclic
        ET → ET U { e_ik }
        edge_counter → edge_counter + 1
return ET

The method:
STEP 1: Sort the edges by increasing weight
STEP 2: Start with a forest having |V| number of trees.
STEP 3: Number of trees are reduced by ONE at every inclusion of an edge
At each stage:
• Among the edges which are not yet included, select the one with minimum
weight AND which does not form a cycle.
• the edge will reduce the number of trees by one by combining two trees of
the forest

Algorithm stops when |V| -1 edges are included in the MST i.e : when the number of
trees in the forest is reduced to ONE.
Example:
Apply Kruskal's algorithm to the following graph to find a MST.
Vertices: a, b, c, d, e, f. Edges: ab = 3, ae = 6, af = 5, bc = 1, bf = 4,
cd = 6, cf = 4, de = 8, df = 5, ef = 2.
Solution:
The list of edges is:
Edge:    ab  af  ae  bc  bf  cf  cd  df  de  ef
Weight:   3   5   6   1   4   4   6   5   8   2
Sort the edges in ascending order:
Edge:    bc  ef  ab  bf  cf  af  df  ae  cd  de
Weight:   1   2   3   4   4   5   5   6   6   8

Edge               bc    ef    ab    bf    cf    af    df
Weight              1     2     3     4     4     5     5
Insertion status   YES   YES   YES   YES   NO    NO    YES
Insertion order     1     2     3     4     -     -     5

The edges cf and af are rejected because each would form a cycle with the edges
already included. After df is inserted, the algorithm stops, as |V| - 1 = 5 edges are
included in the MST (weight = 1 + 2 + 3 + 4 + 5 = 15).

39
Efficiency:
Efficiency of Kruskal's algorithm is based on the time needed for sorting the edge
weights of a given graph.
• With an efficient sorting algorithm: Θ(|E| log |E|)

Conclusion:
• Kruskal's algorithm is an "edge-based algorithm"
• Prim's algorithm with a heap is faster than Kruskal's algorithm.
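A Python sketch of Kruskal's algorithm; a union-find structure replaces the explicit acyclicity check (the vertex numbering is ours):

def kruskal(n, edges):
    # edges: list of (weight, u, v) triples; vertices are 0..n-1
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):           # Step 1: sort by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # acyclic: edge joins two different trees
            parent[ru] = rv
            mst.append((w, u, v))
            total += w
    return mst, total

# Graph from the example above, with a=0, b=1, c=2, d=3, e=4, f=5
edges = [(3, 0, 1), (5, 0, 5), (6, 0, 4), (1, 1, 2), (4, 1, 5),
         (4, 2, 5), (6, 2, 3), (5, 3, 5), (8, 3, 4), (2, 4, 5)]
print(kruskal(6, edges))   # MST weight 15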
3.7 Single Source Shortest Paths.

Some useful definitions:


• Shortest Path Problem: Given a connected directed graph G with non-negative
weights on the edges and a root vertex r, find for each vertex x, a directed path P
(x) from r to x so that the sum of the weights on the edges in the path P (x) is as
small as possible.
Algorithm
• By Dutch computer scientist Edsger Dijkstra in 1959.
• Solves the single-source shortest path problem for a graph with nonnegative edge
weights.
• This algorithm is often used in routing.
E.g .: Dijkstra's algorithm is usually the working principle behind link-
state routing protocols
ALGORITHM Dijkstra(G, s)
//Input: Weighted connected graph G and source vertex s
//Output: The length Dv of a shortest path from s to v and its penultimate vertex Pv for
every vertex v in V

//initialize vertex priority in the priority queue


Initialize (Q)
for every vertex v in V do
Dv→∞ ; Pv→null // Pv , the parent of v
insert(Q, v, Dv) //initialize vertex priority in priority queue
ds→0
//update priority of s with ds, making ds, the minimum
Decrease(Q, s, ds)

VT → Ø
for i → 0 to |V| - 1 do
    u* → DeleteMin(Q)
    //expanding the tree, choosing the locally best vertex
    VT → VT U {u*}
    for every vertex u in V – VT that is adjacent to u* do
        if Du* + w(u*, u) < Du
            Du → Du* + w(u*, u); Pu → u*
            Decrease(Q, u, Du)
The method
Dijkstra‘s algorithm solves the single source shortest path problem in 2 stages.
Stage 1: A greedy algorithm computes the shortest distance from source to all other
nodes in the graph and saves in a data structure.
Stage 2 : Uses the data structure for finding a shortest path from source to any vertex v.
• At each step, and for each vertex x, keep track of a “distance” D(x)
and a directed path P(x) from root to vertex x of length D(x).
• Scan first from the root and take initial paths P( r, x ) = ( r, x ) with
D(x) = w( rx ) when rx is an edge,
D(x) = ∞ when rx is not an edge.
For each temporary vertex y distinct from x, set
D(y) = min{ D(y), D(x) + w(xy) }

Example:
Apply Dijkstra‘s algorithm to find Single source shortest paths with vertex a as the
source.
Vertices: a, b, c, d, e, f. Edges: ab = 3, ae = 6, af = 5, bc = 1, bf = 4,
cd = 6, cf = 4, de = 8, df = 5, ef = 2.

Solution:
Length Dv of the shortest path from source a to vertex v, and penultimate vertex Pv,
for every vertex v in V, initially:
Da = 0, Pa = null
Db = ∞, Pb = null
Dc = ∞, Pc = null
Dd = ∞, Pd = null
De = ∞, Pe = null
Df = ∞, Pf = null

Tree vertex    Remaining vertices (penultimate vertex, distance)
a ( -, 0 )     b(a, 0+3)   c(-, ∞)   d(-, ∞)   e(a, 0+6)   f(a, 0+5)
b ( a, 3 )     c(b, 3+1)   d(-, ∞)   e(a, 6)   f(a, 5)
c ( b, 4 )     d(c, 4+6)   e(a, 6)   f(a, 5)
f ( a, 5 )     d(c, 10)    e(a, 6)
e ( a, 6 )     d(c, 10)
d ( c, 10 )    The algorithm stops: no edges remain to scan.

Resulting shortest paths from a:
Da = 0
Db = 3,  path a – b
Dc = 4,  path a – b – c
Dd = 10, path a – b – c – d
De = 6,  path a – e
Df = 5,  path a – f
Conclusion:
• Doesn't work with negative weights
• Applicable to both undirected and directed graphs
• Using an unordered array to store the priority queue: efficiency = Θ(n²)
• Using a min-heap to store the priority queue: efficiency = O(m log n)
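A Python sketch of Dijkstra's algorithm with a min-heap as the priority queue (the graph encoding is ours):

import heapq

def dijkstra(graph, source):
    # graph: dict vertex -> list of (weight, neighbor); weights must be nonnegative
    dist = {v: float('inf') for v in graph}
    parent = {v: None for v in graph}     # penultimate vertex on the shortest path
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale queue entry: skip
        for w, v in graph[u]:
            if d + w < dist[v]:           # the relaxation step
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, parent

g = {'a': [(3, 'b'), (5, 'f'), (6, 'e')],
     'b': [(3, 'a'), (1, 'c'), (4, 'f')],
     'c': [(1, 'b'), (4, 'f'), (6, 'd')],
     'd': [(6, 'c'), (5, 'f'), (8, 'e')],
     'e': [(6, 'a'), (2, 'f'), (8, 'd')],
     'f': [(5, 'a'), (4, 'b'), (4, 'c'), (5, 'd'), (2, 'e')]}
print(dijkstra(g, 'a')[0])   # {'a': 0, 'b': 3, 'c': 4, 'd': 10, 'e': 6, 'f': 5}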
UNIT - 4
Dynamic Programming
4.1 The General Method
4.2 Warshall’s Algorithm
4.3 Floyd’s Algorithm for the All-Pairs Shortest Paths Problem
4.4 Single-Source Shortest Paths
4.5 General Weights 0/1 Knapsack
4.6 The Traveling Salesperson problem.
4.1 The General Method
Definition
Dynamic programming (DP) is a general algorithm design technique for solving
problems with overlapping sub-problems. This technique was invented by the American
mathematician Richard Bellman in the 1950s.
Key Idea
The key idea is to save answers of overlapping smaller sub-problems to avoid re-
computation.
Dynamic Programming Properties
• An instance is solved using the solutions for smaller instances.
• The solutions for a smaller instance might be needed multiple times, so store their
results in a table.
• Thus each smaller instance is solved only once.
• Additional space is used to save time.
Dynamic Programming vs. Divide & Conquer
LIKE divide & conquer, dynamic programming solves problems by combining solutions
to sub-problems. UNLIKE divide & conquer, sub-problems are NOT independent in
dynamic programming.

Divide & Conquer:
1. Partitions a problem into independent smaller sub-problems.
2. Doesn't store solutions of sub-problems. (Identical sub-problems may arise,
so the same computations are performed repeatedly.)
3. Top-down algorithms: logically progress from the initial instance down to the
smallest sub-instances via intermediate sub-instances.

Dynamic Programming:
1. Partitions a problem into overlapping sub-problems.
2. Stores solutions of sub-problems, thus avoiding calculation of the same
quantity twice.
3. Bottom-up algorithms: the smallest sub-problems are explicitly solved first,
and their results are used to construct solutions to progressively larger
sub-instances.
Dynamic Programming vs. Divide & Conquer: EXAMPLE
Computing Fibonacci Numbers

1. Using the standard recursive formula:

F(n) = 0                  if n = 0
F(n) = 1                  if n = 1
F(n) = F(n-1) + F(n-2)    if n > 1
Algorithm F(n)
// Computes the nth Fibonacci number recursively by using its definitions
// Input: A non-negative integer n
// Output: The nth Fibonacci number
if n==0 || n==1 then
return n
else
return F(n-1) + F(n-2)

Algorithm F(n): Analysis


• Is too expensive as it has repeated calculation of smaller Fibonacci numbers.
• Exponential order of growth.

F(n)

F(n-1) + F(n-2)

F(n-2) + F(n-3) F(n-3) + F(n-4)

2. Using Dynamic Programming:


Algorithm F(n)
// Computes the nth Fibonacci number by using dynamic programming method
// Input: A non-negative integer n
// Output: The nth Fibonacci number
A[0] ← 0
A[1] ← 1
for i ← 2 to n do
    A[i] ← A[i-1] + A[i-2]
return A[n]

Algorithm F(n): Analysis


• Since it caches previously computed values, saves time from repeated
computations of same sub-instance
• Linear order of growth
Rules of Dynamic Programming
1. OPTIMAL SUB-STRUCTURE: An optimal solution to a problem contains
optimal solutions to sub-problems
2. OVERLAPPING SUB-PROBLEMS: A recursive solution contains a "small"
number of distinct sub-problems repeated many times
3. BOTTOM UP FASHION: Computes the solution in a bottom-up fashion in the
final step

Three basic components of Dynamic Programming solution


The development of a dynamic programming algorithm must have the following three
basic components
1. A recurrence relation
2. A tabular computation
3. A backtracking procedure

Example Problems that can be solved using Dynamic Programming


method
1. Computing binomial co-efficient
2. Compute the longest common subsequence
3. Warshall‘s algorithm for transitive closure
4. Floyd‘s algorithm for all-pairs shortest paths
5. Some instances of difficult discrete optimization problems like knapsack problem
And traveling salesperson problem
4.2Warshall’s Algorithm
Some useful definitions:
• Directed Graph: A graph whose every edge is directed is called directed graph
OR digraph
• Adjacency matrix: The adjacency matrix A = {aij} of a directed graph is the
boolean matrix that has
o 1 - if there is a directed edge from ith vertex to the jth vertex
o 0 - Otherwise
• Transitive Closure: the transitive closure of a directed graph with n vertices is
the n-by-n matrix T = {tij}, in which the element in the ith row (1 ≤ i ≤ n) and
the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path (i.e., a
directed path of positive length) from the ith vertex to the jth vertex; otherwise,
tij is 0. The transitive closure provides reachability information about a digraph.
Computing Transitive Closure:
• We can perform DFS/BFS starting at each vertex
• Performs traversal starting at the ith vertex.
• Gives information about the vertices reachable from the ith vertex
• Drawback: this method traverses the same graph several times.
• Efficiency: O(n(n+m))
• Alternatively, we can use dynamic programming: the Warshall‘s Algorithm
Underlying idea of Warshall’s algorithm:
• Let A denote the initial boolean matrix.
• The element r(k) [ i, j] in ith row and jth column of matrix Rk (k = 0, 1, …, n) is
equal to 1 if and only if there exists a directed path from ith vertex to jth vertex
with intermediate vertex if any, numbered not higher than k
• Recursive Definition:
• Case 1: A path from vi to vj restricted to using only vertices from
{v1,v2,…,vk} as intermediate vertices does not use vk, Then
R(k) [ i, j ] = R(k-1) [ i, j ].
• Case 2: A path from vi to vj restricted to using only vertices from
{v1,v2,…,vk} as intermediate vertices do use vk. Then
R(k) [ i, j ] = R(k-1) [ i, k ] AND R(k-1) [ k, j ].
R(k)[ i, j ] = R(k-1) [ i, j ] OR (R(k-1) [ i, k ] AND R(k-1) [ k, j ] )
Algorithm:
Algorithm Warshall(A[1..n, 1..n])
// Computes transitive closure matrix
// Input: Adjacency matrix A
// Output: Transitive closure matrix R
R(0) ← A
for k → 1 to n do
    for i → 1 to n do
        for j → 1 to n do
            R(k)[i, j] → R(k-1)[i, j] OR (R(k-1)[i, k] AND R(k-1)[k, j])
return R(n)
Find the transitive closure for the given digraph using Warshall's algorithm.
The digraph has vertices A, B, C, D and directed edges A→C, B→A, B→D, D→B.

Solution:
R(0) (the adjacency matrix):
        A  B  C  D
    A   0  0  1  0
    B   1  0  0  1
    C   0  0  0  0
    D   0  1  0  0

k = 1 (vertex A can be an intermediate node):
R1[2,3] = R0[2,3] OR (R0[2,1] AND R0[1,3]) = 0 OR (1 AND 1) = 1
R(1):
        A  B  C  D
    A   0  0  1  0
    B   1  0  1  1
    C   0  0  0  0
    D   0  1  0  0

k = 2 (vertices {A, B} can be intermediate nodes):
R2[4,1] = R1[4,1] OR (R1[4,2] AND R1[2,1]) = 0 OR (1 AND 1) = 1
R2[4,3] = R1[4,3] OR (R1[4,2] AND R1[2,3]) = 0 OR (1 AND 1) = 1
R2[4,4] = R1[4,4] OR (R1[4,2] AND R1[2,4]) = 0 OR (1 AND 1) = 1
R(2):
        A  B  C  D
    A   0  0  1  0
    B   1  0  1  1
    C   0  0  0  0
    D   1  1  1  1

k = 3 (vertices {A, B, C} can be intermediate nodes): NO CHANGE, R(3) = R(2).

k = 4 (vertices {A, B, C, D} can be intermediate nodes):
R4[2,2] = R3[2,2] OR (R3[2,4] AND R3[4,2]) = 0 OR (1 AND 1) = 1
R(4) — TRANSITIVE CLOSURE:
        A  B  C  D
    A   0  0  1  0
    B   1  1  1  1
    C   0  0  0  0
    D   1  1  1  1
Efficiency:
• Time efficiency is Θ(n³)
• Space efficiency: requires extra space for separate matrices recording
intermediate results of the algorithm.
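Warshall's algorithm translates almost line for line into Python (a sketch; the matrix encoding is ours):

def warshall(adjacency):
    # adjacency: n x n boolean (0/1) matrix; returns the transitive closure
    n = len(adjacency)
    r = [row[:] for row in adjacency]     # R(0) = A
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Digraph from the example above: A->C, B->A, B->D, D->B
a = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
for row in warshall(a):
    print(row)    # [0,0,1,0], [1,1,1,1], [0,0,0,0], [1,1,1,1]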

4.3 Floyd's Algorithm for the All-Pairs Shortest Paths Problem
Some useful definitions:
• Weighted Graph: each edge has a weight (associated numerical value). Edge
weights may represent costs, distances/lengths, capacities, etc., depending on the
problem.
• Weight matrix: W(i, j) is
  o 0 if i = j
  o ∞ if there is no edge between i and j
  o the weight of the edge if there is an edge between i and j

Problem statement:
Given a weighted graph G(V, E), the all-pairs shortest paths problem is to find the
shortest path between every pair of vertices (vi, vj) Є V.
Solution:
A number of algorithms are known for solving All pairs shortest path problem
• Matrix multiplication based algorithm
• Dijkstra's algorithm
• Bellman-Ford algorithm
• Floyd's algorithm
Underlying idea of Floyd’s algorithm:
• Let W denote the initial weight matrix.
• Let D(k) [ i, j] denote cost of shortest path from i to j whose intermediate vertices
are a subset of {1,2,…,k}.
• Recursive Definition
Case 1:
A shortest path from vi to vj restricted to using only vertices from {v1,v2,…,vk}
as intermediate vertices does not use vk. Then
D(k) [ i, j ] = D(k-1) [ i, j ].
Case 2:
A shortest path from vi to vj restricted to using only vertices from {v1,v2,…,vk}
as intermediate vertices do use vk. Then
D(k) [ i, j ] = D(k-1) [ i, k ] + D(k-1) [ k, j ].
We conclude:
D(k)[ i, j ] = min { D(k-1) [ i, j ], D(k-1) [ i, k ] + D(k-1) [ k, j ] }

Algorithm:
Algorithm Floyd(W[1..n, 1..n])
// Implements Floyd's algorithm
// Input: Weight matrix W
// Output: Distance matrix of shortest paths' lengths
D ← W
for k → 1 to n do
    for i → 1 to n do
        for j → 1 to n do
            D[i, j] → min { D[i, j], D[i, k] + D[k, j] }
return D
Example:
Find all-pairs shortest paths for the given weighted connected graph using Floyd's
algorithm.
Vertices: A, B, C. Directed weighted edges: A→B = 2, A→C = 5, B→A = 4, C→B = 3.

Solution:

D(0) = W:
        A  B  C
    A   0  2  5
    B   4  0  ∞
    C   ∞  3  0

k = 1 (vertex A can be intermediate): D[B,C] = min(∞, 4 + 5) = 9
D(1):
        A  B  C
    A   0  2  5
    B   4  0  9
    C   ∞  3  0

k = 2 (vertices {A, B} can be intermediate): D[C,A] = min(∞, 3 + 4) = 7
D(2):
        A  B  C
    A   0  2  5
    B   4  0  9
    C   7  3  0

k = 3 (vertices {A, B, C} can be intermediate): no changes.
D(3) gives the lengths of the shortest paths between all pairs of vertices:
        A  B  C
    A   0  2  5
    B   4  0  9
    C   7  3  0
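Floyd's algorithm in Python (a sketch; float('inf') plays the role of ∞):

def floyd(weights):
    # weights: n x n matrix; W[i][j] = edge weight, 0 on the diagonal, inf if no edge
    n = len(weights)
    d = [row[:] for row in weights]       # D(0) = W
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

INF = float('inf')
w = [[0, 2, 5],
     [4, 0, INF],
     [INF, 3, 0]]
for row in floyd(w):
    print(row)    # [0, 2, 5], [4, 0, 9], [7, 3, 0]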
4.4 0/1 Knapsack Problem and Memory Functions
Definition:
Given a set of n items of known weights w1,…,wn and values v1,…,vn and a knapsack
of capacity W, the problem is to find the most valuable subset of the items that fit into the
knapsack.
Knapsack problem is an OPTIMIZATION PROBLEM

Dynamic programming approach to solve knapsack problem

Step 1:
Identify the smaller sub-problems. If items are labeled 1..n, then a sub-problem would be
to find an optimal solution for Sk = {items labeled 1, 2, .. k}

Step 2:
Recursively define the value of an optimal solution in terms of solutions to smaller
problems.
Initial conditions:
V[ 0, j ] = 0 for j ≥ 0
V[ i, 0 ] = 0 for i ≥ 0

Recursive step:
V[i, j] = max { V[i-1, j], vi + V[i-1, j - wi] }   if j - wi ≥ 0
V[i, j] = V[i-1, j]                                if j - wi < 0
Step 3:
Bottom up computation using iteration

Question:
Apply bottom-up dynamic programming algorithm to the following instance of the
knapsack problem Capacity W= 5

Item #    Weight (Kg)    Value (Rs.)
  1            2              3
  2            3              4
  3            4              5
  4            5              6

Solution:
Using the dynamic programming approach, we fill the table row by row (wi is the weight
of item i, and j is the available knapsack capacity):

Step 1: w1 = 2, capacity j = 1. w1 > j, case 1 holds: V[1,1] = V[0,1] = 0
Step 2: w1 = 2, capacity j = 2. w1 = j, case 2 holds:
    V[1,2] = max { V[0,2], 3 + V[0,0] } = max { 0, 3 + 0 } = 3
Step 3: w1 = 2, capacities j = 3, 4, 5. w1 < j, case 2 holds:
    V[1,j] = max { V[0,j], 3 + V[0,j-2] } = max { 0, 3 + 0 } = 3
Step 4: w2 = 3, capacity j = 1. w2 > j, case 1 holds: V[2,1] = V[1,1] = 0
Step 5: w2 = 3, capacity j = 2. w2 > j, case 1 holds: V[2,2] = V[1,2] = 3
Step 6: w2 = 3, capacity j = 3. w2 = j, case 2 holds:
    V[2,3] = max { V[1,3], 4 + V[1,0] } = max { 3, 4 + 0 } = 4
Step 7: w2 = 3, capacity j = 4. w2 < j, case 2 holds:
    V[2,4] = max { V[1,4], 4 + V[1,1] } = max { 3, 4 + 0 } = 4
Step 8: w2 = 3, capacity j = 5. w2 < j, case 2 holds:
    V[2,5] = max { V[1,5], 4 + V[1,2] } = max { 3, 4 + 3 } = 7
Step 9: w3 = 4, capacities j = 1, 2, 3. w3 > j, case 1 holds: V[3,j] = V[2,j]
Step 10: w3 = 4, capacity j = 4. w3 = j, case 2 holds:
    V[3,4] = max { V[2,4], 5 + V[2,0] } = max { 4, 5 + 0 } = 5
Step 11: w3 = 4, capacity j = 5. w3 < j, case 2 holds:
    V[3,5] = max { V[2,5], 5 + V[2,1] } = max { 7, 5 + 0 } = 7
Step 12: w4 = 5, capacities j = 1, 2, 3, 4. w4 > j, case 1 holds: V[4,j] = V[3,j]
Step 13: w4 = 5, capacity j = 5. w4 = j, case 2 holds:
    V[4,5] = max { V[3,5], 6 + V[3,0] } = max { 7, 6 + 0 } = 7

The completed table:

V[i, j]    j=0   1   2   3   4   5
i = 0       0    0   0   0   0   0
i = 1       0    0   3   3   3   3
i = 2       0    0   3   4   4   7
i = 3       0    0   3   4   5   7
i = 4       0    0   3   4   5   7

Maximal value is V[4, 5] = 7/-

What is the composition of the optimal subset?


The composition of the optimal subset if found by tracing back the computations
for the entries in the table.
Ste Table Remarks
p
1
V[i, j= 1 2 3 4 5 V[ 4, 5 ] = V[ 3, 5 ]
j]i=0 00 0 0 0 0 0
1 0 0 3 3 3 3 ITEM 4 NOT included in
2 0 0 3 4 4 7 the subset
3 0 0 3 4 5 7
4 0 0 3 4 5 7

2
V[i, j= 1 2 3 4 5 V[ 3, 5 ] = V[ 2, 5 ]
j]i=0 00 0 0 0 0 0
1 0 0 3 3 3 3 ITEM 3 NOT included in
2 0 0 3 4 4 7 the subset
3 0 0 3 4 5 7
4 0 0 3 4 5 7

3
V[i, j= 1 2 3 4 5 V[ 2, 5 ] ≠ V[ 1, 5 ]
j]i=0 00 0 0 0 0 0
1 0 0 3 3 3 3 ITEM 2 included in the subset
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7

4 Since item 2 is included in the


knapsack: Weight of item 2 is
3kg, therefore, remaining
capacity of the knapsack is V[ 1, 2 ] ≠ V[ 0, 2 ]
(5 - 3 =) 2kg
V[i, j= 1 2 3 4 5 ITEM 1 included in the subset
j]i=0 00 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7

5 Since item 1 is included in the Optimal subset: { item 1, item 2 }


knapsack: Weight of item 1 is
2kg,therefore, remaining Total weight is: 5kg (2kg +
capacity of the knapsack is 3kg) Total profit is: 7/- (3/-
(2 - 2 =) 0 kg. + 4/-)

55
Efficiency:
• Running time of Knapsack problem using dynamic programming algorithm is:
O( n * W )
• Time needed to find the composition of an optimal solution is: O( n + W )
Memory function

• Memory function combines the strength of top-down and bottom-up approaches


• It solves ONLY sub-problems that are necessary and does it ONLY ONCE.

The method:
• Uses top-down manner.
• Maintains table as in bottom-up approach.
• Initially, all the table entries are initialized with special ―unll‖ symbol to
indicate that they have not yet been calculated.
• Whenever a new value needs to be calculated, the method checks the
corresponding entry in the table first:
• If entry is NOT ―unll‖, it is simply retrieved from the table.
• Otherwise, it is computed by the recursive call whose result is then recorded in
the table.

Algorithm:

Algorithm MFKnap( i, j )
if V[ i, j] < 0
if j < Weights[ i ]
value → MFKnap( i-1, j )
else
value → max {MFKnap( i-1, j ),
Values[i] + MFKnap( i-1, j - Weights[i] )}
V[ i, j ]→ value
return V[ i, j]

Example:
Apply memory function method to the following instance of the knapsack problem
Capacity W= 5

Item # Weight Value


1 (Kg) 2 (Rs.)3
2 3 4
3 4 5
4 5 6

Solution:
Using memory function approach, we have:
Computation Remarks
1 Initially, all the table entries are
initialized with special ―unll‖ symbol V[i, j= 1 2 3 4 5
to indicate that they have not yet been j]i=0 0 0 0 0 0 0
calculated. Here null is indicated with -1 1 0 - - - - -
value. 2 0 -1 -1 -1 -1 -1
3 0 1- 1- 1- 1- 1-
4 0 -1 -1 -1 -1 -1
2 MFKnap( 4, 5 ) 1 1 1 1 1
V[ 1, 5 ] = 3
MFKnap( 3, 5 ) 6 + MFKnap( 3, 0 )
V[i, j= 1 2 3 4 5
j]i=0 0 0 0 0 0 0
MFKnap( 2, 5 ) 5 + MFKnap( 2, 1 )
1 0 - - - - 3
2 0 -1 -1 -1 -1 -
MFKnap( 1, 5 ) 4 + MFKnap( 1, 2 ) 3 0 1- 1- 1- 1- 1-
0 3 4 0 -1 -1 -1 -1 -1
1 1 1 1 1
MFKnap( 0, 5 ) 3 + MFKnap( 0, 3 )
0 3+0

3 MFKnap( 4, 5 )
V[ 1, 2 ] = 3
MFKnap( 3, 5 ) 6 + MFKnap( 3, 0 )
V[i, j= 1 2 3 4 5
j]i=0 0 0 0 0 0 0
MFKnap( 2, 5 ) 5 + MFKnap( 2, 1 )
1 0 - 3 - - 3
2 0 1- - 1- 1- -
MFKnap( 1, 5 ) 4 + MFKnap( 1, 2 )
3 0 -1 -1 -1 -1 -1
3 3
0
4 0 -1 -1 -1 -1 -1
MFKnap( 0, 2 ) 3+ ) 1 1 1 1 1
MFKnap( 0, 0
0 3+0
4 MFKnap( 4, 5 )
V[ 2, 5 ] = 7
MFKnap( 3, 5 ) 6 + MFKnap( )
3, 0 V[i, j= 1 2 3 4 5
j]i=0 0 0 0 0 0 0
1 0 - 3 - - 3
MFKnap( 2, 5 ) 5 + MFKnap( 2, 1 )
2 0 1- - 1- 1- 7
3 7
3 0 -1 -1 -1 -1 -
MFKnap( 1, 5 ) 4 + MFKnap( 1, 2 ) 4 0 -1 -1 -1 -1 -1
3 3 1 1 1 1 1
V[i, j= 1 2 3 4 5
j]i=0 0 0 0 0 0 0
1 0 0 3 - - 3
2 0 0 - -1 -1 7
3 0 - -1 -1 -1 7
4 0 -1 -1 -1 -1 -
1 1 1 1 1

V[i, j= 1 2 3 4 5
j]i=0 0 0 0 0 0 0
1 0 0 3 - - 3
2 0 0 - -1 -1 7
3 0 - -1 -1 -1 7
4 0 1- 1- 1- 1- 7
1 1 1 1

Efficiency:
• Time efficiency same as bottom up algorithm: O( n * W ) + O( n + W )
• Just a constant factor gain by using memory function
• Less space efficient than a space efficient version of a bottom-up algorithm
UNIT-5
DECREASE-AND-CONQUER APPROACHES, SPACE-TIMETRADEOFFS

5.1DECREASE-AND-CONQUER APPROACHES: INTRODUCTION


5.2 INSERTION SORT
5.3 DEPTH FIRST SEARCH AND BREADTH FIRST SEARCH
5.4TOPOLOGICAL SORTING
5.5SPACE-TIME TRADEOFFS: INTRODUCTION
5.6SORTING BY COUNTING
5.7 INPUT ENHANCEMENT IN STRING MATCHING.

5.1 INTRODUCTION:
Decrease & conquer is a general algorithm design strategy based on exploiting the
relationship between a solution to a given instance of a problem and a solution to a
smaller instance of the same problem. The exploitation can be either top-down
(recursive) or bottom-up (non-recursive).

The major variations of decrease and conquer are


1. Decrease by a constant :(usually by 1):
a. insertion sort
b. graph traversal algorithms (DFS and BFS)
c. topological sorting
d. algorithms for generating permutations, subsets

2. Decrease by a constant factor (usually by half)


a. binary search and bisection method

3. Variable size decrease


a. Euclid‘s algorithm

Decrease by a constant :(usually by 1):

59
Decrease by a constant factor (usually by half)

5.2 INSERTION SORT

Description:
Insertion sort is an application of decrease & conquer technique. It is a comparison based
sort in which the sorted array is built on one entry at a time

Algorithm:
ALGORITHM Insertionsort(A [0 … n-1] )
//sorts a given array by insertion sort
//i/p: Array A[0…n-1]
//o/p: sorted array A[0…n-1] in ascending order

for i 1 to n-1
V A[i]
j i-1
while j ≥ 0 AND A[j] > V do
A[j+1] A[j]
j j–1
A[j + 1] V

Analysis:
• Input size: Array size, n
• Basic operation: key comparison
• Best, worst, average case exists
Best case: when input is a sorted array in ascending order:
Worst case: when input is a sorted array in descending order:
• Let Cworst(n) be the number of key comparison in the worst case. Then

• Let Cbest(n) be the number of key comparison in the best case.


Then

Example:
Sort the following list of elements using insertion sort:
89, 45, 68, 90, 29, 34, 17

89 45 68 90 29 34 1
45 89 68 90 29 34 17
45 68 89 90 29 34 17
45 68 89 90 29 34 17
29 45 68 89 90 34 71
29 34 45 68 89 90 17
17 29 34 45 68 89 79
0
Advantages of insertion sort:
• Simple implementation. There are three variations
o Left to right scan
o Right to left scan
o Binary insertion sort
• Efficient on small list of elements, on almost sorted list
• Running time is linear in best case
• Is a stable algorithm
• Is a in-place algorithm
5.3 DEPTH-FIRST SEARCH (DFS) AND BREADTH-FIRST SEARCH (BFS)

DFS and BFS are two graph traversing algorithms and follow decrease and conquer
approach – decrease by one variation to traverse the graph

Some useful definition:


• Tree edges: edges used by DFS traversal to reach previously unvisited vertices
• Back edges: edges connecting vertices to previously visited vertices other than
their immediate predecessor in the traversals
• Cross edges: edge that connects an unvisited vertex to vertex other than its
immediate predecessor. (connects siblings)
• DAG: Directed acyclic graph

Depth-first search (DFS)


Description:
• DFS starts visiting vertices of a graph at an arbitrary vertex by marking it as
visited.
• It visits graph‘s vertices by always moving away from last visited vertex to an
unvisited one, backtracks if no adjacent unvisited vertex is available.
• Is a recursive algorithm, it uses a stack
• A vertex is pushed onto the stack when it‘s reached for the first time
• A vertex is popped off the stack when it becomes a dead end, i.e., when there is
no adjacent unvisited vertex
• ―Redraws‖ graph in tree-like fashion (with tree edges and back edges
for undirected graph)

Algorithm:
ALGORITHM DFS (G)
//implements DFS traversal of a given graph
//i/p: Graph G = { V, E}
//o/p: DFS tree

Mark each vertex in V with 0 as a mark of being


―unvisited‖ count 0
for each vertex v in V do
if v is marked with 0
dfs(v)

dfs(v)
count count + 1
mark v with count
for each vertex w in V adjacent to v do
if w is marked with 0
dfs(w)
Example:
Starting at vertex A traverse the following graph using DFS traversal method:

A B C D

E F G H

Solution:

Ste Graph Remarks


p
1 Insert A into
A
stack A(1)

2
Insert B into
A B
stack B (2)
A(1)
3
A B Insert F into stack

F (3)
F B (2)
A(1)
4
Insert E into stack
A B
E (4)
F (3)
E F B (2)
A(1)
5 NO unvisited adjacent vertex for E, backtrack Delete E from stack

E (4, 1)
F (3)
B (2)
A(1)
6 NO unvisited adjacent vertex for F, backtrack Delete F from stack

E (4, 1)
F (3, 2)
B (2)
A(1)
7
Insert G into
A B
stack E (4, 1)
F (3, 2) G (5)
E F G B (2)
A(1)
8
A B C Insert C into

stack E (4, 1) C
E F G
(6)
F (3, 2) G (5)
9
C D Insert D into stack
A B
D (7)
E (4, 1) C (6)
E F G
F (3, 2) G (5)
B (2)
A(1)
10
A B C D Insert H into stack
H (8)
D (7)
G H E (4, 1) C (6)
E F
F (3, 2) G (5)
B (2)
A(1)
11 NO unvisited adjacent vertex for H, backtrack
Delete H from stack
H (8, 3)
D (7)
E (4, 1) C (6)
F (3, 2) G (5)
B (2)
A(1)

64
12 NO unvisited adjacent vertex for D, backtrack Delete D from stack
H (8, 3)
D (7, 4)
E (4, 1) C (6)
F (3, 2) G (5)
B (2)
A(1)
13 NO unvisited adjacent vertex for C, backtrack Delete C from stack
H (8, 3)
D (7, 4)
E (4, 1) C (6, 5)
F (3, 2) G (5)
B (2)
A(1)
14 NO unvisited adjacent vertex for G, backtrack Delete G from stack
H (8, 3)
D (7, 4)
E (4, 1) C (6, 5)
F (3, 2) G (5, 6)
B (2)
A(1)
15 NO unvisited adjacent vertex for B, backtrack Delete B from stack
H (8, 3)
D (7, 4)
E (4, 1) C (6, 5)
F (3, 2) G (5, 6)
B (2,
7)
16 NO unvisited adjacent vertex for A, backtrack A(1) A from stack
Delete
H (8, 3)
D (7, 4)
E (4, 1) C (6, 5)
F (3, 2) G (5, 6)
B (2, 7)
A(1, 8)
Stack becomes empty. Algorithm stops as all
the nodes in the given graph are visited

The DFS tree is as follows: (dotted lines are back edges)


A

F G

E C

Applications of DFS:
• The two orderings are advantageous for various applications like topological
sorting, etc
• To check connectivity of a graph (number of times stack becomes empty tells the
number of components in the graph)
• To check if a graph is acyclic. (no back edges indicates no cycle)
• To find articulation point in a graph
Efficiency:
• Depends on the graph representation:
o Adjacency matrix : Θ(n2)
o Adjacency list: Θ(n + e)
Breadth-first search (BFS)
Description:
• BFS starts visiting vertices of a graph at an arbitrary vertex by marking it as
visited.
• It visits graph‘s vertices by across to all the neighbors of the last visited vertex
• Instead of a stack, BFS uses a queue
• Similar to level-by-level tree traversal
• ―Redraws‖ graph in tree-like fashion (with tree edges and cross edges
for undirected graph)

Algorithm:
ALGORITHM BFS (G)
//implements BFS traversal of a given graph
//i/p: Graph G = { V, E}
//o/p: BFS tree/forest
Mark each vertex in V with 0 as a mark of being ―unvisited‖

66
count 0
for each vertex v in V do
if v is marked with 0
bfs(v)
bfs(v)
count count + 1
mark v with count and initialize a queue with v
while the queue is NOT empty do
for each vertex w in V adjacent to front‘s vertex v do
if w is marked with 0
count count + 1
mark w with count
add w to the queue
remove vertex v from the front of the queue
Example:
Starting at vertex A traverse the following graph using BFS traversal method:

A B C D

E F G H

Solution:

Ste Graph Remarks


p
1 Insert A into
A
queue A(1)

2
A Insert B, E into
B
queue A(1), B (2),

E E(3)
B (2), E(3)
3
A B Insert F, G into

queue B(2), E(3),


E F G
F(3), G(4)
47 NO unvisited adjacent vertex for E, backtrack Delete E from queue

F(3), G(4)
5 NO unvisited adjacent vertex for F, backtrack Delete F from queue

G(4)
6
A B C Insert C, H into

queue G(4), C(5),


E F G H
H(6)

A C D Insert D into
B
7
queue C(5),
E F G H
H(6), D(7)
H(6), D(7)
NO unvisited adjacent vertex for H, backtrack Delete H from queue
8
D(7)
9 NO unvisited adjacent vertex for D, backtrack Delete D from queue
Queue becomes empty. Algorithm stops as all
the nodes in the given graph are visited

The BFS tree is as follows: (dotted lines are cross edges)

B E

F G

C H

D
Applications of BFS:
• To check connectivity of a graph (number of times queue becomes empty tells the
number of components in the graph)
• To check if a graph is acyclic. (no cross edges indicates no cycle)
• To find minimum edge path in a graph
Efficiency:
• Depends on the graph representation:
o Array : Θ(n2)
o List: Θ(n + e)

Difference between DFS & BFS:

DFS BFS

Data structure Stack Queue


No. of vertex orderings 2 orderings 1 ordering
Edge types Tree edge Tree edge
Back edge Cross edge
Applications Connectivity Connectivity
Acyclicity Acyclicity
Articulation points Minimum edge paths
Efficiency for Θ(n2) Θ(n2)
adjacency matrix
Efficiency for Θ(n + e) Θ(n + e)
adjacency lists

5.4 Topological Sorting


Description:
Topological sorting is a sorting method to list the vertices of the graph in such an order
that for every edge in the graph, the vertex where the edge starts is listed before the
vertex where the edge ends.

NOTE: There is no solution for topological sorting if there is a cycle in the digraph .
[MUST be a DAG]

Topological sorting problem can be solved by using


1. DFS method
2. Source removal method

DFS Method:
• Perform DFS traversal and note the order in which vertices become dead ends
(popped order)
• Reverse the order, yield the topological sorting.
Example:
Apply DFS – based algorithm to solve the topological sorting problem for the given
graph:
C4
C1
C3

C2 C5

Ste Graph Remarks


p1 Insert C1 into
C1
stack C1(1)

2
C1 Insert C2 into
C3
stack C2 (2)
C1(1)

3 Insert C4 into stack


C4
C1
C4 (3)
C3 C2 (2)
C1(1)
4 Insert C5 into stack
C4
C1 C5 (4)
C3 C4 (3)
C2 (2)
C5 C1(1)

5 NO unvisited adjacent vertex for C5, backtrack Delete C5 from stack

C5 (4, 1)
C4 (3)
C2 (2)
C1(1)
6 NO unvisited adjacent vertex for C4, backtrack Delete C4 from stack

C5 (4, 1)
C4 (3, 2)
C2 (2)
C1(1)

70
7 NO unvisited adjacent vertex for C3, backtrack Delete C3 from stack

C5 (4, 1)
C4 (3,2)
C2 (2,
3)
8 NO unvisited adjacent vertex for C1, backtrack C1(1) C1 from stack
Delete

C5 (4, 1)
C4 (3,2)
C2 (2, 3)
C1(1, 4)
Stack becomes empty, but there is a node which is unvisited, therefore start the
DFS again from arbitrarily selecting a unvisited node as source
9 Insert C2 into stack
C2
C5 (4, 1)
C4 (3,2)
C2 (2, 3)
C1(1, 4) C2(5)

10 NO unvisited adjacent vertex for C2, backtrack Delete C2 from stack

C5 (4, 1)
C4 (3,2)
C2 (2, 3)
C1(1, 4) C2(5, 5)
Stack becomes empty, NO unvisited node left, therefore algorithm
stops. The popping – off order is:
C5, C4, C3, C1, C2,
Topologically sorted list (reverse of pop
order): C2, C1 C3 C4 C5

Source removal method:


• Purely based on decrease & conquer
• Repeatedly identify in a remaining digraph a source, which is a vertex with no
incoming edges
• Delete it along with all the edges outgoing from it.

Example:
Apply Source removal – based algorithm to solve the topological sorting problem for the
given graph:
C4
C1
C3
C2 C5
Solution:

C4
C4 Delete C1
C1
C3
C3
C5
C2 C C2
5

Delete C2 C4 C4

C3

C5 C5

Delete C4 Delete C5
C5

The topological order is C1, C2, C3, C4, C5

7
2
5.5 SPACE-TIME TRADEOFFS:
Introduction
Two varieties of space-for-time algorithms:

• input enhancement — preprocess the input (or its part) to store some info
to be used later in solving the problem

• counting sorts
• string searching algorithms

• pre structuring — preprocess the input to make accessing its elements


easier 1)hashing 2)indexing schemes (e.g., B-trees)

5.6 SORTING BY COUNTING


Assume elements to be sorted belong to a known set of small values between l and u, with
potential duplication
Constraint: we cannot overwrite the original list Distribution Counting: compute the
frequency of each
element and later accumulate sum of frequencies (distribution)

Algorithm:

for j ← 0 to u-l do D[j] ← 0 // init frequencies


for i ← 0 to n-1 do D[A[i]-l] ← D[A[i] - l] + 1 // compute frequencies
for j ← 1 to u-l do D[j] ← D[j-1] + D[j] // reuse for distribution
for i ← n-1 downto 0 do
j ← A[i] - l
S[D[j] - 1] ← A[i]
D[j] ← D[j] - 1
return S

7
3
5.7 INPUT ENHANCEMENT IN STRING MATCHING.
Horspool’s Algorithm
A simplified version of Boyer-Moore algorithm: preprocesses pattern to
generate a shift table that determines how much to shift the patter when a
mismatch occurs. Always makes a shift based on the text‘s character c aligned
with the last compared (mismatched) character in the pattern according to the shift
table‘s entry for c

void horspoolInitocc()
{
int j;
char a;

for (a=0; a<alphabetsize; a++)


occ[a]=-1;

for (j=0; j<m-1; j++)


{
a=p[j];
occ[a]=j;

74
}
}
void horspoolSearch()
{
int i=0, j;
while (i<=n-m)
{
j=m-1;
while (j>=0 && p[j]==t[i+j]) j--;
if (j<0) report(i);
i+=m-1;
i-=occ[t[i]];
}
}
Time complexity

• preprocessing phase in O(m+ п) time and O(п) space complexity.

• searching phase in O(mn) time complexity.

• the average number of comparisons for one text character is between 1/п and 2/(п+1).

(п is the number of storing characters)


UNIT 6
LIMITATIONS OF ALGORITHMIC POWER AND COPING WITH THEM

6.1LOWER-BOUND ARGUMENTS
6.2DECISION TREES
6.3 P, NP, AND NP-COMPLETE PROBLEMS

Objectives
We now move into the third and final major theme for this course.
1. Tools for analyzing algorithms.
2. Design strategies for designing algorithms.
3. Identifying and coping with the limitations of algorithms.

Efficiency of an algorithm
• By establishing the asymptotic efficiency class

• The efficiency class for selection sort (quadratic) is lower. Does this mean that
selection sort is a ―better‖ algorithm?
– Like comparing ―paples‖ to ―roanges‖
• By analyzing how efficient a particular algorithm is compared to other algorithms for
the same problem
– It is desirable to know the best possible efficiency any algorithm solving
This problem may have – establishing a lower bound

6.1 LOWER BOUNDS ARGUMENTS

Lower bound: an estimate on a minimum amount of work needed to solve a given problem
Examples:
• number of comparisons needed to find the largest element in a set of n numbers
• number of comparisons needed to sort an array of size n
• number of comparisons necessary for searching in a sorted array
• number of multiplications needed to multiply two n-by-n matrices
Lower bound can be
– an exact count
– an efficiency class (Ω)
• Tight lower bound: there exists an algorithm with the same efficiency as the lower
bound

7
7
Problem Lower Tightne
sorting bound ye ss
searching in a sorted array Ω(nlog n) s
element uniqueness Ω(log n)
Ω(nlog yes
n-digit integer multiplication n) unkno
multiplication of n-by-n Ω(n) wn
matrices Ω(n2) unkno

Methods for Establishing Lower Bounds


• Trivial lower bounds
• Information-theoretic arguments (decision trees)
• Adversary arguments
• Problem reduction

Trivial Lower Bounds


Trivial lower bounds: based on counting the number of items that must be processed in input
and generated as output
Examples
• finding max element
• polynomial evaluation
• sorting
• element uniqueness
• computing the product of two n-by-n matrices
Conclusions
• may and may not be useful
• be careful in deciding how many elements must be processed

6.2 DECISION TREES


Decision tree — a convenient model of algorithms involving Comparisons in which: (i)
internal nodes represent comparisons (ii) leaves represent outcomes
Decision tree for 3-element insertion sort

78
79
Deriving a Lower Bound from Decision Trees
• How does such a tree help us find lower bounds?
– There must be at least one leaf for each correct output.
– The tree must be tall enough to have that many leaves.
• In a binary tree with l leaves and height h,
h ≥ log2 l
Decision Tree and Sorting Algorithms

Decision Tree and Sorting Algorithms


• Number of leaves (outcomes) ≥ n!
• Height of binary tree with n! leaves ≥ log 2n!
• Minimum number of comparisons in the worst case ≥ log2n! for any comparison-based
sorting algorithm
• log2n! ≈ n log2n
• This lower bound is tight (e.g. mergesort)

Decision Tree and Searching a Sorted Array


Decision Tree and Searching a Sorted Array
• Number of leaves (outcomes) = n + n+1 = 2n+1
• Height of ternary tree with 2n+1 leaves ≥ log3 (2n+1)
• This lower bound is NOT tight (the number of worst-case comparisons for binary search
is log2 (n+1), and log3 (2n+1) ≤ log2 (n+1))
• Can we find a better lower bound or find an algorithm with better efficiency than binary
search?

Decision Tree and Searching a Sorted Array

Decision Tree and Searching a Sorted Array


• Consider using a binary tree where internal nodes also serve as successful searches and
leaves only represent unsuccessful searches
• Number of leaves (outcomes) = n + 1
• Height of binary tree with n+1 leaves ≥ log2 (n+1)
• This lower bound is tight

Decision-tree example
Decision-tree model
A decision tree can model the execution of any comparison sort:
• One tree for each input size n.
• View the algorithm as splitting whenever it compares two elements.
• The tree contains the comparisons along all possible instruction traces.
• The running time of the algorithm = the length of the path taken.
• Worst-case running time = height of tree.

Any comparison sort can be turned into a Decision tree

81
82
Decision Tree Model
• In the insertion sort example, the decision tree reveals all possible key comparison
sequences for 3 distinct numbers.
• There are exactly 3!=6 possible output sequences.
• Different comparison sorts should generate different decision trees.
• It should be clear that, in theory, we should be able to draw a decision tree for ANY
comparison sort algorithm.
• Given a particular input sequence, the path from root to the leaf path traces a particular
key comparison sequence performed by that comparison sort.
- The length of that path represented the number of key comparisons performed by
the sorting algorithm.
• When we come to a leaf, the sorting algorithm has determined the sorted order.
• Notice that a correct sorting algorithm should be able to sort EVERY possible output
sorted order.
• Since, there are n! possible sorted order, there are n! leaves in the decision tree.
• Given a decision tree, the height of the tree represent the longest length of a root to leaf
path.
• It follows the height of the decision tree represents the largest number of key
comparisons, which is the worst-case running time of the sorting algorithm.

―Ayn comparison based sorting algorithm takes Ω(n logn) to sort a list of n distinct
elements in the worst-case.‖
– any comparison sort ← model by a decision tree
– worst-case running time ← the height of decision tree
―Any comparison based sorting algorithm takes Ω(n logn) to sort a list of n distinct
elements in the worst-case.‖
• We want to find a lower bound (Ω) of the height of a binary tree that has n! Leaves.
◇What is the minimum height of a binary tree that has n! leaves?

83
• The binary tree must be a complete tree (recall the definition of complete tree).
• Hence the minimum (lower bound) height is θ(log2(n!)).

• log2(n!)
= log2(n) + log2(n-1) + …+ log2(n/2)+….
≥ n/2 log2(n/2) = n/2 log2(n) – n/2
So, log2(n!) = Ω(n logn).
• It follows the height of a binary tree which has n! leaves is at least Ω(n logn) ◊ worst-
case running time is at least Ω(n logn)
• Putting everything together, we have
―Any comparison based sorting algorithm takes Ω(n logn) to sort a list of n
distinct elements in the worst-case.‖

Adversary Arguments
Adversary argument: a method of proving a lower bound by playing role of adversary that makes
algorithm work the hardest by adjusting input
Example: ―Geussing‖ a number between 1 and n with yes/no questions
Adversary: Puts the number in a larger of the two subsets generated by last question

Lower Bounds by Problem Reduction


Idea: If problem P is at least as hard as problem Q, then a lower bound for Q is also a lower
bound for P.
Hence, find problem Q with a known lower bound that can be reduced to problem P in question.

Example: Euclidean MST problem


• Given a set of n points in the plane, construct a tree with minimum total length that
connects the given points. (considered as problem P)
• To get a lower bound for this problem, reduce the element uniqueness problem to it.
(considered as problem Q)
• If an algorithm faster than n log n exists for Euclidean MST, then one exists for element
uniqueness also. Aha! A contradiction! Therefore, any algorithm for Euclidean MST
must take Ω(n log n) time.

Classifying Problem Complexity


Is the problem tractable, i.e., is there a polynomial-time (O(p(n)) algorithm that solves it?
Possible answers:
• yes (give examples)
• no
–because it‘s been proved that no algorithm exists at all
(e.g., Turing‘s halting problem)
–because it‘s been proved that any algorithm for the problem takes exponential
time unknown
Problem Types: Optimization and Decision
• Optimization problem: find a solution that maximizes or minimizes some objective
function
• Decision problem: answer yes/no to a question
Many problems have decision and optimization versions.
E.g.: traveling salesman problem
• optimization: find Hamiltonian cycle of minimum length
• decision: find Hamiltonian cycle of length ≤ m
Decision problems are more convenient for formal investigation of their complexity.

6.3 CLASS P
P: the class of decision problems that are solvable in O(p(n)) time, where p(n) is a polynomial of
problem‘s input size n
Examples:
• searching
• element uniqueness
• graph connectivity
• graph acyclicity
• primality testing (finally proved in 2002)

6.4 CLASS NP
NP (nondeterministic polynomial): class of decision problems whose proposed solutions can be
verified in polynomial time = solvable by a nondeterministic polynomial algorithm
A nondeterministic polynomial algorithm is an abstract two-stage procedure that:
• generates a random string purported to solve the problem
• checks whether this solution is correct in polynomial time
By definition, it solves the problem if it‘s capable of generating and verifying a solution on one
of its tries
Why this definition?
• led to development of the rich theory called ―ocmputational complexity‖

Example: CNF satisfiability


Problem: Is a boolean expression in its conjunctive normal form (CNF) satisfiable, i.e., are there
values of its variables that makes it true?
This problem is in NP. Nondeterministic algorithm:
• Guess truth assignment
• Substitute the values into the CNF formula to see if it evaluates to true

Example: (A | ¬B | ¬C) & (A | B) & (¬B | ¬D | E) & (¬D | ¬E)


Truth assignments:
ABCDE
0 0 0 0 0
. . .
1 1 1 1 1
Checking phase: O(n)

85
What problems are in NP?
• Hamiltonian circuit existence
• Partition problem: Is it possible to partition a set of n integers into two disjoint subsets
with the same sum?
• Decision versions of TSP, knapsack problem, graph coloring, and many other
combinatorial optimization problems. (Few exceptions include: MST, shortest paths)
• All the problems in P can also be solved in this manner (but no guessing is necessary), so
we have:
P = NP
• Big question: P = NP ?

P = NP ?
• One of the most important unsolved problems is computer science is whether or not
P=NP.
– If P=NP, then a ridiculous number of problems currently believed to be very
difficult will turn out have efficient algorithms.
– If P≠NP, then those problems definitely do not have polynomial time solutions.
• Most computer scientists suspect that P ≠ NP. These suspicions are based partly on the
idea of NP-completeness.

6.5 NP-COMPLETE PROBLEMS


A decision problem D is NP-complete if it‘s as hard as any problem in NP, i.e.,
• D is in NP
• every problem in NP is polynomial-time reducible to D
NP problems

NP -complete
problem

Other NP-complete problems obtained through polynomial-


time reductions from a known NP-complete problem
NP problems

known
NP -complete problem

candidate
for NP - completeness

Examples: TSP, knapsack, partition, graph-coloring and hundreds of other problems of


combinatorial nature

General Definitions: P, NP, NP-hard, NP-easy, and NP-complete


• Problems
- Decision problems (yes/no)
- Optimization problems (solution with best score)
• P
- Decision problems (decision problems) that can be solved in polynomial time
- can be solved ―feficiently‖
• NP
- Decision problems whose ―YES‖ answer can be verified in polynomial
time, if we already have the proof (or witness)
• co-NP
- Decision problems whose ―NO‖ answer can be verified in polynomial
time, if we already have the proof (or witness)
• e.g. The satisfiability problem (SAT)

- Given a boolean formula

(x1 �x2 � x3 � x4 ) � ( x5 � x6 � x7 )� x8 � x9
is it possible to assign the input x1...x9, so that the formula evaluates to TRUE?

- If the answer is YES with a proof (i.e. an assignment of input value), then we can check the
proof in polynomial time (SAT is in NP)
- We may not be able to check the NO answer in polynomial time (Nobody really knows.)
• NP-hard
- A problem is NP-hard iff an polynomial-time algorithm for it implies a polynomial-
time algorithm for every problem in NP
- NP-hard problems are at least as hard as NP problems
• NP-complete
- A problem is NP-complete if it is NP-hard, and is an element of NP (NP-easy)
• Relationship between decision problems and optimization problems

- every optimization problem has a corresponding decision problem


- optimization: minimize x, subject to constraints
- yes/no: is there a solution, such that x is less than c?
- an optimization problem is NP-hard (NP-complete)
if its corresponding decision problem is NP-hard (NP-complete)

• How to know another problem, A, is NP-complete?


- To prove that A is NP-complete, reduce a known NP-complete problem to A

• Requirement for Reduction


- Polynomial time
- YES to A also implies YES to SAT, while
NO to A also implies No to SAT

Examples of NP-complete problems


Vertex cover
• Vertex cover
- given a graph G=(V,E), find the smallest number of vertexes that cover each edge
- Decision problem: is the graph has a vertex cover of size K?
• Independent set
- independent set: a set of vertices in the graph with no edges between each pair of nodes.
- given a graph G=(V,E), find the largest independent set
- reduction from vertex cover:
• Set cover
- given a universal set U, and several subsets S1,...Sn
- find the least number of subsets that contains each elements in the universal set

Polynomial (P) Problems


• Are solvable in polynomial time
• Are solvable in O(nk), where k is some constant.
• Most of the algorithms we have covered are in P

Nondeterministic Polynomial (NP) Problems


• This class of problems has solutions that are verifiable in polynomial time.
– Thus any problem in P is also NP, since we would be able to solve it in
polynomial time, we can also verify it in polynomial time
NP-Complete Problems
• Is an NP-Problem
• Is at least as difficult as an NP problem (is reducible to it)
• More formally, a decision problem C is NP-Complete if:

– C is in NP
– Any known NP-hard (or complete) problem ≤p C
– Thus a proof must show these two being satisfied

Exponential Time Algorithms

Examples
• Longest path problem: (similar to Shortest path problem, which requires polynomial
time) suspected to require exponential time, since there is no known polynomial
algorithm.
• Hamiltonian Cycle problem: Traverses all vertices exactly once and form a cycle.
Reduction
• P1 : is an unknown problem (easy/hard ?)
• P2 : is known to be difficult
If we can easily solve P2 using P1 as a subroutine then P1 is difficult
Must create the inputs for P1 in polynomial time.

* P1 is definitely difficult because you know you cannot solve P2 in polynomial time unless you
use a component that is also difficult (it cannot be the mapping since the mapping is known to be
polynomial)

Decision Problems
Represent problem as a decision with a boolean output
– Easier to solve when comparing to other problems
– Hence all problems are converted to decision problems.
P = {all decision problems that can be solved in polynomial time}
NP = {all decision problems where a solution is proposed, can be verified in polynomial time}
NP-complete: the subset of NP which are the ―ahrdest problems‖

Alternative Representation
• Every element p in P1 can map to an element q in P2 such that p is true (decision
problem) if and only if q is also true.
• Must find a mapping for such true elements in P1 and P2, as well as for false elements.
• Ensure that mapping can be done in polynomial time.
• *Note: P1 is unknown, P2 is difficult

Cook’s Theorem
• Stephen Cook (Turing award winner) found the first NP-Complete problem, 3SAT.
Basically a problem from Logic.
Generally described using Boolean formula.
A Boolean formula involves AND, OR, NOT operators and some variables.
Ex: (x or y) and (x or z), where x, y, z are boolean variables.
Problem Definition – Given a boolean formula of m clauses, each containing ‗n‘
boolean variables, can you assign some values to these variables so that the
formula can be true?
Boolean formula: (x v y v ẑ) Λ (x v y v ẑ)
Try all sets of solutions. Thus we have exponential set of possible solutions. So it
is a NPC problem.
• Having one definite NP-Complete problem means others can also be proven NP-
Complete, using reduction.

90
Unit 7
COPING WITH LIMITATIONS OF ALGORITHMIC POWER
7.1 Backtracking: n - Queens problem,
7.2 Hamiltonian Circuit Problem,
7.3 Subset –Sum Problem.
7.4 Branch-and-Bound: Assignment Problem,
7.5 Knapsack Problem,
7.6 Traveling Salesperson Problem.
7.7 Approximation Algorithms for NP-Hard Problems – Traveling Salesperson Problem,
Knapsack Problem

Introduction
Tackling Difficult Combinatorial Problems
• There are two principal approaches to tackling difficult combinatorial problems (NP-hard
problems):
• Use a strategy that guarantees solving the problem exactly but doesn‘t guarantee to find a
solution in polynomial time
• Use an approximation algorithm that can find an approximate (sub-optimal) solution in
polynomial time

Exact Solution Strategies


• exhaustive search (brute force)
– useful only for small instances
• dynamic programming
– applicable to some problems (e.g., the knapsack problem)
• backtracking
– eliminates some unnecessary cases from consideration
– yields solutions in reasonable time for many instances but worst case is still
exponential
• branch-and-bound
– further refines the backtracking idea for optimization problems

7.1 Backtracking
• Suppose you have to make a series of decisions, among various choices, where
– You don‘t have enough information to know what to choose
– Each decision leads to a new set of choices
– Some sequence of choices (possibly more than one) may be a solution to your
problem

• Backtracking is a methodical way of trying out various sequences of decisions, until you
find one that ―wroks‖
Backtracking : A Scenario

A tree is composed of nodes


Backtracking can be thought of as searching a tree for a particular ―gaol‖ leaf node
• Each non-leaf node in a tree is a parent of one or more other nodes (its children)
• Each node in the tree, other than the root, has exactly one parent

The backtracking algorithm


• Backtracking is really quite simple--we ―xeplore‖ each node, as follows:
• To ―xeplore‖ node N:
1. If N is a goal node, return ―scucess‖
2. If N is a leaf node, return ―afilure‖
3. For each child C of N,
3.1. Explore C
3.1.1. If C was successful, return ―usccess‖
4. Return ―afilure‖

• Construct the state-space tree


– nodes: partial solutions
– edges: choices in extending partial solutions

• Explore the state space tree using depth-first search

• ―Prune‖ nonpromising nodes


– dfs stops exploring subtrees rooted at nodes that cannot lead to a solution and
backtracks to such a node‘s parent to continue the search

Example:
n-Queens Problem
Place n queens on an n-by-n chess board so that no two of them are in the same row, column, or
diagonal

92
State-Space Tree of the 4-Queens Problem

7.1.1N-Queens Problem:
• The object is to place queens on a chess board in such as way as no queen can capture
another one in a single move
– Recall that a queen can move horz, vert, or diagonally an infinite distance
• This implies that no two queens can be on the same row, col, or diagonal
– We usually want to know how many different placements there are

4-Queens
• Lets take a look at the simple problem of placing queens 4 queens on a 4x4 board
• The brute-force solution is to place the first queen, then the second, third, and forth
– After all are placed we determine if they are placed legally
• There are 16 spots for the first queen, 15 for the second, etc.
– Leading to 16*15*14*13 = 43,680 different combinations
• Obviously this isn‘t a good way to solve the problem
• First lets use the fact that no two queens can be in the same col to help us
– That means we get to place a queen in each col
• So we can place the first queen into the first col, the second into the second, etc.
• This cuts down on the amount of work
– Now there are 4 spots for the first queen, 4 spots for the second, etc.
• 4*4*4*4 = 256 different combinations
• However, we can still do better because as we place each queen we can look at the
previous queens we have placed to make sure our new queen is not in the same row or
diagonal as a previously place queen
• Then we could use a Greedy-like strategy to select the next valid position for each col

• So now what do we do?


• Well, this is very much like solving a maze

– As you walk though the maze you have to make a series of choices
– If one of your choices leads to a dead end, you need to back up to the last choice
you made and take a different route
• That is, you need to change one of your earlier selections
– Eventually you will find your way out of the maze
• This type of problem is often viewed as a state-space tree
– A tree of all the states that the problem can be in
• We start with an empty board state at the root and try to work our way down to a
leaf node
– Leaf nodes are completed boar

Eight Queen Problem


• The solution is a vector of length 8 (a(1), a(2), a(3), ...., a(8)).
a(i) corresponds to the column where we should place the i-th queen.
• The solution is to build a partial solution element by element until it is complete.
• We should backtrack in case we reach to a partial solution of length k, that we couldn't
expand any more.
Eight Queen Problem: Algorithm
putQueen(row) {
for every position col on the same row
if position col is available
place the next queen in position col
if (row<8)
putQueen(row+1);
else success;
remove the queen from position col
}
putQueen(row) {
for every position col on the same row
if position col is available
place the next queen in position col
if (row<8)
putQueen(row+1);
else success;
remove the queen from position col
}
Eight Queen Problem: Implementation
• Define an 8 by 8 array of 1s and 0s to represent the chessboard
• The array is initialized to 1s, and when a queen is put in a position (c,r), board[r][c] is set
to zero
• Note that the search space is very huge: 16,772, 216 possibilities.
• Is there a way to reduce search space? Yes Search Pruning.
• We know that for queens:
each row will have exactly one queen
each column will have exactly one queen
each diagonal will have at most one queen
• This will help us to model the chessboard not as a 2-D array, but as a set of rows,
columns and diagonals.

7.2 HAMILTONIAN CYCLE


• Hamiltonian Cycle: a cycle that contains every node exactly once
• Problem:Given a graph, does it have a Hamiltonian cycle?

Background
• NP-complete problem:
– Most difficult problems in NP (non- deterministic polynomial time)
• A decision problem D is NP-complete if it is complete for NP, meaning that:
– it is in NP
– it is NP-hard (every other problem in NP is reducible to it.)
• As they grow large, we are not able to solve them in a reasonable time (polynomial time)

Alternative Definition
• . NP Problem such as Hamiltonian Cycle :
– Cannot be solved in Poly-time
– Given a solution, easy to verify in poly-time

a b

c f
0
d e
with 3 w/o 3

3 0
with 5 w/o 5 with 5 w/o 5

8 3 5 0
with 6 w/o 6 with 6 w/o 6 with 6 w/o 6 X
0+13<15
14 8 9 3 11 5
X with 7 w/o 7 X X X X
14+7>15 9+7>15 3+7<15 11+7>14 5+7<15

15 solution
8
X
8
<
1
5
7.3 SUBSET –SUM PROBLEM.

• Problem: Given n positive integers w1, ... wn and a positive integer S. Find all subsets
of w1, ... wn that sum to S.
• Example:
n=3, S=6, and w1=2, w2=4, w3=6
• Solutions:
{2,4} and {6}
• We will assume a binary state space tree.
• The nodes at depth 1 are for including (yes, no) item 1, the nodes at depth 2 are for
item 2, etc.
• The left branch includes wi, and the right branch excludes wi.
• The nodes contain the sum of the weights included so far

Sum of subset Problem: State SpaceTree for 3 items

A Depth First Search solution


• Problems can be solved using depth first search of the (implicit) state space tree.
• Each node will save its depth and its (possibly partial) current solution
• DFS can check whether node v is a leaf.
– If it is a leaf then check if the current solution satisfies the constraints
– Code can be added to find the optimal solution
A DFS solution
• Such a DFS algorithm will be very slow.
• It does not check for every solution state (node) whether a solution has been reached, or
whether a partial solution can lead to a feasible solution
• Is there a more efficient solution?
Backtracking solution to Sum of Subsets
• Definition: We call a node nonpromising if it cannot lead to a feasible (or optimal)
solution, otherwise it is promising
• Main idea: Backtracking consists of doing a DFS of the state space tree, checking
whether each node is promising and if the node is nonpromising backtracking to the
node‘s parent
• The state space tree consisting of expanded nodes only is called the pruned state space
tree
• The following slide shows the pruned state space tree for the sum of subsets example
• There are only 15 nodes in the pruned state space tree
• The full state space tree has 31 nodes

Backtracking algorithm
void checknode (node v) {
node u

if (promising ( v ))
if (aSolutionAt( v ))
write the solution
else //expand the node
for ( each child u of v )
checknode ( u )

Checknode
• Checknode uses the functions:
– promising(v) which checks that the partial solution represented by v can lead to
the required solution
– aSolutionAt(v) which checks whether the partial solution represented by node v
solves the problem.
Sum of subsets – when is a node “promising”?
• Consider a node at depth i
• weightSoFar = weight of node, i.e., sum of numbers included in partial solution node
represents
• totalPossibleLeft = weight of the remaining items i+1 to n (for a node at depth i)
• A node at depth i is non-promising
if (weightSoFar + totalPossibleLeft < S )
or (weightSoFar + w[i+1] > S )
• To be able to use this ―rpomising function‖ the wi must be sorted in non-decreasing order

sumOfSubsets ( i, weightSoFar, totalPossibleLeft )


1) if (promising ( i )) //may lead to solution
2) then if ( weightSoFar == S )
3) then print include[ 1 ] to include[ i ] //found solution
4) else //expand the node when weightSoFar < S
5) include [ i + 1 ] = "yes‖ //try including
6) sumOfSubsets ( i + 1,
weightSoFar + w[i + 1],
totalPossibleLeft - w[i + 1] )
7) include [ i + 1 ] = "no‖ //try excluding
8) sumOfSubsets ( i + 1, weightSoFar ,
totalPossibleLeft - w[i + 1] )
boolean promising (i )
1) return ( weightSoFar + totalPossibleLeft ≥ S) &&
( weightSoFar == S || weightSoFar + w[i + 1] ≤ S )
Prints all solutions!
n
Initial call sum Of Subsets (0, 0, )
∑ w i
i=1
7.4 Branch and Bound Searching Strategies

Feasible Solution vs. Optimal Solution


• DFS, BFS, hill climbing and best-first search can be used to solve some searching
problem for searching a feasible solution.
• However, they cannot be used to solve the optimization problems for searching an (the)
optimal solution.

The Branch-and-bound strategy


• This strategy can be used to solve optimization problems without an exhaustive search in
the average case.
• 2 mechanisms:
– A mechanism to generate branches when searching the solution space
– A mechanism to generate a bound so that many braches can be terminated
• It is efficient in the average case because many branches can be terminated very early.
• Although it is usually very efficient, a very large tree may be generated in the worst case.
• Many NP-hard problem can be solved by B&B efficiently in the average case; however,
the worst case time complexity is still exponential.

Bounding
• A bound on a node is a guarantee that any solution obtained from expanding the node
will be:
– Greater than some number (lower bound)
– Or less than some number (upper bound)
• If we are looking for a minimal optimal, as we are in weighted graph coloring, then we
need a lower bound
– For example, if the best solution we have found so far has a cost of 12 and the
lower bound on a node is 15 then there is no point in expanding the node
• The node cannot lead to anything better than a 15
• We can compute a lower bound for weighted graph color in the following way:
– The actual cost of getting to the node
– Plus a bound on the future cost
• Min weight color * number of nodes still to color
– That is, the future cost cannot be any better than this
• Recall that we could either perform a depth-first or a breadth-first search
– Without bounding, it didn‘t matter which one we used because we had to expand
the entire tree to find the optimal solution
– Does it matter with bounding?
• Hint: think about when you can prune via bounding
• We prune (via bounding) when:
(currentBestSolutionCost <= nodeBound)
• This tells us that we get more pruning if:
– The currentBestSolution is low
– And the nodeBound is high
• So we want to find a low solution quickly and we want the highest possible lower bound

You might also like