
RV Educational Institutions®
RV Institute of Technology and Management
(Affiliated to VTU, Belagavi)

JP Nagar 8th Phase, Bengaluru - 560076
Department of Information Science and Engineering

Course Name: Analysis and Design of Algorithms


Course Code: BCS401
IV Semester
2022 Scheme

Prepared By:

Dr. Niharika P. Kumar

Dr. Shruthi P.


Module 2
Divide and Conquer
2.1 Divide And Conquer Algorithm
In this approach, we solve a problem recursively by applying three steps, as shown in Fig 2.1.
1. DIVIDE - break the problem into several sub-problems of smaller size.
2. CONQUER - solve the sub-problems recursively.
3. COMBINE - combine these solutions to create a solution to the original problem.

Fig 2.1: The general divide & conquer plan

CONTROL ABSTRACTION FOR DIVIDE AND CONQUER ALGORITHM


Algorithm D and C (P)
{
    if small(P)
        then return S(P)
    else
    {
        divide P into smaller instances P1, P2, ..., Pk
        apply D and C to each sub-problem
        return combine( D and C(P1), D and C(P2), ..., D and C(Pk) )
    }
}
The running time of a divide and conquer algorithm can be expressed by a recurrence of the form

T(n) = ϴ(1)                           if n <= c
T(n) = aT(n/b) + D(n) + C(n)          otherwise

where n is the input size, a is the number of sub-problems, n/b is the input size of each sub-problem, D(n) is the time to divide the problem, and C(n) is the time to combine the solutions of the sub-problems.

Advantages of the Divide & Conquer technique:

• It is a powerful tool for solving conceptually difficult problems such as the Tower of Hanoi.
• It often results in efficient algorithms.
• Divide & conquer algorithms are well suited for execution on multi-processor machines.
• It results in algorithms that use memory caches efficiently.

Limitations of the divide & conquer technique:
• Recursion is slow.
• For a very simple problem, a recursive solution may be more complicated than an iterative approach.
• Stack usage is high because function states need to be stored.
• Memory management is needed.

General divide & conquer recurrence:

An instance of size n can be divided into b instances of size n/b, with a of them needing to be solved [a >= 1, b > 1]. Assume size n is a power of b.
The recurrence for the running time T(n) is as follows:
T(n) = T(1)              for n = 1
T(n) = aT(n/b) + f(n)    for n > 1
where f(n) is a function that accounts for the time spent on dividing the problem into smaller ones and on combining their solutions. Therefore, the order of growth of T(n) depends on the values of the constants a and b and on the order of growth of the function f(n).
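
As a quick illustration, merge sort (Section 2.2) fits this recurrence with a = 2, b = 2 and f(n) proportional to n:
T(n) = 2T(n/2) + cn, which solves to T(n) of the order of n log n,
in agreement with the analysis of merge sort given later in this module.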
Application of the Divide and Conquer Approach
The following are some problems that can be solved using the divide and conquer approach:
• Detecting a counterfeit coin
• Finding the maximum and minimum of a sequence of numbers
• Strassen's matrix multiplication
• Merge sort
• Binary search


Detecting a Counterfeit Coin:

Given a two-pan fair balance and N identical-looking coins, out of which exactly one coin is lighter (or heavier), what is the minimum number of weighings required in the worst case to identify the odd coin?
Harder version: given a two-pan fair balance and N identical-looking coins out of which at most one coin may be defective, how can we identify which coin, if any, is the odd one, and also determine whether it is lighter or heavier, in the minimum number of trials in the worst case?
Approach 1: Linear method
In this method we weigh the coins two at a time and repeat, which takes (n-1) weighings in the worst case.

Approach 2: Divide and Conquer method


Algorithms (assume n is a power of 2; the counts below are for n = 16 coins)
• Left-to-right: compare coins (1,2), then (3,4), and so forth until you find the counterfeit one (<= 8 comparisons, i.e., n/2).
• Divide-and-conquer: split the coins into two sets of eight and weigh them; the lighter set is again split into two sets, and so on (4 comparisons, i.e., log2 n). The illustration is shown in Fig 2.2.

Fig 2.2: Divide and conquer method for the Counterfeit Coin problem
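
A minimal Python sketch of the divide and conquer approach, assuming exactly one coin is lighter and the number of coins is a power of two; the balance is simulated by comparing the sums of the two halves, and names such as find_lighter_coin are illustrative.

def find_lighter_coin(weights, lo=0, hi=None):
    """Return the index of the single lighter coin in weights[lo:hi].

    Assumes exactly one coin is lighter and the number of candidate coins
    is a power of two, so each weighing discards half of the candidates.
    """
    if hi is None:
        hi = len(weights)
    if hi - lo == 1:                                  # only one candidate left
        return lo
    mid = (lo + hi) // 2
    left, right = sum(weights[lo:mid]), sum(weights[mid:hi])   # one "weighing"
    if left < right:                                  # lighter coin is in the left half
        return find_lighter_coin(weights, lo, mid)
    return find_lighter_coin(weights, mid, hi)        # otherwise it is in the right half

coins = [10] * 16
coins[11] = 9                                         # the counterfeit (lighter) coin
print(find_lighter_coin(coins))                       # 11, found after log2(16) = 4 weighings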

Max-Min Problem
Problem Statement: The Max-Min problem in algorithm analysis is to find the maximum and the minimum values in an array.


Solution
• To find the maximum and minimum numbers in a given array numbers[] of size n, the following algorithms can be used. First we present the naïve method and then the divide and conquer approach.

• Naïve Method
• The naïve method is the basic way to solve the problem: the maximum and the minimum numbers are found separately, using the following straightforward algorithm.

Algorithm: Max-Min-Element (numbers[])


max := numbers[1]
min := numbers[1]
for i = 2 to n do
{
if numbers[i] > max then max := numbers[i] ;
if numbers[i] < min then min := numbers[i] ;
}
return (max, min) ;

Analysis
• The number of comparisons in the naïve method is 2n - 2.
• The number of comparisons can be reduced using the divide and conquer approach. The technique is as follows.
Divide and Conquer Approach
• In this approach, the array is divided into two halves, and the maximum and minimum of each half are found recursively. Then return the maximum of the two maxima and the minimum of the two minima.
• In this problem, the number of elements in the sub-array is y - x + 1, where y is greater than or equal to x.
• Max-Min(x, y) will return the maximum and minimum values of the array numbers[x...y].
Algorithm: Max-Min(i, j, max, min)
{
    if (i == j) then max := min := a[i];      // small(P): one element
    else if (i == j-1) then                   // small(P): two elements
    {
        if (a[i] < a[j]) then
        {
            max := a[j]; min := a[i];
        }
        else
        {
            max := a[i]; min := a[j];
        }
    }
    else
    {
        mid := ⌊(i+j)/2⌋;
        Max-Min(i, mid, max, min);
        Max-Min(mid+1, j, max1, min1);
        if (max < max1) then max := max1;
        if (min > min1) then min := min1;
    }
}

Let P = (n, a[i], ..., a[j]) denote an arbitrary instance of the problem, where n is the number of elements in the list a[i], ..., a[j]. Let small(P) be true when n <= 2.

• If n = 1, the maximum and the minimum are both a[i].
• If n = 2, the problem can be solved by making one comparison.
• If the list has more than 2 elements, P has to be divided into the smaller instances
P1 = (⌊n/2⌋, a[i], ..., a[⌊n/2⌋]) and
P2 = (n - ⌊n/2⌋, a[⌊n/2⌋ + 1], ..., a[n]).
After this, each instance can be solved by recursively invoking the same algorithm.

Example
a: [1] [2] [3] [4] [5] [6] [7] [8] [9]
22 13 -5 -8 15 60 17 31 47
A good way of keeping track of the recursive calls is to build a tree, adding a node each time a new call is made. On the array a[] above, the tree shown in Fig 2.3 is produced.


Fig 2.3: Tree produced by algorithm Max-Min


Analysis
• Let T(n) be the number of comparisons made by Max-Min().
• The recurrence relation for T(n) can be written as
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2   for n > 2
T(2) = 1
T(1) = 0
• Let us assume that n is a power of 2. Hence, n = 2^k, where k is the height of the recursion tree.

When n = 2^k for some positive integer k,
T(n) = 2T(n/2) + 2
     = 2(2T(n/4) + 2) + 2
     = 4T(n/4) + 4 + 2
     .
     .
     .
     = 2^(k-1) T(2) + ∑(1 <= i <= k-1) 2^i
     = 2^(k-1) + 2^k - 2
     = 3n/2 - 2 = O(n)
Note that 3n/2 - 2 is the best-, average- and worst-case number of comparisons when n is a power of two.

• Compared with the naïve method, the divide and conquer approach makes fewer comparisons. However, in asymptotic notation both approaches are O(n).
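
A small runnable Python sketch of the divide and conquer Max-Min that also returns the number of comparisons, so the 3n/2 - 2 count can be checked; the function and variable names are illustrative.

def max_min(a, lo, hi):
    """Return (maximum, minimum, comparisons) for the sub-array a[lo..hi]."""
    if lo == hi:                                      # one element: no comparison
        return a[lo], a[lo], 0
    if hi == lo + 1:                                  # two elements: one comparison
        return (a[hi], a[lo], 1) if a[lo] < a[hi] else (a[lo], a[hi], 1)
    mid = (lo + hi) // 2
    max1, min1, c1 = max_min(a, lo, mid)
    max2, min2, c2 = max_min(a, mid + 1, hi)
    # two more comparisons to combine the results of the two halves
    return max(max1, max2), min(min1, min2), c1 + c2 + 2

a = [22, 13, -5, -8, 15, 60, 17, 31]                  # n = 8, a power of two
print(max_min(a, 0, len(a) - 1))                      # (60, -8, 10); 3*8/2 - 2 = 10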

2.2 Merge sort and its complexity.


Definition:
Merge sort is a sort algorithm that splits the items to be sorted into two groups,
recursively sorts each group, and merges them into a final sorted sequence.

Features:
• Is a comparison based algorithm
• Is a stable algorithm
• Is a perfect example of divide & conquer algorithm design strategy


• It was invented by John von Neumann


Algorithm:


ALGORITHM Mergesort ( A[0… n-1] )

//sorts array A by recursive mergesort
//i/p: array A
//o/p: sorted array A in ascending order
if n > 1
    copy A[0… (n/2 - 1)] to B[0… (n/2 - 1)]
    copy A[n/2… n-1] to C[0… (n/2 - 1)]
    Mergesort ( B[0… (n/2 - 1)] )
    Mergesort ( C[0… (n/2 - 1)] )
    Merge ( B, C, A )

ALGORITHM Merge ( B[0… p-1], C[0… q-1], A[0… p+q-1] )

//merges two sorted arrays into one sorted array
//i/p: arrays B, C, both sorted
//o/p: sorted array A of elements from B & C
i <-- 0
j <-- 0
k <-- 0
while i < p and j < q do
    if B[i] <= C[j]
        A[k] <-- B[i]
        i <-- i + 1
    else
        A[k] <-- C[j]
        j <-- j + 1
    k <-- k + 1
if i == p
    copy C[ j… q-1 ] to A[ k… (p+q-1) ]
else
    copy B[ i… p-1 ] to A[ k… (p+q-1) ]
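
A direct Python translation of the two routines above, given as a minimal runnable sketch:

def merge_sort(a):
    """Sort list a in place by recursive mergesort."""
    n = len(a)
    if n > 1:
        b = a[:n // 2]                 # left half
        c = a[n // 2:]                 # right half
        merge_sort(b)
        merge_sort(c)
        merge(b, c, a)

def merge(b, c, a):
    """Merge sorted lists b and c into a."""
    i = j = k = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    if i == len(b):
        a[k:] = c[j:]                  # copy the remaining elements of c
    else:
        a[k:] = b[i:]                  # copy the remaining elements of b

nums = [6, 3, 7, 8, 2, 4, 5, 1]
merge_sort(nums)
print(nums)                            # [1, 2, 3, 4, 5, 6, 7, 8]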

Example:
Apply merge sort for the following list of elements: 6, 3, 7, 8, 2, 4, 5, 1
Solution: Merge sort illustration is shown in Fig 2.4.


Fig 2.4: Merge Sort illustration


Analysis:
• Input size: array size, n
• Basic operation: key comparison
• Best, worst and average cases exist.
Worst case: during merging, neither of the two arrays becomes empty before the other one contains just one element.
• Let T(n) denote the number of times the basic operation is executed. Then
T(n) = 2T(n/2) + Cmerge(n) for n > 1
T(1) = 0
where Cmerge(n) is the number of key comparisons made during the merging stage. In the worst case Cmerge(n) = n - 1, which gives the recurrence
Cworst(n) = 2 Cworst(n/2) + n - 1 for n > 1
Cworst(1) = 0
To keep the arithmetic simple, we solve the slightly rounded-up recurrence:
(1) T(1) = 1
(2) T(N) = 2T(N/2) + N
Next we will solve this recurrence relation. First we divide (2) by N:
(3) T(N) / N = T(N/2) / (N/2) + 1
N is a power of two, so we can write
(4) T(N/2) / (N/2) = T(N/4) / (N/4) +1


(5) T(N/4) / (N/4) = T(N/8) / (N/8) +1


(6) T(N/8) / (N/8) = T(N/16) / (N/16) +1
(7) ……
(8) T(2) / 2 = T(1) / 1 + 1

Now we add equations (3) through (8) : the sum of their left-hand sides
will be equal to the sum of their right-hand sides:
T(N) / N + T(N/2) / (N/2) + T(N/4) / (N/4) + … + T(2)/2 =
T(N/2) / (N/2) + T(N/4) / (N/4) + ….+ T(2) / 2 + T(1) / 1 + LogN
(LogN is the sum of 1s in the right-hand sides)

After cancelling the equal terms, we get


(9) T(N)/N = T(1)/1 + LogN
T(1) is 1, hence we obtain
(10) T(N) = N + NlogN = O(NlogN)
Hence the complexity of the MergeSort algorithm is O(NlogN).

Advantages:
• The number of comparisons performed is nearly optimal.
• Merge sort never degrades to O(n^2).
• It can be applied to files of any size.

Limitations:
• Uses O(n) additional memory.

2.3 Quick Sort (Also known as “partition-exchange sort”)


Definition:
Quick sort is a well-known sorting algorithm based on the divide & conquer approach. The steps are:
1. Pick an element, called the pivot, from the list.
2. Reorder the list so that all elements less than the pivot come before the pivot and all elements greater than the pivot come after it. After this partitioning, the pivot is in its final position. This is called the partition operation.
3. Recursively sort the sub-list of lesser elements and the sub-list of greater elements.
Features:
• Developed by C.A.R. Hoare
• Efficient algorithm
• NOT a stable sort
• Often significantly faster in practice than other comparison-based sorting algorithms

Algorithm


ALGORITHM Quicksort ( A[l…r] )

//sorts a sub-array by quick sort
//i/p: a sub-array A[l..r] of A[0..n-1], defined by its left and right indices l and r
//o/p: the sub-array A[l..r], sorted in ascending order
if l < r
    s <-- Partition ( A[l..r] )    // s is a split position
    Quicksort ( A[l..s-1] )
    Quicksort ( A[s+1..r] )

ALGORITHM Partition ( A[l..r] )

//partitions a sub-array by using its first element as the pivot
//i/p: a sub-array A[l..r] of A[0..n-1], defined by its left and right indices l and r (l < r)
//o/p: a partition of A[l..r], with the split position returned as this function's value
p <-- A[l]
i <-- l; j <-- r + 1
repeat
    repeat i <-- i + 1 until A[i] >= p    // left-to-right scan
    repeat j <-- j - 1 until A[j] <= p    // right-to-left scan
    if (i < j)                            // need to continue with the scan
        swap(A[i], A[j])
until i >= j                              // the scans have crossed
swap(A[l], A[j])                          // put the pivot in its final place
return j
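
A minimal runnable Python sketch of the same scheme (first element as the pivot, two opposite scans); it illustrates the idea rather than being a literal transcription of the pseudocode:

def quicksort(a, l, r):
    """Sort a[l..r] in place by quick sort."""
    if l < r:
        s = partition(a, l, r)        # s is the split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

def partition(a, l, r):
    """Partition a[l..r] around the pivot a[l] and return the split index."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:     # left-to-right scan
            i += 1
        j -= 1
        while a[j] > p:               # right-to-left scan (stops at a[l] = p at the latest)
            j -= 1
        if i >= j:                    # the scans have crossed
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]           # put the pivot in its final place
    return j

nums = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(nums, 0, len(nums) - 1)
print(nums)                           # [1, 2, 3, 4, 5, 7, 8, 9]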

Example: Sort by quick sort the following list: 5, 3, 1, 9, 8, 2, 4, 7, show recursion tree.
Illustration of quick sort is shown in Fig 2.5.


Fig 2.5: Quick Sort Illustration

Recurrence relation based on the code

1. The partitioning scans stop when the indices cross, hence partitioning takes about N operations.
2. A swap is one operation and is disregarded.
3. There are two recursive calls:
a. Best case: each call is on half the array, hence the time is 2T(N/2).
b. Worst case: one part is empty and the other has N-1 elements, hence the time is T(N-1).

In general, if the left partition gets i elements,
T(N) = T(i) + T(N - i - 1) + cN

The time to sort the file is equal to


o the time to sort the left partition with i elements, plus


o the time to sort the right partition with N-i-1 elements, plus
o the time to build the partitions

Worst case analysis:


The pivot is the smallest element
T(N) = T(N-1) + cN, N > 1
Telescoping:
T(N-1) = T(N-2) + c(N-1)
T(N-2) = T(N-3) + c(N-2)
T(N-3) = T(N-4) + c(N-3)
T(2) = T(1) + c.2

Add all equations:


T(N) + T(N-1) + T(N-2) + … + T(2) =
= T(N-1) + T(N-2) + … + T(2) + T(1) + c(N) + c(N-1) + c(N-2) + … + c·2
T(N) = T(1) + c·(2 + 3 + … + N) = T(1) + c(N(N+1)/2 - 1) = O(N^2)

Best-case analysis:
The pivot is in the middle
T(N) = 2T(N/2) + cN
Divide by N:
T(N) / N = T(N/2) / (N/2) + c
Telescoping:
T(N/2) / (N/2) = T(N/4) / (N/4) + c
T(N/4) / (N/4) = T(N/8) / (N/8) + c
……
T(2) / 2 = T(1) / (1) + c

Add all equations:


T(N) / N + T(N/2) / (N/2) + T(N/4) / (N/4) + … + T(2) / 2 =
= T(N/2) / (N/2) + T(N/4) / (N/4) + … + T(1) / (1) + c·logN

After cancelling the equal terms: T(N)/N = T(1)/1 + c·logN

T(N) = N + cN·logN = O(NlogN)

Average case analysis

Similar computations result in T(N) = O(NlogN).
The average value of T(i) is 1/N times the sum of T(0) through T(N-1):
(1/N) ∑ T(j), j = 0 … N-1
T(N) = (2/N) (∑ T(j)) + cN
Multiply by N:


N·T(N) = 2(∑ T(j)) + cN^2

To remove the summation, we rewrite the equation for N-1:


(N-1)·T(N-1) = 2(∑ T(j)) + c(N-1)^2,  j = 0 … N-2

and subtract:
NT(N) - (N-1)T(N-1) = 2T(N-1) + 2cN -c

Prepare for telescoping. Rearrange terms, drop the insignificant c:


NT(N) = (N+1)T(N-1) + 2cN

Divide by N(N+1):
T(N)/(N+1) = T(N-1)/N + 2c/(N+1)

Telescope:
T(N)/(N+1) = T(N-1)/N + 2c/(N+1)
T(N-1)/(N) = T(N-2)/(N-1)+ 2c/(N)
T(N-2)/(N-1) = T(N-3)/(N-2) + 2c/(N-1)
….
T(2)/3 = T(1)/2 + 2c /3

Add the equations and cross equal terms:


T(N)/(N+1) = T(1)/2 + 2c ∑ (1/j), j = 3 to N+1
The sum ∑ (1/j), j = 3 to N+1, is approximately logN.
Thus T(N) = O(NlogN)

2.4 Binary search


Binary search can be performed on a sorted array. In this approach, the index of an element x is determined, provided the element belongs to the list. If the array is unsorted, linear search has to be used instead.

Solution
In this algorithm, we want to find whether element x belongs to a set of numbers stored in an array numbers[], where l and r represent the left and right indices of the sub-array in which the search is performed.

Algorithm: Binary-Search(numbers[], x, l, r)
if l = r then
return l
else


m := ⌊(l + r) / 2⌋
if x ≤ numbers[m] then
return Binary-Search(numbers[], x, l, m)
else
return Binary-Search(numbers[], x, m+1, r)
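
A recursive Python sketch mirroring the pseudocode above; the caller checks whether the returned position actually holds x (the names used here are illustrative):

def binary_search(numbers, x, l, r):
    """Return the index in the sorted list numbers[l..r] where x should be."""
    if l == r:
        return l
    m = (l + r) // 2
    if x <= numbers[m]:
        return binary_search(numbers, x, l, m)
    return binary_search(numbers, x, m + 1, r)

data = [7, 13, 22, 31, 47, 55, 63, 80]
i = binary_search(data, 63, 0, len(data) - 1)
print(i, data[i] == 63)               # 6 True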

Analysis
Linear search runs in O(n) time, whereas binary search produces the result in O(log n) time.
Let T(n) be the number of comparisons in the worst case in an array of n elements. Hence,

T(n) = T(n/2) + 1 for n > 1, and T(1) = 1.

Solving this recurrence relation gives T(n) = O(log n).

Therefore, binary search uses O(log n) time.
Example
In this example, we are going to search element 63.

Best case - O(1) comparisons

In the best case, the item X is the middle element of the array A. A constant number of comparisons (actually just 1) is required.


Worst case - O (log n) comparisons


In the worst case, the item X does not exist in the array A at all. Through each recursion or iteration of
Binary Search, the size of the admissible range is halved. This halving can be done ceiling(lg n ) times.
Thus, ceiling(lg n ) comparisons are required.
Average case - O (log n) comparisons
To find the average case, take the sum over all elements of the product of number of comparisons
required to find each element and the probability of searching for that element. To simplify the
analysis, assume that no item which is not in A will be searched for, and that the probabilities of
searching for each element are uniform.

The difference between O(log(N)) and O(N) is extremely significant when N is large: for any practical
problem it is crucial that we avoid O(N) searches. For example, suppose your array contains 2 billion (2
* 10**9) values. Linear search would involve about a billion comparisons; binary search would require
only 32 comparisons!

The space requirements for the recursive and iterative versions of binary search are different. Iterative
Binary Search requires only a constant amount of space, while Recursive Binary Search requires space
proportional to the number of comparisons to maintain the recursion stack.

Applications of binary search:


• Number guessing game
• Word lists / dictionary search, etc.
Advantages:
• Efficient on very large lists
• Can be implemented iteratively or recursively
Limitations:
• Interacts poorly with the memory hierarchy
• Requires the given list to be sorted
• Needs random access to the elements, so it requires an array rather than a linked list

2.5 Matrix multiplication


We first present the general method of matrix multiplication and then discuss Strassen's matrix multiplication algorithm.
Problem Statement
Let us consider two matrices X and Y. We want to calculate the resultant matrix Z by
multiplying X and Y.


Naïve Method
First, we discuss the naïve method and its complexity. Here, we are calculating Z = X × Y. Using the naïve method, two matrices X and Y can be multiplied if their orders are p × q and q × r. The algorithm is as follows.

Algorithm: Matrix-Multiplication (X, Y, Z)


for i = 1 to p do
    for j = 1 to r do
        Z[i,j] := 0
        for k = 1 to q do
            Z[i,j] := Z[i,j] + X[i,k] × Y[k,j]
Complexity
Here, we assume that integer operations take O(1) time. There are three nested for loops in this algorithm. Hence, for n × n matrices the algorithm takes O(n^3) time to execute.
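
A small runnable Python version of the naïve method, using plain nested loops and no external libraries:

def matmul(X, Y):
    """Multiply a p x q matrix X by a q x r matrix Y using the naive method."""
    p, q, r = len(X), len(Y), len(Y[0])
    Z = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                Z[i][j] += X[i][k] * Y[k][j]
    return Z

X = [[1, 2], [3, 4]]
Y = [[5, 6], [7, 8]]
print(matmul(X, Y))                   # [[19, 22], [43, 50]]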

Strassen’s Matrix Multiplication Algorithm


Description :
Strassen’s algorithm is used for matrix multiplication. It is asymptotically faster than the standard
matrix multiplication algorithm.

ALGORITHM using Divide & Conquer method:


Let A & B be two square matrices.
C= A * B
We have, partitioning each matrix into four (n/2) × (n/2) blocks,

C00 = M1 + M4 - M5 + M7        C01 = M3 + M5
C10 = M2 + M4                  C11 = M1 + M3 - M2 + M6

where:
M1 = (A00 + A11) * (B00 + B11)
M2 = (A10 + A11) * B00
M3 = A00 * (B01 – B11)
M4 = A11 * (B10 – B00)
M5 = (A00 + A01) * B11
M6 = (A10 – A00) * (B00 + B01)
M7 = (A01 – A11) * (B10 + B11)
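
A minimal Python sketch for the 2 × 2 case with scalar entries, using the seven products above and the combining formulas; a full implementation would apply the same computation recursively to (n/2) × (n/2) blocks:

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 multiplications (Strassen's scheme)."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 + m3 - m2 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]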

Analysis:
• Input size: n – order of square matrix.
• Basic operation:


o Multiplications: 7
o Additions and subtractions: 18
• No best, worst, average case
• Let M(n) be the number of multiplication’s made by the algorithm, Therefore we have:
M (n) = 7 M(n/2) for n > 1
M (1) = 1
Assume n = 2k
M (2k) = 7 M(2k-1)
= 7 [7 M(2k-2)]
= 72 M(2k-2)

= 7i M(2k-i)
When i=k
= 7k M(2k-k)
= 7k

2.6 Decrease & Conquer


Description:
Decrease & conquer is a general algorithm design strategy based on exploiting the relationship
between a solution to a given instance of a problem and a solution to a smaller instance of the same
problem. The exploitation can be either top-down (recursive) or bottom-up (non-recursive).
The major variations of decrease and conquer are:
1. Decrease by a constant (usually by 1):
a. insertion sort
b. graph traversal algorithms (DFS and BFS)
c. topological sorting
d. algorithms for generating permutations, subsets
2. Decrease by a constant factor (usually by half)
a. binary search and bisection method
3. Variable size decrease
a. Euclid’s algorithm

Fig 2.6 and Fig 2.7 illustrate the major variations of the decrease & conquer approach.
Decrease by a constant (usually by one) is shown in Fig 2.6.


Fig 2.6: Decrease by a Constant

Decrease by a constant factor (usually by half) is shown in Fig 2.7.

Fig 2.7 : Decrease by a constant factor


2.7 Depth-first search (DFS) and Breadth-first search (BFS)


DFS and BFS are two graph traversal algorithms that follow the decrease and conquer approach (the decrease-by-one variation) to traverse the graph.
Some useful definitions:
• Tree edges: edges used by the DFS traversal to reach previously unvisited vertices
• Back edges: edges connecting vertices to previously visited vertices other than their immediate predecessors in the traversal
• Cross edges: edges that connect vertices to previously visited vertices other than their immediate predecessors or ancestors (they connect siblings)
• DAG: directed acyclic graph

Depth-first search (DFS)


Description:
• DFS starts visiting the vertices of a graph at an arbitrary vertex, marking it as visited.
• It visits the graph's vertices by always moving away from the last visited vertex to an unvisited one, and backtracks if no adjacent unvisited vertex is available.
• It is a recursive algorithm that implicitly uses a stack:
• a vertex is pushed onto the stack when it is reached for the first time;
• a vertex is popped off the stack when it becomes a dead end, i.e., when there is no adjacent unvisited vertex.
• It "redraws" the graph in a tree-like fashion (with tree edges and back edges for an undirected graph).

Algorithm:
ALGORITHM DFS (G)
//implements DFS traversal of a given graph
//i/p: Graph G = { V, E }
//o/p: DFS tree
Mark each vertex in V with 0 as a mark of being "unvisited"
count <-- 0
for each vertex v in V do
    if v is marked with 0
        dfs(v)

dfs(v)
count <-- count + 1
mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0
        dfs(w)
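
A compact Python sketch of the same traversal on an adjacency-list graph, recording the order in which the vertices are first visited; the graph representation and names are illustrative:

def dfs_all(graph):
    """DFS over every component of graph (dict: vertex -> list of neighbours).

    Returns a dict mapping each vertex to the count at which it was first visited.
    """
    order = {}
    count = 0

    def dfs(v):
        nonlocal count
        count += 1
        order[v] = count
        for w in graph[v]:
            if w not in order:         # w is still "unvisited"
                dfs(w)

    for v in graph:
        if v not in order:
            dfs(v)
    return order

g = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a'], 'd': ['b'], 'e': []}
print(dfs_all(g))                      # {'a': 1, 'b': 2, 'd': 3, 'c': 4, 'e': 5}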


The DFS tree is shown in Fig 2.8 below.


Fig 2.8: DFS tree


Breadth-first search (BFS)


Description:
• BFS starts visiting the vertices of a graph at an arbitrary vertex, marking it as visited.
• It visits the graph's vertices by moving across to all the neighbours of the last visited vertex.
• Instead of a stack, BFS uses a queue.
• It is similar to a level-by-level tree traversal.
• It "redraws" the graph in a tree-like fashion (with tree edges and cross edges for an undirected graph).

Algorithm:
ALGORITHM BFS (G)
//implements BFS traversal of a given graph
//i/p: Graph G = { V, E }
//o/p: BFS tree/forest

Mark each vertex in V with 0 as a mark of being "unvisited"
count <-- 0
for each vertex v in V do
    if v is marked with 0
        bfs(v)

bfs(v)
count <-- count + 1
mark v with count and initialize a queue with v
while the queue is NOT empty do
    for each vertex w in V adjacent to the front vertex v do
        if w is marked with 0
            count <-- count + 1
            mark w with count
            add w to the queue
    remove vertex v from the front of the queue
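
A matching Python sketch of BFS using a queue (collections.deque); as before, the graph representation and names are illustrative:

from collections import deque

def bfs_all(graph):
    """BFS over every component of graph (dict: vertex -> list of neighbours).

    Returns a dict mapping each vertex to the count at which it was first visited.
    """
    order = {}
    count = 0
    for s in graph:
        if s in order:
            continue
        count += 1
        order[s] = count
        queue = deque([s])
        while queue:
            v = queue.popleft()            # vertex at the front of the queue
            for w in graph[v]:
                if w not in order:         # w is still "unvisited"
                    count += 1
                    order[w] = count
                    queue.append(w)
    return order

g = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a'], 'd': ['b'], 'e': []}
print(bfs_all(g))                          # {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}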


2.9 Topological Sorting


Description:
Topological sorting is a sorting method to list the vertices of the graph in such an order that for
every edge in the graph, the vertex where the edge starts is listed before the vertex where the edge
ends.

NOTE:
There is no solution to the topological sorting problem if there is a cycle in the digraph
[the graph MUST be a DAG].
The topological sorting problem can be solved by using
1. the DFS method
2. the source removal method


Topological Sort Algorithms: DFS based algorithm


Topological-Sort(G)
{
1. Call dfsAllVertices on G to compute f[v] for each vertex v
2. If G contains a back edge (v, w) (i.e., if f[w] > f[v]), report an error;
3. else, as each vertex is finished, prepend it to a list; // or push it on a stack
4. Return the list; // list is a valid topological sort
}

• Running time is O(V+E), which is the running time for DFS.
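
A short Python sketch of the DFS-based method: each vertex is prepended to the output list as its DFS call finishes. The input is assumed to be a DAG (back-edge detection is omitted for brevity), and the example graph is illustrative:

def topological_sort_dfs(graph):
    """Topological order of a DAG given as dict: vertex -> list of successors."""
    visited = set()
    order = []                        # built back to front

    def dfs(v):
        visited.add(v)
        for w in graph[v]:
            if w not in visited:
                dfs(w)
        order.insert(0, v)            # prepend v when it is finished

    for v in graph:
        if v not in visited:
            dfs(v)
    return order

dag = {'C1': ['C3'], 'C2': ['C3'], 'C3': ['C4', 'C5'], 'C4': ['C5'], 'C5': []}
print(topological_sort_dfs(dag))      # ['C2', 'C1', 'C3', 'C4', 'C5']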

Topological Sort Algorithms: Source Removal Algorithm


• The Source Removal Topological sort algorithm is:
– Pick a source u [vertex with in-degree zero], output it.
– Remove u and all edges out of u.
– Repeat until graph is empty.

int topologicalOrderTraversal( ){
int numVisitedVertices = 0;
while(there are more vertices to be visited){
if(there is no vertex with in-degree 0)
break;
else{
select a vertex v that has in-degree 0;
visit v;
numVisitedVertices++;
delete v and all its emanating edges;
}
}
return numVisitedVertices;
}
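
A runnable Python sketch of the source removal method using in-degree counts and a queue of current sources (often called Kahn's algorithm); the example graph is illustrative:

from collections import deque

def topological_sort_source_removal(graph):
    """Topological order of a DAG, or None if the graph contains a cycle."""
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    queue = deque(v for v in graph if indegree[v] == 0)   # current sources
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)                                   # visit the source
        for w in graph[v]:                                # delete its outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    return order if len(order) == len(graph) else None    # None means a cycle was found

dag = {'C1': ['C3'], 'C2': ['C3'], 'C3': ['C4', 'C5'], 'C4': ['C5'], 'C5': []}
print(topological_sort_source_removal(dag))               # ['C1', 'C2', 'C3', 'C4', 'C5']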

*****
