UNIT V
Divide and Conquer Strategy - Greedy Algorithm - Dynamic Programming - Backtracking Strategy - List Searches using Linear Search - Binary Search - Fibonacci Search - Sorting Techniques - Insertion sort - Heap sort - Bubble sort - Quick sort - Merge sort - Analysis of sorting techniques.
5.1 DIVIDE AND CONQUER STRATEGY
Divide and conquer is an algorithmic paradigm. A typical divide and conquer algorithm solves a problem using the following three steps:
1. Divide: This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a part of the original problem. This step generally takes a recursive approach, dividing the problem until no sub-problem is further divisible. At this stage, sub-problems become atomic in nature but still represent some part of the actual problem.
2. Conquer: This step receives the many smaller sub-problems to be solved. Generally, at this level, the sub-problems are small enough to be considered ‘solved’ on their own.
3. Merge/Combine: When the smaller sub-problems are solved, this stage combines them recursively until they form the solution of the original problem. This algorithmic approach works recursively, and the conquer and merge steps work so closely together that they appear as one.
Some examples of algorithms (and analysis tools) based on the divide and conquer strategy are:
Merge Sort
Quick Sort
Binary Search
Master Theorem
Fibonacci Search
Strassen’s Matrix multiplication
Karatsuba Algorithm
The complexity of multiplying two matrices using the naïve method is O(n³), whereas the divide and conquer approach (Strassen's algorithm) reduces it to about O(n^2.81). The divide and conquer approach also simplifies other problems, such as the Tower of Hanoi.
This approach is suitable for multiprocessing systems.
It makes efficient use of memory caches.
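To make the divide, conquer and combine steps concrete, here is a small illustrative C sketch (not part of the original notes) that finds the maximum of an array by the divide and conquer strategy; the array contents are only sample data.

#include <stdio.h>

/* Divide and conquer: maximum of K[low..high].
   Divide: split the range in half.
   Conquer: solve each half recursively (a single element is atomic).
   Combine: the larger of the two half-maxima is the answer. */
int findMax(int K[], int low, int high)
{
    if (low == high)                              /* atomic sub-problem */
        return K[low];
    int mid = (low + high) / 2;
    int leftMax  = findMax(K, low, mid);          /* conquer left half  */
    int rightMax = findMax(K, mid + 1, high);     /* conquer right half */
    return (leftMax > rightMax) ? leftMax : rightMax;   /* combine */
}

int main(void)
{
    int K[] = { 42, 23, 74, 11, 65, 58, 94, 36, 99, 87 };
    printf("Maximum = %d\n", findMax(K, 0, 9));   /* prints 99 */
    return 0;
}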
5.2 GREEDY ALGORITHM
A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment, without worrying about the future results that choice may bring. In other words, the locally best choices are expected to lead to a globally best result. This algorithm is not the best option for every problem and may produce wrong (sub-optimal) results in some cases, because it never goes back to reverse a decision already made.
A greedy algorithm is designed to achieve an optimum solution for a given problem. In the greedy approach, decisions are made from the given solution domain: being greedy, the closest solution that seems to provide an optimum is chosen. Greedy algorithms try to find a localized optimum solution, which may eventually lead to a globally optimized solution; however, in general, greedy algorithms do not guarantee globally optimized solutions. This algorithm works in a top-down approach.
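As a small illustration (not from the original notes), the C sketch below makes change for an amount using the greedy rule "always take the largest coin that still fits". The denominations are assumed; for such canonical coin systems the locally best choice does give a globally optimal answer, but for arbitrary denominations it may not.

#include <stdio.h>

/* Greedy coin change: at every step pick the largest coin that still fits. */
int main(void)
{
    int coins[] = { 10, 5, 2, 1 };        /* assumed denominations, largest first */
    int n = 4, amount = 48;

    for (int i = 0; i < n; i++) {
        while (amount >= coins[i]) {      /* take the biggest coin possible */
            printf("%d ", coins[i]);
            amount -= coins[i];
        }
    }
    printf("\n");                         /* prints: 10 10 10 10 5 2 1 */
    return 0;
}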
5.3 DYNAMIC PROGRAMMING
The dynamic programming approach is similar to divide and conquer in breaking the problem down into smaller and smaller possible sub-problems. But, unlike divide and conquer, these sub-problems are not solved independently. Instead, the results of the smaller sub-problems are remembered (stored) and reused for similar or overlapping sub-problems.
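A minimal C sketch of this idea (assuming the classic Fibonacci-number example, which is not in the original notes): the overlapping sub-problems fib(n-1) and fib(n-2) are computed once and remembered in a memo table instead of being recomputed as plain divide and conquer would do.

#include <stdio.h>

long long memo[91];            /* 0 means "not computed yet"; fib(90) fits in long long */

long long fib(int n)
{
    if (n <= 1)
        return n;
    if (memo[n] != 0)          /* overlapping sub-problem: reuse the remembered result */
        return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void)
{
    printf("fib(40) = %lld\n", fib(40));   /* 102334155, computed in linear time */
    return 0;
}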
Backtracking is another problem-solving strategy: the algorithm tries to build a path to the solution step by step, keeping checkpoints from which it can backtrack whenever no feasible solution can be reached from the current point.
Example:
Fig 5.1 Backtracking algorithm method
In Fig 5.1, green is the start point, blue are intermediate points, red are points with no feasible solution, and dark green is the end solution. The algorithm propagates forward until it reaches an end point and checks whether it is a solution; if it is, the solution is returned, otherwise the algorithm backtracks to the point one step behind and tries the next path towards a solution.
Algorithm
Backtrack(x)
    if x is not a solution
        return false
    if x is a new solution
        add x to the list of solutions
    backtrack(expand x)
Let us use backtracking to find the solution to the N-Queen problem.
In the N-Queen problem, we are given an N x N chessboard and we have to place N queens on the board in such a way that no two queens attack each other. A queen attacks another queen if the other queen lies in the same row, column or diagonal. Here we will solve the 4-Queen problem, shown in Fig 5.2.
Fig 5.2 N Queens Problem
Here is the binary output for the 4-queen problem, with 1s marking the positions where the queens are placed:
{0 , 1 , 0 , 0}
{0 , 0 , 0 , 1}
{1 , 0 , 0 , 0}
{0 , 0 , 1 , 0}
To solve the N-queens problem, we try placing a queen in the different positions of one row and check whether it clashes with the queens already placed, i.e., whether any two queens attack each other. If they do, we backtrack to the previous queen, change its position, and check for clashes again.
A state space tree is a tree representing all the possible states (solution or non-solution) of the problem, from the root as the initial state to the leaves as terminal states.
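The following C sketch (an illustration, not reproduced from the notes) applies this backtracking scheme to the 4-Queen problem; it produces exactly the placement shown in the matrix above.

#include <stdio.h>

#define N 4
int board[N][N];                        /* 1 marks a queen, 0 an empty square */

/* Is it safe to place a queen at board[row][col]?  Only the rows above need
   to be checked, because we place exactly one queen per row. */
int isSafe(int row, int col)
{
    for (int i = 0; i < row; i++) {
        if (board[i][col]) return 0;                                      /* same column    */
        if (col - (row - i) >= 0 && board[i][col - (row - i)]) return 0;  /* left diagonal  */
        if (col + (row - i) <  N && board[i][col + (row - i)]) return 0;  /* right diagonal */
    }
    return 1;
}

/* Try every column of the current row; backtrack when none is feasible. */
int solve(int row)
{
    if (row == N) return 1;             /* all queens placed: solution found */
    for (int col = 0; col < N; col++) {
        if (isSafe(row, col)) {
            board[row][col] = 1;
            if (solve(row + 1)) return 1;
            board[row][col] = 0;        /* backtrack */
        }
    }
    return 0;
}

int main(void)
{
    if (solve(0))
        for (int i = 0; i < N; i++, printf("\n"))
            for (int j = 0; j < N; j++)
                printf("%d ", board[i][j]);   /* prints the 4x4 placement shown above */
    return 0;
}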
Linear search is the simplest method of searching. In this method, the element to be found is searched sequentially in the list (hence it is also called sequential search). This method can be applied to a sorted or an unsorted list, and so it is used even when the records are not stored in order.
Algorithm :
ALGORITHM LINEARSEARCH(K, N, X)
// K is the array containing the list of data items
// N is the number of data items in the list
// X is the data item to be searched
Repeat For I = 0 to N – 1 Step 1
    If K[I] = X
    Then
        WRITE("ELEMENT IS PRESENT AT LOCATION ", I)
        RETURN
    End If
End Repeat
WRITE("ELEMENT NOT PRESENT IN THE COLLECTION")
End LINEARSEARCH
In the above algorithm, K is the list containing N data items and X is the data item to be searched in K. If the data item is found, the position where it is found is displayed; if it is not found, an appropriate message is displayed to indicate to the user that the data item is not present.
The data item X is compared with each and every element in the list K. During this comparison, if X matches a data item in K, then the position where it was found is displayed, the control comes out of the loop and the procedure comes to an end. If X does not match any of the data items in K, then finally the "element not found" message is displayed.
Example:
X → Number to be searched : 40
K: 45 56 15 76 43 92 35 40 28 65
X ≠ K[0], X ≠ K[1], X ≠ K[2], X ≠ K[3], X ≠ K[4], X ≠ K[5], X ≠ K[6]
X = K[7], I = 7 : Number found at location 7, i.e., as the 8th element
The search( ) function gets the number to be searched in the variable x as an argument and compares it with each and every element in the array K. If the number x is found in the array, then the position i where it is found is printed. If the number is not found in the entire list, then the function displays the "not found" message to the user.
The main( ) function receives the n values from the user and stores them in the array K. The user is then prompted to enter the number to be searched, which is passed to the search( ) function as an argument. The search( ) function, which receives the value x, gives the appropriate message.
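The C program referred to above is not reproduced in these notes. The sketch below shows what such search( ) and main( ) functions might look like; for brevity the list from the example is hard-coded instead of being read from the user.

#include <stdio.h>

/* Compare x with every element of K and print the position where it is found. */
void search(int K[], int n, int x)
{
    for (int i = 0; i < n; i++) {
        if (K[i] == x) {
            printf("Element %d found at location %d\n", x, i);
            return;
        }
    }
    printf("Element %d not present in the collection\n", x);
}

int main(void)
{
    int K[] = { 45, 56, 15, 76, 43, 92, 35, 40, 28, 65 };
    int x;
    printf("Enter the number to be searched: ");
    scanf("%d", &x);
    search(K, 10, x);      /* e.g. x = 40 is reported at location 7 */
    return 0;
}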
Advantages:
1. Simple and straightforward method.
2. Can be applied on both sorted and unsorted lists.
Disadvantages:
1. Inefficient when the number of data items in the list increases.
Analysis
Best Case: Ω(1)
Average Case: Θ(n)
Worst Case: O(n)
Principle of Binary Search: The data item to be searched is compared with the approximate middle entry of the (sorted) list. If it matches the middle entry, then the position is displayed. If the data item to be searched is less than the middle entry, then it is compared with the middle entry of the first half of the list, and the procedure is repeated on the first half. If the data item is greater than the middle entry, then it is compared with the middle entry of the second half of the list, and the procedure is repeated on the second half. This process continues until the desired number is found or the search interval becomes empty.
Algorithm:
ALGORITHM BINARYSEARCH(K, N, X)
// K is the array containing the list of data items
// N is the number of data items in the list
// X is the data item to be searched
Lower ← 0, Upper ← N – 1
While (Lower <= Upper)
    Mid ← (Lower + Upper) / 2
    If (X < K[Mid]) Then
        Upper ← Mid – 1
    Else If (X > K[Mid]) Then
        Lower ← Mid + 1
    Else
        Write("ELEMENT FOUND AT ", Mid)
        Quit
    End If
End While
Write("ELEMENT NOT PRESENT IN THE COLLECTION")
End BINARYSEARCH
In the Binary Search algorithm given above, K is the list containing N data items and X is the data item to be searched in K. If the data item is found, the position where it is found is printed; otherwise an "Element Not Found" message is printed to indicate to the user that the data item is not present.
Initially, Lower is assumed to be 0, to point to the first element in the list, and Upper is assumed to be N – 1, to point to the last element, because the range of the array is 0 to N – 1. The mid position of the list is calculated as the average of Lower and Upper, and X is compared with K[Mid]. If X is equal to K[Mid], then the value Mid is printed, the control comes out of the loop and the procedure comes to an end. If X is less than K[Mid], then Upper is assigned Mid – 1, to search only the first half of the list. If X is greater than K[Mid], then Lower is assigned Mid + 1, to search only the second half of the list. This process is continued until the element searched for is found or the collection becomes empty.
Example:
X → Number to be searched : 40
L → Lower, U → Upper, M → Mid

1 22 35 40 43 56 75 83 90 98
L = 0, M = (0 + 9)/2 = 4, U = 9
X < K[4], so U = 4 – 1 = 3

1 22 35 40 43 56 75 83 90 98
L = 0, M = (0 + 3)/2 = 1, U = 3
X > K[1], so L = 1 + 1 = 2

1 22 35 40 43 56 75 83 90 98
L = 2, M = (2 + 3)/2 = 2, U = 3
X > K[2], so L = 2 + 1 = 3

1 22 35 40 43 56 75 83 90 98
L = 3, M = 3, U = 3
X = K[3], so Mid = 3 : Number found at position 3
The binarysearch( ) function gets the element to be searched in the variable X. Initially, Lower is assigned 0 and Upper is assigned N – 1. The mid position is calculated, and if K[Mid] is equal to X, the mid position is displayed. If X is less than K[Mid], Upper is assigned Mid – 1 to search only the first half of the list; otherwise Lower is assigned Mid + 1 to search only the second half. This process is continued as long as Lower is less than or equal to Upper. If the element is not found even after the loop is completed, then the "not found" message is displayed to the user.
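A C sketch of the binarysearch( ) function described above (the original program is not reproduced in the notes); the sorted list from the example is hard-coded for brevity.

#include <stdio.h>

/* Binary search on a sorted array K of n elements. */
void binarysearch(int K[], int n, int x)
{
    int lower = 0, upper = n - 1;
    while (lower <= upper) {
        int mid = (lower + upper) / 2;
        if (x < K[mid])
            upper = mid - 1;          /* search only the first half  */
        else if (x > K[mid])
            lower = mid + 1;          /* search only the second half */
        else {
            printf("Element found at %d\n", mid);
            return;
        }
    }
    printf("Element not present in the collection\n");
}

int main(void)
{
    int K[] = { 1, 22, 35, 40, 43, 56, 75, 83, 90, 98 };
    binarysearch(K, 10, 40);          /* reports position 3, as in the example */
    return 0;
}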
Advantages:
1. Searches several times faster than the linear search.
2. In each iteration, it reduces the number of elements to be searched from n to n/2.
Disadvantages:
1. Binary search can be applied only on a sorted list.
Analysis of Binary Search
Best Case: O(1)
Worst Case: O(log₂ n)
Average Case: O(log₂ n)
Fibonacci Search is a comparison-based technique that uses Fibonacci numbers to search an
element in a sorted array. Fibonacci search has some similarities and differences when compared
to the binary search.
Similarities with binary search:
1. Works only on sorted arrays.
2. It is a divide and conquer algorithm.
3. Has O(log n) time complexity.
Differences from binary search:
1. Fibonacci search divides the array into unequal parts, whereas binary search always divides it into two equal halves.
2. Binary search uses a division operator to split the range; Fibonacci search uses only addition and subtraction.
3. Fibonacci search examines relatively closer elements in subsequent steps, which can be useful when the array is too large to fit in the CPU cache.
Algorithm:
1. Find the smallest Fibonacci Number greater than or equal to n. Let this number be fibM [m’th
Fibonacci Number]. Let the two Fibonacci numbers preceding it be fibMm1 [(m-1)’th
Fibonacci Number] and fibMm2 [(m-2)’th Fibonacci Number].
2. While the array has elements to be inspected:
1. Compare x with the last element of the range covered by fibMm2
2. If x matches, return index
3. Else if x is less than the element, move the three Fibonacci variables two Fibonacci down,
indicating elimination of approximately rear two-third of the remaining array.
4. Else x is greater than the element, move the three Fibonacci variables one Fibonacci down.
Reset offset to index. Together these indicate the elimination of approximately front one-
third of the remaining array.
3. Since there might be a single element remaining for comparison, check if fibMm1 is 1. If
Yes, compare x with that remaining element. If match, return index.
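A C sketch of the above steps (an illustration, not taken from the notes); the helper minimum( ) and the sample array are assumptions.

#include <stdio.h>

static int minimum(int a, int b) { return (a <= b) ? a : b; }

/* Fibonacci search on a sorted array; returns the index of x or -1. */
int fibonacciSearch(int arr[], int n, int x)
{
    int fibMm2 = 0;                    /* (m-2)'th Fibonacci number */
    int fibMm1 = 1;                    /* (m-1)'th Fibonacci number */
    int fibM   = fibMm2 + fibMm1;      /* m'th Fibonacci number     */

    while (fibM < n) {                 /* smallest Fibonacci number >= n */
        fibMm2 = fibMm1;
        fibMm1 = fibM;
        fibM   = fibMm1 + fibMm2;
    }

    int offset = -1;                   /* front of the already eliminated range */

    while (fibM > 1) {
        int i = minimum(offset + fibMm2, n - 1);   /* last index covered by fibMm2 */

        if (arr[i] < x) {              /* eliminate the front part: move two Fibonacci down */
            fibM   = fibMm1;
            fibMm1 = fibMm2;
            fibMm2 = fibM - fibMm1;
            offset = i;
        } else if (arr[i] > x) {       /* eliminate the rear part: move one Fibonacci down */
            fibM   = fibMm2;
            fibMm1 = fibMm1 - fibMm2;
            fibMm2 = fibM - fibMm1;
        } else {
            return i;                  /* match */
        }
    }

    if (fibMm1 && offset + 1 < n && arr[offset + 1] == x)   /* single element left */
        return offset + 1;
    return -1;
}

int main(void)
{
    int arr[] = { 10, 22, 35, 40, 45, 50, 80, 82, 85, 90, 100 };
    printf("Found at index %d\n", fibonacciSearch(arr, 11, 85));   /* prints 8 */
    return 0;
}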
Analysis
Worst Case: O(log n) comparisons, since the portion of the array still to be searched shrinks by a constant factor (roughly to one-third or two-thirds of its size) at every step.
5.7 Sorting
Sorting is an operation of arranging data, in some given order, such as ascending or descending
with numerical data, or alphabetically with character data.
Let A be a list of n elements A1, A2, …, An in memory. Sorting A refers to the operation of rearranging the contents of A so that they are in increasing order (numerically or lexicographically), that is, A1 ≤ A2 ≤ A3 ≤ … ≤ An.
Sorting methods can be characterized into two broad categories:
Internal Sorting
External Sorting
Internal Sorting : Internal sorting methods are the methods that can be used when the list to be
sorted is small enough so that the entire sort can be carried out in main memory.
The key principle of internal sorting is that all the data items to be sorted are retained in the
main memory and random access in this memory space can be effectively used to sort the data
items.
The various internal sorting methods are:
Bubble Sort
Selection Sort
Insertion Sort
Quick Sort
Merge Sort
Heap Sort
External Sorting : External sorting methods are the methods to be used when the list to be sorted
is large and cannot be accommodated entirely in the main memory. In this case some of the data
is present in the main memory and some is kept in auxiliary memory such as hard disk, floppy
disk, tape, etc.
Objectives involved in design of sorting algorithms.
The main objectives involved in the design of sorting algorithm are:
1. Minimum number of exchanges
2. Large volume of data block movement
This implies that the designed sorting algorithm must employ a minimum number of exchanges, and the data should be moved in large blocks, which in turn increases the efficiency of the sorting algorithm.
5.7.1 INSERTION SORT
The main idea behind insertion sort is to insert the Ith element into its correct place in the Ith pass. Suppose an array K with N elements K[0], K[1], …, K[N-1] is in memory. The insertion sort algorithm scans K from K[0] to K[N-1], inserting each element K[I] into its proper position in the previously sorted subarray K[0], K[1], …, K[I-1].
Principle: In the insertion sort algorithm, each element K[I] in the list is compared with the elements before it (K[0] to K[I-1]). If an element K[J] is found to be greater than K[I], then K[I] is inserted in the place of K[J] and the elements from K[J] to K[I-1] are shifted one position to the right. This process is repeated till all the elements are sorted.
Algorithm:
ALGORITHM INSERTIONSORT(K, N)
// K is the array containing the list of data items
// N is the number of data items in the list
Repeat For I = 1 to N – 1
    Repeat For J = 0 to I – 1
        If (K[I] < K[J]) Then
            Temp ← K[I]
            Repeat For L = I – 1 to J Step –1
                K[L + 1] ← K[L]
            End Repeat
            K[J] ← Temp
        End If
    End Repeat
End Repeat
End INSERTIONSORT
In the insertion sort algorithm, N represents the total number of elements in the array K. I initially points to the second element of the list and is incremented in every pass to point to the next element, continuing until it reaches the last element. During each pass, K[I] is compared with the elements before it; if K[I] is less than some K[J] in the sorted part of the list, then K[I] is inserted at position J. Finally, a sorted list is obtained.
For performing the insertion operation, a variable Temp is used to safely store K[I], and the elements from K[J] to K[I-1] are then shifted one position to the right.
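A C sketch of insertion sort (not from the original notes). It scans the sorted part from the right and shifts the larger elements one place ahead, which has the same effect as the algorithm given above.

#include <stdio.h>

/* Insertion sort: in the i'th pass, K[i] is lifted out into temp and the
   larger elements of the sorted part K[0..i-1] are shifted one place right
   until the correct slot for temp is found. */
void insertionSort(int K[], int n)
{
    for (int i = 1; i < n; i++) {
        int temp = K[i];
        int j = i - 1;
        while (j >= 0 && K[j] > temp) {
            K[j + 1] = K[j];      /* shift right */
            j--;
        }
        K[j + 1] = temp;          /* insert in its proper position */
    }
}

int main(void)
{
    int K[] = { 42, 23, 74, 11, 65, 58, 94, 36, 99, 87 };
    insertionSort(K, 10);
    for (int i = 0; i < 10; i++)
        printf("%d ", K[i]);      /* 11 23 36 42 58 65 74 87 94 99 */
    printf("\n");
    return 0;
}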
Example:
N = 10 → Number of elements in the list
L → Last
i=0 i =1 i=2 i=3 i=4 i=5 i=6 i=7 i=8 i=9
42 23 74 11 65 58 94 36 99 87
I=1 K[I] < K[0] Insert K[I] at 0 L=9
23 42 74 11 65 58 94 36 99 87
I=2 L=9
K[I] is greater than all elements before it. Hence No Change
23 42 74 11 65 58 94 36 99 87
I=3 K[I] < K[0] Insert K[I] at 0 L=9
11 23 42 74 65 58 94 36 99 87
I=4 L=9
K[I] < K[3] Insert K[I] at 3
11 23 42 65 74 58 94 36 99 87
I=5 L=9
K[I] < K[3] Insert K[I] at 3
11 23 42 58 65 74 94 36 99 87
I=6 L=9
K[I] is greater than all elements before it. Hence No Change
11 23 42 58 65 74 94 36 99 87
I=7 L=9
K[I] < K[2] Insert K[I] at 2
11 23 36 42 58 65 74 94 99 87
I=8 L=9
K[I] is greater than all elements before it. Hence No Change
11 23 36 42 58 65 74 94 99 87
I, L=9
K[I] < K[7] Insert K[I] at 7
Sorted List:
11 23 36 42 58 65 74 87 94 99
Advantages:
Sorts the list faster when the list has a small number of elements.
Efficient in cases where a new element has to be inserted into an already sorted list.
Disadvantages:
Very slow for large values of N.
Poor performance if the list is in almost reverse order.
5.7.2 QUICK SORT
Quick sort is one of the most widely used internal sorting algorithms. It is based on the fact that it is faster and easier to sort two small lists than one larger one. The basic strategy of quick sort is divide and conquer. Quick sort is also known as partition exchange sort.
The purpose of the quick sort is to move a data item in the correct direction just enough for
it to reach its final place in the array. The method, therefore, reduces unnecessary swaps, and
moves an item a great distance in one move.
Principle: A pivotal item near the middle of the list is chosen, and then items on either side are
moved so that the data items on one side of the pivot element are smaller than the pivot element,
whereas those on the other side are larger. The middle or the pivot element is now in its correct
position. This procedure is then applied recursively to the 2 parts of the list, on either side of the
pivot element, until the whole list is sorted.
Algorithm:
ALGORITHM QUICKSORT(K, Lower, Upper)
// K is the array containing the list of data items
// Lower is the lower bound of the array
// Upper is the upper bound of the array
If (Lower < Upper) Then
    I ← Lower
    J ← Upper
    Pivot ← K[Lower]
    While (I < J)
        While (K[I] <= Pivot and I < Upper)
            I ← I + 1
        End While
        While (K[J] > Pivot)
            J ← J – 1
        End While
        If (I < J) Then
            K[I] ↔ K[J]
        End If
    End While
    K[J] ↔ K[Lower]
    QUICKSORT(K, Lower, J – 1)
    QUICKSORT(K, J + 1, Upper)
End If
End QUICKSORT
In the quick sort algorithm, Lower points to the first element in the list and Upper points to the last element. I is made to point to Lower and J to Upper. K[Lower] is considered the pivot element (Key), and at the end of the pass the correct position of the pivot will be decided. Keep incrementing I and stop when K[I] > Key. When I stops, start decrementing J and stop when K[J] < Key. Now check if I < J; if so, swap K[I] and K[J] and continue moving I and J in the same way. When I meets J, the control comes out of the loop and K[J] and K[Lower] are swapped. Now the element at position J is in its correct position, so the list is split into two partitions: K[Lower] to K[J-1] and K[J+1] to K[Upper]. The quick sort algorithm is applied recursively on these individual lists. Finally, a sorted list is obtained.
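A C sketch of the partitioning scheme described above (illustrative, not taken from the notes): the first element is the pivot, I and J scan towards each other, and the pivot is finally swapped into position J.

#include <stdio.h>

void quicksort(int K[], int lower, int upper)
{
    if (lower >= upper)
        return;

    int pivot = K[lower];
    int i = lower, j = upper;

    while (i < j) {
        while (i < upper && K[i] <= pivot) i++;   /* stop at an element > pivot  */
        while (K[j] > pivot)               j--;   /* stop at an element <= pivot */
        if (i < j) {                              /* out-of-place pair: swap     */
            int t = K[i]; K[i] = K[j]; K[j] = t;
        }
    }
    K[lower] = K[j];                              /* put the pivot in its place */
    K[j] = pivot;

    quicksort(K, lower, j - 1);
    quicksort(K, j + 1, upper);
}

int main(void)
{
    int K[] = { 42, 23, 74, 11, 65, 58, 94, 36, 99, 87 };
    quicksort(K, 0, 9);
    for (int i = 0; i < 10; i++)
        printf("%d ", K[i]);      /* 11 23 36 42 58 65 74 87 94 99 */
    printf("\n");
    return 0;
}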
Example:
42 23 74 11 65 58 94 36 99 87
L=0 I=0 U, J=9
Initially I = L and J = U; Key = K[L] = 42 is the pivot element.
42 23 74 11 65 58 94 36 99 87
L=0 I=2 J=7 U=9
K[2] > Key hence I stops at 2. K[7] < Key hence J stops at 7.
Since I < J, swap K[2] and K[7].
42 23 36 11 65 58 94 74 99 87
L=0 J=3 I=4 U=9
K[4] > Key hence I stops at 4. K[3] < Key hence J stops at 3.
Since I > J, swap K[3] and K[0] (K[Lower]). Thus 42 goes to its correct position.
The list is partitioned into two lists as shown. The same process is applied to these lists
individually as shown.
List 1 List 2
11 23 36 42 65 58 94 74 99 87
L=0, I=1 J,U=2
(applying quicksort to list 1)
11 23 36 42 65 58 94 74 99 87
L=0, I=1, U=2, J=0. Since I > J, K[L] and K[J] get swapped, i.e., K[0] gets swapped with itself because L = J = 0.
11 23 36 42 65 58 94 74 99 87
L=4 J=5 I=6 U=9
(applying quicksort to list 2)
(after swapping 58 & 65)
11 23 36 42 58 65 94 74 99 87
L=6 I=8 U, J=9
11 23 36 42 58 65 94 74 87 99
L=6 J=8 U, I=9
11 23 36 42 58 65 87 74 94 99
L=6 U, I, J=7
Sorted List:
11 23 36 42 58 65 74 87 94 99
Advantages:
1. Faster than most other commonly used sorting algorithms.
2. It has the best average-case behaviour among the common sorting algorithms.
Disadvantages:
1. As it uses recursion, stack space consumption is high.
5.7.3 MERGE SORT
Principle: The given list is divided into two roughly equal parts called the left and the right subfiles. These subfiles are sorted recursively using the same algorithm, and the two sorted subfiles are then merged together to obtain the sorted file. Given a sequence of N elements K[0], K[1], …, K[N-1], the general idea is to imagine them split into subtables of size 1, each of which is trivially sorted; the resulting sorted sequences are then merged pairwise to produce a single sorted sequence of N elements. Thus this sorting method follows the divide and conquer strategy: the problem is divided into various subproblems, and by solving the subproblems the solution for the original problem is obtained.
Algorithm:
ALGORITHM MERGE(K, low, mid, high)
// Merges the two sorted sublists K[low..mid] and K[mid+1..high]
I ← low, J ← mid + 1, L ← low
While (I <= mid and J <= high)
    If (K[I] <= K[J]) Then
        Temp[L] ← K[I]
        I ← I + 1
        L ← L + 1
    Else
        Temp[L] ← K[J]
        J ← J + 1
        L ← L + 1
    End If
End While
If (I > mid) Then
    Copy the remaining elements K[J] to K[high] into Temp
Else
    Copy the remaining elements K[I] to K[mid] into Temp
End If
Copy Temp[low] to Temp[high] back into K
End MERGE

ALGORITHM MERGESORT(low, high)
If (low < high) Then
    mid ← (low + high)/2
    MERGESORT(low, mid)
    MERGESORT(mid + 1, high)
    MERGE(low, mid, high)
End If
End MERGESORT
The first algorithm, MERGE, can be applied on two sorted lists to merge them. Initially, the index variable I points to low and J points to mid + 1. K[I] is compared with K[J]; if K[I] is found to be less than K[J], then K[I] is stored in a temporary array and I is incremented, otherwise K[J] is stored in the temporary array and J is incremented. This comparison is continued till either I crosses mid or J crosses high. If I crosses mid first, all the elements of the first list have already been accommodated in the temporary array, and hence the remaining elements of the second list can be copied into the temporary array as they are. If J crosses high first, the remaining elements of the first list are copied as they are into the temporary array. After this process we get a single sorted list. Since this method merges 2 lists at a time, it is called 2-way merge sort.
In the MERGESORT algorithm, the given unsorted list is first split into N number of lists,
each list consisting of only 1 element. Then the MERGE algorithm is applied for first 2 lists to get
a single sorted list. Then the same thing is done on the next two lists and so on. This process is
continued till a single sorted list is obtained.
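A C sketch of the 2-way merge sort described above (illustrative, not reproduced from the notes); the unsorted list from the examples is used as sample data.

#include <stdio.h>

#define MAX 100
int K[MAX] = { 42, 23, 74, 11, 65, 58, 94, 36, 99, 87 };
int temp[MAX];

/* Merge the two sorted runs K[low..mid] and K[mid+1..high]. */
void merge(int low, int mid, int high)
{
    int i = low, j = mid + 1, l = low;

    while (i <= mid && j <= high)
        temp[l++] = (K[i] <= K[j]) ? K[i++] : K[j++];
    while (i <= mid)  temp[l++] = K[i++];   /* leftovers of the first run  */
    while (j <= high) temp[l++] = K[j++];   /* leftovers of the second run */

    for (l = low; l <= high; l++)           /* copy back into K */
        K[l] = temp[l];
}

void mergesort(int low, int high)
{
    if (low < high) {
        int mid = (low + high) / 2;
        mergesort(low, mid);
        mergesort(mid + 1, high);
        merge(low, mid, high);
    }
}

int main(void)
{
    mergesort(0, 9);
    for (int i = 0; i < 10; i++)
        printf("%d ", K[i]);                /* 11 23 36 42 58 65 74 87 94 99 */
    printf("\n");
    return 0;
}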
Example:
Let L → low, M → mid, H → high
42 23 74 11 65 58 94 36 99 87
L = 0, M = 4, H = 9
In each pass the mid value is calculated and, based on that, the list is split into two. This is done recursively, and at last N lists, each having only one element, are produced.
Now the merging operation is called on the first two lists to produce a single sorted list; the same thing is then done on the next two lists, and so on. Finally a single sorted list is obtained.
5.7.4 HEAP SORT
Heap sort is a sorting technique based on interpreting the given sequence of elements as a binary tree. For this interpretation the principle given below is used.
If a given node is in position I, then the positions of the left child and the right child can be calculated as Left (L) = 2I and Right (R) = 2I + 1.
To check whether the right child exists or not, use the condition R ≤ N. If true, the right child exists, otherwise not.
The last non-leaf node of the tree is at position N/2; after this position the tree has only leaves.
Principle: The Max heap has the greatest element in the root. Hence the element in the root node
is pushed to the last position in the array and the remaining elements are converted into a max
heap. The root node of this new max heap will be the second largest element and hence pushed to
the last but one position in the array. This process is repeated till all the elements get sorted.
HEAPSORT ALGORITHM:
FUNCTION HEAPSORT()
BEGIN
    CALL BUILDHEAP(A)
    FOR I = HEAPSIZE DOWN TO 2
    DO
        (*SWAP BETWEEN A[1] AND A[I]*)
        A[1] ↔ A[I]
        HEAPSIZE = HEAPSIZE - 1
        CALL HEAPIFY(A, 1)
    END FOR
END FUNCTION HEAPSORT

FUNCTION BUILDHEAP(A)
BEGIN
    N = HEAPSIZE
    FOR I = N/2 DOWN TO 1 STEP -1
        CALL HEAPIFY(A, I)
    END FOR
END BUILDHEAP

FUNCTION HEAPIFY(A, I)
    L = 2 * I
    R = L + 1
    IF L <= HEAPSIZE AND A[L] > A[I]
    THEN
        LARGE = L
    ELSE
        LARGE = I
    END IF
    IF R <= HEAPSIZE AND A[R] > A[LARGE]
    THEN
        LARGE = R
    END IF
    IF I <> LARGE
    THEN
        (*SWAP A[I] AND A[LARGE]*)
        A[I] ↔ A[LARGE]
        CALL HEAPIFY(A, LARGE)
    END IF
END HEAPIFY
Example:
Given a list A with 8 elements:
42 23 74 11 65 58 94 36
Phase 1: The elements are rearranged into a max heap (tree diagram not reproduced here). Max heap is constructed.
Phase 2: The root is repeatedly exchanged with the last element of the heap and the heap is rebuilt, as described in the principle above (tree diagrams not reproduced here).
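A C rendering of the heap sort pseudocode above (illustrative, not part of the original notes); the array is used 1-based, as in the tree interpretation Left = 2I, Right = 2I + 1.

#include <stdio.h>

int A[20] = { 0, 42, 23, 74, 11, 65, 58, 94, 36 };  /* A[1..8] hold the data */
int heapsize = 8;

void heapify(int i)
{
    int l = 2 * i, r = l + 1, large = i;

    if (l <= heapsize && A[l] > A[large]) large = l;
    if (r <= heapsize && A[r] > A[large]) large = r;
    if (large != i) {
        int t = A[i]; A[i] = A[large]; A[large] = t;   /* push the larger child up */
        heapify(large);
    }
}

void buildheap(void)
{
    for (int i = heapsize / 2; i >= 1; i--)
        heapify(i);
}

void heapsort(void)
{
    buildheap();                               /* Phase 1: construct the max heap */
    for (int i = heapsize; i >= 2; i--) {      /* Phase 2 */
        int t = A[1]; A[1] = A[i]; A[i] = t;   /* move the current maximum to the end */
        heapsize--;
        heapify(1);                            /* rebuild the heap */
    }
}

int main(void)
{
    heapsort();
    for (int i = 1; i <= 8; i++)
        printf("%d ", A[i]);                   /* 11 23 36 42 58 65 74 94 */
    printf("\n");
    return 0;
}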
5.7.5 BUBBLE SORT
Bubble sort is a simple sorting algorithm in which adjacent elements are repeatedly compared and exchanged; it performs a relatively large number of comparisons and swaps.
Algorithm
Function Bubblesort( )
Read n
For I = 0 to n-1
    Read a[I]
End For
//sort: in each pass, compare adjacent elements and swap them if they are out of order
For I = 0 to n-2
    For J = 0 to n-2-I
        If a[J] > a[J+1]
        Then
            T = a[J]
            a[J] = a[J+1]
            a[J+1] = T
        End If
    End For J
End For I
//print the sorted array
For I = 0 to n-1
    Write a[I]
End For
End Bubblesort
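A C sketch of bubble sort (not from the original notes), including the early-exit check that the example below relies on: if a complete pass makes no swap, the list is already sorted.

#include <stdio.h>

/* Bubble sort: adjacent elements are compared and swapped; after every pass
   the largest remaining element "bubbles" to the end, so each pass stops one
   position earlier than the previous one. */
void bubbleSort(int a[], int n)
{
    for (int last = n - 1; last > 0; last--) {
        int swapped = 0;
        for (int j = 0; j < last; j++) {
            if (a[j] > a[j + 1]) {
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                swapped = 1;
            }
        }
        if (!swapped)            /* no swap in this pass: already sorted */
            break;
    }
}

int main(void)
{
    int a[] = { 42, 23, 74, 11, 65, 58, 94, 36, 99, 87 };
    bubbleSort(a, 10);
    for (int i = 0; i < 10; i++)
        printf("%d ", a[i]);     /* 11 23 36 42 58 65 74 87 94 99 */
    printf("\n");
    return 0;
}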
Example:
N = 10 → Number of elements in the list
L → Points to last element (Last)
Pass 1
42 23 74 11 65 58 94 36 99 87
Out of order → Swap L=9
23 42 74 11 65 58 94 36 99 87
Out of order → Swap L=9
23 42 11 74 65 58 94 36 99 87
Out of order → Swap L=9
23 42 11 65 74 58 94 36 99 87
Out of order → Swap L=9
23 42 11 65 58 74 94 36 99 87
Out of order → Swap L=9
23 42 11 65 58 74 36 94 99 87
Out of order → Swap L=9
Pass 2
23 42 11 65 58 74 36 94 87 99
Out of order → Swap L=8
23 11 42 65 58 74 36 94 87 99
Out of order → Swap L=8
23 11 42 58 65 74 36 94 87 99
Out of order → Swap L=8
23 11 42 58 65 36 74 94 87 99
Out of order → Swap L=8
Pass 3
23 11 42 58 65 36 74 87 94 99
Out of order → Swap L=7
23 11 42 58 65 36 74 87 94 99
Out of order → Swap L=7
Pass 4
23 11 42 58 36 65 74 87 94 99
Out of order → Swap L=6
11 23 42 58 36 65 74 87 94 99
Out of order → Swap L=6
Pass 5
11 23 42 36 58 65 74 87 94 99
Out of order → Swap L=5
Pass 6
Adjacent numbers are compared up to L=4, but no swapping takes place. As no swapping took place in this pass, the procedure comes to an end and we get a sorted list:
11 23 36 42 58 65 74 87 94 99
Advantages:
1. Simple, and works well for lists with a small number of elements.
Disadvantages:
1. Inefficient when the list has a large number of elements.
2. Requires a large number of exchanges in every pass.
The sorting techniques discussed in this unit can be summarised as follows:

SORTING TECHNIQUE    APPROACH                               AVERAGE-CASE COMPLEXITY
INSERTION SORT       INSERTION TECHNIQUE                    O(n²)
BUBBLE SORT          EXCHANGE TECHNIQUE                     O(n²)
QUICK SORT           DIVIDE AND CONQUER TECHNIQUE           O(n log n)
MERGE SORT           DIVIDE AND CONQUER TECHNIQUE           O(n log n)
HEAP SORT            TREE SORTING (selection technique)     O(n log n)
SELECTION SORT       SELECTION TECHNIQUE                    O(n²)