DS Unit IV
Introduction to sorting
The arrangement of data in a preferred order is called sorting in data structures. Sorted data is much easier to search
through quickly. The simplest example of sorting is a dictionary.
Sorting is the process of arranging items in a specific order or sequence. It is a common algorithmic problem in computer
science and is used in various applications such as searching, data analysis, and information retrieval.
In other words, sorting is also used to present data in a more readable format. Some real-life examples of sorting are:
o The contact list in your mobile phone is arranged alphabetically (lexicographically), so you can find a contact
directly instead of scanning at random; the apps on your phone are often ordered the same way.
o The index of keywords in a book is also in lexicographical order, so you can find a term and its chapter quickly.
o Once an array is sorted, many problems become easy (e.g. finding the min/max or the kth smallest/largest element).
o Studying sorting also yields a range of algorithmic solutions that illustrate other important ideas, such as:
o Iterative
o Divide-and-conquer
o Comparison vs non-comparison based
o Recursive
The main advantage of sorting is improved time complexity, and that matters when you solve a problem: it is not
enough to be able to solve it, you should be able to solve it in the minimum time possible. Problems that can be solved
easily and quickly with the help of sorting can save you from every coder's nightmare, i.e. TLE (Time Limit Exceeded).
Sorting Categories
The Sorting categories in data structures can be broadly classified into the following types:
Comparison-based Sorting Algorithms: These algorithms compare the elements being sorted to each other
and then place them in the desired order. Examples include Bubble Sort, Selection Sort, Insertion Sort, QuickSort, Merge
Sort, and Heap Sort.
Non-Comparison-based Sorting Algorithms: These algorithms do not compare the elements being sorted to each
other. Instead, they use some specific characteristics of the data to sort them. Examples include Counting Sort, Radix
Sort, and Bucket Sort.
Stable Sorting Algorithms: These algorithms maintain the relative order of elements with equal keys during Sorting.
Examples include Merge Sort and Insertion Sort.
Unstable Sorting Algorithms: These algorithms do not maintain the relative order of the elements with equal keys
during Sorting. Examples include QuickSort and Heap Sort.
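To see what stability means concretely, consider this minimal C sketch (the record type and its tags are illustrative assumptions, not from the text): after a stable sort such as insertion sort, records with equal keys keep their original left-to-right order.

#include <stdio.h>

/* A record with a sort key and a tag marking its original position. */
struct record { int key; char tag; };

/* Insertion sort, which is stable: the strict > comparison never
   reorders two records whose keys are equal. */
void sort_records(struct record a[], int n) {
    for (int i = 1; i < n; i++) {
        struct record r = a[i];
        int j = i - 1;
        while (j >= 0 && a[j].key > r.key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = r;
    }
}

int main(void) {
    struct record a[] = { {2, 'a'}, {1, 'b'}, {2, 'c'}, {1, 'd'} };
    sort_records(a, 4);
    for (int i = 0; i < 4; i++)
        printf("%d%c ", a[i].key, a[i].tag);   /* prints: 1b 1d 2a 2c */
    printf("\n");
    return 0;
}

An unstable algorithm would also produce the key order 1 1 2 2, but might place 1d before 1b.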
An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some
acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an
available computer, typically as a function of the size of the input.
Algorithm Efficiency
Time efficiency - a measure of the amount of time needed for an algorithm to execute.
Space efficiency - a measure of the amount of memory needed for an algorithm to execute.
An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the
desired output. Algorithms are generally created independent of underlying languages, i.e. an algorithm can be
implemented in more than one programming language.
From the data structure point of view, following are some important categories of algorithms −
Search − Algorithm to search an item in a data structure.
Sort − Algorithm to sort items in a certain order.
Insert − Algorithm to insert item in a data structure.
Update − Algorithm to update an existing item in a data structure.
Delete − Algorithm to delete an existing item from a data structure.
Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the following characteristics −
Unambiguous − Algorithm should be clear and unambiguous. Each of its steps (or phases), and their
inputs/outputs should be clear and must lead to only one meaning.
Input − An algorithm should have 0 or more well-defined inputs.
Output − An algorithm should have 1 or more well-defined outputs, and should match the desired output.
Finiteness − Algorithms must terminate after a finite number of steps.
Feasibility − Should be feasible with the available resources.
Independent − An algorithm should have step-by-step directions, which should be independent of any
programming code.
Hence, many solution algorithms can be derived for a given problem. The next step is to analyze those proposed solution
algorithms and implement the best suitable solution.
Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation.
They are the following −
A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an algorithm is measured by
assuming that all other factors, for example, processor speed, are constant and have no effect on the
implementation.
A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected algorithm is implemented in
a programming language and executed on a target computer. In this analysis, actual statistics such as running
time and space required are collected.
We shall learn about a priori algorithm analysis. Algorithm analysis deals with the execution or running time of various
operations involved. The running time of an operation can be defined as the number of computer instructions executed
per operation.
Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space used by the algorithm X are the two main
factors, which decide the efficiency of X.
Time Factor − Time is measured by counting the number of key operations such as comparisons in the sorting
algorithm.
Space Factor − Space is measured by counting the maximum memory space required by the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage space required by the algorithm in terms
of n as the size of input data.
Space Complexity
Space complexity of an algorithm represents the amount of memory space required by the algorithm in its life cycle. The
space required by an algorithm is equal to the sum of the following two components −
A fixed part that is a space required to store certain data and variables, that are independent of the size of the
problem. For example, simple variables and constants used, program size, etc.
A variable part is a space required by variables, whose size depends on the size of the problem. For example,
dynamic memory allocation, recursion stack space, etc.
Space complexity S(P) of any algorithm P is S(P) = C + S_P(I), where C is the fixed part and S_P(I) is the variable part
of the algorithm, which depends on instance characteristic I. Following is a simple example that tries to explain the
concept −
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop
Here we have three variables A, B, and C and one constant. Hence S(P) = 1 + 3. Now, space depends on data types of
given variables and constant types and it will be multiplied accordingly.
Time Complexity
Time complexity of an algorithm represents the amount of time required by the algorithm to run to completion. Time
requirements can be defined as a numerical function T(n), where T(n) can be measured as the number of steps, provided
each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the total computational time is T(n) = c ∗ n,
where c is the time taken for the addition of two bits. Here, we observe that T(n) grows linearly as the input size
increases.
Asymptotic Analysis
Asymptotic analysis of an algorithm refers to defining the mathematical boundation/framing of its run-time performance.
Using asymptotic analysis, we can very well conclude the best case, average case, and worst case scenario of an
algorithm.
Asymptotic analysis is input bound i.e., if there's no input to the algorithm, it is concluded to work in a constant time.
Other than the "input" all other factors are considered constant.
Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation. For
example, the running time of one operation may be computed as f(n), while that of another operation may be computed
as g(n²). This means the running time of the first operation will increase linearly as n grows, while the running time of
the second operation will increase quadratically, i.e. much faster. Similarly, the running times of both operations will be
nearly the same if n is small.
Usually, the time required by an algorithm falls under three types −
Best Case − Minimum time required for program execution.
Average Case − Average time required for program execution.
Worst Case − Maximum time required for program execution.
Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running time complexity of an algorithm.
Ο Notation
Ω Notation
θ Notation
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It measures the worst
case time complexity or the longest amount of time an algorithm can possibly take to complete.
Ο(f(n)) = { g(n) : there exist positive constants c and n0 such that g(n) ≤ c.f(n) for all n > n0 }
Omega Notation, Ω
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best
case time complexity or the shortest amount of time an algorithm can possibly take to complete.
Ω(f(n)) = { g(n) : there exist positive constants c and n0 such that g(n) ≥ c.f(n) for all n > n0 }
Theta Notation, θ
The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time.
θ(f(n)) = { g(n) if and only if g(n) = Ο(f(n)) and g(n) = Ω(f(n)) for all n > n0 }
Following is a list of some common asymptotic notations:
Constant − Ο(1)
Logarithmic − Ο(log n)
Linear − Ο(n)
Quadratic − Ο(n²)
Cubic − Ο(n³)
Polynomial − n^Ο(1)
Exponential − 2^Ο(n)
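As a hedged illustration (the functions below are our own examples, not from the text), here is how the most common orders arise in code:

#include <stdio.h>

/* Ο(1): constant - a single array access, independent of n. */
int first(const int a[]) { return a[0]; }

/* Ο(n): linear - one pass over the input. */
long sum(const int a[], int n) {
    long s = 0;
    for (int i = 0; i < n; i++) s += a[i];
    return s;
}

/* Ο(n²): quadratic - a nested pass for every element. */
int count_equal_pairs(const int a[], int n) {
    int c = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] == a[j]) c++;
    return c;
}

int main(void) {
    int a[] = {1, 2, 2, 3};
    printf("%d %ld %d\n", first(a), sum(a, 4), count_equal_pairs(a, 4));
    return 0;
}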
Sorting Algorithms
Sorting is the process of arranging the elements of an array so that they can be placed either in ascending or descending
order. For example, consider an array A = {A1, A2, A3, ..., An}; the array is said to be in ascending order if its elements
are arranged as A1 ≤ A2 ≤ A3 ≤ ... ≤ An.
There are many techniques by which sorting can be performed. In this section of the tutorial, we will discuss each
method in detail.
Sorting algorithms are described in the following table along with their descriptions.
SN | Sorting Algorithm | Description
1 | Bubble Sort | It is the simplest sorting method, which sorts by repeatedly moving the largest element to the highest
index of the array. It compares each element to its adjacent element and swaps them if they are out of order.
2 | Bucket Sort | Bucket sort is also known as bin sort. It works by distributing the elements into groups called buckets.
The buckets are then sorted individually, using some other sorting algorithm.
3 | Comb Sort | Comb sort is an advanced form of bubble sort. Bubble sort compares only adjacent values, while comb
sort compares values separated by a shrinking gap, removing the "turtles", i.e. small values near the end of the list.
4 | Counting Sort | It is a sorting technique based on keys that are small integers. Counting sort counts the number of
occurrences of each key, forms a new array by adding up the previous counts (prefix sums), and uses it to place each
object at its correct position.
5 | Heap Sort | In heap sort, a min heap or max heap is maintained from the array elements, depending upon the choice,
and the elements are sorted by repeatedly deleting the root element of the heap.
6 | Insertion Sort | As the name suggests, insertion sort inserts each element of the array into its proper place. It is a very
simple sorting method, similar to the way a hand of cards is arranged while playing bridge.
7 | Merge Sort | Merge sort follows the divide-and-conquer approach: the list is divided into two halves, each half is
sorted using merge sort, and the sorted halves are then merged to form the final sorted array.
8 | Quick Sort | Quick sort is a highly optimized sorting algorithm that performs sorting in O(n log n) comparisons on
average. Like merge sort, quick sort works using the divide-and-conquer approach.
9 | Radix Sort | In radix sort, sorting is done digit by digit, the way names are sorted according to their alphabetical
order. It is a linear sorting algorithm used for integers.
10 | Selection Sort | Selection sort finds the smallest element in the array and places it in the first position, then finds the
second smallest element and places it in the second position. This process continues until all the elements are in their
correct order. Its running time is O(n²), which is worse than insertion sort.
11 | Shell Sort | Shell sort is a generalization of insertion sort that overcomes the drawbacks of insertion sort by
comparing elements separated by a gap of several positions.
Bubble sort Algorithm
In the algorithm given below, suppose arr is an array of n elements. The assumed swap function in the algorithm will
swap the values of given array elements.
1. begin BubbleSort(arr)
2. for all array elements
3. if arr[i] > arr[i+1]
4. swap(arr[i], arr[i+1])
5. end if
6. end for
7. return arr
8. end BubbleSort
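The pseudocode above can be made runnable. Here is a minimal C sketch (function and variable names are our own); it sorts the same array used in the walkthrough below:

#include <stdio.h>

void bubble_sort(int arr[], int n) {
    for (int pass = 0; pass < n - 1; pass++) {
        /* After each pass, the largest remaining element settles
           at the highest unsorted index. */
        for (int i = 0; i < n - 1 - pass; i++) {
            if (arr[i] > arr[i + 1]) {
                int tmp = arr[i];          /* swap the adjacent pair */
                arr[i] = arr[i + 1];
                arr[i + 1] = tmp;
            }
        }
    }
}

int main(void) {
    int a[] = {13, 32, 26, 35, 10};
    int n = sizeof a / sizeof a[0];
    bubble_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   /* 10 13 26 32 35 */
    printf("\n");
    return 0;
}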
To understand the working of the bubble sort algorithm, let's take an unsorted array. We are taking a short array, as we
know the complexity of bubble sort is O(n²).
Let the elements of the array be -
13, 32, 26, 35, 10
First Pass
Sorting will start from the initial two elements. Let us compare them to check which is greater.
Here, 32 is greater than 13 (32 > 13), so this pair is already in order. Now, compare 32 with 26.
Here, 26 is smaller than 32, so swapping is required. After swapping, the new array will look like -
13, 26, 32, 35, 10
Here, 35 is greater than 32, so no swapping is required as they are already in order.
Now, the comparison will be between 35 and 10.
Here, 10 is smaller than 35, so swapping is required. Now, we reach the end of the array. After the first
pass, the array will be -
13, 26, 32, 10, 35
Second Pass
The same process is followed in the second iteration.
Here, 10 is smaller than 32, so swapping is required. After swapping, the array will be -
13, 26, 10, 32, 35
Now, move to the third iteration.
Third Pass
The same process will be followed for the third iteration.
Here, 10 is smaller than 26, so swapping is required. After swapping, the array will be -
13, 10, 26, 32, 35
Fourth Pass
Similarly, in the fourth iteration 10 is smaller than 13, so one final swap is needed, after which the array is
completely sorted -
10, 13, 26, 32, 35
1. Time Complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The best-case
time complexity of bubble sort is O(n) (for the optimized version described below; the plain version still takes
O(n²)).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending
nor properly descending. The average case time complexity of bubble sort is O(n²).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse order. The worst-case
time complexity of bubble sort is O(n²).
2. Space Complexity
o The space complexity of bubble sort is O(1), because only a single extra variable is required for swapping.
o The optimized bubble sort adds one more variable (the swapped flag), so its space complexity is still constant,
O(1).
Now, let's discuss the optimized bubble sort algorithm.
Optimized Bubble sort Algorithm
In the bubble sort algorithm, comparisons are made even when the array is already sorted. Because of that, the execution
time increases.
To solve it, we can use an extra variable swapped. It is set to true if a swap occurs during a pass; otherwise, it remains
false.
This is helpful because, if after an iteration no swap has occurred, the variable swapped will still be false, which means
the elements are already sorted and no further iterations are required.
This method reduces the execution time and optimizes the bubble sort.
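A minimal C sketch of the optimized version (the swapped flag is the only addition to the plain bubble_sort shown earlier):

#include <stdbool.h>

void bubble_sort_optimized(int arr[], int n) {
    for (int pass = 0; pass < n - 1; pass++) {
        bool swapped = false;                 /* no swap seen in this pass yet */
        for (int i = 0; i < n - 1 - pass; i++) {
            if (arr[i] > arr[i + 1]) {
                int tmp = arr[i];
                arr[i] = arr[i + 1];
                arr[i + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped)
            break;   /* a full pass without swaps: the array is already sorted */
    }
}

On an already sorted array, the loop exits after a single pass, which is what gives the optimized version its O(n) best case.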
Heap Sort
Heapsort is a popular and efficient sorting algorithm. The concept of heap sort is to eliminate the elements one by one
from the heap part of the list and then insert them into the sorted part of the list.
Algorithm
1. HeapSort(arr)
2. BuildMaxHeap(arr)
3. for i = length(arr) down to 2
4. swap arr[1] with arr[i]
5. heap_size[arr] = heap_size[arr] - 1
6. MaxHeapify(arr,1)
7. End
BuildMaxHeap(arr)
1. BuildMaxHeap(arr)
2. heap_size(arr) = length(arr)
3. for i = length(arr)/2 down to 1
4. MaxHeapify(arr,i)
5. End
MaxHeapify(arr,i)
1. MaxHeapify(arr,i)
2. L = left(i)
3. R = right(i)
4. if L ≤ heap_size[arr] and arr[L] > arr[i]
5. largest = L
6. else
7. largest = i
8. if R ≤ heap_size[arr] and arr[R] > arr[largest]
9. largest = R
10. if largest != i
11. swap arr[i] with arr[largest]
12. MaxHeapify(arr,largest)
13. End
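The pseudocode uses 1-based indexing. Below is a hedged C translation with 0-based indexing, so left(i) and right(i) become 2i+1 and 2i+2 (the function names follow the pseudocode; everything else is our own):

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Sift the element at index i down until the subtree rooted at i
   satisfies the max-heap property; heap_size elements remain in the heap. */
static void max_heapify(int arr[], int heap_size, int i) {
    int l = 2 * i + 1, r = 2 * i + 2, largest = i;
    if (l < heap_size && arr[l] > arr[largest]) largest = l;
    if (r < heap_size && arr[r] > arr[largest]) largest = r;
    if (largest != i) {
        swap(&arr[i], &arr[largest]);
        max_heapify(arr, heap_size, largest);
    }
}

void heap_sort(int arr[], int n) {
    /* Phase 1: build a max heap from the unordered array. */
    for (int i = n / 2 - 1; i >= 0; i--)
        max_heapify(arr, n, i);
    /* Phase 2: repeatedly move the root (maximum) to the end of the
       array and restore the heap on the shrunken prefix. */
    for (int i = n - 1; i > 0; i--) {
        swap(&arr[0], &arr[i]);
        max_heapify(arr, i, 0);
    }
}

int main(void) {
    int a[] = {81, 89, 9, 11, 14, 76, 54, 22};
    heap_sort(a, 8);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);   /* 9 11 14 22 54 76 81 89 */
    printf("\n");
    return 0;
}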
In heap sort, basically, there are two phases involved in sorting the elements:
o First, a heap is created by adjusting the elements of the array.
o After the heap is created, the root element of the heap is repeatedly removed by shifting it to the end of the
array, and the heap structure is then restored among the remaining elements.
First, we have to construct a heap from the given array and convert it into a max heap. Suppose the given array is -
81, 89, 9, 11, 14, 76, 54, 22
After converting the given heap into a max heap, the array elements are -
89, 81, 76, 22, 14, 9, 54, 11
Next, we have to delete the root element (89) from the max heap. To delete this node, we have to swap it with the last
node, i.e. (11). After deleting the root element, we again have to heapify it to convert it into a max heap.
After swapping the array element 89 with 11, and converting the heap into a max heap, the elements of the array are -
81, 22, 76, 11, 14, 9, 54, 89
In the next step, again, we have to delete the root element (81) from the max heap. To delete this node, we have to swap
it with the last node, i.e. (54). After deleting the root element, we again have to heapify it to convert it into a max heap.
After swapping the array element 81 with 54 and converting the heap into a max heap, the elements of the array are -
76, 22, 54, 11, 14, 9, 81, 89
In the next step, we have to delete the root element (76) from the max heap again. To delete this node, we have to swap it
with the last node, i.e. (9). After deleting the root element, we again have to heapify it to convert it into a max heap.
After swapping the array element 76 with 9 and converting the heap into a max heap, the elements of the array are -
54, 22, 9, 11, 14, 76, 81, 89
In the next step, again we have to delete the root element (54) from the max heap. To delete this node, we have to swap it
with the last node, i.e. (14). After deleting the root element, we again have to heapify it to convert it into a max heap.
After swapping the array element 54 with 14 and converting the heap into a max heap, the elements of the array are -
22, 14, 9, 11, 54, 76, 81, 89
In the next step, again we have to delete the root element (22) from the max heap. To delete this node, we have to swap it
with the last node, i.e. (11). After deleting the root element, we again have to heapify it to convert it into a max heap.
After swapping the array element 22 with 11 and converting the heap into a max heap, the elements of the array are -
14, 11, 9, 22, 54, 76, 81, 89
In the next step, again we have to delete the root element (14) from the max heap. To delete this node, we have to swap it
with the last node, i.e. (9). After deleting the root element, we again have to heapify it to convert it into a max heap.
After swapping the array element 14 with 9 and converting the heap into a max heap, the elements of the array are -
11, 9, 14, 22, 54, 76, 81, 89
In the next step, again we have to delete the root element (11) from the max heap. To delete this node, we have to swap it
with the last node, i.e. (9). After swapping the array element 11 with 9, the elements of the array are -
9, 11, 14, 22, 54, 76, 81, 89
Now, the heap has only one element left. After deleting it, the heap will be empty, and the array is completely sorted.
Now, let's see the time complexity of Heap sort in the best case, average case, and worst case. We will also see the space
complexity of Heapsort.
1. Time Complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The best-case
time complexity of heap sort is O(n logn).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly
ascending and not properly descending. The average case time complexity of heap sort is O(n log n).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That
means suppose you have to sort the array elements in ascending order, but its elements are in descending order.
The worst-case time complexity of heap sort is O(n log n).
The time complexity of heap sort is O(n log n) in all three cases (best case, average case, and worst case), because the
height of a complete binary tree having n elements is log n.
2. Space Complexity
o The space complexity of heap sort is O(1), since it sorts the array in place and needs only a constant amount of
extra storage.
Insertion Sort
Insertion sort inserts each element of the array into its proper position in the sorted part of the array. It has the following
characteristics:
1. It is efficient for smaller data sets, but very inefficient for larger lists.
2. Insertion sort is adaptive, which means it reduces its total number of steps if a partially sorted array is provided
as input, making it more efficient.
3. Its space complexity is low. Like bubble sort, insertion sort requires only a single additional memory space.
4. It is a stable sorting technique, as it does not change the relative order of elements which are equal.
Algorithm
The simple steps for achieving the insertion sort are listed as follows -
Step 1 - If the element is the first element, assume that it is already sorted.
Step 2 - Pick the next element and store it separately in a key.
Step 3 - Now, compare the key with all elements in the sorted array.
Step 4 - If the element in the sorted array is smaller than the current element, then move to the next element. Otherwise,
shift the greater elements in the array towards the right.
Step 5 - Insert the value at the correct position.
Step 6 - Repeat until the array is sorted.
As mentioned above, insertion sort is an efficient sorting algorithm: its inner while loop stops as soon as the element
reaches its correct position, which avoids extra steps once the sorted prefix is in place.
Even so, if we provide an already sorted array to the insertion sort algorithm, it will still execute the outer for loop,
thereby requiring n steps to sort an already sorted array of n elements, which makes its best case time complexity a
linear function of n.
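A minimal C sketch of insertion sort reflecting the steps above (names are our own):

void insertion_sort(int a[], int n) {
    for (int i = 1; i < n; i++) {   /* a[0..i-1] is already sorted */
        int key = a[i];             /* the element to insert */
        int j = i - 1;
        /* Shift elements greater than key one position to the right.
           The while loop stops as soon as the position is found, which
           is what makes the best case (already sorted input) linear. */
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;             /* insert key at its proper place */
    }
}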
Merge Sort
Merge sort is based on the divide-and-conquer approach. The sub-lists are divided again and again into halves until the
list cannot be divided further. Then we combine the pairs of one-element lists into two-element lists, sorting them in the
process. The sorted two-element pairs are merged into four-element lists, and so on, until we get the fully sorted list.
Algorithm
In the merge sort algorithm below, arr is the given array, beg is the index of the first element, and end is the index of the last element of the array.
The important part of the merge sort is the MERGE function. This function performs the merging of two sorted sub-
arrays that are A[beg…mid] and A[mid+1…end], to build one sorted array A[beg…end]. So, the inputs of
the MERGE function are A[], beg, mid, and end.
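The pseudocode itself is not reproduced in these notes, so the following is a hedged C sketch consistent with the MERGE description above (beg, mid, and end follow the text; the temporary buffer and the remaining names are our own):

#include <string.h>

/* Merge the sorted runs a[beg..mid] and a[mid+1..end] into one sorted
   run; the temporary buffer is the source of merge sort's O(n) space. */
static void merge(int a[], int beg, int mid, int end) {
    int tmp[end - beg + 1];
    int i = beg, j = mid + 1, k = 0;
    while (i <= mid && j <= end)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];   /* <= keeps it stable */
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= end) tmp[k++] = a[j++];
    memcpy(&a[beg], tmp, k * sizeof(int));
}

void merge_sort(int a[], int beg, int end) {
    if (beg >= end) return;         /* zero or one element: already sorted */
    int mid = (beg + end) / 2;
    merge_sort(a, beg, mid);        /* sort the left half */
    merge_sort(a, mid + 1, end);    /* sort the right half */
    merge(a, beg, mid, end);        /* combine the two sorted halves */
}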
Merge sort complexity
Now, let's see the time complexity of merge sort in best case, average case, and in worst case. We will also see the space
complexity of the merge sort.
1. Time Complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The best-case
time complexity of merge sort is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly
ascending and not properly descending. The average case time complexity of merge sort is O(n*logn).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That
means suppose you have to sort the array elements in ascending order, but its elements are in descending order.
The worst-case time complexity of merge sort is O(n*logn).
2. Space Complexity
o The space complexity of merge sort is O(n), because merging the two sorted sub-arrays requires an auxiliary
array of the same size as the input.
o Merge sort is a stable sorting algorithm.
Quick Sort
Sorting is a way of arranging items in a systematic manner. Quicksort is a widely used sorting algorithm that makes
n log n comparisons in the average case for sorting an array of n elements. It is a fast and highly efficient sorting algorithm.
This algorithm follows the divide and conquer approach. Divide and conquer is a technique of breaking down the
algorithms into subproblems, then solving the subproblems, and combining the results back together to solve the original
problem.
Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into two sub-arrays such that
each element in the left sub-array is less than or equal to the pivot element and each element in the right sub-array is
larger than the pivot element.
After that, left and right sub-arrays are also partitioned using the same approach. It will continue until the single element
remains in the sub-array.
Picking a good pivot is necessary for a fast implementation of quicksort. However, it is difficult to determine a good
pivot in advance. Some of the ways of choosing a pivot are as follows -
o The pivot can be random, i.e. select a random pivot from the given array.
o The pivot can be either the rightmost element or the leftmost element of the given array.
Algorithm
The quicksort algorithm consists of two routines: QUICKSORT, which recursively sorts the two sub-arrays, and
PARTITION, which places the pivot at its final position.
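Since the pseudocode is not reproduced here, the following is a hedged C sketch using the Lomuto partition scheme with the rightmost element as the pivot (one of the pivot choices listed above; the choice of scheme is our assumption):

static void swap_ints(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Place the pivot (rightmost element) at its final sorted position and
   return that index; smaller elements end up on its left. */
static int partition(int a[], int low, int high) {
    int pivot = a[high];
    int i = low - 1;                        /* boundary of the <= pivot region */
    for (int j = low; j < high; j++)
        if (a[j] <= pivot)
            swap_ints(&a[++i], &a[j]);
    swap_ints(&a[i + 1], &a[high]);
    return i + 1;
}

void quick_sort(int a[], int low, int high) {
    if (low < high) {
        int p = partition(a, low, high);    /* pivot now fixed at index p */
        quick_sort(a, low, p - 1);          /* sort the left sub-array */
        quick_sort(a, p + 1, high);         /* sort the right sub-array */
    }
}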
Now, let's see the time complexity of quicksort in best case, average case, and in worst case. We will also see the space
complexity of quicksort.
1. Time Complexity
o Best Case Complexity - In Quicksort, the best-case occurs when the pivot element is the middle element or near
to the middle element. The best-case time complexity of quicksort is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly
ascending and not properly descending. The average case time complexity of quicksort is O(n*logn).
o Worst Case Complexity - In quick sort, the worst case occurs when the pivot element is either the greatest or the
smallest element. For example, if the pivot element is always the last element of the array, the worst case occurs
when the given array is already sorted in ascending or descending order. The worst-case time complexity of
quicksort is O(n²).
Though the worst-case complexity of quicksort is more than other sorting algorithms such as Merge sort and Heap sort,
still it is faster in practice. Worst case in quick sort rarely occurs because by changing the choice of pivot, it can be
implemented in different ways. Worst case in quicksort can be avoided by choosing the right pivot element.
2. Space Complexity
o The space complexity of quicksort is O(log n) on average, for the recursion stack; in the worst case it can grow
to O(n).
o Quicksort is not a stable sorting algorithm.
Radix Sort
In this section, we will discuss the radix sort algorithm. Radix sort is a linear sorting algorithm used for integers. In
radix sort, digit-by-digit sorting is performed, starting from the least significant digit and moving to the most
significant digit.
The process of radix sort works similarly to sorting students' names alphabetically. In that case, there are 26 radixes,
one for each of the 26 letters of the English alphabet. In the first pass, the names are grouped according to the ascending
order of the first letter of their names. In the second pass, they are grouped according to the ascending order of the
second letter, and the process continues until the sorted list is obtained.
Algorithm
1. radixSort(arr)
2. max = largest element in the given array
3. d = number of digits in the largest element
4. create d buckets of size 0 - 9
5. for i -> 0 to d
6. sort the array elements using counting sort (or any stable sort) according to the digits at the ith place
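A hedged C sketch of radix sort for non-negative integers (the helper name and digit-extraction style are our own), using one stable counting-sort pass per decimal digit:

#include <string.h>

/* One stable counting-sort pass on the digit at the given decimal
   place (exp = 1, 10, 100, ...). Stability is what allows radix sort
   to process digits from least to most significant. */
static void counting_pass(int a[], int n, int exp) {
    int output[n];
    int count[10] = {0};
    for (int i = 0; i < n; i++) count[(a[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++) count[d] += count[d - 1];   /* prefix sums */
    for (int i = n - 1; i >= 0; i--)                 /* back to front: stable */
        output[--count[(a[i] / exp) % 10]] = a[i];
    memcpy(a, output, n * sizeof(int));
}

void radix_sort(int a[], int n) {
    int max = a[0];                  /* the largest element fixes the digit count */
    for (int i = 1; i < n; i++)
        if (a[i] > max) max = a[i];
    for (int exp = 1; max / exp > 0; exp *= 10)      /* one pass per digit */
        counting_pass(a, n, exp);
}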
Now, let's see the time complexity of Radix sort in best case, average case, and worst case. We will also see the space
complexity of Radix sort.
1. Time Complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The best-case
time complexity of radix sort is Ω(n+k).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly
ascending and not properly descending. The average case time complexity of radix sort is θ(nk).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That
means suppose you have to sort the array elements in ascending order, but its elements are in descending order.
The worst-case time complexity of radix sort is O(nk).
Radix sort is a non-comparative sorting algorithm that performs better than the comparative sorting algorithms: its time
complexity is linear in nk, whereas comparison-based algorithms are bounded below by O(n log n).
2. Space Complexity
o The space complexity of radix sort is O(n + k), for the output array and the count array used by the counting-sort
passes.
Shell Sort
Shell sort is a sorting algorithm that is an extended version of insertion sort. It improves on the average time complexity
of insertion sort. Like insertion sort, it is a comparison-based and in-place sorting algorithm. Shell sort is efficient for
medium-sized data sets.
In insertion sort, at a time, elements can be moved ahead by one position only. To move an element to a far-away
position, many movements are required that increase the algorithm's execution time. But shell sort overcomes this
drawback of insertion sort. It allows the movement and swapping of far-away elements as well.
This algorithm first sorts the elements that are far away from each other, then it subsequently reduces the gap between
them. This gap is called the interval. The interval can be calculated by using Knuth's formula given below -
h = h * 3 + 1
Here, h is the interval, with an initial value of 1.
Algorithm
The simple steps for achieving the shell sort are listed as follows -
1. ShellSort(a, n) // 'a' is the given array, 'n' is the size of the array
2. for (interval = n/2; interval > 0; interval /= 2)
3. for (i = interval; i < n; i += 1)
4. temp = a[i];
5. for (j = i; j >= interval && a[j - interval] > temp; j -= interval)
6. a[j] = a[j - interval];
7. a[j] = temp;
8. End ShellSort
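A runnable C version of the same steps (using the simple n/2, n/4, ... gap sequence from the pseudocode above; Knuth's sequence could be substituted):

void shell_sort(int a[], int n) {
    /* Start with a large gap and shrink it each round. */
    for (int gap = n / 2; gap > 0; gap /= 2) {
        /* Gapped insertion sort: each element is compared with the one
           gap positions behind it, so far-away elements move quickly. */
        for (int i = gap; i < n; i++) {
            int temp = a[i];
            int j;
            for (j = i; j >= gap && a[j - gap] > temp; j -= gap)
                a[j] = a[j - gap];
            a[j] = temp;
        }
    }
}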
Now, let's see the time complexity of Shell sort in the best case, average case, and worst case. We will also see the space
complexity of the Shell sort.
1. Time Complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e., the array is already sorted. The best-
case time complexity of shell sort is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly
ascending and not properly descending. The average case time complexity of shell sort is O(n*logn).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That
means suppose you have to sort the array elements in ascending order, but its elements are in descending order.
The worst-case time complexity of shell sort is O(n²).
2. Space Complexity
o The space complexity of shell sort is O(1), because it sorts the array in place using only a constant amount of
extra storage.
Searching is the process of finding some particular element in the list. If the element is present in the list, then the
process is called successful, and the process returns the location of that element; otherwise, the search is called
unsuccessful.
Two popular search methods are Linear Search and Binary Search. So, here we will discuss the popular searching
technique, i.e., Linear Search Algorithm.
Linear Search
Linear search is also called the sequential search algorithm. It is the simplest searching algorithm. In linear search, we
simply traverse the list completely and match each element of the list with the item whose location is to be found. If the
match is found, then the location of the item is returned; otherwise, the algorithm returns NULL.
It is widely used to search for an element in an unordered list, i.e., a list in which the items are not sorted. The worst-case
time complexity of linear search is O(n).
The steps used in the implementation of Linear Search are listed as follows -
o First, traverse the array elements using a for loop.
o In each iteration of the for loop, compare the search element with the current array element, and -
o If the element matches, then return the index of the corresponding array element.
o If the element does not match, then move to the next element.
o If there is no match, i.e., the search element is not present in the given array, return -1.
Algorithm
1. Linear_Search(a, n, val) // 'a' is the given array, 'n' is the size of given array, 'val' is the value to search
2. Step 1: set pos = -1
3. Step 2: set i = 1
4. Step 3: repeat step 4 while i <= n
5. Step 4: if a[i] = val
6. set pos = i
7. print pos
8. go to step 6
9. [end of if]
10. set i = i + 1
11. [end of loop]
12. Step 5: if pos = -1
13. print "value is not present in the array"
14. [end of if]
15. Step 6: exit
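A minimal C sketch of linear search (0-based indexing; returning -1 instead of printing is our design choice):

/* Return the index of val in a[0..n-1], or -1 if it is absent. */
int linear_search(const int a[], int n, int val) {
    for (int i = 0; i < n; i++)   /* examine every element in turn */
        if (a[i] == val)
            return i;             /* match found: report its position */
    return -1;                    /* traversed the whole list: not present */
}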
Linear Search and Binary Search are the two popular searching techniques. Here we will discuss the Binary Search
Algorithm.
Binary search is the search technique that works efficiently on sorted lists. Hence, to search an element into some list
using the binary search technique, we must ensure that the list is sorted.
Binary search follows the divide and conquer approach in which the list is divided into two halves, and the item is
compared with the middle element of the list. If the match is found then, the location of the middle element is returned.
Otherwise, we search into either of the halves depending upon the result produced through the match.
Algorithm
1. Binary_Search(a, lower_bound, upper_bound, val) // 'a' is the given array, 'lower_bound' is the index of the first array ele
ment, 'upper_bound' is the index of the last array element, 'val' is the value to search
2. Step 1: set beg = lower_bound, end = upper_bound, pos = - 1
3. Step 2: repeat steps 3 and 4 while beg <=end
4. Step 3: set mid = (beg + end)/2
5. Step 4: if a[mid] = val
6. set pos = mid
7. print pos
8. go to step 6
9. else if a[mid] > val
10. set end = mid - 1
11. else
12. set beg = mid + 1
13. [end of if]
14. [end of loop]
15. Step 5: if pos = -1
16. print "value is not present in the array"
17. [end of if]
18. Step 6: exit
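A minimal C sketch of binary search over a sorted array (0-based indexing; mid is computed as beg + (end - beg) / 2, a small deviation from the pseudocode that avoids integer overflow for large indices):

/* Return the index of val in the sorted array a[0..n-1], or -1. */
int binary_search(const int a[], int n, int val) {
    int beg = 0, end = n - 1;
    while (beg <= end) {                    /* search interval is non-empty */
        int mid = beg + (end - beg) / 2;
        if (a[mid] == val)
            return mid;                     /* match at the middle element */
        else if (a[mid] > val)
            end = mid - 1;                  /* continue in the left half */
        else
            beg = mid + 1;                  /* continue in the right half */
    }
    return -1;                              /* interval empty: not present */
}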
Comparison of Searching methods in Data Structures
In different cases, we use different searching schemes to find a key. In this section, we will see the basic differences
between the two searching techniques: sequential (linear) search and binary search.
o Linear search finds a key present at the first position in constant time, while binary search finds a key present at
the middle position in constant time.
o In linear search, the sequence of elements in the container does not matter; in binary search, the elements must
be sorted in the container.
o Linear search can be implemented on both arrays and linked lists; binary search cannot be implemented directly
on a linked list without changing the basic rules of the list.
o The linear search algorithm is easy to implement and requires less code; the binary search algorithm is slightly
more complex and takes more code to implement.
o Linear search requires n comparisons in the worst case, whereas log n comparisons are sufficient for binary
search in the worst case.