


1. Merge Sort:
Merge sort is defined as a sorting algorithm that works by dividing an array into smaller
subarrays, sorting each subarray, and then merging the sorted subarrays back together to
form the final sorted array.
In simple terms, we can say that the process of merge sort is to divide the array into two
halves, sort each half, and then merge the sorted halves back together. This process is
repeated until the entire array is sorted.

Merge Sort Algorithm


How does Merge Sort work?
Merge sort is a recursive algorithm that continuously splits the array in half until it cannot
be divided further, i.e., the array has only one element left (an array with one element is
always sorted). Then the sorted subarrays are merged into one sorted array.
See the below illustration to understand the working of merge sort.
Illustration:
Let's consider an array arr[] = {38, 27, 43, 10}
 Initially divide the array into two equal halves:
Merge Sort: Divide the array into two halves
 These subarrays are further divided into two halves. Now they become arrays of unit
length that can no longer be divided, and arrays of unit length are always sorted.

Merge Sort: Divide the subarrays into two halves (unit length subarrays here)
These sorted subarrays are merged together, and we get bigger sorted subarrays.

Merge Sort: Merge the unit length subarrays into sorted subarrays
This merging process is continued until the sorted array is built from the smaller subarrays.
Merge Sort: Merge the sorted subarrays to get the sorted array

The following diagram shows the complete merge sort process for an example array {38, 27,
43, 3, 9, 82, 10}.
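The divide-and-merge process described above can be sketched in Python. This is a minimal illustrative sketch; the function names merge_sort and merge are our own, not from the text:

```python
def merge_sort(arr):
    """Recursively split arr, sort each half, and merge the sorted halves."""
    if len(arr) <= 1:              # an array of one element is always sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # sort the left half
    right = merge_sort(arr[mid:])  # sort the right half
    return merge(left, right)

def merge(left, right):
    """Merge two sorted lists into one sorted list in linear time."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:    # <= keeps equal elements in order (stability)
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])        # append whatever remains of either half
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 10]))  # [10, 27, 38, 43]
```

Note that the merge step copies elements into a new list, which is where the O(N) auxiliary space discussed below comes from.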
Complexity Analysis of Merge Sort:
Time Complexity: O(N log(N)). Merge Sort is a recursive algorithm and its time complexity
can be expressed as the following recurrence relation:
T(n) = 2T(n/2) + θ(n)
The above recurrence can be solved using either the Recurrence Tree method or the Master
method. It falls under case II of the Master method, and the solution of the recurrence is
θ(N log(N)). The time complexity of Merge Sort is θ(N log(N)) in all 3 cases (worst, average,
and best), as merge sort always divides the array into two halves and takes linear time to
merge the two halves.
Auxiliary Space: O(N). In merge sort, all elements are copied into an auxiliary array, so O(N)
auxiliary space is required.
Applications of Merge Sort:
 Sorting large datasets: Merge sort is particularly well-suited for sorting large
datasets due to its guaranteed worst-case time complexity of O(n log n).
 External sorting: Merge sort is commonly used in external sorting, where the data to
be sorted is too large to fit into memory.
 Custom sorting: Merge sort can be adapted to handle different input distributions,
such as partially sorted, nearly sorted, or completely unsorted data.
 Inversion Count Problem
Advantages of Merge Sort:
 Stability: Merge sort is a stable sorting algorithm, which means it maintains the
relative order of equal elements in the input array.
 Guaranteed worst-case performance: Merge sort has a worst-case time complexity
of O(N logN), which means it performs well even on large datasets.
 Parallelizable: Merge sort is a naturally parallelizable algorithm, which means it can
be easily parallelized to take advantage of multiple processors or threads.
Drawbacks of Merge Sort:
 Space complexity: Merge sort requires additional memory to store the merged sub-
arrays during the sorting process.
 Not in-place: Merge sort is not an in-place sorting algorithm, which means it requires
additional memory to store the sorted data. This can be a disadvantage in applications
where memory usage is a concern.
 Not always optimal for small datasets: For small datasets, Merge sort has a higher
time complexity than some other sorting algorithms, such as insertion sort. This can
result in slower performance for very small datasets.
2. Quick Sort:
QuickSort is a sorting algorithm based on the Divide and Conquer algorithm that picks an
element as a pivot and partitions the given array around the picked pivot by placing the pivot
in its correct position in the sorted array.
How does QuickSort work?
The key process in quickSort is partition(). The target of partition() is to place the pivot
(any element can be chosen as the pivot) at its correct position in the sorted array, putting
all smaller elements to the left of the pivot and all greater elements to the right.
Partitioning is then done recursively on each side of the pivot after the pivot is placed in
its correct position, and this finally sorts the array.

How Quicksort works


Choice of Pivot:
There are many different choices for picking pivots.
 Always pick the first element as a pivot.
 Always pick the last element as a pivot (implemented below)
 Pick a random element as a pivot.
 Pick the middle as the pivot.
Partition Algorithm:
The logic is simple, we start from the leftmost element and keep track of the index of smaller
(or equal) elements as i. While traversing, if we find a smaller element, we swap the current
element with arr[i]. Otherwise, we ignore the current element.
Let us understand the working of partition and the Quick Sort algorithm with the help of the
following example:
 Compare 80 with the pivot. It is greater than pivot.

Partition in QuickSort: Compare pivot with 80


 Compare 90 with the pivot. It is greater than the pivot.

Partition in QuickSort: Compare pivot with 90


 Arrange the pivot in its correct position.

Partition in QuickSort: Place pivot in its correct position


Illustration of Quicksort:
As the partition process is done recursively, it keeps on putting the pivot in its actual position
in the sorted array. Repeatedly putting pivots in their actual position makes the array sorted.
Follow the below images to understand how the recursive implementation of the partition
algorithm helps to sort the array.
 Initial partition on the main array:
Quicksort: Performing the partition
 Partitioning of the subarrays:
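The last-element-pivot scheme described above (commonly called Lomuto partitioning) can be sketched as follows. This is an illustrative sketch with function names of our own choosing:

```python
def partition(arr, low, high):
    """Place arr[high] (the pivot) at its correct position; smaller
    elements end up to its left, greater elements to its right."""
    pivot = arr[high]
    i = low - 1                      # index of the last element <= pivot
    for j in range(low, high):
        if arr[j] <= pivot:          # smaller (or equal) element found:
            i += 1                   # grow the "smaller" region
            arr[i], arr[j] = arr[j], arr[i]
    # put the pivot just after the "smaller" region
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1                     # pivot's final index

def quick_sort(arr, low=0, high=None):
    """Sort arr in place by recursively partitioning around a pivot."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)  # sort elements left of the pivot
        quick_sort(arr, p + 1, high) # sort elements right of the pivot
    return arr

print(quick_sort([10, 80, 30, 90, 40, 50, 70]))
```

Each call to partition() fixes one pivot in its final place, which is why repeatedly partitioning the subarrays sorts the whole array.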

Complexity Analysis of Quick Sort:


Time Complexity:
 Best Case: Ω (N log (N))
 Average Case: θ ( N log (N))
 Worst Case: O(N²)
Auxiliary Space: O(1), if we don't consider the recursive stack space. If we consider
the recursive stack space, then in the worst case quicksort could require O(N) space.
Advantages of Quick Sort:
 It is a divide-and-conquer algorithm that makes it easier to solve problems.
 It is efficient on large data sets.
 It has a low overhead, as it only requires a small amount of memory to function.
Disadvantages of Quick Sort:
 It has a worst-case time complexity of O(N²), which occurs when the pivot is chosen
poorly.
 It is not a good choice for small data sets.
 It is not a stable sort: if two elements have the same key, their relative order is not
preserved in the sorted output, because quicksort swaps elements according to the
pivot's position without considering their original positions.
3. Radix Sort:
Radix Sort is a linear sorting algorithm that sorts elements by processing them digit by digit.
It is an efficient sorting algorithm for integers or strings with fixed-size keys.
Rather than comparing elements directly, Radix Sort distributes the elements into buckets
based on each digit’s value. By repeatedly sorting the elements by their significant digits,
from the least significant to the most significant, Radix Sort achieves the final sorted order.
Radix Sort Algorithm
The key idea behind Radix Sort is to exploit the concept of place value. It assumes that
sorting numbers digit by digit will eventually result in a fully sorted list. Radix Sort can be
performed using different variations, such as Least Significant Digit (LSD) Radix Sort or
Most Significant Digit (MSD) Radix Sort.
How does Radix Sort Algorithm work?
To perform radix sort on the array [170, 45, 75, 90, 802, 24, 2, 66], we follow these steps:

How does Radix Sort Algorithm work | Step 1


Step 1: Find the largest element in the array, which is 802. It has three digits, so we will
iterate three times, once for each significant place.
Step 2: Sort the elements based on the unit place digits. We use a stable sorting
technique, such as counting sort, to sort the digits at each significant place.
Sorting based on the unit place:
 Perform counting sort on the array based on the unit place digits.
 The sorted array based on the unit place is [170, 90, 802, 2, 24, 45, 75, 66].
How does Radix Sort Algorithm work | Step 2
Step 3: Sort the elements based on the tens place digits.
Sorting based on the tens place:
 Perform counting sort on the array based on the tens place digits.
 The sorted array based on the tens place is [802, 2, 24, 45, 66, 170, 75, 90].

How does Radix Sort Algorithm work | Step 3


Step 4: Sort the elements based on the hundreds place digits.
Sorting based on the hundreds place:
 Perform counting sort on the array based on the hundreds place digits.
 The sorted array based on the hundreds place is [2, 24, 45, 66, 75, 90, 170, 802].

How does Radix Sort Algorithm work | Step 4


Step 5: The array is now sorted in ascending order.
The final sorted array using radix sort is [2, 24, 45, 66, 75, 90, 170, 802].
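The digit-by-digit passes above can be sketched in Python, with counting sort as the stable per-digit subroutine. This is an illustrative sketch for non-negative integers in base 10; the function names are our own:

```python
def counting_sort_by_digit(arr, exp):
    """Stable counting sort of arr by the digit at place value exp
    (exp = 1 for units, 10 for tens, 100 for hundreds, ...)."""
    output = [0] * len(arr)
    count = [0] * 10
    for num in arr:                      # count occurrences of each digit
        count[(num // exp) % 10] += 1
    for d in range(1, 10):               # prefix sums give final positions
        count[d] += count[d - 1]
    for num in reversed(arr):            # reverse traversal keeps it stable
        d = (num // exp) % 10
        count[d] -= 1
        output[count[d]] = num
    return output

def radix_sort(arr):
    """LSD radix sort: sort by units, then tens, then hundreds, ..."""
    if not arr:
        return arr
    exp = 1
    while max(arr) // exp > 0:           # one pass per digit of the maximum
        arr = counting_sort_by_digit(arr, exp)
        exp *= 10
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

The stability of the per-digit counting sort is essential: it preserves the ordering established by earlier (less significant) passes, which is what makes the final result fully sorted.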

How does Radix Sort Algorithm work | Step 5


Complexity Analysis of Radix Sort:
Time Complexity:
 Radix sort is a non-comparative integer sorting algorithm that sorts data with integer
keys by grouping the keys by the individual digits which share the same significant
position and value. It has a time complexity of O(d * (n + b)), where d is the number
of digits, n is the number of elements, and b is the base of the number system being
used.
 In practical implementations, radix sort is often faster than other comparison-based
sorting algorithms, such as quicksort or merge sort, for large datasets, especially when
the keys have many digits. However, its time complexity grows linearly with the
number of digits, and so it is not as efficient for small datasets.
Auxiliary Space:
 Radix sort also has a space complexity of O(n + b), where n is the number of
elements and b is the base of the number system. This space complexity comes from
the need to create buckets for each digit value and to copy the elements back to the
original array after each digit has been sorted.
