Sorting
The bubble sort makes multiple passes through a list. It compares adjacent items and exchanges
those that are out of order. Each pass through the list places the next largest value in its proper
place. In essence, each item “bubbles” up to the location where it belongs.
We take the unsorted array 14 33 27 35 10 for our example. Bubble sort takes O(n²) time, so we
are keeping the example short and precise.
Bubble sort starts with the very first two elements, comparing them to check which one is greater.
In this case, value 33 is greater than 14, so these two are already in sorted positions. Next, we
compare 33 with 27.
We find that 27 is smaller than 33 and these two values must be swapped.
Next, we compare 33 and 35 and find that both are already in sorted positions.
We then find that 10 is smaller than 35, hence these two are not in sorted order.
We swap these values and find that we have reached the end of the array. After one iteration,
the array should look like this −
14 27 33 10 35
To be precise, we are now showing how the array should look after each iteration. After the
second iteration, it should look like this −
14 27 10 33 35
Notice that after each iteration, at least one value moves to the end.
And when no swap is required, bubble sort learns that the array is completely sorted.
Pseudocode
We observe in the algorithm that bubble sort compares each pair of array elements until the whole
array is completely sorted in ascending order. This may cause a few complexity issues, for example
when the array needs no more swapping because all the elements are already in ascending order.
To ease out this issue, we use a flag variable, swapped, which helps us see whether any swap has
happened or not. If no swap has occurred, i.e. the array requires no more processing to be
sorted, it comes out of the loop.
Pseudocode of the bubble sort algorithm can be written as follows −
procedure bubbleSort( list : array of items )
   loop = list.count
   for i = 0 to loop-1 do
      swapped = false
      for j = 0 to loop-2 do
         if list[j] > list[j+1] then
            swap( list[j], list[j+1] )
            swapped = true
         end if
      end for
      /* no swap means the list is already sorted */
      if not swapped then break end if
   end for
   return list
end procedure
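As a working illustration, a minimal C version of the above pseudocode might look like the following sketch; the sample values simply reuse the array from the walkthrough, and the function name bubble_sort is chosen only for this example.

#include <stdio.h>
#include <stdbool.h>

/* Compare adjacent items and swap them when out of order; stop early
   once a complete pass makes no swap (the "swapped" flag above). */
static void bubble_sort(int list[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < n - 1 - i; j++) {
            if (list[j] > list[j + 1]) {
                int tmp = list[j];
                list[j] = list[j + 1];
                list[j + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped)      /* no swap: the array is already sorted */
            break;
    }
}

int main(void)
{
    int list[] = { 14, 33, 27, 35, 10 };
    int n = (int)(sizeof list / sizeof list[0]);

    bubble_sort(list, n);
    for (int i = 0; i < n; i++)
        printf("%d ", list[i]);
    printf("\n");
    return 0;
}

Thanks to the early exit, an already-sorted input is recognised after a single pass.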
Insertion Sort
Insertion sort builds a sorted sub-list at the lower end of the array, picking one element at a time
and inserting it at its proper place within that sub-list. It compares the first two elements and
finds that both 14 and 33 are already in ascending order. For now, 14 is in the sorted sub-list.
Insertion sort moves ahead, compares 33 with 27, and swaps them because 33 is not in the correct
position. By now we have 14 and 27 in the sorted sub-list. Next, it compares 33 with 10. These
values are not in sorted order, so we swap them. Swapping makes 27 and 10 unsorted, hence we swap
them too; again we find 14 and 10 out of order.
We swap them again. By the end of the third iteration, we have a sorted sub-list of 4 items.
This process goes on until all the unsorted values are covered by the sorted sub-list. Now we shall
see some programming aspects of insertion sort.
Algorithm
Now we have a bigger picture of how this sorting technique works, so we can derive simple
steps by which we can achieve insertion sort.
Step 1 − If it is the first element, it is already sorted.
Step 2 − Pick next element
Step 3 − Compare with all elements in the sorted sub-list
Step 4 − Shift all the elements in the sorted sub-list that are greater than the value to be sorted
Step 5 − Insert the value
Step 6 − Repeat until list is sorted
Pseudocode
procedure insertionSort( A : array of items )
   int holePosition
   int valueToInsert
   for i = 1 to length(A)-1 inclusive do
      /* select the value to be inserted */
      valueToInsert = A[i]
      holePosition = i
      /* shift larger sorted elements to the right to locate the hole */
      while holePosition > 0 and A[holePosition-1] > valueToInsert do
         A[holePosition] = A[holePosition-1]
         holePosition = holePosition - 1
      end while
      /* insert the value at the hole position */
      A[holePosition] = valueToInsert
   end for
end procedure
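To complement the pseudocode, here is a hedged C sketch of insertion sort assuming 0-based indexing; the sample array and the function name insertion_sort are purely illustrative.

#include <stdio.h>

/* Grow a sorted sub-list on the left: shift larger sorted items one slot
   to the right, then drop the current value into the hole. */
static void insertion_sort(int A[], int n)
{
    for (int i = 1; i < n; i++) {
        int valueToInsert = A[i];
        int holePosition = i;

        while (holePosition > 0 && A[holePosition - 1] > valueToInsert) {
            A[holePosition] = A[holePosition - 1];   /* shift right */
            holePosition--;
        }
        A[holePosition] = valueToInsert;             /* insert the value */
    }
}

int main(void)
{
    int A[] = { 14, 33, 27, 10, 35, 19, 42, 44 };
    int n = (int)(sizeof A / sizeof A[0]);

    insertion_sort(A, n);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);
    printf("\n");
    return 0;
}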
Selection sort
Selection sort is another algorithm that is used for sorting. This sorting algorithm iterates through
the array, finds the smallest number in the array, and swaps it with the first element if it is
smaller than the first element. Next, it goes on to the second element, and so on, until all elements
are sorted.
Consider an unsorted array as an example.
For the first position in the sorted list, the whole list is scanned sequentially. Starting from the
first position, where 14 is stored presently, we search the whole list and find that 10 is the
lowest value.
So we replace 14 with 10. After one iteration, 10, which happens to be the minimum value in the
list, appears in the first position of the sorted list.
For the second position, where 33 is residing, we start scanning the rest of the list in a linear
manner.
We find that 14 is the second lowest value in the list and it should appear at the second place.
We swap these values.
After two iterations, two least values are positioned at the beginning in a sorted manner.
The same process is applied to the rest of the items in the array until the entire array is sorted.
Now, let us learn some programming aspects of selection sort.
Algorithm
procedure selectionSort( list : array of items )
   for i = 1 to n - 1
      /* set current element as minimum */
      min = i
      /* check the rest of the list for a smaller element */
      for j = i+1 to n
         if list[j] < list[min] then
            min = j
         end if
      end for
      /* swap the minimum element with the current element */
      if min != i then
         swap list[min] and list[i]
      end if
   end for
end procedure
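For completeness, the following C sketch mirrors the selection sort pseudocode above; the sample data and the function name selection_sort are assumptions made only for this example.

#include <stdio.h>

/* On each pass, find the smallest remaining value and swap it into the
   next position of the growing sorted prefix. */
static void selection_sort(int list[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;                     /* set current element as minimum */
        for (int j = i + 1; j < n; j++)
            if (list[j] < list[min])
                min = j;
        if (min != i) {                  /* swap minimum into position i */
            int tmp = list[i];
            list[i] = list[min];
            list[min] = tmp;
        }
    }
}

int main(void)
{
    int list[] = { 14, 33, 27, 10, 35, 19, 42, 44 };
    int n = (int)(sizeof list / sizeof list[0]);

    selection_sort(list, n);
    for (int i = 0; i < n; i++)
        printf("%d ", list[i]);
    printf("\n");
    return 0;
}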
Postman Sort
Definition: A highly engineered variant of top-down radix sort where attributes of the key are
described so the algorithm can allocate buckets and distribute efficiently.
This is the algorithm used by letter-sorting machines in the post office: first states, then post
offices, then routes, etc. Since keys are not compared against each other, sorting time is
O(cn), where c depends on the size of the key and number of buckets.
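To make the idea concrete, here is a hedged C sketch of most-significant-digit-first bucket distribution over fixed-length, uppercase keys (think of the three characters as state, post office and route codes). The key length, the bucket count and the helper name msd_sort are assumptions made only for this illustration.

#include <stdio.h>
#include <string.h>

#define R 26            /* buckets: one per uppercase letter (assumption) */
#define KEYLEN 3        /* fixed key length assumed for this sketch       */
#define MAXN 16         /* maximum number of keys handled by the sketch   */

/* Distribute keys[lo..hi) into buckets on character position d,
   copy the buckets back in order, then recurse into each bucket. */
static void msd_sort(char keys[][KEYLEN + 1], int lo, int hi, int d)
{
    if (hi - lo <= 1 || d >= KEYLEN)
        return;

    char bucket[R][MAXN][KEYLEN + 1];
    int count[R] = { 0 };

    for (int i = lo; i < hi; i++) {                 /* distribute */
        int c = keys[i][d] - 'A';
        strcpy(bucket[c][count[c]++], keys[i]);
    }

    int pos = lo;
    for (int c = 0; c < R; c++) {                   /* combine, then recurse */
        int start = pos;
        for (int k = 0; k < count[c]; k++)
            strcpy(keys[pos++], bucket[c][k]);
        msd_sort(keys, start, pos, d + 1);
    }
}

int main(void)
{
    char keys[][KEYLEN + 1] = { "TXB", "CAA", "TXA", "NYC", "CAB" };
    int n = (int)(sizeof keys / sizeof keys[0]);

    msd_sort(keys, 0, n, 0);
    for (int i = 0; i < n; i++)
        printf("%s ", keys[i]);
    printf("\n");
    return 0;
}

Since keys are never compared with one another, the total work stays proportional to c·n, in line with the O(cn) bound quoted above.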
MERGE SORT
Merge sort is one of the external sorting techniques. The merge sort algorithm follows the divide
and conquer strategy. Given a sequence of n elements A[1], A[2], ..., A[N], the basic idea behind
the merge sort algorithm is to split the list into two sub-lists A[1], ..., A[N/2] and
A[(N/2)+1], ..., A[N]. If the list has even length, split it into equal sub-lists; if the list has
odd length, divide it in two by making the first sub-list one entry greater than the second.
Then split both sub-lists in two, and go on until each of the sub-lists is of size one. Finally,
start merging the individual sub-lists to obtain a sorted list. The time complexity of merge sort
is O(n log n).
Principle
The given list is divided into two roughly equal parts called the left and right sub files.
These sub files are sorted using the algorithm recursively and then the two sub files are
merged together to obtain the sorted file.
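No listing accompanies this section, so the following is only a C sketch of the principle under the usual assumptions: split the range recursively, then merge the two sorted halves through a temporary array. The array contents and the names merge_sort and merge are illustrative.

#include <stdio.h>
#include <stdlib.h>

/* Merge the two sorted runs A[lo..mid] and A[mid+1..hi] via the temporary array. */
static void merge(int A[], int tmp[], int lo, int mid, int hi)
{
    int i = lo, j = mid + 1, k = lo;

    while (i <= mid && j <= hi)
        tmp[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i <= mid)
        tmp[k++] = A[i++];
    while (j <= hi)
        tmp[k++] = A[j++];

    for (k = lo; k <= hi; k++)      /* copy the merged run back */
        A[k] = tmp[k];
}

/* Recursively split the range in two, sort each half, then merge them. */
static void merge_sort(int A[], int tmp[], int lo, int hi)
{
    if (lo >= hi)
        return;
    int mid = lo + (hi - lo) / 2;
    merge_sort(A, tmp, lo, mid);
    merge_sort(A, tmp, mid + 1, hi);
    merge(A, tmp, lo, mid, hi);
}

int main(void)
{
    int A[] = { 38, 27, 43, 3, 9, 82, 10 };
    int n = (int)(sizeof A / sizeof A[0]);
    int *tmp = malloc(n * sizeof *tmp);

    merge_sort(A, tmp, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);
    printf("\n");
    free(tmp);
    return 0;
}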
Advantages
Very useful for sorting bigger lists.
Applicable for external sorting also.
Disadvantages
Needs a temporary array every time for merging the sorted sub-lists.
QUICK SORT
This is the most widely used internal sorting algorithm. In its basic form, it was invented by
C.A.R. Hoare in 1960. Its popularity lies in the ease of implementation, moderate use of
resources and acceptable behaviour for a variety of sorting cases. The basis of quick sort is the
'divide and conquer' strategy, i.e. divide the problem [list to be sorted] into sub-problems
[sub-lists] until solved sub-problems [sorted sub-lists] are found. This is implemented as:
1. Choose one item A[I] from the list A[ ].
2. Rearrange the list so that this item is in its proper position, i.e. all preceding items have a
lesser value and all succeeding items have a greater value than this item. This leaves
A[0], A[1] .. A[I-1] in sublist 1,
A[I] in its final position, and
A[I+1], A[I+2] ... A[N] in sublist 2.
3. Repeat steps 1 & 2 for sublist 1 & sublist 2 till A[ ] is a sorted list.
As can be seen, this algorithm has a recursive structure. Step 2, the 'divide' procedure, is of
utmost importance in this algorithm. It is usually implemented as follows:
1. Choose A[I] as the dividing element (pivot).
2. From the left end of the list (A[0] onwards), scan till an item A[L] is found whose value is
greater than A[I].
3. From the right end of the list (A[N] backwards), scan till an item A[R] is found whose value is
less than A[I].
4. Swap A[L] & A[R].
5. Continue steps 2, 3 & 4 till the scan pointers cross. Stop at this stage.
6. At this point sublist 1 & sublist 2 are ready.
7. Now do the same for each of sublist 1 & sublist 2.
We will now give the implementation of quicksort and illustrate it by an example.
void quicksort(int A[], int X, int I)
{
    int L, R, V;
    if (X < I)
    {
        V = A[I]; L = X - 1; R = I;   /* the pivot V is the rightmost item A[I] */
        for (;;)
        {
            while (A[++L] < V);       /* scan from the left for an item >= pivot */
            while (A[--R] > V)        /* scan from the right for an item <= pivot */
                if (R == X) break;    /* guard: stop if the right scan reaches the left end */
            if (L >= R)               /* left & right pointers have crossed */
                break;
            swap(A, L, R);            /* swap A[L] & A[R] */
        }
        swap(A, L, I);                /* place the pivot in its final position */
        quicksort(A, X, L - 1);
        quicksort(A, L + 1, I);
    }
}
The main idea of quick sort is to divide the initial unsorted list into two parts, such that every
element in the first part is less than all the elements in the second part. The procedure is then
repeated recursively for both parts until the sequences reduce to length one, at which point the
list is sorted. The first step of the algorithm requires choosing a pivot value that will be used to
separate the small and large numbers. Commonly, the first or the last element of the list is chosen
as the pivot value. Once the pivot value has been selected, all the elements smaller than the pivot
are placed towards the beginning of the set and all the elements larger than the pivot are placed to
the right. This process essentially sets the pivot value in its correct place each time. Each side
of the pivot is then quick sorted. The quick sort algorithm reduces unnecessary swaps and moves an
item a great distance in one move. Alternatively, the median-of-three partitioning method is used to
select the pivot. In this method, three elements are randomly chosen and the median of these three
values is taken as the pivot element.
Principle
A pivot item near the middle of the list is chosen, and the items on either side are moved so
that the data items on one side of the pivot element are smaller than the pivot element whereas
those on the other side are larger. The middle (or pivot) element is now in its correct
position. This procedure is then applied recursively to the two parts of the list, on either side of
the pivot element, until the whole list is sorted.
Example
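As a stand-alone example of the implementation above, the sketch below supplies the swap helper (which the listing uses but does not define) and sorts a small, arbitrarily chosen array; the values are for illustration only.

#include <stdio.h>

static void swap(int A[], int i, int j)
{
    int t = A[i];
    A[i] = A[j];
    A[j] = t;
}

/* Same partition-and-recurse scheme as the listing above:
   A[I] is the pivot, X and I bound the current sub-list. */
static void quicksort(int A[], int X, int I)
{
    int L, R, V;
    if (X < I) {
        V = A[I]; L = X - 1; R = I;
        for (;;) {
            while (A[++L] < V);
            while (A[--R] > V)
                if (R == X) break;
            if (L >= R)
                break;
            swap(A, L, R);
        }
        swap(A, L, I);
        quicksort(A, X, L - 1);
        quicksort(A, L + 1, I);
    }
}

int main(void)
{
    int A[] = { 35, 50, 15, 25, 80, 20, 90, 45 };
    int n = (int)(sizeof A / sizeof A[0]);

    quicksort(A, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);
    printf("\n");
    return 0;
}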
Advantages
On average, faster than other commonly used sorting algorithms; it has very good average-case
behaviour.
Its average-case time complexity is O(n log n).
Disadvantages
As it uses recursion, stack space consumption is high.
RADIX SORT
Radix sort is a simple method used, for example, when alphabetizing a large list of names.
Intuitively, one might want to sort numbers on their most significant digit. However,
radix sort works counter-intuitively by sorting on the least significant digits first. On the
first pass, all the numbers are sorted on the least significant digit and combined in an
array. Then on the second pass, the entire set of numbers is sorted again on the second least
significant digit and combined in an array, and so on.
shift = 1
for loop = 1 to keysize do
   for entry = 1 to n do
      bucketnumber = (list[entry] div shift) mod 10
      append (bucket[bucketnumber], list[entry])
   end for
   list = combinebuckets()
   shift = shift * 10
end for
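A hedged C rendering of the pseudocode above, assuming non-negative integer keys of at most keysize decimal digits and a fixed-capacity bucket array, could look like this; the function name radix_sort and the sample values are illustrative.

#include <stdio.h>
#include <string.h>

#define MAXN 32

/* One distribution pass per decimal digit, least significant first. */
static void radix_sort(int list[], int n, int keysize)
{
    int bucket[10][MAXN];     /* ten buckets, one per decimal digit */
    int count[10];
    int shift = 1;

    for (int pass = 0; pass < keysize; pass++) {
        memset(count, 0, sizeof count);

        /* distribute every entry by its current digit */
        for (int i = 0; i < n; i++) {
            int digit = (list[i] / shift) % 10;
            bucket[digit][count[digit]++] = list[i];
        }

        /* combine the buckets back into the list, in digit order */
        int pos = 0;
        for (int d = 0; d < 10; d++)
            for (int k = 0; k < count[d]; k++)
                list[pos++] = bucket[d][k];

        shift *= 10;          /* move to the next more significant digit */
    }
}

int main(void)
{
    int list[] = { 170, 45, 75, 90, 802, 24, 2, 66 };
    int n = (int)(sizeof list / sizeof list[0]);

    radix_sort(list, n, 3);   /* keys here have at most 3 decimal digits */
    for (int i = 0; i < n; i++)
        printf("%d ", list[i]);
    printf("\n");
    return 0;
}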