
Quicksort

• Quicksort is another important sorting algorithm based on the
divide-and-conquer approach. Unlike mergesort, which divides its input
by position, quicksort divides its input elements according to their
value. A partition is an arrangement of the array’s elements so that
all the elements to the left of some element A[s] are less than or
equal to A[s], and all the elements to the right of A[s] are greater
than or equal to it:
ALGORITHM Quicksort(A[l..r])
    if l < r
        s ← Partition(A[l..r]) // s is a split position
        Quicksort(A[l..s − 1])
        Quicksort(A[s + 1..r])
We start by selecting a pivot—an element with respect to whose value
we are going to divide the subarray. There are several different
strategies for selecting a pivot; the simplest strategy is selecting the
subarray’s first element: p = A[l].
• We will scan the subarray from both ends, comparing the
subarray’s elements to the pivot.
• The left-to-right scan is denoted by index pointer i, and starts
with the second element. Since we want elements smaller
than the pivot to be in the left part of the subarray, this scan
skips over elements that are smaller than the pivot and stops
upon encountering the first element greater than or equal to
the pivot.
• The right-to-left scan, denoted by index pointer j, starts with
the last element of the subarray. Since we want elements
larger than the pivot to be in the right part of the subarray,
this scan skips over elements that are larger than the pivot
and stops on encountering the first element smaller than or
equal to the pivot.
• After both scans stop, three situations may arise, depending on
whether or not the scanning indices have crossed.
• If the scanning indices i and j have not crossed, i.e., i < j, we simply
exchange A[i] and A[j] and resume the scans by incrementing i and
decrementing j, respectively.
• If the scanning indices have crossed over, i.e., i > j, we will have
partitioned the subarray after exchanging the pivot with A[j].
• Finally, if the scanning indices stop while pointing to the same
element, i.e., i = j, the value they are pointing to must be equal to
p. Thus, we have the subarray partitioned, with the split position
s = i = j.

• We can combine the last case with the case of crossed-over
indices (i > j) by exchanging the pivot with A[j] whenever i ≥ j.
ALGORITHM HoarePartition(A[l..r])
    p ← A[l]
    i ← l; j ← r + 1
    repeat
        repeat i ← i + 1 until A[i] ≥ p
        repeat j ← j − 1 until A[j] ≤ p
        swap(A[i], A[j])
    until i ≥ j
    swap(A[i], A[j]) // undo last swap when i ≥ j
    swap(A[l], A[j])
    return j
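The two routines above can be rendered as runnable Python (a sketch, not the text's exact pseudocode: indices are 0-based, the left-to-right scan is explicitly guarded against running past the subarray, and the swap-and-undo step is replaced by breaking out of the loop before the extra swap is made):

```python
def hoare_partition(a, l, r):
    """Partition a[l..r] around the pivot a[l]; return the split position."""
    p = a[l]
    i, j = l, r + 1
    while True:
        # Left-to-right scan: stop at the first element >= pivot.
        i += 1
        while i <= r and a[i] < p:
            i += 1
        # Right-to-left scan: stop at the first element <= pivot.
        j -= 1
        while a[j] > p:
            j -= 1
        if i >= j:                 # indices crossed (or met): partition is done
            break
        a[i], a[j] = a[j], a[i]    # not crossed yet: exchange and resume scans
    a[l], a[j] = a[j], a[l]        # put the pivot at its final position s = j
    return j

def quicksort(a, l=0, r=None):
    if r is None:
        r = len(a) - 1
    if l < r:
        s = hoare_partition(a, l, r)  # s is a split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

nums = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(nums)
print(nums)  # [1, 2, 3, 4, 5, 7, 8, 9]
```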
Quicksort’s efficiency
• The number of key comparisons made before a partition is achieved is
n + 1 if the scanning indices cross over and n if they coincide. If all the
splits happen in the middle of corresponding subarrays, we will have
the best case. The number of key comparisons in the best case satisfies
the recurrence:
Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.

• According to the Master Theorem, Cbest(n) ∈ Θ(n log2 n); solving it
exactly for n = 2^k yields Cbest(n) = n log2 n.
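The agreement between the recurrence and the closed form can be checked numerically (a small sketch; `c_best` is a helper name introduced here that evaluates the recurrence directly for n a power of 2):

```python
import math

def c_best(n):
    """Evaluate Cbest(n) = 2*Cbest(n/2) + n with Cbest(1) = 0, for n a power of 2."""
    return 0 if n == 1 else 2 * c_best(n // 2) + n

for k in range(1, 6):
    n = 2 ** k
    # The recurrence value and n*log2(n) coincide for every power of 2.
    print(n, c_best(n), n * math.log2(n))
```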
• In the worst case, all the splits will be skewed to the extreme: one of
the two subarrays will be empty, and the size of the other will be just 1
less than the size of the subarray being partitioned. This unfortunate
situation will happen, in particular, for increasing arrays, i.e., for
inputs for which the problem is already solved! Indeed, if A[0..n − 1]
is a strictly increasing array and we use A[0] as the pivot, the left-to-
right scan will stop on A[1] while the right-to-left scan will go all the
way to reach A[0], indicating the split at position 0.
• So, after making n + 1 comparisons to get to this partition and
exchanging the pivot A[0] with itself, the algorithm will be left with
the strictly increasing array A[1..n − 1] to sort. This sorting of strictly
increasing arrays of diminishing sizes will continue until the last one
A[n − 2..n − 1] has been processed. The total number of key
comparisons made will be equal to
Cworst(n) = (n + 1) + n + . . . + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n²).
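The closed form of this sum is easy to confirm (a quick Python sanity check; `c_worst` is a helper name introduced here that simply adds the terms (n + 1) + n + ... + 3):

```python
def c_worst(n):
    """Sum the per-partition comparison counts (n+1) + n + ... + 3 term by term."""
    return sum(range(3, n + 2))

n = 100
# Matches the closed form (n+1)(n+2)/2 - 3.
assert c_worst(n) == (n + 1) * (n + 2) // 2 - 3
print(c_worst(n))  # 5148
```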
Average case behavior
• Let Cavg(n) be the average number of key comparisons made by quicksort on a
randomly ordered array of size n. A partition can happen in any position s (0 ≤ s ≤
n − 1) after n + 1 comparisons are made to achieve the partition. After the partition,
the left and right subarrays will have s and n − 1 − s elements, respectively.
Assuming that the partition split can happen in each position s with the same
probability 1/n, we get the following recurrence relation:
Cavg(n) = (1/n) Σ s=0..n−1 [(n + 1) + Cavg(s) + Cavg(n − 1 − s)] for n > 1,
Cavg(0) = 0, Cavg(1) = 0.
• Its solution, which is much trickier than the worst- and best-case
analyses, turns out to be

Cavg(n) ≈ 2n ln n ≈ 1.39n log2 n.

• Thus, on the average, quicksort makes only about 39% more comparisons
than in the best case. Moreover, its innermost loop is so efficient that it
usually runs faster than mergesort.
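The average-case recurrence can be evaluated bottom-up and compared with 1.39 n log2 n (an illustrative sketch; the running prefix sum exploits the symmetry between Cavg(s) and Cavg(n − 1 − s) to keep the tabulation linear):

```python
import math

def c_avg_table(n_max):
    """Tabulate Cavg(n) = (1/n) * sum_{s=0}^{n-1} [(n+1) + Cavg(s) + Cavg(n-1-s)]."""
    c = [0.0] * (n_max + 1)   # Cavg(0) = Cavg(1) = 0
    prefix = 0.0              # running sum c[0] + ... + c[n-1] (c[0] contributes 0)
    for n in range(2, n_max + 1):
        prefix += c[n - 1]
        # summing c[s] + c[n-1-s] over s = 0..n-1 gives twice the prefix sum
        c[n] = (n + 1) + 2.0 * prefix / n
    return c

c = c_avg_table(10000)
for n in (100, 1000, 10000):
    # The ratio climbs slowly toward the asymptotic constant 1.39.
    print(n, c[n] / (n * math.log2(n)))
```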
Because of quicksort’s importance, there have been persistent efforts
over the years to refine the basic algorithm. Among several
improvements discovered by researchers are:
 Better pivot selection methods, such as randomized quicksort, which
uses a random element, or the median-of-three method, which uses the
median of the leftmost, rightmost, and middle elements of the array
 Switching to insertion sort on very small subarrays (between 5 and 15
elements for most computer systems)
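These two refinements can be sketched together in Python (an illustrative hybrid, not a tuned implementation; the cutoff of 10 elements is one arbitrary choice from the 5–15 range mentioned above, and the partition is a 0-based Hoare-style scan):

```python
def median_of_three(a, l, r):
    """Return the index of the median of a[l], a[middle], a[r]."""
    m = (l + r) // 2
    trio = sorted([(a[l], l), (a[m], m), (a[r], r)])
    return trio[1][1]

def insertion_sort(a, l, r):
    """Sort the small subarray a[l..r] in place by straight insertion."""
    for i in range(l + 1, r + 1):
        key, j = a[i], i - 1
        while j >= l and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, l=0, r=None, cutoff=10):
    if r is None:
        r = len(a) - 1
    if r - l + 1 <= cutoff:
        insertion_sort(a, l, r)      # small subarray: switch to insertion sort
        return
    m = median_of_three(a, l, r)     # better pivot: median of three
    a[l], a[m] = a[m], a[l]          # move the pivot to the front, then partition
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < p:
            i += 1
        j -= 1
        while a[j] > p:
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]
    hybrid_quicksort(a, l, j - 1, cutoff)
    hybrid_quicksort(a, j + 1, r, cutoff)
```

Median-of-three also sidesteps the worst case on already-sorted input described earlier, since the pivot is then the true median rather than the smallest element.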
Disadvantages
1. Instability: Quicksort is not a stable sorting algorithm.
2. Space Requirements:
o Uses a stack for subarray parameters.
o Less space efficient than heapsort, which uses O(1) space.
3. Worst-Case Performance:
o Worst-case time complexity is O(n²), though sophisticated pivot
selection can make this rare.
4. Sensitivity:
o Performance varies with implementation details, computer
architecture, and data type.
Conclusion
• Despite its weaknesses, Quicksort remains highly influential and
widely used due to its efficiency and average-case performance.
