Unit I (Design and Analysis of Algorithm)
Lecture-1
Syllabus
Unit-II:
Advanced Data Structures: Red-Black Trees, B-Trees, Binomial Heaps,
Fibonacci Heaps, Tries, Skip List.
Unit-III:
Divide and Conquer with Examples Such as Sorting, Matrix
Multiplication, Convex Hull and Searching. Greedy Methods with
Examples Such as Optimal Reliability Allocation, Knapsack, Minimum
Spanning Trees – Prim’s and Kruskal’s Algorithms, Single Source
Shortest Paths – Dijkstra’s and Bellman-Ford Algorithms.
Unit-IV:
Dynamic Programming with Examples Such as Knapsack, All Pair
Shortest Paths – Warshall’s and Floyd’s Algorithms, Resource
Allocation Problem. Backtracking, Branch and Bound with Examples
Such as Travelling Salesman Problem, Graph Coloring, n-Queen
Problem, Hamiltonian Cycles and Sum of Subsets.
Unit-V:
Selected Topics: Algebraic Computation, Fast Fourier Transform,
String Matching, Theory of NP-Completeness, Approximation
Algorithms and Randomized Algorithms.
Text books
Course Outcome
CO1 Design new algorithms, prove them correct, and analyze
their asymptotic and absolute runtime and memory
demands.
CO2 Find an algorithm to solve the problem (create) and prove
that the algorithm solves the problem correctly (validate).
CO3 Understand the mathematical criterion for deciding
whether an algorithm is efficient, and know many
practically important problems that do not admit any
efficient algorithms.
CO4 Apply classical sorting, searching, optimization and graph
algorithms.
CO5 Understand basic techniques for designing algorithms,
including the techniques of recursion, divide-and-conquer,
and greedy.
Definition of Algorithm
Characteristics of an algorithm
Algorithm vs Program
Algorithm:
•Written in a simple English-like language, with no predefined syntax.
•The person writing it must have domain knowledge.
•Critical usage in the Design phase of SDLC.
•H/W and OS independent.
•Analysis is done to ensure its efficiency.
Program:
•Follows the predefined syntax and grammar of a programming language.
•Developed and made by a programmer.
•Used in the Coding phase of SDLC.
•H/W and OS dependent.
•Testing is done to check its correctness.
Pseudo Code
Pseudo code gives a high-level description of an algorithm
without the ambiguity associated with plain text but also
without the need to know the syntax of a particular
programming language.
It is an artificial and informal language that helps
programmers develop a program.
It is a text-based detailed design tool.
Pseudo code is an intermediary between an algorithm
and a program.
Pseudo Code Vs Algorithm: Example
Pseudo Code : Insertion Sort
for j = 2 to A.length
    key = A[j]
    // Insert A[j] into sorted sequence A[1 … j-1]
    i = j - 1
    while (i > 0 && A[i] > key)
        A[i+1] = A[i]
        i = i - 1
    A[i+1] = key
Pseudo code conventions
Design Strategy of Algorithm
Analysis of algorithms
Running time of an algorithm
Insertion Sort
Insertion Sort Algorithm
Insertion_Sort(A)
1. n ← length[A]
2. for j ← 2 to n
   1. a ← A[j]
   2. // Insert A[j] into the sorted sequence A[1 .. j-1].
   3. i ← j - 1
   4. while i > 0 and a < A[i]
      1. A[i+1] ← A[i]
      2. i ← i - 1
   5. A[i+1] ← a
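The algorithm above maps line-for-line onto runnable code. A minimal Python sketch for illustration (the function name and test values are ours, not from the slides; Python lists are 0-based, so j starts at 1 instead of 2):

def insertion_sort(A):
    # Sort list A in place in ascending order.
    n = len(A)
    for j in range(1, n):            # slides: for j <- 2 to n (1-based)
        a = A[j]
        # Insert A[j] into the sorted sequence A[0 .. j-1].
        i = j - 1
        while i >= 0 and a < A[i]:
            A[i + 1] = A[i]          # shift larger keys one step right
            i = i - 1
        A[i + 1] = a
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]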
Insertion Sort Algorithm Analysis
Insertion_Sort(A)                                      cost    times
1. n ← length[A]                                       c1      1
2. for j ← 2 to n                                      c2      n
3.    a ← A[j]                                         c3      n-1
4.    // Insert A[j] into the sorted sequence A[1 .. j-1].   0    n-1
5.    i ← j - 1                                        c4      n-1
6.    while i > 0 and a < A[i]                         c5      ∑_{j=2}^{n} t_j
7.       A[i+1] ← A[i]                                 c6      ∑_{j=2}^{n} (t_j - 1)
8.       i ← i - 1                                     c7      ∑_{j=2}^{n} (t_j - 1)
9.    A[i+1] ← a                                       c8      n-1
Here t_j denotes the number of times the while-loop test is executed
for that value of j.
Time complexity of Insertion sort
The total running time is
T(n) = c1·1 + c2·n + c3·(n-1) + c4·(n-1) + c5·∑_{j=2}^{n} t_j
       + c6·∑_{j=2}^{n} (t_j - 1) + c7·∑_{j=2}^{n} (t_j - 1) + c8·(n-1) ……… (1)
Best case: This case occurs when the data is already sorted. Then
t_j = 1 for every j, every summation vanishes, and T(n) is linear in n;
therefore T(n) = θ(n).
Time complexity of Insertion sort
Worst case: This case occurs when the data is in reverse sorted order.
In this case, the value of t_j will be j. Put t_j = j in equation (1) and
find T(n). Therefore,
T(n) = c1·1 + c2·n + c3·(n-1) + c4·(n-1) + c5·∑_{j=2}^{n} j
       + c6·∑_{j=2}^{n} (j-1) + c7·∑_{j=2}^{n} (j-1) + c8·(n-1)
     = c1·1 + c2·n + c3·(n-1) + c4·(n-1) + c5·(n+2)(n-1)/2
       + c6·n(n-1)/2 + c7·n(n-1)/2 + c8·(n-1)
     = ((c5+c6+c7)/2)·n² + (c2+c3+c4+(c5-c6-c7)/2+c8)·n
       + (c1-c3-c4-c5-c8)
     = an² + bn + c
Clearly T(n) is in quadratic form; therefore T(n) = θ(n²).
Time complexity of Insertion sort
Average case: This case occurs when the data is in random order
(neither the best nor the worst case).
In this case, the value of t_j will be about j/2. Put t_j = j/2 in
equation (1) and find T(n). Therefore,
T(n) = c1·1 + c2·n + c3·(n-1) + c4·(n-1) + c5·∑_{j=2}^{n} (j/2)
       + c6·∑_{j=2}^{n} (j/2 - 1) + c7·∑_{j=2}^{n} (j/2 - 1) + c8·(n-1)
     = c1·1 + c2·n + c3·(n-1) + c4·(n-1) + c5·(n+2)(n-1)/4
       + c6·(n-2)(n-1)/4 + c7·(n-2)(n-1)/4 + c8·(n-1)
     = ((c5+c6+c7)/4)·n² + (c2+c3+c4+(c5-3c6-3c7)/4+c8)·n
       + (c1-c3-c4-c5/2+c6/2+c7/2-c8)
     = an² + bn + c
Clearly T(n) is in quadratic form; therefore T(n) = θ(n²).
Design and Analysis of Algorithms
Lecture-2
Some other sorting algorithms
Selection_sort(A)
n ←length[A]
for i←1 to n-1
min ← i
for j←i+1 to n
if(A[j] < A[min])
min ←j
Interchange A[i] ↔ A[min]
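A runnable Python sketch of the same algorithm, for reference (0-based indexing; the function name and test values are ours, not from the slides):

def selection_sort(A):
    # Repeatedly select the minimum of the unsorted suffix A[i..n-1]
    # and swap it into position i.
    n = len(A)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            if A[j] < A[m]:
                m = j
        A[i], A[m] = A[m], A[i]      # interchange A[i] <-> A[min]
    return A

print(selection_sort([29, 10, 14, 37, 13]))  # [10, 13, 14, 29, 37]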
Some other sorting algorithms
Bubble_Sort(A)
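The body of this slide is not reproduced in these notes, so the following is a minimal sketch of standard bubble sort rather than the exact version from the slide (the early-exit flag is a common refinement):

def bubble_sort(A):
    # Repeatedly swap adjacent out-of-order pairs; after pass i,
    # the i+1 largest elements are in their final positions.
    n = len(A)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
                swapped = True
        if not swapped:              # no swaps: already sorted
            break
    return A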
Divide and Conquer approach
The divide-and-conquer paradigm involves three steps at each level
of the recursion: Divide the problem into subproblems, Conquer the
subproblems by solving them recursively, and Combine the subproblem
solutions into a solution for the original problem.
Analysis of Divide and Conquer based algorithm
A recurrence equation for the running time of a divide-
and-conquer algorithm uses the three steps of the basic
paradigm.
Merge Sort Algorithm
Merge Sort Algorithm Analysis
Merge Sort Algorithm Analysis
Iterative method:
T(n) = 2T(n/2) + cn
     = 2(2T(n/4) + cn/2) + cn
     = 2²T(n/4) + 2cn
     = 2²(2T(n/8) + cn/4) + 2cn
     = 2³T(n/8) + 3cn
     = …
     = 2^k T(n/2^k) + kcn
     = nT(1) + cn·log n      (let n = 2^k, so k = log n)
     = dn + cn·log n         (since T(1) = d)
     = θ(n log n)
Lecture-3
Asymptotic Notations
θ-notation (Theta notation)
For a given function g(n), it is denoted by θ(g(n)).
It is defined as follows:
θ(g(n)) = { f(n) : ∃ positive constants c1, c2 and n0 such that
0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n), ∀ n ≥ n0 }
This notation is said to be a tight bound.
If f(n) ∈ θ(g(n)) then we write f(n) = θ(g(n)).
Asymptotic Notations
O-notation (Big-oh notation)
For a given function g(n), it is denoted by O(g(n)).
It is defined as follows:
O(g(n)) = { f(n) : ∃ positive constants c and n0 such that
0 ≤ f(n) ≤ c·g(n), ∀ n ≥ n0 }
Asymptotic Notations
ω-notation (little-omega notation)
The asymptotic lower bound provided by Ω-notation
may or may not be asymptotically tight.
ω-notation denotes a lower bound that is not
asymptotically tight.
For a given function g(n), it is denoted by ω(g(n)).
It is defined as follows:
ω(g(n)) = { f(n) : for any positive constant c > 0, there
exists a constant n0 > 0 such that
0 ≤ c·g(n) < f(n), ∀ n ≥ n0 }
Asymptotic Notations
Example: Show that (1/2)n² - 3n = θ(n²).
Solution: Using the definition of θ-notation,
c1·g(n) ≤ f(n) ≤ c2·g(n), ∀ n ≥ n0
In this question, f(n) = (1/2)n² - 3n and g(n) = n², therefore
c1·n² ≤ (1/2)n² - 3n ≤ c2·n², ∀ n ≥ n0
Dividing throughout by n², we get
c1 ≤ (1/2) - (3/n) ≤ c2, ∀ n ≥ n0 ……… (1)
Now we have to find c1, c2 and n0 such that equation (1) is satisfied.
Consider the left part of (1): c1 ≤ (1/2) - (3/n) ……… (2)
The value of c1 must be a positive value less than or equal to the
minimum value of (1/2) - (3/n). For n ≥ 7, the minimum value of
(1/2) - (3/n) is 1/14 (attained at n = 7). Therefore c1 = 1/14,
and this value of c1 satisfies equation (2) for n ≥ 7.
Asymptotic Notations
Consider the right part of (1): (1/2) - (3/n) ≤ c2 ……… (3)
The value of c2 must be a positive value greater than or equal
to the maximum value of (1/2) - (3/n). The maximum value of
(1/2) - (3/n) is 1/2. Therefore c2 = 1/2, and this value of c2
satisfies equation (3) for n ≥ 1.
Therefore, for c1 = 1/14, c2 = 1/2 and n0 = 7, equation (1)
is satisfied.
Hence, by the definition of θ-notation,
(1/2)n² - 3n = θ(n²).
It is proved.
Asymptotic Notations
Example: Show that 2n + 5 = O(n²).
Solution: Using the definition of O-notation,
f(n) ≤ c·g(n), ∀ n ≥ n0
In this question, f(n) = 2n + 5 and g(n) = n², therefore
2n + 5 ≤ c·n², ∀ n ≥ n0
Dividing throughout by n², we get
(2/n) + (5/n²) ≤ c, ∀ n ≥ n0 ……… (1)
Now we have to find c and n0 such that equation (1) is
satisfied.
Asymptotic Notations
The value of c must be a positive value greater than or equal to
the maximum value of (2/n) + (5/n²).
The maximum value of (2/n) + (5/n²) is 7 (at n = 1).
Therefore c = 7.
Clearly equation (1) is satisfied for c = 7 and n ≥ 1.
Hence, by the definition of O-notation,
2n + 5 = O(n²).
It is proved.
Asymptotic Notations
Example: Show that 2n² + 5n + 6 = Ω(n).
Solution: Using the definition of Ω-notation,
c·g(n) ≤ f(n), ∀ n ≥ n0
In this question, f(n) = 2n² + 5n + 6 and g(n) = n, therefore
c·n ≤ 2n² + 5n + 6, ∀ n ≥ n0
Dividing throughout by n, we get
c ≤ 2n + 5 + (6/n), ∀ n ≥ n0 ……… (1)
Now we have to find c and n0 such that equation (1) is
always satisfied.
Asymptotic Notations
The value of c must be a positive value less than or equal to the
minimum value of 2n + 5 + (6/n).
The minimum value of 2n + 5 + (6/n) is 12 (at n = 2).
Therefore c = 12.
Clearly equation (1) is satisfied for c = 12 and n ≥ 2.
Hence, by the definition of Ω-notation,
2n² + 5n + 6 = Ω(n).
It is proved.
Asymptotic Notations
Example: Show that 2n² = o(n³).
Solution: Using the definition of o-notation,
f(n) < c·g(n), ∀ n ≥ n0
Here, f(n) = 2n² and g(n) = n³. Therefore,
2n² < c·n³, ∀ n ≥ n0
Dividing throughout by n³, we get
(2/n) < c, ∀ n ≥ n0 ……… (1)
For c = 1, n0 = 3 satisfies (1).
For c = 0.5, n0 = 7 satisfies (1).
Therefore, for every c > 0 there exists an n0 which satisfies (1).
Hence 2n² = o(n³).
Asymptotic Notations
Example: Show that 2n² ≠ o(n²).
Solution: Using the definition of o-notation,
f(n) < c·g(n), ∀ n ≥ n0
Here, f(n) = 2n² and g(n) = n². Therefore,
2n² < c·n², ∀ n ≥ n0
Dividing throughout by n², we get
2 < c, ∀ n ≥ n0 ……… (1)
Clearly, for c = 1 (indeed for any c ≤ 2), no n0 satisfies
inequality (1). Since the o-notation condition must hold for
every positive c, it fails here. Hence 2n² ≠ o(n²).
Asymptotic Notations
Example: Show that 2n² = ω(n).
Solution: Using the definition of ω-notation,
c·g(n) < f(n), ∀ n ≥ n0
Here, f(n) = 2n² and g(n) = n. Therefore,
c·n < 2n², ∀ n ≥ n0
Dividing throughout by n, we get
c < 2n, ∀ n ≥ n0 ……… (1)
For c = 1, n0 = 1 satisfies (1).
For c = 10, n0 = 6 satisfies (1).
Therefore, for every c > 0 there exists an n0 which satisfies (1).
Hence 2n² = ω(n).
Asymptotic Notations
Example: Show that 2n² ≠ ω(n²).
Solution: Using the definition of ω-notation,
c·g(n) < f(n), ∀ n ≥ n0
Here, f(n) = 2n² and g(n) = n². Therefore,
c·n² < 2n², ∀ n ≥ n0
Dividing throughout by n², we get
c < 2, ∀ n ≥ n0 ……… (1)
Clearly, for c = 3 (indeed for any c ≥ 2), no n0 satisfies (1).
Since the ω-notation condition must hold for every positive c,
it fails here. Hence 2n² ≠ ω(n²).
Design and Analysis of Algorithms
Lecture-4
Asymptotic Notations
Limit-based method to compute the asymptotic notation relating
two functions:
First compute lim_{n→∞} f(n)/g(n) = c.
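The decision rules that go with this limit do not survive in these notes; the standard ones, stated here in LaTeX for reference, are:

\lim_{n \to \infty} \frac{f(n)}{g(n)} = c \quad\Longrightarrow\quad
\begin{cases}
c = 0 & f(n) = o(g(n)), \text{ hence } f(n) = O(g(n)) \\
0 < c < \infty & f(n) = \Theta(g(n)) \\
c = \infty & f(n) = \omega(g(n)), \text{ hence } f(n) = \Omega(g(n))
\end{cases}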
Asymptotic Notations
Exercises
Design and Analysis of Algorithms
Lecture-5
Recurrence Relation
Recurrence relation
Recurrence equations will be of the following form:-
Back Substitution Method
Solution
Example
Solution
Example
Solution
Substitution method
Substitution method
Example: Find the upper bound of the following recurrence
relation: T(n) = 2T(⌊n/2⌋ + 17) + n
Solution: ( Do yourself)
Substitution method
Example: Solve the following recurrence relation
T(n) = 2T(⌊√n⌋) + lg n
Solution: ( Do yourself)
Substitution method
Example: Solve the following recurrence relation
T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 1
Design and Analysis of Algorithms
Lecture-6
Recurrence Relation
Example
Solution
Example
Solution
Recurrence tree method
Example: Solve the following recurrence equation
T(n) = 3T(⌊n/4⌋) + θ(n²)
Solution: (Do Yourself)
Recurrence tree method
Example: Solve the following recurrence relation using recurrence
tree method
T(n) = T(n/3) + T(2n/3) + θ(n)
Solution: (Do Yourself)
Recurrence tree method
Exercise
(1) Draw the recursion tree for T(n) = 4T(⌊n/2⌋) + cn, where
c is a constant, and provide a tight asymptotic bound on its
solution. Verify your bound by the substitution method.
(2) Use a recursion tree to give an asymptotically tight
solution to the recurrence T(n) = T(n-a) + T(a) + cn, where
a ≥ 1 and c > 0 are constants.
(3) Use a recursion tree to give an asymptotically tight
solution to the recurrence T(n) = T(αn) + T((1-α)n) + cn,
where α is a constant in the range 0 < α < 1 and c > 0 is also
a constant.
Design and Analysis of Algorithms
Lecture-7
Recurrence Relation
Extended Master theorem for T(n) = aT(n/b) + θ(n^k log^p n),
where a ≥ 1 and b > 1:
Case I: if log_b a > k, then the solution is Θ(n^(log_b a)).
Case II: if log_b a = k,
(i) if p > -1 then the solution is Θ(n^k log^(p+1) n)
(ii) if p = -1 then Θ(n^k log log n)
(iii) if p < -1 then Θ(n^k)
Case III: if log_b a < k,
(i) if p ≥ 0 then the solution is Θ(n^k log^p n)
(ii) if p < 0 then the solution is Θ(n^k)
Example
Master Theorem Method
Example: Solve the following recurrence relations using
master theorem method
(a) T(n) = 9T(n/3) + n
(b) T(n) = T(2n/3) + 1
(c) T(n) = 3T(n/4) + n log n
Solution:
(a) Consider T(n) = 9T(n/3) + n
In this recurrence relation, a = 9, b = 3 and f(n) = n.
Therefore, n^(log_b a) = n^(log_3 9) = n².
Clearly n^(log_b a) > f(n), so case 1 may apply.
Now determine ϵ > 0 such that f(n) = O(n^(2-ϵ)). Here ϵ = 1.
Therefore case 1 applies.
Hence the solution will be T(n) = θ(n²).
Master Theorem Method
Solution:
(b) Consider T(n) = T(2n/3) + 1
In this recurrence relation, a = 1, b = 3/2 and f(n) = 1.
Therefore, n^(log_b a) = n^(log_{3/2} 1) = n⁰ = 1.
Clearly f(n) = θ(n^(log_b a)), so case 2 applies.
Hence the solution will be T(n) = θ(log n).
Master Theorem Method
Solution:
(c) Consider T(n) = 3T(n/4) + n log n
In this recurrence relation, a = 3, b = 4 and f(n) = n log n.
Therefore, n^(log_b a) = n^(log_4 3) ≈ n^0.793.
Clearly n^(log_b a) < f(n), so case 3 may apply.
Now determine ϵ > 0 such that f(n) = Ω(n^(0.793+ϵ)). Here ϵ ≈ 0.207.
Now check the regularity condition a·f(n/b) ≤ c·f(n), i.e. 3f(n/4) ≤ c·f(n):
3(n/4)·log(n/4) ≤ c·n·log n
⇒ (3/4)·log(n/4) ≤ c·log n
Clearly this inequality is satisfied for c = 3/4. Therefore
case 3 applies.
Hence the solution will be T(n) = θ(n log n).
Master theorem method
Example: Solve the following recurrence relation
T(n) = 2T(n/2) + n log n
Solution: Here, a = 2, b = 2 and f(n) = n log n.
n^(log_b a) = n^(log_2 2) = n
Comparing n^(log_b a) with f(n), we see that f(n) is greater than
n^(log_b a). Therefore, case 3 might apply.
Now we have to determine ϵ > 0 which satisfies f(n) = Ω(n^(log_b a + ϵ)),
i.e. n log n = Ω(n^(1+ϵ)). Clearly no such ϵ exists, because log n
grows more slowly than any polynomial n^ϵ. Therefore case 3 cannot
be applied. The other two cases are also not satisfied. Therefore
the Master theorem cannot be applied to this recurrence relation.
Generalized Master theorem
Theorem: If f(n) = θ(n^(log_b a) lg^k n), where k ≥ 0, then the
solution of the recurrence will be T(n) = θ(n^(log_b a) lg^(k+1) n).
Applying this to the recurrence above, T(n) = 2T(n/2) + n lg n:
here n^(log_b a) = n and k = 1, so T(n) = θ(n lg² n).
Recurrence relation
Exercise
1. Use the master method to give tight asymptotic bounds for
the following recurrences:
(a) T(n) = 8T(n/2) + θ(n²)
(b) T(n) = 7T(n/2) + θ(n²)
(c) T(n) = 2T(n/4) + 1
(d) T(n) = 2T(n/4) + √n
2. Can the master method be applied to the recurrence
T(n) = 4T(n/2) + n² log n? Why or why not? Give an asymptotic
upper bound for this recurrence.
Design and Analysis of Algorithms
Lecture-8
Design and Analysis of Algorithms
Lecture-9
Recurrence Relation
Lecture-10
How does Merge Sort work?
Example
Solution Contd..
These subarrays are further divided into two halves, until they
become arrays of unit length that can no longer be divided;
arrays of unit length are always sorted.
Solution Contd..
These sorted subarrays are merged together, and we
get bigger sorted subarrays.
Solution Contd..
This merging process is continued until the sorted array is
built from the smaller subarrays.
Algorithm: Merge Sort
Algorithm MergeSort(l,h)
{
If l < h
{
mid = (l + h) /2 ;
MergeSort(l,mid);
MergeSort(mid + 1, h);
Merge(l,mid,h);
}
}
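The Merge subroutine is not reproduced in these notes. A runnable Python sketch of the complete algorithm (all names are ours; this version returns a new list instead of sorting in place):

def merge(left, right):
    # Merge two sorted lists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # <= keeps equal keys stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])             # one side may have leftovers
    out.extend(right[j:])
    return out

def merge_sort(A):
    # Divide into halves, conquer recursively, combine by merging.
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    return merge(merge_sort(A[:mid]), merge_sort(A[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]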
Complexity Analysis of Merge Sort:
Applications of Merge Sort:
Sorting large datasets: Merge sort is particularly well-suited for
sorting large datasets due to its guaranteed worst-case time
complexity of O(n log n).
External sorting: Merge sort is commonly used in external
sorting, where the data to be sorted is too large to fit into
memory.
Custom sorting: Merge sort can be adapted to handle different
input distributions, such as partially sorted, nearly sorted, or
completely unsorted data.
Advantages and Disadvantages of Merge Sort
Advantages of Merge Sort:
Stability: Merge sort is a stable sorting algorithm, which means it maintains
the relative order of equal elements in the input array.
Guaranteed worst-case performance: Merge sort has a worst-case time
complexity of O(n log n), which means it performs well even on large
datasets.
Parallelizable: Merge sort is a naturally parallelizable algorithm, which
means it can be easily parallelized to take advantage of multiple processors
or threads.
Drawbacks of Merge Sort:
Space complexity: Merge sort requires additional memory to store the
merged sub-arrays during the sorting process.
Not in-place: Merge sort is not an in-place sorting algorithm, which means it
requires additional memory to store the sorted data. This can be a
disadvantage in applications where memory usage is a concern.
Not always optimal for small datasets: for small inputs, merge sort's
recursion and extra-memory overhead can make it slower than simpler
algorithms such as insertion sort.
Heapsort
Heap
Heapsort
Max-heap Array
Heapsort
Index: If i is the index of a node, then the indices of its parent
and children are the following:
Parent(i) = ⌊i/2⌋
Left(i) = 2i
Right(i) = 2i+1
Note: Root node has always index 1 i.e. A[1] is root element.
Heap-size: Heap-size is equal to the number of elements in the heap.
Height of a node: The height of a node in a heap is the
number of edges on the longest simple downward path from the
node to a leaf.
Height of heap: The height of the heap is equal to the height of its
root.
Heapsort
Types of heap
There are two kinds of binary heaps:
(1) max-heaps (2) min-heaps
Max-heap: The heap is said to be a max-heap if it satisfies the
max-heap property.
The max-heap property is that the value at the parent node
is always greater than or equal to the values at its children.
Min-heap: The heap is said to be a min-heap if it satisfies the
min-heap property.
The min-heap property is that the value at the parent node
is always less than or equal to the values at its children.
Heapsort
Heap sort algorithm consists of the following two sub-
algorithms.
(1)Max-Heapify: It is used to maintain the max-heap
property.
(2)Build-Max-Heap: It is used to construct a max-heap for
the given set of elements.
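As a concrete reference, here is a minimal Python sketch of both sub-algorithms plus the final sort. It uses 0-based indexing, so the parent/child formulas shift by one compared with the 1-based formulas given earlier; all names are ours, not from the slides:

def max_heapify(A, i, heap_size):
    # Float A[i] down until the subtree rooted at i is a max-heap.
    l, r = 2 * i + 1, 2 * i + 2          # 0-based left/right children
    largest = i
    if l < heap_size and A[l] > A[largest]:
        largest = l
    if r < heap_size and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, heap_size)

def build_max_heap(A):
    # Leaves are already heaps; heapify internal nodes bottom-up.
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))

def heapsort(A):
    build_max_heap(A)
    for end in range(len(A) - 1, 0, -1):
        A[0], A[end] = A[end], A[0]      # move current maximum to the end
        max_heapify(A, 0, end)           # restore the heap on A[0..end-1]
    return A

print(heapsort([5, 13, 2, 25, 7, 17, 20, 8, 4]))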
Heapsort
Max-Heapify Algorithm
Max-heapify floats the value at node i down the tree until the
subtree rooted at i satisfies the max-heap property.
Heapsort
Time complexity of Max-Heapify Algorithm
The running time of max-heapify is determined by the
following recurrence relation:
T(n) ≤ T(2n/3) + θ(1)
Here n is the size of the subtree rooted at node i.
Using the master theorem, the solution of this recurrence
relation is
T(n) = θ(lg n)
Design and Analysis of Algorithms
Lecture-11
Heapsort
Build-Max-Heap Algorithm (cont.)
Our tighter analysis relies on the properties that an n-element heap
has height ⌊lg n⌋ and at most ⌈n/2^(h+1)⌉ nodes of any height h.
Heapsort
Time complexity of Build-Max-Heap Algorithm
Now, T(n) = Σ_{h=0}^{⌊lg n⌋} ⌈n/2^(h+1)⌉ · O(h)
          = O( n · Σ_{h=0}^{⌊lg n⌋} h/2^h )
Since Σ_{h=0}^{∞} h/2^h = (1/2)/(1 - 1/2)² = 2,
Therefore,
T(n) = O(2n)
     = O(n)
Heapsort Algorithm
Example: Sort the following elements using heapsort
5, 13, 2, 25, 7, 17, 20, 8, 4.
Solution: ( Do Yourself)
Design and Analysis of Algorithms
Lecture-12
Divide, Conquer and Combine: the three divide-and-conquer steps.
Quicksort
Algorithm
It divides the large array into smaller sub-arrays, and then
quicksort recursively sorts the sub-arrays.
Pivot
1. Pick an element called the "pivot".
Partition
2. Rearrange the array elements in such a way that all values less
than the pivot come before the pivot and all values greater than
the pivot come after it.
This step is called partitioning the array. At the end of the
partition function, the pivot element is placed at its sorted position.
Recursion
3. Apply the above process recursively to the sub-arrays to sort
all the elements.
Quicksort
Example: Sort the following elements using quicksort
2, 8, 7, 1, 3, 5, 6, 4
Solution: Here, we pick the last element in the list as a pivot
1 2 3 4 5 6 7 8
2 8 7 1 3 5 6 4
2 8 7 1 3 5 6 4
2 8 7 1 3 5 6 4
2 8 7 1 3 5 6 4
2 1 7 8 3 5 6 4
2 1 3 8 7 5 6 4
Quicksort
1 2 3 4 5 6 7 8
2 1 3 8 7 5 6 4
2 1 3 4 7 5 6 8
Partition completed in first pass
2 1 3 4 7 5 6 8
2 1 3 4 7 5 6 8
2 1 3 4 7 5 6 8
2 1 3 4 7 5 6 8
2 1 3 4 7 5 6 8
Partition completed in second pass
Quicksort
1 2 3 4 5 6 7 8
2 1 3 4 7 5 6 8
2 1 3 4 7 5 6 8
1 2 3 4 5 7 6 8
1 2 3 4 5 6 7 8
Partition completed in third pass
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
Process completed
1 2 3 4 5 6 7 8
Quicksort Algorithm
Quicksort(A, p, r)
1 if p < r
2 q = PARTITION(A, p, r)
3 QUICKSORT(A, p, q-1)
4 QUICKSORT(A, q+1, r)
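The PARTITION procedure is not reproduced in these notes. A Python sketch using the Lomuto scheme with the last element as pivot, which matches the worked example above (names are ours):

def partition(A, p, r):
    # Place pivot A[r] at its final position; smaller keys to its left.
    pivot = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1                      # pivot's final index q

def quicksort(A, p=0, r=None):
    if r is None:
        r = len(A) - 1
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)
    return A

print(quicksort([2, 8, 7, 1, 3, 5, 6, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8]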
Quicksort Algorithm Analysis
Time Complexity:
Quicksort Algorithm Analysis
Worst-case partitioning
The worst case occurs when partitioning produces one sub-problem
with n-1 elements and one with 0 elements, giving the recurrence
T(n) = T(n-1) + T(0) + θ(n).
After solving this recurrence relation, we get
T(n) = θ(n²)
Quicksort Algorithm Analysis
Best-case partitioning
PARTITION produces two sub-problems, each of size no more
than n/2, since one is of size ⌊n/2⌋ and one of size ⌈n/2⌉ - 1.
In this case, quicksort runs much faster. The recurrence for the
running time is then
T(n) = 2T(n/2) + θ(n)
After solving this recurrence relation, we get
T(n) = θ(n lg n)
Quicksort Algorithm Analysis
Balanced partitioning: even a 9-to-1 split yields θ(n lg n). The
recurrence tree for T(n) = T(9n/10) + T(n/10) + cn has cost at most
cn at each level and about log_{10/9} n levels.
Therefore, the solution of the recurrence relation will be
T(n) ≤ cn + cn + cn + … + cn
     = cn(1 + log_{10/9} n)
     = cn + cn·log_{10/9} n
     = O(n lg n)
Quicksort Algorithm Analysis
Exercise
1. Sort the following elements using quicksort:
13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11.
2. What is the running time of QUICKSORT when all elements
of array A have the same value?
3. Show that the running time of QUICKSORT is θ(n²) when
the array A contains distinct elements and is sorted in
decreasing order.
4. Suppose that the splits at every level of quicksort are in the
proportion 1-α to α, where 0 < α ≤ 1/2 is a constant. Show that
the minimum depth of a leaf in the recursion tree is
approximately -lg n / lg α and the maximum depth is
approximately -lg n / lg(1-α). (Don't worry about integer round-off.)
A randomized version of quicksort
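The slide body is not reproduced here. The standard idea is to pick the pivot uniformly at random before partitioning, so that no particular input reliably triggers worst-case behaviour and the expected running time is O(n lg n) on every input. A minimal sketch, reusing the partition function from the quicksort sketch above:

import random

def randomized_partition(A, p, r):
    # Swap a randomly chosen element into the pivot slot, then partition.
    k = random.randint(p, r)
    A[k], A[r] = A[r], A[k]
    return partition(A, p, r)

def randomized_quicksort(A, p=0, r=None):
    if r is None:
        r = len(A) - 1
    if p < r:
        q = randomized_partition(A, p, r)
        randomized_quicksort(A, p, q - 1)
        randomized_quicksort(A, q + 1, r)
    return A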
Lecture-13
Sorting in Linear Time
Theorem:
Any comparison sort algorithm requires Ω(n lg n) comparisons
in the worst case.
Proof: The worst-case number of comparisons for a given
comparison sort algorithm equals the height of its decision
tree.
Consider a decision tree of height h with l reachable leaves
corresponding to a comparison sort on n elements. Because
each of the n! permutations of the input appears as some
leaf, we have n! ≤ l. Since a binary tree of height h has no
more than 2^h leaves, therefore
n! ≤ l ≤ 2^h ⇒ h ≥ lg(n!)
Since lg(n!) = θ(n lg n), therefore h = Ω(n lg n).
Counting sort
Counting sort assumes that each of the n input elements
is an integer in the range 0 to k, for some integer k.
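A minimal Python sketch under that assumption (k is the largest key; names are ours). The reversed final scan makes the sort stable, which radix sort relies on:

def counting_sort(A, k):
    C = [0] * (k + 1)
    for x in A:                  # C[v] = number of elements equal to v
        C[x] += 1
    for v in range(1, k + 1):    # prefix sums: C[v] = count of keys <= v
        C[v] += C[v - 1]
    B = [0] * len(A)
    for x in reversed(A):        # scan right-to-left for stability
        C[x] -= 1
        B[C[x]] = x
    return B

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5))  # [0, 0, 2, 2, 3, 3, 3, 5]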
Counting sort
Example: Sort the following elements using counting sort
2, 5, 3, 0, 2, 3, 0, 3
Solution:
Counting sort
Lecture-14
Radix sorting
Example: Sort the following elements using radix sort
326, 453, 608, 835, 751, 435, 704, 690.
Solution: In these elements, the number of digits is 3. Therefore, we
need 3 passes, one per digit, starting from the least significant digit.
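A minimal Python sketch of this least-significant-digit-first strategy (base 10 assumed; each per-digit pass must be stable, which appending to per-digit buckets in order guarantees; names are ours):

def radix_sort(A, d):
    # Sort d-digit non-negative integers, one digit per pass.
    for exp in (10 ** t for t in range(d)):
        buckets = [[] for _ in range(10)]
        for x in A:
            buckets[(x // exp) % 10].append(x)   # stable within each digit
        A = [x for b in buckets for x in b]
    return A

print(radix_sort([326, 453, 608, 835, 751, 435, 704, 690], 3))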
Radix sorting
Time complexity of radix sort is
T(n) = θ(d(n+k))
Bucket sorting
Bucket sort assumes that the input is generated by a
random process that distributes elements uniformly and
independently over the interval [0,1).
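A minimal Python sketch under that assumption (n buckets covering [0,1); CLRS sorts each bucket with insertion sort, while sorted() is used here for brevity; names are ours):

def bucket_sort(A):
    # Scatter keys into n equal-width buckets, sort each, concatenate.
    n = len(A)
    buckets = [[] for _ in range(n)]
    for x in A:
        buckets[int(n * x)].append(x)   # bucket i covers [i/n, (i+1)/n)
    out = []
    for b in buckets:
        out.extend(sorted(b))
    return out

print(bucket_sort([0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]))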
Linear search
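The slide body is not reproduced in these notes; a minimal Python sketch of the standard left-to-right scan, which takes O(n) comparisons in the worst case:

def linear_search(A, x):
    # Return the index of the first occurrence of x, or -1 if absent.
    for i in range(len(A)):
        if A[i] == x:
            return i
    return -1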
Binary search
Binary search
Binary-search(A, n, x)
    l = 1
    r = n
    while l ≤ r do
        m = ⌊(l + r) / 2⌋
        if A[m] < x then
            l = m + 1
        else if A[m] > x then
            r = m − 1
        else
            return m
    return unsuccessful
Comparison of Sorting
Merge Sort:
Best Case: O(n log n)
Average Case: O(n log n)
Worst Case: O(n log n)
Quick Sort:
Best Case: O(n log n)
Average Case: O(n log n)
Worst Case: O(n^2) - but rarely occurs with good pivot selection
Heap Sort:
Best Case: O(n log n)
Average Case: O(n log n)
Worst Case: O(n log n)
Radix Sort:
Best Case: O(n * k)
Average Case: O(n * k)
Worst Case: O(n * k)
2- Space Complexity:
The amount of additional memory required for
sorting.
Bubble Sort, Selection Sort, Insertion Sort:
O(1) - They are in-place sorting algorithms and do not
require additional memory.
Merge Sort:
O(n) - It requires additional memory for the temporary
arrays used during merging.
Quick Sort:
O(log n) on average for the recursion stack (O(n) in the
worst case).
Heap Sort:
O(1) - It's an in-place sorting algorithm but involves
restructuring the heap.
Radix Sort:
O(n + k) - It depends on the range of the input data.
3- Stability:
Whether the sorting algorithm preserves the relative
order of equal elements.
Bubble Sort, Insertion Sort, Merge Sort, and Radix Sort:
Stable - They preserve the order of equal elements.
Selection Sort, Quick Sort, and Heap Sort:
Not stable - They may change the order of equal
elements.
4-Use Cases:
Different sorting algorithms are suitable for different
scenarios.
Bubble Sort, Insertion Sort, and Selection Sort:
Small data sets or nearly sorted data.
Merge Sort and Quick Sort:
General-purpose sorting for larger data sets.
Heap Sort:
In-place sorting with better average-case performance than
Bubble, Insertion, or Selection Sort.
Radix Sort:
When sorting integers with a fixed number of digits.
Advantages and Disadvantages:
•Each algorithm has its unique advantages and
disadvantages in terms of implementation complexity,
stability, and performance.
Thank you.