CS 3401- ALGORITHMS

UNIT I
Algorithm analysis: Time and space complexity - Asymptotic Notations and its properties Best case, Worst
case and average case analysis – Recurrence relation: substitution method - Lower bounds – searching: linear
search, binary search and Interpolation Search, Pattern search: The naïve string- matching algorithm - Rabin-
Karp algorithm - Knuth-Morris-Pratt algorithm. Sorting: Insertion sort – heap sort

PART - A
1. What do you mean by algorithm? (Nov/Dec 2008) (May/June 2013) (Apr/May 17) (U)
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for
any legitimate input in a finite amount of time. In addition, all algorithms must satisfy the following criteria:
Input
Output
Definiteness
Finiteness
Effectiveness.
2. What is performance measurement? (R)
Performance measurement is concerned with obtaining the space and the time requirements of a particular
algorithm.
3. What are the types of algorithm efficiencies? (R)
The two types of algorithm efficiencies are
Time efficiency: indicates how fast the algorithm runs.
Space efficiency: indicates how much extra memory the algorithm needs.
4. What is space complexity? (Nov/Dec 2012) (R)
Space complexity indicates how much extra memory the algorithm needs. Basically, it has three components:
instruction space, data space, and environment space.
5. What is time complexity? (Nov/Dec 2012) (R)
Time complexity indicates how fast an algorithm runs: T(P) = compile time + run time (Tp), where Tp is
determined by the number of elementary operations performed, such as additions, subtractions, and multiplications.
6. What is an algorithm design technique? (R)
An algorithm design technique is a general approach to solving problems algorithmically that is applicable to
a variety of problems from different areas of computing.
7. What is pseudo code? (R)
A pseudo code is a mixture of a natural language and programming language constructs to specify an
algorithm. A pseudo code is more precise than a natural language and its usage often yields more concise
algorithm descriptions.
8. What do you mean by “worst-case efficiency” of an algorithm? (R) (Nov 17)
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n: an input
(or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size.
9. What is best-case efficiency? (R)
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input or
inputs for which the algorithm runs the fastest among all possible inputs of that size.
10. What is average case efficiency? (May 2008) (R)
The average-case efficiency of an algorithm is its efficiency for an average-case input of size n. It provides
information about an algorithm's behavior on a “typical” or “random” input.
11. What is binary search? (R)
Binary search is a remarkably efficient algorithm for searching in a sorted
array. It works by comparing a search key K with the array's middle element A[m]. If they match, the algorithm
stops; otherwise the same operation is repeated recursively on the first half of the array if K < A[m], and on the
second half if K > A[m].
A[0]………A[m-1]   A[m]   A[m+1]………A[n-1]
12. Write the algorithm for Iterative binary search? (A)
Algorithm BinSearch(a, n, x)
//Given an array a[1:n] of elements in nondecreasing
// order, n > 0, determine whether x is present
{
low := 1; high := n;
while (low <= high) do
{
mid := ⌊(low + high)/2⌋;
if (x < a[mid]) then high := mid - 1;
else if (x > a[mid]) then low := mid + 1;
else return mid;
}
return 0;
}
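The iterative pseudocode above can be sketched as runnable Python with 0-based indexing (a minimal illustration; it returns -1 instead of 0 when the key is absent):

```python
def binary_search(a, x):
    """Return the index of x in sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1      # continue in the left half
        elif x > a[mid]:
            low = mid + 1       # continue in the right half
        else:
            return mid          # key found
    return -1                   # key not present
```

Each iteration halves the remaining range, giving the O(log n) running time.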
13. Give the general plan for analyzing non recursive algorithm. (R)
• Decide on a parameter indicating an input's size.
• Identify the algorithm's basic operation.
• Check whether the number of times the basic operation is executed depends only on the size of an input. If it
also depends on some additional property, the worst-case, average-case and, if necessary, best-case
efficiencies have to be investigated separately.
• Using standard formulas and rules of sum manipulation, either find a closed-form formula for the count or,
at the very least, establish its order of growth.
14. What is validation of algorithm? (Nov/Dec 2012) (R)
The process of measuring the effectiveness of an algorithm before it is coded, to check that the algorithm is correct
for every possible input, is called validation.
15. What are all the methods available for solving recurrence relations? (R)
Forward Substitution
Backward Substitution
Smoothness Rule
Master Theorem
16. Write down the problem types.(April/May 2008) (R)
➢ Sorting
➢ Searching
➢ String Processing
➢ Graph Problem
➢ Combinatorial Problem
➢ Geometric Problem
➢ Numerical Problem
17. Define Substitution Method. (April /May 2010) (R)
In the context of recurrence relations, the substitution method solves a recurrence by repeatedly substituting the
recurrence into itself (forward or backward substitution) until a pattern emerges, or by guessing the form of the
solution and proving it correct by mathematical induction.
18. Define Recurrence Relation. (APRIL/MAY 2010) (Nov/Dec 2016) (R)
The equation defines M(n) not explicitly, i.e., as a function of n, but implicitly, as a function of its value at
another point, namely n - 1. Such equations are called recurrence relations.
19. Write down the properties of asymptotic notations?(MAY 2015) (R)
If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) + t2(n) ∈ O(max{g1(n), g2(n)})
20. Give the Euclid algorithm for computing gcd(m, n) (MAY/JUN 2016) (Apr/May 17) (APR 17)
ALGORITHM Euclid_gcd(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
Example: gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12.
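The algorithm above can be sketched as runnable Python (an illustrative transcription of the pseudocode):

```python
def euclid_gcd(m, n):
    """Compute gcd(m, n) by Euclid's algorithm (repeated remainders)."""
    while n != 0:
        m, n = n, m % n   # r <- m mod n; m <- n; n <- r
    return m
```

For instance, euclid_gcd(60, 24) follows exactly the chain gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12.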
21. Design an algorithm for computing area and circumference of the circle. (NOV/DEC 2016) (C)
//Computes area and circumference of the circle
//Input: One non-negative integer radius
//Output: Area and Circumference of the circle
area = PI * rad * rad;
ci = 2 * PI * rad;
return area, ci;
22. How to measure an algorithm running time? (NOV/DEC 2017) (R)
One possible approach is to count the number of times each of the algorithm’s operation is executed. The
thing to do is identify the most important operation of the algorithm called as basic operation and compute
the number of times the basic operation is executed.
T(n) ≈ cop × C(n), where cop is the execution time of the basic operation and C(n) is the number of times it is executed.
23. What is a basic operation? (APR/MAY 2018) (R)
The most important operation of the algorithm, the one contributing the most to its total running time, is called its basic operation.
24. Define best, worst, and average case time complexity. (NOV/DEC 2018)
Best case: In the best case analysis, we calculate lower bound on running time of an algorithm. We must
know the case that causes minimum number of operations to be executed.
Worst case: In the worst case analysis, we calculate upper bound on running time of an algorithm. We must
know the case that causes maximum number of operations to be executed.
Average case: In average case analysis, we take all possible inputs and calculate the computing time for each of the
inputs. Sum all the calculated values and divide the sum by the total number of inputs.
25. How do you measure the efficiency of an algorithm. (APR/MAY 2019)
Time efficiency - a measure of amount of time for an algorithm to execute.
Space efficiency - a measure of the amount of memory needed for an algorithm to execute.
26. What do you mean by interpolation search?
Interpolation search finds a particular item by computing the probe position. Initially, the probe position is
the position of the middle most item of the collection.
If a match occurs, then the index of the item is returned. To split the list into two parts, we use the following
method −
mid = Lo + ((Hi - Lo) / (A[Hi] - A[Lo])) * (X - A[Lo])
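A minimal Python sketch of this probe computation (assuming a list sorted in ascending order; the names are illustrative):

```python
def interpolation_search(a, x):
    """Search sorted list a for x by estimating the probe position.

    Works best when values are uniformly distributed.
    Returns the index of x, or -1 if it is not present.
    """
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= x <= a[hi]:
        if a[lo] == a[hi]:                  # all remaining keys are equal
            return lo if a[lo] == x else -1
        # probe position from the formula above (integer division)
        mid = lo + (hi - lo) * (x - a[lo]) // (a[hi] - a[lo])
        if a[mid] == x:
            return mid
        if a[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Unlike binary search, the probe lands near where a uniformly distributed key is expected, giving O(log log n) probes on average.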
27. Define Knuth-Morris-Pratt algorithm.
Knuth, Morris, and Pratt introduced a linear-time algorithm for the string-matching problem. A matching time
of O(n) is achieved by avoiding comparisons with characters of the string 'S' that have previously been involved in
a comparison with some character of the pattern 'p' to be matched; i.e., backtracking on the string 'S' never occurs.
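A compact, illustrative Python sketch of KMP (the prefix/failure table is what lets the text pointer avoid backtracking; names are not from the text):

```python
def kmp_search(text, pat):
    """Return the index of the first occurrence of pat in text, or -1.

    Total time is O(n + m): the text pointer only moves forward.
    """
    if not pat:
        return 0
    # fail[i] = length of the longest proper prefix of pat[:i+1]
    # that is also a suffix of it
    fail = [0] * len(pat)
    k = 0
    for i in range(1, len(pat)):
        while k > 0 and pat[i] != pat[k]:
            k = fail[k - 1]                 # fall back in the pattern
        if pat[i] == pat[k]:
            k += 1
        fail[i] = k
    k = 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pat[k]:
            k = fail[k - 1]                 # shift the pattern, not the text
        if ch == pat[k]:
            k += 1
        if k == len(pat):
            return i - k + 1                # match ends at position i
    return -1
```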
28. Write the advantage of insertion sort? (NOV/DEC 2017)
1. Simple implementation
2. Efficient for small data sets
3. Adaptive: more efficient in practice than most other simple quadratic, i.e. O(n²), algorithms such as selection sort or
bubble sort; the best case (nearly sorted input) is O(n)
4. Stable: does not change the relative order of elements with equal keys
29.Define Heap sort?
Heap Sort is one of the best sorting methods being in-place and with no quadratic worst- case running time.
Heap sort involves building a Heap data structure from the given array and then utilizing the Heap to sort the
array.
30. What are the properties of heap structure?
The special heap properties are given below:
Shape property: a heap data structure is always a complete binary tree, which means all levels of the tree are
fully filled except possibly the last, which is filled from left to right.
Heap property: every node is either greater than or equal to, or less than or equal to, each of its children. If the
parent nodes are greater than their child nodes, the heap is called a max-heap, and if the parent nodes are smaller
than their child nodes, it is called a min-heap.
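A heap sort built on these two properties can be sketched in Python; this is an illustrative max-heap implementation, not code from the text:

```python
def heap_sort(a):
    """Sort list a in place in ascending order using a max-heap."""
    n = len(a)

    def sift_down(root, end):
        # restore the max-heap property for the subtree at root,
        # considering only elements a[0:end]
        while True:
            child = 2 * root + 1
            if child >= end:
                return
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1                   # pick the larger child
            if a[root] >= a[child]:
                return                       # heap property holds
            a[root], a[child] = a[child], a[root]
            root = child

    for i in range(n // 2 - 1, -1, -1):      # build the heap bottom-up
        sift_down(i, n)
    for end in range(n - 1, 0, -1):          # repeatedly extract the max
        a[0], a[end] = a[end], a[0]          # move max to its final place
        sift_down(0, end)
    return a
```

Building the heap is O(n) and each of the n extractions costs O(log n), giving the O(n log n) worst-case bound mentioned above.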
UNIT 1- PART B
1. Define the asymptotic notations used for best case average case and worst case analysis?
(APIRL/MAY2009) (APRIL/MAY-2008)(R) (Apr 18)
Input: Here our input is an integer array of size "n" and we have one integer "k" that we need to search for in
that array.
Output: If the element "k" is found in the array, then we return 1; otherwise we return 0.
int search(int arr[], int n, int k)
{
// for-loop to iterate over each element in the array
for (int i = 0; i < n; ++i)
{
// check if ith element is equal to "k" or not
if (arr[i] == k)
return 1; // return 1, if you find "k"
}
return 0; // return 0, if you didn't find "k"
}
If the input array is [1, 2, 3, 4, 5] and you want to find whether "1" is present in the array, the if-condition
of the code will be executed 1 time before it finds that element 1 is in the array. So the if-condition
costs 1 unit of time here.
If the input array is [1, 2, 3, 4, 5] and you want to find whether "3" is present in the array, the if-condition
of the code will be executed 3 times before it finds that element 3 is in the array. So the if-condition
costs 3 units of time here.
If the input array is [1, 2, 3, 4, 5] and you want to find whether "6" is present in the array, the if-condition
of the code will be executed 5 times, it will find that element 6 is not in the array, and the algorithm
will return 0. So the if-condition costs 5 units of time here.
As we can see that for the same input array, we have different time for different values of "k". So, this can be
divided into three cases:
Best case: This is the lower bound on the running time of an algorithm. We must know the case that causes the
minimum number of operations to be executed. In the above example, our array was [1, 2, 3, 4, 5] and we were
checking whether "1" is present in the array. Here, after only one comparison, we find that the element is
present in the array. So, this is the best case of our algorithm.
Average case: We calculate the running time for all possible inputs, sum all the calculated values and divide
the sum by the total number of inputs. We must know (or predict) distribution of cases.
Worst case: This is the upper bound on running time of an algorithm. We must know the case that causes the
maximum number of operations to be executed. In our example, the worst case can be if the given array is [1,
2, 3, 4, 5] and we try to find if element "6" is present in the array or not. Here, the if-condition of our loop will
be executed 5 times and then the algorithm will give "0" as output.

2. What is meant by recurrence? Give one example to solve recurrence equations.


(APRIL/MAY 2012) (r)
Recurrence Relation:
A recurrence relation is a mathematical equation that describes the relation between the input size and the
running time of a recursive algorithm.
It expresses the running time of a problem in terms of the running time of smaller instances of the same
problem.
A recurrence relation typically has the form
T(n) = aT(n/b) + f(n)
where:
T(n) is the running time of the algorithm on an input of size n
a is the number of recursive calls made by the algorithm
b is the factor by which the input size shrinks in each recursive call (each subproblem has size n/b)
f(n) is the time required to perform any non-recursive operations
The recurrence relation can be used to determine the time complexity of the algorithm using techniques such
as the Master Theorem or Substitution Method.
For example,
let's consider the problem of computing the nth Fibonacci number. A simple recursive algorithm for solving
this problem is as follows:
Fibonacci(n)
    if n <= 1
        return n
    else
        return Fibonacci(n-1) + Fibonacci(n-2)
The recurrence relation for this algorithm is T(n) = T(n-1) + T(n-2) + O(1),
which describes the running time of the algorithm in terms of the running time of the two smaller instances of
the problem with input sizes n-1 and n-2.
This recurrence cannot be solved with the Master Theorem, since the subproblem sizes shrink by subtraction rather
than by a constant factor; solving it directly shows that the time complexity of this algorithm is O(2^n), which is very
inefficient for large input sizes.

3. Distinguish between Big Oh, Theta and Omega notation. (NOV/DEC 2012) (AN)
Big O notation (O(f(n))) provides an upper bound on the growth of a function. It is commonly used to describe
the worst-case scenario for the time or space complexity of an algorithm. For example, an algorithm with a
time complexity of O(n^2) means that the running time grows no faster than a constant times n^2, where n is the size of
the input.
Big Ω notation (Ω(f(n))) provides a lower bound on the growth of a function. It is commonly used to describe the best-case
scenario for the time or space complexity of an algorithm. For example, an algorithm with a space complexity
of Ω(n) means that the memory usage grows at least proportionally to n, where n is the size of the input.
Big Θ notation (Θ(f(n))) provides a tight bound on the growth of a function: the function is bounded both above
and below by f(n) up to constant factors. For example, an algorithm with a time
complexity of Θ(n log n) means that the running time of the algorithm is both O(n log n) and Ω(n log n), where
n is the size of the input.
It's important to note that the asymptotic notation only describes the behavior of the function for large values
of n, and does not provide information about the exact behavior of the function for small values of n. Also, for
some cases, the best, worst and average cases can be the same, in that case the notation will be simplified to
O(f(n)) = Ω(f(n)) = Θ(f(n))
Additionally, these notations can be used to compare the efficiency of different algorithms, where a lower
order of the function is considered more efficient. For example, an algorithm with a time complexity of O(n)
is more efficient than an algorithm with a time complexity of O(n^2).
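For illustration, counting the basic operation directly shows the gap between O(n) and O(n^2) growth (a toy sketch, not taken from the text):

```python
def count_linear(n):
    """Basic-operation count for a single loop: n iterations."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_quadratic(n):
    """Basic-operation count for a nested loop: n * n iterations."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops
```

Doubling n doubles the count for the linear loop but quadruples it for the nested loop, which is exactly what the notations O(n) and O(n^2) summarize.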

4. Explain how analysis of linear search is done with a suitable illustration. (10)
Linear Search
Linear search, often known as sequential search, is the most basic search technique. In this type of search, we
go through the entire list and try to fetch a match for a single element. If we find a match, then the address of
the matching target element is returned.
On the other hand, if the element is not found, then it returns a NULL value. Following is a step-by-step
approach employed to perform Linear Search Algorithm.

The procedures for implementing linear search are as follows:


Step 1: First, read the search element (Target element) in the array.
Step 2: In the second step compare the search element with the first element in the array.
Step 3: If both are matched, display "Target element is found" and terminate the Linear Search function.
Step 4: If both are not matched, compare the search element with the next element in the array.
Step 5: Repeat steps 3 and 4 until the search (target) element has been compared with the last element of the array.
Step 6: If the last element in the list does not match, the linear search function terminates and the
message "Element is not found" is displayed.
Algorithm of the Linear Search Algorithm

Linear Search (Array Arr, Value a)      // Arr is the name of the array, and a is the searched element
Step 1: Set i to 0                      // i is the index of the array, starting from 0
Step 2: if i ≥ n then go to step 7      // n is the number of elements in the array
Step 3: if Arr[i] = a then go to step 6
Step 4: Set i to i + 1
Step 5: Go to step 2
Step 6: Print "element a found at index i" and go to step 8
Step 7: Print "element not found"
Step 8: Exit

Pseudocode of Linear Search Algorithm

for each element in the array
    if (searched element == value)
        return the searched element's location
    end if
end for
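The pseudocode can be written as a short runnable Python sketch (illustrative; it returns -1 rather than NULL when the element is absent):

```python
def linear_search(arr, a):
    """Return the index of a in arr, or -1 if it is not present."""
    for i, value in enumerate(arr):
        if value == a:
            return i            # target found at index i
    return -1                   # scanned the whole list without a match
```

With the array from the example below, linear_search([13, 9, 21, 15, 39, 19, 27], 39) returns 4, matching the step-by-step trace.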
Example of Linear Search Algorithm
Consider an array of size 7 with elements 13, 9, 21, 15, 39, 19, and 27; valid indices run from 0 to size
minus one, i.e. 6.
Search element = 39

Step 1: The searched element 39 is compared to the first element of the array, which is 13. The match is not
found, so move on to the next element.
Step 2: Now, search element 39 is compared to the second element of the array, 9. Again no match, so move on.
Step 3: Now, search element 39 is compared with the third element, which is 21. Again, the elements do not
match, so move on to the next element.
Step 4: Next, search element 39 is compared with the fourth element, which is 15. No match, so move on.
Step 5: Next, search element 39 is compared with the fifth element, 39. A perfect match is found; display
"element found at location 4".


The Complexity of Linear Search Algorithm
Three different cases arise when analyzing the linear search algorithm; they are as follows.
➢ Best Case
➢ Worst Case
➢ Average Case
Best Case Complexity
The element being searched could be found in the first position.
In this case, the search ends with a single successful comparison.
Thus, in the best-case scenario, the linear search algorithm performs O(1) operations.
Worst Case Complexity
The element being searched may be at the last position in the array or not at all.
In the first case, the search succeeds in ‘n’ comparisons.
In the next case, the search fails after ‘n’ comparisons.
Thus, in the worst-case scenario, the linear search algorithm performs O(n) operations.
Average Case Complexity
On average, the target is found after examining about half of the array, i.e. roughly n/2 comparisons, so the
average case of the linear search algorithm is O(n).
Space Complexity of Linear Search Algorithm
The linear search algorithm takes up no extra space, so its space complexity is O(1) for an array of n elements.
Application of Linear Search Algorithm
The linear search algorithm has the following applications:
Linear search can be applied to both single-dimensional and multi-dimensional arrays.
Linear search is easy to implement and effective when the array contains only a few elements.
Linear search is also efficient when performing a single search in an unordered list.

5. Explain how Time Complexity is calculated. Give an example. (APRIL/MAY 2010) (E)
Time Complexity (example: interpolation search)
Best case - O(1)
The best case occurs when the target is found exactly at the first probe position computed using the formula.
As only one comparison is performed, the time complexity is O(1).
Worst case - O(n)
The worst case occurs when the keys in the data set are far from uniformly distributed (for example,
exponentially distributed), so the probe estimates are poor.
Average case - O(log(log n))
If the data set is sorted and uniformly distributed, the search takes O(log(log n)) time, as on average
log(log n) comparisons are made.

6. Write an algorithm for finding maximum element of an array, perform best, worst and average
case complexity with appropriate order notations. (APRIL/MAY 2008) (R)
Algorithm MaxElement(A[0..n-1])
// Determines the value of the largest element in a given array
maxval ← A[0]
for i ← 1 to n - 1 do
    if A[i] > maxval then maxval ← A[i]
return maxval

The basic operation is the comparison A[i] > maxval. It is executed exactly n - 1 times for every input of size n,
regardless of where the maximum element lies, so the best, worst, and average cases all require the same number
of comparisons:
Best Case: Θ(n)
Worst Case: Θ(n)
Average Case: Θ(n)
Only the number of updates maxval ← A[i] varies with the input: it is smallest when the maximum is at the
beginning of the array and largest when the array is in increasing order.
This algorithm is a straightforward linear scan; unlike searching, finding the maximum cannot stop early, because
every element must be examined. If the array is already sorted, the maximum can instead be read off the last
position in O(1) time.
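A minimal Python sketch of the scan (illustrative names):

```python
def find_max(a):
    """Return the maximum element of a non-empty list.

    The comparison x > max_val is the basic operation; it runs
    exactly n - 1 times for every input, so the scan is Theta(n)
    in the best, worst, and average cases alike.
    """
    max_val = a[0]
    for x in a[1:]:
        if x > max_val:
            max_val = x       # update runs only when a larger value appears
    return max_val
```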

7. Briefly explain the time complexity, space complexity estimation.(6) (MAY/JUNE 2013)
Time and Space Complexity
Time complexity is a measure of how long an algorithm takes to run as a function of the size of the input. It
is typically expressed using big O notation, which describes the upper bound on the growth of the time required
by the algorithm. For example, an algorithm with a time complexity of O(n) takes longer to run as the input
size (n) increases.
There are different types of time complexities:
O(1) or constant time: the algorithm takes the same amount of time to run regardless of the size of the input.
O(log n) or logarithmic time: the algorithm's running time increases logarithmically with the size of the input.
O(n) or linear time: the algorithm's running time increases linearly with the size of the input.
O(n log n) or linearithmic time: the algorithm's running time grows in proportion to n log n, i.e. linearly in the
input size with an extra logarithmic factor.
O(n^2) or quadratic time: the algorithm's running time increases quadratically with the size of the input.
O(2^n) or exponential time: the algorithm's running time increases exponentially with the size of the input.
Space complexity is a measure of how much memory an algorithm uses as a function of the size of the input.
Like time complexity, it is typically expressed using big O notation. For example, an algorithm with a space
complexity of O(n) uses more memory as the input size (n) increases. Space complexities are generally
categorized as:
O(1) or constant space: the algorithm uses the same amount of memory regardless of the size of the input.
O(n) or linear space: the algorithm's memory usage increases linearly with the size of the input.
O(n^2) or quadratic space: the algorithm's memory usage increases quadratically with the size of the input.
O(2^n) or exponential space: the algorithm's memory usage increases exponentially with the size of the input.

8. Explain the steps involved in problem solving. (APR/MAY 2019)


Define the Problem:
Clearly articulate the problem you are trying to solve. Ensure you understand the requirements and constraints
involved. Break down complex problems into smaller, more manageable parts.
Understand the Problem:
Gain a deep understanding of the problem by asking questions and researching relevant information. Identify
any underlying issues or dependencies that might affect the solution.
Generate Possible Solutions:
Brainstorm and list as many potential solutions as possible. Encourage creativity and explore various
approaches. Avoid evaluating solutions at this stage.
Evaluate and Select a Solution:
Assess the pros and cons of each potential solution. Consider factors such as feasibility, resources, time, and
impact. Choose the solution that best addresses the problem while considering the available resources.
Plan the Implementation:
Develop a detailed plan for implementing the chosen solution. Break the plan into smaller tasks and set specific,
measurable, achievable, relevant, and time-bound (SMART) goals. Define roles and responsibilities if working
in a team.
Implement the Solution:
Execute the plan. Follow the steps outlined in your implementation plan. Be flexible and ready to adapt if
unexpected challenges arise. Communicate progress and collaborate with team members if applicable.
Monitor and Evaluate:
Continuously monitor the implementation process. Collect data and feedback to evaluate progress and identify
any issues. If the solution involves a long-term process, establish key performance indicators (KPIs) to measure
success.
Iterate if Necessary:
If the solution is not achieving the desired outcome, be willing to revisit and revise. Iterate on the solution by
going back to previous steps if needed. Analyze what worked and what didn't, and make adjustments
accordingly.
Document the Solution:
Record the details of the implemented solution. This documentation is valuable for future reference and for
sharing insights with others. It can also serve as a basis for continuous improvement.
Communicate Results:
Share the results and outcomes of the problem-solving process. Communicate with stakeholders, team
members, or anyone affected by the solution. Provide insights into what worked well and any lessons learned.
Reflect on the Process:
Reflect on the problem-solving process itself. Consider what went smoothly, what could be improved, and
what strategies were most effective. Use this reflection to enhance your problem-solving skills for future
challenges.
Effective problem-solving often involves collaboration, critical thinking, and adaptability.

9. Explain in detail about insertion sort? (NOV/DEC 2017)


Insertion sort works like sorting playing cards in your hands. It is assumed that the
first card is already sorted; then we select an unsorted card. If the selected
unsorted card is greater than the first card, it is placed on the right side; otherwise it is
placed on the left side. Similarly, all unsorted cards are taken in turn and put in their exact place.
The same approach is applied in insertion sort: take the next element and insert it into its
correct position within the already-sorted prefix of the array. Although it is simple to use, it is not
appropriate for large data sets, as the time complexity of insertion sort in the average case and
worst case is O(n²), where n is the number of items. Insertion sort is less efficient than other
sorting algorithms like heap sort, quick sort, and merge sort.
Algorithm
The simple steps of achieving the insertion sort are listed as follows -
Step 1 - If the element is the first element, assume that it is already sorted.
Step 2 - Pick the next element, and store it separately in a key.
Step 3 - Now, compare the key with the elements in the sorted array.
Step 4 - If the element in the sorted array is smaller than the key, move to the
next element. Else, shift greater elements in the array towards the right.
Step 5 - Insert the key at the vacated position.
Step 6 - Repeat until the array is sorted.
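The steps above can be sketched as runnable Python (illustrative in-place implementation):

```python
def insertion_sort(a):
    """Sort list a in place in ascending order by repeated insertion."""
    for i in range(1, len(a)):
        key = a[i]              # element to insert into the sorted prefix
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]     # shift larger elements one place right
            j -= 1
        a[j + 1] = key          # drop the key into the vacated slot
    return a
```

Running it on the example array used below, insertion_sort([12, 31, 25, 8, 32, 17]) produces [8, 12, 17, 25, 31, 32].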
Working of Insertion sort Algorithm
To understand the working of the insertion sort algorithm, let's take an unsorted array; it is
easier to understand insertion sort via an example.
Let the elements of the array be: 12, 31, 25, 8, 32, 17.
Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending order. So, for
now, 12 is stored in a sorted sub-array.

Now, move to the next two elements and compare them.

Here, 25 is smaller than 31. So, 31 is not at correct position. Now, swap 31 with 25. Along
with swapping, insertion sort will also check it with all elements in the sorted array.
For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the
sorted array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that
are 31 and 8.

Both 31 and 8 are not sorted. So, swap them.

After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items: 8, 12 and 25. Move to the next items, 31
and 32. Since 31 is smaller than 32, they are already in order. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.

17 is smaller than 32. So, swap them.

Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.


Insertion sort complexity
1. Time Complexity
Case            Time Complexity
Best Case       O(n)
Average Case    O(n²)
Worst Case      O(n²)
o Best Case Complexity - It occurs when no sorting is required, i.e. the array is
already sorted. The best-case time complexity of insertion sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order,
neither properly ascending nor properly descending. The average-case time
complexity of insertion sort is O(n²).
o Worst Case Complexity - It occurs when the array elements must be sorted
in reverse order: suppose you have to sort the array elements in ascending
order, but they are given in descending order. The worst-case time complexity of
insertion sort is O(n²).
2. Space Complexity
o The space complexity of insertion sort is O(1), because only a single extra
variable is required to hold the key during insertion.
