Computational Complexity
OBJECTIVE OF COMPLEXITY
An algorithm should consist of a finite number of steps and must terminate after execution.
Each instruction should be feasible to carry out, and the algorithm should be flexible enough to accommodate expected changes.
It should take as little time and memory space as possible; in short, it should be efficient.
Prior Analysis: The algorithm is analyzed theoretically, before implementation, by estimating time and space as a function of the input size, independent of any particular machine.
Posterior Analysis: The algorithm is implemented and run on a real machine, and the actual time and space consumed are measured.
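Posterior analysis can be sketched in a few lines of Python: run the algorithm on inputs of growing size and measure the elapsed wall-clock time. The function sum_list below is a made-up example algorithm, used only to have something to measure.

```python
import time

def sum_list(lst):
    # A simple O(n) algorithm to serve as the subject of measurement.
    total = 0
    for v in lst:
        total += v
    return total

# A posteriori (empirical) measurement: time the algorithm on growing inputs.
for n in [10_000, 100_000, 1_000_000]:
    data = list(range(n))
    start = time.perf_counter()
    sum_list(data)
    elapsed = time.perf_counter() - start
    print(f"n = {n:>9}: {elapsed:.6f} s")
```

The measured times should grow roughly linearly with n, which is what the prior (theoretical) analysis of this O(n) algorithm predicts.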
Example:
“Is Prime” returns ‘TRUE’ when the given input number is a prime number and ‘FALSE’ otherwise.
“Is Composite” verifies whether a given integer is not a prime number.
When “Is Prime” returns ‘TRUE’, “Is Composite” returns ‘FALSE’, and vice versa.
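The relationship between the two decision procedures can be sketched in Python. The trial-division approach and the function names are illustrative, not prescribed by the slide:

```python
def is_prime(n):
    """Return True when n is a prime number, False otherwise."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:       # trial division up to the square root of n
        if n % i == 0:
            return False
        i += 1
    return True

def is_composite(n):
    # Mirrors the slide: for n >= 2, whenever is_prime is True,
    # is_composite is False and vice versa. (0 and 1 are neither
    # prime nor composite, so both return False for them.)
    return n >= 2 and not is_prime(n)
```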
Applications
Theory of Computation has helped in many fields, such as:
Cryptography
Design and Analysis of Algorithms
Quantum Computation
Logic within Computer Science
Computational Complexity
Randomness within Computation and
Error-Correcting Codes.
Factors considered in analysis
Two factors considered while analyzing algorithms are time and space.
Time:
The amount of time required for an algorithm to complete its execution is known as its time complexity. The time complexity of an algorithm is represented using big O notation.
Space:
This is a less important factor than time because if more space is required, it can
always be found in the form of auxiliary storage.
The amount of space an algorithm needs while solving the problem is known as
space complexity. It is also represented in big O notation.
Auxiliary space is just the temporary or extra space an algorithm uses, whereas space complexity also includes the space used by the input values. You can also see space complexity as a way to measure how efficiently your code uses memory.
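The distinction between auxiliary space and total space complexity can be illustrated with two small functions (both hypothetical examples):

```python
def total(lst):
    s = 0               # one accumulator -> O(1) auxiliary space
    for v in lst:
        s += v
    return s

def doubled(lst):
    out = []            # builds a new n-element list -> O(n) auxiliary space
    for v in lst:
        out.append(2 * v)
    return out

# Counting the O(n) input, both functions have O(n) space complexity,
# but their auxiliary space differs: O(1) for total, O(n) for doubled.
```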
Constant time: O(1)
Linear time: O(n)
Logarithmic time: O(log n)
Linearithmic time: O(n log n)
Quadratic time: O(n^2)
Exponential time: O(2^n)
Factorial time: O(n!)
Dominant Term:
count = 0
While not end of file
    Read the next character
    Increment count by 1
End While
Print count
In the above example, the number of instructions executed for a file of 500 characters is:
Initialization: 1 instruction
Increments: 500 instructions
Printing: 1 instruction
The total is 502 instructions, but as the file grows, the increment term dominates, so the running time is O(n).
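The character-counting example above can be written in Python; the per-line comments restate the instruction counts for a 500-character input:

```python
def count_characters(text):
    count = 0            # initialization: 1 instruction
    for _ in text:       # one increment per character: n instructions (500 here)
        count += 1
    print(count)         # printing: 1 instruction
    return count         # total: n + 2 instructions -> dominant term n -> O(n)
```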
Constant Time: O(1)
Example
def example_function(lst):
    print("First element of list:", lst[0])
The function above requires only one execution step whether the list contains 1, 100 or 1000 elements. As a result, the function runs in constant time, with time complexity O(1).
BIG O NOTATION
Linear Time: O(n)
Linear time is achieved when the running time of an algorithm increases linearly with the length of the input. When a function iterates over an input of size n, it is said to have a time complexity of order O(n).
Example
def example_function(lst, size):
    for i in range(size):
        print("Element at index", i, "has value:", lst[i])
The above function will take O(n) time (or "linear time") to complete, where n is the
number of entries in the array. The function will print 10 times if the given array has 10
entries, and 100 times if the array has 100 entries.
Note: Even if you iterate over half the array, the runtime still depends on the input size,
so it will be considered O(n).
Logarithmic Time: O(log n)
When the size of the input data decreases in each step by a certain factor, an
algorithm will have logarithmic time complexity. This means as the input size
grows, the number of operations that need to be executed grows comparatively
much slower.
Example
Binary search and finding the largest/smallest element in a binary search tree are both examples of algorithms with logarithmic time complexity.
Binary search looks for a specified value in a sorted array by repeatedly halving the search range and continuing in only one of the two halves. This ensures that the operation is not performed on every element of the input data.
def binarySearch(lst, x):
    low = 0
    high = len(lst) - 1
    # Repeat until the pointers low and high meet each other
    while low <= high:
        mid = low + (high - low) // 2
        if lst[mid] == x:
            return mid
        elif lst[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
The binarySearch function takes a sorted list of elements and searches through it for the element x. This is how the algorithm works:
1. Compare x with the middle element of the current search range.
2. If they are equal, return the middle index.
3. If x is greater, repeat the search on the right half; if x is smaller, on the left half.
4. Repeat until the element is found or the range is empty.
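To see the logarithmic growth concretely, this self-contained sketch counts the loop iterations a binary search performs; the step-counting wrapper is illustrative, not part of the original example:

```python
def binary_search_steps(lst, x):
    """Binary search that also reports how many loop iterations it took."""
    low, high, steps = 0, len(lst) - 1, 0
    while low <= high:
        steps += 1
        mid = low + (high - low) // 2
        if lst[mid] == x:
            return mid, steps
        elif lst[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1, steps

# Searching for the last element keeps the search running until the
# range collapses, so the step counts grow like log2(n), not like n.
for n in [16, 1024, 1_000_000]:
    data = list(range(n))
    _, steps = binary_search_steps(data, n - 1)
    print(f"n = {n:>9}: {steps} iterations")
```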
Quadratic Time: O(n^2)
Example
def quadratic_function(lst, size):
    for i in range(size):
        for j in range(size):
            print("Iteration:", i, "Element of list at", j, "is", lst[j])
We have two nested loops in the example above. If the array has n items, the outer loop executes n times, and the inner loop executes n times for each iteration of the outer loop, resulting in n^2 prints. If the size of the array is 10, the loops run 10 × 10 times, so the function will print 100 times. As a result, this function takes O(n^2) time to complete.
Exponential Time: O(2^n)
With each addition to the input (n), the growth rate doubles, and the algorithm iterates
across all subsets of the input elements. When an input unit is increased by one, the number
of operations executed is doubled.
Example
def fibonacci(n):
    if n <= 1:
        return 1
    else:
        return fibonacci(n - 2) + fibonacci(n - 1)
In the above example, recursion is used to calculate the Fibonacci sequence. O(2^n) denotes a growth rate that doubles with every addition to the input data set. An O(2^n) function's exponential growth curve starts shallow and then rises rapidly.
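The exponential blow-up can be made visible by counting the recursive calls. The wrapper below is a sketch added for illustration; it computes the same values as the recursive Fibonacci above while tallying how many calls were made:

```python
def fib_count(n):
    """Return (fibonacci(n), number of recursive calls made)."""
    if n <= 1:
        return 1, 1
    a, calls_a = fib_count(n - 2)
    b, calls_b = fib_count(n - 1)
    return a + b, calls_a + calls_b + 1

# The call count roughly doubles as n grows by one: exponential growth.
for n in [5, 10, 20]:
    value, calls = fib_count(n)
    print(f"fib({n}) = {value}, computed with {calls} calls")
```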
Best, Average and Worst Case Complexity
In most algorithms, the actual complexity for a particular input can vary. E.g., if the input list is sorted, linear search may perform poorly while binary search will perform very well. Hence, multiple input sets must be considered while analyzing an algorithm. These include the following:
1. Best Case Input: This represents the input set that allows an algorithm to perform most quickly. With this input, the algorithm takes the shortest time to execute, as it causes the algorithm to do the least amount of work. It shows how an algorithm behaves under optimal conditions. E.g., in a searching algorithm, if the match is found at the first location, that is the best-case input, as the number of comparisons is just one.
2. Worst Case Input: This represents the input set that causes an algorithm to perform most slowly. It is an important analysis because it gives an idea of the maximum time an algorithm will ever take. It provides an upper bound on the running time of an algorithm, and it is also a promise that the algorithm will not take more than the calculated time. E.g., in a searching algorithm, if the value to be searched is at the last location, or is not in the list at all, that is the worst-case input, because it requires the maximum number of comparisons.
3. Average Case Input: This represents the input set that allows an algorithm to deliver an average performance. It provides the expected running time, and it requires an assumption about the statistical distribution of inputs.
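The three cases can be demonstrated by counting the comparisons a linear search makes; the counting helper below is an illustrative sketch:

```python
def linear_search_comparisons(lst, x):
    """Return (index, comparisons) for a simple left-to-right scan."""
    comparisons = 0
    for i, v in enumerate(lst):
        comparisons += 1
        if v == x:
            return i, comparisons
    return -1, comparisons

data = [4, 8, 15, 16, 23, 42]
print(linear_search_comparisons(data, 4))    # best case: match at the first slot
print(linear_search_comparisons(data, 42))   # worst case: match at the last slot
print(linear_search_comparisons(data, 99))   # worst case: value not in the list
```

The best case costs 1 comparison, both worst cases cost n comparisons, and under a uniform distribution of targets the average case costs about n/2, which is still O(n).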
Data Structure Complexity Chart

Data Structure       Space Complexity   Average Case Time Complexity
                                        Access   Search   Insertion   Deletion
Array                O(n)               O(1)     O(n)     O(n)        O(n)
Stack                O(n)               O(n)     O(n)     O(1)        O(1)
Queue                O(n)               O(n)     O(n)     O(1)        O(1)
Singly Linked List   O(n)               O(n)     O(n)     O(1)        O(1)
Search Algorithms Complexity Chart

Search Algorithm   Space Complexity   Time Complexity
                                      Best Case   Average Case   Worst Case
Linear Search      O(1)               O(1)        O(n)           O(n)
Binary Search      O(1)               O(1)        O(log n)       O(log n)
Sorting Algorithms Complexity Chart

Sorting Algorithm   Space Complexity   Time Complexity
                                       Best Case    Average Case   Worst Case
Selection Sort      O(1)               O(n^2)       O(n^2)         O(n^2)
Insertion Sort      O(1)               O(n)         O(n^2)         O(n^2)
Bubble Sort         O(1)               O(n)         O(n^2)         O(n^2)
Quick Sort          O(log n)           O(n log n)   O(n log n)     O(n^2)
Merge Sort          O(n)               O(n log n)   O(n log n)     O(n log n)
Heap Sort           O(1)               O(n log n)   O(n log n)     O(n log n)