
Computational Complexity
OBJECTIVE OF COMPLEXITY

 Complexity theory is the fundamental subject of classifying computational problems based on their 'complexities'.

 The 'complexity' of a problem is a measure of the amount of resources (time, space, random bits, number of instructions, queries, etc.) used by the best possible algorithm that solves the problem.
What is an algorithm?
An algorithm is a sequence of instructions that acts on some input data
to produce some output in a finite number of steps.

Why analyze algorithms?

 Determining which of two algorithms is the more efficient requires
analysis of the algorithms.
Characteristics of Algorithm
An algorithm

 should be well-structured and well-defined.
 should have clearly specified inputs.
 should have clearly specified outputs.
 should consist of a finite number of steps and terminate after execution.
 should have instructions that are feasible to carry out, and should be flexible enough to accommodate expected changes.
 should take little time and memory space; in short, it should be efficient.
 should be language independent.

Phases of Algorithm
An algorithm can be examined in 2 phases.

Prior Analysis:

 It is the theoretical analysis of an algorithm, performed prior to its implementation.
 Before the algorithm is run or executed, parameters such as the speed of the processor may be considered, even though they have no impact on the actual execution phase.

Posterior Analysis:

 It is the practical analysis of an algorithm.
 The algorithm is implemented in some computer language to obtain this practical analysis.
 This analysis is used to determine how much running time and space the algorithm consumes.
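
As a simple illustration, a posterior analysis can be carried out with Python's built-in time module (the function and input below are illustrative, not from the slides):

import time

def sum_list(lst):
    # The algorithm under measurement: sums a list of numbers
    total = 0
    for x in lst:
        total += x
    return total

data = list(range(1_000_000))          # sample input
start = time.perf_counter()
sum_list(data)
elapsed = time.perf_counter() - start
print("Running time:", elapsed, "seconds")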
What is Computational Complexity?
Computational complexity refers to a measure of the performance of an algorithm. It allows
algorithms to be compared for efficiency and predicts their behavior
as data size increases.

Need of Computational Complexity

 Complexity theory deals with decision problems.
 It distinguishes between problems according to whether they verify "TRUE" or "FALSE".
Decision Problems
A problem that reverses the "TRUE" and "FALSE" answers of another
problem is called the complement of that problem.

Example:
"Is Prime" returns 'TRUE' when the given input number is a prime
number, and 'FALSE' otherwise.
"Is Composite" verifies whether a given integer is not a prime
number.
When "Is Prime" returns 'TRUE', "Is Composite" returns 'FALSE', and
vice versa.
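
A minimal Python sketch of this complementary pair (the trial-division test below is illustrative, not the most efficient primality check):

def is_prime(n):
    # Returns True when n is prime, False otherwise (trial division)
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_composite(n):
    # The complement problem: answers True exactly when is_prime answers False
    return not is_prime(n)

print(is_prime(7), is_composite(7))    # True False
print(is_prime(9), is_composite(9))    # False True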
Applications
The theory of computation has helped in many fields, such as

 Cryptography
 Design and Analysis of Algorithms
 Quantum Computation
 Logic within Computer Science
 Computational Difficulty
 Randomness in Computation and
 Error-Correcting Codes.
Factors considered in analysis
Two factors considered while analyzing algorithms are time and space.

Time:
The amount of time required to complete the execution is known as the
time complexity of an algorithm. The time complexity of an algorithm is represented
by big O notation.

Space:
This is a less important factor than time because, if more space is required, it can
always be found in the form of auxiliary storage.

The amount of space an algorithm needs while solving the problem is known as its
space complexity. It is also represented in big O notation.
Factors considered in analysis

 Auxiliary space is just the temporary or extra space, whereas
space complexity also includes the space used by the input values.

 Space Complexity = Auxiliary space + Space used by input values

 The best algorithms/programs should have the least space
complexity; generally, the less space a program uses, the faster it executes.

 In practice, running time and space usage depend on various
factors, such as the underlying hardware, OS, CPU, and processor.
But to keep things simple, we typically don't consider
these factors when analyzing an algorithm's performance.
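
For instance, the two illustrative functions below each take a list of n values as input, so the input contributes O(n) space in both cases; they differ only in auxiliary space:

def reverse_copy(lst):
    # Builds a new reversed list: auxiliary space O(n)
    return lst[::-1]

def reverse_in_place(lst):
    # Swaps elements using a few index variables: auxiliary space O(1)
    i, j = 0, len(lst) - 1
    while i < j:
        lst[i], lst[j] = lst[j], lst[i]
        i, j = i + 1, j - 1
    return lst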
BIG O NOTATION

 Big O notation is a metric for determining an algorithm's efficiency.
 Put simply, it gives an estimate of how long it takes your code to run on different sets of inputs.
 You can also see it as a way to measure how effectively your code scales as your input size increases.
But why do we need Big O?
The world we live in today consists of complicated apps and software,
each running on various devices with different capabilities.
Some devices, like desktops, can run heavy machine learning software,
but others, like phones, can only run lightweight apps.
So when you create an application, you'll need to optimize your code
so that it runs smoothly across devices, to give you an edge over your
competitors.
As a result, programmers should inspect and evaluate their code
thoroughly.
Time and Space complexities
Following are the key time and space complexities:

 Constant time: O(1)
 Linear time: O(n)
 Logarithmic time: O(log n)
 Linearithmic time: O(n log n)
 Quadratic time: O(n^2)
 Exponential time: O(2^n)
 Factorial time: O(n!)
Dominant Term:

 While describing the growth rate of an algorithm, we
simply consider the term which affects the algorithm's
performance the most. This term is known as the
dominant term.
Example:

# Count the number of characters in a file
count = 0                          # initialization
f = open("input.txt")              # "input.txt" is an illustrative file name
ch = f.read(1)                     # get the first character
while ch:                          # conditional check: more characters?
    count = count + 1              # increment count by 1
    ch = f.read(1)                 # get the next character
f.close()
print(count)                       # printing
In the above example, the counts of the various instructions for a file of 500 characters are:

 Initialization: 1 instruction
 Increments: 500
 Conditional checks: 500 + 1 (for the end-of-file check)
 Printing: 1

 Initialization and printing are the same for any file size, so
they are insignificant compared to the increments and checks.
Hence, the increments and checks are the dominant terms
in the above example.
CONSTANT FACTOR

 Constant factor refers to the idea that different
operations with the same complexity take slightly
different amounts of time to run. For example, three
addition operations take a bit longer than a single
addition operation.
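
For example, both of the illustrative functions below are O(1), but the second performs three additions instead of one, so its constant factor is larger:

def one_addition(a, b):
    return a + b              # one addition: O(1)

def three_additions(a, b, c, d):
    return a + b + c + d      # three additions: still O(1), larger constant factor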
BIG O NOTATION
Constant Time: O(1)
When there is no dependence on the input size n, an algorithm is
said to run in constant time, of order O(1).

Example
def example_function(lst):
    print("First element of list: ", lst[0])

The function above requires only one execution step whether
the list contains 1, 100 or 1000 elements. As a result, the
function runs in constant time, with time complexity O(1).
BIG O NOTATION
Linear Time: O(n)
Linear time is achieved when the running time of an algorithm increases linearly with
the length of the input. This means that when a function iterates over an
input of size n, it is said to have a time complexity of order O(n).

Example
def example_function(lst, size):
    for i in range(size):
        print("Element at index", i, " has value: ", lst[i])

The above function will take O(n) time (or "linear time") to complete, where n is the
number of entries in the list. The function will print 10 times if the given list has 10
entries, and 100 times if the list has 100 entries.

Note: Even if you iterate over only half the list, the runtime still grows with the input size,
so it is still considered O(n).
BIG O NOTATION
Logarithmic Time: O(log n)
When the size of the input data decreases by a certain factor in each step, an
algorithm has logarithmic time complexity. This means that as the input size
grows, the number of operations that need to be executed grows comparatively
much more slowly.

Example
Binary search and finding the largest/smallest element in a balanced binary search tree
are both examples of algorithms with logarithmic time complexity.

Binary search finds a specified value in a sorted array by repeatedly splitting the
array into two halves and continuing the search in only one of the two halves.
This ensures that the operation is not performed on every element of
the input data.
Logarithmic Time: O(log n)
def binarySearch(lst, x):
    low = 0
    high = len(lst) - 1
    # Repeat until the pointers low and high meet each other
    while low <= high:
        mid = low + (high - low) // 2
        if lst[mid] == x:
            return mid
        elif lst[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
Logarithmic Time: O(log n)
The binarySearch method takes a sorted list of elements and searches through it
for the element x. This is how the algorithm works:

 Find the list's midpoint.
 Compare the value at the midpoint to the target.
 If the middle value and the target match, we have located our goal.
 If the middle value is less than the target, we focus on the sublist ranging
from the midpoint plus one to the highest index.
 If the middle value is greater than the target, we focus on the sublist starting
at the lowest index and ending at the midpoint minus one.
 Continue until we locate the target or the sublist becomes empty, which
indicates that the element is not present in the list.
 With every iteration, the size of our search list shrinks by half. Therefore
traversing and finding an entry in the list takes O(log(n)) time.
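
For example, an illustrative call (the list must already be sorted):

sorted_list = [2, 5, 8, 12, 16, 23, 38]
print(binarySearch(sorted_list, 23))   # prints 5, the index of 23
print(binarySearch(sorted_list, 7))    # prints -1, since 7 is absent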
Quadratic Time: O(n^2)
The running time of a quadratic time complexity algorithm is proportional to the square of
the size of the input data. You will encounter such time complexity in programs that
perform nested iterations over data sets.

Example
def quadratic_function(lst, size):
    for i in range(size):
        for j in range(size):
            print("Iteration:", i, "Element of list at", j, "is", lst[j])

We have two nested loops in the example above. If the list has n items, the outer loop will
execute n times, and the inner loop will execute n times for each iteration of the outer loop,
resulting in n^2 prints. If the size of the list is 10, then the loops run 10 × 10 times, so the
function will print 100 times. As a result, this function takes O(n^2) time to complete.
Exponential Time: O(2^n)
With each addition to the input size (n), the growth rate doubles, as the algorithm iterates
across all subsets of the input elements. When the input size is increased by one, the number
of operations executed doubles.
Example
def fibonacci(n):
    if n <= 1:
        return 1
    else:
        return fibonacci(n - 2) + fibonacci(n - 1)
In the above example, we use recursion to calculate the Fibonacci sequence. O(2^n)
specifies a growth rate that doubles with each addition to the input. An
O(2^n) function's exponential growth curve starts shallow and then rises rapidly.
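
One illustrative way to see this growth is to count the recursive calls the function makes as n increases (the helper below is hypothetical, not part of the slides):

def fibonacci_calls(n):
    # Returns (Fibonacci value, number of calls made by the recursive algorithm)
    if n <= 1:
        return 1, 1
    a, calls_a = fibonacci_calls(n - 2)
    b, calls_b = fibonacci_calls(n - 1)
    return a + b, calls_a + calls_b + 1

for n in (10, 15, 20):
    value, calls = fibonacci_calls(n)
    print(n, value, calls)   # the call count grows exponentially with n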
Best, Average and Worst Case Complexity
In most algorithms, the actual complexity for a particular input can
vary. E.g. on a sorted input list, linear search may perform poorly
while binary search will perform very well. Hence, multiple input
sets must be considered while analyzing an algorithm. These
include the following.

1. Best Case Input: This represents the input set that allows an
algorithm to perform most quickly. With this input, the algorithm
takes the shortest time to execute, as the input causes the algorithm
to do the least amount of work. It shows how an algorithm behaves
under optimal conditions. E.g. in a searching algorithm, if the match
is found at the first location, that is the best case input, as the
number of comparisons is just one.
Best, Average and Worst Case Complexity
2. Worst Case Input: This represents the input set that causes an
algorithm to perform most slowly. It is an important analysis
because it gives us an idea of the maximum time an algorithm will
ever take. It provides an upper bound on the running time of the
algorithm, and it is also a promise that the algorithm will not take
more than the calculated time. E.g. in a searching algorithm, if the
value to be searched for is at the last location, or is not in the list at
all, that is the worst-case input, because it determines the maximum
number of comparisons that have to be made.

3. Average Case Input: This represents the input set that allows
an algorithm to deliver average performance. It provides the
expected running time, and it requires an assumption about the
statistical distribution of inputs (see the sketch below).
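
A minimal sketch making these cases concrete for linear search (the function and data below are illustrative):

def linear_search(lst, target):
    # Returns the index of target, or -1; the comparison count depends on where target sits
    for i, value in enumerate(lst):
        if value == target:
            return i
    return -1

data = [4, 8, 15, 16, 23, 42]
linear_search(data, 4)    # best case: match at the first location, 1 comparison
linear_search(data, 42)   # worst case: match at the last location, 6 comparisons
linear_search(data, 99)   # worst case: value absent, 6 comparisons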
Data Structure Complexity Chart

Data Structure       Space Complexity   Average Case Time Complexity
                                        Access   Search   Insertion   Deletion
Array                O(n)               O(1)     O(n)     O(n)        O(n)
Stack                O(n)               O(n)     O(n)     O(1)        O(1)
Queue                O(n)               O(n)     O(n)     O(1)        O(1)
Singly Linked List   O(n)               O(n)     O(n)     O(1)        O(1)
Search Algorithms Complexity Chart

Search Algorithm   Space Complexity   Time Complexity
                                      Best Case   Average Case   Worst Case
Linear Search      O(1)               O(1)        O(n)           O(n)
Binary Search      O(1)               O(1)        O(log n)       O(log n)
Sorting Algorithms Complexity Chart

Sorting Algorithm   Space Complexity   Time Complexity
                                       Best Case    Average Case   Worst Case
Selection Sort      O(1)               O(n^2)       O(n^2)         O(n^2)
Insertion Sort      O(1)               O(n)         O(n^2)         O(n^2)
Bubble Sort         O(1)               O(n)         O(n^2)         O(n^2)
Quick Sort          O(log n)           O(n log n)   O(n log n)     O(n^2)
Merge Sort          O(n)               O(n log n)   O(n log n)     O(n log n)
Heap Sort           O(1)               O(n log n)   O(n log n)     O(n log n)
