Time Complexity of Algorithms
❖ Algorithms
❖ Introduction
❖ What is Time complexity?
❖ Why is Time complexity so significant?
❖ Calculating Time Complexity
❖ Big O Notation
Algorithms
❖ What are Algorithms?
Generally, there is more than one way to solve a problem in computer science, using different algorithms.
Therefore, we need a method to compare the solutions in order to judge which one is more optimal. The method
must:
● Be independent of the machine, and its configuration, on which the algorithm runs.
● Show a direct correlation with the number of inputs.
● Distinguish two algorithms clearly, without ambiguity.
There are two such methods: time complexity and space complexity.
Introduction
❖ Space and Time complexity define the effectiveness of an algorithm.
❖ While we know there is more than one way to solve a problem in programming, knowing how an algorithm works
efficiently can add value to the way we program. Knowing how to evaluate algorithms using Space and Time
complexity makes a program behave under the required optimal conditions, and by doing so, it makes us efficient
programmers.
❖ In computing, the time complexity of an algorithm is a measure of its asymptotic efficiency as a function of the size of
its input. In general terms, it is the number of steps that would need to be performed for a computer to execute the
algorithm with no additional computer resources. The worst-case time complexity provides an upper bound on the
running time of an algorithm, even in the presence of repeated computations.
What is Time complexity?
❖ The time complexity of an algorithm signifies the amount of time taken by the algorithm to run, as a function of
the length of the input. Here, the length of the input determines the number of operations to be performed by the
algorithm.
❖ The time complexity is the number of operations an algorithm performs to complete its task (assuming that
each operation takes the same amount of time). The algorithm that performs the task in the smallest number of
operations is considered the most efficient one in terms of time complexity.
Time complexity
Time complexity analysis can be applied to almost any algorithm or function, but it is most useful in programs
that contain repetition: if certain statements are executed frequently, identifying and understanding the source of
the time complexity can help find solutions that shorten the processing time.
Why is Time complexity so significant?
In computer programming, there can be a number of ways to solve the same problem or perform the same task;
each corresponds to a specific series of instructions given to the computer, and the difference lies in how those
instructions are defined. Also, since we are free to choose among programming languages, the instructions can
take many forms (syntaxes), and performance also depends on the chosen programming language. Differences in
the algorithm being executed further lead to differences in how the operating system, processor, hardware, etc.
are used.
Now that we know that various factors influence the outcome of an algorithm being executed, it is wise to
understand how efficiently such programs perform a task. To measure this, we evaluate Time and Space
Complexity. By definition, the space complexity of an algorithm determines how much space or memory the
algorithm takes to run, as a function of the length of the input, whereas the algorithm's Time Complexity
determines the amount of time the algorithm takes to run as a function of the length of the input.
Calculating Time Complexity
There are three asymptotic notations that are used to represent the time complexity of an
algorithm. They are:
● Θ Notation (theta)
● Ω Notation
● Big O Notation
Cases of an Algorithm
Before learning about these three asymptotic notations, we should learn about the best, average, and worst case of an
algorithm.
Best case: This is the lower bound on running time of an algorithm. We must know the case that causes the minimum
number of operations to be executed.
Average case: We calculate the running time for all possible inputs, sum all the calculated values and divide the sum by the
total number of inputs. We must know (or predict) the distribution of cases.
Worst case: This is the upper bound on running time of an algorithm. We must know the case that causes the maximum
number of operations to be executed.
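For example, in a linear search over an array of n elements: the best case finds the target at the first position
(one comparison), the worst case finds it at the last position or not at all (n comparisons), and the average case,
assuming the target is equally likely to be at any position, takes about (n + 1)/2 comparisons.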
Calculating Time Complexity
● Θ Notation (theta)
The Θ notation gives a tight bound on an algorithm, i.e. it defines both an upper bound and a lower bound, and
your algorithm's running time will lie in between these two levels:
Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0 }
This expression can be read as: theta of g(n) is defined as the set of all functions f(n) for which there exist some
positive constants c1, c2, and n0 such that c1*g(n) is less than or equal to f(n) and f(n) is less than or equal to
c2*g(n), for all n greater than or equal to n0.
For example:
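As a small worked instance: take f(n) = 3n + 2 and g(n) = n. Choosing c1 = 3, c2 = 5, and n0 = 1 gives
3n ≤ 3n + 2 ≤ 5n for all n ≥ 1, so f(n) = Θ(n).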
Calculating Time Complexity
● Ω Notation
The Ω notation denotes the lower bound of an algorithm, i.e. the time taken by the algorithm can't be lower than
this. In other words, this is the fastest time in which the algorithm can return a result: the time taken by the
algorithm when provided with its best-case input:
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0 }
This expression can be read as: omega of g(n) is defined as the set of all functions f(n) for which there exist some
constants c and n0 such that c*g(n) is less than or equal to f(n), for all n greater than or equal to n0.
For example:
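As a small worked instance: take f(n) = 3n + 2 and g(n) = n. Choosing c = 3 and n0 = 1 gives 3n ≤ 3n + 2 for all
n ≥ 1, so f(n) = Ω(n).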
Calculating Time Complexity
● Big O Notation
The Big O notation defines the upper bound of an algorithm, i.e. your algorithm can't take more time than this.
In other words, the Big O notation denotes the maximum time taken by an algorithm, or the worst-case time
complexity of an algorithm:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0 }
This expression can be read as: Big O of g(n) is defined as the set of all functions f(n) for which there exist some
constants c and n0 such that f(n) is greater than or equal to 0 and f(n) is less than or equal to c*g(n), for all n
greater than or equal to n0.
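As a small worked instance: take f(n) = 3n + 2 and g(n) = n. Choosing c = 4 and n0 = 2 gives 3n + 2 ≤ 4n for all
n ≥ 2, so f(n) = O(n).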
For example:
❖ Linear Search – O(n)
❖ Binary Search – O(log n)
How to Calculate Big O?
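The general recipe: count how many times each statement executes as a function of the input size n, add the
counts, then drop constants and lower-order terms. As a minimal sketch of the idea in Python (the function below
is hypothetical, chosen only to illustrate the counting):

def sum_and_pairs(arr):
    n = len(arr)              # 1 operation
    total = 0                 # 1 operation
    for x in arr:             # loop body runs n times
        total += x            # -> n operations
    pair_count = 0            # 1 operation
    for x in arr:             # outer loop: n iterations
        for y in arr:         # inner loop: n iterations each -> n * n operations
            pair_count += 1
    # total cost ~ 3 + n + n^2; dropping constants and lower-order terms gives O(n^2)
    return total, pair_count

Dropping the constants and the linear term is justified because, as n grows, the n^2 term dominates the total count.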
Big O Notation
There are different types of time complexities used; let's see them one by one:
● Constant time – O(1)
● Logarithmic time – O(log n)
● Linear time – O(n)
● Quadratic time – O(n^2)
● Cubic time – O(n^3)
Many more complex notations, like Exponential time, Quasilinear time, etc., are used based on the type of
functions involved.
O(1) – Constant time
Constant time means the running time is constant: it is not affected by the input size. No matter how many input
elements a problem has, it takes a constant number of steps to solve.
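As an illustration (not from the original slides), indexing into an array is a classic constant-time operation; a
minimal sketch in Python:

def get_first(arr):
    # a single index operation, regardless of len(arr) -> O(1)
    return arr[0]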
O(n) – Linear Time complexity
An algorithm is said to have a linear time complexity when the number of steps, and hence the time required to
solve the problem, grow in direct proportion to the input size.
Example:
❖ Linear Search (a sketch follows below)
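A minimal linear-search sketch in Python; in the worst case it scans all n elements, hence O(n):

def linear_search(arr, target):
    # check each element in order: up to n comparisons -> O(n)
    for i, value in enumerate(arr):
        if value == target:
            return i   # found: return the index
    return -1          # not found after checking every element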
O(log n) – Logarithmic Time complexity
A logarithmic algorithm divides the problem into subproblems of the same size. An algorithm is said to have a
logarithmic time complexity when it reduces the size of the input data in each step. This means the number of
operations does not grow in step with the input size: the number of operations grows ever more slowly as the
input size increases.
At every step, a logarithmic algorithm halves the input size; log2(n) equals the number of times n must be divided
by 2 to get 1.
Let us take an array with an input size of 16 elements, that is, log2(16):
step 1: 16/2 = 8 becomes the input size
step 2: 8/2 = 4 becomes the input size
step 3: 4/2 = 2 becomes the input size
step 4: 2/2 = 1, calculation completed.
The calculation is performed until we reach the value 1: here log2(16) = 4 steps.
Example:
❖ Binary Search algorithm (a sketch follows below)
❖ Binary Conversion algorithm
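A minimal iterative binary-search sketch in Python; each comparison halves the remaining range, hence O(log n).
It assumes the input list is sorted:

def binary_search(arr, target):
    # arr must be sorted in ascending order
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2       # middle of the current range
        if arr[mid] == target:
            return mid                # found: return the index
        elif arr[mid] < target:
            low = mid + 1             # discard the left half
        else:
            high = mid - 1            # discard the right half
    return -1                         # target not in the array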
O(n^2) – Quadratic time complexity
An algorithm is said to have a non-linear, quadratic time complexity when the running time increases
non-linearly (as n^2) with the length of the input.
Generally, nested loops come under this time complexity order: one loop takes O(n), and if the function involves a
loop within a loop, the cost becomes O(n)*O(n) = O(n^2) (a Bubble Sort sketch follows the examples below).
Example:
❖ Bubble Sort Algorithm
❖ Selection Sort Algorithm
❖ Insertion Sort Algorithm
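Of these, a minimal Bubble Sort sketch in Python; the nested loops perform on the order of n * n comparisons,
hence O(n^2):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):                # outer loop: n passes
        for j in range(n - i - 1):    # inner loop: up to n - 1 comparisons
            if arr[j] > arr[j + 1]:
                # swap adjacent elements that are out of order
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr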
O(n^3) – Cubic time complexity
❖ An algorithm has cubic time complexity when its running time grows as n^3, typically from three nested loops.
With such a steep growth rate, even a small input leads to an enormous number of operations.
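As an illustration (not from the original slides), naive multiplication of two n x n matrices is a classic O(n^3)
case; a sketch in Python:

def matmul(a, b):
    # three nested loops over n x n matrices -> n * n * n steps, O(n^3)
    n = len(a)
    result = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result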
The order of growth for all of these time complexities is indicated in the graph below: