Module 3 - Complexity of An Algorithm
Suppose X is an algorithm and n is the size of the input data. The time and
space used by X are the two main factors that decide the efficiency of X.
(a) Time Complexity: Time is measured by counting the number of key
operations, such as comparisons in a sorting algorithm.
(b) Space Complexity: Space is measured by counting the maximum
memory space required by the algorithm.
The complexity function f(n) of an algorithm gives the running time and/or the
storage space required by the algorithm in terms of n, the size of the input data.
(a) A Priori Analysis: This is a theoretical analysis of an algorithm. Efficiency
is measured by assuming that all other factors, such as processor speed,
are constant and have no effect on the implementation.
(b) A Posteriori Analysis: This is an empirical analysis of an algorithm. The
selected algorithm is implemented in a programming language and
executed on a target computer machine. In this analysis, actual
statistics, such as the running time and space required, are collected.
Rate of growth is defined as the rate at which the running time of an
algorithm increases as the input size increases. Growth rates can be
categorized into two types: linear and exponential. If the running time of the
algorithm increases proportionally with the input size, the growth rate is
linear; if the running time increases exponentially with the input size, the
growth rate is exponential.
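The contrast between the two growth rates can be sketched by counting operations directly. The two functions below are illustrative examples (not from the module): one performs a constant amount of work per input element, the other enumerates all 2^n subsets of the input.

```python
def linear_steps(n):
    """Linear growth: one constant-time step per input element."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def exponential_steps(n):
    """Exponential growth: one step per subset of an n-element set."""
    steps = 0
    for _ in range(2 ** n):
        steps += 1
    return steps

# Doubling n doubles the linear count but squares the exponential count.
for n in (1, 5, 10):
    print(n, linear_steps(n), exponential_steps(n))
```

For n = 10 the linear procedure takes 10 steps while the exponential one already takes 1024.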
There are mainly three asymptotic notations:
✓ Big-O notation
✓ Omega notation
✓ Theta notation
For a function g(n), O(g(n)) is given by the relation:
O(g(n)) = {f(n): there exist positive constants c and n0
such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}
The above expression can be described as: a function f(n) belongs to the set
O(g(n)) if there exists a positive constant c such that f(n) lies between 0 and
cg(n) for sufficiently large n. For any value of n ≥ n0, the running time of the
algorithm does not exceed the time given by O(g(n)).
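As a small numerical sketch (the function f below is a hypothetical example, not from the module), one can exhibit the witnesses c and n0 for a typical running-time function such as f(n) = 3n + 2 with g(n) = n, and verify the Big-O inequality over a range of n:

```python
def f(n):
    return 3 * n + 2  # hypothetical running-time function

# Witnesses for f(n) = O(n): choose c = 4 and n0 = 2,
# since 3n + 2 <= 4n whenever n >= 2.
c, n0 = 4, 2
assert all(0 <= f(n) <= c * n for n in range(n0, 1000))
```

A finite check like this is not a proof, but it illustrates what the constants c and n0 in the definition actually do.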
Omega notation gives the lower bound of a function.
For a function g(n), Ω(g(n)) is given by the relation:
Ω(g(n)) = {f(n): there exist positive constants c and n0
such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0}
The above expression can be described as: a function f(n) belongs to the set
Ω(g(n)) if there exists a positive constant c such that f(n) lies above cg(n) for
sufficiently large n. For any value of n ≥ n0, the minimum time required by the
algorithm is given by Ω(g(n)).
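Mirroring the Big-O sketch, the lower-bound witnesses for the same hypothetical f(n) = 3n + 2 with g(n) = n can be checked numerically:

```python
def f(n):
    return 3 * n + 2  # hypothetical running-time function

# Witnesses for f(n) = Omega(n): choose c = 3 and n0 = 1,
# since 3n <= 3n + 2 for every n >= 1.
c, n0 = 3, 1
assert all(0 <= c * n <= f(n) for n in range(n0, 1000))
```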
For a function g(n), Θ(g(n)) is given by the relation:
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0
such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0}
The above expression can be described as: a function f(n) belongs to the set
Θ(g(n)) if there exist positive constants c1 and c2 such that f(n) can be
sandwiched between c1g(n) and c2g(n) for sufficiently large n. If f(n) lies
between c1g(n) and c2g(n) for all n ≥ n0, then g(n) is called an asymptotically
tight bound for f(n).
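Combining the two previous bounds shows the "sandwich" directly. For the same hypothetical f(n) = 3n + 2, the constants c1 = 3 and c2 = 4 with n0 = 2 satisfy the Theta definition with g(n) = n:

```python
def f(n):
    return 3 * n + 2  # hypothetical running-time function

# Theta witnesses: 3n <= 3n + 2 <= 4n for all n >= 2,
# so n is an asymptotically tight bound for f(n).
c1, c2, n0 = 3, 4, 2
assert all(0 <= c1 * n <= f(n) <= c2 * n for n in range(n0, 1000))
```

Note that the same c and n0 pairs used for the O and Ω sketches reappear here, which is exactly how Θ combines the upper and lower bounds.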
Space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed
part and SP(I) is the variable part of the algorithm, which depends on instance
characteristic I.
Following is a simple example that illustrates the concept:
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop
Here we have three variables A, B, and C and one constant (10). Hence S(P) = 1
+ 3. The actual space depends on the data types of the given variables and
constants, and the count is multiplied accordingly.
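The pseudocode above can be sketched directly in Python; the comment records the same space accounting given in the text:

```python
def SUM(A, B):
    # Variable part: A, B, C (3 units); fixed part: the constant 10 (1 unit).
    # Hence S(P) = 1 + 3 memory units, before weighting by data-type sizes.
    C = A + B + 10
    return C

print(SUM(2, 3))  # 2 + 3 + 10 = 15
```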
3.5 Time Complexity Analysis
Time complexity of an algorithm represents the amount of time required by
the algorithm to run to completion. Time requirements can be defined as a
numerical function T(n), where T(n) can be measured as the number of steps,
provided each step consumes constant time. For example, addition of two n-
bit integers takes n steps. Consequently, the total computational time is T(n)
= c ∗ n, where c is the time taken for the addition of two bits. Here, we observe
that T(n) grows linearly as the input size increases.
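The n-bit addition example can be made concrete with a ripple-carry sketch (an illustrative implementation, not from the module) that counts one constant-time step per bit position, so the step count is exactly n:

```python
def add_n_bit(a_bits, b_bits):
    """Add two n-bit numbers given as lists of bits, least significant first.
    Each loop iteration adds one bit pair plus a carry: one constant-time
    step per bit, so T(n) = c * n overall."""
    assert len(a_bits) == len(b_bits)
    result, carry, steps = [], 0, 0
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry
        result.append(total % 2)  # sum bit at this position
        carry = total // 2        # carry into the next position
        steps += 1
    result.append(carry)          # final carry-out bit
    return result, steps

# 3 + 1 = 4, using 3-bit inputs (LSB first): takes exactly 3 steps.
print(add_n_bit([1, 1, 0], [1, 0, 0]))  # ([0, 0, 1, 0], 3)
```

Doubling the bit width doubles the step count, which is the linear growth T(n) = c * n described above.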
3.7 Assignment on Time and Space Complexities