Time complexity
Use of time complexity makes it easy to estimate the running time of a program. Performing
an accurate calculation of a program’s operation time is a very labour-intensive process
(it depends on the compiler and the type of computer or speed of the processor). Therefore, we
will not make an accurate measurement; just a measurement of a certain order of magnitude.
Complexity can be viewed as the maximum number of primitive operations that a program
may execute. Regular operations are single additions, multiplications, assignments, etc. We
may leave some operations uncounted and concentrate on those that are performed the largest
number of times. Such operations are referred to as dominant.
The number of dominant operations depends on the specific input data. We usually want
to know how the performance time depends on a particular aspect of the data. This is most
frequently the data size, but it can also be the size of a square matrix or the value of some
input variable.
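For example, consider the following function, which does nothing more than count how many times the body of its loop is executed (a sketch: the operation inside the loop is assumed to be a single addition).
3.1: Which is the dominant operation?
def dominant(n):
    result = 0
    for i in xrange(n):
        result += 1   # executed n times; this is the dominant operation
    return result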
The operation inside the loop is dominant and will be executed n times. The complexity is described
in Big-O notation: in this case O(n) — linear complexity.
The complexity specifies the order of magnitude within which the program will perform
its operations. More precisely, in the case of O(n), the program may perform c · n opera-
tions, where c is a constant; however, it may not perform n² operations, since this involves
a different order of magnitude of data. In other words, when calculating the complexity we
omit constants: i.e. regardless of whether the loop is executed 20 · n times or n/5 times, we still
have a complexity of O(n), even though the running time of the program may vary. When
analyzing the complexity we must look for specific, worst-case examples of data that the
program will take a long time to process.
3.1. Comparison of different time complexities
Let’s compare some basic time complexities.
3.2: Constant time — O(1).
def constant(n):
    result = n * n   # always the same number of operations, regardless of n
    return result
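Another basic complexity is logarithmic time, O(log n). A sketch of such a function (the name logarithmic and the exact body are assumptions):
def logarithmic(n):
    result = 0
    while n > 1:
        n //= 2       # halve n on each iteration
        result += 1
    return result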
The value of n is halved on each iteration of the loop. If n = 2ˣ, then log n = x.
How long would the program below take to execute, depending on the input data?
3.4: Linear time — O(n).
def linear(n, A):
    for i in xrange(n):
        if A[i] == 0:   # stop as soon as a zero is found
            return 0
    return 1
Let’s note that if the first value of array A is 0 then the program will end immediately. But
remember, when analyzing time complexity we should check for worst cases. The program
will take the longest time to execute if array A does not contain any 0.
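For instance, linear(4, [0, 5, 3, 7]) returns 0 on the very first iteration, while linear(4, [5, 2, 3, 7]) must perform all n iterations before returning 1.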
3.5: Quadratic time — O(n²).
def quadratic(n):
    result = 0
    for i in xrange(n):
        for j in xrange(i, n):   # the inner loop makes n - i steps
            result += 1          # executed n + (n-1) + ... + 1 = n(n+1)/2 times in total
    return result
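For example, quadratic(3) performs 3 + 2 + 1 = 6 additions and returns 6, which is 3 · 4 / 2.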
Exponential and factorial time
It is worth knowing that there are other types of time complexity, such as factorial time O(n!)
and exponential time O(2ⁿ). Algorithms with such complexities can solve problems only for
very small values of n, because they would take too long to execute for large values of n.
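For illustration only, a function in the same style as the earlier listings, whose dominant operation is executed 2ⁿ times, might look like this (a sketch; the name exponential is an assumption):
def exponential(n):
    result = 0
    for i in xrange(2 ** n):   # the loop body is executed 2**n times
        result += 1
    return result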
For example, if:
• n ≤ 1 000 000, the expected time complexity is O(n) or O(n log n).
Of course, these limits are not precise. They are just approximations, and will vary depending
on the specific task.
3.4. Exercise
Problem: You are given an integer n. Calculate the sum 1 + 2 + . . . + n.
Solution: The task can be solved in several ways. Someone who knows nothing about
time complexity might implement an algorithm in which the result is incremented by 1, one step at a time:
3.7: Slow solution — time complexity O(n²).
def slow_solution(n):
    result = 0
    for i in xrange(n):
        for j in xrange(i + 1):   # add 1 a total of i + 1 times
            result += 1
    return result
Another person may instead add 1, 2, . . . , n to the result in turn. This algorithm is much
faster:
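A sketch of such a solution (the function name fast_solution is assumed), with time complexity O(n):
def fast_solution(n):
    result = 0
    for i in xrange(n):
        result += (i + 1)   # add 1, 2, ..., n in turn
    return result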
But the third person’s solution is even quicker. Let us write the sequence 1, 2, . . . , n and
repeat the same sequence underneath it, but in reverse order. Then just add the numbers
from the same columns:
  1      2      3    ...   n−1    n
  n     n−1    n−2   ...    2     1
 n+1    n+1    n+1   ...   n+1   n+1
The result in each column is n + 1, so we can easily count the final result:
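result = n · (n + 1) / 2,
because the two rows together contain every number from 1 to n exactly twice, so the n columns add up to n · (n + 1). This leads to a constant-time solution; a sketch (the function name model_solution is assumed), with time complexity O(1):
def model_solution(n):
    # closed-form formula for the sum 1 + 2 + ... + n;
    # integer division is exact because n * (n + 1) is always even
    return n * (n + 1) // 2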