InterviewBit - Time Complexity

Computing Time Complexity

Calculating Running Time

The running time of an algorithm depends on:

● Single vs multi-processor (hardware)
● Read/write speed to memory (hardware I/O)
● 32-bit vs 64-bit architecture (hardware)
● Input size (data size)

What we are interested in is the rate of growth of the time taken for program execution.

In algorithmic time complexity, we mostly care about how the running time changes as the size of the input data grows, not about the specific wall-clock time of a program run.

To factor out the hardware, we assume a model computing machine and then check different input sizes to understand how the time taken changes:

● Single Processor
● 32 bit
● Sequential execution (as opposed to parallel processing)
● 1 unit time for arithmetical and logical operations
● 1 unit time for assignment and return

Sum(a, b)              Inputs: variables a and b
{
    return (a + b)     Returning the addition
}

Time complexity is 2 units (one for the addition and one for the return).
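As a quick illustration in Python (a sketch; the 1-unit costs are assumptions of the model machine, not measured times):

def sum_two(a, b):
    # 1 unit for the addition + 1 unit for the return = 2 units total,
    # independent of the values of a and b -> O(1)
    return a + b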

Sum_of_list(A, n)                 Cost      Times
{
    Total = 0                     1 (c1)    1
    for i = 0 to n-1:             2 (c2)    n+1 (one extra test for loop exit)
        Total = Total + A[i]      2 (c3)    n
    return Total                  1 (c4)    1
}

The time complexity is c1*1 + c2*(n+1) + c3*n + c4*1 = (c2 + c3)*n + (c1 + c2 + c4). We are interested in the rate of growth of this expression, so the constants are not very important; what matters is that it grows linearly in n.
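Here is a hedged Python sketch that mirrors the cost table above by tallying unit operations while summing a list (the per-line costs are the model's assumptions):

def sum_of_list(A):
    # Sum a list while counting unit operations under the model machine.
    ops = 0
    total = 0                 # c1 = 1: one assignment
    ops += 1
    i = 0
    while i < len(A):         # loop line: c2 = 2, runs n+1 times in total
        ops += 2              # ...n of those times are counted here
        total = total + A[i]  # body: c3 = 2, runs n times
        ops += 2
        i += 1
    ops += 2                  # ...and once more for the failing loop test
    ops += 1                  # c4 = 1: the return
    return total, ops

# T(n) = c1 + c2*(n+1) + c3*n + c4 = 4*n + 4 with c1=1, c2=2, c3=2, c4=1
for n in (1, 10, 100):
    total, ops = sum_of_list(list(range(n)))
    print(n, ops, 4 * n + 4)  # the two counts match and grow linearly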

Time Complexity of different input type problems

● Sum of two variables: K                          O(1)
● Sum of a list: C1*N + C2                         O(N)
● Sum of a matrix: A*(N^2) + B*N + C               O(N^2)

These expressions describe how the time taken varies as the input grows, according to each program's time complexity.
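Hedged Python sketches of the three cases (the constants K, C1, C2, A, B, C above are model constants; these functions only illustrate the shapes of the loops):

def sum_two(a, b):        # constant work -> O(1)
    return a + b

def sum_list(A):          # one pass over n elements -> O(N)
    total = 0
    for x in A:
        total += x
    return total

def sum_matrix(M):        # n rows of n entries each -> O(N^2)
    total = 0
    for row in M:
        for x in row:
            total += x
    return total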

Asymptotic Notations

We want a generic way to quantify the time taken by an algorithm without binding it to a specific machine or hardware. We are broadly interested in the rate of growth of the execution time, not in the specifics of machine capabilities.

Algorithm 1: T(n) = 5*(n^2) + 7

Algorithm 2: T(n) = 17*(n^2) + 6*n + 8

Both algorithms have a quadratic rate of growth.

Big-O Notation

Definition: f(n) = O(g(n)) if there exist constants C and N such that f(n) <= C*g(n) for all n >= N.

f(n) = 5*(n^2) + 3*(n) + 2

g(n) = n^2

Here, for the constant C = 8 (or any larger constant) and N = 2, we have f(n) <= C*g(n) for every n >= 2, so the Big-O definition holds: f(n) never grows faster than 8*g(n). Thereby g(n) = n^2 is an upper bound on the rate of growth of f(n), and we write f(n) = O(n^2).
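The claim can be checked numerically; a small Python sketch (f, g, C, and N are taken from the example above, and a finite check is an illustration, not a proof):

def f(n): return 5 * n**2 + 3 * n + 2
def g(n): return n**2

C, N = 8, 2
# the Big-O definition: f(n) <= C*g(n) must hold for every n >= N
assert all(f(n) <= C * g(n) for n in range(N, 10_000))
print("f(n) = O(n^2), witnessed by C = 8, N = 2")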

Omega Notation

Definition: f(n) = Ω(g(n)) if there exist constants C and N such that C*g(n) <= f(n) for all n >= N.

For the example above, taking C = 5 and N = 0, we have C*g(n) = 5*(n^2) <= 5*(n^2) + 3*(n) + 2 = f(n) from n = 0 onwards. Thereby g(n) = n^2 is a lower bound on the rate of growth of f(n).

Theta Notation

Definition: f(n) = Θ(g(n)) if there exist constants C1, C2 and N such that C1*g(n) <= f(n) <= C2*g(n) for all n >= N. This is the best way to pin down a time complexity, since it gives a tight range on the rate of growth.
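A quick numeric check of both bounds for the running example, with the constants C1 = 5 and C2 = 8 assumed from the Big-O and Omega sections (again a sketch, not a proof):

def f(n): return 5 * n**2 + 3 * n + 2
def g(n): return n**2

C1, C2, N = 5, 8, 2
# Theta: C1*g(n) <= f(n) <= C2*g(n) for every n >= N
assert all(C1 * g(n) <= f(n) <= C2 * g(n) for n in range(N, 10_000))
print("f(n) = Theta(n^2), witnessed by C1 = 5, C2 = 8, N = 2")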

Time Complexity: Generic Rules

We compute the time complexity for:

● Large input sizes
● The worst-case scenario

For a rate of growth given by a polynomial such as n^3 + 3*(n^2) + 4*n + 2, the lower-order terms, n^2 and n in this case, won't matter when the input size is very large. Thus the complexity in this case is O(n^3).

This holds for non-polynomial functions as well, such as n^2 + log(n). In this case the complexity is O(n^2), because log(n) grows very slowly.
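To see why the lower-order terms stop mattering, one can compare the full polynomial with its leading term as n grows (an illustrative sketch):

def full(n): return n**3 + 3 * n**2 + 4 * n + 2
def lead(n): return n**3

for n in (10, 100, 1000, 10_000):
    print(n, full(n) / lead(n))  # the ratio tends to 1: n^3 dominates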

The time complexity of a piece of code is the cumulative sum of the execution times of each of its fragments, as the following fragments show.

Int A
A = 5
A++

Constant time: O(1) complexity

For (i = 0; i < n; i++)
{
    Simple statements
}

O(N) complexity

For (i = 0; i < n; i++)
{
    For (j = 0; j < n; j++)
    {
        Simple statements
    }
}

O(N^2) complexity

If all these operations were part of the same code piece, the combined time complexity would be O(N^2) + O(N) + O(1). The Big-O value is O(N^2), since the quadratic term dominates.
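A sketch combining the three fragments and counting executed "simple statements", to show that the quadratic part dominates the total (the counts are illustrative):

def combined(n):
    ops = 0
    a = 5                    # O(1) fragment: constant work
    ops += 1
    for i in range(n):       # O(N) fragment
        ops += 1
    for i in range(n):       # O(N^2) fragment: nested loops
        for j in range(n):
            ops += 1
    return ops               # total = n^2 + n + 1

for n in (10, 100, 1000):
    print(n, combined(n), n * n)  # for large n the n^2 term dominates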

Complexity Analysis : Recursive Programs

Recursive implementation of the factorial function:

Factorial(N)
{
    if (N == 0):
        return 1
    else:
        return N * Factorial(N - 1)
}

The time complexity, in this case, is given by the recurrence T(N) = T(N-1) + 3, where 3 is the cost of the constant-work operations in each call (the comparison, the multiplication, and the return).

This recurrence needs to be reduced to a closed form. Substituting T(N-1) = T(N-2) + 3, and continuing:

T(N) = T(N-1) + 3
     = T(N-2) + 6
     = T(N-3) + 9
     ...

This can thereby be written as T(N) = 3*N + 1 (the 1 is for the N == 0 base case), which is O(N).
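To check the closed form, here is a small Python sketch that tallies the model's unit costs during the recursion (the 3-unit and 1-unit charges are the assumptions from the recurrence above):

cost = {"units": 0}

def factorial(n):
    if n == 0:
        cost["units"] += 1    # base case: 1 unit (the return)
        return 1
    cost["units"] += 3        # compare, multiply, return: 3 units per call
    return n * factorial(n - 1)

for n in (1, 5, 10):
    cost["units"] = 0
    factorial(n)
    print(n, cost["units"], 3 * n + 1)  # both columns match: T(N) = 3N + 1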

Space Complexity

The space complexity can be understood in terms of stack memory, since the computer needs to store the state of each pending call in a recursive program. For F(5), the stack memory looks something like this (most recent call on top):

F(0)

F(1)

F(2)

F(3)

F(4)

F(5)

The height of this stack is the number of pending calls, F(5) down to F(0), which is 6 frames here. The space used is directly proportional to N (N + 1 frames for F(N)), so the space complexity is O(N).
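A sketch that records the maximum recursion depth, to confirm that the stack grows linearly with N (the depth counter is an illustration of the stack diagram above):

def factorial(n, depth=1, stats=None):
    # Track the deepest point of the call stack while recursing.
    if stats is None:
        stats = {"max_depth": 0}
    stats["max_depth"] = max(stats["max_depth"], depth)
    if n == 0:
        return 1, stats["max_depth"]
    result, max_depth = factorial(n - 1, depth + 1, stats)
    return n * result, max_depth

for n in (5, 10, 20):
    _, frames = factorial(n)
    print(n, frames)  # n+1 frames held at once (F(n) down to F(0)) -> O(n)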

Disclaimer: the text below is derived from InterviewBit:

Relevance Of Time Complexity


Let's assume we ask 2 interviewees, A and B, to write a program to detect if a number N >= 2 is prime.

A number is prime if it is divisible by exactly 2 distinct positive numbers: 1 and the number itself.

A comes up with the following code:

i = 2
while i < N
    if N is divisible by i
        N is not prime
    add 1 to i

B comes up with the following code:

i = 2
while i <= square root of N
    if N is divisible by i
        N is not prime
    add 1 to i

For now, let's assume that both codes are correct.
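Both routines translate directly to Python; the counters below tally the "N is divisible by i" operations so the two growth rates can be compared (a sketch of the pseudocode above, with B's loop bound taken as i <= sqrt(N)):

import math

def is_prime_A(N):
    # Trial division up to N-1, as in A's code.
    divisions = 0
    i = 2
    while i < N:
        divisions += 1
        if N % i == 0:
            return False, divisions
        i += 1
    return True, divisions

def is_prime_B(N):
    # Trial division up to sqrt(N), as in B's code.
    divisions = 0
    i = 2
    while i <= math.isqrt(N):
        divisions += 1
        if N % i == 0:
            return False, divisions
        i += 1
    return True, divisions

N = 1000033  # the prime from Example 1 below
print(is_prime_A(N))  # (True, 1000031): about N divisions
print(is_prime_B(N))  # (True, 999): about sqrt(N) divisions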

Now, ​B​ claims that his code is much better as it takes much less time to check if ​N​ is prime.

Let's look into why that is the case.

Let's assume that the operation ​N is divisible by i​ takes 1 ms.

Let's look at a few examples of the time taken:

Example 1:
N = 1000033 (a prime number)
Time taken by A's program = 1 ms * number of divisions
                          = 1 ms * 1000033
                          = approximately 1000 seconds, or 16.7 minutes

Time taken by B's program = 1 ms * number of divisions
                          = 1 ms * square root of 1000033
                          = approximately 1000 ms = 1 second

Example 2:
N = 1500450271 (a prime number)
Time taken by A's program = 1 ms * number of divisions
                          = 1 ms * 1500450271
                          = approximately 1500000 seconds, or about 17 days

Time taken by B's program = 1 ms * number of divisions
                          = 1 ms * square root of 1500450271
                          = approximately 40000 ms = 40 seconds

As you can see, B's program is significantly faster even though both methods of solving the problem are correct.

This is where the time complexity of programs comes in: it is a measure of how efficient (or quick) a program is for large inputs.

In the first case the time taken is directly proportional to N, whereas in the second case it is directly proportional to the square root of N. In later slides, we will look into how to formalize this into time complexity notations.
