Unit 2 Analysis of Algorithm Complexity Theory
UNIT -2
ANALYSIS OF ALGORITHM
& COMPLEXITY THEORY
➢ The following 3 asymptotic notations are mostly used to represent the time complexity of algorithms.
Big-O Notation (O): Big-O notation represents the upper bound of the running time of an algorithm. A function f(n) belongs to the set O(g(n)) if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for sufficiently large n, i.e. for all n ≥ n0. Then
f(n) ≤ c·g(n)
f(n) = O(g(n))
Omega Notation (Ω): Omega notation represents the lower bound of the running time of an algorithm. A function f(n) belongs to the set Ω(g(n)) if there exists a positive constant c such that f(n) lies above c·g(n) for sufficiently large n, i.e. 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0.
Then
f(n) ≥ c·g(n)
f(n) = Ω(g(n))
Theta Notation (Θ): Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm. A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be sandwiched between c1·g(n) and c2·g(n) for sufficiently large n. If f(n) lies between c1·g(n) and c2·g(n) for all n ≥ n0, then f(n) is said to be asymptotically tightly bound.
c1·g(n) ≤ f(n) ≤ c2·g(n)
Then we can say
f(n) = Θ(g(n))
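As a sketch, the Θ definition can be checked numerically for a concrete function. Here f(n) = 3n + 2 with g(n) = n; the constants c1 = 3, c2 = 4 and n0 = 2 are assumptions chosen for this illustration.

```python
# Numeric check that f(n) = 3n + 2 is Theta(n):
# c1*g(n) <= f(n) <= c2*g(n) must hold for all n >= n0.

def f(n):
    return 3 * n + 2

def g(n):
    return n

c1, c2, n0 = 3, 4, 2   # assumed constants for this example

ok = all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 1000))
print(ok)  # True: f(n) = Theta(n)
```

Note that 3n + 2 ≤ 4n holds exactly when n ≥ 2, which is why n0 = 2 suffices.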
Best-case, worst-case and average-case analysis gives a measure of the efficiency of algorithms.
Best Case Analysis: In best case analysis, we calculate a lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location, so the time complexity in the best case is Ω(1).
Worst Case Analysis: In worst case analysis, we calculate an upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. For linear search, the worst case happens when the element to be searched is not present in the array.
Average Case Analysis: In average case analysis, we take all possible inputs and calculate the computing time for each of them, sum all the calculated values, and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases.
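A minimal linear search sketch makes the three cases concrete (the function name and sample data are illustrative):

```python
def linear_search(arr, x):
    """Return the index of x in arr, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == x:          # one comparison per element examined
            return i
    return -1

arr = [10, 20, 30, 40, 50]
print(linear_search(arr, 10))   # best case: x at the first location -> Omega(1)
print(linear_search(arr, 99))   # worst case: x absent -> all n elements examined, O(n)
# Average case: if x is equally likely to be at any of the n positions,
# the expected number of comparisons is (1 + 2 + ... + n) / n = (n + 1) / 2.
```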
We define input size as the total number of items present in the input. In other words, the time taken by an algorithm will increase with the input size.
➢ That is, as the amount of data gets bigger, how much more resource (time or memory) will the algorithm require?
➢ To help understand the implications, this section will look at graphs for common growth rates.
Asymptotic growth: the slower the growth rate, the better the algorithm.
By this measure, a linear algorithm is always asymptotically better than a quadratic one.
Ex. algorithms having complexity:
logarithmic: O(log n)
linear: O(n)
Big-Ο (Ο()) is used as an upper bound on the growth of an algorithm’s effort; the bound may be tight or loose.
“Little-ο” (ο()) notation is used to describe an upper bound that cannot be tight: f(n) = o(g(n)) means f(n) grows strictly slower than g(n).
Little-o is thus a rough (strict) overestimate of the order of growth, whereas Big-Ο may be the actual order of growth.
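The distinction can be illustrated numerically (functions chosen for illustration): f(n) = o(g(n)) when the ratio f(n)/g(n) tends to 0, whereas a tight Big-Ο bound leaves the ratio at a nonzero constant.

```python
# n = o(n^2): the ratio n / n^2 shrinks toward 0 as n grows.
ratios_little_o = [n / n**2 for n in (10, 100, 1000, 10000)]
print(ratios_little_o)       # -> values shrinking toward 0

# 3n = O(n) tightly, so 3n is NOT o(n): the ratio stays at the constant 3.
ratios_big_o = [3 * n / n for n in (10, 100, 1000, 10000)]
print(ratios_big_o)          # -> constant 3.0 at every n
```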
Methods to solve recurrence relations:
1. Substitution method
   a. Forward substitution
   b. Backward substitution
2. Master theorem
3. Recursion tree method
By forward substitution, consider the recurrence T(n) = T(n-1) + n with T(0) = 0.
If n = 1: T(1) = T(0) + 1 = 1
If n = 2: T(2) = T(1) + 2 = 1 + 2 = 3
If n = 3: T(3) = T(2) + 3 = 3 + 3 = 6
From the above values we can determine the common pattern:
T(n) = n(n+1)/2 = (n² + n)/2 = n²/2 + n/2 = O(n²)
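The closed form can be checked against the recurrence directly (a sketch, assuming the base case T(0) = 0):

```python
# T(n) = T(n-1) + n, T(0) = 0, has closed form n*(n+1)/2, which is O(n^2).

def T(n):
    total = 0
    for i in range(1, n + 1):   # iterative unrolling of the recurrence
        total += i              # each step adds the "+ n" term
    return total

for n in (1, 2, 3, 10, 100):
    assert T(n) == n * (n + 1) // 2
print(T(3), T(100))  # 6 5050
```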
By backward substitution, consider T(n) = T(n-1) + 1 with T(0) = 0.
T(n) = T(n-1) + 1 … (1)
Substituting T(n-1) = T(n-2) + 1 into eq. (1):
T(n) = T(n-2) + 2 … (2)
T(n-2) = T(n-2-1) + 1
       = T(n-3) + 1
Put back this value of T(n-2) in eq. (2), so it will become
T(n) = T(n-3) + 1 + 2
T(n) = T(n-3) + 3
So the kth term will be
T(n) = T(n-k) + k
Assuming n = k:
T(n) = T(0) + n    (initial condition T(0) = 0)
T(n) = 0 + n
     = O(n)
Master Theorem: for recurrences of the form T(n) = aT(n/b) + f(n), with a ≥ 1 and b > 1:
Case 1: if f(n) = O(n^(log_b a − ϵ)), then T(n) = Θ(n^(log_b a)).
Case 2: if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
Case 3: if f(n) = Ω(n^(log_b a + ϵ)) and a·f(n/b) ≤ c·f(n) for some constant c < 1, then T(n) = Θ(f(n)).
Here ϵ > 0 is a constant.
2. Solve the recurrence relation using the Master theorem: T(n) = 2T(n/2) + n log n
Solution:
➢ Compare the given equation with T(n) = aT(n/b) + f(n) to find a and b.
➢ Here, a = 2, b = 2 and f(n) = n log n.
➢ Now find n^(log_b a) = n^(log_2 2) = n.
➢ Now n^(log_b a) < f(n), but f(n) = n log n exceeds n only by a log n factor, so none of the three basic cases applies directly. Using the extended Master theorem for f(n) = Θ(n^k log^p n) with k = log_b a = 1 and p = 1, T(n) = Θ(n^(log_b a) · log^(p+1) n).
➢ T(n) = Θ(n log² n).
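The Θ(n log² n) growth can be checked numerically: the ratio T(n) / (n log² n) should settle toward a constant. A rough sketch, evaluating the recurrence with memoization, integer halving, and an assumed base case T(1) = 1:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2*T(n/2) + n*log2(n), with T(1) = 1 assumed as the base case
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n * math.log2(n)

for k in (10, 15, 20):
    n = 2 ** k
    # Ratio drifts toward a constant (~0.5), confirming Theta(n log^2 n)
    print(n, T(n) / (n * math.log2(n) ** 2))
```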
➢ Ex. Problems solvable in polynomial time include:
1. Calculating the greatest common divisor.
2. Searching and sorting algorithms.
3. Finding a maximum matching.
4. Decision versions of linear programming.
➢ The problems that can be solved in polynomial time are called tractable problems. Tractable means that the problems can be solved in theory as well as in practice. The problems that can be solved in theory but not in practice are known as intractable.
➢ The solutions of NP-class problems are hard to find, since they are solved by a non-deterministic polynomial algorithm.
➢ But if a solution is given and we just want to verify whether it is correct or incorrect, then this can be done in polynomial time.
➢ So we can verify an NP problem's solution in polynomial time, but we cannot solve it in polynomial time.
➢ To understand the relation between P and NP class problems, consider the following cases:
Case 1: If P == NP, then every NP problem can be solved in polynomial time, which is not believed to be feasible.
Let A and B be two problems. Problem A is reduced to problem B in polynomial time if every instance of A can be converted into an instance of B in polynomial time, so that a solution of B yields a solution of A. Hence, if B can be solved in polynomial time, then A can also be solved in polynomial time.
Prepared By Mr. Vipin K. Wani
NP Hard Problem
A problem is NP-hard if an algorithm for solving it can be translated into one for solving any problem in NP (nondeterministic polynomial time). Examples:
1. The halting problem.
2. Quantified Boolean formulas (QBF).
3. The "no Hamiltonian cycle" problem.
Formally, a problem X is NP-hard if every NP problem Y can be reduced to X in polynomial time.
Relation Between P, NP, NP Hard & NP Complete
NP HARD: To solve this problem, it does not have to be in NP.
NP COMPLETE: To solve this problem, it must be both in NP and NP-hard.
Ex. Non-deterministic search for x in array A:
j = choice(a, n)      // non-deterministically choose an index j of array a
if (A[j] == x) then
{
    write(j);
    success();
}
write(0);
failure();
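The non-deterministic `choice(a, n)` cannot be executed directly on a real machine, but its effect can be simulated deterministically by trying every possible choice (a sketch; names mirror the pseudocode above):

```python
def nondeterministic_search(A, x):
    """Simulate choice(a, n): succeed if ANY choice of j gives A[j] == x."""
    for j in range(len(A)):      # deterministic simulation tries all choices
        if A[j] == x:
            return ("success", j)
    return ("failure", 0)

print(nondeterministic_search([3, 1, 4, 1, 5], 4))   # ('success', 2)
print(nondeterministic_search([3, 1, 4, 1, 5], 9))   # ('failure', 0)
```

The non-deterministic machine "guesses" the right j in one step; the deterministic simulation pays for that guess with a linear scan.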
Can solve the problem in polynomial time. | Can't solve the problem in polynomial time.
➢ Since no polynomial-time exact algorithm is known for the vertex cover problem, we use approximation algorithms that run in polynomial time complexity. A simple approximate algorithm for the vertex cover problem is described below:
[Figure: step-by-step execution of the approximate vertex cover algorithm on a graph with vertices 1-5.]
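The standard simple 2-approximation works as follows: repeatedly pick an arbitrary remaining edge (u, v), add both its endpoints to the cover, and discard every edge incident on u or v. A sketch, assuming the graph is given as an edge list (the example graph is illustrative):

```python
def approx_vertex_cover(edges):
    """2-approximation: pick an edge, take both endpoints, drop covered edges."""
    cover = set()
    remaining = list(edges)
    while remaining:
        u, v = remaining[0]              # pick an arbitrary uncovered edge
        cover.update((u, v))             # add BOTH endpoints to the cover
        remaining = [(a, b) for (a, b) in remaining
                     if a not in cover and b not in cover]
    return cover

# Example graph on vertices 1..5 (edge list chosen for illustration)
edges = [(1, 2), (1, 3), (2, 4), (3, 5), (4, 5)]
cover = approx_vertex_cover(edges)
print(cover)
# Every edge has at least one endpoint in the cover:
print(all(u in cover or v in cover for (u, v) in edges))  # True
```

The result is at most twice the size of an optimal cover, because the picked edges are pairwise disjoint and any cover must contain at least one endpoint of each.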
A Hamiltonian Path in an undirected graph is a path that visits each vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a Hamiltonian Path such that there is an edge (in the graph) from the last vertex to the first vertex of the Hamiltonian Path.
Problem: determine whether a given graph contains a Hamiltonian cycle or not; if it does, print the path.
All 2³ = 8 possible truth assignments of three Boolean variables y1, y2, y3:
y1 y2 y3
0  0  0
0  0  1
0  1  0
0  1  1
1  0  0
1  0  1
1  1  0
1  1  1