Lecture 2
Asymptotic notation
Cormen (3.1)
Levitin (2.1, 2.2)
What is Complexity?
• The level of difficulty in solving mathematically posed
problems, as measured by
the time required
(time complexity)
the number of steps or arithmetic operations
(computational complexity)
the memory space required
(space complexity)
Major Factors in Algorithms Design
1. Correctness
An algorithm is said to be correct if
• For every input, it halts with correct output.
• An incorrect algorithm might not halt at all, OR
• it might halt with an answer other than the desired one.
• A correct algorithm solves the given computational problem.
2. Algorithm Efficiency
To measure the efficiency of an algorithm,
• analyze it, i.e. determine its growth rate.
• compare the efficiencies of different algorithms for the
same problem.
Algorithm Growth Rates
• Growth rate measures an algorithm's efficiency
What do we mean by efficient?
An algorithm is efficient if its running time is bounded by a
polynomial in the input size
Notations for Asymptotic Performance
• How running time increases with input size
• O, Ω, Θ, etc. denote asymptotic running time
• These notations are defined in terms of functions whose
domains are the natural numbers
• Convenient for describing worst-case running time
• Asymptotically efficient algorithms are usually the best choice
Complexity Analysis
• Algorithm analysis means predicting the resources an
algorithm requires, such as
computational time
memory
computer hardware, etc.
• Worst case analysis
Provides an upper bound on running time
An absolute guarantee
• Average case analysis
Provides the expected running time
Very useful, but treat with care: what is “average”?
o Random (equally likely) inputs
o Real-life inputs
Worst-case Analysis
Let us suppose that
• Dn = set of inputs of size n for the problem
• I = an element of Dn.
• t(I) = number of basic operations performed on I
• Define a function W by
W(n) = max{ t(I) | I ∈ Dn }
called the worst-case complexity of the algorithm
• W(n) is the maximum number of basic operations
performed by the algorithm on any input of size n.
• Please note that the input, I, for which an algorithm
behaves worst depends on the particular algorithm.
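To make W(n) concrete, here is a minimal sketch (not from the lecture; the names linear_search and worst_case are illustrative) that takes Dn to be the inputs of linear search of size n, counts comparisons as the basic operation t(I), and maximizes over all inputs:

def linear_search(a, key):
    # Return (index, comparison count); comparisons are the basic operation t(I).
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1
        if x == key:
            return i, comparisons
    return -1, comparisons

def worst_case(n):
    # W(n) = max{ t(I) | I in Dn }: try the key at every position, and absent.
    a = list(range(n))
    costs = [linear_search(a, k)[1] for k in a]   # successful searches
    costs.append(linear_search(a, n)[1])          # unsuccessful search
    return max(costs)

for n in (1, 5, 10):
    print(n, worst_case(n))   # prints 1 1, 5 5, 10 10

As expected, the worst input puts the key last or omits it entirely, giving W(n) = n for linear search.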
Average Complexity
• Let Pr(I) be the probability that input I occurs.
• Then the average behavior of the algorithm is defined as
A(n) = Σ Pr(I)·t(I), where the sum is taken over all I ∈ Dn
• We determine t(I) by analyzing the algorithm, but Pr(I)
cannot be computed analytically.
• Average cost =
A(n) = Pr(succ)Asucc(n) + Pr(fail)Afail(n)
• An element I in Dn may be thought of as an equivalence
class of inputs that all affect the behavior of the algorithm in
the same way
Computing average cost
• Take all possible inputs, compute their cost, and take the
(probability-weighted) average
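As an illustration of the formula above, here is a sketch (an assumed example, not from the lecture) of A(n) = Pr(succ)·Asucc(n) + Pr(fail)·Afail(n) for linear search, where a successful search is equally likely to stop at any of the n positions:

def average_cost(n, p_success):
    # A successful search ending at position i costs i + 1 comparisons.
    a_succ = sum(i + 1 for i in range(n)) / n   # equals (n + 1) / 2
    a_fail = n                                  # unsuccessful search scans all n items
    return p_success * a_succ + (1 - p_success) * a_fail

print(average_cost(10, 1.0))   # 5.5  = (n + 1) / 2
print(average_cost(10, 0.5))   # 7.75 = 0.5 * 5.5 + 0.5 * 10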
Asymptotic Notations Properties
• Categorize algorithms based on asymptotic growth
rate e.g. linear, quadratic, polynomial, exponential
• Ignore constant factors and small inputs
• Estimate upper bound and lower bound on growth
rate of time complexity function
• Describe running time of an algorithm as n grows to ∞
• Describe behavior of a function in the limit
Limitations
• not always useful for analysis on fixed-size inputs.
• All results are for sufficiently large inputs.
Asymptotic Notations
Asymptotic Notations: Θ, O, Ω, o, ω
We use Θ to mean “order exactly”,
O to mean “order at most”,
Ω to mean “order at least”,
o to mean “order strictly less than” (an upper bound that is not tight),
ω to mean “order strictly greater than” (a lower bound that is not tight)
f(n) ∈ O(g(n))
Since
f(n) ≤ c·g(n) ⇒ n² ≤ c·n² ⇒ 1 ≤ c, take c = 1,
n0 = 1
Then
n² ≤ c·n² for c = 1 and n ≥ 1
Hence, n² ∈ O(n²), where c = 1 and n0 = 1
Examples
Example 3: Prove that 1000·n² + 1000·n ∈ O(n²)
Proof:
Assume that f(n) = 1000·n² + 1000·n, and g(n) = n²
We have to show the existence of c and n0 such that
0 ≤ f(n) ≤ c·g(n) ∀ n ≥ n0
Try c = 1001: 1000·n² + 1000·n ≤ 1001·n²
⇔ 1000·n ≤ n² ⇔ n² ≥ 1000·n ⇔ n² − 1000·n ≥ 0
⇔ n(n − 1000) ≥ 0, which is true for n ≥ 1000
Hence f(n) ≤ c·g(n) ∀ n ≥ n0, with c = 1001 and n0 = 1000
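The constants can be spot-checked numerically; this small sketch (illustration only) verifies the inequality from Example 3 on a sample range:

# Check that 1000*n**2 + 1000*n <= 1001*n**2 for n >= n0 = 1000.
f = lambda n: 1000 * n**2 + 1000 * n
g = lambda n: n**2
c, n0 = 1001, 1000

assert all(0 <= f(n) <= c * g(n) for n in range(n0, n0 + 10_000))
print("f(n) <= 1001*g(n) verified on a sample of n >= 1000")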
Big-Omega Notation
f(n) ∈ Ω(g(n))
Intuitively:
Set of all functions whose rate of growth is the same as or
higher than that of g(n).
Theta Notation
f(n) ∈ Θ(g(n))
Intuitively: Set of all functions that have the same rate of growth as g(n).
Since
f(n) < c·g(n) ⇒ n² < c·n² ⇒ 1 < c, which fails for c ≤ 1
(for little o the inequality must hold for every c > 0),
Therefore
1. f(n) ∉ o(f(n)) and
2. f(n) ∉ ω(f(n))
Note:
Hence little o and little omega are not reflexive relations
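A quick numeric illustration (a sketch, using the standard limit characterization: f ∈ o(g) requires f(n)/g(n) → 0, and f ∈ ω(g) requires f(n)/g(n) → ∞):

# For f = g the ratio f(n)/f(n) is identically 1: it tends neither to 0
# (needed for little o) nor to infinity (needed for little omega).
f = lambda n: n**2
ratios = [f(n) / f(n) for n in (10, 100, 1000, 10_000)]
print(ratios)   # [1.0, 1.0, 1.0, 1.0] -- so n^2 is in neither o(n^2) nor omega(n^2)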
Symmetry
Definition:
• Let X be a non-empty set and R a relation
over X; then R is said to be symmetric if
∀ a, b ∈ X, (a, b) ∈ R ⇒ (b, a) ∈ R
Example 1:
• Let P be a set of persons, and S be a relation
over P such that if (x, y) ∈ S then x has the
same sign as y.
• This relation is symmetric because
(x, y) ∈ S ⇒ (y, x) ∈ S
Example 2:
• Let P be a set of all persons, and B be a
relation over P such that if (x, y) ∈ B then x is
a brother of y.
• This relation is not symmetric because x may be
a brother of y while y is a sister of x, i.e.
(x, y) ∈ B does not imply (y, x) ∈ B
Symmetry over Θ
Property: prove that
f(n) = Θ(g(n)) ⇔ g(n) = Θ(f(n))
Proof
• Since f(n) = Θ(g(n)), i.e. f(n) ∈ Θ(g(n)),
∃ constants c1, c2 > 0 and n0 ∈ N such that
0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) ∀ n ≥ n0 (1)
(1) ⇒ 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) ⇒ 0 ≤ f(n) ≤ c2·g(n)
⇒ 0 ≤ (1/c2)·f(n) ≤ g(n) (2)
(1) ⇒ 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) ⇒ 0 ≤ c1·g(n) ≤ f(n)
⇒ 0 ≤ g(n) ≤ (1/c1)·f(n) (3)
Symmetry over Θ
From (2), (3): 0 ≤ (1/c2)·f(n) ≤ g(n) and 0 ≤ g(n) ≤ (1/c1)·f(n)
⇒ 0 ≤ (1/c2)·f(n) ≤ g(n) ≤ (1/c1)·f(n)
Suppose that 1/c2 = c3 and 1/c1 = c4.
Now the above inequality implies that
0 ≤ c3·f(n) ≤ g(n) ≤ c4·f(n), ∀ n ≥ n0
⇒ g(n) = Θ(f(n)), ∀ n ≥ n0
Hence it proves that
f(n) = Θ(g(n)) ⇔ g(n) = Θ(f(n))
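The constants constructed in this proof can be checked on a concrete example; the functions and constants below are assumptions chosen for illustration:

# If 0 <= c1*g(n) <= f(n) <= c2*g(n), then 0 <= (1/c2)*f(n) <= g(n) <= (1/c1)*f(n).
f = lambda n: 3 * n**2 + n
g = lambda n: n**2
c1, c2, n0 = 3, 4, 1
c3, c4 = 1 / c2, 1 / c1            # the constants built in the proof

for n in range(n0, 1000):
    assert 0 <= c1 * g(n) <= f(n) <= c2 * g(n)   # f = Theta(g)
    assert 0 <= c3 * f(n) <= g(n) <= c4 * f(n)   # hence g = Theta(f)
print("both sandwich inequalities hold on the sample range")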
Exercise:
Prove that big O, big Ω, little ω, and little o do
not satisfy the symmetry property.
Transitivity
Definition:
• Let X be a non-empty set and R a relation
over X; then R is said to be transitive if
∀ a, b, c ∈ X, (a, b) ∈ R ∧ (b, c) ∈ R ⇒ (a, c) ∈ R
Example 1:
• Let P be a set of all persons, and B be a
relation over P such that if (x, y) ∈ B then x is
a brother of y.
• This relation is transitive because
(x, y) ∈ B ∧ (y, z) ∈ B ⇒ (x, z) ∈ B
Example 2:
• Let P be a set of all persons, and F be a
relation over P such that if (x, y) ∈ F then x is
the father of y.
• Of course this relation is not transitive: if x is the
father of y and y is the father of z, then x is the
grandfather of z, not the father.
Transitivity Relations over Θ, Ω, O, o and ω
Prove the following:
1. f(n) = Θ(g(n)) & g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))
2. f(n) = O(g(n)) & g(n) = O(h(n)) ⇒ f(n) = O(h(n))
3. f(n) = Ω(g(n)) & g(n) = Ω(h(n)) ⇒ f(n) = Ω(h(n))
4. f(n) = o(g(n)) & g(n) = o(h(n)) ⇒ f(n) = o(h(n))
5. f(n) = ω(g(n)) & g(n) = ω(h(n)) ⇒ f(n) = ω(h(n))
Transitivity Relation over Θ
Property 1
f(n) = Θ(g(n)) & g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))
Proof
1. Since f(n) = Θ(g(n)), i.e. f(n) ∈ Θ(g(n)),
∃ constants c1, c2 > 0 and n01 ∈ N such that
0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) ∀ n ≥ n01 (1)
2. Now since g(n) = Θ(h(n)), i.e. g(n) ∈ Θ(h(n)),
∃ constants c3, c4 > 0 and n02 ∈ N such that
0 ≤ c3·h(n) ≤ g(n) ≤ c4·h(n) ∀ n ≥ n02 (2)
3. Now let us suppose that n0 = max(n01, n02)
Transitivity Relation over Θ
4. Now we have to show that f(n) = Θ(h(n)), i.e. we have
to prove that
∃ constants c5, c6 > 0 and n0 ∈ N such that
0 ≤ c5·h(n) ≤ f(n) ≤ c6·h(n) ?
(2) ⇒ 0 ≤ c3·h(n) ≤ g(n) ≤ c4·h(n)
⇒ 0 ≤ c3·h(n) ≤ g(n) (3)
(1) ⇒ 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n)
⇒ 0 ≤ c1·g(n) ≤ f(n)
⇒ 0 ≤ g(n) ≤ (1/c1)·f(n) (4)
From (3) and (4): 0 ≤ c3·h(n) ≤ g(n) ≤ (1/c1)·f(n)
⇒ 0 ≤ c1·c3·h(n) ≤ f(n) (5)
Transitivity Relation over Θ
(1) ⇒ 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n)
⇒ 0 ≤ f(n) ≤ c2·g(n) ⇒ 0 ≤ (1/c2)·f(n) ≤ g(n) (6)
(2) ⇒ 0 ≤ c3·h(n) ≤ g(n) ≤ c4·h(n)
⇒ 0 ≤ g(n) ≤ c4·h(n) (7)
From (6) and (7): 0 ≤ (1/c2)·f(n) ≤ g(n) ≤ c4·h(n)
⇒ 0 ≤ (1/c2)·f(n) ≤ c4·h(n)
⇒ 0 ≤ f(n) ≤ c2·c4·h(n) (8)
From (5), (8): 0 ≤ c1·c3·h(n) ≤ f(n) and 0 ≤ f(n) ≤ c2·c4·h(n)
⇒ 0 ≤ c1·c3·h(n) ≤ f(n) ≤ c2·c4·h(n)
⇒ 0 ≤ c5·h(n) ≤ f(n) ≤ c6·h(n), with c5 = c1·c3 and c6 = c2·c4,
∀ n ≥ n0. Hence f(n) = Θ(h(n)).
Transitivity Relation over Big O
Property 2
f(n) = O(g(n)) & g(n) = O(h(n)) ⇒ f(n) = O(h(n))
Proof
1. Since f(n) = O(g(n)), i.e. f(n) ∈ O(g(n)),
∃ constant c1 > 0 and n01 ∈ N such that
0 ≤ f(n) ≤ c1·g(n) ∀ n ≥ n01 (1)
2. Now since g(n) = O(h(n)), i.e. g(n) ∈ O(h(n)),
∃ constant c2 > 0 and n02 ∈ N such that
0 ≤ g(n) ≤ c2·h(n) ∀ n ≥ n02 (2)
3. Now let us suppose that n0 = max(n01, n02)
Transitivity Relation over Big O
Now we have the two equations
0 ≤ f(n) ≤ c1·g(n) ∀ n ≥ n01 (1)
0 ≤ g(n) ≤ c2·h(n) ∀ n ≥ n02 (2)
(2) ⇒ 0 ≤ c1·g(n) ≤ c1·c2·h(n) ∀ n ≥ n02 (3)
From (1) and (3):
0 ≤ f(n) ≤ c1·g(n) ≤ c1·c2·h(n)
Now suppose that c3 = c1·c2:
0 ≤ f(n) ≤ c3·h(n) ∀ n ≥ n0
Hence f(n) = O(h(n)).
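Again, the constant c3 = c1·c2 built in the proof can be checked on a concrete (assumed) example:

# Composing the two Big-O bounds: f = O(g) and g = O(h) give f = O(h) with c3 = c1*c2.
f = lambda n: 2 * n
g = lambda n: n**2
h = lambda n: n**3
c1, c2, n0 = 2, 1, 1
c3 = c1 * c2                       # the constant built in the proof

for n in range(n0, 1000):
    assert 0 <= f(n) <= c1 * g(n)  # f = O(g)
    assert 0 <= g(n) <= c2 * h(n)  # g = O(h)
    assert 0 <= f(n) <= c3 * h(n)  # hence f = O(h)
print("f = O(h) with c3 = c1*c2 holds on the sample range")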
Standard Logarithm Notations
Some Definitions
Exponent
• x = log_b a is the exponent for a = b^x.
Natural log
• ln a = log_e a
Binary log
• lg a = log_2 a
Square of log
• lg² a = (lg a)²
Log of log
• lg lg a = lg(lg a)
Standard Logarithm Notations
• a = b^(log_b a)
• log_c(ab) = log_c a + log_c b
• log_b(a^n) = n·log_b a
• log_b a = log_c a / log_c b
• log_b(1/a) = −log_b a
• log_b a = 1 / log_a b
• a^(log_b c) = c^(log_b a)
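These identities can be spot-checked numerically; a small sketch using Python's math.log (the bases and arguments are chosen arbitrarily):

import math

a, b, c = 5.0, 2.0, 3.0
log = math.log                                                 # log(x, base)

assert math.isclose(b ** log(a, b), a)                         # a = b^(log_b a)
assert math.isclose(log(a * b, c), log(a, c) + log(b, c))      # log_c(ab) = log_c a + log_c b
assert math.isclose(log(a ** 4, b), 4 * log(a, b))             # log_b(a^n) = n log_b a
assert math.isclose(log(a, b), log(a, c) / log(b, c))          # change of base
assert math.isclose(log(1 / a, b), -log(a, b))                 # log_b(1/a) = -log_b a
assert math.isclose(log(a, b), 1 / log(b, a))                  # log_b a = 1 / log_a b
assert math.isclose(a ** log(c, b), c ** log(a, b))            # a^(log_b c) = c^(log_b a)
print("all logarithm identities verified numerically")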