
UNIT-1

Algorithm Complexity: Time and Space Efficiency of an Algorithm; Asymptotic Notations: Big Oh, Big
Theta and Big Omega.

Algorithm Analysis:
Algorithm analysis is an important part of computational complexity theory, which provides a
theoretical estimate of the resources an algorithm requires to solve a given computational
problem. Analysis of algorithms is the determination of the amount of time and space an
algorithm needs to execute.

Why is Analysis of Algorithms important?

• To predict the behavior of an algorithm without implementing it on a specific computer.
• It is much more convenient to have simple measures for the efficiency of an algorithm
than to implement the algorithm and test its efficiency every time a parameter of the
underlying computer system changes.
• It is impossible to predict the exact behavior of an algorithm; there are too many
influencing factors.
• The analysis is thus only an approximation; it is not perfect.
• More importantly, by analyzing different algorithms, we can compare them to determine
the best one for our purpose.

Types of Algorithm Analysis:
1. Best case
2. Worst case
3. Average case
• Best case: the input for which the algorithm takes the least (minimum) time. The best case
gives a lower bound on the running time of the algorithm. Example: in linear search, the best
case occurs when the search key is present at the first location of the data (see the sketch after
this list).
• Worst case: the input for which the algorithm takes the longest (maximum) time. The worst
case gives an upper bound on the running time of the algorithm. Example: in linear search, the
worst case occurs when the search key is not present at all.
• Average case: take all possible (random) inputs, compute the running time for each, and
divide by the total number of inputs.
Average case = sum of running times over all inputs / total number of inputs
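To make the best and worst cases of linear search concrete, here is a minimal Java sketch (the class name and the array contents are our own illustration, not from the original notes):

public class LinearSearchDemo {
    // Returns the index of key in arr, or -1 if the key is absent.
    static int linearSearch(int[] arr, int key) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == key) {
                return i;   // found: scanning stops early
            }
        }
        return -1;          // not found: all n elements were scanned
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 4, 1};
        System.out.println(linearSearch(data, 7));  // best case: key at index 0, one comparison
        System.out.println(linearSearch(data, 8));  // worst case: key absent, n comparisons
    }
}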

Algorithm Complexity
Suppose X is an algorithm and N is the size of the input data. The time and space used by the
algorithm X are the two main factors that determine the efficiency of X.
Time Factor − time is measured by counting the number of key operations, such as comparisons
in a sorting algorithm.
Space Factor − space is measured by counting the maximum memory space required by the
algorithm.
The complexity of an algorithm, f(N), gives the running time and/or the storage space needed by
the algorithm in terms of N, the size of the input data.

Time Complexity
Time complexity is a function/relationship that tells us how the running time increases as the input
size increases.
Points to remember while calculating time complexity:
• Consider large inputs, because the relationship between input size and running time only
settles in for large inputs.
• Constants are ignored, since the actual time differs even for the same relationship.
• Always ignore the less dominating terms.
• Look for the worst-case complexity - this is what we consider the Big O of our
algorithm/function.
Example
1) f(n) = 5n^3 + 4n + 3
Time Complexity - O(n^3)
Explanation - Ignoring the less dominating terms, we are left with 5n^3. Ignoring the constant as
well, we get n^3, and this is the time complexity.
2)
int sum = 0;
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++){
        sum += i;
    }
}
Time Complexity - O(n^2)
Explanation - Adding i to sum is a constant-time operation. If we fix the value of i, the inner loop
runs n times, so for a particular value of i the inner loop has O(n) complexity. The outer loop also
runs n times. So the total is n * n steps, and that is the time complexity.
Guidelines for Asymptotic Notation:
Loops:
The running time of a loop is, at most, the running time of the statements inside the loop multiplied
by the number of iterations.
for(int i = 0; i < n; i++){
    System.out.println(i);
}
Here, the number of iterations is n and the print statement is a constant-time operation, so the time
complexity becomes O(n * 1), i.e. O(n).
Nested Loops:
The total running time is the product of the sizes of all the loops.
for(int j = 0; j < n; j++){
    for(int i = 0; i < n; i++){
        System.out.println(i);
    }
}
Here, the outer loop runs n times, and each of its iterations executes the inner loop, which has O(n)
complexity as explained earlier. So the total becomes O(n * n), i.e. O(n^2).
Consecutive statements:
Add the time complexities of the individual statements.
int x = 0;
x += 1;
for(int i = 0; i < n; i++){
    System.out.println(i);
}
for(int j = 0; j < n; j++){
    for(int i = 0; i < n; i++){
        System.out.println(i);
    }
}
Here, the topmost two lines of code take 2 units of time (each statement takes 1 unit). The loop
next to them executes n times (as explained earlier). The nested loop takes n^2 time. Hence the total
becomes n^2 + n + 2. Ignoring the less dominating terms and the constants, the final time complexity
is O(n^2).
If-then-else statements:
The total running time is the sum of the time taken for checking the condition and the time of
whichever part (if or else) takes longer.
int val = 12;
if(val < 18){
    for(int i = 0; i < n; i++){
        System.out.println(i);
    }
}
else{
    System.out.println(val);
}
Here, the first statement takes 1 unit of time, and checking the condition takes 1 unit of time. The
"if" part takes n units of time and the "else" part takes 1 unit. The larger of the two is the "if" part
(n units of time).
So total = 1 + 1 + n = O(n)
Logarithmic Complexity:
It is achieved when the problem size is cut down by a constant factor at each step.
for(int i = 1; i <= n;){
    i *= 2;
}
Here, in the first iteration, i = 1 (i.e. 2^0),
in the second, i = 2 (i.e. 2^1),
in the third, i = 4 (i.e. 2^2),
in the fourth, i = 8 (i.e. 2^3),
...
and in the kth, i = n (i.e. 2^(k-1)).
So, we need to find the number of iterations, i.e. the value of k = log₂ n + 1. That means the time
complexity will be O(log₂ n).
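As a quick sanity check, here is a small Java sketch (the class name and the choice n = 1024 are our own illustration) that counts the iterations of the doubling loop above and compares the count with log₂ n + 1:

public class LogLoopDemo {
    public static void main(String[] args) {
        int n = 1024;
        int count = 0;
        for (int i = 1; i <= n; ) {
            i *= 2;       // i doubles each iteration
            count++;
        }
        // floor(log2 n) computed from the bit position of the highest set bit
        int expected = (31 - Integer.numberOfLeadingZeros(n)) + 1;
        System.out.println(count + " iterations, expected " + expected);  // prints "11 iterations, expected 11"
    }
}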
Space Complexity
The space complexity of an algorithm is basically the amount of memory it needs to run to
completion, i.e., to execute and produce the result.
Memory Usage while Execution
While executing, an algorithm uses memory space for three reasons:
• Instruction Space
-- the amount of memory used to store the compiled version of the instructions.
• Environmental Stack
-- sometimes an algorithm (function) may be called inside another algorithm (function). In such a
situation, the current variables are pushed onto the system stack, where they wait for further
execution, and then the call to the inner algorithm (function) is made.
Ex. If a function A() calls function B() inside it, then all the variables of function A() are stored
on the system stack temporarily while function B() is called and executed inside function A().
• Data Space
-- the amount of space used by the variables and constants.
So in general, for any algorithm, memory may be used for the following: variables (data space),
program instructions (instruction space) and execution (environmental stack). But while
calculating the space complexity of an algorithm, we usually consider only the data space and
neglect the instruction space and the environmental stack.
Calculation of Space Complexity
An algorithm's space requirement can be divided into 2 parts:
1) Fixed Part - independent of the characteristics of the input and output.
It includes the instruction (code) space, the space for simple variables, fixed-size component
variables and constants.
2) Variable Part - dependent on the instance characteristics.
It consists of the space needed by component variables whose size depends on the particular
problem instance being solved, the space needed by referenced variables, and the recursion stack space.
Sometimes, auxiliary space is confused with space complexity. Auxiliary space is the extra or
temporary space used by the algorithm during its execution.
Space Complexity = Auxiliary Space + Input Space
Thus, the space requirement S(M) of any algorithm M is: S(M) = c + S_M(instance characteristics),
where c is a constant.
While analyzing space complexity, we primarily concentrate on estimating S_M. Consider the
following algorithm:
public int sum(int a, int b) {
    return a + b;
}
In this particular method, three variables are used and allocated in memory:
1. The first int argument, a
2. The second int argument, b
3. The returned sum result which is also an int like a and b
In Java, a single integer variable occupies 4 bytes of memory. In this example, we have three integer
variables. Therefore, this algorithm always takes 12 bytes of memory to complete (3*4 bytes).
We can clearly see that the space complexity is constant, so, it can be expressed in big-O notation as
O(1).
Now let us see another example -
public int sumArray(int[] array) {
    int size = array.length;
    int sum = 0;
    for (int i = 0; i < size; i++) {
        sum += array[i];
    }
    return sum;
}
Again, let's list all the variables present in the above code:
1. The array – the function's only argument – the space taken by the array is 4n bytes, where n
is the length of the array
2. The int variable, size
3. The int variable, sum
4. The int iterator, i
The total space needed for this algorithm to complete is 4n + 4 + 4 + 4 bytes. The highest-order
term in this expression is n, so the space complexity of this code snippet is O(n). Note that the
loop itself adds only a constant number of variables; the O(n) comes from the array input.
While dealing with operations on data structures, we can say that the space complexity depends
on the size of the data structure. E.g., if an array stores N elements, its space complexity is O(N);
a program with an array of N arrays (each of length N) will have space complexity O(N^2), and so on.
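To make these data structure sizes concrete, here is a minimal Java sketch (the class name, the choice of n, and the variable names are our own illustration):

public class SpaceDemo {
    public static void main(String[] args) {
        int n = 100;
        int[] arr = new int[n];        // N ints: O(N) space
        int[][] grid = new int[n][n];  // N arrays of N ints each: O(N^2) space
        System.out.println(arr.length + " ints and " + (grid.length * grid[0].length) + " grid cells allocated");
    }
}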
Now, the space complexity analysis also takes into account the size of the recursion stack in the
case of recursive algorithms. Consider the code below -
public int fact(int n) {
    if (n <= 0)
        return 1;
    else
        return n * fact(n - 1);
}
In this case there are 3 statements (an if condition and 2 return statements). The depth of the
recursion is n + 1. Thus the recursion stack space needed is ≥ 3(n + 1). So we can say the space
complexity is O(n), i.e. linear.
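For contrast, here is a minimal iterative sketch (our own illustration, not from the original notes); it computes the same factorial with a loop, so there is no recursion stack and the auxiliary space is O(1):

public class FactIterativeDemo {
    // Iterative factorial: one accumulator variable, no stack growth.
    static int factIterative(int n) {
        int result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factIterative(5));  // prints 120
    }
}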
Space Complexities of Common Algorithms
The space complexities of various algorithms are given below -

Algorithm        Space Complexity
Linear Search    O(1)
Binary Search    O(1)
Bubble Sort      O(1)
Insertion Sort   O(1)
Selection Sort   O(1)
Heapsort         O(1)
Shell Sort       O(1)
Quicksort        O(log n)
Mergesort        O(n)
Timsort          O(n)
Tree Sort        O(n)
Bucket Sort      O(n)
Radix Sort       O(n + k)
Counting Sort    O(k)

Asymptotic notations
• Asymptotic analysis is a useful tool to help structure our thinking toward better algorithms.
• We shouldn't ignore asymptotically slower algorithms, however; real-world design situations
often call for a careful balancing of concerns.
• Asymptotic complexity is a way of expressing the cost of an algorithm using idealized units of
computational work. Consider, for example, the algorithm for sorting a deck of cards which
proceeds by repeatedly searching through the deck for the lowest card. The asymptotic
complexity of this algorithm is the square of the number of cards in the deck.
• Note that we think about bounds on the performance of algorithms rather than giving exact
speeds:
o The actual number of steps required to sort our deck of cards (with our naive quadratic
algorithm) will depend upon the order in which the cards begin.
o The actual time to perform each of the steps will depend upon the processor speed, the
condition of the processor cache, etc.
Big-O

• Big-O is the formal method of expressing the upper bound of an algorithm's running time. It is
a measure of the longest amount of time the algorithm could possibly take to complete.
• More formally, for non-negative functions f(n) and g(n):
• if there exists an integer n0 and a constant c > 0 such that for all integers n > n0, f(n) ≤ c·g(n),
• then f(n) is Big O of g(n).
• This is denoted as "f(n) = O(g(n))".
• If graphed, g(n) serves as an upper bound to the curve you are analyzing, f(n).

EXAMPLE
Example: n^2 + n = O(n^3)
Proof:
• Here, we have f(n) = n^2 + n, and g(n) = n^3
• Notice that if n ≥ 1, n ≤ n^3 is clear.
• Also, notice that if n ≥ 1, n^2 ≤ n^3 is clear.
• Side Note: In general, if a ≤ b, then n^a ≤ n^b whenever n ≥ 1. This fact is used often in these
types of proofs.
• Therefore,
n^2 + n ≤ n^3 + n^3 = 2n^3
• We have just shown that
n^2 + n ≤ 2n^3 for all n ≥ 1
• Thus, we have shown that n^2 + n = O(n^3)
(by the definition of Big-O, with n0 = 1 and c = 2.)
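To see the bound above concretely, here is a small Java sketch (our own illustration) that checks f(n) ≤ c·g(n) for the constants found in the proof (c = 2, n0 = 1) over a range of n:

public class BigODemo {
    public static void main(String[] args) {
        long c = 2, n0 = 1;                   // constants from the proof above
        for (long n = n0; n <= 1000; n++) {
            long f = n * n + n;               // f(n) = n^2 + n
            long g = n * n * n;               // g(n) = n^3
            if (f > c * g) {
                throw new AssertionError("bound failed at n = " + n);
            }
        }
        System.out.println("n^2 + n <= 2n^3 held for all n in [1, 1000]");
    }
}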

BIG-OMEGA NOTATION
➢ For non-negative functions f(n) and g(n), if there exists an integer n0 and a constant c > 0 such
that for all integers n > n0, f(n) ≥ c·g(n), then f(n) is omega of g(n).
➢ This is denoted as "f(n) = Ω(g(n))".
➢ This is almost the same definition as Big Oh, except that here "f(n) ≥ c·g(n)"; this makes g(n) a
lower bound function instead of an upper bound function.
➢ It describes the best that can happen for a given data size.
• Omega is the reverse of Big-Oh.
• Omega gives us a LOWER BOUND on a function.
• Big-Oh says, "Your algorithm is at most this bad."
• Omega says, "Your algorithm is at least this bad."

EXAMPLE

Example: n^3 + 4n^2 = Ω(n^2)

Proof:
• Here, we have f(n) = n^3 + 4n^2, and g(n) = n^2
• It is not too hard to see that if n ≥ 0,
n^3 ≤ n^3 + 4n^2
• We have already seen that if n ≥ 1,
n^2 ≤ n^3
• Thus, when n ≥ 1,
n^2 ≤ n^3 ≤ n^3 + 4n^2
• Therefore,
1·n^2 ≤ n^3 + 4n^2 for all n ≥ 1
• Thus, we have shown that n^3 + 4n^2 = Ω(n^2) (by the definition of Big-Ω, with n0 = 1 and c = 1.)

THETA Notation

• The function f(n) = Θ(g(n)) (read as "f of n is theta of g of n")
• iff there exist positive constants c1, c2, and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
• The theta notation is more precise than both the big oh and big omega notations.
• The function f(n) = Θ(g(n)) iff g(n) is both a lower and an upper bound of f(n).

Example: n^2 + 5n + 7 = Θ(n^2)
Proof:
• When n ≥ 1,
n^2 + 5n + 7 ≤ n^2 + 5n^2 + 7n^2 = 13n^2
• When n ≥ 0,
n^2 ≤ n^2 + 5n + 7
• Thus, when n ≥ 1,
1·n^2 ≤ n^2 + 5n + 7 ≤ 13n^2
Thus, we have shown that n^2 + 5n + 7 = Θ(n^2) (by the definition of Big-Θ,
with n0 = 1, c1 = 1, and c2 = 13.)

Commonly used Asymptotic Notations

Type           Big O notation

Constant       O(1)
Logarithmic    O(log n)
Linear         O(n)
N log n        O(n log n)
Quadratic      O(n^2)
Cubic          O(n^3)
Polynomial     n^O(1)
Exponential    2^O(n)

The run-time performance of the above complexities, in ascending order, is:

O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!)

RATE OF GROWTH / Order of Growth

• Let the performances (running times) of two algorithms X and Y be T_X(n) and T_Y(n) respectively.
• The conclusion is that, for large n, X performs better than Y, i.e. T_X(n) < T_Y(n).
• So, which is the better performer? Ans: X.
• From what point does X outperform Y, and by what factor? Ans: from n0 onward, by the
constant factor c.
