DSD Unit 1 Analysis of Algorithm
DESIGN
Unit – 1
Analysis of Algorithm
INTRODUCTION TO ANALYSIS OF ALGORITHMS
• To bake a cake, we can find different recipes on the internet, each giving a number of steps for different
varieties of cake. Each of those step-by-step procedures for making a cake can be called an algorithm. We can
choose the simplest, easiest, and most convenient way to make the cake.
• Similarly, in computer science, multiple algorithms are available for solving the same problem. Algorithm
analysis helps us determine which algorithm is most efficient in terms of running time, memory and space
consumed, and so on.
Follow study with JBR Trisea You Tube Channel for Tamil Explanation
Why Analyze an Algorithm?
• The most straightforward reason for analyzing an algorithm is to discover its characteristics in
order to evaluate its suitability for various applications or compare it with other algorithms for
the same application. Moreover, the analysis of an algorithm can help us to understand it better,
and can suggest informed improvements. Algorithms tend to become shorter, simpler, and more
elegant during the analysis process.
• The efficiency of an algorithm can be decided based on
1. Amount of time required by an algorithm to execute.
2. Amount of storage required by an algorithm.
3. Size of the input set.
Complexities of an Algorithm
• The complexity of an algorithm measures the amount of time and space required by the
algorithm for an input of size n. The complexity of an algorithm is of two types: time
complexity and space complexity.
Rate of Growth
• The rate at which the running time of an algorithm increases as a function of input size is called
the rate of growth.
• The relationship between the commonly used rates of growth, in increasing order, is:

  Time complexity   Name
  1                 Constant
  log n             Logarithmic
  n                 Linear
  n log n           Linear logarithmic
  n²                Quadratic
  n³                Cubic
  2ⁿ                Exponential
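As a quick sketch of how these functions compare, the following snippet evaluates each growth function at a sample input size (n = 16 is an illustrative choice) and confirms they appear in strictly increasing order:

```python
import math

n = 16  # sample input size, chosen for illustration
growth = {
    'constant':     1,
    'logarithmic':  math.log2(n),       # log n
    'linear':       n,                  # n
    'linearithmic': n * math.log2(n),   # n log n
    'quadratic':    n ** 2,             # n^2
    'cubic':        n ** 3,             # n^3
    'exponential':  2 ** n,             # 2^n
}
for name, value in growth.items():
    print(f'{name:12} {value:>10.0f}')
```

For any fixed n > 2, each row dominates the one above it, which is exactly the ordering shown in the table.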
Asymptotic Analysis
• Asymptotic analysis gives an idea of the performance of an algorithm based on the input size.
It does not calculate the exact running time; instead, it finds the relation between the running time
and the input size, and studies how the running time changes as the input size is increased.
• For space complexity, the goal is to find the relation, or function, describing how much space in the
main memory is occupied to complete the algorithm.
• Algorithm analysis is broadly classified into three types:
• Best-case analysis: This analysis gives a lower bound on the running time. It describes the behaviour
of an algorithm under optimal conditions.
• Worst-case analysis: This analysis gives an upper bound on the running time of an algorithm. In
this case, the maximum number of operations is executed.
• Average-case analysis: This analysis gives the region between the upper and lower bounds of the
running time of an algorithm. In this case, the number of executed operations is neither the minimum
nor the maximum.
ASYMPTOTIC NOTATIONS
Asymptotic notation is one of the most efficient ways to calculate the time complexity of an algorithm.
Asymptotic notations are mathematical tools to represent time complexity of algorithms for asymptotic analysis.
The three asymptotic notations used to represent time complexity of algorithms are,
The Big O (Big-Oh) notation
The Big Ω (Big-Omega) notation
The Big Θ (Big-Theta) notation
The Mathematical Definition of Big-Oh
f(n) ∈ O(g(n)) if and only if there exist some positive constant c and some non-negative integer n₀ such that f(n) ≤
c·g(n) for all n ≥ n₀, with n₀ ≥ 1 and c > 0.
The definition says: in the worst case, let the function f(n) be the algorithm's runtime and g(n) be an
arbitrary time complexity. Then O(g(n)) says that the function f(n) never grows faster than g(n), that is, f(n) ≤ c·g(n),
and c·g(n) is the maximum number of steps that the algorithm can take.
In the corresponding graph, c·g(n) is a function that gives the
maximum runtime (upper bound) and f(n) is the
algorithm’s runtime.
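The definition can be checked numerically. In this sketch, the runtime function f(n) = 3n + 10 and the witnesses c = 4, n₀ = 10 are hypothetical choices made for illustration; since f(n) ≤ c·g(n) holds for every n ≥ n₀ with g(n) = n, we have f(n) ∈ O(n):

```python
def f(n):
    # hypothetical runtime function: f(n) = 3n + 10
    return 3 * n + 10

def g(n):
    # candidate time complexity: g(n) = n
    return n

c, n0 = 4, 10  # witnesses for the Big-Oh definition
# f(n) <= c*g(n) for all n >= n0, hence f(n) is in O(g(n))
ok = all(f(n) <= c * g(n) for n in range(n0, 10_000))
print(ok)  # True
```

Note that the inequality first holds at n = n₀ = 10 (3·10 + 10 = 40 ≤ 4·10), which is why the definition only requires it for sufficiently large n.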
The Big Omega (Ω) notation
• Big-Omega is an asymptotic notation for the best-case scenario. The Big-Ω notation defines an
asymptotic lower bound.
• f(n) ∈ Ω(g(n)) if and only if there exist some positive constant c and some non-negative integer n₀
such that f(n) ≥ c·g(n) for all n ≥ n₀, with n₀ ≥ 1 and c > 0.
• The definition says: in the best case, let the function f(n) be the algorithm’s runtime
and g(n) be an arbitrary time complexity. Then Ω(g(n)) says that the function g(n) never grows faster
than f(n), i.e. f(n) ≥ c·g(n); g(n) indicates the minimum number of steps that the algorithm will take.
• In the corresponding graph, c·g(n) is a function that gives the minimum runtime (lower bound) and f(n) is the
algorithm’s runtime.
The Big Theta (Θ) notation
• Big-Theta is an asymptotic notation for the average case, which gives the average growth rate for a given function.
The Theta notation always lies between the lower bound and the upper bound; it provides an asymptotically tight
bound on the growth rate of an algorithm. If the upper bound and the lower bound of the function give the same
result, then the Θ notation has that same rate of growth.
• f(n) ∈ Θ(g(n)) if and only if there exist some positive constants c₁ and c₂ and some non-negative integer n₀ such
that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀, with n₀ ≥ 1 and c₁, c₂ > 0.
The definition says: in the average case, let the
function f(n) be the algorithm’s runtime and g(n) be an arbitrary
time complexity. Then Θ(g(n)) says that the function g(n)
encloses the function f(n) from above and below
using c₁·g(n) and c₂·g(n).
RECURSION
• Recursion is a technique by which a function makes one or more calls to itself during execution.
• In computing, recursion provides an elegant and powerful alternative for performing repetitive tasks. When one
invocation of the function makes a recursive call, that invocation is suspended until the recursive call
completes.
• The factorial function
• The factorial function is a classic mathematical function that has a natural recursive definition.
• The factorial of a positive integer n is defined as the product of the integers from 1 to n. If n = 0, then n! is
defined as 1.
• For example, 5! = 5 · 4 · 3 · 2 · 1 = 120, and note that 5! = 5 · (4 · 3 · 2 · 1) = 5 · 4!. Generally, for a positive integer
n, we can define n! = n · (n − 1)!.
In this case, n = 0 is the base case: it is defined non-recursively, in terms of fixed quantities. n · (n − 1)! is the recursive case.
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)
Repetition in this function is provided by the repeated recursive invocation of the function. Each time the function is invoked,
its argument is smaller by one, and when the base case is reached, no further recursive calls are made.
• The execution of a recursive function is illustrated using a recursive trace. Each entry of the trace
corresponds to a recursive call. Each new recursive function call is indicated by a downward
arrow to a new invocation. When the function returns, an arrow showing this return is drawn and
the return value may be indicated alongside this arrow.
• In Python, when a function is called, a structure known as an activation record or a frame is
created to store information about the progress of that invocation of the function. This activation
record includes a namespace for storing the function call’s parameters and local variables, and
information about which command in the body of the function is currently executing.
• When the execution of a function leads to a nested function call, the execution of the former call
is suspended and its activation record stores the place where the control should continue upon
return of the nested call. That is, there is a different activation record for each active call.
Implementation: drawing an English ruler
i) draw_ruler
It manages the construction of the entire ruler. It takes the total number of inches and the major tick length as its arguments.
The iterative range starts from 1, as the tick for inch ‘0’ is constructed first.
The loop calls the draw_interval and draw_line functions.
ii) draw_line
It draws a single tick with a specified number of dashes and an optional
string label.
It is a non-recursive function.
iii) draw_interval
This function draws the sequence of minor ticks within some interval, based on the length of the
interval’s central tick.
An interval with a central tick of length L ≥ 1 is composed of a sub-interval with a central tick of length L − 1, a single tick of length L, and another sub-interval with a central tick of length L − 1.
With center_length = 0 it draws nothing. For center_length ≥ 1, the first and last steps are performed
by recursively calling draw_interval, and the middle step is performed by calling
draw_line.
An unsuccessful search occurs if low > high, as the interval [low, high] is then empty. This algorithm is
known as binary search.
The binary search algorithm requires O(log n) time, whereas the sequential search algorithm uses O(n) time.
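A recursive binary search matching this description might look like the following sketch (the function and parameter names are assumptions):

```python
def binary_search(data, target, low, high):
    """Return True if target is found in sorted data[low..high]."""
    if low > high:
        return False                     # empty interval: unsuccessful search
    mid = (low + high) // 2
    if target == data[mid]:
        return True                      # found a match at the midpoint
    elif target < data[mid]:
        # recur on the left half, discarding data[mid..high]
        return binary_search(data, target, low, mid - 1)
    else:
        # recur on the right half, discarding data[low..mid]
        return binary_search(data, target, mid + 1, high)
```

Each recursive call at least halves the candidate interval, which is the source of the O(log n) bound derived later in this unit.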
• The file system of a computer has a recursive structure in which directories can be nested arbitrarily deep
within other directories. Recursive algorithms are widely used to explore and manage these file systems.
• Modern operating systems define file-system directories in a recursive way: a file system consists of a
top-level directory containing files and other directories, which in turn contain files and other
directories, and so on.
• The file-system representation uses recursive algorithms for copying a directory, deleting a directory,
and so on. In this example, we consider computing the total disk usage for all files and directories nested
within a particular directory.
Functions from Python’s os module are used to implement a recursive algorithm for computing disk usage:
import os

def disk_usage(path):
    total = os.path.getsize(path)            # account for direct usage of this entry
    if os.path.isdir(path):                  # if this is a directory,
        for filename in os.listdir(path):    # then for each child,
            childpath = os.path.join(path, filename)
            total += disk_usage(childpath)   # add the child's usage to the total
    print('{0:<7}'.format(total), path)      # report this entry's cumulative usage
    return total
The objective of the Tower of Hanoi puzzle is to move all the disks from the starting pole to one of the other two poles to
create a new tower, adhering to two conditions:
1. Only one disk can be moved at a time.
2. A larger disk can never be placed on top of a smaller disk.
• Computing factorials
• To compute factorial(n), there are a total of n + 1 activations, and each individual activation
of factorial executes a constant number of operations. Therefore, we conclude that the overall
number of operations for computing factorial(n) is O(n), as there are n + 1 activations, each of
which accounts for O(1) operations.
In the binary search algorithm, the execution time is proportional to the number of recursive calls performed, since each
call takes constant time. So the binary search algorithm runs in O(log n) time for a sorted sequence with n elements.
Initially, the number of candidates is n; after the first call in a binary search, it is at
most n/2. After the second call, it is at most n/4, and so on. In general, after the j-th call in a binary search, the number of
candidate entries remaining is at most n/2ʲ. In an unsuccessful search, the recursive calls stop when there are no more
candidate entries. Hence, the maximum number of recursive calls performed is the smallest integer r such that n/2ʳ < 1.
In other words, r > log n. Thus, we have r = ⌊log n⌋ + 1, which implies that binary search runs in O(log n) time.
To evaluate the time complexity of the move() function, we need to determine the cost of each invocation and the number of times the function
is called for any value of n.
Each function invocation requires only O(1) time, since there are only two non-recursive steps performed by the function, both of
which require constant time.
To determine the total number of times the function is called, we need to calculate the number of times the function executes at each level of the
call tree and then add those values to obtain the final result. The number of function calls at each level is twice the number of calls at the previous
level.
If we label each level of the call tree starting with 0 at the top and going down to n − 1 at the bottom, then the number of function calls at
level i is equal to 2ⁱ.
Thus, the recursive solution for solving the Tower of Hanoi problem requires exponential time, O(2ⁿ),
in the worst case.
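Summing the levels gives the recurrence T(n) = 2·T(n − 1) + 1 with T(0) = 0, whose closed form is T(n) = 2ⁿ − 1 disk moves. A small sketch checks that the recurrence and the closed form agree:

```python
def hanoi_moves(n):
    """Number of disk moves for n disks: T(n) = 2*T(n-1) + 1, with T(0) = 0."""
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1

# the closed form 2**n - 1 matches the recurrence
print(hanoi_moves(10))  # 1023
```

Since 2ⁿ − 1 grows as 2ⁿ, the total work is O(2ⁿ), as stated above.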