Introduction to Algorithm Analysis
An algorithm is a finite sequence of well-defined steps for solving a problem. Every algorithm is expected to satisfy the following properties:
Finiteness: Algorithms must have a clear starting point and terminate after a
finite number of steps.
Definiteness: Each step of an algorithm should be precisely and
unambiguously defined.
Input and Output: Algorithms take zero or more inputs and produce one or
more outputs.
Effectiveness: The operations within an algorithm should be basic enough
to be performed, in principle, by a human using only paper and pencil.
Classification of Algorithms:
Brute Force: Systematically testing all possible solutions until the correct
one is found.
Divide and Conquer: Breaking down a problem into smaller sub-problems,
solving each recursively, and combining their solutions (a sketch follows this list).
Dynamic Programming: Solving complex problems by breaking them
down into simpler overlapping sub-problems and storing their solutions to
avoid redundant computations.
Greedy Algorithms: Making a series of choices by selecting the option that
offers the most immediate benefit, aiming for a locally optimal solution.
Backtracking: Incrementally building candidates for solutions and
abandoning them if they fail to meet the problem's constraints.
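To make the divide-and-conquer strategy above concrete, here is a minimal sketch in C: binary search repeatedly halves the portion of a sorted array that can still contain the target. The function name binarySearch and the sample array are illustrative choices, not part of the text above.

    #include <stdio.h>

    /* Divide and conquer: search a sorted array by repeatedly
       halving the range that can still contain x. */
    int binarySearch(int arr[], int low, int high, int x)
    {
        if (low > high)
            return -1;                      /* sub-problem is empty: x absent */
        int mid = low + (high - low) / 2;   /* divide the range in half */
        if (arr[mid] == x)
            return mid;
        if (arr[mid] < x)                   /* conquer the half that may hold x */
            return binarySearch(arr, mid + 1, high, x);
        return binarySearch(arr, low, mid - 1, x);
    }

    int main(void)
    {
        int arr[] = { 2, 3, 5, 7, 11, 13 };
        printf("%d\n", binarySearch(arr, 0, 5, 11));  /* prints 4 */
        return 0;
    }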
Algorithm Specifications
Algorithm specifications define the precise behavior and characteristics of
algorithms, ensuring they perform as intended. A comprehensive algorithm
specification typically describes the inputs an algorithm accepts, the outputs it
produces, the exact steps it performs, and the conditions under which it is
guaranteed to work correctly.
Performance Analysis
Performance analysis of algorithms involves evaluating their efficiency and
effectiveness in solving problems. This assessment is crucial for selecting the most
appropriate algorithm for a given application, especially when dealing with large
datasets or real-time processing requirements.
1. Time Complexity:
o Quantifies the amount of time an algorithm takes to complete as a
function of the size of its input.
o Commonly expressed using Big-O notation to classify algorithms
according to their worst-case or upper-bound performance.
2. Space Complexity:
o Measures the amount of memory an algorithm uses relative to the size
of the input.
o Important for applications with limited memory resources.
3. Smoothed Analysis:
o Combines aspects of worst-case and average-case analyses by
evaluating an algorithm's performance under slight random
perturbations of worst-case inputs.
o Provides a more realistic assessment of an algorithm's practical
performance.
4. Empirical Analysis:
o Involves implementing algorithms and testing them under various
conditions to observe their actual performance.
o Helps identify practical issues and areas for optimization that
theoretical analyses might not reveal.
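As a small illustration of empirical analysis, the C sketch below times a simple loop using the standard clock() function from <time.h>; the loop body and the input size n are arbitrary stand-ins for whatever algorithm is under test.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const long n = 100000000L;    /* arbitrary input size for the test */
        volatile long sum = 0;        /* volatile so the loop is not optimized away */

        clock_t start = clock();
        for (long i = 0; i < n; i++)  /* stand-in for the algorithm being measured */
            sum += i;
        clock_t end = clock();

        printf("elapsed: %.3f s\n", (double)(end - start) / CLOCKS_PER_SEC);
        return 0;
    }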
Case Study on Analysis of Algorithm
1. Worst Case Analysis (Mostly used)
In the worst-case analysis, we calculate an upper bound on the running
time of an algorithm. We must know the case that causes the maximum
number of operations to be executed.
For Linear Search, the worst case happens when the element to be
searched (x) is not present in the array. When x is not present, the
search() function compares it with all the elements of arr[] one by one.
This is the most commonly used analysis of algorithms (we discuss why
below). Most of the time, we consider the case that causes the maximum
number of operations.
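For reference, here is a minimal sketch in C of the search() routine discussed above; the exact signature is an assumption, since the original code is not shown in the text.

    /* Linear search: returns the index of x in arr[], or -1 if x is absent. */
    int search(int arr[], int n, int x)
    {
        for (int i = 0; i < n; i++)
            if (arr[i] == x)
                return i;   /* best case: x at the first location, constant time */
        return -1;          /* worst case: all n elements compared */
    }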
2. Best Case Analysis (Very Rarely used)
In the best-case analysis, we calculate a lower bound on the running
time of an algorithm. We must know the case that causes the minimum
number of operations to be executed.
For linear search, the best case occurs when x is present at the first
location. The number of operations in the best case is constant (not
dependent on n). So the order of growth of time taken in terms of input
size is constant.
3. Average Case Analysis (Rarely used)
In average case analysis, we take all possible inputs, calculate the
computing time for each, sum the calculated values, and divide the sum
by the total number of inputs.
We must know (or predict) the distribution of cases. For the linear search
problem, let us assume that all cases are uniformly distributed (including
the case of x not being present in the array). So we sum the costs of all the
cases and divide the sum by (n+1); we take (n+1) to include the case when
the element is not present.
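As a quick worked calculation under the uniform-distribution assumption above: if x sits at position i (1 ≤ i ≤ n) the search makes i comparisons, and if x is absent it makes n comparisons, so

    average comparisons = (1 + 2 + ... + n + n) / (n + 1)
                        = (n(n+1)/2 + n) / (n + 1)
                        = n/2 + n/(n+1)

which grows as Θ(n), i.e. linearly in the input size.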
Average Case: The average case analysis is not easy to do in most practical
cases and is rarely done. It requires knowing (or predicting) every input, its
frequency, and the time it takes, which may not be possible in many scenarios.
Best Case: The best case analysis is of little practical use. Guaranteeing a
lower bound on an algorithm provides no real assurance, since in the worst
case the same algorithm may take years to run.
Worst Case: This is easier than average case analysis and gives an upper
bound, which is useful information for analyzing software products.
We have discussed asymptotic analysis, and the worst, average, and best cases of
algorithms. The main idea of asymptotic analysis is to measure the efficiency of
algorithms in a way that does not depend on machine-specific constants and does
not require algorithms to be implemented and their running times compared.
Asymptotic notations are the mathematical tools used to represent the time
complexity of algorithms in asymptotic analysis.
Asymptotic Notations:
Asymptotic Notations are mathematical tools used to analyze the
performance of algorithms by understanding how their efficiency changes
as the input size grows.
These notations provide a concise way to express the behavior of an
algorithm’s time or space complexity as the input size approaches infinity.
Rather than comparing algorithms directly, asymptotic analysis focuses
on understanding the relative growth rates of algorithms’ complexities.
It enables comparisons of algorithms’ efficiency by abstracting away
machine-specific constants and implementation details, focusing instead
on fundamental trends.
Asymptotic analysis allows for the comparison of algorithms’ space and
time complexities by examining their performance characteristics as the
input size varies.
By using asymptotic notations, such as Big O, Big Omega, and Big Theta,
we can categorize algorithms based on their worst-case, best-case, or
average-case time or space complexities, providing valuable insights into
their efficiency.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
Big-O Notation (O-notation)
If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist
a positive constant c and a value n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.
Big-O captures the highest possible growth of the running time for a given
input size, so g(n) serves as an upper bound on the algorithm's time
complexity.
Examples:
{ 100, log(2000), 10^4 } belongs to O(1)
{ n/4, 2n+3, n/100 + log(n) } belongs to O(n)
{ n^2+n, 2n^2, n^2+log(n) } belongs to O(n^2)
Note: These classes nest, O(1) ⊆ O(n) ⊆ O(n^2), because Big-O gives an upper
bound rather than an exact bound: any function bounded by a constant is also
bounded by n, and any function bounded by n is also bounded by n^2.
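To show how a Big-O bound is read off real code, consider the small C function below (a hypothetical example, not from the text): the nested loops perform n(n-1)/2 comparisons, so its running time is O(n^2).

    /* Counts pairs (i, j) with i < j and arr[i] > arr[j].
       The nested loops execute n(n-1)/2 comparisons in total,
       so the running time is O(n^2). */
    int countInversions(int arr[], int n)
    {
        int count = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (arr[i] > arr[j])
                    count++;
        return count;
    }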