Introduction to Algorithm Analysis

An algorithm is a sequence of instructions designed to perform a specific task, characterized by finiteness, definiteness, and effectiveness. Algorithms can be classified into various paradigms, such as brute force and dynamic programming, and their performance is analyzed through time and space complexity using asymptotic notations like Big-O, Omega, and Theta. Understanding these concepts is essential for selecting efficient algorithms for problem-solving in mathematics and computer science.

Introduction to Algorithms

An algorithm is a finite sequence of well-defined instructions used to perform a specific task or solve a problem. In mathematics and computer science, algorithms serve as the foundation for computations and data processing, guiding systems from an initial state through a series of steps to achieve a desired outcome.

Key Characteristics of Algorithms:

•  Finiteness: Algorithms must have a clear starting point and terminate after a finite number of steps.
•  Definiteness: Each step of an algorithm should be precisely and unambiguously defined.
•  Input and Output: Algorithms take zero or more inputs and produce one or more outputs.
•  Effectiveness: The operations within an algorithm should be basic enough to be performed, in principle, by a human using only paper and pencil.
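
As a concrete illustration (a minimal Python sketch, not part of the original text), Euclid's algorithm for the greatest common divisor exhibits all four characteristics: it halts (finiteness), every step is exact (definiteness), it maps two inputs to one output, and each operation is simple arithmetic (effectiveness).

    def gcd(a, b):
        """Euclid's algorithm: greatest common divisor of two non-negative integers."""
        while b != 0:          # definiteness: each step is an exact, mechanical operation
            a, b = b, a % b    # the second argument strictly decreases, so the loop is finite
        return a               # output: one well-defined result for the given inputs

    print(gcd(48, 18))  # 6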

Classification of Algorithms:

Algorithms can be categorized based on their design paradigms, such as:

•  Brute Force: Systematically testing all possible solutions until the correct one is found.
•  Divide and Conquer: Breaking down a problem into smaller sub-problems, solving each recursively, and combining their solutions.
•  Dynamic Programming: Solving complex problems by breaking them down into simpler overlapping sub-problems and storing their solutions to avoid redundant computations (see the sketch after this list).
•  Greedy Algorithms: Making a series of choices by selecting the option that offers the most immediate benefit, aiming for a locally optimal solution.
•  Backtracking: Incrementally building candidates for solutions and abandoning them if they fail to meet the problem's constraints.
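
To make the dynamic-programming idea concrete, here is a minimal Python sketch (my own illustration, not from the original text) that computes Fibonacci numbers by storing the solutions of overlapping sub-problems, so each sub-problem is solved only once:

    from functools import lru_cache

    @lru_cache(maxsize=None)      # memoization: cache each sub-problem's answer
    def fib(n):
        if n < 2:                 # base cases: fib(0) = 0, fib(1) = 1
            return n
        return fib(n - 1) + fib(n - 2)  # overlapping sub-problems, computed once each

    print(fib(50))  # 12586269025, computed in linear rather than exponential time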

Algorithm Specifications

Algorithm specifications define the precise behavior and characteristics of algorithms, ensuring they perform as intended. A comprehensive algorithm specification typically includes:

1. Input and Output Definitions:
   o Inputs: Clearly describe the data types, constraints, and valid ranges of all inputs.
   o Outputs: Specify the expected results, including data types and possible value ranges.
2. Initial Conditions:
   o Outline the state of the system or environment before the algorithm executes, including any assumptions about input validity.
3. Detailed Operational Steps:
   o Provide a step-by-step breakdown of the algorithm's logic, ensuring each action is unambiguous and executable.
   o Use formal languages or pseudocode to represent operations clearly.
4. Termination Criteria:
   o Define conditions under which the algorithm will conclude, ensuring it halts after a finite number of steps.
5. Error Handling and Edge Cases:
   o Describe how the algorithm addresses invalid inputs, exceptions, or unusual scenarios.
6. Performance Metrics:
   o Specify time and space complexity requirements, guiding efficient resource utilization.
7. Correctness Proofs:
   o Provide formal proofs or reasoning that the algorithm meets its specifications, ensuring reliability.
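
As an illustration of how such a specification can be attached directly to an implementation (a hypothetical Python sketch; the original text prescribes no particular notation), here is a binary search function whose docstring records the inputs, initial conditions, termination criterion, error handling, and performance metrics:

    def binary_search(arr, x):
        """Return an index i with arr[i] == x, or -1 if x is not in arr.

        Inputs: arr, a list of numbers sorted in ascending order; x, a number.
        Initial condition: arr is assumed sorted (not re-checked here).
        Termination: the interval [lo, hi] shrinks every iteration, so the loop halts.
        Errors/edge cases: an empty list is valid and yields -1.
        Performance: O(log n) time, O(1) extra space.
        """
        lo, hi = 0, len(arr) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if arr[mid] == x:
                return mid
            elif arr[mid] < x:
                lo = mid + 1      # discard the lower half
            else:
                hi = mid - 1      # discard the upper half
        return -1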

Performance Analysis

Performance analysis of algorithms involves evaluating their efficiency and effectiveness in solving problems. This assessment is crucial for selecting the most appropriate algorithm for a given application, especially when dealing with large datasets or real-time processing requirements.

Key Aspects of Performance Analysis:

1. Time Complexity:
   o Quantifies the amount of time an algorithm takes to complete as a function of the size of its input.
   o Commonly expressed using Big-O notation to classify algorithms according to their worst-case or upper-bound performance.
2. Space Complexity:
   o Measures the amount of memory an algorithm uses relative to the size of the input.
   o Important for applications with limited memory resources.
3. Smoothed Analysis:
   o Combines aspects of worst-case and average-case analyses by evaluating an algorithm's performance under slight random perturbations of worst-case inputs.
   o Provides a more realistic assessment of an algorithm's practical performance.
4. Empirical Analysis:
   o Involves implementing algorithms and testing them under various conditions to observe their actual performance.
   o Helps identify practical issues and areas for optimization that theoretical analyses might not reveal.
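
A minimal sketch of empirical analysis (illustrative only, not from the original text), using Python's standard timeit module to observe how running time grows with input size:

    import timeit

    for n in (1_000, 10_000, 100_000):
        data = list(range(n))
        # Time a linear-time operation (membership test on the last element)
        # at several input sizes to observe the growth trend empirically.
        t = timeit.timeit(lambda: (n - 1) in data, number=100)
        print(f"n = {n:>7}: {t:.4f} s for 100 runs")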
Case Study on the Analysis of an Algorithm

1. Worst Case Analysis (Mostly used)
•  In the worst-case analysis, we calculate the upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed.
•  For Linear Search, the worst case happens when the element to be searched (x) is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one (a runnable sketch of linear search follows this case study).
•  This is the most commonly used analysis of algorithms (we discuss why below). Most of the time, we consider the case that causes the maximum number of operations.
2. Best Case Analysis (Very rarely used)
•  In the best-case analysis, we calculate the lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed.
•  For linear search, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the order of growth of the time taken in terms of input size is constant.
3. Average Case Analysis (Rarely used)
•  In average case analysis, we take all possible inputs, calculate the computing time for each of them, sum all the calculated values, and divide the sum by the total number of inputs.
•  We must know (or predict) the distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed (including the case of x not being present in the array). So we sum all the cases and divide the sum by (n+1); we take (n+1) to account for the case when the element is not present.
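
Here is the linear search case study as a minimal Python sketch (illustrative, not from the original text). Under the uniform assumption above, the average number of comparisons is (1 + 2 + ... + n + n) / (n+1), which is roughly n/2 + 1, i.e. linear in n.

    def search(arr, x):
        """Linear search: return the index of x in arr, or -1 if absent."""
        for i, value in enumerate(arr):
            if value == x:
                return i    # best case: x at index 0, one comparison, constant time
        return -1           # worst case: x absent, n comparisons, linear time

    arr = [10, 20, 30, 40]
    print(search(arr, 10))  # best case: index 0 after one comparison
    print(search(arr, 99))  # worst case: -1 after scanning all elements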

Why is Worst Case Analysis Mostly Used?

•  Average Case: Average case analysis is not easy to do in most practical cases, and it is rarely done. It requires us to consider every input, its frequency, and the time taken by it, which may not be possible in many scenarios.
•  Best Case: Best case analysis is considered of little use. Guaranteeing a lower bound on an algorithm's running time provides no useful assurance, since in the worst case the algorithm may still take years to run.
•  Worst Case: This is easier than average case analysis and gives an upper bound, which is useful information for analyzing software products.

Interesting information about asymptotic notations:

A) For some algorithms, all the cases (worst, best, average) are asymptotically the same, i.e., there are no separate worst and best cases.
•  Example: Merge Sort performs on the order of n log(n) operations in all cases.
B) Whereas most other sorting algorithms have distinct worst and best cases.
•  Example 1: In the typical implementation of Quick Sort (where the pivot is chosen as a corner element), the worst case occurs when the input array is already sorted, and the best case occurs when the pivot elements always divide the array into two halves.
•  Example 2: For insertion sort, the worst case occurs when the array is reverse sorted, and the best case occurs when the array is already sorted in the same order as the output.
Types of Asymptotic Notations in Complexity Analysis of Algorithms

We have discussed asymptotic analysis, and the worst, average, and best cases of algorithms. The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that does not depend on machine-specific constants and does not require algorithms to be implemented or the running times of programs to be compared. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis.

Asymptotic Notations:
•  Asymptotic notations are mathematical tools used to analyze the performance of algorithms by understanding how their efficiency changes as the input size grows.
•  They provide a concise way to express the behavior of an algorithm's time or space complexity as the input size approaches infinity.
•  Rather than comparing algorithms directly, asymptotic analysis focuses on the relative growth rates of algorithms' complexities, abstracting away machine-specific constants and implementation details to focus on fundamental trends.
•  By using asymptotic notations such as Big O, Big Omega, and Big Theta, we can categorize algorithms based on their worst-case, best-case, or average-case time or space complexities, providing valuable insights into their efficiency.

There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)

1. Theta Notation (Θ-Notation):

Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is often used when stating the average-case complexity of an algorithm.

Theta (Average Case): you add the running times for each possible input combination and take the average.

Let f and g be functions from the set of natural numbers to itself. The function f is said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.

[Figure: Theta notation]

Mathematical Representation of Theta notation:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0 }

Note: Θ(g) is a set.

The above expression can be read as: if f(n) is theta of g(n), then the value of f(n) is always between c1*g(n) and c2*g(n) for large values of n (n ≥ n0). The definition of theta also requires that f(n) must be non-negative for values of n greater than n0. Thus g(n) serves as both a lower and an upper bound on the algorithm's time complexity.

A simple way to get the Theta notation of an expression is to drop low-order terms and ignore leading constants. For example, consider the expression 3n^3 + 6n^2 + 6000 = Θ(n^3): dropping lower-order terms is always fine because there will always be a number n0 after which n^3 has higher values than n^2, irrespective of the constants involved. For a given function g(n), we denote by Θ(g(n)) the following set of functions.

Examples:
{ 100, log(2000), 10^4 } belongs to Θ(1)
{ (n/4), (2n+3), (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n), (2n^2), (n^2+log(n)) } belongs to Θ(n^2)

Note: Θ provides exact bounds.
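
As a worked illustration (my own example, not from the original text): for f(n) = 3n^2 + 6n, choosing c1 = 3, c2 = 4, and n0 = 6 gives 3n^2 ≤ 3n^2 + 6n ≤ 4n^2 for all n ≥ 6, so f(n) = Θ(n^2). The small Python sketch below spot-checks these constants:

    f = lambda n: 3*n**2 + 6*n
    g = lambda n: n**2
    c1, c2, n0 = 3, 4, 6

    # Verify c1*g(n) <= f(n) <= c2*g(n) over a range of n >= n0.
    assert all(c1*g(n) <= f(n) <= c2*g(n) for n in range(n0, 10_000))
    print("3n^2 + 6n is Theta(n^2) with c1=3, c2=4, n0=6 (spot-checked)")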

2. Big-O Notation (O-notation):

Big-O notation represents an upper bound on the running time of an algorithm. Therefore, it gives the worst-case complexity of an algorithm.

•  It is the most widely used notation for asymptotic analysis.
•  It specifies an upper bound on a function: the maximum time required by an algorithm, or the worst-case time complexity.
•  It returns the highest possible output value (big-O) for a given input.
•  Big-O (Worst Case): it is defined as the condition that allows an algorithm to complete statement execution in the longest amount of time possible.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist a positive constant c and a natural number n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0. Here g(n) serves as an upper bound on the algorithm's time complexity.

Mathematical Representation of Big-O Notation:

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0 }

For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2). Note: O(n^2) also covers linear time.

If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:
•  The worst-case time complexity of Insertion Sort is Θ(n^2).
•  The best-case time complexity of Insertion Sort is Θ(n).

Big-O notation is useful when we only have an upper bound on the time complexity of an algorithm; many times, we can easily find an upper bound simply by looking at the algorithm.
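
A minimal insertion sort sketch in Python (illustrative, not from the original text) that makes the two bounds above concrete: the inner while loop never runs on already-sorted input (best case Θ(n)), and runs i times at step i on reverse-sorted input (worst case Θ(n^2)):

    def insertion_sort(arr):
        """Sort arr in place; O(n^2) worst case, Omega(n) best case."""
        for i in range(1, len(arr)):
            key = arr[i]
            j = i - 1
            # Shift larger elements right; on sorted input this loop body
            # never executes, on reverse-sorted input it executes i times.
            while j >= 0 and arr[j] > key:
                arr[j + 1] = arr[j]
                j -= 1
            arr[j + 1] = key
        return arr

    print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]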

Examples:
{ 100, log(2000), 10^4 } belongs to O(1)
U { (n/4), (2n+3), (n/100 + log(n)) } belongs to O(n)
U { (n^2+n), (2n^2), (n^2+log(n)) } belongs to O(n^2)

Note: Here, U represents set union; we can write it this way because O provides exact or upper bounds.

3. Omega Notation (Ω-Notation):

Omega notation represents a lower bound on the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm: g(n) serves as a lower bound on the algorithm's time complexity. It is defined as the condition that allows an algorithm to complete statement execution in the shortest amount of time.

Let f and g be functions from the set of natural numbers to itself. The function f is said to be Ω(g) if there is a constant c > 0 and a natural number n0 such that c*g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation:

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0 }

Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but this is not very useful information about Insertion Sort, as we are generally interested in the worst case and sometimes in the average case.

Examples:
{ (n^2+n), (2n^2), (n^2+log(n)) } belongs to Ω(n^2)
U { (n/4), (2n+3), (n/100 + log(n)) } belongs to Ω(n)
U { 100, log(2000), 10^4 } belongs to Ω(1)

Note: Here, U represents set union; we can write it this way because Ω provides exact or lower bounds.
