Algorithm
What is an Algorithm?
In computer programming terms, an algorithm is a set of well-defined instructions to
solve a particular problem. It takes a set of input(s) and produces the desired output.
For example, an algorithm to add two numbers:
Step 1: Start
Step 2: Declare variables num1, num2 and sum.
Step 3: Read the values of num1 and num2.
Step 4: Add num1 and num2 and assign the result to sum.
sum ← num1 + num2
Step 5: Display sum.
Step 6: Stop
Time Complexity
What is Time Complexity?
Time complexity is defined as the amount of time taken by an algorithm to run, as a
function of the length of the input.
Measurement of Complexity of an Algorithm
Based on the three asymptotic notations of time complexity (defined below), there are
three cases to analyze an algorithm:
1. Worst Case Analysis (Mostly used)
In the worst-case analysis, we calculate the upper bound on the running time of an
algorithm. We must know the case that causes a maximum number of operations to be
executed. For Linear Search, the worst case happens when the element to be searched
(x) is not present in the array. When x is not present, the search() function compares it
with all the elements of arr[] one by one. Therefore, the worst-case time complexity of the
linear search would be O(n).
2. Best Case Analysis (Very Rarely used)
In the best-case analysis, we calculate the lower bound on the running time of an
algorithm. We must know the case that causes a minimum number of operations to be
executed. In the linear search problem, the best case occurs when x is present at the first
location. The number of operations in the best case is constant (not dependent on n). So
time complexity in the best case would be Ω(1).
3. Average Case Analysis (Rarely used)
In average case analysis, we take all possible inputs and calculate the computing time for
all of the inputs. Sum all the calculated values and divide the sum by the total number of
inputs. We must know (or predict) the distribution of cases. For the linear search problem,
let us assume that all cases are uniformly distributed (including the case of x not being
present in the array). So we sum all the cases and divide the sum by (n+1). The
average-case time complexity is then:
Average-case time = ( Σ_{i=1}^{n+1} Θ(i) ) / (n+1) = Θ( (n+1)(n+2) / (2(n+1)) ) = Θ(n)
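For reference, here is a minimal C++ linear search in the spirit of the search() function discussed above (the exact signature is an assumption):
C++
#include <iostream>
using namespace std;

// Returns the index of x in arr[0..n-1], or -1 if x is not present.
// Worst case O(n) (x absent), best case O(1) (x at index 0), average case O(n).
int search(int arr[], int n, int x)
{
    for (int i = 0; i < n; i++) {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

int main()
{
    int arr[] = {10, 20, 30, 40};
    cout << search(arr, 4, 30); // prints 2
    return 0;
}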
Asymptotic Notations:
Asymptotic Notations are mathematical tools that allow you to analyze an algorithm’s
running time by identifying its behavior as its input size grows.
There are mainly three asymptotic notations:
1. Theta Notation (Θ-notation)
2. Big-O Notation (O-notation)
3. Omega Notation (Ω-notation)
1. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it represents the upper
and the lower bound of the running time of an algorithm, it is used for analyzing
the average-case complexity of an algorithm.
Mathematical Representation of Theta notation:
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤
c2 * g(n) for all n ≥ n0 }
Note: Θ(g(n)) is a set of functions.
2. Big-O Notation (O-notation):
Big-O notation represents the upper bound of the running time of an algorithm. Therefore,
it gives the worst-case complexity of an algorithm.
o It is the most widely used notation for asymptotic analysis.
o It specifies the upper bound of a function: the maximum time required by an
algorithm, i.e. the worst-case time complexity.
o It gives the highest possible growth of the running time for a given input size.
o Big-O (worst case) is defined by the condition that allows an algorithm to complete
statement execution in the longest amount of time possible.
Mathematical Representation of Big-O Notation:
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥
n0 }
3. Omega Notation (Ω-Notation):
Omega notation represents the lower bound of the running time of an algorithm. Thus, it
provides the best case complexity of an algorithm.
The execution time in this case serves as a lower bound on the algorithm’s time
complexity: it is defined by the condition that allows an algorithm to complete
statement execution in the shortest amount of time.
Mathematical Representation of Omega notation :
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥
n0 }
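For example, take f(n) = 3n + 2. With c = 4 and n0 = 2 we get 0 ≤ 3n + 2 ≤ 4n for all n ≥ 2, so f(n) = O(n); with c = 3 we get 3n ≤ 3n + 2 for all n ≥ 1, so f(n) = Ω(n). Since both bounds hold, f(n) = Θ(n) as well.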
Example 1:
C++
#include <iostream>
using namespace std;

int main()
{
    cout << "Hello World";
    return 0;
}
Time Complexity: In the above code, “Hello World” is printed only once on the screen.
So, the time complexity is constant: O(1), i.e. a constant amount of time is required to
execute the code every time, no matter which operating system or machine
configuration you are using.
Example 2:
C++
#include <iostream>
using namespace std;

int main()
{
    int i, n = 8;
    for (i = 1; i <= n; i++) {
        cout << "Hello World !!!\n"; // printed n times
    }
    return 0;
}
Time Complexity: In the above code, “Hello World !!!” is printed n times on the screen,
and the value of n can change.
So, the time complexity is linear: O(n), i.e. the time required to execute the code grows
linearly with n.
Example 3:
C++
#include <iostream>
using namespace std;

int main()
{
    int i, n = 8;
    // i doubles each iteration: 1, 2, 4, 8, ...
    for (i = 1; i <= n; i = i * 2) {
        cout << "Hello World !!!\n";
    }
    return 0;
}
Time Complexity: In the above code, i doubles on every iteration, so the loop body runs
only about log2(n) times.
So, the time complexity is logarithmic: O(log n).
Example 4
for(i= 0 ; i < n; i++){
cout<< i << " " ;
i++;
}
The loop runs up to n, but i is incremented twice per iteration (once in the loop header
and once in the body), so the body executes about n/2 times. The time complexity is
O(n/2), which is equivalent to O(n).
Example 5
for(i= 0 ; i < n; i++){
for(j = 0; j<n ;j++){
cout<< i << " ";
}
}
The inner loop and the outer loop both execute n times. So for a single value of i, j loops
n times; for n values of i, j loops a total of n*n = n² times. So the time complexity is
O(n²).
Example 6
int i = n;
while(i){
cout << i << " ";
i = i/2;
}
In this case, after each iteration the value of i is halved. So the series will be:
n, n/2, n/4, ..., 1, which has about log2(n) terms. So the time complexity is O(log n).
Example 7
if(i > j ){
j>23 ? cout<<j : cout<<i;
}
There are two conditional statements in the code. Each conditional statement has time
complexity O(1); for the two of them it is O(2), which is equivalent to O(1), i.e. constant.
Example 8
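The snippet for this example is a sketch, assuming an inner loop whose counter doubles on each pass:
for(i = 0; i < n; i++){
    for(j = 1; j < n; j = j * 2){
        cout << i << " ";
    }
}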
The inner loop executes (log n) times while the outer loop executes n times. So for a
single value of i, j executes (log n) times; for n values of i, the total is n * (log n) = n log n
iterations. So the time complexity is O(n log n).
Now, in this section, I will take you through different types of time complexities with
implementations in the C++ programming language.
Linear: O(n):
int n;
cin >> n;
int a = 0;
for (int i = 0; i < n; i++) {
    a = a + 1; // runs n times
}
Quadratic: O(n²):
int n;
cin >> n;
int a = 0;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        a = a + 1; // runs n * n times
    }
}
Linear: O(n + m):
int n, m;
cin >> n >> m;
int a = 0;
for (int i = 0; i < n; i++) {
    a = a + 1; // runs n times
}
for (int j = 0; j < m; j++) {
    a = a + 1; // runs m times
}
Quadratic: O(n * m):
int n, m;
cin >> n >> m;
int a = 0;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        a = a + rand(); // runs n * m times
    }
}
Logarithmic: O(log n):
int n;
cin >> n;
int a = 0, i = n;
while (i > 0) {
    a = a + 1;
    i /= 2; // i is halved each pass, so the loop runs about log2(n) times
}
Space Complexity
Space complexity is the total amount of memory an algorithm uses as a function of the
length of the input, including the memory consumed by the call stack. For example, a
recursive call chain such as
add(4)
 -> add(3)
  -> add(2)
   -> add(1)
    -> add(0)
keeps n + 1 stack frames alive at its deepest point, so the recursion uses O(n) space.
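A minimal recursive add(n) that would produce this call chain (a sketch; the actual function body is assumed):
C++
#include <iostream>
using namespace std;

// Recursively computes 0 + 1 + ... + n.
// Each call waits on add(n - 1), so n + 1 frames are live at the
// deepest point of the recursion: O(n) auxiliary space.
int add(int n)
{
    if (n == 0)            // base case
        return 0;
    return n + add(n - 1); // recursive case
}

int main()
{
    cout << add(4); // prints 10
    return 0;
}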
2. Greedy Algorithm
A greedy algorithm builds a solution piece by piece, always making the choice that looks
best at the moment. Key terms in a greedy (optimization) problem:
1. Objective Function:
Defines the goal of the optimization problem.
2. Constraints:
Conditions or restrictions on the solution.
3. Feasible Solution:
A solution satisfying all constraints.
4. Optimal Solution:
The best solution according to the objective function.
The general steps of a greedy algorithm are:
1. Initialization:
Start with an empty or trivial solution.
2. Greedy Choice:
Make the locally optimal choice at each step.
3. Feasibility Check:
Ensure the choice satisfies all constraints.
4. Update Solution:
Incorporate the chosen element into the solution.
5. Termination:
Repeat steps 2-4 until a solution is complete or a termination condition is met.
6. Optimality Check:
Verify if the solution is optimal.
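As an illustration of these steps (a sketch, not taken from the text), consider greedy coin change for a canonical coin system such as {25, 10, 5, 1}, where the greedy choice is always the largest coin that still fits:
C++
#include <iostream>
#include <vector>
using namespace std;

// Greedy coin change: repeatedly take the largest coin <= amount.
// Optimal for canonical coin systems such as {25, 10, 5, 1}.
int minCoins(const vector<int>& coins, int amount) // coins sorted descending
{
    int count = 0;
    for (int c : coins) {
        while (amount >= c) { // greedy choice + feasibility check
            amount -= c;      // update the solution
            count++;
        }
    }
    return count;             // terminates once amount reaches 0
}

int main()
{
    vector<int> coins = {25, 10, 5, 1};
    cout << minCoins(coins, 63); // 25 + 25 + 10 + 1 + 1 + 1 -> prints 6
    return 0;
}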
3. Dynamic Programming:
Dynamic programming solves a problem by breaking it into overlapping sub-problems
and storing the result of each sub-problem so it is computed only once. Examples of
problems that can be solved using dynamic programming include the knapsack
problem, longest common subsequence, matrix chain multiplication, and many others.
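A small top-down sketch of this idea, using memoized Fibonacci (chosen here purely as an illustration):
C++
#include <iostream>
#include <vector>
using namespace std;

// Memoized Fibonacci: each sub-problem fib(i) is computed once and
// cached, turning the O(2^n) plain recursion into O(n).
long long fib(int n, vector<long long>& memo)
{
    if (n <= 1) return n;              // base cases fib(0), fib(1)
    if (memo[n] != -1) return memo[n]; // reuse the stored result
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

int main()
{
    int n = 40;
    vector<long long> memo(n + 1, -1);
    cout << fib(n, memo); // prints 102334155
    return 0;
}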
4. Backtracking:
Backtracking builds a solution incrementally and abandons (backtracks from) a partial
solution as soon as it cannot lead to a valid complete solution.
Applications of Backtracking
o N-queen problem
o Sum of subset problem
o Graph coloring
o Hamiltonian cycle
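A short backtracking sketch for the sum of subset problem listed above (illustrative; the details are assumed):
C++
#include <iostream>
using namespace std;

// Returns true if some subset of arr[i..n-1] sums to target.
// Each element is either included or skipped; a branch is abandoned
// (backtracked from) as soon as it can no longer succeed.
bool subsetSum(int arr[], int n, int i, int target)
{
    if (target == 0) return true;           // valid subset found
    if (i == n || target < 0) return false; // dead end: backtrack
    return subsetSum(arr, n, i + 1, target - arr[i]) // include arr[i]
        || subsetSum(arr, n, i + 1, target);         // skip arr[i]
}

int main()
{
    int arr[] = {3, 34, 4, 12, 5, 2};
    cout << (subsetSum(arr, 6, 0, 9) ? "yes" : "no"); // 4 + 5 = 9 -> yes
    return 0;
}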
5. Recursive Algorithm:
Here's a general structure of a recursive algorithm:
1. Base Case(s): Define one or more base cases that are simple enough to solve directly without
further recursion. The base case(s) prevent the algorithm from calling itself indefinitely.
2. Recursive Case(s): Define one or more cases where the function calls itself with a smaller input.
This is the step where the problem is broken down into smaller sub-problems.
3. Combine Results: If needed, combine the results obtained from the recursive calls to solve the
original problem.
4. Termination: Ensure that the recursive calls eventually reach the base case(s) so that the
recursion stops and the algorithm terminates.
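A factorial function is a minimal example of this structure, with steps 1-4 marked in the comments:
C++
#include <iostream>
using namespace std;

long long factorial(int n)
{
    if (n <= 1)                  // 1. base case: stops the recursion
        return 1;
    return n * factorial(n - 1); // 2. recursive case; 3. combine with *
}                                // 4. terminates because n shrinks toward 1

int main()
{
    cout << factorial(5); // prints 120
    return 0;
}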
Here are some brief applications of recursive algorithms:
1. Mathematical Calculations:
Factorials, Fibonacci sequence.
2. Sorting Algorithms:
Merge Sort, Quick Sort.
3. Tree and Graph Traversals:
Depth-First Search, Binary Tree Operations.
4. Dynamic Programming:
Memoization, Longest Common Subsequence.
5. Fractals:
Mandelbrot Set.
6. File System Operations:
Directory Tree Traversal.
7. Backtracking Algorithms:
N-Queens Problem.
8. Computer Graphics:
Turtle Graphics.
9. Language Processing:
Parsing Expressions.
10. Artificial Intelligence:
Search Algorithms.
6. Brute Force Algorithm:
A brute force algorithm solves a problem by systematically trying every possible
candidate and checking each one. Typical applications include:
1. String Matching:
Searching for a pattern in text by checking every possible substring.
2. Traveling Salesman Problem (TSP):
Finding the shortest tour that visits cities and returns, considering all possible orders.
3. Subset Sum Problem:
Discovering a subset with a target sum by trying all possible combinations.
4. Password Cracking:
Attempting all possible password combinations in a security attack.
5. Sudoku Solving:
Solving Sudoku puzzles by trying all possible number placements until a solution is
found.
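As a sketch of the first application, naive (brute force) string matching checks the pattern at every possible starting position in the text:
C++
#include <iostream>
#include <string>
using namespace std;

// Brute force search: try every alignment of pat against text.
// O(n * m) for text length n and pattern length m.
int bruteForceMatch(const string& text, const string& pat)
{
    int n = text.size(), m = pat.size();
    for (int i = 0; i + m <= n; i++) {
        int j = 0;
        while (j < m && text[i + j] == pat[j])
            j++;
        if (j == m)
            return i; // match found at index i
    }
    return -1;        // no match anywhere
}

int main()
{
    cout << bruteForceMatch("abracadabra", "cad"); // prints 4
    return 0;
}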
8. Randomized Algorithm
A randomized algorithm is an algorithm that uses randomness as part of its logic to solve
computational problems. Unlike deterministic algorithms, which produce the same output for a
given input every time they run, randomized algorithms introduce an element of randomness to
achieve certain goals, such as improving efficiency or achieving probabilistic correctness.
1. Examples:
Randomized Quicksort: Choosing a random pivot in the partitioning step of quicksort
(see the sketch after this list).
Randomized Prim's Algorithm: Randomly selecting the next edge to add in the
construction of a minimum spanning tree.
Randomized Rounding: Used in approximation algorithms to round fractional solutions
to integer solutions randomly.
2. Parallel and Distributed Computing:
Randomized algorithms are useful in parallel and distributed computing for load
balancing and achieving efficient coordination among distributed entities.
3. Cryptography:
Randomized algorithms play a role in certain cryptographic protocols, such as
generating random keys and ensuring unpredictability.
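A minimal sketch of the randomized pivot choice mentioned in the examples above (randomized quicksort with a Lomuto-style partition):
C++
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <utility>
using namespace std;

// Lomuto partition with a randomly chosen pivot: the random choice makes
// the O(n^2) worst case unlikely for any fixed input; expected O(n log n).
int randPartition(int arr[], int lo, int hi)
{
    int r = lo + rand() % (hi - lo + 1); // pick a random pivot index
    swap(arr[r], arr[hi]);               // move the pivot to the end
    int pivot = arr[hi], i = lo;
    for (int j = lo; j < hi; j++)
        if (arr[j] < pivot)
            swap(arr[i++], arr[j]);
    swap(arr[i], arr[hi]);               // put the pivot in its final place
    return i;
}

void quicksort(int arr[], int lo, int hi)
{
    if (lo >= hi) return;
    int p = randPartition(arr, lo, hi);
    quicksort(arr, lo, p - 1);
    quicksort(arr, p + 1, hi);
}

int main()
{
    srand(time(0));
    int arr[] = {5, 2, 9, 1, 7};
    quicksort(arr, 0, 4);
    for (int x : arr) cout << x << " "; // prints 1 2 5 7 9
    return 0;
}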