
Bismillahir Rahmanir Rahim (In the name of Allah, the Most Gracious, the Most Merciful)

ALGORITHM

What is an Algorithm?
In computer programming terms, an algorithm is a set of well-defined instructions to
solve a particular problem. It takes a set of input(s) and produces the desired output.
For example,

An algorithm to add two numbers:

Algorithm 1: Add two numbers entered by the user

Step 1: Start

Step 2: Declare variables num1, num2 and sum.

Step 3: Read values num1 and num2.

Step 4: Add num1 and num2 and assign the result to sum.

sum←num1+num2

Step 5: Display sum

Step 6: Stop

Uses of Algorithms:


Algorithms play a crucial role in various fields and have many applications. Some of the
key areas where algorithms are used include:
1. Computer Science: Algorithms form the basis of computer programming and are used
to solve problems ranging from simple sorting and searching to complex tasks such as
artificial intelligence and machine learning.
2. Mathematics: Algorithms are used to solve mathematical problems, such as finding
the optimal solution to a system of linear equations or finding the shortest path in a
graph.
3. Operations Research: Algorithms are used to optimize and make decisions in fields
such as transportation, logistics, and resource allocation.
4. Artificial Intelligence: Algorithms are the foundation of artificial intelligence and
machine learning, and are used to develop intelligent systems that can perform tasks
such as image recognition, natural language processing, and decision-making.
5. Data Science: Algorithms are used to analyze, process, and extract insights from
large amounts of data in fields such as marketing, finance, and healthcare.
What is the need for algorithms?
1. Algorithms are necessary for solving complex problems efficiently and effectively.
2. They help to automate processes and make them more reliable, faster, and easier to
perform.
3. Algorithms also enable computers to perform tasks that would be difficult or impossible
for humans to do manually.
4. They are used in various fields such as mathematics, computer science, engineering,
finance, and many others to optimize processes, analyze data, make predictions, and
provide solutions to problems.

Time Complexity
What is Time complexity?

Time complexity is defined as the amount of time taken by an algorithm to run, as a function of the length of the input.
Measurement of Complexity of an Algorithm
There are three cases to analyze the time complexity of an algorithm:
1. Worst Case Analysis (Mostly used)
In the worst-case analysis, we calculate the upper bound on the running time of an
algorithm. We must know the case that causes a maximum number of operations to be
executed. For Linear Search, the worst case happens when the element to be searched
(x) is not present in the array. When x is not present, the search() function compares it
with all the elements of arr[] one by one. Therefore, the worst-case time complexity of the
linear search would be O(n).
2. Best Case Analysis (Very Rarely used)
In the best-case analysis, we calculate the lower bound on the running time of an
algorithm. We must know the case that causes a minimum number of operations to be
executed. In the linear search problem, the best case occurs when x is present at the first
location. The number of operations in the best case is constant (not dependent on n). So
time complexity in the best case would be Ω(1)
3. Average Case Analysis (Rarely used)
In average case analysis, we take all possible inputs and calculate the computing time for
all of the inputs. Sum all the calculated values and divide the sum by the total number of
inputs. We must know (or predict) the distribution of cases. For the linear search problem,
let us assume that all cases are uniformly distributed (including the case of x not being
present in the array). So we sum all the cases and divide the sum by (n+1):

Average-case time = ( Σ_{i=1}^{n+1} θ(i) ) / (n+1) = θ(n)

Asymptotic Notations:
Asymptotic notations are mathematical tools that allow you to analyze an algorithm's
running time by identifying its behavior as its input size grows.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it represents the upper
and the lower bound of the running time of an algorithm, it is used for analyzing
the average-case complexity of an algorithm.
Mathematical Representation of Theta notation:
Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤
c2 * g(n) for all n ≥ n0}
Note: Θ(g) is a set
2. Big-O Notation (O-notation):
Big-O notation represents the upper bound of the running time of an algorithm. Therefore,
it gives the worst-case complexity of an algorithm.
 It is the most widely used notation for asymptotic analysis.
 It specifies the upper bound of a function.
 It gives the maximum time required by an algorithm, i.e. the worst-case time complexity.
 It returns the highest possible growth rate (big-O) for a given input.
 Big-O (worst case) is defined as the condition that allows an algorithm to complete statement execution in the longest amount of time possible.
Mathematical Representation of Big-O Notation:
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥
n0 }
3. Omega Notation (Ω-Notation):
Omega notation represents the lower bound of the running time of an algorithm. Thus, it
provides the best case complexity of an algorithm.
The execution time serves as a lower bound on the algorithm’s time complexity.
It is defined as the condition that allows an algorithm to complete statement
execution in the shortest amount of time.
Mathematical Representation of Omega notation :
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥
n0 }

Understanding Time Complexity with Simple Examples

Example 1:
#include <iostream>
using namespace std;

int main()
{
    cout << "Hello World";
    return 0;
}
Time Complexity: In the above code “Hello World” is printed only once on the screen.
So, the time complexity is constant: O(1), i.e. a constant amount of time is required to
execute the code, no matter which operating system or machine configuration you are
using.

Example 2:
C++

#include <iostream>

using namespace std;

int main()
{
    int i, n = 8;

    for (i = 1; i <= n; i++) {
        cout << "Hello World !!!\n";
    }

    return 0;
}
Time Complexity: In the above code “Hello World !!!” is printed n times on the
screen, and the value of n can change.
So, the time complexity is linear: O(n), i.e. the time required to execute the code grows
linearly with n.

Example 3:
C++

#include <iostream>

using namespace std;

int main()
{
    int i, n = 8;
    for (i = 1; i <= n; i = i * 2) {
        cout << "Hello World !!!\n";
    }
    return 0;
}

Time Complexity: O(log2(n)), since i doubles on every iteration, so the loop runs about log2(n) times.

Example 4
for(i = 0; i < n; i++){
    cout << i << " ";
    i++;
}

The loop runs up to n, but i is incremented twice per iteration, which halves the number of
iterations. So the time complexity is O(n/2), which is equivalent to O(n).

Example 5
for(i = 0; i < n; i++){
    for(j = 0; j < n; j++){
        cout << i << " ";
    }
}

The inner loop and the outer loop both execute n times. So for a single value of i, j loops
n times; for n values of i, j loops a total of n*n = n² times. So the time complexity is
O(n²).

Example 6
int i = n;
while(i){
    cout << i << " ";
    i = i/2;
}
In this case, after each iteration the value of i is turned into half of its previous value. So
the series will be: n, n/2, n/4, ..., 1. So the time complexity is O(log n).

Example 7

Find the time complexity of the following code snippets

if(i > j){
    j > 23 ? cout << j : cout << i;
}

There are two conditional statements in the code. Each conditional statement has time
complexity O(1); for the two of them it is O(2), which is equivalent to O(1), i.e. constant.

Example 8

Find the time complexity of the following code snippets

for(i = 0; i < n; i++){
    for(j = 1; j < n; j = j*2){
        cout << i << " ";
    }
}

The inner loop executes (log n) times while the outer loop executes n times. So for a single
value of i, j executes (log n) times; for n values of i, j loops a total of n*(log n) = (n log n)
times. So the time complexity is O(n log n).

Now, in this section, I will take you through different types of Time
Complexities with the implementation of the C++ programming
language.

Linear: O(n):

int n;
cin >> n;

int a = 0;

for (int i = 1; i <= n; i++){
    a = a + 1;
}

Quadratic: O(n²):

int n;
cin >> n;

int a = 0;

for (int i = 1; i <= n; i++){
    for (int j = 1; j <= n; j++){
        a = a + 1;
    }
}

Linear Time O(n+m):

int n, m;
cin >> n >> m;

int a = 0;

for (int i = 1; i <= n; i++){
    a = a + 1;
}

for (int j = 1; j <= m; j++){
    a = a + 1;
}

Time Complexity O(n*m):

int n, m;
cin >> n >> m;

int a = 0;

for (int i = 1; i <= n; i++){
    for (int j = 1; j <= m; j++){
        a = a + rand();
    }
}

Logarithmic Time O(log n):

int n;
cin >> n;

int a = 0, i = n;

while (i >= 1){
    a = a + 1;
    i /= 2;
}

Space Complexity

Space complexity of an algorithm represents the amount of memory space required by
the algorithm in its life cycle. The space required by an algorithm is equal to the sum of
the following two components:
 A fixed part: the space required to store certain data and variables that are
independent of the size of the problem. For example, simple variables and
constants used, program size, etc.
 A variable part: the space required by variables whose size depends on the size
of the problem. For example, dynamic memory allocation, recursion stack space,
etc.
Example:

int add (int n){
    if (n <= 0){
        return 0;
    }
    return n + add (n-1);
}

Here each call adds a level to the stack:

add(4)
-> add(3)
   -> add(2)
      -> add(1)
         -> add(0)

Each of these calls is added to the call stack and takes up actual memory.
So it takes O(n) space.
Many kinds of Algorithms

1. Divide and Conquer Algorithm


A divide and conquer algorithm is a strategy for solving a large problem by

1. breaking the problem into smaller, independent sub-problems,

2. solving the sub-problems, and

3. combining their solutions to get the desired output.

Divide and Conquer Applications


1. Binary Search
2. Merge Sort
3. Quick Sort
4. Calculate pow(x, n)
5. Karatsuba algorithm for fast multiplication
6. Strassen’s Matrix Multiplication
7. Convex Hull (Simple Divide and Conquer Algorithm)
8. Quickhull Algorithm for Convex Hull

2. Greedy Algorithm

A greedy algorithm solves an optimization problem by repeatedly making the locally optimal choice. Key terms:
1. Objective Function:
 Defines the goal of the optimization problem.
2. Constraints:
 Conditions or restrictions on the solution.
3. Feasible Solution:
 A solution satisfying all constraints.
4. Optimal Solution:
 The best solution according to the objective function.

Greedy Algorithm Steps:

1. Initialization:
 Start with an empty or trivial solution.
2. Greedy Choice:
 Make the locally optimal choice at each step.
3. Feasibility Check:
 Ensure the choice satisfies all constraints.
4. Update Solution:
 Incorporate the chosen element into the solution.
5. Termination:
 Repeat steps 2-4 until a solution is complete or a termination condition is met.
6. Optimality Check:
 Verify if the solution is optimal.

Different Types of Greedy Algorithm


 Selection Sort
 Knapsack Problem
 Minimum Spanning Tree
 Single-Source Shortest Path Problem
 Job Scheduling Problem

 Prim's Minimal Spanning Tree Algorithm
 Kruskal's Minimal Spanning Tree Algorithm
 Dijkstra's Single-Source Shortest Path Algorithm
 Huffman Coding
 Ford-Fulkerson Algorithm
 Huffman Coding
 Ford-Fulkerson Algorithm
3. Dynamic Programming:
A dynamic programming algorithm is typically developed in the following steps:
1. Define the Problem: Clearly define the problem you want to solve and identify its optimal
substructure and overlapping subproblems.
2. Formulate a Recursive Solution: Express the solution to the problem in terms of solutions to
its smaller subproblems. This often involves defining a recursive relation.
3. Memoization (Top-Down): Implement the recursive solution using memoization. Memoization
involves storing the solutions to subproblems in a table (usually an array or a dictionary) and
checking this table before solving a subproblem to avoid redundant computations.
4. Tabulation (Bottom-Up): Alternatively, you can use a bottom-up approach, starting from the
simplest subproblems and iteratively building up to the original problem. This usually involves
creating a table and filling it in a systematic way.
5. Optimal Substructure: Ensure that the solution to the original problem can be constructed
from the solutions of its subproblems. This is the key property that allows dynamic
programming to work.
6. Analyzing Time Complexity: Analyze the time complexity of your dynamic programming
algorithm. Typically, dynamic programming solutions have a time complexity that is polynomial
in the size of the input, making them much more efficient than naive exponential-time
algorithms.

Examples of problems that can be solved using dynamic programming include the knapsack
problem, longest common subsequence, matrix chain multiplication, and many others.

4. Backtracking:
Applications of Backtracking
o N-queen problem
o Sum of subset problem
o Graph coloring
o Hamiltonian cycle

Here's a general overview of how backtracking works:

1. Define the Problem:


 Clearly define the problem you want to solve and identify the constraints.
 Define the structure of a potential solution.
2. Design a Recursive Function:
 Formulate a recursive function that explores possible solutions step by step.
 At each step, make a choice that contributes to building a potential solution.
3. Make Choices:
 At each level of recursion, make a choice that advances towards a potential solution.
 This involves trying out different options or values.
4. Check Constraints:
 Check if the current partial solution violates any constraints. If it does, backtrack and
undo the last choice.
 This step is crucial for ensuring that only valid solutions are explored.
5. Base Case:
 Define a base case that indicates when a complete, valid solution has been found.
 At the base case, you may collect the solution or perform any necessary actions.
6. Backtrack:
 If the current partial solution does not lead to a valid solution, backtrack to the previous
level of recursion and undo the last choice.
 This involves "undoing" the changes made in the current step and trying a different
option.
7. Explore All Possibilities:
 Repeat the process until all possibilities have been explored.
 The algorithm systematically explores the solution space, trying out different
combinations of choices.
8. Optimizations (if needed):
 Depending on the problem, you may implement optimizations to prune the search space
and improve efficiency.
 For example, you might avoid exploring branches that are guaranteed to lead to invalid
solutions.

5. Recursive Algorithm:
Here's a general structure of a recursive algorithm:

1. Base Case(s): Define one or more base cases that are simple enough to solve directly without
further recursion. The base case(s) prevent the algorithm from calling itself indefinitely.
2. Recursive Case(s): Define one or more cases where the function calls itself with a smaller input.
This is the step where the problem is broken down into smaller sub-problems.
3. Combine Results: If needed, combine the results obtained from the recursive calls to solve the
original problem.
4. Termination: Ensure that the recursive calls eventually reach the base case(s) so that the
recursion stops and the algorithm terminates.
Here are some brief applications of recursive algorithms:

1. Mathematical Calculations:
 Factorials, Fibonacci sequence.
2. Sorting Algorithms:
 Merge Sort, Quick Sort.
3. Tree and Graph Traversals:
 Depth-First Search, Binary Tree Operations.
4. Dynamic Programming:
 Memoization, Longest Common Subsequence.
5. Fractals:
 Mandelbrot Set.
6. File System Operations:
 Directory Tree Traversal.
7. Backtracking Algorithms:
 N-Queens Problem.
8. Computer Graphics:
 Turtle Graphics.
9. Language Processing:
 Parsing Expressions.
10. Artificial Intelligence:
 Search Algorithms.

6. Branch and Bound Algorithm:


Branch and bound (BB, B&B, or BnB) is a method for solving optimization problems by breaking them down
into smaller sub-problems and using a bounding function to eliminate sub-problems that cannot contain the
optimal solution. It is an algorithm design paradigm for discrete and combinatorial optimization problems, as
well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of
candidate solutions by means of state space search: the set of candidate solutions is thought of as forming
a rooted tree with the full set at the root. The algorithm explores branches of this tree, which represent subsets
of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against
upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better
solution than the best one found so far by the algorithm.

Example: NP-hard problems such as the 0/1 knapsack and the traveling salesman problem.

7. Brute Force Algorithm:


A brute force algorithm is a straightforward and exhaustive problem-solving approach that
explores all possible solutions to a problem, without any optimization or heuristics. It involves
systematically trying every possible option until a solution is found. While effective for small
problem instances, brute force algorithms become impractical for larger problems due to their
high time and resource complexity.

Here are examples of brute force algorithm applications:

1. String Matching:
 Searching for a pattern in text by checking every possible substring.
2. Traveling Salesman Problem (TSP):
 Finding the shortest tour that visits cities and returns, considering all possible orders.
3. Subset Sum Problem:
 Discovering a subset with a target sum by trying all possible combinations.
4. Password Cracking:
 Attempting all possible password combinations in a security attack.
5. Sudoku Solving:
 Solving Sudoku puzzles by trying all possible number placements until a solution is
found.

8. Randomized Algorithm
A randomized algorithm is an algorithm that uses randomness as part of its logic to solve
computational problems. Unlike deterministic algorithms, which produce the same output for a
given input every time they run, randomized algorithms introduce an element of randomness to
achieve certain goals, such as improving efficiency or achieving probabilistic correctness.

1. Examples:
 Randomized Quicksort: Choosing a random pivot in the partitioning step of quicksort.
 Randomized Prim's Algorithm: Randomly selecting the next edge to add in the
construction of a minimum spanning tree.
 Randomized Rounding: Used in approximation algorithms to round fractional solutions
to integer solutions randomly.
2. Parallel and Distributed Computing:
 Randomized algorithms are useful in parallel and distributed computing for load
balancing and achieving efficient coordination among distributed entities.
3. Cryptography:
 Randomized algorithms play a role in certain cryptographic protocols, such as
generating random keys and ensuring unpredictability.
