What Is an Algorithm?
Definition of Algorithm
The word "algorithm" means "a set of finite rules or instructions to be followed in
calculations or other problem-solving operations", or "a procedure for solving a
mathematical problem in a finite number of steps that frequently involves recursive
operations". An algorithm, therefore, is a finite sequence of steps for solving a
particular problem.
Just as one would not follow arbitrary written instructions to cook a recipe, but only
a standard one, not every set of written programming instructions is an algorithm.
For a set of instructions to be an algorithm, it must have the following characteristics:
Clear and Unambiguous: The algorithm should be unambiguous. Each of
its steps should be clear in all aspects and must lead to only one meaning.
Well-Defined Inputs: If an algorithm takes inputs, they should be well-defined.
It may or may not take input.
Well-Defined Outputs: The algorithm must clearly define what output will
be yielded and it should be well-defined as well. It should produce at least 1
output.
Finiteness: The algorithm must be finite, i.e. it should terminate after a
finite amount of time.
Feasible: The algorithm must be simple, generic, and practical, such that it
can be executed with the available resources. It must not depend on some
future technology.
Language Independent: The Algorithm designed must be language-
independent, i.e. it must be just plain instructions that can be implemented
in any language, and yet the output will be the same, as expected.
Input: An algorithm takes zero or more inputs.
Output: An algorithm produces at least one output. Every instruction that
contains a fundamental operator must yield a well-defined result.
Definiteness: All instructions in an algorithm must be unambiguous,
precise, and easy to interpret. By referring to any of the instructions in an
algorithm one can clearly understand what is to be done. Every fundamental
operator in instruction must be defined without any ambiguity.
Finiteness: An algorithm must terminate after a finite number of steps in all
cases. Every instruction which contains a fundamental operator must
complete within a finite amount of time. Infinite loops or recursive
functions without base conditions do not possess finiteness.
Effectiveness: An algorithm must be developed by using very basic,
simple, and feasible operations so that one can trace it out by using just
paper and pencil.
Properties of Algorithm:
It should terminate after a finite time.
It should produce at least one output.
It should take zero or more input.
It should be deterministic, meaning it gives the same output for the same
input.
Every step in the algorithm must be effective i.e. every step should do some
work.
Types of Algorithms:
There are several types of algorithms available. Some important algorithms are:
1. Brute Force Algorithm:
It is the simplest approach to a problem. A brute force algorithm is the first approach
that comes to mind when we see a problem: simply try every possible candidate.
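As a minimal sketch (the function name is my own), a brute-force search tries every element in turn:

```python
def brute_force_search(items, target):
    """Check every candidate one by one until the target is found."""
    for index, value in enumerate(items):
        if value == target:
            return index  # found: report its position
    return -1  # every candidate tried without success


print(brute_force_search([4, 2, 7, 1], 7))  # 2
```

Brute force is rarely the fastest approach, but it is simple and always correct, which makes it a useful baseline.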
2. Recursive Algorithm:
A recursive algorithm is based on recursion: the problem is broken into smaller
sub-problems, and the same function calls itself again and again.
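A classic sketch of this idea is the factorial function, which reduces each call to a smaller instance of the same problem:

```python
def factorial(n):
    """n! computed recursively: the problem is reduced to a smaller sub-problem."""
    if n <= 1:                       # base condition stops the recursion
        return 1
    return n * factorial(n - 1)      # the function calls itself on the sub-part


print(factorial(5))  # 120
```

Note the base condition: without it the recursion would never terminate, violating the finiteness property described above.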
3. Backtracking Algorithm:
The backtracking algorithm builds a solution incrementally, searching among all
possible candidates. We keep extending the solution as long as it satisfies the given
criteria. Whenever a partial solution fails, we trace back to the failure point, build
the next candidate, and continue this process until we find a solution or all
possibilities have been examined.
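The trace-back step can be sketched with a small subset-sum solver (the function name and sample numbers are my own); note the `pop()` that undoes a failed choice:

```python
def subset_sum(nums, target, chosen=None, start=0):
    """Return one subset of nums that sums to target, or None if none exists."""
    chosen = [] if chosen is None else chosen
    if target == 0:
        return list(chosen)              # solution complete
    for i in range(start, len(nums)):
        if nums[i] <= target:
            chosen.append(nums[i])       # build the solution one step further
            found = subset_sum(nums, target - nums[i], chosen, i + 1)
            if found is not None:
                return found
            chosen.pop()                 # failure: trace back, try the next option
    return None                          # every branch failed


print(subset_sum([3, 5, 2, 8], 10))  # [3, 5, 2]
```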
4. Searching Algorithm:
Searching algorithms are the ones that are used for searching elements or groups of
elements from a particular data structure. They can be of different types based on their
approach or the data structure in which the element should be found.
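For a sorted list, binary search is a standard example of an approach-based searching algorithm; a minimal sketch:

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search range of a sorted list."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1   # target is larger: discard the left half
        else:
            hi = mid - 1   # target is smaller: discard the right half
    return -1


print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Because it discards half the remaining range each step, binary search needs only O(log n) comparisons, versus O(n) for a linear scan.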
5. Sorting Algorithm:
Sorting is arranging a group of data in a particular manner according to the
requirement. The algorithms which help in performing this function are called sorting
algorithms. Generally, sorting algorithms are used to sort groups of data in
increasing or decreasing order.
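Insertion sort is one simple example; this sketch (my own) sorts in increasing order:

```python
def insertion_sort(items):
    """Sort in increasing order by inserting each element into place."""
    result = list(items)                    # work on a copy
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:   # shift larger elements right
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key                 # insert key into its position
    return result


print(insertion_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```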
6. Hashing Algorithm:
Hashing algorithms work similarly to searching algorithms, but they use an index
computed from a key. In hashing, a key is assigned to specific data, and lookups go
directly to the key's location instead of scanning the whole structure.
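In Python, a dict is backed by a hash table, so it illustrates the idea directly (the keys and values here are invented for illustration):

```python
# A dict hashes each key to an internal index, so lookups take
# constant time on average instead of scanning every entry.
records = {}
records["emp_101"] = "Alice"   # key "emp_101" assigned to specific data
records["emp_102"] = "Bob"

print(records["emp_102"])      # Bob: located via the key's hash, no scan
```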
7. Divide and Conquer Algorithm:
This algorithm breaks a problem into sub-problems, solves each sub-problem, and
merges their solutions to get the final solution. It consists of the following three steps:
Divide
Solve
Combine
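Merge sort is the textbook instance of these three steps; a sketch:

```python
def merge_sort(items):
    """Divide the list, solve each half recursively, combine the sorted halves."""
    if len(items) <= 1:
        return list(items)                 # trivially solved
    mid = len(items) // 2
    left = merge_sort(items[:mid])         # Divide + Solve
    right = merge_sort(items[mid:])        # Divide + Solve
    merged = []                            # Combine: merge two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]


print(merge_sort([8, 3, 5, 1]))  # [1, 3, 5, 8]
```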
8. Greedy Algorithm:
In this type of algorithm, the solution is built part by part. At each step, the option
that gives the most immediate benefit is chosen as the next part of the solution,
without reconsidering earlier choices.
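A common illustration is making change by always taking the largest coin that fits (function name and denominations are my own; note that greedy change-making is only optimal for certain coin systems, such as the US one used here):

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Build the answer part by part, always taking the largest coin that fits."""
    used = []
    for coin in coins:                 # coins must be sorted in decreasing order
        while amount >= coin:          # the most immediately beneficial pick
            used.append(coin)
            amount -= coin
    return used


print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]
```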
9. Dynamic Programming Algorithm:
This algorithm reuses already-found solutions to avoid repeatedly calculating the
same part of the problem. It divides the problem into smaller overlapping
subproblems, solves each one once, and stores the results.
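A minimal sketch is the Fibonacci sequence with memoization, where each overlapping subproblem is solved only once:

```python
def fib(n, memo=None):
    """nth Fibonacci number; already-found solutions are stored and reused."""
    memo = {} if memo is None else memo
    if n in memo:
        return memo[n]          # reuse: avoid recomputing the same subproblem
    if n <= 1:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]


print(fib(30))  # 832040
```

Without the memo table, the naive recursion recomputes the same subproblems exponentially many times; with it, each value is computed once.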
10. Randomized Algorithm:
A randomized algorithm uses random numbers as part of its logic, typically to make
decisions, so that a good expected outcome holds regardless of the input.
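As a sketch of the idea (my own example), quicksort with a randomly chosen pivot has a good expected running time for any input order, because no fixed input can reliably trigger the worst case:

```python
import random


def randomized_quicksort(items):
    """Quicksort with a random pivot: expected O(n log n) for any input order."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)                    # the random decision
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)


print(randomized_quicksort([7, 2, 9, 4, 2]))  # [2, 2, 4, 7, 9]
```

The output is always the same sorted list; only the running time depends on the random choices.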
To learn more about the types of algorithms, refer to the article "Types of
Algorithms".
Advantages of Algorithms:
It is easy to understand.
An algorithm is a step-wise representation of a solution to a given problem.
In an Algorithm the problem is broken down into smaller pieces or steps
hence, it is easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
Writing an algorithm can be very time-consuming.
Understanding complex logic through algorithms can be very difficult.
Branching and looping statements are difficult to represent in
algorithms.
An algorithm is defined as a finite set of instructions that, if followed,
performs a particular task. All algorithms must satisfy the criteria of input,
output, definiteness, finiteness, and effectiveness described above.
Recursive Algorithms
A recursive algorithm calls itself, generally passing the return value of one call as
a parameter to the next call. The parameter indicates the input, while the return
value indicates the output.
Time complexity describes how the running time of an algorithm grows with the
length of its input. It is not a measurement of wall-clock time, because factors such
as the programming language, operating system, and processing power also affect
that. Instead, we estimate the time complexity by considering the cost of each
fundamental instruction and the number of times each instruction is executed. Time
complexity is a very useful measure in algorithm analysis.
Example 1: Addition of two scalar variables.
Algorithm ADD_SCALAR(A, B)
// Description: Perform arithmetic addition of two numbers
// Input: Two scalar variables A and B
// Output: Variable C, which holds the sum of A and B
C ← A + B
return C

The addition of two scalar numbers requires one addition operation, so the time
complexity of this algorithm is constant: T(n) = O(1).
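For contrast with ADD_SCALAR, a sketch in Python (function names are my own) of a constant-time operation next to a linear-time one, where the number of additions grows with the input length:

```python
def add_scalar(a, b):
    """One addition regardless of input size: T(n) = O(1)."""
    return a + b


def sum_array(values):
    """One addition per element: the loop body runs len(values) times, T(n) = O(n)."""
    total = 0
    for v in values:
        total = total + v
    return total


print(add_scalar(2, 3))         # 5
print(sum_array([1, 2, 3, 4]))  # 10
```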
Space Complexity
Space complexity is the total amount of memory an algorithm needs with respect to
its input size, including any auxiliary variables it creates. For ADD_SCALAR above,
only one extra variable C is used, so its space complexity is O(1).
Asymptotic Notation
Asymptotic notation is used to describe the running time of an algorithm:
how much time an algorithm takes with a given input of size n. There are three
different notations: big O, big Theta (Θ), and big Omega (Ω). Big-Θ is used
when the running time is the same for all cases, big-O for the worst-case
running time, and big-Ω for the best-case running time.
Big-Θ Notation
We compute the big-Θ of an algorithm by counting the number of
iterations the algorithm always takes with an input of size n. For instance, a loop
that always iterates N times for a list of size N has a runtime that can be described
as Θ(N).
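Such a loop can be sketched as follows (the function name is my own); it iterates exactly N times in every case, best and worst alike:

```python
def count_matches(values, target):
    """Count occurrences of target; the loop always runs len(values) times,
    so the running time is Θ(N) in every case."""
    count = 0
    for v in values:          # exactly N iterations, regardless of the data
        if v == target:
            count += 1
    return count


print(count_matches([1, 2, 2, 3], 2))  # 2
```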
Big-O Notation
The Big-O notation describes the worst-case running time of a program.
We compute the Big-O of an algorithm by counting how many iterations an
algorithm will take in the worst-case scenario with an input of N. We
typically consult the Big-O because we must always plan for the worst case.
For example, O(log n) describes the Big-O of a binary search algorithm.
Big-Ω Notation
Big-Ω (Omega) describes the best running time of a program. We compute
the big-Ω by counting how many iterations an algorithm will take in the
best-case scenario based on an input of N. For example, a Bubble Sort
algorithm has a running time of Ω(N) because in the best case scenario the
list is already sorted, and the bubble sort will terminate after the first
iteration.
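The bubble sort just described can be sketched with an early-exit flag (my own implementation); the `swapped` check is what makes the best case Ω(N):

```python
def bubble_sort(items):
    """Bubble sort that stops as soon as a full pass makes no swaps."""
    result = list(items)
    n = len(result)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
                swapped = True
        if not swapped:   # list already sorted: terminate after this pass
            break         # best case: a single pass over the list, Ω(N)
    return result


print(bubble_sort([3, 1, 2]))  # [1, 2, 3]
```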
Probability simply describes how likely an event is to occur, and its value
always lies between 0 and 1 (inclusive of 0 and 1). For example, consider two bags,
named A and B, each containing 10 red balls and 10 black balls. If you randomly
pick a ball from either bag (without looking in the bag), you surely don't know
which ball you are going to pick. This is where probability comes in: it tells us how
likely you are to pick, say, a red or a black ball. Note that we will denote
probability as P from now on; P(X) means the probability for an event X to occur.
P(Red ball) = P(Bag A) · P(Red ball | Bag A) + P(Bag B) · P(Red ball | Bag B)

This equation finds the probability of drawing a red ball. It uses conditional
probability, which gives the probability of an event when we are provided with a
condition. P(Bag A) = 1/2 because we have 2 bags out of which we select Bag A.
P(Red ball | Bag A) should be read as "the probability of drawing a red ball given
Bag A"; the word "given" specifies the condition, which is Bag A in this case, so it
is 10 red balls out of 20 balls, i.e. 10/20. Solving:

P(Red ball) = 1/2 · 10/20 + 1/2 · 10/20 = 1/2
Similarly, you can try to find the probability of drawing a black ball. Also, find the
probability of drawing two consecutive red balls from the bag after transferring one
black ball from Bag A to Bag B.
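The calculation above can be checked with exact fractions in Python (variable names are my own):

```python
from fractions import Fraction

# Law of total probability for drawing a red ball from a randomly chosen bag.
p_bag_a = Fraction(1, 2)           # P(Bag A): one of two equally likely bags
p_bag_b = Fraction(1, 2)           # P(Bag B)
p_red_given_a = Fraction(10, 20)   # P(Red | Bag A): 10 red out of 20 balls
p_red_given_b = Fraction(10, 20)   # P(Red | Bag B): 10 red out of 20 balls

p_red = p_bag_a * p_red_given_a + p_bag_b * p_red_given_b
print(p_red)  # 1/2
```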
Conditional probability can be written as a formula:

P(A | B) = P(A ∩ B) / P(B)

There is nothing new here beyond the concept already discussed above: we are
finding the probability for an event A to occur given that event B has already
occurred. The numerator of the right-hand side is the probability for both events
to occur, and it is divided by the probability for event B to occur. The inverted-U
symbol between A and B is called "intersection" in set theory.
There are a few key concepts that are important to understand in probability
theory. These include:
Sample space: The sample space is the collection of all potential outcomes
of an experiment. For example, the sample space of flipping a coin is
{heads, tails}.
Event: An event is a collection of outcomes within the sample space. For
example, the event of flipping a head is {heads}.
Probability: The probability of an event is a number between 0 and 1 that
represents the likelihood of the event occurring. A probability of 0 means that
the event is impossible, and a probability of 1 means that the event is
certain.
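These three concepts can be sketched for a single die roll (my own example, with all outcomes equally likely):

```python
from fractions import Fraction

# Sample space: the collection of all potential outcomes of rolling one die.
sample_space = {1, 2, 3, 4, 5, 6}

# Event: a collection of outcomes within the sample space (the roll is even).
event_even = {2, 4, 6}

# Probability: favourable outcomes divided by all outcomes.
p_even = Fraction(len(event_even), len(sample_space))
print(p_even)  # 1/2
```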