What Is An Algorithm
2. Recursive Algorithm:
This type of algorithm is based on recursion. In recursion, a problem is
solved by breaking it into subproblems of the same type, and the function
calls itself again and again until the problem is solved with the help of a
base condition.
Some common problems that are solved using recursive algorithms
are Factorial of a Number, Fibonacci Series, Tower of Hanoi, DFS for Graph,
etc.
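For instance, here is a minimal sketch in C++ of computing the factorial of a
number recursively (the function name and sample value are illustrative):

#include <iostream>

// Recursive factorial: the problem factorial(n) is reduced to the
// smaller subproblem factorial(n - 1) of the same type.
unsigned long long factorial(unsigned int n) {
    if (n <= 1)                      // base condition: stops the chain of self-calls
        return 1;
    return n * factorial(n - 1);     // the function calls itself on a smaller input
}

int main() {
    std::cout << factorial(5) << '\n';   // prints 120
    return 0;
}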
a) Divide and Conquer Algorithm:
In Divide and Conquer algorithms, the idea is to solve the problem in two
sections: the first section divides the problem into subproblems of the same
type, and the second section solves the smaller problems independently and
then combines their results to produce the final answer to the problem.
Some common problems that are solved using Divide and Conquer
algorithms are Binary Search, Merge Sort, Quick Sort, Strassen's Matrix
Multiplication, etc.
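As a rough sketch (names and sample data are illustrative), Merge Sort shows
both sections: the divide step splits the range into halves, and the combine
step merges the sorted halves:

#include <iostream>
#include <vector>

// Merge sort: divide the range into two halves, sort each half
// recursively, then combine (merge) the two sorted halves.
void mergeSort(std::vector<int>& a, int left, int right) {
    if (left >= right) return;                // a single element is already sorted
    int mid = left + (right - left) / 2;
    mergeSort(a, left, mid);                  // divide: sort the left half
    mergeSort(a, mid + 1, right);             // divide: sort the right half

    std::vector<int> merged;                  // combine: merge the two sorted halves
    int i = left, j = mid + 1;
    while (i <= mid && j <= right)
        merged.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= mid)   merged.push_back(a[i++]);
    while (j <= right) merged.push_back(a[j++]);
    for (int k = 0; k < (int)merged.size(); ++k)
        a[left + k] = merged[k];
}

int main() {
    std::vector<int> a = {5, 2, 9, 1, 7};
    mergeSort(a, 0, (int)a.size() - 1);
    for (int x : a) std::cout << x << ' ';    // prints 1 2 5 7 9
}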
b) Dynamic Programming Algorithms:
This type of algorithm is also known as the memoization technique because
the idea is to store previously calculated results to avoid calculating them
again and again. In Dynamic Programming, the complex problem is divided into
smaller overlapping subproblems, and each result is stored for future use.
The following problems can be solved using the Dynamic Programming
algorithm: Knapsack Problem, Weighted Job Scheduling, Floyd Warshall
Algorithm, etc.
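A minimal sketch of memoization (helper names are illustrative; the Fibonacci
Series is used here because its subproblems overlap):

#include <iostream>
#include <vector>

// Fibonacci with memoization: each overlapping subproblem fib(i) is
// computed once and its result is stored for future use.
long long fib(int n, std::vector<long long>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];             // reuse the stored result
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo); // solve smaller subproblems once
    return memo[n];
}

int main() {
    int n = 40;
    std::vector<long long> memo(n + 1, -1);
    std::cout << fib(n, memo) << '\n';             // prints 102334155
}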
c) Greedy Algorithm:
In the Greedy Algorithm, the solution is built part by part. The next part is
chosen on the basis that it gives an immediate benefit, and choices that have
already been made are never reconsidered.
Some common problems that can be solved through the Greedy Algorithm
are Dijkstra's Shortest Path Algorithm, Prim's Algorithm, Kruskal's
Algorithm, Huffman Coding, etc.
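As an illustration of the greedy idea (the activity-selection example below is
not one of the problems listed above; names and data are made up), each step
picks the activity that finishes earliest, i.e., the choice with the
immediate benefit:

#include <algorithm>
#include <iostream>
#include <vector>

// Greedy activity selection: repeatedly pick the activity that finishes
// earliest among those that do not overlap with the previously chosen one;
// earlier choices are never reconsidered.
int maxNonOverlapping(std::vector<std::pair<int, int>> activities) {  // {start, finish}
    std::sort(activities.begin(), activities.end(),
              [](const auto& x, const auto& y) { return x.second < y.second; });
    int count = 0, lastFinish = -1;
    for (const auto& act : activities) {
        if (act.first >= lastFinish) {   // compatible with the last chosen activity
            ++count;
            lastFinish = act.second;
        }
    }
    return count;
}

int main() {
    std::cout << maxNonOverlapping({{1, 2}, {3, 4}, {0, 6}, {5, 7}}) << '\n';  // prints 3
}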
d) Backtracking Algorithm:
The Backtracking Algorithm solves the problem in an incremental way, i.e., it
is an algorithmic technique for solving problems recursively by trying to
build a solution one piece at a time and removing those partial solutions
that fail to satisfy the constraints of the problem at any point in time.
Some common problems that can be solved through the Backtracking
Algorithm are the Hamiltonian Cycle, M-Coloring Problem, N Queen
Problem, Rat in a Maze Problem, etc.
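A compact sketch of the N Queen Problem solved by backtracking (the array
layout and function names below are illustrative):

#include <cstdlib>
#include <iostream>
#include <vector>

// rowOfCol[c] stores the row of the queen placed in column c.
bool isSafe(const std::vector<int>& rowOfCol, int col, int row) {
    for (int c = 0; c < col; ++c) {
        int r = rowOfCol[c];
        if (r == row || std::abs(r - row) == col - c)  // same row or same diagonal
            return false;
    }
    return true;
}

// Place queens column by column; if a partial placement violates the
// constraints, undo it (backtrack) and try the next possibility.
bool placeQueens(std::vector<int>& rowOfCol, int col, int n) {
    if (col == n) return true;                         // all queens placed
    for (int row = 0; row < n; ++row) {
        if (isSafe(rowOfCol, col, row)) {
            rowOfCol[col] = row;                       // try this piece of the solution
            if (placeQueens(rowOfCol, col + 1, n)) return true;
            rowOfCol[col] = -1;                        // backtrack: remove the piece
        }
    }
    return false;
}

int main() {
    int n = 8;
    std::vector<int> rowOfCol(n, -1);
    if (placeQueens(rowOfCol, 0, n))
        for (int c = 0; c < n; ++c) std::cout << rowOfCol[c] << ' ';  // one valid placement
}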
3. Randomized Algorithm:
In a randomized algorithm, we use a random number to help decide the expected
outcome; the random choice is made so that it gives an immediate benefit.
A common problem that can be solved through a Randomized Algorithm is
Quicksort, where we use a random number for selecting the pivot.
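A minimal sketch of randomized pivot selection in Quicksort (the helper names
are illustrative):

#include <iostream>
#include <random>
#include <utility>
#include <vector>

// The pivot index is drawn at random, which makes worst-case behaviour
// unlikely for any fixed input.
int randomizedPartition(std::vector<int>& a, int low, int high, std::mt19937& rng) {
    std::uniform_int_distribution<int> pick(low, high);
    std::swap(a[pick(rng)], a[high]);          // move the random pivot to the end
    int pivot = a[high], i = low;
    for (int j = low; j < high; ++j)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[high]);
    return i;                                  // final position of the pivot
}

void quickSort(std::vector<int>& a, int low, int high, std::mt19937& rng) {
    if (low >= high) return;
    int p = randomizedPartition(a, low, high, rng);
    quickSort(a, low, p - 1, rng);
    quickSort(a, p + 1, high, rng);
}

int main() {
    std::mt19937 rng(std::random_device{}());
    std::vector<int> a = {9, 3, 7, 1, 5};
    quickSort(a, 0, (int)a.size() - 1, rng);
    for (int x : a) std::cout << x << ' ';     // prints 1 3 5 7 9
}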
4. Sorting Algorithm:
A sorting algorithm is used to sort data in either ascending or
descending order. It is also used for arranging data in an efficient and
useful manner.
Bubble sort, insertion sort, merge sort, selection sort, and quick sort are
examples of sorting algorithms.
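As a small usage sketch, the C++ standard library's std::sort can arrange data
in ascending or descending order (the sample values are arbitrary):

#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> data = {4, 1, 3, 2};
    std::sort(data.begin(), data.end());                       // ascending order
    for (int x : data) std::cout << x << ' ';                  // prints 1 2 3 4
    std::cout << '\n';
    std::sort(data.begin(), data.end(), std::greater<int>());  // descending order
    for (int x : data) std::cout << x << ' ';                  // prints 4 3 2 1
}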
5. Searching Algorithm:
A searching algorithm is used for searching for a specific key in particular
sorted or unsorted data. Binary search and linear search are examples of
searching algorithms.
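A minimal sketch of both approaches (function names are illustrative); linear
search works on unsorted data, while binary search requires the data to be
sorted:

#include <iostream>
#include <vector>

int linearSearch(const std::vector<int>& a, int key) {
    for (int i = 0; i < (int)a.size(); ++i)   // scan every element in order
        if (a[i] == key) return i;
    return -1;                                // key not found
}

int binarySearch(const std::vector<int>& a, int key) {  // a must be sorted
    int low = 0, high = (int)a.size() - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;     // compare against the middle element
        if (a[mid] == key) return mid;
        if (a[mid] < key) low = mid + 1; else high = mid - 1;
    }
    return -1;
}

int main() {
    std::vector<int> data = {2, 4, 6, 8, 10};
    std::cout << linearSearch(data, 8) << ' ' << binarySearch(data, 8) << '\n';  // prints 3 3
}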
6. Hashing Algorithm:
Hashing algorithms work in the same way as searching algorithms, but they
contain an index with a key ID, i.e., a key-value pair. In hashing, we assign
a key to specific data.
A common problem that can be solved through a Hashing Algorithm is password
verification.
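A small sketch using a hash table of key-value pairs (the user names and
stored hash strings below are made-up placeholders, not real hashes):

#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // Each key is hashed to an index, so lookups take constant time on average.
    std::unordered_map<std::string, std::string> passwordHashes;
    passwordHashes["alice"] = "placeholder-hash-1";   // hypothetical stored hash
    passwordHashes["bob"]   = "placeholder-hash-2";

    std::string user = "alice";
    if (passwordHashes.count(user))
        std::cout << "stored hash for " << user << ": " << passwordHashes[user] << '\n';
}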
Popular Notations in Complexity Analysis of
Algorithms
1. Big-O Notation
We define an algorithm's worst-case time complexity by using Big-O
notation, which describes the set of functions that grow slower than or at
the same rate as the given expression. Furthermore, it describes the maximum
amount of time an algorithm requires, considering all input values.
2. Omega Notation
Omega notation defines the best case of an algorithm's time complexity: it
describes the set of functions that grow faster than or at the same rate as
the given expression. Furthermore, it describes the minimum amount of time an
algorithm requires, considering all input values.
3. Theta Notation
Theta notation defines the average case of an algorithm's time complexity: it
is used when the set of functions lies in both O(expression) and
Omega(expression). This is how we define the average-case time complexity of
an algorithm.
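For reference, the standard formal definitions behind these three notations
can be written (in LaTeX notation) as:

f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 1 : 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 1 : 0 \le c\,g(n) \le f(n) \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))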
Measurement of Complexity of an Algorithm
Based on the above three notations of time complexity, there are three cases
to analyze an algorithm:
1. Worst Case Analysis: the maximum amount of time the algorithm takes over
all inputs of a given size, expressed with Big-O notation.
2. Average Case Analysis: the expected time the algorithm takes over all
possible inputs of a given size, expressed with Theta notation.
3. Best Case Analysis: the minimum amount of time the algorithm takes over
all inputs of a given size, expressed with Omega notation.
Classification of Data Structure
A primitive data structure is a fundamental type of data structure that stores
data of only one type, whereas a non-primitive data structure is a
user-defined type of data structure that stores data of different types in a
single entity.
In the above classification, the data structure is divided into two types,
i.e., primitive and non-primitive data structures. A primitive data structure
contains fundamental data types such as integer, float, character, and
pointer, and these fundamental data types can hold a single type of value.
For example, an integer variable can hold an integer value, a float variable
can hold a floating-point value, and a character variable can hold a
character value, whereas a pointer variable can hold a pointer (address)
value.
In the case of a non-primitive data structure, it is categorized into two
parts: linear data structures and non-linear data structures. A linear data
structure is a sequential type of data structure, and here sequential means
that all the elements are stored in memory one after another; for example,
the element stored after the second element is the third element, the element
stored after the third element is the fourth element, and so on. The linear
data structures holding sequential values are Array, Linked List, Stack, and
Queue.
A non-linear data structure is a kind of random (non-sequential) type of data
structure. The non-linear data structures are Tree and Graph.
The differences between primitive and non-primitive data structures are as
follows:
o A primitive data structure stores data of only one type, whereas a
non-primitive data structure can store data of more than one type.
o Examples of primitive data structures are integer, character, and float;
examples of non-primitive data structures are Array, Linked List, and Stack.
o A primitive data structure will always contain some value, i.e., it cannot
be NULL, whereas a non-primitive data structure can consist of a NULL value.
o The size of a primitive data structure depends on the type of the data
structure, whereas in the case of a non-primitive data structure, the size is
not fixed.
o A primitive data structure can be used to call the methods, whereas a
non-primitive data structure cannot be used to call methods.
The examples of primitive data structures are float, character, integer, and
pointer. The value of a primitive data structure is provided by the
programmer. The following are the four primitive data structures:
o Integer: The integer data type contains numeric values. It holds whole
numbers that can be either negative or positive. When the range of the
integer data type is not large enough, we can use long instead.
o Float: The float data type can hold decimal values. When more precision is
needed for the decimal value, the double data type is used.
o Boolean: It is a data type that can hold either a True or a False value. It
is mainly used for checking conditions.
o Character: It is a data type that can hold a single character value, either
uppercase or lowercase, such as 'A' or 'a'.
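A short sketch declaring each of these fundamental types in C++ (variable
names and values are arbitrary; double and pointer are included as well):

#include <iostream>

int main() {
    int count = -42;                       // integer: whole numbers, positive or negative
    long long big = 9000000000LL;          // long is used when int is not large enough
    float price = 19.99f;                  // float: decimal values
    double precise = 3.141592653589793;    // double is used for higher-precision decimals
    bool isValid = true;                   // boolean: true or false, used in conditions
    char grade = 'A';                      // character: a single character value
    int* ptr = &count;                     // pointer: holds the address of another value
    std::cout << count << ' ' << big << ' ' << price << ' ' << precise << ' '
              << isValid << ' ' << grade << ' ' << *ptr << '\n';
}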
In the case of a linear data structure, the data is stored in a sequence,
i.e., one data item after another. When we access data from a linear data
structure, we just need to start from one place and we will find the other
data in sequence.
o Array: An array is a data structure that can hold elements of the same
type. It cannot contain elements of different types, such as an integer
together with a character. The commonly used operations on an array are
insertion, deletion, traversing, and searching.
For example, an array of integers stores its elements in a contiguous
manner, as in the sketch below.
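A minimal sketch in C++ (the values are arbitrary):

#include <iostream>

int main() {
    int numbers[5] = {10, 20, 30, 40, 50};   // elements of the same type, stored contiguously
    numbers[2] = 35;                         // overwrite the element at index 2
    for (int i = 0; i < 5; ++i)              // traversal
        std::cout << numbers[i] << ' ';      // prints 10 20 35 40 50
}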
o String: A string is defined as an array of characters. The difference
between a character array and a string is that the string data structure
terminates with a 'NULL' character, and it is denoted as '\0'.
Char Representation:
char name[100] = {'H', 'e', 'l', 'l', 'o', ' ', 'j', 'a', 'v', 'a', 't', 'p', 'o', 'i', 'n', 't'};
In the above example, the length is 16, as the array does not have any NULL
character as the last character to denote the termination.
o Stack: A stack is a data structure that follows the LIFO (Last In First
Out) principle. All the operations on the stack are performed from the top
of the stack, such as the PUSH and POP operations. The push operation is the
process of inserting an element into the stack, while the pop operation is
the process of removing an element from the stack. The stack data structure
can be implemented by using either an array or a linked list. A short usage
sketch is given below.
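A brief sketch of the PUSH and POP operations using the C++ standard
library's std::stack:

#include <iostream>
#include <stack>

int main() {
    std::stack<int> s;
    s.push(1);                        // PUSH: insert an element at the top
    s.push(2);
    s.push(3);
    std::cout << s.top() << '\n';     // prints 3: the last element pushed is on top (LIFO)
    s.pop();                          // POP: remove the element from the top
    std::cout << s.top() << '\n';     // prints 2
}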
o Queue: A queue is a data structure that can be implemented by using an
array. The difference between the stack and the queue data structure is that
the elements in a queue are inserted from the rear end while they are
removed from the front end, i.e., the queue follows the FIFO (First In First
Out) principle.
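A matching sketch for the queue using the C++ standard library's std::queue:

#include <iostream>
#include <queue>

int main() {
    std::queue<int> q;
    q.push(1);                        // insert at the rear end
    q.push(2);
    q.push(3);
    std::cout << q.front() << '\n';   // prints 1: the first element inserted leaves first
    q.pop();                          // remove from the front end
    std::cout << q.front() << '\n';   // prints 2
}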