What Is An Algorithm

The document discusses different types of algorithms. It begins by defining an algorithm as a step-by-step procedure to solve a problem in an optimized manner. It then describes several common types of algorithms including recursive algorithms, divide and conquer algorithms, dynamic programming algorithms, greedy algorithms, backtracking algorithms, randomized algorithms, sorting algorithms, searching algorithms, and hashing algorithms. It provides examples of common problems solved by each algorithm type. Finally, it discusses complexity analysis of algorithms using big-O, Omega, and Theta notations and describes worst case, best case, and average case analyses.


What is an Algorithm?

An algorithm is a step-by-step procedure to solve a problem. A good
algorithm should be optimized in terms of time and space. Different types of
problems require different algorithmic techniques to be solved in the most
optimized manner. There are many types of algorithms, but the most important
and fundamental ones are discussed in this article.

1. Recursive Algorithm:
This type of algorithm is based on recursion. In recursion, a problem is
solved by breaking it into subproblems of the same type, and the function
calls itself repeatedly until the problem is solved with the help of a base
condition.
Some common problems solved using recursive algorithms are the Factorial of a
Number, the Fibonacci Series, the Tower of Hanoi, DFS for a Graph, etc.
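A minimal C++ sketch of the recursion idea using the factorial example from the list above:

#include <iostream>

// Recursive factorial: factorial(n) is reduced to the smaller subproblem
// factorial(n - 1) until the base condition n <= 1 stops the recursion.
long long factorial(int n) {
    if (n <= 1) {                     // base condition
        return 1;
    }
    return n * factorial(n - 1);      // the function calls itself on a smaller input
}

int main() {
    std::cout << factorial(5) << '\n';   // prints 120
    return 0;
}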
2. Divide and Conquer Algorithm:
In Divide and Conquer algorithms, the idea is to solve the problem in two
steps: first, divide the problem into subproblems of the same type; second,
solve the smaller subproblems independently and then combine their results to
produce the final answer to the problem.
Some common problems solved using Divide and Conquer algorithms are Binary
Search, Merge Sort, Quick Sort, Strassen's Matrix Multiplication, etc.
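A minimal C++ sketch of Merge Sort, one of the examples above, showing the divide, conquer, and combine steps:

#include <iostream>
#include <vector>

// Combine step: merge two already-sorted halves a[lo..mid] and a[mid+1..hi].
void merge(std::vector<int>& a, int lo, int mid, int hi) {
    std::vector<int> merged;
    merged.reserve(hi - lo + 1);
    int i = lo, j = mid + 1;
    while (i <= mid && j <= hi)
        merged.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= mid) merged.push_back(a[i++]);
    while (j <= hi)  merged.push_back(a[j++]);
    for (int k = 0; k < (int)merged.size(); ++k)
        a[lo + k] = merged[k];
}

// Divide the range in half, conquer each half recursively, then combine.
void mergeSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;              // a range of size 0 or 1 is already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);             // conquer the left half
    mergeSort(a, mid + 1, hi);         // conquer the right half
    merge(a, lo, mid, hi);             // combine the two sorted halves
}

int main() {
    std::vector<int> a = {5, 2, 9, 1, 7};
    mergeSort(a, 0, (int)a.size() - 1);
    for (int x : a) std::cout << x << ' ';   // prints 1 2 5 7 9
    std::cout << '\n';
}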
3. Dynamic Programming Algorithm:
This type of algorithm is also known as the memoization technique because the
idea is to store previously calculated results to avoid computing them again
and again. In Dynamic Programming, the complex problem is divided into smaller
overlapping subproblems, and the result of each subproblem is stored for
future use.
The following problems can be solved using Dynamic Programming: the Knapsack
Problem, Weighted Job Scheduling, the Floyd Warshall Algorithm, etc.
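A minimal C++ sketch of the memoization idea, using the Fibonacci numbers as a smaller example than the problems listed above:

#include <iostream>
#include <vector>

// Fibonacci with memoization: each overlapping subproblem fib(i) is
// computed once and its result is stored for future use.
long long fib(int n, std::vector<long long>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];            // reuse the stored result
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

int main() {
    int n = 40;
    std::vector<long long> memo(n + 1, -1);       // -1 marks "not computed yet"
    std::cout << fib(n, memo) << '\n';            // prints 102334155
}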
4. Greedy Algorithm:
In a Greedy Algorithm, the solution is built part by part. The next part is
chosen purely on the basis of its immediate benefit, and choices made
previously are never reconsidered.
Some common problems that can be solved with a Greedy Algorithm are Dijkstra's
Shortest Path Algorithm, Prim's Algorithm, Kruskal's Algorithm, Huffman
Coding, etc.
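A small illustration of the greedy idea in C++. It uses activity (interval) selection, which is not one of the examples listed above but follows the same pattern: each choice is made only for its immediate benefit and is never revisited.

#include <algorithm>
#include <iostream>
#include <vector>

struct Activity { int start, finish; };

// Greedy activity selection: repeatedly pick the activity that finishes
// earliest among those compatible with what has already been chosen.
int maxActivities(std::vector<Activity> acts) {
    std::sort(acts.begin(), acts.end(),
              [](const Activity& a, const Activity& b) { return a.finish < b.finish; });
    int count = 0, lastFinish = -1;
    for (const Activity& a : acts) {
        if (a.start >= lastFinish) {   // compatible with the last chosen activity
            ++count;
            lastFinish = a.finish;
        }
    }
    return count;
}

int main() {
    std::vector<Activity> acts = {{1, 4}, {3, 5}, {0, 6}, {5, 7}, {8, 9}, {5, 9}};
    std::cout << maxActivities(acts) << '\n';   // prints 3 ({1,4}, {5,7}, {8,9})
}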
5. Backtracking Algorithm:
In a Backtracking Algorithm, the problem is solved incrementally: it is an
algorithmic technique for solving problems recursively by trying to build a
solution one piece at a time and removing those partial solutions that fail to
satisfy the constraints of the problem at any point.
Some common problems that can be solved with a Backtracking Algorithm are the
Hamiltonian Cycle, the M-Coloring Problem, the N Queen Problem, the Rat in a
Maze Problem, etc.
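A minimal C++ sketch of backtracking using the N Queen Problem from the list above: one queen is placed per row, and as soon as a partial placement violates a constraint it is undone and the next column is tried.

#include <iostream>
#include <vector>

// Is it safe to put a queen at (row, c), given queens already placed in rows 0..row-1?
bool safe(const std::vector<int>& col, int row, int c) {
    for (int r = 0; r < row; ++r) {
        if (col[r] == c || row - r == c - col[r] || row - r == col[r] - c)
            return false;   // same column or same diagonal as an earlier queen
    }
    return true;
}

bool placeQueens(std::vector<int>& col, int row, int n) {
    if (row == n) return true;                 // all queens placed
    for (int c = 0; c < n; ++c) {
        if (safe(col, row, c)) {
            col[row] = c;                      // try this column
            if (placeQueens(col, row + 1, n)) return true;
            col[row] = -1;                     // backtrack: remove the queen
        }
    }
    return false;                              // no column works for this row
}

int main() {
    int n = 8;
    std::vector<int> col(n, -1);               // col[r] = column of the queen in row r
    if (placeQueens(col, 0, n)) {
        for (int r = 0; r < n; ++r) std::cout << col[r] << ' ';
        std::cout << '\n';                     // one valid placement, e.g. 0 4 7 5 2 6 1 3
    }
}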
6. Randomized Algorithm:
In a randomized algorithm, we use a random number at some step of the
computation; this random choice helps determine the expected outcome of the
algorithm.
A common example of a Randomized Algorithm is Quicksort, where a random number
is used to select the pivot.
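A minimal C++ sketch of randomized Quicksort, where the pivot index is drawn at random so the expected running time is O(n log n) regardless of the input order:

#include <cstdlib>
#include <ctime>
#include <iostream>
#include <utility>
#include <vector>

// Partition around a randomly chosen pivot and return its final position.
int randomizedPartition(std::vector<int>& a, int lo, int hi) {
    int pivotIndex = lo + std::rand() % (hi - lo + 1);  // random pivot
    std::swap(a[pivotIndex], a[hi]);                    // move pivot to the end
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);                             // place pivot in its final spot
    return i;
}

void quickSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int p = randomizedPartition(a, lo, hi);
    quickSort(a, lo, p - 1);
    quickSort(a, p + 1, hi);
}

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    std::vector<int> a = {9, 3, 7, 1, 8, 2};
    quickSort(a, 0, (int)a.size() - 1);
    for (int x : a) std::cout << x << ' ';   // prints 1 2 3 7 8 9
    std::cout << '\n';
}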

7. Sorting Algorithm:
A sorting algorithm is used to arrange data in ascending or descending order,
i.e., in an efficient and useful manner.
Bubble sort, insertion sort, merge sort, selection sort, and quick sort are
examples of sorting algorithms.
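A minimal C++ sketch of one of these, insertion sort:

#include <iostream>
#include <vector>

// Insertion sort: grow a sorted prefix one element at a time by shifting
// larger elements to the right and inserting the current element in place.
void insertionSort(std::vector<int>& a) {
    for (size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        int j = (int)i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];   // shift larger elements one position right
            --j;
        }
        a[j + 1] = key;        // insert the element into the sorted prefix
    }
}

int main() {
    std::vector<int> a = {4, 1, 3, 2};
    insertionSort(a);
    for (int x : a) std::cout << x << ' ';   // prints 1 2 3 4
    std::cout << '\n';
}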

8. Searching Algorithm:
A searching algorithm is used to find a specific key in sorted or unsorted
data. Binary search and linear search are examples of searching algorithms.
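A minimal C++ sketch of one of these, iterative binary search over a sorted array:

#include <iostream>
#include <vector>

// Iterative binary search: repeatedly halve the search range of a sorted
// array until the key is found or the range becomes empty.
// Returns the index of key, or -1 if it is not present.
int binarySearch(const std::vector<int>& a, int key) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;   // key lies in the right half
        else              hi = mid - 1;   // key lies in the left half
    }
    return -1;
}

int main() {
    std::vector<int> a = {2, 5, 8, 12, 16, 23};   // must be sorted
    std::cout << binarySearch(a, 12) << '\n';     // prints 3
    std::cout << binarySearch(a, 7)  << '\n';     // prints -1
}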

9. Hashing Algorithm:
Hashing algorithms work much like searching algorithms, but they maintain an
index with a key, i.e., a key-value pair. In hashing, we assign a key to
specific data.
A common problem solved with a Hashing Algorithm is password verification.
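A minimal C++ sketch of the key-value idea using std::unordered_map (a hash table). The stored "hash" strings below are illustrative placeholders, not a real password-hashing scheme:

#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // Each username (key) maps to a stored password hash (value).
    std::unordered_map<std::string, std::string> passwordHash = {
        {"alice", "5f4dcc3b..."},   // hypothetical stored hash values
        {"bob",   "7c6a180b..."}
    };

    std::string user = "alice";
    std::string enteredHash = "5f4dcc3b...";   // hash of the password the user typed

    auto it = passwordHash.find(user);         // expected O(1) lookup by key
    if (it != passwordHash.end() && it->second == enteredHash)
        std::cout << "password verified\n";
    else
        std::cout << "verification failed\n";
}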
Popular Notations in Complexity Analysis of
Algorithms
1. Big-O Notation
We define an algorithm's worst-case time complexity using Big-O notation,
which describes the set of functions that grow slower than or at the same rate
as the given expression. It characterizes the maximum amount of time an
algorithm can require over all input values.
2. Omega Notation
Omega notation defines the best case of an algorithm's time complexity; it
describes the set of functions that grow faster than or at the same rate as
the given expression. It characterizes the minimum amount of time an algorithm
requires over all input values.
3. Theta Notation
Theta notation characterizes the average case of an algorithm's time
complexity: a function lies in Theta(expression) when it lies in both
O(expression) and Omega(expression). This is how the average-case time
complexity of an algorithm is defined.
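These descriptions correspond to the standard formal definitions, written here in LaTeX notation (g(n) stands for the bounding expression):

O(g(n)) = \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ 0 \le f(n) \le c\,g(n)\ \text{for all}\ n \ge n_0 \,\}
\Omega(g(n)) = \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ 0 \le c\,g(n) \le f(n)\ \text{for all}\ n \ge n_0 \,\}
\Theta(g(n)) = O(g(n)) \cap \Omega(g(n))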
Measurement of Complexity of an Algorithm
Based on the above three notations of time complexity, there are three cases
in which to analyze an algorithm:

1. Worst Case Analysis (Mostly used)


In the worst-case analysis, we calculate the upper bound on the running
time of an algorithm. We must know the case that causes a maximum
number of operations to be executed. For Linear Search, the worst case
happens when the element to be searched (x) is not present in the array.
When x is not present, the search() function compares it with all the
elements of arr[] one by one. Therefore, the worst-case time complexity of
the linear search would be O(n).
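A minimal C++ sketch of the linear search function referred to above (the name search and the array arr follow the text; the details are illustrative):

#include <iostream>
#include <vector>

// Linear search as used in the analysis above: x is compared with the
// elements of arr one by one, so the number of comparisons drives the cost.
int search(const std::vector<int>& arr, int x) {
    for (int i = 0; i < (int)arr.size(); ++i)
        if (arr[i] == x) return i;   // best case: x is at the first position (1 comparison)
    return -1;                       // worst case: x is absent, all n elements compared
}

int main() {
    std::vector<int> arr = {10, 20, 30, 40};
    std::cout << search(arr, 30) << '\n';   // prints 2
    std::cout << search(arr, 99) << '\n';   // prints -1 after n comparisons
}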
2. Best Case Analysis (Very Rarely used)
In the best-case analysis, we calculate the lower bound on the running time
of an algorithm. We must know the case that causes a minimum number of
operations to be executed. In the linear search problem, the best case
occurs when x is present at the first location. The number of operations in
the best case is constant (not dependent on n). So time complexity in the
best case would be Ω(1)
3. Average Case Analysis (Rarely used)
In average case analysis, we take all possible inputs and calculate the
computing time for all of the inputs. Sum all the calculated values and
divide the sum by the total number of inputs. We must know (or predict) the
distribution of cases. For the linear search problem, let us assume that all
cases are uniformly distributed (including the case of x not being present in
the array). So we sum all the cases and divide the sum by (n+1). Following
is the value of average-case time complexity.
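The value itself is missing above; a standard derivation (assuming a uniform distribution over the n positions of x plus the not-present case, with a cost of \Theta(i) when the search stops after i comparisons) is:

\text{Average case} = \frac{\sum_{i=1}^{n+1} \Theta(i)}{n+1} = \frac{\Theta\big((n+1)(n+2)/2\big)}{n+1} = \Theta(n)

So the average-case time complexity of linear search is Θ(n).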

Types of Algorithms in C++

1. Heap: In this type, we construct a heap to find the maximum or minimum
value of a sequence. It uses a tree-based data structure to achieve its
output.

2. Binary Search: This C++ algorithm iteratively divides the whole sequence
into two parts until it finds the value being searched for in the target
sequence. It is a highly effective algorithm as it halves the search space at
every step. The preliminary condition for using this C++ algorithm is that the
sequence provided to it must be sorted.

3. Sorting: There are different types of sorting that can be used to generate
a sorted sequence: insertion sort, bubble sort, selection sort, heap sort,
quick sort, and merge sort. Some of these algorithms, such as merge sort and
quick sort, work on the "divide and conquer" principle. They are quick and
efficient in comparison to the others, although they use more memory in their
operations.

4. Simple Operations Over the Sequence: Algorithms can be used to perform
simple modifying operations like replacing, removing, or reversing the numbers
in a sequence. There are many ways to reach this output using different
algorithms, all aiming to achieve the same result.

5. Non-modifying Operations: Operations like search, find, and count the
number of elements in a sequence do not modify the data values of the elements
but work around them.

A sketch of the corresponding C++ standard-library calls is shown after this
list.
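A minimal sketch, assuming the facilities of the standard <algorithm> header (heap, binary search, sorting, modifying and non-modifying operations); expected outputs are shown in comments:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v = {4, 1, 7, 3, 7, 2};

    // 1. Heap: arrange the sequence as a max-heap and read the maximum.
    std::make_heap(v.begin(), v.end());
    std::cout << "max: " << v.front() << '\n';                          // prints max: 7

    // 3. Sorting: std::sort gives an ascending sequence (required for binary search).
    std::sort(v.begin(), v.end());

    // 2. Binary search: works only on a sorted sequence.
    std::cout << std::boolalpha
              << std::binary_search(v.begin(), v.end(), 3) << '\n';     // prints true

    // 4. Simple (modifying) operations: replace and reverse the sequence.
    std::replace(v.begin(), v.end(), 7, 9);
    std::reverse(v.begin(), v.end());

    // 5. Non-modifying operations: count and find.
    std::cout << std::count(v.begin(), v.end(), 9) << '\n';             // prints 2
    std::cout << (std::find(v.begin(), v.end(), 1) != v.end()) << '\n'; // prints true
}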

Data structures are classified into the following two types:

o Primitive data structure
o Non-primitive data structure

A primitive data structure is a fundamental type of data structure that stores
data of only one type, whereas a non-primitive data structure is a
user-defined data structure that stores data of different types in a single
entity.

As noted above, the data structure is classified into two types, i.e.,
primitive and non-primitive data structures. A primitive data structure
contains fundamental data types such as integer, float, character, and
pointer, and each of these fundamental data types can hold a single type of
value. For example, an integer variable can hold an integer value, a float
variable can hold a floating-point value, a character variable can hold a
character value, and a pointer variable can hold a pointer value.
A non-primitive data structure is categorized into two parts: linear data
structures and non-linear data structures. A linear data structure is a
sequential type of data structure, which means that all the elements are
stored in memory one after another; for example, the element stored after the
second element is the third element, the element stored after the third
element is the fourth element, and so on. Linear data structures that hold
sequential values include the Array, Linked List, Stack, and Queue.

A non-linear data structure is a data structure in which elements are not
stored sequentially. The non-linear data structures are the Tree and the Graph.

Primitive data structure vs Non-primitive data structure:

o A primitive data structure stores data of only one type, whereas a
non-primitive data structure can store data of more than one type.
o Examples of primitive data structures are integer, character, and float;
examples of non-primitive data structures are Array, Linked List, and Stack.
o A primitive data structure will always contain some value, i.e., it cannot
be NULL, whereas a non-primitive data structure can consist of a NULL value.
o The size of a primitive data structure depends on its data type, whereas the
size of a non-primitive data structure is not fixed.
o A primitive data structure starts with a lowercase character, whereas a
non-primitive data structure starts with an uppercase character.
o A primitive data structure can be used to call methods, whereas a
non-primitive data structure cannot be used to call methods.

Primitive data structure


A primitive data structure is a data structure that can hold a single value in
a specific location, whereas a non-primitive data structure can hold multiple
values, either in contiguous locations or in random locations.

The examples of primitive data structures are float, character, integer, and
pointer. The value of a primitive data structure is provided by the
programmer. The following are four primitive data structures:
o Integer: The integer data type contains numeric values. It holds whole
numbers, which can be either negative or positive. When the range of the
integer data type is not large enough, long can be used instead.
o Float: The float data type holds decimal values. When more precision is
needed for a decimal value, the double data type is used.
o Boolean: This data type holds either a True or a False value. It is mainly
used for checking conditions.
o Character: This data type holds a single character value, either uppercase
or lowercase, such as 'A' or 'a'.
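A small illustration of these primitive types declared in C++ (the variable names are arbitrary):

#include <iostream>

int main() {
    int   count  = -42;       // Integer: whole numbers, negative or positive
    long  bigNum = 100000L;   // long, when the int range is not large enough
    float price  = 3.14f;     // Float: decimal values (double for more precision)
    bool  isOpen = true;      // Boolean: true or false, used for conditions
    char  grade  = 'A';       // Character: a single character value

    std::cout << count << ' ' << bigNum << ' ' << price << ' '
              << isOpen << ' ' << grade << '\n';
}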

Non-primitive data structure


The non-primitive data structure is a kind of data structure that can hold
multiple values, either in contiguous or random locations. Non-primitive data
types are defined by the programmer. The non-primitive data structure is
further classified into two categories, i.e., linear and non-linear data
structures.

In a linear data structure, the data is stored in a sequence, i.e., one data
item after another. When we access data from a linear data structure, we
simply start from one place and find the other data items in sequence.

The following are the types of linear data structure:

o Array: An array is a data structure that holds elements of the same type. It
cannot contain elements of different types, such as an integer together with a
character. The commonly used operations on an array are insertion, deletion,
traversal, and searching.

For example:

int a[6] = {1,2,3,4,5,6};

The above example is an array that contains the integer type elements stored in a
contiguous manner.

o String: A string is defined as an array of characters. The difference
between a character array and a string is that the string data structure is
terminated with a 'NULL' character, denoted as '\0'.

String data structure:

char name[100] = "Hello javaTpoint";


In the above example, the length of the string is 17, because the last
character is a NULL character ('\0') that denotes the termination of the string.

Char Representation:

char name[100] = {'H', 'e', 'l', 'l', 'o', ' ', 'j', 'a', 'v', 'a', 't', 'p', 'o', 'i', 'n', 't'};

In the above example, the length of the string is 16 as it does not have any NULL
character as the last character to denote the termination.

o Stack: A stack is a data structure that follows the LIFO (Last In First Out)
principle. All the operations on the stack are performed from the top of the
stack, such as the PUSH and POP operations. The push operation inserts an
element into the stack, while the pop operation removes an element from the
stack. The stack data structure can be implemented using either an array or a
linked list.
o Queue: A queue is a data structure that follows the FIFO (First In First
Out) principle and can be implemented using an array. Unlike the stack,
elements in the queue are inserted at the rear end and removed from the front
end.
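A minimal sketch using the C++ standard containers std::stack and std::queue to show the LIFO and FIFO behaviour described above:

#include <iostream>
#include <queue>
#include <stack>

int main() {
    std::stack<int> st;                 // stack: Last In First Out
    st.push(1); st.push(2); st.push(3); // PUSH onto the top
    std::cout << st.top() << '\n';      // prints 3 (the last element pushed)
    st.pop();                           // POP removes from the top

    std::queue<int> q;                  // queue: First In First Out
    q.push(1); q.push(2); q.push(3);    // insert at the rear
    std::cout << q.front() << '\n';     // prints 1 (the first element inserted)
    q.pop();                            // remove from the front
}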
