Diploma CS 5 Sem Notes 2-1
Brute Force Approach
It is one of the most basic and simplest approaches to solving a problem. Technically, one has to go through each and every possible solution to the problem and pick the best one.
For example, suppose a salesman wants to travel to different cities in a state. To find the route that minimizes the total distance travelled, he can use a brute force algorithm: enumerate every possible order of cities and choose the shortest route.
Selection sort, bubble sort, sequential search, string matching, depth-first search and breadth-first search, and the closest-pair and convex-hull problems can all be solved by brute force.
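For instance, here is a minimal C sketch of brute-force string matching (the function name brute_force_match and the sample strings are our own illustration, not from the notes). It simply tries every possible alignment of the pattern against the text:

    #include <stdio.h>
    #include <string.h>

    /* Try every alignment of pattern p against text t.
       Returns the index of the first match, or -1. */
    int brute_force_match(const char *t, const char *p) {
        int n = (int)strlen(t), m = (int)strlen(p);
        for (int i = 0; i <= n - m; i++) {
            int j = 0;
            while (j < m && t[i + j] == p[j])
                j++;
            if (j == m)
                return i;   /* all m pattern characters matched */
        }
        return -1;
    }

    int main(void) {
        printf("%d\n", brute_force_match("NOBODY_NOTICED_HIM", "NOT"));  /* prints 7 */
        return 0;
    }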
Pros (Advantages):
1. The brute force approach is a guaranteed way to find the correct solution, since it lists all the possible candidate solutions for the problem.
2. The brute force algorithm is a simple and straightforward solution to the problem, generally based directly on the statement of the problem and the definitions of the concepts involved.
Cons (Disadvantages):
1. Brute force algorithms are slow. This method relies more on the raw computing power of the system than on good algorithm design.
Bubble Sort
Bubble sort is a simple sorting algorithm. It is a comparison-based algorithm in which each pair of adjacent elements is compared, and the elements are swapped if they are not in order. This algorithm is not suitable for large data sets, as its average and worst case complexities are O(n^2), where n is the number of items.
Following are the steps involved in bubble sort (for sorting a given array in ascending order):
1. Starting with the first element (index = 0), compare the current element with the next element of the array.
2. If the current element is greater than the next element of the array, swap them.
3. If the current element is less than the next element, move to the next element and repeat Step 1. Repeat the passes until the array is sorted.
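A minimal C sketch of these steps (the function name bubble_sort and the driver are our own illustration, not part of the notes):

    #include <stdio.h>

    /* Compare each pair of adjacent elements and swap them if
       they are out of order; repeat the pass n-1 times. */
    void bubble_sort(int arr[], int n) {
        for (int pass = 0; pass < n - 1; pass++) {
            for (int i = 0; i < n - 1 - pass; i++) {
                if (arr[i] > arr[i + 1]) {   /* Step 2: swap if out of order */
                    int tmp = arr[i];
                    arr[i] = arr[i + 1];
                    arr[i + 1] = tmp;
                }
            }
        }
    }

    int main(void) {
        int arr[] = {13, 32, 26, 35, 10};    /* array used in the example below */
        int n = sizeof(arr) / sizeof(arr[0]);
        bubble_sort(arr, n);
        for (int i = 0; i < n; i++)
            printf("%d ", arr[i]);           /* prints: 10 13 26 32 35 */
        printf("\n");
        return 0;
    }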
Example:
Consider the array [13, 32, 26, 35, 10].
First Pass
Sorting will start from the initial two elements, comparing them to check which is greater. Here, 32 is greater than 13 (32 > 13), so this pair is already sorted. Now, compare 32 with 26. Here, 26 is smaller than 32, so swapping is required. After swapping, the array is [13, 26, 32, 35, 10]. Next, 35 is greater than 32, so no swapping is required, as this pair is already sorted. Finally, 10 is smaller than 35, so swapping is required, and we reach the end of the array. After the first pass, the array is [13, 26, 32, 10, 35].
Second Pass
Here, 10 is smaller than 32, so swapping is required. After swapping, the array is [13, 26, 10, 32, 35].
Third Pass
Here, 10 is smaller than 26, so swapping is required. After swapping, the array is [13, 10, 26, 32, 35].
Fourth Pass
Here, 10 is smaller than 13, so swapping is required. After swapping, the array is [10, 13, 26, 32, 35], which is fully sorted.
Now, let's see the time complexity of bubble sort in the best case, average case, and worst case.
Time Complexity
Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-case time complexity of bubble sort is O(n) (for the optimized version that stops as soon as a pass makes no swaps).
Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of bubble sort is O(n^2).
Worst Case Complexity - It occurs when the array elements have to be sorted in reverse order. That means, suppose you have to sort the array elements in ascending order, but the elements are in descending order. The worst-case time complexity of bubble sort is O(n^2).
Space Complexity
The space complexity of bubble sort is O(1), because only one extra variable is required for swapping.
Number of swaps in bubble sort = number of inversion pairs present in the given array.
Bubble sort is beneficial when the number of elements is small and the array is nearly sorted.
Advantages:
The primary advantage of the bubble sort is that it is popular and easy to implement.
In the bubble sort, elements are swapped in place without using additional temporary
storage.
Disadvantages:
It does not work well when we have large unsorted lists; the O(n^2) comparisons it requires end up taking a lot of time.
Selection Sort
Selection sort is a sorting algorithm that selects the smallest element from the unsorted part of the list in each iteration and places it at the beginning of the unsorted part.
This algorithm is not suitable for large data sets, as its average and worst case complexities are O(n^2), where n is the number of items.
To understand the working of the Selection sort algorithm, let's take an unsorted array. It will be
easier to understand the Selection sort via an example.
Now, for the first position in the sorted array, the entire array is scanned sequentially. At present, 12 is stored at the first position; after searching the entire array, it is found that 8 is the smallest value.
So, swap 12 with 8. After the first iteration, 8 appears at the first position of the sorted array.
For the second position, where 29 is currently stored, we again sequentially scan the rest of the unsorted items. After scanning, we find that 12 is the second lowest element in the array, so it should appear at the second position.
Now, swap 29 with 12. After the second iteration, 12 appears at the second position in the sorted array. So, after two iterations, the two smallest values are placed at the beginning in sorted order.
The same process is applied to the rest of the array elements until, after the final iteration, the array is completely sorted.
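A minimal C sketch of selection sort (the function name selection_sort is our own; the driver array is hypothetical, built from the values 12, 29 and 8 named above):

    #include <stdio.h>

    /* In each pass, find the smallest element of the unsorted part
       and swap it to the front of that part. */
    void selection_sort(int arr[], int n) {
        for (int i = 0; i < n - 1; i++) {
            int min = i;                      /* index of smallest value so far */
            for (int j = i + 1; j < n; j++)
                if (arr[j] < arr[min])
                    min = j;
            int tmp = arr[i];                 /* swap the minimum into position i */
            arr[i] = arr[min];
            arr[min] = tmp;
        }
    }

    int main(void) {
        int arr[] = {12, 29, 25, 8};          /* hypothetical data; the notes name only 12, 29 and 8 */
        selection_sort(arr, 4);
        for (int i = 0; i < 4; i++)
            printf("%d ", arr[i]);            /* prints: 8 12 25 29 */
        printf("\n");
        return 0;
    }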
Time Complexity
The worst case occurs when we want to sort in ascending order but the array is in descending order. The average case occurs when the elements of the array are in jumbled order (neither ascending nor descending). Selection sort has a time complexity of O(n^2) in the worst and average cases (and in the best case too, since the unsorted part is always fully scanned).
Space Complexity
The space complexity of selection sort is O(1), since only one extra variable is required for swapping.
Selection sort does not preserve the relative order of items with equal keys, which means it is not stable.
Linear Search
Linear search is also called the sequential search algorithm. It is the simplest searching algorithm. In linear search, we simply traverse the list completely and match each element of the list with the item whose location is to be found. If a match is found, the location of the item is returned; otherwise, the algorithm returns NULL (in practice, a sentinel value such as -1).
It is widely used to search for an element in an unordered list, i.e., a list in which the items are not sorted. The worst-case time complexity of linear search is O(n).
Every element is considered as a potential match for the key and checked for the same.
If any element is found equal to the key, the search is successful and the index of that
element is returned.
If no element is found equal to the key, the search yields “No match found”.
Step 1: Set i to 1
Step 2: If i > n, go to Step 7
Step 3: If A[i] = key, go to Step 6
Step 4: Set i to i + 1
Step 5: Go to Step 2
Step 6: Print "Element found at index i" and go to Step 8
Step 7: Print "Element not found"
Step 8: Exit
For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key = 30.
Step 1: Start from the first element (index 0) and compare the key with each element (arr[i]).
Comparing the key with the first element, arr[0]. Since they are not equal, the iterator moves to the next element as a potential match.
Comparing the key with the next element, arr[1]. Since they are not equal, the iterator moves to the next element as a potential match.
Step 2: Now, when comparing arr[2] with the key, the values match. So the linear search algorithm yields a successful message and returns the index at which the key was found (here, 2).
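A minimal C sketch of linear search, using the array from the example above (the function name linear_search and the -1 "not found" sentinel are our own choices):

    #include <stdio.h>

    /* Traverse the array and compare every element with the key.
       Returns the index of the first match, or -1 if there is none. */
    int linear_search(const int arr[], int n, int key) {
        for (int i = 0; i < n; i++)
            if (arr[i] == key)
                return i;
        return -1;
    }

    int main(void) {
        int arr[] = {10, 50, 30, 70, 80, 20, 90, 40};
        printf("%d\n", linear_search(arr, 8, 30));  /* prints 2, as in the example */
        return 0;
    }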
Time Complexity:
Best Case: In the best case, the key might be present at the first index. So the best-case complexity is O(1).
Worst Case: In the worst case, the key might be present at the last index, i.e., at the end opposite to the one from which the search started. So the worst-case complexity is O(N), where N is the size of the list.
Auxiliary Space: O(1), as no variable other than the one used to iterate through the list is needed.
Advantages:
Linear search can be used irrespective of whether the array is sorted or not. It can be used on arrays of any data type.
It does not require any additional memory.
It is a well-suited algorithm for small datasets.
Disadvantages:
Linear search has a time complexity of O(N), which makes it slow for large datasets.
It is not suitable for large arrays.
Decrease and Conquer
Decrease and conquer is a technique used to solve problems by reducing the size of the input data at each step of the solution process. This technique is similar to divide and conquer in that it breaks a problem down into smaller subproblems, but the difference is that in decrease and conquer the size of the input data is reduced at each step. The technique is used when it is easier to solve a smaller version of the problem, and the solution to the smaller problem can be used to find the solution to the original problem.
There are three major variations of decrease and conquer:
1. Decrease by a constant
2. Decrease by a constant factor
3. Variable size decrease
Decrease by a Constant: In this variation, the size of an instance is reduced by the same constant on each iteration of the algorithm. Typically, this constant is equal to one, although other constant-size reductions do happen occasionally. Below are example problems:
Insertion sort
Graph search algorithms: DFS, BFS
Topological sorting
Algorithms for generating permutations, subsets
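As a small illustration of decrease by one (our own sketch, not from the notes): computing a^n from the smaller instance a^(n-1), since a^n = a * a^(n-1).

    #include <stdio.h>

    /* Decrease by one: a^n is obtained from the smaller instance
       a^(n-1) with one extra multiplication. */
    long power(long a, unsigned n) {
        if (n == 0)
            return 1;                 /* base case: a^0 = 1 */
        return a * power(a, n - 1);   /* solve the instance of size n-1 first */
    }

    int main(void) {
        printf("%ld\n", power(2, 10));  /* prints 1024 */
        return 0;
    }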
Decrease by a Constant Factor: This technique suggests reducing a problem instance by the same constant factor on each iteration of the algorithm. In most applications, this constant factor is equal to two; a reduction by a factor other than two is especially rare. Decrease-by-a-constant-factor algorithms are very efficient, especially when the factor is greater than 2, as in the fake-coin problem. Below are example problems:
Binary search
Fake-coin problems
Russian peasant multiplication
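A minimal C sketch of binary search, the classic decrease-by-half example (the function name binary_search and the sample array are our own). Every comparison discards half of the remaining sorted range:

    #include <stdio.h>

    /* Requires a sorted array. Every comparison discards half of the
       remaining range, giving O(log n) time. */
    int binary_search(const int arr[], int n, int key) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  /* midpoint without (lo+hi) overflow */
            if (arr[mid] == key)
                return mid;                /* found */
            else if (arr[mid] < key)
                lo = mid + 1;              /* discard the left half */
            else
                hi = mid - 1;              /* discard the right half */
        }
        return -1;                         /* not found */
    }

    int main(void) {
        int arr[] = {8, 12, 25, 31};
        printf("%d\n", binary_search(arr, 4, 25));  /* prints 2 */
        return 0;
    }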
Variable-Size Decrease: In this variation, the size-reduction pattern varies from one iteration of the algorithm to another. For example, in the problem of finding the gcd of two numbers with Euclid's algorithm, although the value of the second argument is always smaller on the right-hand side than on the left-hand side, it decreases neither by a constant nor by a constant factor. Below are example problems:
Computing the greatest common divisor (Euclid's algorithm)
Interpolation search
Searching and insertion in a binary search tree
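A minimal C sketch of Euclid's algorithm (our own illustration): it uses gcd(m, n) = gcd(n, m mod n), and the second argument shrinks by a varying, data-dependent amount on each iteration:

    #include <stdio.h>

    /* Variable-size decrease: the second argument shrinks by a
       data-dependent amount on every iteration. */
    unsigned gcd(unsigned m, unsigned n) {
        while (n != 0) {
            unsigned r = m % n;   /* the new, smaller second argument */
            m = n;
            n = r;
        }
        return m;
    }

    int main(void) {
        printf("%u\n", gcd(60, 24));   /* prints 12 */
        return 0;
    }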
Disadvantages of Decrease and Conquer:
1. Problem-Specific: The technique is not applicable to all problems and may not be suitable for more complex problems.
2. Implementation Complexity: The technique can be more complex to implement when
compared to other techniques like divide-and-conquer, and may require more careful
planning.
Insertion Sort
Insertion sort is a sorting algorithm that places an unsorted element at its suitable place in each iteration. Insertion sort works similarly to the way we sort cards in our hand in a card game.
We assume that the first card is already sorted; then, we select an unsorted card. If the unsorted card is greater than the card in hand, it is placed on the right; otherwise, it is placed to the left. In the same way, the other unsorted cards are taken and put in their right places.
Following are the steps involved in insertion sort (for sorting a given array in ascending order):
Step 1 − If it is the first element, it is already sorted.
Step 2 − Pick the next element.
Step 3 − Compare it with all elements in the sorted sub-list.
Step 4 − Shift all the elements in the sorted sub-list that are greater than the value to be sorted.
Step 5 − Insert the value.
Step 6 − Repeat until the list is sorted.
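A minimal C sketch of these steps (the function name insertion_sort and the driver are our own; the array matches the example below):

    #include <stdio.h>

    /* Pick each element in turn, shift the larger elements of the
       sorted sub-list one place right, and insert it (Steps 2-5). */
    void insertion_sort(int arr[], int n) {
        for (int i = 1; i < n; i++) {
            int key = arr[i];                 /* Step 2: pick the next element */
            int j = i - 1;
            while (j >= 0 && arr[j] > key) {  /* Steps 3-4: compare and shift */
                arr[j + 1] = arr[j];
                j--;
            }
            arr[j + 1] = key;                 /* Step 5: insert the value */
        }
    }

    int main(void) {
        int arr[] = {12, 31, 25, 8};          /* the array from the example below */
        insertion_sort(arr, 4);
        for (int i = 0; i < 4; i++)
            printf("%d ", arr[i]);            /* prints: 8 12 25 31 */
        printf("\n");
        return 0;
    }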
To understand the working of the insertion sort algorithm, let's take an unsorted array, say [12, 31, 25, 8]. It will be easier to understand insertion sort via an example.
Here, 31 is greater than 12. That means the two elements are already in ascending order, so for now 12 is stored in the sorted sub-array.
Next, pick 25. It is smaller than 31, so 31 is shifted one place to the right; 25 is greater than 12, so it is inserted after 12. Now, the sorted sub-array holds 12, 25 and 31.
Finally, pick 8. It is smaller than 31, 25 and 12, so all three are shifted to the right and 8 is inserted at the first position. Now, the sorted array includes 8, 12, 25 and 31, and the array is completely sorted.
Time Complexity
Best Case: O(n)
It occurs when no sorting is required, i.e. the array is already sorted. The best-case time complexity of insertion sort is O(n).
Average Case: O(n^2)
It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of insertion sort is O(n^2).
Worst Case: O(n^2)
It occurs when the array elements have to be sorted in reverse order. That means, suppose you have to sort the array elements in ascending order, but the elements are in descending order. The worst-case time complexity of insertion sort is O(n^2).
Space Complexity
The space complexity of insertion sort is O(1), because only one extra variable is required for swapping.
Advantages:
Like other quadratic sorting algorithms, insertion sort is efficient for small data sets.
It requires only a constant O(1) amount of extra memory space.
It works well with data sets that are already substantially sorted.
It does not change the relative order of elements with the same key, i.e. it is stable.
***