
Diploma CS 5 Sem Notes 2-1


Learning Outcomes – 2

Brute Force Approach and Decrease and Conquer Approach

Brute Force Approach

The brute force approach is one of the most basic and simplest ways to solve a problem. It
systematically examines each and every possible candidate solution to a problem and selects the
one that best resolves the issue.

For example, suppose a salesman wants to travel to several cities in a state and wants to know
which route minimizes the total distance travelled. A brute force algorithm handles this by
enumerating every possible ordering of the cities and picking the shortest route.
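As an illustrative sketch of this idea (the function name and the small distance matrix below are my own, not from the notes), a brute-force route search could enumerate all orderings:

```python
from itertools import permutations

def shortest_route(dist):
    """Brute force: try every ordering of cities 1..n-1 (starting and ending
    at city 0) and return the minimum total round-trip distance.
    `dist` is an n x n matrix of pairwise distances."""
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        total = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if best is None or total < best:
            best = total
    return best

# Example: 4 cities with symmetric distances (illustrative data).
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
```

Since all (n-1)! orderings are tried, this is correct but impractical beyond a handful of cities, which is exactly the trade-off the pros and cons below describe.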

Selection Sort, Bubble Sort, Sequential Search, String Matching, Depth-First Search and Breadth-First
Search, Closest-Pair and Convex-Hull Problems can be solved by Brute Force.

Pros (Advantages):

1. The brute force approach is a guaranteed way to find the correct solution by listing all the possible
solutions for the problem.

2. It is mainly used for solving simpler and small problems.

3. The brute force algorithm is a simple and straightforward solution to the problem, generally based
on the description of the problem and the definition of the concept involved.

Cons (Disadvantages):

1. Brute force algorithms are slow. This method relies more on the raw computing power of the
system than on good algorithm design.

2. It is an inefficient approach, as it requires examining each and every candidate solution.

Bubble Sort Algorithm

Bubble sort is a simple sorting algorithm. It is a comparison-based algorithm in which each pair of
adjacent elements is compared, and the elements are swapped if they are not in order. This
algorithm is not suitable for large data sets, as its average-case and worst-case complexities are
O(n²), where n is the number of items.

Bubble Sort Algorithm:

Following are the steps involved in bubble sort (for sorting a given array in ascending order):

1. Starting with the first element (index = 0), compare the current element with the
next element of the array.
2. If the current element is greater than the next element of the array, swap them.
3. If the current element is less than or equal to the next element, move to the next element.
Repeat from Step 1 until the array is sorted.
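The steps above can be sketched in Python; this is a minimal illustration (with an early-exit flag once a pass makes no swaps), not an optimized implementation:

```python
def bubble_sort(arr):
    """Sort arr in ascending order by repeatedly swapping adjacent
    out-of-order pairs (bubble sort)."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After pass i, the last i elements are already in their final places.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps in a full pass: the array is sorted
            break
    return arr
```

Calling `bubble_sort([13, 32, 26, 35, 10])` walks through exactly the passes traced in the worked example below.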
Example:

Working of Bubble Sort Algorithm:

Let the elements of the array be: 13, 32, 26, 35, 10.

First Pass

Sorting will start from the first two elements. Let us compare them to check which is greater.

Here, 32 is greater than 13 (32 > 13), so this pair is already in order. Now, compare 32 with 26.

Here, 26 is smaller than 32. So, swapping is required. After swapping, the array will look like –
13, 26, 32, 35, 10.

Now, compare 32 and 35.

Here, 35 is greater than 32. So, there is no swapping required as they are already sorted.

Now, the comparison will be in between 35 and 10.

Here, 10 is smaller than 35, so this pair is not in order and swapping is required. Now, we reach the
end of the array. After the first pass, the array will be – 13, 26, 32, 10, 35.

Now, move to the second iteration.


Second Pass

The same process will be followed for the second iteration.

Here, 10 is smaller than 32. So, swapping is required. After swapping, the array will be –
13, 26, 10, 32, 35.

Now, move to the third iteration.

Third Pass

The same process will be followed for third iteration.

Here, 10 is smaller than 26. So, swapping is required. After swapping, the array will be –
13, 10, 26, 32, 35.

Now, move to the fourth iteration.

Fourth pass

In the fourth pass, 13 and 10 are compared and swapped. After the fourth iteration, the array will
be – 10, 13, 26, 32, 35.

Hence, no further swapping is required, and the array is completely sorted.

Bubble sort complexity

Now, let's see the time complexity of bubble sort in the best case, average case, and worst case.

Time Complexity

 Best Case: O(n)

 Average Case: O(n2)

 Worst Case: O(n2)

Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted.
The best-case time complexity of bubble sort is O(n), provided the algorithm stops early when a
complete pass makes no swaps.

Average Case Complexity - It occurs when the array elements are in jumbled order, neither
properly ascending nor properly descending. The average-case time complexity of bubble sort is
O(n²).

Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse
order, i.e. you have to sort the elements in ascending order but they are given in descending order.
The worst-case time complexity of bubble sort is O(n²).

Space Complexity

The space complexity of bubble sort is O(1), because only a single extra variable is required for
swapping.

Properties of Bubble Sort Algorithm

Some of the important properties of bubble sort algorithm are-

 Bubble sort is a stable sorting algorithm.

 Bubble sort is an in-place sorting algorithm.

 The worst case time complexity of bubble sort algorithm is O(n2).

 The space complexity of bubble sort algorithm is O(1).

 Number of swaps in bubble sort = Number of inversion pairs present in the given array.

 Bubble sort is beneficial when array elements are less and the array is nearly sorted.
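The property that the number of swaps equals the number of inversion pairs can be checked with a short script; this is an illustrative sketch, and the helper names are my own:

```python
def count_inversions(arr):
    """Count pairs (i, j) with i < j and arr[i] > arr[j]."""
    return sum(
        1
        for i in range(len(arr))
        for j in range(i + 1, len(arr))
        if arr[i] > arr[j]
    )

def bubble_sort_swap_count(arr):
    """Bubble-sort a copy of arr and return the number of swaps performed."""
    a, swaps = list(arr), 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return swaps
```

For the example array 13, 32, 26, 35, 10, both counts come out to 5.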

Advantages of Bubble Sort Algorithm

Advantages:
 The primary advantage of the bubble sort is that it is popular and easy to implement.

 In the bubble sort, elements are swapped in place without using additional temporary
storage.

 The space requirement is at a minimum

Disadvantages of Bubble Sort Algorithm

Disadvantages:

 It does not work well for large unsorted lists; it requires more resources and ends up
taking much more time.

 It is only meant for academic purposes, not for practical implementations.

 It requires on the order of n² steps to sort an array.

Selection Sort Algorithm

Selection sort is a sorting algorithm that selects the smallest element from an unsorted list in each
iteration and places that element at the beginning of the unsorted list.

This algorithm is not suitable for large data sets, as its average-case and worst-case complexities
are O(n²), where n is the number of items.

Algorithm of Selection Sort:

Step 1 − Set MIN to location 0

Step 2 − Search the minimum element in the list

Step 3 − Swap with value at location MIN

Step 4 − Increment MIN to point to next element

Step 5 − Repeat until list is sorted
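The steps above can be sketched in Python as follows; a minimal illustration:

```python
def selection_sort(arr):
    """Sort arr in ascending order by repeatedly selecting the minimum of
    the unsorted part and swapping it to the front (selection sort)."""
    n = len(arr)
    for i in range(n - 1):
        min_idx = i                    # Step 1: set MIN to the current location
        for j in range(i + 1, n):      # Step 2: search the minimum in the rest
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Step 3: swap the minimum with the value at location MIN
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
        # Steps 4-5: the loop advances MIN and repeats until sorted
    return arr
```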

Working of Selection Sort

Now, let's see the working of the Selection sort Algorithm.

To understand the working of the Selection sort algorithm, let's take an unsorted array. It will be
easier to understand the Selection sort via an example.

Let the elements of array are –

Now, for the first position in the sorted array, the entire array is scanned sequentially.
At present, 12 is stored at the first position; after searching the entire array, it is found that 8 is the
smallest value.

So, swap 12 with 8. After the first iteration, 8 will appear at the first position in the sorted array.

For the second position, where 29 is stored presently, we again sequentially scan the rest of the
items of the unsorted array. After scanning, we find that 12 is the second-lowest element in the
array and should appear at the second position.

Now, swap 29 with 12. After the second iteration, 12 will appear at the second position in the sorted
array. So, after two iterations, the two smallest values are placed at the beginning in a sorted way.

The same process is applied to the rest of the array elements. Now, we are showing a pictorial
representation of the entire sorting process.
Now, the array is completely sorted.

Time Complexity

 Worst Case Complexity: O(n2)

If we want to sort in ascending order and the array is in descending order then, the worst case
occurs.

 Best Case Complexity: O(n2)

It occurs even when the array is already sorted, because selection sort still scans the entire
unsorted part in every iteration.

 Average Case Complexity: O(n2)

It occurs when the elements of the array are in jumbled order (neither ascending nor descending).

Space Complexity

The space complexity is O(1), because only a single extra variable (temp) is used for swapping.

Advantages of Selection Sort Algorithm

 Simple and easy to understand.

 Works well with small datasets.

Disadvantages of Selection Sort Algorithm

 Selection sort has a time complexity of O(n²) in the worst and average case.

 Does not work well on large datasets.

 Does not preserve the relative order of items with equal keys which means it is not stable.

Selection sort is generally used when-

 A small array is to be sorted

 Swapping cost doesn't matter

 It is compulsory to check all elements

Linear Search Algorithm

Linear search is also called the sequential search algorithm. It is the simplest searching algorithm.
In linear search, we simply traverse the list completely and match each element of the list with the
item whose location is to be found. If a match is found, then the location of the item is returned;
otherwise, the algorithm returns NULL.
It is widely used to search for an element in an unordered list, i.e., a list in which the items are not
sorted. The worst-case time complexity of linear search is O(n).

Figure: Linear Search Algorithm

How Does Linear Search Algorithm Work?

In Linear Search Algorithm,

 Every element is considered as a potential match for the key and checked for the same.

 If any element is found equal to the key, the search is successful and the index of that
element is returned.

 If no element is found equal to the key, the search yields “No match found”.

Algorithm of Linear Search

Linear Search ( Array A, Value x)

Step 1: Set i to 1

Step 2: if i > n then go to step 7

Step 3: if A[i] = x then go to step 6

Step 4: Set i to i + 1

Step 5: Go to Step 2

Step 6: Print Element x Found at index i and go to step 8

Step 7: Print element not found

Step 8: Exit
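The numbered steps above translate to the following Python sketch (0-indexed, and returning -1 instead of printing when the element is not found):

```python
def linear_search(arr, x):
    """Return the index of the first occurrence of x in arr, or -1 if absent."""
    for i, value in enumerate(arr):  # Steps 2, 4, 5: scan elements one by one
        if value == x:               # Step 3: compare the element with the key
            return i                 # Step 6: element found at index i
    return -1                        # Step 7: element not found
```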

Working of Linear search algorithm

For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key = 30

Step 1: Start from the first element (index 0) and compare key with each element (arr[i]).

 Comparing key with the first element arr[0]. Since they are not equal, the iterator moves to the
next element as a potential match.
Compare key with arr[0]

 Comparing key with the next element arr[1]. Since they are not equal, the iterator moves to the
next element as a potential match.

Compare key with arr[1]

Step 2: Now when comparing arr[2] with key, the value matches. So the Linear Search Algorithm will
yield a successful message and return the index of the element when key is found (here 2).

Compare key with arr[2]

Complexity Analysis of Linear Search:

Time Complexity:

 Best Case: In the best case, the key might be present at the first index. So the best case
complexity is O(1)
 Worst Case: In the worst case, the key might be present at the last index, i.e., at the opposite
end of the list from where the search started. So the worst-case complexity is O(N),
where N is the size of the list.

 Average Case: O(N)

Auxiliary Space: O(1), as no variable is used other than the one needed to iterate through
the list.

Advantages of Linear Search:

 Linear search can be used irrespective of whether the array is sorted or not. It can be
used on arrays of any data type.
 Does not require any additional memory.
 It is a well-suited algorithm for small datasets.

Drawbacks of Linear Search:

 Linear search has a time complexity of O(N), which in turn makes it slow for large datasets.
 Not suitable for large arrays.

When to use Linear Search?

 When we are dealing with a small dataset.


 When you are searching for a dataset stored in contiguous memory.

Decrease and Conquer

Decrease and conquer is a technique used to solve problems by reducing the size of the input data at
each step of the solution process. This technique is similar to divide-and-conquer, in that it breaks
down a problem into smaller sub problems, but the difference is that in decrease-and-conquer, the
size of the input data is reduced at each step. The technique is used when it’s easier to solve a
smaller version of the problem, and the solution to the smaller problem can be used to find the
solution to the original problem.

This approach can be implemented either top-down or bottom-up.

Top-down approach: It always leads to a recursive implementation of the problem.

Bottom-up approach: It is usually implemented iteratively, starting with a solution to the smallest
instance of the problem.

There are three major variations of decrease-and-conquer:

1. Decrease by a constant
2. Decrease by a constant factor
3. Variable size decrease

Decrease by a Constant : In this variation, the size of an instance is reduced by the same constant on
each iteration of the algorithm. Typically, this constant is equal to one , although other constant size
reductions do happen occasionally. Below are example problems :
 Insertion sort
 Graph search algorithms: DFS, BFS
 Topological sorting
 Algorithms for generating permutations, subsets

Decrease by a Constant Factor: This technique suggests reducing a problem instance by the same
constant factor on each iteration of the algorithm. In most applications, this constant factor is
equal to two; a reduction by a factor other than two is especially rare. Decrease-by-a-constant-
factor algorithms are very efficient, especially when the factor is greater than two, as in the
fake-coin problem. Below are example problems :

 Binary search
 Fake-coin problems
 Russian peasant multiplication
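Binary search is the canonical decrease-by-a-constant-factor example: each comparison discards half of the remaining search range. A minimal sketch:

```python
def binary_search(arr, x):
    """Return the index of x in the sorted list arr, or -1 if absent.
    Each iteration halves the range: decrease by a constant factor of 2."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] < x:
            lo = mid + 1   # discard the left half
        else:
            hi = mid - 1   # discard the right half
    return -1
```

Because the range shrinks by half each step, the search takes O(log n) comparisons, in contrast to the O(n) of linear search above.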

Variable-Size Decrease : In this variation, the size-reduction pattern varies from one iteration of
the algorithm to another. For example, in the problem of finding the gcd of two numbers via
gcd(m, n) = gcd(n, m mod n), the value of the second argument is always smaller on the right-hand
side than on the left-hand side, but it decreases neither by a constant nor by a constant factor.
Below are example problems :

 Computing median and selection problem.


 Interpolation Search
 Euclid’s algorithm
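Euclid's algorithm illustrates variable-size decrease: how much the problem shrinks at each step depends on the values themselves. A short sketch:

```python
def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n).
    The second argument shrinks by a data-dependent amount each step,
    neither by a constant nor by a constant factor."""
    while n != 0:
        m, n = n, m % n
    return m
```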

Advantages of Decrease and Conquer:

1. Simplicity: Decrease-and-conquer is often simpler to implement compared to other
techniques like dynamic programming or divide-and-conquer.
2. Efficient Algorithms: The technique often leads to efficient algorithms as the size of the input
data is reduced at each step, reducing the time and space complexity of the solution.
3. Problem-Specific: The technique is well-suited for specific problems where it’s easier to
solve a smaller version of the problem.

Disadvantages of Decrease and Conquer:

1. Problem-Specific: The technique is not applicable to all problems and may not be suitable for
more complex problems.
2. Implementation Complexity: The technique can be more complex to implement when
compared to other techniques like divide-and-conquer, and may require more careful
planning.

Insertion Sort

Insertion sort is a sorting algorithm that places an unsorted element at its suitable place in each
iteration. Insertion sort works similarly to the way we sort cards in our hand in a card game.

We assume that the first card is already sorted; then, we select an unsorted card. If the unsorted
card is greater than the card in hand, it is placed on the right; otherwise, it is placed to the left. In
the same way, the other unsorted cards are taken and put in their right place.

A similar approach is used by insertion sort.


It is not appropriate for large data sets, as the time complexity of insertion sort in the average case
and worst case is O(n²), where n is the number of items. Insertion sort is less efficient than other
sorting algorithms like heap sort, quick sort, and merge sort.

Algorithm of Insertion Sort:

Step 1 − If it is the first element, it is already sorted.

Step 2 − Pick next element

Step 3 − Compare with all elements in the sorted sub-list

Step 4 − Shift all the elements in the sorted sub-list that is greater than the value to be sorted

Step 5 − Insert the value

Step 6 − Repeat until list is sorted
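The steps above can be sketched in Python; a minimal illustration:

```python
def insertion_sort(arr):
    """Sort arr in ascending order by inserting each element into its
    correct position within the sorted prefix (insertion sort)."""
    for i in range(1, len(arr)):      # Step 2: pick the next element
        key = arr[i]
        j = i - 1
        # Steps 3-4: shift sorted elements greater than key one place right
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key              # Step 5: insert the value
    return arr
```

Note the inner loop stops as soon as the prefix element is not greater than the key, which is what makes the algorithm adaptive and stable.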

Working of Insertion sort Algorithm

Now, let's see the working of the insertion sort Algorithm.

To understand the working of the insertion sort algorithm, let's take an unsorted array. It will be
easier to understand the insertion sort via an example.

Let the elements of the array be: 12, 31, 25, 8, 32, 17.

Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending order. So, for now,
12 is stored in a sorted sub-array.

Now, move to the next two elements and compare them.


Here, 25 is smaller than 31, so 31 is not at its correct position. Now, swap 31 with 25. Along with
swapping, insertion sort will also check the moved element against all elements in the sorted
sub-array.

For now, the sorted sub-array has only one element, i.e. 12. Since 25 is greater than 12, the sorted
sub-array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that are
31 and 8.

Both 31 and 8 are not sorted. So, swap them.

After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.


Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31 and
32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.

17 is smaller than 32. So, swap them.

Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.

Time Complexity of Insertion Sort Algorithm

 Best Case: O(n)

It occurs when there is no sorting required, i.e. the array is already sorted. The best-case time
complexity of insertion sort is O(n).
 Average Case: O(n2)

It occurs when the array elements are in jumbled order that is not properly ascending and not
properly descending. The average case time complexity of insertion sort is O(n²).

 Worst Case: O(n2)

It occurs when the array elements are required to be sorted in reverse order, i.e. you have to sort
the elements in ascending order but they are given in descending order. The worst-case time
complexity of insertion sort is O(n²).

Space Complexity of Insertion Sort Algorithm

The space complexity of insertion sort is O(1), because only a single extra variable is required for
swapping.

Characteristics of Insertion Sort Algorithm

 This algorithm is one of the simplest algorithms, with a simple implementation

 Basically, insertion sort is efficient for small data values

 Insertion sort is adaptive in nature, i.e. it is appropriate for data sets that are already
partially sorted.

Advantages of Insertion Sort Algorithm

 It, like other quadratic sorting algorithms, is efficient for small data sets.
 It just necessitates a constant amount of O(1) extra memory space.
 It works well with data sets that have been sorted in a significant way.
 It does not affect the relative order of elements with the same key.

Disadvantages of Insertion Sort Algorithm

 Insertion sort is inefficient for more extensive data sets

 The insertion sort exhibits a worst-case time complexity of O(n²)

 It does not perform as well as other, more advanced sorting algorithms

***
