Notes on Divide and Conquer Algorithms
Topperworld.in
The Divide and Conquer strategy typically follows these three steps:
Divide:
• The original problem is divided into smaller, more manageable sub-
problems.
• This step is crucial for simplifying the problem and making it easier to
solve.
• The division process continues until the sub-problems become simple
enough to be solved directly.
Conquer:
• Each sub-problem is solved independently.
• This is often the recursive application of the same algorithm to each
sub-problem.
• The base case of the recursion is the point where the sub-problems
become simple enough to be solved directly, without further division.
©Topperworld
Design and Analysis of Algorithm
Combine:
• The solutions to the sub-problems are combined to obtain the solution
to the original problem.
• The combining step should be efficient and should not add significant
complexity to the overall algorithm.
The Divide and Conquer paradigm is widely used in many algorithms and
computational problems, such as sorting algorithms (e.g., Merge Sort,
QuickSort), searching algorithms (e.g., Binary Search), and numerical
problems such as Strassen's matrix multiplication and the Closest Pair of
Points.
Here's a brief overview of how Divide and Conquer is applied in a few
common algorithms:
❖ Merge Sort:
• Divide: The array is divided into two halves.
• Conquer: Recursively sort each half.
• Combine: Merge the sorted halves to produce the final sorted array.
The array is divided until the subarrays reach unit length; an array of unit
length can no longer be divided and is always sorted.
These sorted subarrays are then merged together pairwise, giving bigger
sorted subarrays.
This merging process continues until the full sorted array is built from the
smaller subarrays.
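The merging levels just described are what give merge sort its O(n log n) running time; a sketch of the standard recurrence analysis:

```latex
% Two half-size subproblems plus a linear-time merge:
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + cn, \qquad T(1) = c.
% The recursion tree has \log_2 n levels, and each level does cn total work, so
T(n) = cn\log_2 n + cn = O(n \log n).
```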
The following diagram shows the complete merge sort process for an
example array {38, 27, 43, 10}.
#include <iostream>
#include <vector>

// Merge two sorted halves (left, right) back into arr.
void merge(std::vector<int>& arr, const std::vector<int>& left,
           const std::vector<int>& right) {
    std::size_t i = 0, j = 0, k = 0;
    while (i < left.size() && j < right.size()) {
        if (left[i] <= right[j]) {
            arr[k++] = left[i++];
        } else {
            arr[k++] = right[j++];
        }
    }
    while (i < left.size())  arr[k++] = left[i++];   // Copy any leftovers.
    while (j < right.size()) arr[k++] = right[j++];
}

void merge_sort(std::vector<int>& arr) {
    if (arr.size() <= 1) {
        return;  // Base case: arrays of length 0 or 1 are already sorted.
    }
    // Divide
    std::size_t mid = arr.size() / 2;
    std::vector<int> left(arr.begin(), arr.begin() + mid);
    std::vector<int> right(arr.begin() + mid, arr.end());
    merge_sort(left);
    merge_sort(right);
    // Combine (Merge)
    merge(arr, left, right);
}

int main() {
    std::vector<int> arr = {38, 27, 43, 10};
    merge_sort(arr);
    for (int x : arr) std::cout << x << ' ';
    std::cout << std::endl;
    return 0;
}
Parallelizable:
• Merge sort is naturally parallelizable: the two recursive calls operate on
disjoint halves of the array, so they can easily be run on multiple
processors or threads.
❖ QuickSort:
• Divide: Choose a pivot element and partition the array into two sub-
arrays - elements less than the pivot and elements greater than the
pivot.
• Conquer: Recursively sort the sub-arrays.
• Combine: No explicit combining step is needed; the sorting is done in
place by rearranging the elements.
Algorithm:
Partition Algorithm:
The partition algorithm rearranges the sub-array in place, so that elements
smaller than the pivot end up before it and larger elements after it.
Let us understand the working of partition and the Quick Sort algorithm with
the help of the following example:
Consider: arr[] = {10, 80, 30, 90, 40}.
Compare 10 with the pivot and, as it is less than the pivot, arrange it accordingly.
Illustration of Quicksort:
✓ As the partition process is applied recursively, it keeps placing each
pivot in its final position in the sorted array.
✓ Repeatedly putting pivots in their final positions makes the whole array
sorted.
Follow the images below to understand how the recursive application of
the partition algorithm sorts the array.
Initial partition on the main array:
Quicksort Complexity
Now, let's see the time complexity of quicksort in the best case, average case,
and worst case, along with its space complexity.
1. Time Complexity
• Best case = O(n log n)
• Average case = O(n log n)
• Worst case = O(n²) — this happens when the chosen pivot is always the
smallest or largest element (e.g., on an already-sorted array), giving the
recurrence T(n) = T(n−1) + n.
2. Space Complexity
• O(log n) on average and O(n) in the worst case, for the recursion stack;
the sorting itself is done in place with no extra arrays.
#include <stdio.h>

/* Lomuto partition: uses a[end] as the pivot and places it in its final
   position, returning that position. */
int partition(int a[], int start, int end)
{
    int pivot = a[end];
    int i = start - 1;
    for (int j = start; j < end; j++) {
        if (a[j] < pivot) {
            i++;
            int t = a[i];
            a[i] = a[j];
            a[j] = t;
        }
    }
    int t = a[i + 1];
    a[i + 1] = a[end];
    a[end] = t;
    return (i + 1);
}

void quick(int a[], int start, int end) /* a[] = array to be sorted, start =
starting index, end = ending index */
{
    if (start < end) {
        int p = partition(a, start, end);
        quick(a, start, p - 1);
        quick(a, p + 1, end);
    }
}

void printArr(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
}

int main()
{
    int a[] = {24, 9, 29, 14, 19, 27};
    int n = sizeof(a) / sizeof(a[0]);
    printArr(a, n);
    quick(a, 0, n - 1);
    printArr(a, n);
    return 0;
}
OUTPUT:
❖ Binary Search:
• Divide: Compare the target value with the middle element of the
sorted array.
• Conquer: If the target is equal to the middle element, the search is
complete. If the target is less than the middle element, search the left
half; otherwise, search the right half.
• Combine: No explicit combine step; the search space is reduced until
the target is found or the sub-array becomes empty.
Compare the middle element of the search space with the key.
• If the key matches the middle element, the process terminates.
• Otherwise, choose which half will be used as the next search space:
• If the key is smaller than the middle element, the left half is searched
next.
• If the key is larger than the middle element, the right half is searched
next.
• This process continues until the key is found or the search space is
exhausted.
First Step:
✓ Calculate the mid index and compare the mid element with the key.
✓ If the key is less than the mid element, move the search space to the left;
if it is greater than the mid, move the search space to the right.
✓ Here the key (i.e., 23) is greater than the current mid element (i.e., 16),
so the search space moves to the right.
In the next step, the key is less than the current mid element 56, so the
search space moves to the left.
Second Step:
• If the key matches the value of the mid element, the element is found
and the search stops.
#include <iostream>

// Iterative binary search: returns the index of target in the sorted
// array arr of length n, or -1 if target is not present.
int binary_search(const int arr[], int n, int target) {
    int low = 0;
    int high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // Avoids overflow of (low + high).
        if (arr[mid] == target) {
            return mid;
        } else if (arr[mid] < target) {
            low = mid + 1;                  // Search the right half.
        } else {
            high = mid - 1;                 // Search the left half.
        }
    }
    return -1;
}

int main() {
    int arr[] = {2, 5, 8, 12, 16, 23, 38, 42, 55, 67};
    int n = sizeof(arr) / sizeof(arr[0]);
    int target = 23;
    int result = binary_search(arr, n, target);
    if (result != -1) {
        std::cout << "Element " << target << " found at index " << result << std::endl;
    } else {
        std::cout << "Element " << target << " not found in the array." << std::endl;
    }
    return 0;
}
OUTPUT:
Element 23 found at index 5
Time Complexity:
• Best Case: O(1)
• Average Case: O(log N)
• Worst Case: O(log N)
Auxiliary Space: O(1) for the iterative version; if the recursive call stack is
considered, the auxiliary space is O(log N).
✓ Binary search requires random access: the data must be stored in a
structure (such as an array in contiguous memory) that allows jumping
to the middle element in constant time.
✓ Binary search requires that the elements be comparable (able to be
ordered) and, crucially, already sorted.
❖ Strassen's Matrix Multiplication:
The conventional algorithm for multiplying a p × q matrix X by a q × r
matrix Y uses three nested loops:
for i = 1 to p do
   for j = 1 to r do
      Z[i,j] := 0
      for k = 1 to q do
         Z[i,j] := Z[i,j] + X[i,k] × Y[k,j]
Strassen's algorithm reduces the number of recursive multiplications from
eight to seven. Partition X into blocks [[A, B], [C, D]] and Y into
[[E, F], [G, H]], and compute:
M1 := (A+C)×(E+F)
M2 := (B+D)×(G+H)
M3 := (A−D)×(E+H)
M4 := A×(F−H)
M5 := (C+D)×E
M6 := (A+B)×H
M7 := D×(G−E)
Then,
I := M2+M3−M6−M7
J := M4+M6
K := M5+M7
L := M1−M3−M4−M5
where the product is Z = [[I, J], [K, L]].
Analysis:
With seven half-size multiplications and a constant number of block
additions, Strassen's recurrence is T(n) = 7T(n/2) + O(n²), which solves to
O(n^log₂7) ≈ O(n^2.81), compared with O(n³) for the conventional method.
#include<stdio.h>

int main(){
    int x[2][2] = { {1, 2}, {3, 4} };   /* example input matrices */
    int y[2][2] = { {5, 6}, {7, 8} };
    int z[2][2];
    int m1, m2, m3, m4, m5, m6, m7;
    int i, j;

    printf("The first matrix is:");
    for (i = 0; i < 2; i++) {
        printf("\n");
        for (j = 0; j < 2; j++)
            printf("%d\t", x[i][j]);
    }
    printf("\nThe second matrix is:");
    for (i = 0; i < 2; i++) {
        printf("\n");
        for (j = 0; j < 2; j++)
            printf("%d\t", y[i][j]);
    }

    /* Strassen's seven products (common textbook numbering, which
       matches the z[][] combinations below). */
    m1 = (x[0][0] + x[1][1]) * (y[0][0] + y[1][1]);
    m2 = (x[1][0] + x[1][1]) * y[0][0];
    m3 = x[0][0] * (y[0][1] - y[1][1]);
    m4 = x[1][1] * (y[1][0] - y[0][0]);
    m5 = (x[0][0] + x[0][1]) * y[1][1];
    m6 = (x[1][0] - x[0][0]) * (y[0][0] + y[0][1]);
    m7 = (x[0][1] - x[1][1]) * (y[1][0] + y[1][1]);

    z[0][0] = m1 + m4 - m5 + m7;
    z[0][1] = m3 + m5;
    z[1][0] = m2 + m4;
    z[1][1] = m1 - m2 + m3 + m6;

    printf("\nThe product is:");
    for (i = 0; i < 2; i++) {
        printf("\n");
        for (j = 0; j < 2; j++)
            printf("%d\t", z[i][j]);
    }
    return 0;
}
OUTPUT:
❖ Closest Pair of Points:
Let's walk through an example of the Closest Pair of Points problem
using the Divide and Conquer algorithm.
Consider the following set of points in a 2D plane:
P={(1,2),(4,6),(7,8),(9,5),(12,2),(15,14),(17,6),(19,9)}
Step 1:
✓ Sort Points Based on x-coordinates.
✓ Sort the points based on their x-coordinates:
P={(1,2),(4,6),(7,8),(9,5),(12,2),(15,14),(17,6),(19,9)}
Step 2:
✓ Divide the Set into Two Halves
✓ Divide the sorted set into two equal halves along the vertical line
passing through the median x-coordinate:
P left = {(1,2),(4,6),(7,8),(9,5)}
P right = {(12,2),(15,14),(17,6),(19,9)}
Step 3:
✓ Recursively Find the Closest Pair in Each Half
✓ Apply the algorithm to P left and P right to find the closest pair within
each half.
Step 4:
✓ Merge the Two Halves
✓ Determine the minimum distance between pairs that have one point in
each half.
✓ This involves checking a strip of points around the median line.
Step 5:
✓ Return the Closest Pair
✓ Return the pair with the smallest distance found in the previous steps.
Visualization:
• For the sake of simplicity, let's assume that we have found the closest
pair in each half and are now considering the strip of points around the
median line.
Consider the strip:
Strip={(7,8),(9,5),(12,2),(15,14)}
✓ We need to find the closest pair within this strip.
✓ This can be done efficiently: after sorting the strip by y-coordinate,
each point only needs to be compared with the next 7 points, because at
most 8 points that are pairwise at least d apart can fit in a d × 2d
rectangle of the strip (d being the best distance found so far).
After checking the cross pairs in the strip, we find that the closest pair
with one point in each half is (9,5) and (12,2), with a distance of
√((12−9)² + (2−5)²) = √18 ≈ 4.24.
✓ Now, we compare this distance with the distances found in the left and
right halves.
✓ If the distance in the strip is smaller, it becomes the final result.
✓ Otherwise, we return the minimum distance found in the left or right
halves.
✓ In this example, the recursive calls on the two halves actually return a
smaller distance, √13 ≈ 3.61 (e.g., between (4,6) and (7,8)), so the
minimum comes from the halves rather than the strip.
Final Result:
The closest pair of the set is at distance √13 ≈ 3.61, attained e.g. by the
pair (4,6) and (7,8).
Time Complexity:
• The time complexity of the Closest Pair of Points algorithm is
O(n log n), where n is the number of points.
Space Complexity:
• The space complexity of the Closest Pair of Points algorithm is O(n).