
Divide and Conquer Algorithm

Sushma Prajapati
Assistant Professor
CO Dept
CKPCET, Surat
Email: sushma.prajapati@ckpcet.ac.in
Outline
● Introduction
● Recurrence and different methods to solve recurrence
● Problem Solving using divide and conquer algorithm
○ Multiplying large Integers Problem
○ Binary Search
○ Max-Min problem
○ Sorting (Merge Sort, Quick Sort)
○ Matrix Multiplication
○ Exponential
Introduction
● This technique can be divided into the
following three parts:
○ Divide: break the given problem into smaller
sub-problems.
○ Conquer: solve each sub-problem by calling it
recursively until it is small enough to solve directly.
○ Combine: combine the sub-problem solutions so that we
get the solution of the original problem.
General Divide and Conquer Algorithm
Algorithm DAndC(P)
If Small(P) then
    Return S(P);
Else
{
    Divide P into smaller instances P1, P2, …, Pk, k ≥ 1;
    Apply DAndC to each of these sub-problems;
    Return Combine(DAndC(P1), DAndC(P2), …, DAndC(Pk));
}

Time Complexity
Generally the time complexity of this approach is given by the recurrence relation
T(n) = T(1)           ; n = 1
     = aT(n/b) + f(n) ; n > 1
Recurrence and different methods to solve
recurrence
Recurrence
In mathematics, a recurrence is a function that is defined in terms of one or more base
cases, and itself, with smaller arguments.

Examples: T(n) = T(n−1) + 1, T(0) = 1;   T(n) = 2T(n/2) + n, T(1) = 1
Different methods to solve recurrence
● Back Substitution
● Recursion tree
● Master’s theorem
● Change variable
Back substitution method
● It is also known as the iterative method, substitution method or iterative substitution
method.
● In this technique, the recurrence is repeatedly substituted into itself (expanded step
by step) until a pattern emerges; the base case is then applied to obtain a
closed-form solution.
Back substitution method (Example)
Consider the algorithm:

void fun(int n)
{
    if (n > 0)
    {
        printf("%d", n);
        fun(n - 1);
    }
}

Recurrence equation will be:

T(n) = 1            if n = 0
     = T(n-1) + 1   if n > 0

Will solve it on board……….
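The C routine above can be sketched in Python to make the unrolling concrete (a minimal illustration; the list `out` is an addition so the output is easy to inspect rather than printed):

```python
def fun(n, out=None):
    # Mirrors the C routine: emit n, then recurse on n - 1.
    # `out` collects the emitted values instead of printing them.
    if out is None:
        out = []
    if n > 0:
        out.append(n)
        fun(n - 1, out)
    return out

print(fun(5))  # -> [5, 4, 3, 2, 1]
```

Back-substituting the recurrence: T(n) = T(n−1) + 1 = T(n−2) + 2 = … = T(0) + n = n + 1, i.e. O(n).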
Master’s Theorem
For a recurrence of the form T(n) = aT(n/b) + f(n), with a ≥ 1 and b > 1:
● Case 1: if f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = θ(n^(log_b a))
● Case 2: if f(n) = θ(n^(log_b a)), then T(n) = θ(n^(log_b a) · lg n)
● Case 3: if f(n) = Ω(n^(log_b a + ε)) for some ε > 0 (and a·f(n/b) ≤ c·f(n) for some c < 1), then T(n) = θ(f(n))
Recursion tree method
● A recursion tree is a tree where each node represents the cost of a certain recursive
sub problem. Then you can sum up the numbers in each node to get the cost of the
entire algorithm.
Recursion tree method: Example
Recurrence: T(n) = 3T(n/4) + cn²
Recursion tree method: Example (Contd…)
Recurrence: T(n) = 3T(n/4) + cn²
The per-level costs form a decreasing geometric series, so the total is T(n) = O(n²).
Change variable method
● Sometimes, a little algebraic change can make an unknown recurrence similar to
one you have seen before.

Consider the recurrence:

T(n) = 2T(√n) + lg n

● We can simplify this recurrence with the change of variable method.

● Let’s rewrite the equation by substituting m = lg n (so n = 2^m and √n = 2^(m/2)):

T(2^m) = 2T(2^(m/2)) + m

● We can now rename S(m) = T(2^m) to produce the new recurrence

● S(m) = 2S(m/2) + m
● If we solve it using master’s theorem we get,
○ S(m) = O(m lg m), so T(n) = O(lg n lg lg n).
Problem Solving using divide and conquer
algorithm
Multiplying large Integers Problem

What’s the best way to multiply two numbers?


Multiplication : The Problem

Input: 2 non-negative numbers, x and y (n digits each)


Output: the product x · y

   5678
 x 1234
 -------
 7006652
Grade School Multiplication

     45
   x 63
   ----
    135
   2700
   ----
   2835
Grade School Multiplication

Algorithm description (informal): compute partial products (using multiplication
& “carries” for digit overflows), and add all (properly shifted) partial products
together.

     45
   x 63
   ----
    135
   2700
   ----
   2835

This algorithm takes O(n²) time.
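The grade-school procedure can be sketched as follows (a minimal illustration; `grade_school` is a hypothetical helper name, and Python's built-in integers handle the carries):

```python
def grade_school(x, y):
    # O(n^2) schoolbook multiplication: one shifted partial product
    # per digit of y, all summed together.
    total = 0
    for shift, d in enumerate(reversed(str(y))):  # least significant digit first
        partial = int(d) * x             # partial product for this digit
        total += partial * 10 ** shift   # shift, then accumulate
    return total

print(grade_school(45, 63))      # -> 2835
print(grade_school(5678, 1234))  # -> 7006652
```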
Multiplication of large integers
(divide-conquer recursive algorithm)
● a = a1a0 and b = b1b0 (each number split into two halves of n/2 digits)
○ c = a * b
○   = (a1·10^(n/2) + a0) * (b1·10^(n/2) + b0)
●   = (a1 * b1)·10^n + (a1 * b0 + a0 * b1)·10^(n/2) + (a0 * b0)

● For instance: a = 123456, b = 117933:

● Then c = a * b = (123·10³ + 456) * (117·10³ + 933)
○ = (123 * 117)·10⁶ + (123 * 933 + 456 * 117)·10³ + (456 * 933)
Multiplication of large integers
(divide-conquer recursive algorithm)
● So, we can say that if a, b are n-digit integers with a = a1a0 and b = b1b0,
● then a1, a0, b1, b0 are n/2-digit integers, and
● c = a * b

  = (a1 * b1)·10^n + (a1 * b0 + a0 * b1)·10^(n/2) + (a0 * b0)

● Here we need 4 multiplications of n/2-digit numbers in total, giving
T(n) = 4T(n/2) + O(n) = O(n²).

● Wait, our grade-school algorithm was already O(n²)!
● Is Divide-and-Conquer really that useless?
KARATSUBA Integer multiplication
● a = a1a0 and b = b1b0
● c = a * b

  = (a1 * b1)·10^n + (a1 * b0 + a0 * b1)·10^(n/2) + (a0 * b0)

  = c2·10^n + c1·10^(n/2) + c0,

where

● c2 = a1 * b1 is the product of their first halves

● c0 = a0 * b0 is the product of their second halves
● c1 = (a1 + a0) * (b1 + b0) – (c2 + c0) is the product of the sum of the a’s halves and
the sum of the b’s halves, minus the sum of c2 and c0.
KARATSUBA Integer multiplication
c = c2·10^n + c1·10^(n/2) + c0,

where

c2 = a1 * b1

c0 = a0 * b0

c1 = (a1 + a0) * (b1 + b0) – (c2 + c0)

Multiplication of n-digit numbers now requires only three multiplications of n/2-digit
numbers.

KARATSUBA Integer multiplication: Time
Analysis
T(n) = 3T(n/2) + O(n) for n > 1, T(1) = 1

By applying master’s theorem

T(n) = O(n^(log₂ 3)) ≈ O(n^1.585)
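The three-multiplication scheme above can be sketched in Python for non-negative integers (a minimal illustration; the operands are split with `divmod` at half the digit count):

```python
def karatsuba(x, y):
    # Karatsuba: 3 recursive multiplications of half-size numbers,
    # T(n) = 3T(n/2) + O(n) = O(n^1.585).
    if x < 10 or y < 10:                  # base case: a single-digit factor
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    a1, a0 = divmod(x, p)                 # x = a1*10^half + a0
    b1, b0 = divmod(y, p)                 # y = b1*10^half + b0
    c2 = karatsuba(a1, b1)
    c0 = karatsuba(a0, b0)
    c1 = karatsuba(a1 + a0, b1 + b0) - (c2 + c0)
    return c2 * p * p + c1 * p + c0

print(karatsuba(123456, 117933))  # same result as 123456 * 117933
```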
Binary Search
Binary Search : Introduction
● Binary Search is one of the fastest searching algorithms.
● It is used for finding the location of an element in a linear array.
● It works on the principle of divide and conquer technique.
● Binary Search Algorithm can be applied only on Sorted arrays.
● To apply binary search on an unsorted array,
○ First, sort the array using some sorting technique.
○ Then, use binary search algorithm.
Binary Search : Algorithm
● Iterative Version
● Input Parameters
○ n – no. of elements
○ Array A[1:n] in increasing order, n > 0
○ key – element to be searched
● Output
○ If key is present in A, it returns the index of the found element
○ If key is not present, it returns 0

BinSearchI(A, n, key)
low = 1;
high = n;
While (low ≤ high)
{
    mid = (low + high)/2;
    If (key < A[mid]) then
        high = mid - 1;
    Else if (key > A[mid]) then
        low = mid + 1;
    Else
        Return mid;
}
Return 0;
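A runnable sketch of the iterative algorithm (0-based Python indices; the slides use 1-based indexing and return 0 on failure, here -1 is returned instead):

```python
def bin_search(a, key):
    # Iterative binary search on a sorted list; O(log n).
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if key < a[mid]:
            high = mid - 1       # search left half
        elif key > a[mid]:
            low = mid + 1        # search right half
        else:
            return mid           # found
    return -1                    # not present

print(bin_search([2, 5, 8, 12, 16, 23, 38], 23))  # -> 5
```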
Binary Search: Example
Binary Search : Time Analysis
The recurrence relation can be written as,

T(n) = T(n/2) + θ(1);  T(1) = θ(1)

So by applying master’s theorem we get θ(log₂ n)


Max-Min problem
Max-Min problem : Naive Approach
● The naïve method is a basic method to solve any problem. In this method, the
maximum and minimum numbers are found separately.
● To find the maximum and minimum numbers, the straightforward algorithm
beside can be used (x holds the minimum, y the maximum).

Algorithm: Max-Min-Element(A[])
1. x = A[1]; y = A[1]
2. For i = 2 to n
3.     If A[i] > y then y = A[i]
4.     If A[i] < x then x = A[i]
5. End for
6. Return (x, y)
Max-Min problem : Naive Approach Analysis
● The number of comparisons in the naive method is 2n - 2.
● The number of comparisons can be reduced using the divide and conquer
approach.
Max-Min problem : D & C Approach
● In this approach, the array is divided into two halves. Then, using the recursive
approach, the maximum and minimum numbers in each half are found.
● Later, return the maximum of the two maxima of each half and the minimum of
the two minima of each half.

Algorithm: minmax(low, high)
1. if high - low = 1 then
2.     if A[low] < A[high] then
           return (A[low], A[high])
3.     else return (A[high], A[low])
4. end if
5. else
6.     mid = (low + high)/2
7.     (x1, y1) = minmax(low, mid)
8.     (x2, y2) = minmax(mid+1, high)
9.     x = min{x1, x2}
10.    y = max{y1, y2}
11.    return (x, y)
12. end if
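The algorithm can be sketched in Python (0-based indices; a single-element base case is added, which the recursion needs when a half has odd length):

```python
def minmax(a, low, high):
    # Returns (minimum, maximum) of a[low..high] by divide and conquer.
    if low == high:                    # one element: no comparison
        return a[low], a[low]
    if high - low == 1:                # two elements: one comparison
        return (a[low], a[high]) if a[low] < a[high] else (a[high], a[low])
    mid = (low + high) // 2
    x1, y1 = minmax(a, low, mid)       # min/max of left half
    x2, y2 = minmax(a, mid + 1, high)  # min/max of right half
    return min(x1, x2), max(y1, y2)    # two comparisons to combine

data = [8, 2, 6, 3, 9, 1, 7, 5, 4, 2, 8]
print(minmax(data, 0, len(data) - 1))  # -> (1, 9)
```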
Max-Min problem : Example (Divide)
{8,2,6,3,9,1,7,5,4,2,8}

{8,2,6,3,9}               {1,7,5,4,2,8}

{8,2}  {6,3,9}            {1,7,5}  {4,2,8}

{8,2}  {6} {3,9}          {1} {7,5}  {4} {2,8}
Max-Min problem : Example (Solve Small Instances & Combine)
(min, max) pairs, combined bottom-up:

(2,8)  (6,6) (3,9)        (1,1) (5,7)  (4,4) (2,8)

(2,8)  (3,9)              (1,7)  (2,8)

(2,9)                     (1,8)

          (1,9)

Max-Min problem : Time Analysis
The recurrence relation for this problem can be written as,

T(n) = 2T(n/2) + 2 for n > 2;  T(2) = 1;  T(1) = 0

Solving it gives 3n/2 − 2 comparisons, so the time complexity is

θ(n)
Sorting(Merge Sort, Quick Sort)
Merge Sort : Introduction
● Sorting Problem: Sort a sequence of n elements into non-decreasing order.

● Divide: Divide the n-element sequence to be sorted into two subsequences of n/2
elements each
● Conquer: Sort the two subsequences recursively using merge sort.
● Combine: Merge the two sorted subsequences to produce the sorted answer.
Merge Sort : Algorithm
MERGE-SORT(A, p, r)
Sorts A[p..r].
1 if p < r then
2     q ← ⌊(p + r)/2⌋
3     MERGE-SORT(A, p, q); MERGE-SORT(A, q + 1, r)
4     MERGE(A, p, q, r)

MERGE(A, p, q, r)
Merges A[p..q] and A[q + 1..r].
1 n1 ← q − p + 1; n2 ← r − q
2 create arrays L[1..n1 + 1] and R[1..n2 + 1]
3 for i ← 1 to n1 do L[i] ← A[p + i − 1]
4 for j ← 1 to n2 do R[j] ← A[q + j]
5 L[n1 + 1] ← ∞; R[n2 + 1] ← ∞
6 i ← 1, j ← 1
7 for k ← p to r do
8     if L[i] ≤ R[j] then A[k] ← L[i]; i++
9     else A[k] ← R[j]; j++
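The pseudocode translates to Python as follows (0-based inclusive bounds; `float("inf")` plays the role of the ∞ sentinels):

```python
def merge_sort(a, p, r):
    # Sorts a[p..r] in place.
    if p < r:
        q = (p + r) // 2
        merge_sort(a, p, q)
        merge_sort(a, q + 1, r)
        merge(a, p, q, r)

def merge(a, p, q, r):
    # Merge sorted runs a[p..q] and a[q+1..r] using sentinels.
    left = a[p:q + 1] + [float("inf")]
    right = a[q + 1:r + 1] + [float("inf")]
    i = j = 0
    for k in range(p, r + 1):
        if left[i] <= right[j]:
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1

data = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(data, 0, len(data) - 1)
print(data)  # -> [1, 2, 2, 3, 4, 5, 6, 7]
```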
Merge Sort : Example
Merge Sort : Analysis
● Running time T(n) of Merge Sort:
● Divide: computing the middle takes θ(1)
● Conquer: solving 2 subproblems takes 2T(n/2)
● Combine: merging n elements takes θ(n)
● Total:
○ T(n) = θ(1) if n = 1
○ T(n) = 2T(n/2) + θ(n) if n > 1
● By applying master’s theorem we get the complexity as
● T(n) = θ(n lg n)
Quick Sort : Introduction
● Divide: Partition (separate) the array A[p..r] into two (possibly empty) subarrays
A[p..q–1] and A[q+1..r].
○ Each element in A[p..q–1] ≤ A[q].
○ A[q] ≤ each element in A[q+1..r].
○ Index q is computed as part of the partitioning procedure.
● Conquer: Sort the two subarrays by recursive calls to quicksort.
● Combine: The subarrays are sorted in place – no work is needed to combine them.
Quick Sort : Algorithm
Algorithm QUICKSORT(A, p, r)
{
    if p < r
    {
        q = PARTITION(A, p, r)
        QUICKSORT(A, p, q - 1)
        QUICKSORT(A, q + 1, r)
    }
}

PARTITION(A, p, r)
{
    x = A[r]
    i = p - 1
    for j = p to r - 1
    {
        if A[j] <= x
        {
            i = i + 1
            exchange A[i] with A[j]
        }
    }
    exchange A[i + 1] with A[r]
    return i + 1
}
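A runnable Python version of the same scheme (0-based inclusive bounds, Lomuto partition with the last element as pivot):

```python
def quicksort(a, p, r):
    # Sorts a[p..r] in place.
    if p < r:
        q = partition(a, p, r)       # pivot lands at index q
        quicksort(a, p, q - 1)
        quicksort(a, q + 1, r)

def partition(a, p, r):
    # Lomuto partition: pivot = a[r]; elements <= pivot move left.
    x = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]  # place pivot between the regions
    return i + 1

data = [2, 8, 7, 1, 3, 5, 6, 4]
quicksort(data, 0, len(data) - 1)
print(data)  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```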
Quick Sort : Partition Procedure
Select the last element A[r] in the subarray A[p..r] as the pivot – the element around
which to partition.

As the procedure executes, the array is partitioned into the following regions.

A[p..i] — All entries in this region are ≤ pivot.

A[i+1..j – 1] — All entries in this region are > pivot.

A[j..r – 1] — Entries not yet examined.

A[r] = pivot.
Quick Sort : Example(Partitioning)
Quick Sort : Example(Partitioning)
Quick Sort : Time Analysis
Worst Case : Unbalanced Partitioning
● The worst-case behavior for quicksort occurs
when the partitioning routine produces one
region with n - 1 elements and one with only 1
element.
● Since partitioning costs θ(n) time and T(1) =
θ(1), the recurrence for the running time is
● T(n) = T(n - 1) + θ(n).
● Applying the substitution method here, we
get the time complexity as,
● θ(n²)
Best Case : Balanced Partitioning
If the partitioning procedure produces two regions
of size n/2, quicksort runs much faster. The
recurrence is then

T(n) = 2T(n/2) + θ(n),

Applying master’s theorem we get the time
complexity as,

θ(n log n)

The average case also behaves the same as the best case.

Matrix Multiplication
Matrix Multiplication : Naive Approach
Algorithm Mat_Mul(a, b, c, n)
{
    for i = 1 to n do
        for j = 1 to n do
        {
            c[i,j] = 0;
            for k = 1 to n do
                c[i,j] = c[i,j] + a[i,k] * b[k,j]
        }
}

The time complexity of this algorithm turns out to be O(n·n·n) = O(n³).
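The triple loop in Python (a minimal sketch using lists of lists rather than a preallocated output parameter):

```python
def mat_mul(a, b):
    # Naive O(n^3) multiplication of two n x n matrices.
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
    return c

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]
```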
Matrix Multiplication : D&C Approach
1. Divide matrices A and B into 4 sub-matrices of size N/2 x N/2 as shown in the figure.
2. Calculate the following values recursively: ae + bg, af + bh, ce + dg and cf + dh.

● In this method, we do 8 multiplications for matrices of size N/2 x N/2 and 4 additions.
Addition of two matrices takes O(N²) time. So the recurrence relation can be written as,
● T(N) = 8T(N/2) + O(N²)
● Applying Master's Theorem, the time complexity of the above method is O(N³),
● which is unfortunately the same as the naive method.

Better way??
Strassen’s Matrix Multiplication
● In the previous divide and conquer method,
the main component of the high time
complexity is the 8 recursive calls.
● Strassen’s method is similar to the above simple
divide and conquer method in the sense that
this method also divides the matrices into
sub-matrices of size N/2 x N/2 as shown in the
above diagram, but in Strassen’s method the
result is calculated using only 7 recursive products.
With A = [a b; c d] and B = [e f; g h], the standard formulae are:
○ p1 = a(f − h)        p5 = (a + d)(e + h)
○ p2 = (a + b)h        p6 = (b − d)(g + h)
○ p3 = (c + d)e        p7 = (a − c)(e + f)
○ p4 = d(g − e)
○ Result: ae + bg = p5 + p4 − p2 + p6;  af + bh = p1 + p2;
   ce + dg = p3 + p4;  cf + dh = p5 + p1 − p3 − p7
Strassen’s Matrix Multiplication: Time Analysis
Here only 7 multiplications are required, so the recurrence relation can be written as,

T(N) = 7T(N/2) + O(N²)

From Master's Theorem, the time complexity of the above method is

O(N^(log₂ 7)), which is approximately O(N^2.8074)
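A minimal Strassen sketch for N a power of 2, using plain lists (an illustration only; a practical implementation would fall back to the naive method below a cutoff size):

```python
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    # 7 recursive products: T(N) = 7T(N/2) + O(N^2) = O(N^2.81).
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # split A into quadrants a, b, c, d and B into e, f, g, hh
    a = [r[:h] for r in A[:h]]; b = [r[h:] for r in A[:h]]
    c = [r[:h] for r in A[h:]]; d = [r[h:] for r in A[h:]]
    e = [r[:h] for r in B[:h]]; f = [r[h:] for r in B[:h]]
    g = [r[:h] for r in B[h:]]; hh = [r[h:] for r in B[h:]]
    p1 = strassen(a, sub(f, hh)); p2 = strassen(add(a, b), hh)
    p3 = strassen(add(c, d), e);  p4 = strassen(d, sub(g, e))
    p5 = strassen(add(a, d), add(e, hh))
    p6 = strassen(sub(b, d), add(g, hh))
    p7 = strassen(sub(a, c), add(e, f))
    c11 = add(sub(add(p5, p4), p2), p6)   # = ae + bg
    c12 = add(p1, p2)                     # = af + bh
    c21 = add(p3, p4)                     # = ce + dg
    c22 = sub(sub(add(p5, p1), p3), p7)   # = cf + dh
    return [r1 + r2 for r1, r2 in zip(c11, c12)] + \
           [r1 + r2 for r1, r2 in zip(c21, c22)]

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]
```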


Exponential Problem : Naive Approach
Problem:

Exponentiation is a process of repeated multiplication. Given an integer x and a
non-negative integer n, we have to calculate xⁿ.

Algo power-Naive(x, n)
{
    e = 1
    For i = 1 to n
        e = e * x
    Return e
}

xⁿ can be written as x*x*x*........*x => n - 1 multiplications,
which gives time complexity O(n).
Exponential Problem : Recursive Approach
Algo power-recur(x, n)
{
    If n = 0
        Return 1
    Else
        Return x * power-recur(x, n-1)
}

Gives time complexity as again O(n)


Exponential Problem : D&C Approach
Algo int power(x, n)
{
    if (n = 0)
        return 1;
    else
    {
        int r = power(x, n/2);
        if (n % 2 == 0) then
            return (r * r)
        else
            return (r * r * x)
    }
}

Gives time complexity as O(log n)
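The same scheme runs essentially as-is in Python:

```python
def power(x, n):
    # Divide and conquer exponentiation: halve the exponent each call,
    # so only O(log n) multiplications are needed.
    if n == 0:
        return 1
    r = power(x, n // 2)       # x^(n//2), computed once and reused
    if n % 2 == 0:
        return r * r           # even exponent: x^n = (x^(n/2))^2
    return r * r * x           # odd exponent: one extra factor of x

print(power(2, 10))  # -> 1024
```

Note that `r` is computed once and squared; calling `power(x, n // 2)` twice instead would bring the cost back to O(n).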
