
Algorithm Design Techniques:

The following is a list of several popular design approaches:

1. Divide and Conquer Approach: It is a top-down approach. Algorithms that follow the
divide and conquer technique involve three steps:

o Divide the original problem into a set of subproblems.
o Solve every subproblem individually, recursively.
o Combine the solutions of the subproblems (top level) into a solution of the whole original
problem.
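
As an illustration, here is a minimal merge sort sketch in Python (our own example, not
from the text); the three steps above correspond to the split, the two recursive calls,
and the final merge:

def merge_sort(a):
    if len(a) <= 1:              # trivially solved subproblem
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])   # divide, then solve each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0      # combine: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]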
2. Greedy Technique: The greedy method is used to solve optimization problems. An
optimization problem is one in which we are given a set of input values which are required
either to be maximized or minimized (known as the objective), possibly subject to some
constraints or conditions.

o A greedy algorithm always makes the choice (the greedy criterion) that looks best at the
moment, in order to optimize the given objective.
o A greedy algorithm doesn't always guarantee the optimal solution; however, it generally
produces a solution that is very close in value to the optimal.
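
As an illustration, here is a minimal greedy sketch in Python (our own example): making
change with the largest coin that fits. For canonical coin systems like this one the greedy
choice happens to be optimal, but for arbitrary denominations it need not be, which matches
the caveat above:

def greedy_change(amount, coins=(25, 10, 5, 1)):
    result = []
    for coin in sorted(coins, reverse=True):  # greedy criterion: biggest coin that fits
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]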
3. Dynamic Programming: Dynamic Programming is a bottom-up approach: we solve all
possible small problems and then combine their solutions to obtain solutions for bigger
problems.

This is particularly helpful when the number of overlapping subproblems is exponentially
large. Dynamic Programming is frequently related to optimization problems.
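
As an illustration, here is a minimal bottom-up sketch in Python (our own example):
computing Fibonacci numbers by solving all smaller subproblems first instead of
recomputing the overlapping ones:

def fib(n):
    if n < 2:
        return n
    table = [0, 1]                                 # solutions to the smallest subproblems
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])  # combine smaller solutions
    return table[n]

print(fib(10))  # 55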

4. Branch and Bound: In a branch and bound algorithm, a given subproblem which cannot be
bounded has to be divided into at least two new restricted subproblems. Branch and bound
algorithms are methods for global optimization in non-convex problems. Branch and bound
algorithms can be slow: in the worst case they require effort that grows exponentially
with problem size, but in some cases we are lucky and the method converges with much less
effort.
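
As an illustration, here is a minimal branch and bound sketch in Python for the 0/1
knapsack problem (our own example; the fractional relaxation used as the bound is one
standard choice, not the only one):

def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Sort items by value density so the fractional bound below is valid.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    values = [values[i] for i in order]
    weights = [weights[i] for i in order]
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room fractionally.
        while i < n and weights[i] <= room:
            room -= weights[i]
            value += values[i]
            i += 1
        if i < n:
            value += values[i] * room / weights[i]
        return value

    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value
        if i == n or bound(i, value, room) <= best:
            return  # prune: this subproblem cannot beat the best solution found
        if weights[i] <= room:
            branch(i + 1, value + values[i], room - weights[i])  # take item i
        branch(i + 1, value, room)                               # skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220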

5. Randomized Algorithms: A randomized algorithm is defined as an algorithm that is allowed
to access a source of independent, unbiased random bits, and is then allowed to use these
random bits to influence its computation; that is, it uses a random number at least once
during the computation to make a decision. Example: in Quick Sort, using a random number
to choose a pivot.
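
For instance, a minimal randomized quicksort sketch in Python (our own example, with
Python's random module as one concrete source of randomness):

import random

def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)  # the random decision: pick the pivot at random
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 6, 1, 8, 2]))  # [1, 2, 3, 6, 8]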

6. Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds
the right one. It is a depth-first search of the set of possible solutions. During the
search, if an alternative doesn't work, the algorithm backtracks to the choice point, the
place which presented different alternatives, and tries the next alternative.
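
As an illustration, here is a minimal backtracking sketch in Python for the N-queens
problem (our own example): queens are placed row by row, and a failed alternative causes
a return to the previous choice point:

def solve_queens(n, cols=()):
    row = len(cols)
    if row == n:
        return [cols]                       # a complete solution
    solutions = []
    for col in range(n):
        # check the new queen against every queen already placed
        if all(col != c and abs(col - c) != row - r for r, c in enumerate(cols)):
            solutions += solve_queens(n, cols + (col,))  # try this alternative
        # otherwise: backtrack and try the next column
    return solutions

print(len(solve_queens(6)))  # 4 solutions on a 6x6 board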



Recursive Algorithms:
Recursion is a technique used in computer science to solve big problems by breaking them
into smaller, similar problems. The process in which a function calls itself directly or
indirectly is called recursion, and the corresponding function is called a recursive
function. Using a recursive algorithm, certain problems can be solved quite easily.

What is a Recursive Algorithm?


A recursive algorithm is an algorithm that uses recursion to solve a problem. Recursive
algorithms typically have two parts:
1. Base case: a condition that stops the recursion.
2. Recursive case: a call to the function itself with a smaller version of the problem.

Types of Recursion
There are several different recursion types and terms. These include:
 Direct recursion: This is typified by the factorial implementation, where the method
calls itself.
 Indirect recursion: This happens when one method, say method A, calls another
method B, which then calls method A. It involves two or more methods that eventually
create a circular call sequence.
 Head recursion: The recursive call is made at the beginning of the method.
 Tail recursion: The recursive call is the last statement. (A short sketch contrasting
head and tail recursion follows this list.)
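
A minimal Python sketch (our own, with hypothetical names head_count and tail_count):

def head_count(n):
    if n == 0:
        return
    head_count(n - 1)   # head recursion: recursive call comes first...
    print(n)            # ...the work happens after the call returns

def tail_count(n):
    if n == 0:
        return
    print(n)            # the work happens first...
    tail_count(n - 1)   # tail recursion: recursive call is the last statement

Calling head_count(3) prints 1 2 3, since the work runs after each call returns, while
tail_count(3) prints 3 2 1.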

When to Use Recursion?


Recursion is a powerful technique that can be used to solve a wide variety of problems.
However, it is important to use recursion carefully, as it can lead to stack overflows if not used
properly.
Recursion should be used when:
 The problem can be broken down into smaller subproblems that can be solved recursively.
 The base case is easy to identify.
 The recursive calls are tail recursive.
Examples of Recursion
Here are some common examples of recursion:
Example 1: Factorial: The factorial of a number n is the product of all the integers
from 1 to n. The factorial of n can be defined recursively as:
factorial(n) = n * factorial(n-1), with base case factorial(0) = 1
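
As runnable code, the definition above becomes (our own sketch, with the base case made
explicit):

def factorial(n):
    if n == 0:                       # base case: stops the recursion
        return 1
    return n * factorial(n - 1)      # recursive case: smaller version of the problem

print(factorial(5))  # 120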
Recurrence Relations:

A recurrence relation is a mathematical expression that defines a sequence in terms


of its previous terms. In the context of algorithmic analysis, it is often used to model
the time complexity of recursive algorithms.
General form of a Recurrence Relation:
a_n = f(a_{n-1}, a_{n-2}, ..., a_{n-k})
where f is a function that defines the relationship between the current term and the
previous terms.

Significance of Recurrence Relations


Recurrence Relations play a significant role in analyzing and optimizing the complexity of
algorithms. Having a strong understanding of Recurrence Relations plays a great role in
developing the problem-solving skills of an individual. Some of the common uses
of Recurrence Relations are:
 Time Complexity Analysis
 Generalizing Divide and Conquer Algorithms
 Analyzing Recursive Algorithms
 Defining State and Transitions for Dynamic Programming

Common Examples of Recurrence Relations:


Example                    Recurrence Relation

Fibonacci Sequence         F(n) = F(n-1) + F(n-2)
Factorial of a number n    F(n) = n * F(n-1)
Merge Sort                 T(n) = 2*T(n/2) + O(n)
Tower of Hanoi             H(n) = 2*H(n-1) + 1
Binary Search              T(n) = T(n/2) + 1


Types of Recurrence Relations:
Various types of Recurrence Relations are:
1. Linear Recurrence Relations
2. Divide and Conquer Recurrences
3. Substitution Recurrences
4. Homogeneous Recurrences
5. Non-Homogeneous Recurrences
1. Linear Recurrence Relations:
The following is an example of a linear recurrence relation:
T(n) = T(n-1) + n for n > 0 and T(0) = 1
These types of recurrence relations can be easily solved using the substitution method.
For example,
T(n) = T(n-1) + n
= T(n-2) + (n-1) + n
= T(n-k) + (n-(k-1)) + ... + (n-1) + n
Substituting k = n, we get
T(n) = T(0) + 1 + 2 + ... + n = 1 + n(n+1)/2 = O(n^2)
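
As a quick sanity check (our own, not part of the text), the closed form can be compared
against the recurrence directly:

def T(n):
    return 1 if n == 0 else T(n - 1) + n   # T(n) = T(n-1) + n, T(0) = 1

for n in range(8):
    assert T(n) == 1 + n * (n + 1) // 2    # the closed form derived above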
2. Divide and conquer recurrence relations:
Following are some examples of recurrence relations based on divide and conquer:
T(n) = 2T(n/2) + cn
T(n) = 2T(n/2) + √n
These types of recurrence relations can be easily solved using the Master Method.
For the recurrence relation T(n) = 2T(n/2) + cn, the values are a = 2, b = 2 and k = 1.
Here log_b(a) = log_2(2) = 1 = k. Therefore, the complexity will be Θ(n log n).
Similarly, for the recurrence relation T(n) = 2T(n/2) + √n, the values are a = 2, b = 2
and k = 1/2. Here log_b(a) = log_2(2) = 1 > k. Therefore, the complexity will be Θ(n).
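
As an illustration, here is a small Python helper (our own, with the hypothetical name
master_case) that applies the three master method cases for recurrences of the form
T(n) = a*T(n/b) + Θ(n^k):

from math import log

def master_case(a, b, k):
    c = log(a, b)                       # compare log_b(a) with k
    if c > k:
        return f"Theta(n^{c:g})"        # recursion dominates
    if c == k:                          # float equality is fine for this demo
        return f"Theta(n^{k:g} * log n)"
    return f"Theta(n^{k:g})"            # the non-recursive work dominates

print(master_case(2, 2, 1))    # Theta(n^1 * log n), i.e. Theta(n log n)
print(master_case(2, 2, 0.5))  # Theta(n^1), i.e. Theta(n)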
3. Substitution Recurrences:
Sometimes, recurrence relations can't be directly solved using techniques
like substitution, recurrence tree or master method. Therefore, we need to convert the
recurrence relation into an appropriate form before solving. For example,
T(n) = T(√n) + 1
To solve this type of recurrence, substitute n = 2^m (so that √n = 2^(m/2)):
T(2^m) = T(2^(m/2)) + 1
Let T(2^m) = S(m). Then
S(m) = S(m/2) + 1
Solving by the master method, we get
S(m) = Θ(log m)
As n = 2^m, i.e. m = log_2(n),
T(n) = T(2^m) = S(m) = Θ(log m) = Θ(log log n)
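
As a quick empirical check (our own, not part of the text), T(n) = T(√n) + 1 can be
compared against log log n numerically:

from math import isqrt, log2

def T(n):
    return 0 if n <= 2 else T(isqrt(n)) + 1   # T(n) = T(sqrt(n)) + 1

for n in (16, 256, 65536):
    print(n, T(n), round(log2(log2(n))))      # the two columns track each other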

4. Homogeneous Recurrence Relations:


A homogeneous recurrence relation is one in which the right-hand side is equal to zero.
Mathematically, a homogeneous recurrence relation of order k is represented as:
a_n = f(a_{n-1}, a_{n-2}, ..., a_{n-k})
with the condition that the above equation equates to 0.
Example: a_n = 2*a_{n-1} - a_{n-2}

5. Non-Homogeneous Recurrence Relations:


A non-homogeneous recurrence relation is one in which the right-hand side is not equal to
zero. It can be expressed as:
a_n = f(a_{n-1}, a_{n-2}, ..., a_{n-k}) + g(n)
where g(n) is a function that introduces a term not dependent on the previous terms. The
presence of g(n) makes the recurrence non-homogeneous.
Example: a_n = 2*a_{n-1} - a_{n-2} + 3^n
In this case, the term 3^n on the right-hand side makes the recurrence non-homogeneous.

Ways to Solve Recurrence Relations:


Here are the general steps to analyze the complexity of a recurrence relation:
 Substitute the input size into the recurrence relation to obtain a sequence of terms.
 Identify a pattern in the sequence of terms, if any, and simplify the recurrence relation to
obtain a closed-form expression for the number of operations performed by the algorithm.
 Determine the order of growth of the closed-form expression by using techniques such as
the Master Theorem, or by finding the dominant term and ignoring lower-order terms.
 Use the order of growth to determine the asymptotic upper bound on the running time of
the algorithm, which can be expressed in terms of big O notation.

Solving recurrences plays a crucial role in the analysis, design, and optimization of algorithms,
and is an important topic in computer science.
There are mainly three ways of solving recurrences:
1. Substitution Method
2. Recurrence Tree Method
3. Master Method

1. Substitution Method:
We make a guess for the solution and then use mathematical induction to prove whether
the guess is correct or incorrect.
For example, consider the recurrence T(n) = 2T(n/2) + n.
We guess the solution as T(n) = O(n log n). Now we use induction to prove our guess.
We need to prove that T(n) <= c*n*log2(n) for some constant c. We can assume that it
holds for values smaller than n. Then:
T(n) = 2T(n/2) + n
    <= 2*(c*(n/2)*log2(n/2)) + n
    = c*n*log2(n/2) + n
    = c*n*log2(n) - c*n*log2(2) + n
    = c*n*log2(n) - c*n + n
    <= c*n*log2(n)    (for any c >= 1)
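
As a quick numeric sanity check (our own, not part of the text) of this bound, with
c = 2 and powers of two for n:

from math import log2

def T(n):
    return 1 if n <= 1 else 2 * T(n // 2) + n   # T(n) = 2T(n/2) + n, T(1) = 1

c = 2
for n in (2, 4, 16, 64, 1024):
    assert T(n) <= c * n * log2(n)              # the guessed bound holds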
