Algorithm Design Techniques
1. Divide and Conquer Approach: It is a top-down approach. The algorithms which follow the
divide & conquer technique involve three steps:
o Divide the original problem into a set of smaller subproblems.
o Conquer (solve) each subproblem individually, usually recursively.
o Combine the solutions of the subproblems to obtain the solution of the original problem.
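For example, merge sort follows exactly these three steps: split the list, sort the halves recursively, and merge the sorted halves. A minimal Python sketch (the names merge_sort and merge are chosen here purely for illustration):

def merge_sort(a):
    # Divide: split the list into two halves until the pieces have size <= 1
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])     # Conquer: sort each half recursively
    right = merge_sort(a[mid:])
    return merge(left, right)      # Combine: merge the two sorted halves

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

For instance, merge_sort([5, 2, 8, 1]) returns [1, 2, 5, 8].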
2. Greedy Technique:
o A greedy algorithm always makes the choice (the greedy criterion) that looks best at the
moment, in order to optimize a given objective.
o The greedy algorithm doesn't always guarantee the optimal solution; however, it generally
produces a solution that is very close in value to the optimal one.
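As an illustrative sketch, consider coin change with a greedy choice rule. The coin set (25, 10, 5, 1) is an assumption made only for this example; for this canonical coin system the greedy answer happens to be optimal, but for arbitrary coin sets it is merely a good, not guaranteed optimal, solution:

def greedy_coin_change(amount, coins=(25, 10, 5, 1)):
    # Greedy criterion: always take the largest coin that still fits.
    result = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            result.append(c)
    return result

For example, greedy_coin_change(63) returns [25, 25, 10, 1, 1, 1].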
3. Dynamic Programming: Dynamic Programming is a bottom-up approach: we solve all
possible small subproblems and then combine their solutions to obtain solutions for bigger
problems. This is particularly helpful when the number of overlapping subproblems is
exponentially large. Dynamic Programming is frequently applied to optimization problems.
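A minimal bottom-up sketch, using the Fibonacci numbers as the standard illustration of overlapping subproblems (the table-based formulation is one common way to write it, not the only one):

def fib(n):
    # Bottom-up DP: each small subproblem fib(0), fib(1), ..., fib(n) is solved
    # exactly once and stored, instead of being recomputed exponentially often
    # as in the naive recursive definition.
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

For example, fib(10) returns 55.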
4. Branch and Bound: In a branch and bound algorithm, a given subproblem which cannot be
bounded has to be divided into at least two new restricted subproblems. Branch and bound
algorithms are methods for global optimization in non-convex problems. They can be slow: in
the worst case they require effort that grows exponentially with problem size, but in some cases
we are lucky and the method converges with much less effort.
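A small sketch of branch and bound applied to the 0/1 knapsack problem; the fractional (relaxed) knapsack value is used as the bound, and the density ordering and function names are choices made only for this illustration:

def knapsack_bb(values, weights, capacity):
    # Branch and bound for the 0/1 knapsack (maximisation).
    # Items are considered in decreasing value density; the bound is the value
    # of the fractional knapsack, which no completion of the current partial
    # solution can exceed, so subproblems whose bound is no better than the
    # best solution found so far are pruned.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    vals = [values[i] for i in order]
    wts = [weights[i] for i in order]
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room greedily, allowing fractions.
        for v, w in zip(vals[i:], wts[i:]):
            if w <= room:
                room -= w
                value += v
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(vals) or bound(i, value, room) <= best:
            return                                   # prune this subproblem
        if wts[i] <= room:                           # branch 1: take item i
            branch(i + 1, value + vals[i], room - wts[i])
        branch(i + 1, value, room)                   # branch 2: skip item i

    branch(0, 0, capacity)
    return best

For example, knapsack_bb([60, 100, 120], [10, 20, 30], 50) returns 220.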
5. Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds the
right one. It is a depth-first search of the set of possible solutions. During the search, if an
alternative doesn't work, we backtrack to the choice point, the place which presented different
alternatives, and try the next alternative.
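A minimal sketch using the classic N-Queens puzzle: queens are placed row by row (depth-first), and when a row admits no safe column the last placement is undone and the next alternative is tried:

def solve_n_queens(n):
    solutions = []
    cols = []                          # cols[r] = column of the queen in row r

    def safe(row, col):
        # A new queen conflicts with an earlier one sharing its column or a diagonal.
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:
            solutions.append(list(cols))
            return
        for col in range(n):
            if safe(row, col):
                cols.append(col)       # choose this alternative
                place(row + 1)         # explore deeper
                cols.pop()             # backtrack and try the next column

    place(0)
    return solutions

For example, len(solve_n_queens(8)) is 92, the well-known number of 8-queens solutions.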
6. Randomized Algorithm: A randomized algorithm uses a random number at least once during
the computation to make a decision.
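A small sketch of randomized quickselect, which uses a random number to pick its pivot so that the expected running time is linear regardless of the input order (quickselect is used here only as a familiar example of the technique):

import random

def quickselect(items, k):
    # Returns the k-th smallest element (k counted from 0).
    a = list(items)
    while True:
        pivot = random.choice(a)                 # the randomized decision
        lows = [x for x in a if x < pivot]
        pivots = [x for x in a if x == pivot]
        if k < len(lows):
            a = lows
        elif k < len(lows) + len(pivots):
            return pivot
        else:
            k -= len(lows) + len(pivots)
            a = [x for x in a if x > pivot]

For example, quickselect([7, 2, 9, 4, 5], 2) returns 5, the third-smallest element.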
Types of Recursion
There are several different recursion types and terms. These include:
Direct recursion: This is typified by the factorial implementation, where the method calls
itself.
Indirect recursion: This happens where one method, say method A, calls another
method B, which then calls method A. It involves two or more methods that eventually
create a circular call sequence.
Head recursion: The recursive call is made at the beginning of the method.
Tail recursion: The recursive call is the last statement.
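Minimal Python sketches of each kind (the function names are chosen only for illustration):

def factorial(n):                      # direct recursion: the method calls itself
    return 1 if n == 0 else n * factorial(n - 1)

def is_even(n):                        # indirect recursion: is_even calls is_odd,
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):                         # ...which calls is_even again
    return False if n == 0 else is_even(n - 1)

def print_up(n):                       # head recursion: the recursive call comes first,
    if n > 0:
        print_up(n - 1)
        print(n)                       # the work comes after it, so this prints 1..n

def print_down(n):                     # tail recursion: the recursive call is the
    if n > 0:                          # last statement, so this prints n..1
        print(n)
        print_down(n - 1)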
Solving Recurrences
Solving recurrences plays a crucial role in the analysis, design, and optimization of algorithms,
and is an important topic in computer science.
There are mainly three ways of solving recurrences:
1. Substitution Method
2. Recurrence Tree Method
3. Master Method
1. Substitution Method:
We make a guess for the solution and then use mathematical induction to prove
that the guess is correct (or discover that it is not).
For example, consider the recurrence T(n) = 2T(n/2) + n
We guess the solution as T(n) = O(nLogn). Now we use induction to prove our
guess.
We need to prove that T(n) <= cnLogn for some constant c > 0. We can assume that the
bound holds for all values smaller than n, in particular for n/2.
T(n) = 2T(n/2) + n
<= 2(c(n/2)Log(n/2)) + n
= cnLog(n/2) + n
= cnLogn – cnLog2 + n
= cnLogn – cn + n
<= cnLogn, provided c >= 1 (Log denotes the base-2 logarithm, so Log2 = 1).
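The induction above is the actual proof; purely as an informal sanity check, the recurrence can also be evaluated numerically and compared against nLogn (the base case T(1) = 1 is an assumption made only for this check):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # The recurrence T(n) = 2T(n/2) + n, with T(1) = 1 assumed as the base case.
    return 1 if n <= 1 else 2 * T(n // 2) + n

for n in (2 ** 10, 2 ** 15, 2 ** 20):
    print(n, T(n) / (n * math.log2(n)))    # the ratio stays bounded and close to 1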