
DAA Unit-1: Introduction to Algorithms


DAA UNIT-1

Introduction to Algorithms:
What is Algorithm? Algorithm Basics
 The word Algorithm means “a process or set of rules to be followed in
calculations or other problem-solving operations”. Therefore, an algorithm refers
to a set of rules/instructions that define, step by step, how a task is to be
executed in order to get the expected results. 

 It can be understood by taking the example of cooking a new recipe. To cook a
new recipe, one reads the instructions and steps and executes them one by one,
in the given sequence. The result thus obtained is the new dish, cooked
perfectly. Similarly, algorithms help to do a task in programming to get the
expected output.
 The algorithms designed are language-independent, i.e. they are just plain
instructions that can be implemented in any language, and yet the output will
be the same, as expected.
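For example, an algorithm to find the largest of three numbers can be written as plain, language-independent steps (a simple illustrative sketch):
Step 1: Read the three numbers a, b and c.
Step 2: If a >= b and a >= c, then the largest is a.
Step 3: Otherwise, if b >= c, then the largest is b.
Step 4: Otherwise, the largest is c.
Step 5: Print the largest number and stop.
These steps can be implemented in any programming language and will produce the same output.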
What are the Characteristics of an Algorithm?
Just as one would not follow arbitrary written instructions to cook a recipe, but only the
standard one, not all written instructions for programming are an algorithm.
For some instructions to qualify as an algorithm, they must have the following
characteristics:
 Clear and Unambiguous: Algorithm should be clear and unambiguous.
Each of its steps should be clear in all aspects and must lead to only one
meaning.
 Well-Defined Inputs: If an algorithm says to take inputs, those inputs should be
well-defined.
 Well-Defined Outputs: The algorithm must clearly define what output will
be yielded and it should be well-defined as well.
 Finiteness: The algorithm must be finite, i.e. it should not end up in
infinite loops or similar.
 Feasible: The algorithm must be simple, generic and practical, such that it
can be executed with the available resources. It must not depend on some
future technology or anything similar.
 Language Independent: The algorithm designed must be language-
independent, i.e. it must be just plain instructions that can be implemented
in any language, and yet the output will be the same, as expected.
Advantages of Algorithms:
 It is easy to understand.
 Algorithm is a step-wise representation of a solution to a given problem.
 In an algorithm, the problem is broken down into smaller pieces or steps;
hence, it is easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
 Writing an algorithm takes a long time so it is time-consuming.
 Branching and Looping statements are difficult to show in Algorithms.
Why is analysis of algorithms important?
The analysis of an algorithm generally focuses on CPU (time) usage, memory
usage, disk usage, and network usage. All are important, but the biggest concern is
CPU time. Be careful to differentiate between:
 Performance: How much time/memory/disk/etc. is used when a program is
run. This depends on the machine, compiler, etc. as well as the code we
write.
 Complexity: How do the resource requirements of a program or algorithm
scale, i.e. what happens as the size of the problem being solved by the code
gets larger.
Note: Complexity affects performance but not vice-versa.
Algorithm Analysis:
Algorithm analysis is an important part of computational complexity theory, which
provides theoretical estimation for the required resources of an algorithm to solve a
specific computational problem. Analysis of algorithms is the determination of the
amount of time and space resources required to execute it.
Why is Analysis of Algorithms important?
 To predict the behavior of an algorithm without implementing it on a
specific computer.
 It is much more convenient to have simple measures for the efficiency of an
algorithm than to implement the algorithm and test the efficiency every time
a certain parameter in the underlying computer system changes.
 It is impossible to predict the exact behavior of an algorithm. There are too
many influencing factors.
 The analysis is thus only an approximation; it is not perfect.
 More importantly, by analyzing different algorithms, we can compare them
to determine the best one for our purpose

Analysis of Algorithms (Asymptotic Notations)
We have discussed Asymptotic Analysis, and the Worst, Average, and Best Cases of algorithms. The
main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that doesn't
depend on machine-specific constants and doesn't require algorithms to be implemented and the time
taken by programs to be compared. Asymptotic notations are mathematical tools to represent the time
complexity of algorithms for asymptotic analysis. The following 3 asymptotic notations are mostly
used to represent the time complexity of algorithms. 
 
1) Θ Notation: The theta notation bounds a function from above and below, so it defines exact
asymptotic behavior. 
A simple way to get Theta notation of an expression is to drop low order terms and ignore leading
constants. For example, consider the following expression. 
3n³ + 6n² + 6000 = Θ(n³) 
Dropping lower order terms is always fine because there will always be a number n0 after which n³
has higher values than n², irrespective of the constants involved. 
For a given function g(n), we denote by Θ(g(n)) the following set of functions: 
 
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such
that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
The above definition means, if f(n) is theta of g(n), then the value f(n) is always between c1*g(n) and
c2*g(n) for large values of n (n >= n0). The definition of theta also requires that f(n) must be non-
negative for values of n greater than n0. 
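As a concrete check of this definition (the constants below are just one valid choice, not the only one): for f(n) = 3n³ + 6n² + 6000 and g(n) = n³, taking c1 = 3, c2 = 4 and n0 = 21 gives
0 <= 3n³ <= 3n³ + 6n² + 6000 <= 4n³ for all n >= 21,
so 3n³ + 6n² + 6000 = Θ(n³).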

2) Big O Notation: The Big O notation defines an upper bound of an algorithm; it bounds a
function only from above. For example, consider the case of Insertion Sort. It takes linear time in the
best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion sort is
O(n^2). Note that O(n^2) also covers linear time. 
If we use Θ notation to represent time complexity of Insertion sort, we have to use two statements for
best and worst cases: 
1. The worst case time complexity of Insertion Sort is Θ(n^2). 
2. The best case time complexity of Insertion Sort is Θ(n). 
The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm.
Many times we can easily find an upper bound simply by looking at the algorithm.  
O(g(n)) = { f(n): there exist positive constants c and
n0 such that 0 <= f(n) <= c*g(n) for
all n >= n0}
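As a quick worked check of this definition (again, the constants are just one valid choice): f(n) = 2n² + 3n + 4 is O(n²), because 2n² + 3n + 4 <= 3n² for all n >= 4, so c = 3 and n0 = 4 satisfy the definition.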
 

3) Ω Notation: Just as Big O notation provides an asymptotic upper bound on a function, Ω
notation provides an asymptotic lower bound. 
Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. As
discussed in the previous post, the best-case performance of an algorithm is generally not useful, so the
Omega notation is the least used notation among all three. 
For a given function g(n), we denote by Ω(g(n)) the following set of functions:  
Ω (g(n)) = {f(n): there exist positive constants c and
n0 such that 0 <= c*g(n) <= f(n) for
all n >= n0}.
Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be
written as Ω(n), but this is not very useful information about insertion sort, as we are generally
interested in the worst case and sometimes in the average case. 
Properties of Asymptotic Notations : 
As we have gone through the definitions of these three notations, let's now discuss some
important properties of those notations. 
1. General Properties : 
     If f(n) is O(g(n)) then a*f(n) is also O(g(n)) ; where a is a constant. 
     Example: f(n) = 2n²+5 is O(n²) 
     then 7*f(n) = 7(2n²+5) = 14n²+35 is also O(n²) .
     Similarly, this property also holds for both Θ and Ω notation. 
     We can say 
     If f(n) is Θ(g(n)) then a*f(n) is also Θ(g(n)) ; where a is a constant. 
     If f(n) is Ω (g(n)) then a*f(n) is also Ω (g(n)) ; where a is a constant.
2. Transitive Properties : 
    If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) = O(h(n)) .
    Example: if f(n) = n, g(n) = n² and h(n)=n³
    n is O(n²) and n² is O(n³)
    then n is O(n³)
   Similarly, this property also holds for both Θ and Ω notation.
   We can say
   If f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) = Θ(h(n)) .
   If f(n) is Ω (g(n)) and g(n) is Ω (h(n)) then f(n) = Ω (h(n))
3. Reflexive Properties : 
      Reflexive properties are easy to understand after the transitive ones.
      If f(n) is given, then f(n) is O(f(n)), since the maximum value of f(n) will be f(n) itself.
      Example: f(n) = n² ; then O(n²) i.e. O(f(n))
      Similarly, this property also holds for both Θ and Ω notation.
      We can say that:
      If f(n) is given then f(n) is Θ(f(n)).
      If f(n) is given then f(n) is Ω(f(n)).
4. Symmetric Properties : 
      If f(n) is Θ(g(n)) then g(n) is Θ(f(n)) . 
      Example: f(n) = n² and g(n) = n² 
      then f(n) = Θ(n²) and g(n) = Θ(n²) 
      This property only holds for Θ notation.
5. Transpose Symmetric Properties : 
      If f(n) is O(g(n)) then g(n) is Ω (f(n)). 
      Example: f(n) = n , g(n) = n² 
      then n is O(n²) and n² is Ω (n) 
This property only holds for O and Ω notations.
6. Some More Properties : 
     1.) If f(n) = O(g(n)) and f(n) = Ω(g(n)) then f(n) = Θ(g(n))
     2.) If f(n) = O(g(n)) and d(n)=O(e(n)) 
          then f(n) + d(n) = O( max( g(n), e(n) )) 
          Example: f(n) = n i.e O(n) 
                         d(n) = n² i.e O(n²) 
                         then f(n) + d(n) = n + n² i.e O(n²)
      3.) If f(n)=O(g(n)) and d(n)=O(e(n)) 
           then f(n) * d(n) = O( g(n) * e(n) ) 
           Example: f(n) = n i.e O(n) 
           d(n) = n² i.e O(n²) 
                      then f(n) * d(n) = n * n² = n³ i.e O(n³)

Analysis of Algorithms (Worst, Average and Best Cases):
In the previous post, we discussed how asymptotic analysis overcomes the problems of the naive way of
analyzing algorithms. In this post, we will take an example of Linear Search and analyze it using
asymptotic analysis.
We can have three cases to analyze an algorithm: 
1) The Worst Case 
2) Average Case 
3) Best Case
Let us consider the following implementation of Linear Search. 
example:

// C++ implementation of the approach


#include <bits/stdc++.h>
using namespace std;
  
// Linearly search x in arr[].
// If x is present then return the index,
// otherwise return -1
int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++) {
        if (arr[i] == x)
            return i;
    }
    return -1;
}
  
// Driver Code
int main()
{
    int arr[] = { 1, 10, 30, 15 };
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << x << " is present at index "
         << search(arr, n, x);
  
    getchar();
    return 0;
}
  
// This code is contributed
// by Akanksha Rai

Output: 
30 is present at index 2
Worst Case Analysis (Usually Done) :
 In the worst case analysis, we calculate the upper bound on the running time of an algorithm. We
must know the case that causes the maximum number of operations to be executed.
 For Linear Search, the worst case happens when the element to be searched (x in the above
code) is not present in the array.
 When x is not present, the search() function compares it with all the elements of arr[] one by
one. Therefore, the worst case time complexity of linear search would be Θ(n).
Average Case Analysis (Sometimes done) 
 In average case analysis, we take all possible inputs and calculate the computing time for all of
the inputs.
 Sum all the calculated values and divide the sum by the total number of inputs. We must know
(or predict) the distribution of cases.
 For the linear search problem, let us assume that all cases are uniformly distributed (including
the case of x not being present in the array). So we sum all the cases and divide the sum by (n+1).
Following is the value of average case time complexity. 
 
Average Case Time = ( Σ from i = 1 to (n+1) of θ(i) ) / (n + 1)
                  = θ( (n + 1)(n + 2) / 2 ) / (n + 1)
                  = Θ(n)
Best Case Analysis (Bogus) :
 In the best case analysis, we calculate the lower bound on the running time of an algorithm. We must
know the case that causes the minimum number of operations to be executed. In the linear search
problem, the best case occurs when x is present at the first location.
 The number of operations in the best case is constant (not dependent on n). So the time
complexity in the best case would be Θ(1). 
 Most of the time, we do worst case analysis to analyze algorithms. In the worst case analysis, we
guarantee an upper bound on the running time of an algorithm, which is good information. 
 The average case analysis is not easy to do in most practical cases and it is rarely done.
In the average case analysis, we must know (or predict) the mathematical distribution of all
possible inputs.
 The best case analysis is bogus. Guaranteeing a lower bound on an algorithm doesn't provide
any information, as in the worst case an algorithm may take years to run.
 For some algorithms, all the cases are asymptotically the same, i.e., there are no worst and best
cases. For example, Merge Sort does Θ(nLogn) operations in all cases.
 Most of the other sorting algorithms have worst and best cases. For example, in the typical
implementation of Quick Sort (where the pivot is chosen as a corner element), the worst case occurs
when the input array is already sorted and the best case occurs when the pivot elements always
divide the array into two halves. For Insertion Sort, the worst case occurs when the array is reverse
sorted and the best case occurs when the array is sorted in the same order as the output.
Performance measurement of an algorithm:
Performance analysis of an algorithm depends upon two factors, i.e. the amount of memory used and the
amount of compute time consumed on any CPU. Formally they are expressed as complexities in terms
of:
 Space Complexity.
 Time Complexity.
Space Complexity of an algorithm is the amount of memory it needs to run to completion, i.e. from the
start of execution to its termination. The space needed by any algorithm is the sum of the following
components:
1. Fixed Component: This is independent of the characteristics of the inputs and outputs. This
part includes: instruction space, space for simple variables, fixed-size component variables,
and constant variables.
2. Variable Component: This consists of the space needed by component variables whose size is
dependent on the particular problem instances (inputs/outputs) being solved, and the space
needed by referenced variables; the recursion stack space is one of the most prominent
components. This also includes data structure components like linked lists, heaps, trees,
graphs, etc.
Therefore the total space requirement of any algorithm 'A' can be provided as
Space(A) = Fixed Components(A) + Variable Components(A)
Among the fixed and variable components, the variable part is the one that must be determined
accurately, so that the actual space requirement of an algorithm 'A' can be identified. To
identify the space complexity of any algorithm, the following steps can be followed:
1. Determine the variables which are instantiated with some default values.
2. Determine which instance characteristics should be used to measure the space
requirement; this will be problem specific.
3. Generally the choices are limited to quantities related to the number and magnitudes
of the inputs to and outputs from the algorithm.
4. Sometimes more complex measures of the interrelationships among the data items can be
used.
Example: Space Complexity
Algorithm Sum(number, size)   \\ procedure will produce the sum of all numbers provided in the 'number' list
{
    result = 0.0;
    for count = 1 to size do   \\ will repeat 1, 2, 3, 4, .... size times
        result = result + number[count];
    return result;
}
In the above example, when calculating the space complexity we look at both the fixed
and variable components. Here the fixed components are the 'result', 'count' and 'size' variables,
therefore the total fixed space required is three (3) words.
The variable component is characterized by the value stored in the 'size' variable (suppose the value
stored in 'size' is 'n'), because this decides the size of the 'number' list and also
drives the for loop. Therefore, if the space used by 'size' is one word, then the total space required
by the 'number' variable will be 'n' (the value stored in 'size').
Therefore the space complexity can be written as Space(Sum) = 3 + n.
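For reference, a minimal C++ sketch of the same Sum procedure (the signature and the sample values below are illustrative, not part of the original algorithm text):

// C++ sketch of the Sum algorithm above
#include <iostream>
using namespace std;

// Returns the sum of the 'size' values stored in 'number'
double Sum(const double number[], int size)
{
    double result = 0.0;                        // fixed-size local variable
    for (int count = 0; count < size; count++)  // body executes 'size' times
        result = result + number[count];
    return result;
}

// Driver Code
int main()
{
    double number[] = { 1.5, 2.5, 3.0 };
    cout << Sum(number, 3);                     // prints 7
    return 0;
}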
Time Complexity :
The time complexity of an algorithm (basically when converted to a program) is the amount of
computer time it needs to run to completion.
The time taken by a program is the sum of the compile time and the run/execution time.
The compile time is independent of the instance (problem-specific) characteristics. The following
factors affect the time complexity:
 Characteristics of the compiler used to compile the program.
 The computer/machine on which the program is executed and physically clocked.
 Multiuser execution system.
 Number of program steps.
Therefore the time complexity again consists of two components, fixed (factor 1 only) and
variable/instance (factors 2, 3 & 4), so for any algorithm 'A' it is given as:
Time(A) = Fixed Time(A) + Instance Time(A)
Here the number of steps is the most prominent instance characteristic, and the number of
steps assigned to any program statement depends on the kind of statement, for example:
 comments count as zero steps,
 an assignment statement which does not involve any calls to other algorithms is
counted as one step,
 for iterative statements we consider the step count only for the control part of the
statement, etc.
Therefore, to calculate the total number of program steps we use the following procedure.
We build a table in which we list the total number of steps contributed by each
statement. This is often arrived at by first determining the number of steps per execution of
the statement and the frequency with which each statement is executed. This procedure is explained
using an example.
Example: Time Complexity
In the above Sum example, if you analyze carefully, the frequency of "for count = 1 to size do" is 'size + 1';
this is because the statement is executed one extra time due to the condition check that fails and
terminates the loop. Once the total steps are calculated, they represent the instance characteristics
in the time complexity of the algorithm. Also, the compile time of an algorithm will be the same
every time we compile the same set of instructions, so we can consider this time as a constant 'C'.
Therefore the time complexity can be expressed as: Time(Sum) = C + (2*size + 3)
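A step-count table for the Sum example, reconstructed from the totals above, looks like this:

Statement                               Steps per execution   Frequency    Total steps
result = 0.0;                           1                     1            1
for count = 1 to size do                1                     size + 1     size + 1
    result = result + number[count];    1                     size         size
return result;                          1                     1            1
Total                                                                      2*size + 3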
So in this way both the space complexity and the time complexity can be calculated.
The combination of both complexities comprises the performance analysis of any algorithm, and
they cannot be used independently. Both these complexities also help in defining the parameters on the
basis of which we optimize algorithms.

Time-Space Trade-Off in Algorithms:


In this article, we will discuss the Time-Space Trade-Off in Algorithms. A trade-off is a situation where
one thing increases and another thing decreases. It is a way to solve a problem:
 Either in less time and by using more space, or
 In very little space by spending a long amount of time.
The best algorithm is one that helps to solve a problem using less space in memory and also
taking less time to generate the output. But in general, it is not always possible to achieve both of these
conditions at the same time. The most common situation involves an algorithm using a lookup table. This
means that the answers to some questions for every possible value can be written down. One way of
solving this problem is to write down the entire lookup table, which will let you find answers very
quickly but will use a lot of space. Another way is to calculate the answers without writing down
anything, which uses very little space but might take a long time. Therefore, the more time-efficient
an algorithm is, the less space-efficient it tends to be.
Types of Space-Time Trade-off
 Compressed or Uncompressed data
 Re-Rendering or Stored images
 Smaller code or loop unrolling
 Lookup tables or Recalculation
1. Compressed or Uncompressed data: A space-time trade-off can be applied to the problem of data
storage. If data is stored uncompressed, it takes more space but less time. If the data is stored
compressed, it takes less space but more time to run the decompression algorithm. There are many
instances where it is possible to work directly with compressed data; for example, in the case of compressed
bitmap indices, it is faster to work with compression than without compression.
2. Re-Rendering or Stored images: In this case, storing only the source and re-rendering it as an image
each time it is needed takes less space but more time; storing the rendered image in a cache is faster than
re-rendering but requires more space in memory.
3. Smaller code or Loop Unrolling: Smaller code occupies less space in memory but requires more
computation time, because of the jump back to the beginning of the loop at the end of each
iteration. Loop unrolling can optimize execution speed at the cost of increased binary size: it occupies
more space in memory but requires less computation time.
4. Lookup tables or Recalculation: In a lookup table, an implementation can include the entire table,
which reduces computing time but increases the amount of memory needed. Alternatively, it can recalculate,
i.e., compute table entries as needed, increasing computing time but reducing memory requirements.
For example: In mathematical terms, the sequence F(n) of the Fibonacci Numbers is defined by the
recurrence relation:
F(n) = F(n - 1) + F(n - 2),
where F(0) = 0 and F(1) = 1.
A simple solution is to find the Nth Fibonacci term using recursion from the above recurrence relation.
Below is the implementation using recursion:

 C++

// C++ program to find Nth Fibonacci
// number using recursion
#include <iostream>
using namespace std;
// Function to find Nth Fibonacci term
int Fibonacci(int N)
{
    // Base Case
    if (N < 2)
        return N;
 
    // Recursively computing the term
    // using recurrence relation
    return Fibonacci(N - 1) + Fibonacci(N - 2);
}
// Driver Code
int main()
{
    int N = 5;
 
    // Function Call
    cout << Fibonacci(N);
 
    return 0;
}

Output: 
5
Time Complexity: O(2^N)
Auxiliary Space: O(1) (ignoring the recursion call stack)
Explanation: The time complexity of the above implementation is exponential due to multiple
calculations of the same subproblems again and again. The auxiliary space used is minimal. But our
goal is to reduce the time complexity of the approach even if it requires extra space. Below is the
optimized approach.
Efficient Approach: To optimize the above approach, the idea is to use Dynamic Programming to
reduce the complexity by storing the results of the overlapping subproblems so that each Fibonacci
number is computed only once.

Below is the implementation of the above approach:

 C++

// C++ program to find Nth Fibonacci
// number using dynamic programming (tabulation)
#include <iostream>
using namespace std;
 
// Function to find Nth Fibonacci term
int Fibonacci(int N)
{
    int f[N + 2];
    int i;
 
    // 0th and 1st number of the
    // series are 0 and 1
    f[0] = 0;
    f[1] = 1;
 
    // Iterate over the range [2, N]
    for (i = 2; i <= N; i++) {
 
        // Add the previous 2 numbers
        // in the series and store it
        f[i] = f[i - 1] + f[i - 2];
    }
 
    // Return Nth Fibonacci Number
    return f[N];
}
 
// Driver Code
int main()
{
    int N = 5;
 
    // Function Call
    cout << Fibonacci(N);
 
    return 0;
}

Output: 
5
Time Complexity: O(N)
Auxiliary Space: O(N)
Explanation: The time complexity of the above implementation is linear, because an auxiliary array is
used for storing the overlapping subproblem states so that they can be reused when required.
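The same reduction can also be written top-down, keeping the recursive structure but caching results (memoization). The sketch below is illustrative; the helper name fibMemo and the memo array are assumptions, not taken from the text above:

// C++ sketch: top-down Fibonacci with memoization (illustrative)
#include <iostream>
#include <vector>
using namespace std;

// memo[i] stores Fibonacci(i) once computed; -1 means "not computed yet"
int fibMemo(int N, vector<int>& memo)
{
    // Base Case
    if (N < 2)
        return N;

    // Reuse a previously computed answer
    if (memo[N] != -1)
        return memo[N];

    // Compute once, store, and return
    memo[N] = fibMemo(N - 1, memo) + fibMemo(N - 2, memo);
    return memo[N];
}

// Driver Code
int main()
{
    int N = 5;
    vector<int> memo(N + 1, -1);
    cout << fibMemo(N, memo);   // prints 5
    return 0;
}

Like the tabulated version, this runs in O(N) time and O(N) auxiliary space, but it only computes the subproblems that are actually needed.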
Analysis of recursive algorithms through recurrence
relations:
In the previous post, we discussed analysis of loops. Many algorithms are recursive in nature. When
we analyze them, we get a recurrence relation for time complexity. We get running time on an input
of size n as a function of n and the running time on inputs of smaller sizes. For example, in Merge
Sort, to sort a given array, we divide it into two halves and recursively repeat the process for the two
halves. Finally, we merge the results. The time complexity of Merge Sort can be written as T(n) = 2T(n/2)
+ cn. There are many other recursive algorithms like Binary Search, Tower of Hanoi, etc. 
There are mainly three ways for solving recurrences. 
1) Substitution Method: We make a guess for the solution and then use mathematical induction
to verify that the guess is correct. 
For example consider the recurrence T(n) = 2T(n/2) + n

We guess the solution as T(n) = O(nLogn). Now we use induction to prove our guess.

We need to prove that T(n) <= cnLogn. We can assume that it is true
for values smaller than n.

T(n)  = 2T(n/2) + n
      <= 2 * (c*(n/2)*Log(n/2)) + n
       = cnLog(n/2) + n
       = cnLogn - cnLog2 + n
       = cnLogn - cn + n
      <= cnLogn          (for c >= 1)
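Another small illustration of the same method (added here as an extra example): for T(n) = T(n - 1) + c, where c is a constant, guess T(n) = O(n), i.e. T(n) <= kn for some constant k. Assuming the guess holds for n - 1:
T(n) = T(n - 1) + c <= k(n - 1) + c = kn - k + c <= kn, whenever k >= c.
(A base case such as T(1) <= k completes the induction.)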
2) Recurrence Tree Method: In this method, we draw a recurrence tree and calculate the time taken
by every level of the tree. Finally, we sum the work done at all levels. To draw the recurrence tree, we
start from the given recurrence and keep expanding until we find a pattern among the levels. The pattern is
typically an arithmetic or geometric series. 
 
For example, consider the recurrence relation
T(n) = T(n/4) + T(n/2) + cn²

                 cn²
                /   \
           T(n/4)   T(n/2)

If we further break down the expressions T(n/4) and T(n/2), we get the following recursion tree:

                     cn²
                   /     \
           c(n²)/16       c(n²)/4
            /    \          /    \
       T(n/16)  T(n/8)   T(n/8)  T(n/4)

Breaking it down further gives us the following:

                     cn²
                   /     \
           c(n²)/16       c(n²)/4
            /    \          /    \
     c(n²)/256  c(n²)/64  c(n²)/64  c(n²)/16
      /  \       /  \       /  \       /  \

To know the value of T(n), we need to calculate the sum of the tree nodes level by level. If we sum the
above tree level by level, we get the following series:
T(n) = c(n² + 5(n²)/16 + 25(n²)/256 + ....)
The above series is a geometric progression with ratio 5/16.
To get an upper bound, we can sum the infinite series. We get the sum as (cn²)/(1 - 5/16), which is O(n²).
3) Master Method: 
Master Method is a direct way to get the solution. The master method works only for following type
of recurrences or for recurrences that can be transformed to following type. 
T(n) = aT(n/b) + f(n) where a >= 1 and b > 1
There are following three cases: 
1. If f(n) = O(n^c) where c < Log_b(a), then T(n) = Θ(n^(Log_b a)) 
2. If f(n) = Θ(n^c) where c = Log_b(a), then T(n) = Θ(n^c * Log n) 
3. If f(n) = Ω(n^c) where c > Log_b(a) (and f(n) satisfies the regularity condition a*f(n/b) <= k*f(n) for some constant k < 1), then T(n) = Θ(f(n))
How does the master method work? 
The master method is mainly derived from the recurrence tree method. If we draw the recurrence tree of T(n) =
aT(n/b) + f(n), we can see that the work done at the root is f(n) and the work done at all the leaves is Θ(n^c) where
c = Log_b(a). And the height of the recurrence tree is Log_b(n). 
 
In recurrence tree method, we calculate total work done. If the work done at leaves is polynomially
more, then leaves are the dominant part, and our result becomes the work done at leaves (Case 1). If
work done at leaves and root is asymptotically same, then our result becomes height multiplied by
work done at any level (Case 2). If work done at root is asymptotically more, then our result becomes
work done at root (Case 3). 
Examples of some standard algorithms whose time complexity can be evaluated using the Master
Method:
Merge Sort: T(n) = 2T(n/2) + Θ(n). It falls in case 2 as c is 1 and Log_b(a) is also 1. So the solution is
Θ(n Log n). 
Binary Search: T(n) = T(n/2) + Θ(1). It also falls in case 2 as c is 0 and Log_b(a) is also 0. So the
solution is Θ(Log n). 
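Two more illustrative recurrences (chosen here as additional examples, not from the list above):
T(n) = 8T(n/2) + n: here a = 8, b = 2, so Log_b(a) = 3; since f(n) = n = O(n^1) and 1 < 3, case 1 gives T(n) = Θ(n³).
T(n) = 2T(n/2) + n²: here Log_b(a) = 1; since f(n) = Θ(n²), 2 > 1 and the regularity condition holds, so case 3 gives T(n) = Θ(n²).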
Notes: 
1) It is not necessary that a recurrence of the form T(n) = aT(n/b) + f(n) can be solved using the Master
Theorem. The given three cases have some gaps between them. For example, the recurrence T(n) =
2T(n/2) + n/Logn cannot be solved using the master method. 
2) Case 2 can be extended for f(n) = Θ(n^c * Log^k(n)):
If f(n) = Θ(n^c * Log^k(n)) for some constant k >= 0 and c = Log_b(a), then T(n) = Θ(n^c * Log^(k+1)(n)) 
Pros and Cons of recursive programming
Pros:

 Recursion can reduce time complexity.
 Recursion adds clarity and reduces the time needed to write and debug code.
 Recursion can lead to more readable and efficient algorithm descriptions.
 Recursion is a useful way of defining things that have a repeated, similar structural
form, like tree traversal.
Cons:

 Recursion uses more memory because each call is kept on the run-time stack.
 Recursion can be slow.
 If recursion is too deep, then there is a danger of running out of space on the stack
and ultimately the program crashes.
