Greedy Algorithms

1. Divide and conquer algorithms break problems into smaller subproblems, solve the subproblems recursively, and combine the solutions. 2. Recurrence relations define algorithm runtimes using a recursive formula relating the time for a problem of size n to the time for subproblems of smaller size. 3. For binary search, the problem is broken in half at each step. The recurrence relation is T(n) = T(n/2) + c, where c is constant work. This yields a runtime of O(log n).

Uploaded by

Eduardo Alapisco

Divide and Conquer Algorithms

A Recursive Solution

First we devise a recursive solution to our problem, one that solves the problem in terms of smaller
instances of itself. The recursion continues until the entire problem is solved.

We also need to handle a base case correctly: the condition under which the recursion stops and
returns an answer.

Recurrence relation

In order to examine the runtime of our recursive algorithm it's often useful to define the time that the
algorithm takes in the form of a recurrence relation.

A recurrence relation defines a sequence of values in terms of a recursive formula.

The example here shows the recursive definition of the values in the Fibonacci sequence.

You can see that we define the value of the nth Fibonacci number as the sum of the two preceding values: F(n) = F(n-1) + F(n-2).

As with any recurrence definition, we need one or more base cases. Here, the base cases are
F(0) = 0 and F(1) = 1.

From this recursive definition, we've defined values for evaluating F(n) for any non-negative integer, n.
The sequence starts with 0, 1, 1, 2, 3, 5, 8, and continues on.
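The recursive definition above translates directly into code. This is a minimal sketch (the function name is my own, not from the lecture):

```python
def fib(n):
    """Return the nth Fibonacci number via the recursive definition."""
    # Base cases: F(0) = 0 and F(1) = 1.
    if n <= 1:
        return n
    # Recursive case: F(n) = F(n-1) + F(n-2).
    return fib(n - 1) + fib(n - 2)

# The sequence begins 0, 1, 1, 2, 3, 5, 8, ...
print([fib(i) for i in range(7)])  # [0, 1, 1, 2, 3, 5, 8]
```

Note that this direct translation recomputes the same subproblems many times, so it is exponentially slow for large n; it is shown only to mirror the recurrence.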
When we're doing run-time analysis for divide and conquer algorithms, we usually define a recurrence
relation for T(n), where T(n) stands for the worst-case time taken by the algorithm and n is the size of
the problem.

For example, in the linear search algorithm, the worst-case time is when an element isn't found,
because we must check every element of the array. In this case we have a recurrence for a problem of
size n which consists of a subproblem of size n - 1 plus a constant amount of work.

The constant amount of work includes checking high versus low, checking whether A[low] equals the key,
preparing the parameters for the recursive call, and then returning the result of that call. Thus the
recurrence is T(n) = T(n - 1) + c, where c is some constant.

The base case of the recursion is an empty array, which requires only a constant amount of work:
checking high less than low and then returning not found. Thus T(0) = c.
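The recursive linear search described above might be sketched like this (the function and sentinel names are my own, not from the lecture):

```python
# Sentinel for "key not present"; the name and value are illustrative.
NOT_FOUND = -1

def linear_search(A, low, high, key):
    """Recursively search A[low..high] for key; return its index or NOT_FOUND."""
    # Base case: empty range, constant work -> T(0) = c.
    if high < low:
        return NOT_FOUND
    # Constant work: compare the first element of the range to the key.
    if A[low] == key:
        return low
    # Recurse on a subproblem of size n - 1 -> T(n) = T(n - 1) + c.
    return linear_search(A, low + 1, high, key)

print(linear_search([3, 1, 4, 1, 5], 0, 4, 4))  # index of the first 4: 2
```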
Recursion Tree

Let's look at a recursion tree in order to determine how much total time the algorithm takes. As usual,
we're looking at the worst-case run time, which occurs when no matching element is found.

In the recursion tree, we show each problem along with its size.

We see that we have an original problem of size n, which then generates a subproblem of size n - 1, and so on,
all the way down to a problem of size zero. The work column shows the amount of work that is done at
each level. We have a constant amount of work at each level, which we represent by a constant, c.

Alternatively, we could have represented this constant amount of work with big theta of one.

The total work is just the sum of the work done at each level: a summation from 0 to n of a
constant c, which is (n + 1) times c, or just big theta of n.
To summarize, what we've done is:

1. Create a recursive solution.

2. Define the corresponding recurrence relation, T(n).

3. Solve T(n) to determine the worst-case runtime.

4. Create an iterative solution from the recursive one.

Solving Binary Search

First, our base case. If we have an empty array, that is, if high is less than low (so no elements), then
we return low - 1.

Otherwise, we calculate the midpoint: something halfway between low and high. We take the width,
which is high - low, cut it in half by dividing by 2, and then add that to low. That might not be an
integer, because (high - low) divided by 2 may have a fractional part, so we take the floor of the result.
Now we check whether the element at that midpoint equals our key. If so, we're done, and we return the
midpoint. If not, the good news is that we don't have to check all the other elements; we've ruled out
half of them. If the key is less than the midpoint element, then we can ignore all the upper elements, so
we return the BinarySearch of A from low to mid - 1, completely ignoring the upper half. Otherwise, the
key is greater than the midpoint element, and we can throw away the lower half and search from
mid + 1 all the way to high.
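The steps above can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
def binary_search(A, low, high, key):
    """Search sorted A[low..high] for key; return its index,
    or low - 1 when the range is empty (key not found)."""
    # Base case: empty array (high < low).
    if high < low:
        return low - 1
    # Midpoint: low plus half the width, floored via integer division.
    mid = low + (high - low) // 2
    if A[mid] == key:
        return mid
    if key < A[mid]:
        # The key can only be in the lower half; ignore the upper half.
        return binary_search(A, low, mid - 1, key)
    # The key can only be in the upper half; ignore the lower half.
    return binary_search(A, mid + 1, high, key)

A = [1, 3, 5, 7, 9]
print(binary_search(A, 0, len(A) - 1, 7))  # 3
```

Writing the midpoint as low + (high - low) // 2 rather than (low + high) // 2 is a common defensive choice: the two are equivalent here, but the former avoids overflow in languages with fixed-width integers.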

Summary

1. We've broken our problem into non-overlapping subproblems of the same type.

2. We've recursively solved the subproblems.

3. We've combined the results of those subproblems.

Binary Search Recurrence Relation

So what's our recurrence relation for the worst-case run time? Well, the worst case is when we don't
find the element. So we're looking at T(n) = T(floor(n/2)) + c. We take the floor of n/2 because n may
be odd. The constant c covers calculating the midpoint, as well as checking the midpoint element
against the key. And then our base case is when we have an empty array, which takes just a constant
amount of time to check.

Binary Search Runtime

We've got our original size n, and we're going to break it down: n/2, n/4, all the way down. How
many of these problems are there? Well, if we're cutting something in half over and over again, it
takes log base 2 of n halvings until we get down to 1. So the total number of levels is log base 2 of n,
plus 1. At each level, we're doing c work. So the total amount of work, if we sum it, is the sum from
i = 0 to log base 2 of n of c.

That is (log base 2 of n, plus 1) times c.

And that is just theta of log base 2 of n, but what we'd normally say is theta of log n, because the base
doesn't matter; it's just a constant multiplicative factor.
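To see the logarithmic depth concretely, here's a quick sketch that counts the recursive calls made by an unsuccessful binary search (the instrumentation and names are my own additions, not part of the standard algorithm):

```python
import math

def binary_search_depth(A, low, high, key, calls=0):
    """Binary search instrumented to count recursive calls."""
    if high < low:
        return low - 1, calls
    mid = low + (high - low) // 2
    if A[mid] == key:
        return mid, calls
    if key < A[mid]:
        return binary_search_depth(A, low, mid - 1, key, calls + 1)
    return binary_search_depth(A, mid + 1, high, key, calls + 1)

n = 1024
A = list(range(n))
# An absent key forces the worst case: recurse all the way to an empty range.
_, calls = binary_search_depth(A, 0, n - 1, -1)
print(calls, math.log2(n))  # the call depth tracks log base 2 of n
```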
