
Design and Analysis of Algorithms Mid-Term Equivalent Assignment


DESIGN AND

ANALYSIS OF ALGORITHMS

MID-TERM EQUIVALENT ASSIGNMENT

Submitted To:

Submitted By:

Roll No:

Dated:
Solutions

3-1 Asymptotic behavior of polynomials

a. If we pick any c > a_d, then, since k ≥ d, the difference cn^k − p(n) tends to infinity; in particular, there is an n_0 so that for every n ≥ n_0 it is positive, so we can add p(n) to both sides to get p(n) ≤ cn^k. Hence p(n) = O(n^k). (The restriction c > a_d matters only when k = d; for k > d any c > 0 works.)
b. If we pick any c with 0 < c < a_d, then, since k ≤ d, the difference p(n) − cn^k tends to infinity; in particular, there is an n_0 so that for every n ≥ n_0 it is positive, so we can add cn^k to both sides to get p(n) ≥ cn^k. Hence p(n) = Ω(n^k).
c. We have by the previous parts that p(n) = O(n^k) and p(n) = Ω(n^k). So, by Theorem 3.1, we have that p(n) = Θ(n^k).
d. Since k > d,
$$\lim_{n\to\infty} \frac{p(n)}{n^k} = \lim_{n\to\infty} \frac{n^d\,(a_d + o(1))}{n^k} \le \lim_{n\to\infty} \frac{2a_d\,n^d}{n^k} = 2a_d \lim_{n\to\infty} n^{d-k} = 0,$$
so p(n) = o(n^k).
e. Since k < d and p(n) ≥ (a_d/2)·n^d for all sufficiently large n,
$$\lim_{n\to\infty} \frac{n^k}{p(n)} \le \frac{2}{a_d} \lim_{n\to\infty} \frac{n^k}{n^d} = \frac{2}{a_d} \lim_{n\to\infty} n^{k-d} = 0,$$
so p(n) = ω(n^k).
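As a concrete instance of part (c), take the hypothetical polynomial p(n) = 3n^2 + 5n + 7 with k = d = 2; the constants below are one valid choice, not the only one:
$$3n^2 \;\le\; 3n^2 + 5n + 7 \;\le\; 4n^2 \quad\text{for all } n \ge 7,$$
so c_1 = 3, c_2 = 4, and n_0 = 7 witness p(n) = Θ(n^2).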

3-4 Asymptotic notation properties

a. False. Counterexample: n = O(n^2) but n^2 ≠ O(n).

b. False. Counterexample: with f(n) = n and g(n) = n^2, we have n + n^2 ≠ Θ(min(n, n^2)) = Θ(n).

c. True. Since f(n) = O(g(n)), there exist c and n_0 such that n ≥ n_0 implies f(n) ≤ c·g(n); by assumption we also have f(n) ≥ 1 and log(g(n)) ≥ 1 for sufficiently large n. This means that log(f(n)) ≤ log(c·g(n)) = log(c) + log(g(n)). Note that the inequality is preserved after taking logs because f(n) ≥ 1. Now we need to find d such that log(f(n)) ≤ d·log(g(n)). It suffices to make log(c) + log(g(n)) ≤ d·log(g(n)), which is achieved by taking d = log(c) + 1, since log(g(n)) ≥ 1.

d. False. Counterexample: 2n = O(n) but 2^(2n) ≠ O(2^n), as shown in exercise 3.1-4.

e. False. Counterexample: Let f(n) = 1/n. Suppose that c is such that 1/n ≤ c·(1/n^2) for all n ≥ n_0. Choose k such that kc ≥ n_0 and k > 1. Then taking n = kc gives 1/(kc) ≤ c/(k^2c^2) = 1/(k^2c), which implies k ≤ 1, a contradiction.

f. True. Since f(n) = O(g(n)), there exist c and n_0 such that n ≥ n_0 implies f(n) ≤ c·g(n). Thus g(n) ≥ (1/c)·f(n), so g(n) = Ω(f(n)).

g. False. Counterexample: Let f(n) = 2^(2n). Then f(n/2) = 2^n, and by exercise 3.1-4, 2^(2n) ≠ O(2^n).


h. True. Let g be any function such that g(n) = o(f(n)). Since g is asymptotically nonnegative, let n_0 be such that n ≥ n_0 implies g(n) ≥ 0. Then f(n) + g(n) ≥ f(n), so f(n) + o(f(n)) = Ω(f(n)). Next, choose n_1 such that n ≥ n_1 implies g(n) ≤ f(n). Then f(n) + g(n) ≤ f(n) + f(n) = 2f(n), so f(n) + o(f(n)) = O(f(n)). By Theorem 3.1, this implies f(n) + o(f(n)) = Θ(f(n)).
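For a concrete instance of part (c), take the illustrative choice f(n) = 2n and g(n) = n, so f(n) = O(g(n)) with c = 2:
$$\lg f(n) = 1 + \lg n \;\le\; 2\lg n = (\lg c + 1)\,\lg g(n) \quad\text{for } n \ge 2,$$
matching d = log(c) + 1 = 2 from the argument above.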

4-1 Recurrence examples

a. By the master theorem, the recurrence T(n) = 2T(n/2) + n^4 gives T(n) ∈ Θ(n^4).

b. By the master theorem, the recurrence T(n) = T(7n/10) + n gives T(n) ∈ Θ(n).
c. By the master theorem, the recurrence T(n) = 16T(n/4) + n^2 gives T(n) ∈ Θ(n^2 lg(n)).
d. By the master theorem, the recurrence T(n) = 7T(n/3) + n^2 gives T(n) ∈ Θ(n^2).
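Spelling out the case analysis for part (a):
$$a = 2,\quad b = 2,\quad n^{\log_b a} = n^{\log_2 2} = n, \qquad f(n) = n^4 = \Omega(n^{1+\epsilon}) \text{ for } \epsilon = 3,$$
$$a\,f(n/b) = 2\,(n/2)^4 = \tfrac{1}{8}n^4 \le c\,n^4 \text{ with } c = \tfrac{1}{8} < 1, \qquad \text{so case 3 gives } T(n) = \Theta(n^4).$$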
4-4 Fibonacci numbers

a. Recall that F_0 = 0, F_1 = 1, and F_i = F_{i−1} + F_{i−2} for i ≥ 2. Then
$$F(z) = \sum_{i=0}^{\infty} F_i z^i = F_0 + F_1 z + \sum_{i=2}^{\infty} (F_{i-1} + F_{i-2}) z^i = z + z\sum_{i=2}^{\infty} F_{i-1} z^{i-1} + z^2 \sum_{i=2}^{\infty} F_{i-2} z^{i-2} = z + z\sum_{i=1}^{\infty} F_i z^i + z^2 \sum_{i=0}^{\infty} F_i z^i = z + zF(z) + z^2 F(z).$$

b. Manipulating the equation given in part (a), we have F(z) − zF(z) − z^2 F(z) = z, so factoring and dividing gives
$$F(z) = \frac{z}{1 - z - z^2} = \frac{z}{(1-\phi z)(1-\hat{\phi} z)} = \frac{1}{\sqrt{5}}\left(\frac{1}{1-\phi z} - \frac{1}{1-\hat{\phi} z}\right),$$
where the quadratic formula factors the denominator as 1 − z − z^2 = (1 − φz)(1 − φ̂z), and the final equality comes from a partial fraction decomposition.

c. From part (b) and our knowledge of geometric series, we have
$$F(z) = \frac{1}{\sqrt{5}}\left(\frac{1}{1-\phi z} - \frac{1}{1-\hat{\phi} z}\right) = \frac{1}{\sqrt{5}}\left(\sum_{i=0}^{\infty}(\phi z)^i - \sum_{i=0}^{\infty}(\hat{\phi} z)^i\right) = \sum_{i=0}^{\infty} \frac{1}{\sqrt{5}}\,(\phi^i - \hat{\phi}^i)\, z^i.$$

d. From the definition of the generating function, F_i is the coefficient of z^i in F(z). By part (c) this is given by (φ^i − φ̂^i)/√5. Since |φ̂| < 1, we have |φ̂^i/√5| ≤ 1/√5 < 1/2. Finally, since the Fibonacci numbers are integers, the exact solution must be the approximation φ^i/√5 rounded to the nearest integer.
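A quick numerical sanity check of part (d): rounding φ^i/√5 to the nearest integer reproduces F_i. This is a minimal sketch (the function names are made up), and double-precision arithmetic limits the check to roughly i ≤ 70.

import math

def fib_closed_form(i):
    """Approximate F_i as phi**i / sqrt(5) rounded to the nearest integer (part d)."""
    phi = (1 + math.sqrt(5)) / 2
    return round(phi**i / math.sqrt(5))

def fib_exact(i):
    """Exact F_i via the defining recurrence F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(i):
        a, b = b, a + b
    return a

# The closed form matches the recurrence for every index tested.
assert all(fib_closed_form(i) == fib_exact(i) for i in range(71))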

5-1 Probabilistic counting

a. We will show that the expected increase from each INCREMENT operation is equal to one. Suppose that the value of the counter is currently i. Then we increase the number represented from n_i to n_{i+1} with probability 1/(n_{i+1} − n_i), leaving the value alone otherwise. The expected increase is therefore
$$(n_{i+1} - n_i)\cdot\frac{1}{n_{i+1} - n_i} = 1.$$

b. For this choice of n_i, at each INCREMENT operation the probability that we change the value of the counter is 1/100. Since this is constant with respect to the current value of the counter i, the final counter value follows a binomial distribution with p = 0.01. The variance of a binomial distribution is np(1 − p), and since each success is worth 100 in represented value, the variance of the represented value is 100^2 · n · 0.01 · 0.99 = 99n.
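A simulation of part (b)'s scheme (n_i = 100i), as a sketch with made-up trial counts, shows the sample mean of the represented value tracking the true count and the sample variance near 99n:

import random

def probabilistic_count(n_events, step=100):
    """Counter from part (b): n_i = step * i, so each INCREMENT bumps the
    stored counter with probability 1/step; the represented value is step * i."""
    i = 0
    for _ in range(n_events):
        if random.random() < 1 / step:
            i += 1
    return step * i  # value represented by the counter

n, trials = 10_000, 2_000
samples = [probabilistic_count(n) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / (trials - 1)
print(f"mean ~ {mean:.1f} (expect {n}), variance ~ {var:.1f} (expect {99 * n / 100 * 100:.1f})")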

5-2 Searching an unsorted array

a. Assume that A has n elements. Our algorithm uses an array P to track which indices have been seen, adding to a counter c each time a new index is checked. Once this counter reaches n, we know that every index has been checked. Let RI(A) be the function that returns a random index of A.

Algorithm 2 RANDOM-SEARCH(A, x)
1: initialize an array P of size n containing all zeros
2: initialize integers c and i to 0
3: while c ≠ n do
4:   i = RI(A)
5:   if A[i] == x then
6:     return i
7:   end if
8:   if P[i] == 0 then
9:     P[i] = 1
10:    c = c + 1
11:  end if
12: end while
13: return "A does not contain x"
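A direct Python translation of RANDOM-SEARCH, as a sketch (the sentinel return value None is an arbitrary choice):

import random

def random_search(A, x):
    """Pick random indices until x is found or all n positions have been seen.
    Returns an index of x, or None if A does not contain x (part a)."""
    n = len(A)
    seen = [False] * n   # the array P from the pseudocode
    c = 0                # number of distinct indices checked so far
    while c != n:
        i = random.randrange(n)  # RI(A)
        if A[i] == x:
            return i
        if not seen[i]:
            seen[i] = True
            c += 1
    return None  # A does not contain x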

b. Let N be the random variable for the number of searches required. Then
$$E[N] = \sum_{i\ge 1} i\,\Pr\{i \text{ iterations are required}\} = \sum_{i\ge 1} i\left(\frac{n-1}{n}\right)^{i-1}\frac{1}{n} = \frac{1}{n}\cdot\frac{1}{\left(1 - \frac{n-1}{n}\right)^2} = n.$$

c. Let N be the random variable for the number of searches required. Then
$$E[N] = \sum_{i\ge 1} i\,\Pr\{i \text{ iterations are required}\} = \sum_{i\ge 1} i\left(\frac{n-k}{n}\right)^{i-1}\frac{k}{n} = \frac{k}{n}\cdot\frac{1}{\left(1 - \frac{n-k}{n}\right)^2} = \frac{n}{k}.$$

6-1 Building a heap using insertion

a. They do not. Consider the array A = ⟨3, 2, 1, 4, 5⟩. If we run BUILD-MAX-HEAP, we get ⟨5, 4, 1, 3, 2⟩. However, if we run BUILD-MAX-HEAP′, we get ⟨5, 4, 1, 2, 3⟩ instead.

b. Each insertion takes at most O(lg(n)); since we perform n insertions, we get a bound on the runtime of O(n lg(n)).
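The counterexample from part (a) can be checked directly; this is a 0-indexed sketch implementing both builds just for this test:

def max_heapify(A, i, n):
    """Standard MAX-HEAPIFY on A[0..n-1], 0-indexed."""
    l, r, largest = 2 * i + 1, 2 * i + 2, i
    if l < n and A[l] > A[largest]:
        largest = l
    if r < n and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, n)

def build_max_heap(A):
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))

def build_max_heap_prime(A):
    """BUILD-MAX-HEAP': insert elements one at a time, bubbling each up."""
    for end in range(1, len(A)):
        i = end
        while i > 0 and A[(i - 1) // 2] < A[i]:
            A[i], A[(i - 1) // 2] = A[(i - 1) // 2], A[i]
            i = (i - 1) // 2

a, b = [3, 2, 1, 4, 5], [3, 2, 1, 4, 5]
build_max_heap(a)        # -> [5, 4, 1, 3, 2]
build_max_heap_prime(b)  # -> [5, 4, 1, 2, 3]
print(a, b)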

6-2 Analysis of d-ary heaps

a. It will suffice to show how to access parent and child nodes. In a d-ary heap stored as an array, CHILD(k, i) = di − d + 1 + k gives the k-th child of the node indexed by i, and PARENT(i) = ⌈(i − 1)/d⌉; for d = 2 these reduce to the usual 2i, 2i + 1, and ⌊i/2⌋.

b. The height of a d-ary heap of n elements is within 1 of log_d(n): the node count of a heap of height h satisfies d^(h−1) < n < d^(h+1), so taking logs gives log_d(n) − 1 < h < log_d(n) + 1.

c. The following is an implementation of HEAP-EXTRACT-MAX for a d-ary heap, together with DMAX-HEAPIFY, the analog of MAX-HEAPIFY for a d-ary heap. HEAP-EXTRACT-MAX consists of constant-time operations followed by a call to DMAX-HEAPIFY. The number of times DMAX-HEAPIFY recursively calls itself is bounded by the height of the d-ary heap, and each call compares up to d children, so the running time is O(d log_d(n)). Note that the CHILD function is meant to be the one described in part (a).

Algorithm 3 HEAP-EXTRACT-MAX(A) for a d-ary heap

1: if A.heap-size < 1 then
2:   error “heap underflow”
3: end if
4: max = A[1]
5: A[1] = A[A.heap-size]
6: A.heap-size = A.heap-size − 1
7: DMAX-HEAPIFY(A, 1)
8: return max
Algorithm 4 DMAX-HEAPIFY(A, i)
1: largest = i
2: for k = 1 to d do
3:   if CHILD(k, i) ≤ A.heap-size and A[CHILD(k, i)] > A[largest] then
4:     largest = CHILD(k, i)
5:   end if
6: end for
7: if largest ≠ i then
8:   exchange A[i] with A[largest]
9:   DMAX-HEAPIFY(A, largest)
10: end if

d. The runtime of this implementation of INSERT is O(log_d(n)), since the while loop runs at most as many times as the height of the d-ary heap. Note that when we call PARENT, we mean it as defined in part (a).

Algorithm 5 INSERT(A, key)

1: A.heap-size = A.heap-size + 1
2: A[A.heap-size] = key
3: i = A.heap-size
4: while i > 1 and A[PARENT(i)] < A[i] do
5:   exchange A[i] with A[PARENT(i)]
6:   i = PARENT(i)
7: end while

e. This is identical to the implementation of HEAP-INCREASE-KEY for binary heaps, but with the PARENT function interpreted as in part (a). The runtime is O(log_d(n)), since the while loop runs at most as many times as the height of the d-ary heap.

Algorithm 6 INCREASE-KEY(A, i, key)

1: if key < A[i] then
2:   error “new key is smaller than current key”
3: end if
4: A[i] = key
5: while i > 1 and A[PARENT(i)] < A[i] do
6:   exchange A[i] with A[PARENT(i)]
7:   i = PARENT(i)
8: end while
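A compact 0-indexed Python sketch of parts (a)–(e); the class and method names are made up, and the index arithmetic shifts by one relative to the 1-indexed pseudocode above:

class DaryMaxHeap:
    """d-ary max-heap, 0-indexed: children of i are d*i+1 .. d*i+d,
    parent of i is (i-1)//d (the 0-indexed analog of part a)."""

    def __init__(self, d):
        self.d, self.a = d, []

    def _heapify(self, i):
        # DMAX-HEAPIFY: find the largest among i and its <= d children.
        largest = i
        for k in range(self.d * i + 1, min(self.d * i + self.d + 1, len(self.a))):
            if self.a[k] > self.a[largest]:
                largest = k
        if largest != i:
            self.a[i], self.a[largest] = self.a[largest], self.a[i]
            self._heapify(largest)

    def extract_max(self):            # O(d log_d n)
        if not self.a:
            raise IndexError("heap underflow")
        top = self.a[0]
        self.a[0] = self.a[-1]
        self.a.pop()
        if self.a:
            self._heapify(0)
        return top

    def insert(self, key):            # O(log_d n)
        self.a.append(key)
        i = len(self.a) - 1
        while i > 0 and self.a[(i - 1) // self.d] < self.a[i]:
            self.a[i], self.a[(i - 1) // self.d] = self.a[(i - 1) // self.d], self.a[i]
            i = (i - 1) // self.d

h = DaryMaxHeap(3)
for x in [5, 1, 9, 4, 7, 8]:
    h.insert(x)
print([h.extract_max() for _ in range(3)])  # [9, 8, 7]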

6-3 Young tableaus

a. One possible filling:

2   3   4   5
8   9   12  14
16  ∞   ∞   ∞
∞   ∞   ∞   ∞

b. For every i, j, we have Y[1, 1] ≤ Y[i, 1] ≤ Y[i, j]. So, if Y[1, 1] = ∞, then Y[i, j] = ∞ for every i, j, which means the tableau contains no elements; conversely, if the tableau is empty, then Y[1, 1] = ∞. Similarly, Y[m, n] is the largest entry: if Y is full, it has no entries labeled ∞, so in particular Y[m, n] ≠ ∞; and if Y[m, n] ≠ ∞, then every entry Y[i, j] ≤ Y[m, n] is finite, so Y is full.

c. EXTRACT-MIN(Y, i, j) extracts the minimum value from the Young tableau Y′ obtained by Y′[i′, j′] = Y[i′ + i − 1, j′ + j − 1]; calling it with i = j = 1 extracts the minimum of Y itself. Note that in running this algorithm, several accesses may be made out of bounds of Y; define these to return ∞. No store operations are made on out-of-bounds locations. Since the largest value of i + j that the algorithm can be called with is n + m, and this quantity increases by one with each recursive call, the runtime is bounded by O(n + m).

1: min = Y[i, j]
2: if Y[i, j + 1] = Y[i + 1, j] = ∞ then
3:   Y[i, j] = ∞
4:   return min
5: end if
6: if Y[i, j + 1] < Y[i + 1, j] then
7:   Y[i, j] = Y[i, j + 1]
8:   Y[i, j + 1] = min
9:   return EXTRACT-MIN(Y, i, j + 1)
10: else
11:   Y[i, j] = Y[i + 1, j]
12:   Y[i + 1, j] = min
13:   return EXTRACT-MIN(Y, i + 1, j)
14: end if

d. INSERT(Y, key) places key at position (m, n) and bubbles it up and to the left; out-of-bounds reads Y[0, j] and Y[i, 0] should be treated as −∞, so the loop guard fails at the boundary. Since i + j starts as n + m, decreases at each step, and is bounded below by 2, this program has runtime O(n + m).


1: i = m, j = n
2: Y[i, j] = key
3: while Y[i − 1, j] > Y[i, j] or Y[i, j − 1] > Y[i, j] do
4:   if Y[i − 1, j] < Y[i, j − 1] then
5:     swap Y[i, j] and Y[i, j − 1]
6:     j = j − 1
7:   else
8:     swap Y[i, j] and Y[i − 1, j]
9:     i = i − 1
10:  end if
11: end while
e. Place the n^2 elements into a Young tableau by calling the INSERT algorithm from part (d) on each. Then, call the EXTRACT-MIN algorithm from part (c) n^2 times to obtain the numbers in increasing order. Both operations take time at most O(n + n) = O(n) on an n × n tableau and are done n^2 times each, so the total runtime is O(n^3).
f. FIND(Y, key): begin at the bottom-left corner, i = m, j = 1. At each step, if Y[i, j] = key, return (i, j); if Y[i, j] > key, every entry in row i from column j onward is at least Y[i, j] > key, so move up (i = i − 1); otherwise Y[i, j] < key, so every entry in column j above row i is at most Y[i, j] < key, and we move right (j = j + 1). Each step decreases i or increases j, so after at most m + n steps we either find key or run off the tableau and report that key is not present, for a runtime of O(m + n).
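A 0-indexed Python sketch of parts (c), (d), and (f); the function names are made up, out-of-bounds reads return ∞ (for extraction) or −∞ (for insertion) as described above, and find walks from the bottom-left corner:

INF = float("inf")

def extract_min(Y, i=0, j=0):
    """Part c: remove and return Y[i][j], refilling the hole diagonally. O(m+n)."""
    m, n = len(Y), len(Y[0])
    get = lambda r, c: Y[r][c] if r < m and c < n else INF
    mn = Y[i][j]
    while True:
        down, right = get(i + 1, j), get(i, j + 1)
        if down == INF and right == INF:
            Y[i][j] = INF          # the hole has reached the boundary
            return mn
        if right < down:
            Y[i][j], j = right, j + 1   # pull the smaller neighbor up
        else:
            Y[i][j], i = down, i + 1

def insert(Y, key):
    """Part d: place key at Y[m-1][n-1] and bubble it up-left. O(m+n)."""
    i, j = len(Y) - 1, len(Y[0]) - 1
    Y[i][j] = key
    while True:
        up = Y[i - 1][j] if i > 0 else -INF
        left = Y[i][j - 1] if j > 0 else -INF
        if up <= Y[i][j] and left <= Y[i][j]:
            return
        if up > left:              # swap with the larger violating neighbor
            Y[i][j], Y[i - 1][j] = up, Y[i][j]
            i -= 1
        else:
            Y[i][j], Y[i][j - 1] = left, Y[i][j]
            j -= 1

def find(Y, key):
    """Part f: each step rules out one row or one column. O(m+n)."""
    i, j = len(Y) - 1, 0
    while i >= 0 and j < len(Y[0]):
        if Y[i][j] == key:
            return (i, j)
        if Y[i][j] > key:
            i -= 1   # everything in row i from column j on is too big
        else:
            j += 1   # everything in column j above row i is too small
    return None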

7-1 Hoare partition correctness


a. We will be calling HOARE-PARTITION with the parameters p = 1, r = |A| = 12. So, throughout, x = A[1] = 13. The array, together with the values of i and j, after each pass through the main loop:

A[1..12]                              i    j
13 19  9  5 12  8  7  4 11  2  6 21   0   13
 6 19  9  5 12  8  7  4 11  2 13 21   1   11
 6 13  9  5 12  8  7  4 11  2 19 21   2   10
 6 13  9  5 12  8  7  4 11  2 19 21  10    2

And we do indeed see that partition has moved the two elements that are bigger than the pivot, 19 and 21, to the two final positions in the array.

b. We know that at the beginning of the loop we have i < j: it is true initially so long as |A| ≥ 2, and if it were untrue at some iteration, we would have left the loop in the prior iteration. To show that we won't access outside of the array, we need to show that at the beginning of every run of the loop, there is a k > i so that A[k] ≥ x, and a k′ < j so that A[k′] ≤ x. This is clearly true for the first run, because initially i and j are outside the bounds of the array and the element x lies between the two. For later runs, since i < j, we can pick k = j and k′ = i: after the exchange on line 12 of the prior iteration, A[j] holds the value that stopped the i scan, so A[j] ≥ x, and similarly A[i] holds the value that stopped the j scan, so A[i] ≤ x.

c. If there is more than one run of the main loop, we have j < r, because j decreases by at least one with every iteration. Note that at line 11 in the first run of the loop, we have i = 1 = p, because A[p] = x ≥ x stops the i scan immediately. So, if we were to terminate after a single iteration of the main loop, the termination condition i ≥ j, together with the fact that the j scan stops at A[p] = x at the latest, gives j = 1 = p. In either case, p ≤ j < r.

d. We will show the loop invariant that all the elements in A[p..i] are less than or equal to x, which is less than or equal to all the elements of A[j..r]. It is trivially true prior to the first iteration because both of these ranges are empty. Suppose we have just finished an iteration of the loop during which j went from j1 to j2, and i went from i1 to i2. All the elements in A[i1+1..i2−1] were < x, because they did not stop the scan on lines 8–10. Similarly, all the elements in A[j2+1..j1−1] were > x. We also have A[i2] ≤ x ≤ A[j2] after the exchange on line 12. Lastly, by induction, all the elements in A[p..i1] are less than or equal to x, and all the elements in A[j1..r] are greater than or equal to x. Putting it all together, since A[p..i2] = A[p..i1] ∪ A[i1+1..i2−1] ∪ {A[i2]} and A[j2..r] = {A[j2]} ∪ A[j2+1..j1−1] ∪ A[j1..r], we have the desired inequality. Since at termination we have i ≥ j, we know that A[p..j] ⊆ A[p..i], and so every element of A[p..j] is less than or equal to x, which is less than or equal to every element of A[j+1..r] ⊆ A[j..r].
e. After running HOARE-PARTITION, we don't have the guarantee that the pivot value will be in position j. So, we scan through the list to find the pivot value, place it between the two subarrays with a swap, and recurse on the parts before and after it.
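A 0-indexed Python transcription of HOARE-PARTITION, as a sketch; note the quicksort wrapper below uses the standard recursion on A[p..j] and A[j+1..r] rather than the pivot-relocation variant described in part (e):

def hoare_partition(A, p, r):
    """HOARE-PARTITION on A[p..r]: returns j such that every element of
    A[p..j] <= x <= every element of A[j+1..r] (part d)."""
    x = A[p]
    i, j = p - 1, r + 1
    while True:
        j -= 1
        while A[j] > x:     # the repeat-until j scan
            j -= 1
        i += 1
        while A[i] < x:     # the repeat-until i scan
            i += 1
        if i < j:
            A[i], A[j] = A[j], A[i]
        else:
            return j

def quicksort(A, p=0, r=None):
    if r is None:
        r = len(A) - 1
    if p < r:
        q = hoare_partition(A, p, r)
        quicksort(A, p, q)      # the pivot value may lie in either half
        quicksort(A, q + 1, r)

A = [13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21]
quicksort(A)
print(A)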

7-3 Alternative quicksort analysis


a. Since the pivot is selected as a random element of the array, which has size n, the probabilities of any particular element being selected are all equal, and they add to one, so each is 1/n. As such, E[X_i] = Pr{the i-th smallest element is picked} = 1/n.
b. We can apply linearity of expectation over all of the events X_i. Suppose a particular X_i is true; then one of the subarrays has length i − 1, the other has length n − i, and we still need linear time to run the partition procedure. This corresponds exactly to the summand in equation (7.5).

c.
$$\begin{aligned}
E\left[\sum_{q=1}^{n} X_q\,\bigl(T(q-1) + T(n-q) + \Theta(n)\bigr)\right]
&= \sum_{q=1}^{n} E\left[X_q\,\bigl(T(q-1) + T(n-q) + \Theta(n)\bigr)\right]\\
&= \sum_{q=1}^{n} \frac{T(q-1) + T(n-q) + \Theta(n)}{n}\\
&= \Theta(n) + \frac{1}{n}\sum_{q=1}^{n}\bigl(T(q-1) + T(n-q)\bigr)\\
&= \Theta(n) + \frac{1}{n}\left(\sum_{q=1}^{n} T(q-1) + \sum_{q=1}^{n} T(n-q)\right)\\
&= \Theta(n) + \frac{2}{n}\sum_{q=1}^{n} T(q-1)
= \Theta(n) + \frac{2}{n}\sum_{q=0}^{n-1} T(q)
= \Theta(n) + \frac{2}{n}\sum_{q=2}^{n-1} T(q),
\end{aligned}$$
where the last step absorbs the constant-size terms T(0) and T(1) into the Θ(n).

d. We will prove this inequality in a different way than suggested by the hint. If we let f(k) = k lg(k), treated as a continuous function, then f′(k) = lg(k) + 1/ln(2) > 0 for k ≥ 2. Note that the summation written out is the left-hand approximation of the integral of f from 2 to n with step size 1. By integration by parts, the antiderivative of k lg(k) is
$$\frac{1}{\ln 2}\left(\frac{k^2}{2}\ln k - \frac{k^2}{4}\right) = \frac{k^2\lg k}{2} - \frac{k^2}{4\ln 2}.$$
Since f has a positive derivative over the entire interval of integration, the left-hand rule underapproximates the integral, so
$$\sum_{k=2}^{n-1} k\lg k \;\le\; \int_2^n k\lg k\,dk \;=\; \frac{n^2\lg n}{2} - \frac{n^2}{4\ln 2} - \left(2 - \frac{1}{\ln 2}\right) \;\le\; \frac{n^2\lg n}{2} - \frac{n^2}{8},$$
where the last inequality drops the positive constant 2 − 1/ln(2) and uses the fact that 1/(4 ln 2) > 1/8.

e. Assume by induction that E[T(q)] ≤ q lg(q) + Θ(n) for q < n. Combining (7.6) and (7.7), we have
$$\begin{aligned}
E[T(n)] &= \frac{2}{n}\sum_{q=2}^{n-1} E[T(q)] + \Theta(n)
\le \frac{2}{n}\sum_{q=2}^{n-1}\bigl(q\lg q + \Theta(n)\bigr) + \Theta(n)
\le \frac{2}{n}\sum_{q=2}^{n-1} q\lg q + \frac{2}{n}\,n\,\Theta(n) + \Theta(n)\\
&\le \frac{2}{n}\left(\frac{1}{2}n^2\lg n - \frac{1}{8}n^2\right) + \Theta(n)
= n\lg n - \frac{n}{4} + \Theta(n)
= n\lg n + \Theta(n).
\end{aligned}$$
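Part (d)'s bound can be spot-checked numerically; a throwaway sketch:

import math

# Check sum_{k=2}^{n-1} k lg k <= n^2 lg(n)/2 - n^2/8 for a few n (part d).
for n in [4, 16, 128, 1024]:
    lhs = sum(k * math.log2(k) for k in range(2, n))
    rhs = n * n * math.log2(n) / 2 - n * n / 8
    print(n, lhs <= rhs)  # prints True for each n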

7-4 Stack depth for quicksort


a. We'll proceed by induction. For the base case, if A contains 1 element, then p = r, so the algorithm terminates immediately, leaving a single sorted element. Now suppose that for 1 ≤ k ≤ n − 1, TAIL-RECURSIVE-QUICKSORT correctly sorts an array containing k elements, and let A have size n. We set q equal to the pivot position, and by the induction hypothesis, TAIL-RECURSIVE-QUICKSORT correctly sorts the left subarray, which is of strictly smaller size. Next, p is updated to q + 1, and the exact same sequence of steps follows as if we had originally called TAIL-RECURSIVE-QUICKSORT(A, q+1, n). Again, this subarray is of strictly smaller size, so by the induction hypothesis it correctly sorts A[q+1..n], as desired.

b. The stack depth will be Θ(n) if the input array is already sorted: the right subarray always has size 0, so there are n − 1 recursive calls before the while-condition p < r is violated.

c. We modify the algorithm to make the recursive call on the smaller subarray, to avoid pushing too much onto the stack. Since each recursive call then handles at most half of the remaining elements, the stack depth is Θ(lg n):
Algorithm 4 MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, p, r)
1: while p < r do
2:   q = PARTITION(A, p, r)
3:   if q − p < r − q then
4:     MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, p, q − 1)
5:     p = q + 1
6:   else
7:     MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, q + 1, r)
8:     r = q − 1
9:   end if
10: end while
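The same idea in Python, as a sketch; it assumes Lomuto PARTITION, since the problem does not fix the partition scheme:

def partition(A, p, r):
    """Lomuto partition on A[p..r]; returns the pivot's final index."""
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def modified_tail_recursive_quicksort(A, p=0, r=None):
    if r is None:
        r = len(A) - 1
    while p < r:
        q = partition(A, p, r)
        if q - p < r - q:                      # recurse on the smaller side,
            modified_tail_recursive_quicksort(A, p, q - 1)
            p = q + 1                          # then loop on the larger side
        else:
            modified_tail_recursive_quicksort(A, q + 1, r)
            r = q - 1

A = [5, 2, 9, 1, 7, 3]
modified_tail_recursive_quicksort(A)
print(A)  # [1, 2, 3, 5, 7, 9]

Because the explicit recursion always gets the side with at most half the elements, the recursion depth, and hence the stack depth, is O(lg n).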

7-6 Fuzzy sorting of intervals


a. Our algorithm will be essentially the same as the modified randomized quicksort of problem 7-2. We sort on the a_i's, but we replace the comparison operators with checks for overlapping intervals. The only modifications needed are made in PARTITION, so we will just rewrite that here. For a given element x in position i, we'll use x.a and x.b to denote a_i and b_i respectively.
Algorithm 5 FUZZY-PARTITION(A, p, r)
1: x = A[r]
2: exchange A[r] with A[p]
3: i = p − 1
4: k = p
5: for j = p + 1 to r − 1 do
6:   if A[j].b < x.a then
7:     i = i + 1
8:     k = i + 2
9:     exchange A[i] with A[j]
10:    exchange A[k] with A[j]
11:  end if
12:  if A[j].b ≥ x.a or A[j].a ≤ x.b then
13:    x.a = max(A[j].a, x.a) and x.b = min(A[j].b, x.b)
14:    k = k + 1
15:    exchange A[k] with A[j]
16:  end if
17: end for
18: exchange A[i + 1] with A[r]
19: return i + 1 and k + 1

When intervals overlap, we treat them as equal elements, thus cutting down on the time required to sort.

b. For distinct intervals the algorithm runs exactly as regular quicksort does, so its expected runtime is Θ(n lg n) in general. If all of the intervals overlap, then the condition on line 12 is satisfied in every iteration of the for loop. The algorithm then returns p and r, so only empty subarrays remain to be sorted. FUZZY-PARTITION is called only a single time, and since its runtime remains Θ(n), the total expected runtime is Θ(n).
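As a self-contained illustration of the idea (a sketch, not a transcription of the FUZZY-PARTITION pseudocode above): a three-way quicksort where the pivot interval shrinks to the intersection of every interval placed in the middle block, so all intervals there share a common point and need no further sorting.

import random

def fuzzy_sort(items, lo=0, hi=None):
    """Order intervals (a_i, b_i) so that nondecreasing representatives
    c_i in [a_i, b_i] exist; overlapping intervals count as equal (part a)."""
    if hi is None:
        hi = len(items)
    if hi - lo <= 1:
        return items
    a, b = items[random.randrange(lo, hi)]   # pivot interval
    less, equal, greater = [], [], []
    for x, y in items[lo:hi]:
        if x <= b and a <= y:
            # Overlaps the current pivot: shrink the pivot to the intersection,
            # so every 'equal' interval contains the final pivot interval.
            a, b = max(a, x), min(b, y)
            equal.append((x, y))
        elif y < a:
            less.append((x, y))
        else:
            greater.append((x, y))
    items[lo:hi] = less + equal + greater
    fuzzy_sort(items, lo, lo + len(less))
    fuzzy_sort(items, hi - len(greater), hi)
    return items

print(fuzzy_sort([(3, 7), (0, 2), (6, 9), (10, 12), (1, 1)]))

If every interval overlaps every other, the middle block absorbs the whole range and the recursion gets only empty pieces, matching the Θ(n) behavior from part (b).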
