Design and Analysis of Algorithms Mid-Term Equivalent Assignment
ANALYSIS OF ALGORITHMS
Submitted To:
Submitted By:
Roll No:
Dated:
Solutions
a. Pick any c > a_d. Since k ≥ d, the end behavior of cn^k − p(n) goes to infinity; in particular, there is an n0 such that cn^k − p(n) > 0 for every n ≥ n0. Adding p(n) to both sides gives p(n) < cn^k for all n ≥ n0, so p(n) = O(n^k).
b. Pick any c with 0 < c < a_d. Since k ≤ d, the end behavior of p(n) − cn^k goes to infinity; in particular, there is an n0 such that p(n) − cn^k > 0 for every n ≥ n0. Adding cn^k to both sides gives p(n) > cn^k for all n ≥ n0, so p(n) = Ω(n^k).
c. We have by the previous parts that p(n) = O(n^k) and p(n) = Ω(n^k). So, by Theorem 3.1, we have that p(n) = Θ(n^k).
d. lim_{n→∞} p(n)/n^k = lim_{n→∞} n^d(a_d + o(1))/n^k ≤ lim_{n→∞} 2a_d n^d/n^k = 2a_d lim_{n→∞} n^{d−k} = 0, since d < k. Hence p(n) = o(n^k).
e. lim_{n→∞} n^k/p(n) = lim_{n→∞} n^k/(n^d(a_d + o(1))) ≤ lim_{n→∞} 2n^k/(a_d n^d) = (2/a_d) lim_{n→∞} n^{k−d} = 0, since k < d. Hence p(n) = ω(n^k).
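As a small numerical illustration of parts (d) and (e), the following Python check (my own addition, not part of the solution) shows p(n)/n^k tending to 0 when k > d and growing without bound when k < d:

```python
def p(n, coeffs):
    """Evaluate p(n) = sum_i coeffs[i] * n**i, so the degree d = len(coeffs) - 1."""
    return sum(a * n ** i for i, a in enumerate(coeffs))

coeffs = [7, -3, 2]          # p(n) = 2n^2 - 3n + 7, degree d = 2
for n in (10, 100, 1000, 10_000):
    print(n, p(n, coeffs) / n ** 3, p(n, coeffs) / n ** 1)
# first column (k = 3 > d) tends to 0; second column (k = 1 < d) grows without bound
```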
c. True. Since f(n) = O(g(n)) there exist c and n0 such that n ≥ n0 implies f(n) ≤ cg(n) and f(n) ≥ 1. This
means that log(f(n)) ≤ log(cg(n)) = log(c) + log(g(n)). Note that the inequality is preserved after taking
logs because f(n) ≥ 1. Now we need to find d such that log(f(n)) ≤ d log(g(n)). It will suffice to make log(c)
+ log(g(n)) ≤ d log(g(n)), which is achieved by taking d = log(c) + 1, since log(g(n)) ≥ 1.
e. False. Counterexample: let f(n) = 1/n. Suppose that c is such that 1/n ≤ c · (1/n²) for n ≥ n0. Choose k such that kc ≥ n0 and k > 1. Then this implies 1/(kc) ≤ c/(k²c²) = 1/(k²c), a contradiction since k > 1.
f. True. Since f(n) = O(g(n)) there exist c and n0 such that n ≥ n0 implies f(n) ≤ cg(n). Thus g(n) ≥ (1/c)f(n), so g(n) = Ω(f(n)).
a. Recall that F_0 = 0, F_1 = 1, and F_i = F_{i−1} + F_{i−2} for i ≥ 2. Then we have
F(z) = Σ_{i=0}^{∞} F_i z^i
     = F_0 + F_1 z + Σ_{i=2}^{∞} (F_{i−1} + F_{i−2}) z^i
     = z + z Σ_{i=2}^{∞} F_{i−1} z^{i−1} + z² Σ_{i=2}^{∞} F_{i−2} z^{i−2}
     = z + z Σ_{i=1}^{∞} F_i z^i + z² Σ_{i=0}^{∞} F_i z^i
     = z + zF(z) + z²F(z).
b. Manipulating the equation given in part (a) we have F(z) − zF(z) − z²F(z) = z, so factoring and dividing gives F(z) = z / (1 − z − z²). Factoring the denominator with the quadratic formula shows 1 − z − z² = (1 − φz)(1 − φ̂z), where φ = (1 + √5)/2 and φ̂ = (1 − √5)/2, and the final equality F(z) = (1/√5)(1/(1 − φz) − 1/(1 − φ̂z)) comes from a partial fraction decomposition.
c. From part (b) and our knowledge of geometric series we have
F(z) = (1/√5) (1/(1 − φz) − 1/(1 − φ̂z))
     = (1/√5) (Σ_{i=0}^{∞} (φz)^i − Σ_{i=0}^{∞} (φ̂z)^i)
     = Σ_{i=0}^{∞} (1/√5)(φ^i − φ̂^i) z^i.
d. From the definition of the generating function, F_i is the coefficient of z^i in F(z). By part (c) this is given by (1/√5)(φ^i − φ̂^i). Since |φ̂| < 1, we have |φ̂^i/√5| ≤ 1/√5 < 1/2 for every i ≥ 0. Finally, since the Fibonacci numbers are integers, the exact value must be the approximation φ^i/√5 rounded to the nearest integer.
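As a quick sanity check of part (d), the following Python sketch (my own addition; the function names are chosen for illustration) compares round(φ^i/√5) with Fibonacci numbers computed from the recurrence:

```python
from math import sqrt

def fib_exact(n):
    """Fibonacci numbers from the defining recurrence F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_rounded(n):
    """Part (d): F_n is phi^n / sqrt(5) rounded to the nearest integer."""
    phi = (1 + sqrt(5)) / 2
    return round(phi ** n / sqrt(5))

# The two agree until floating-point error in phi**n becomes significant.
for i in range(30):
    assert fib_exact(i) == fib_rounded(i)
print("rounded closed form matches the recurrence for i = 0..29")
```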
a. We will show that the expected increase in the represented value from each INCREMENT operation is equal to one. Suppose that the counter currently holds i, representing the value n_i. Then INCREMENT changes the represented value from n_i to n_{i+1} with probability 1/(n_{i+1} − n_i), and leaves it alone otherwise. The expected increase is therefore (n_{i+1} − n_i) · 1/(n_{i+1} − n_i) = 1.
b. For this choice of n_i, at each INCREMENT operation the probability that we change the value of the counter is 1/100. Since this is a constant with respect to the current value of the counter i, the number of times the counter is bumped after n operations follows a binomial distribution with p = 0.01. Since the variance of a binomial distribution is np(1 − p), and each success is worth 100 here, the variance of the represented value is 100² · n · 0.01 · 0.99 = 99n.
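A small simulation (my own addition, not part of the solution) of the n_i = 100i counter illustrates both parts: the mean represented value tracks the number of INCREMENT calls, and the empirical variance is close to 99n.

```python
import random
from statistics import mean, pvariance

def probabilistic_count(n_increments, step=100):
    """Counter representing n_i = step * i: each INCREMENT bumps the
    counter with probability 1/step, so the expected increase is 1."""
    i = 0
    for _ in range(n_increments):
        if random.random() < 1.0 / step:
            i += 1
    return step * i  # value represented by the counter

trials = [probabilistic_count(10_000) for _ in range(1_000)]
print("mean represented value:", mean(trials))   # close to 10_000
print("variance:", pvariance(trials))            # close to 99 * 10_000
```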
a. Assume that A has n elements. Our algorithm uses an array P to track which indices have already been probed, and increments a counter c each time a new index is seen. Once this counter reaches n, we know that every index has been checked. Let RI(A) be a function that returns a uniformly random index of A.
Algorithm 2 RANDOM-SEARCH(A, x)
1: initialize an array P of size n containing all zeros
2: c = 0
3: while c ≠ n do
4:   i = RI(A)
5:   if A[i] == x then
6:     return i
7:   end if
8:   if P[i] == 0 then
9:     P[i] = 1
10:    c = c + 1
11:  end if
12: end while
13: return "A does not contain x"
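A direct Python rendering of this sketch (the variable names are my own, not from the pseudocode) might look like:

```python
import random

def random_search(A, x):
    """Probe random indices of A until x is found or every index has
    been seen at least once (tracked by the array P)."""
    n = len(A)
    P = [0] * n          # P[i] == 1 once index i has been probed
    c = 0                # number of distinct indices probed so far
    while c != n:
        i = random.randrange(n)   # RI(A): a uniformly random index
        if A[i] == x:
            return i
        if P[i] == 0:
            P[i] = 1
            c += 1
    return None          # A does not contain x

print(random_search([5, 3, 8, 1], 8))   # prints 2
print(random_search([5, 3, 8, 1], 7))   # prints None
```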
b. Let N be the random variable for the number of searches required. Then
E[N] = Σ_{i≥1} i · P(i iterations are required) = Σ_{i≥1} i ((n−1)/n)^{i−1} (1/n) = (1/n) · 1/(1 − (n−1)/n)² = n.
c. Let N be the random variable for the number of searches required. Then
E[N] = Σ_{i≥1} i · P(i iterations are required) = Σ_{i≥1} i ((n−k)/n)^{i−1} (k/n) = (k/n) · 1/(1 − (n−k)/n)² = n/k.
a. They do not. Consider the array A = ⟨3, 2, 1, 4, 5⟩. If we run BUILD-MAX-HEAP, we get ⟨5, 4, 1, 3, 2⟩. However, if we run BUILD-MAX-HEAP', we get ⟨5, 4, 1, 2, 3⟩ instead.
b. Each insert step takes at most O(lg(n)); since we perform it n times, we get a bound on the runtime of O(n lg(n)).
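The following Python sketch (an illustrative 0-indexed rendering, not the book's pseudocode) reproduces part (a): building a max-heap by bottom-up heapification versus by repeated insertion gives different arrays for A = ⟨3, 2, 1, 4, 5⟩.

```python
def max_heapify(A, i, n):
    """Sift A[i] down within A[0:n] (0-indexed binary max-heap)."""
    while True:
        l, r, largest = 2 * i + 1, 2 * i + 2, i
        if l < n and A[l] > A[largest]:
            largest = l
        if r < n and A[r] > A[largest]:
            largest = r
        if largest == i:
            return
        A[i], A[largest] = A[largest], A[i]
        i = largest

def build_max_heap(A):
    """BUILD-MAX-HEAP: heapify the internal nodes bottom-up."""
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))

def build_max_heap_prime(A):
    """BUILD-MAX-HEAP': insert elements one at a time, bubbling each up."""
    for end in range(1, len(A)):
        i = end
        while i > 0 and A[(i - 1) // 2] < A[i]:
            A[i], A[(i - 1) // 2] = A[(i - 1) // 2], A[i]
            i = (i - 1) // 2

a, b = [3, 2, 1, 4, 5], [3, 2, 1, 4, 5]
build_max_heap(a)        # [5, 4, 1, 3, 2]
build_max_heap_prime(b)  # [5, 4, 1, 2, 3]
print(a, b)
```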
a. It will suffice to show how to access parent and child nodes. In a d-ary heap stored in a 1-indexed array, PARENT(i) = ⌈(i − 1)/d⌉ and CHILD(k, i) = di − d + 1 + k, where CHILD(k, i) gives the kth child (1 ≤ k ≤ d) of the node indexed by i.
d. The runtime of this implementation of INSERT is O(log_d n), since the while loop runs at most as many times as the height of the d-ary heap. Note that when we call PARENT, we mean it as defined in part (a).
Algorithm 5 INSERT(A, key)
1: A.heap-size = A.heap-size + 1
2: A[A.heap-size] = key
3: i = A.heap-size
4: while i > 1 and A[PARENT(i)] < A[i] do
5:   exchange A[i] with A[PARENT(i)]
6:   i = PARENT(i)
7: end while
e. This is identical to the implementation of HEAP-INCREASE-KEY for binary (2-ary) heaps, but with the PARENT function interpreted as in part (a). The runtime is O(log_d n), since the while loop runs at most as many times as the height of the d-ary heap.
Algorithm 6 INCREASE-KEY(A, i, key)
1: if key < A[i] then
2:   error "new key is smaller than current key"
3: end if
4: A[i] = key
5: while i > 1 and A[PARENT(i)] < A[i] do
6:   exchange A[i] with A[PARENT(i)]
7:   i = PARENT(i)
8: end while
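A 0-indexed Python sketch of the d-ary index arithmetic and these two operations (this rendering and its names are my own) may make the formulas concrete; note that in 0-indexed form the parent is (i − 1)/d rounded down and the kth child of i is di + k.

```python
def parent(i, d):
    """0-indexed d-ary heap: parent of node i."""
    return (i - 1) // d

def child(k, i, d):
    """0-indexed d-ary heap: k-th child (1 <= k <= d) of node i."""
    return d * i + k

def increase_key(A, i, key, d):
    """Raise A[i] to key and bubble it up toward the root (max-heap)."""
    if key < A[i]:
        raise ValueError("new key is smaller than current key")
    A[i] = key
    while i > 0 and A[parent(i, d)] < A[i]:
        A[i], A[parent(i, d)] = A[parent(i, d)], A[i]
        i = parent(i, d)

def insert(A, key, d):
    """INSERT via INCREASE-KEY on a new slot holding -infinity."""
    A.append(float("-inf"))
    increase_key(A, len(A) - 1, key, d)

heap = []
for x in [5, 9, 2, 7, 1, 8]:
    insert(heap, x, d=3)   # 3-ary max-heap
print(heap)                # root heap[0] is the maximum, 9
```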
a. The elements {9, 16, 3, 2, 4, 8, 5, 14, 12} arranged as a 4 × 4 Young tableau:

 2   3   4   5
 8   9  12  14
16   ∞   ∞   ∞
 ∞   ∞   ∞   ∞
b. For every i, j we have Y[1, 1] ≤ Y[i, 1] ≤ Y[i, j]. So, if Y[1, 1] = ∞, then Y[i, j] = ∞ for every i, j, which means the tableau contains no elements, i.e. it is empty. If Y is full, it has no entries equal to ∞; in particular, the entry Y[m, n] is not ∞.
c. EXTRACT-MIN(Y, i, j) extracts the minimum value from the Young tableau Y′ obtained by Y′[i′, j′] = Y[i′ + i − 1, j′ + j − 1]. Note that in running this algorithm, several accesses may be made out of bounds for Y; define these to return ∞. No store operations will be made on out-of-bounds locations. Since the largest value of i + j with which the algorithm can be called is n + m, and this quantity increases by one with each recursive call, the running time is bounded by O(n + m). The pseudocode is given as Algorithm 3 below.
Algorithm 3 EXTRACT-MIN(Y, i, j)
1: min = Y [i, j]
2: if Y [i, j + 1] = Y [i + 1, j] = ∞ then
3: Y [i, j] = ∞
4: return min
5: end if
6: if Y [i, j + 1] < Y [i + 1, j] then
7: Y [i, j] = Y [i, j + 1]
8: Y [i, j + 1] = min
9: return EXTRACT-MIN(Y, i, j + 1)
10: else
11: Y [i, j] = Y [i + 1, j]
12: Y [i + 1, j] = min
13: return EXTRACT-MIN(Y, i + 1, j)
14: end if
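A Python sketch of this recursive extraction (my own 0-indexed rendering, with math.inf standing in for ∞) is:

```python
import math

def extract_min(Y, i=0, j=0):
    """Recursive EXTRACT-MIN on a Young tableau stored as a list of lists
    (0-indexed), with math.inf marking empty slots."""
    m, n = len(Y), len(Y[0])

    def get(r, c):
        # Out-of-bounds reads behave as infinity, as in the solution text.
        return Y[r][c] if r < m and c < n else math.inf

    minimum = Y[i][j]
    right, below = get(i, j + 1), get(i + 1, j)
    if right == math.inf and below == math.inf:
        Y[i][j] = math.inf
        return minimum
    if right < below:
        Y[i][j] = right
        Y[i][j + 1] = minimum
        return extract_min(Y, i, j + 1)
    else:
        Y[i][j] = below
        Y[i + 1][j] = minimum
        return extract_min(Y, i + 1, j)

Y = [[2, 3, 12], [4, 8, 14], [5, 9, 16]]
print(extract_min(Y))   # 2
print(Y)                # still a valid Young tableau, with inf in one slot
```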
a. Running HOARE-PARTITION on the array A = ⟨13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21⟩ with pivot x = A[p] = 13 gives the following sequence of states (the last two columns show i and j):

A                                      i    j
13 19  9  5 12  8  7  4 11  2  6 21    0   13
 6 19  9  5 12  8  7  4 11  2 13 21    1   11
 6 13  9  5 12  8  7  4 11  2 19 21    2   10
 6 13  9  5 12  8  7  4 11  2 19 21   10    2

And we do indeed see that the partition has moved the two elements that are bigger than the pivot, 19 and 21, to the two final positions in the array.
b. We know that at the beginning of each iteration of the loop we have i < j: it is true initially so long as |A| ≥ 2, and if it were untrue at some iteration, we would have left the loop in the prior iteration. To show that we never access outside of the array, we need to show that at the beginning of every iteration of the loop there is a k > i such that A[k] ≥ x, and a k′ < j such that A[k′] ≤ x. This is clearly true in the first iteration, because initially i and j lie just outside the bounds of the array and the element x must lie between them. In later iterations, since i < j, we can pick k = j and k′ = i. The element at position k = j satisfies the desired relation to x, because it was the element at position i in the prior iteration, prior to the exchange on line 12, and that element satisfied A[i] ≥ x. Similarly for k′ = i.
c. If there is more than one iteration of the main loop, we have that j < r because j decreases by at least one with every iteration. Note that at line 11 in the first iteration of the loop we have i = 1 = p, because A[p] = x ≥ x. So, if we were to terminate after a single iteration of the main loop, we must have j ≤ i = 1 = p; and since j stops at a position with A[j] ≤ x and A[p] = x, we also have j ≥ p, so j = p < r.
d. We will show the loop invariant that all the elements of A[p..i] are less than or equal to x, which is less than or equal to all the elements of A[j..r]. It holds trivially prior to the first iteration, because both of these sets of elements are empty. Suppose we just finished an iteration of the loop during which j went from j1 to j2 and i went from i1 to i2. All the elements in A[i1 + 1..i2 − 1] were < x, because they did not cause the loop on lines 8-10 to terminate; similarly, all the elements in A[j2 + 1..j1 − 1] were > x. We also have A[i2] ≤ x ≤ A[j2] after the exchange on line 12. Lastly, by induction, all the elements of A[p..i1] are less than or equal to x, and all the elements of A[j1..r] are greater than or equal to x. Putting it all together, since A[p..i2] = A[p..i1] ∪ A[i1 + 1..i2 − 1] ∪ {A[i2]} and A[j2..r] = {A[j2]} ∪ A[j2 + 1..j1 − 1] ∪ A[j1..r], we have the desired inequality. Since at termination, we have
that i ≥ j, we know that A[p..j] ⊆ A[p..i], and so, every element of A[p..j] is less than or equal to x, which
is less than or equal to every element of A[j + 1..r] ⊆ A[j..r].
e. After running HOARE-PARTITION, we do not have the guarantee that the pivot value will be in position j. So, we scan through the list to find the pivot value, place it between the two subarrays, and recurse on both sides.
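For reference, here is a Python sketch of Hoare-style partitioning and a quicksort built on it (a 0-indexed rendering of my own, so the pseudocode line numbers cited above do not apply to it):

```python
def hoare_partition(A, p, r):
    """Hoare-style partition of A[p..r] around the pivot x = A[p].
    Returns j such that every element of A[p..j] <= x <= every element of A[j+1..r]."""
    x = A[p]
    i, j = p - 1, r + 1
    while True:
        j -= 1
        while A[j] > x:
            j -= 1
        i += 1
        while A[i] < x:
            i += 1
        if i < j:
            A[i], A[j] = A[j], A[i]
        else:
            return j

def quicksort(A, p=0, r=None):
    """Quicksort using the Hoare partition; recurses on A[p..j] and A[j+1..r]."""
    if r is None:
        r = len(A) - 1
    if p < r:
        j = hoare_partition(A, p, r)
        quicksort(A, p, j)
        quicksort(A, j + 1, r)

A = [13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21]
quicksort(A)
print(A)
```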
c. E[Σ_{q=1}^{n} X_q (T(q − 1) + T(n − q) + Θ(n))]
 = Σ_{q=1}^{n} E[X_q (T(q − 1) + T(n − q) + Θ(n))]
 = Σ_{q=1}^{n} (T(q − 1) + T(n − q) + Θ(n)) / n
 = Θ(n) + (1/n) Σ_{q=1}^{n} (T(q − 1) + T(n − q))
 = Θ(n) + (1/n) (Σ_{q=1}^{n} T(q − 1) + Σ_{q=1}^{n} T(n − q))
 = Θ(n) + (1/n) (Σ_{q=1}^{n} T(q − 1) + Σ_{q=1}^{n} T(q − 1))
 = Θ(n) + (2/n) Σ_{q=1}^{n} T(q − 1)
 = Θ(n) + (2/n) Σ_{q=0}^{n−1} T(q)
 = Θ(n) + (2/n) Σ_{q=2}^{n−1} T(q),
where the last step drops the constant terms T(0) and T(1), which are absorbed into the Θ(n).
d. We will prove this inequality in a different way than suggested by the hint. Let f(k) = k lg(k), treated as a continuous function; then f′(k) = lg(k) + 1. Note that the summation, written out, is the left-hand-rule approximation of the integral of f(k) from 2 to n with step size 1. By integration by parts, an antiderivative of k lg(k) is (1/ln 2)(k² ln(k)/2 − k²/4). Plugging in the bounds and subtracting shows that the integral is at most n² lg(n)/2 − n²/(4 ln 2). Since f has a positive derivative over the entire interval of integration, the left-hand rule underapproximates the integral, so
Σ_{k=2}^{n−1} k lg(k) ≤ n² lg(n)/2 − n²/(4 ln 2) ≤ n² lg(n)/2 − n²/8,
where the last inequality holds because 4 ln(2) < 8, so 1/(4 ln 2) > 1/8.
e. Assume by induction that T(q) ≤ q lg(q) + Θ(n). Combining (7.6) and (7.7), we have
E[T(n)] = (2/n) Σ_{q=2}^{n−1} E[T(q)] + Θ(n)
 ≤ (2/n) Σ_{q=2}^{n−1} (q lg(q) + Θ(n)) + Θ(n)
 ≤ (2/n) Σ_{q=2}^{n−1} q lg(q) + (2/n)(nΘ(n)) + Θ(n)
 ≤ (2/n)(n² lg(n)/2 − n²/8) + Θ(n)
 = n lg(n) − n/4 + Θ(n)
 = n lg(n) + Θ(n).
b. The stack depth will be Θ(n) if the input array is already sorted. In that case the right subarray always has size 0, so the recursive call is made on a subarray only one element smaller, and there will be n − 1 nested recursive calls before the while-condition p < r is violated.
c. We modify the algorithm to make the recursive call on the smaller subarray, to avoid pushing too much on the stack:
Algorithm 4 MODIFIED-TAIL-RECURSIVE-QUICKSORT(A,p,r)
1: while p < r do
2: q =PARTITION (A, p, r)
3: if q − p < r − q then
4: MODIFIED-TAIL-RECURSIVE-QUICKSORT (A, p, q − 1)
5: p = q + 1
6: else
7: MODIFIED-TAIL-RECURSIVE-QUICKSORT (A, q + 1, r)
8: r = q − 1
9: end if
10: end while
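A Python sketch of this scheme (my own rendering, using a Lomuto-style PARTITION for concreteness) shows the smaller side being handled by recursion and the larger side by the loop, which keeps the stack depth at O(lg n):

```python
def partition(A, p, r):
    """Lomuto partition around pivot A[r]; returns the pivot's final index."""
    x = A[r]
    i = p - 1
    for k in range(p, r):
        if A[k] <= x:
            i += 1
            A[i], A[k] = A[k], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def modified_tail_recursive_quicksort(A, p=0, r=None):
    """Recurse only on the smaller side; iterate on the larger side."""
    if r is None:
        r = len(A) - 1
    while p < r:
        q = partition(A, p, r)
        if q - p < r - q:                      # left side is smaller
            modified_tail_recursive_quicksort(A, p, q - 1)
            p = q + 1
        else:                                   # right side is smaller (or equal)
            modified_tail_recursive_quicksort(A, q + 1, r)
            r = q - 1

A = list(range(20, 0, -1))
modified_tail_recursive_quicksort(A)
print(A)   # sorted ascending
```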
b. For distinct intervals the algorithm runs exactly as regular quicksort does, so its expected runtime is Θ(n lg n) in general. If all of the intervals overlap, then the condition on line 12 is satisfied in every iteration of the for loop. Thus the algorithm returns p and r, so only empty subarrays remain to be sorted. FUZZY-PARTITION is only called a single time, and since its runtime remains Θ(n), the total expected runtime is Θ(n).
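Since the FUZZY-SORT and FUZZY-PARTITION pseudocode is not reproduced above, the following Python sketch is only an illustration of the idea (the interval representation, names, and three-way split are my own choices, not the textbook procedure): intervals that overlap the pivot's common region need no relative order, so they are grouped in the middle and not recursed on, which is why fully overlapping inputs finish after a single partitioning pass.

```python
import random

def fuzzy_sort(intervals, lo=0, hi=None):
    """Sort closed intervals (a, b) so that representatives c_i can be chosen
    with c_1 <= c_2 <= ...  Intervals sharing a common point with the pivot
    region are treated as 'equal' and left in the middle."""
    if hi is None:
        hi = len(intervals) - 1
    if lo >= hi:
        return
    pa, pb = intervals[random.randint(lo, hi)]   # pivot region, shrunk below
    left, middle, right = [], [], []
    for a, b in intervals[lo:hi + 1]:
        if b < pa:
            left.append((a, b))
        elif a > pb:
            right.append((a, b))
        else:
            middle.append((a, b))
            pa, pb = max(pa, a), min(pb, b)      # common intersection so far
    intervals[lo:hi + 1] = left + middle + right
    fuzzy_sort(intervals, lo, lo + len(left) - 1)
    fuzzy_sort(intervals, lo + len(left) + len(middle), hi)

xs = [(1, 10), (2, 9), (0, 20), (3, 8)]   # all overlap: one partitioning pass suffices
fuzzy_sort(xs)
print(xs)
```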