Design and Analysis of Algorithm Quantum
1 Introduction
CONTENTS
Part-1 : Algorithms, Analyzing Algorithms, Complexity of Algorithms
Questions-Answers
Que 1.1. What is an algorithm ? Discuss the characteristics of an algorithm.
Answer
1. An algorithm is a set of rules for carrying out calculations either by hand
or on a machine.
2. It is a finite step-by-step procedure to achieve a required result.
3. It is a sequence of computational steps that transform the input into the
output.
4. An algorithm is a sequence of operations performed on data that have to
be organized in data structures.
Characteristics of algorithm are :
1. Input and output : The algorithm must accept zero or more inputs
and must produce at least one output.
2. Definiteness : Each step of algorithm must be clear and unambiguous.
3. Effectiveness : Every step must be basic and essential.
4. Finiteness : Total number of steps used in algorithm should be finite.
Que 1.2. What do you mean by analysis or complexity of an
algorithm ? Give its types and cases.
Answer
Analysis/complexity of an algorithm :
The complexity of an algorithm is a function g(n) that gives the upper
bound of the number of operations (or running time) performed by an
algorithm when the input size is n.
Types of complexity :
1. Space complexity : The space complexity of an algorithm is the amount
of memory it needs to run to completion.
2. Time complexity : The time complexity of an algorithm is the amount
of time it needs to run to completion.
Cases of complexity :
1. Worst case complexity : The running time for any input of a given size
will not exceed this upper bound, and the bound is actually reached for
some input of that size.
2. Average case complexity : The running time for any given size input
will be the average number of operations over all problem instances
for a given size.
3. Best case complexity : The best case complexity of the algorithm is
the function defined by the minimum number of steps taken on any
instance of size n.
PART-2
Growth of Functions, Performance Measurements.
Questions-Answers
Que 1.3. What do you mean by asymptotic notation ? Explain the various notations used in the analysis of algorithms.
Answer
1. Asymptotic notation is a shorthand way to represent the fastest possible
and slowest possible running times of an algorithm.
2. Asymptotically, the running time of the algorithm stays within these
bounds for all sufficiently large inputs.
3. These bounds are also referred to as 'best case' and 'worst case' scenarios
and are used to express the complexities of functions.
Notations used for analyzing complexity are :
1. Θ-Notation (Same order) :
[Fig. 1.3.1 : f(n) lies between c1·g(n) and c2·g(n) to the right of n0, so f(n) = Θ(g(n)).]
a. This notation bounds a function within constant factors.
b. We say f(n) = Θ(g(n)) if there exist positive constants n0, c1 and c2
such that to the right of n0 the value of f(n) always lies between
c1·g(n) and c2·g(n) inclusive.
2. O-Notation (Upper bound) :
a. Big-oh is a formal method of expressing the upper bound of an
algorithm's running time.
b. It is a measure of the longest amount of time it could possibly
take for the algorithm to complete.
c. More formally, for non-negative functions f(n) and g(n), if there
exist an integer n0 and a constant c > 0 such that for all integers
n ≥ n0,
f(n) ≤ c·g(n)
d. then f(n) is big-oh of g(n). This is denoted as :
f(n) = O(g(n))
i.e., O(g(n)) is the set of functions which, as n gets large, grow no
faster than a constant times g(n).
[Fig. 1.3.2 : f(n) lies on or below c·g(n) to the right of n0, so f(n) = O(g(n)).]
3. Ω-Notation (Lower bound) :
a. This notation gives a lower bound for a function within a constant
factor.
b. We write f(n) = Ω(g(n)) if there are positive constants n0 and c such
that to the right of n0, the value of f(n) always lies on or above c·g(n).
[Fig. 1.3.3 : f(n) lies on or above c·g(n) to the right of n0, so f(n) = Ω(g(n)).]
4. Little-oh notation (o) : It is used to denote an upper bound that is
not asymptotically tight, unlike the bound provided by O-notation, which
may or may not be tight.
o(g(n)) = {f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such
that 0 ≤ f(n) < c·g(n) for all n ≥ n0}
5. Little omega notation (ω) : It is used to denote a lower bound that is
not asymptotically tight.
ω(g(n)) = {f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such
that 0 ≤ c·g(n) < f(n) for all n ≥ n0}
Que 1.4. If f(n) = 100·2^n + n^5 + n, then show that f(n) = O(2^n).
Answer
If f(n) = 100·2^n + n^5 + n
For n^5 ≥ n :
100·2^n + n^5 + n ≤ 100·2^n + n^5 + n^5 = 100·2^n + 2n^5
For 2^n ≥ n^5 (which holds for n ≥ 23) :
100·2^n + n^5 + n ≤ 100·2^n + 2·2^n = 102·2^n   [taking n0 = 23]
Thus, f(n) = O(2^n)
Que 1.5. State Master's theorem and explain its cases with suitable examples.
Answer
Master's theorem :
Let T(n) be defined on the non-negative integers by the recurrence
T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1 are constants, and
a = number of sub-problems in the recursion,
1/b = portion of the original problem represented by each sub-problem,
f(n) = cost of dividing the problem and of merging the solutions.
Then T(n) can be bounded asymptotically as follows :
Case 1 :
If it is true that : f(n) = O(n^(log_b a − ε)) for some ε > 0
It follows that : T(n) = Θ(n^(log_b a))
Example : T(n) = 8T(n/2) + 1000n^2
In the given formula, the variables get the following values :
a = 8, b = 2, f(n) = 1000n^2, log_b a = log_2 8 = 3
n^(log_b a) = n^(log_2 8) = n^3
f(n) = O(n^(log_b a − ε)) = O(n^(3 − ε)); for ε = 1, we get f(n) = O(n^2)
Since this equation holds, the first case of Master's theorem applies to the
given recurrence relation, thus the resulting solution is
T(n) = Θ(n^(log_b a)) = Θ(n^3)
Case 2 :
If it is true that : f(n) = Θ(n^(log_b a))
It follows that : T(n) = Θ(n^(log_b a) · log n)
Example : T(n) = 2T(n/2) + n
In the given formula, the variables get the following values :
a = 2, b = 2, f(n) = n, log_b a = log_2 2 = 1
n^(log_b a) = n^(log_2 2) = n
f(n) = Θ(n^(log_b a)) = Θ(n)
Since this equation holds, the second case of Master's theorem applies to
the given recurrence relation, thus the resulting solution is :
T(n) = Θ(n^(log_b a) · log n) = Θ(n log n)
Case 3 :
If it is true that : f(n) = Ω(n^(log_b a + ε)) for some ε > 0,
and if it is also true that :
a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n (regularity condition),
It follows that : T(n) = Θ(f(n))
Example : T(n) = 2T(n/2) + n^2
In the given formula, the variables get the following values :
a = 2, b = 2, f(n) = n^2, log_b a = log_2 2 = 1
n^(log_b a) = n^(log_2 2) = n
f(n) = Ω(n^(log_b a + ε)); for ε = 1 we get f(n) = Ω(n^(1+1)) = Ω(n^2)
Since this holds, the third case of Master's theorem applies. We still have to
check the second condition of case 3, i.e., whether a·f(n/b) ≤ c·f(n) :
2·(n/2)^2 ≤ c·n^2, i.e., n^2/2 ≤ c·n^2
If we choose c = 1/2 < 1, this is true.
So it follows that : T(n) = Θ(f(n)) = Θ(n^2)
Thus, the given recurrence relation T(n) is in Θ(n^2).
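The case analysis above can be mechanised for the common situation f(n) = Θ(n^d). Below is a minimal Python sketch of that check; the function name and the restriction to polynomial driving functions are illustrative assumptions (for such f(n) the regularity condition of case 3 holds automatically).

import math

def master_case(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the Master theorem."""
    critical = math.log(a, b)                 # the critical exponent log_b a
    if math.isclose(d, critical):             # case 2 : f(n) = Theta(n^(log_b a))
        return "Theta(n^%g log n)" % d
    if d < critical:                          # case 1 : f(n) = O(n^(log_b a - eps))
        return "Theta(n^%.2f)" % critical
    return "Theta(n^%g)" % d                  # case 3 : f(n) = Omega(n^(log_b a + eps))

print(master_case(8, 2, 2))   # T(n) = 8T(n/2) + 1000n^2  ->  Theta(n^3.00)
print(master_case(2, 2, 1))   # T(n) = 2T(n/2) + n        ->  Theta(n^1 log n)
print(master_case(2, 2, 2))   # T(n) = 2T(n/2) + n^2      ->  Theta(n^2)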
Answer
Given that :
T(n) = 7T(n/2) + n^2                              ...(1.6.1)
T′(n) = aT′(n/4) + n^2                            ...(1.6.2)
Here, eq. (1.6.1) defines the running time of algorithm A and eq. (1.6.2)
defines the running time of a competing algorithm A′. To find a value of a for
which A′ is asymptotically faster than A, we solve each recurrence by the
Master method.
Now, compare eq. (1.6.1) with T(n) = aT(n/b) + f(n) :
we get, a = 7, b = 2, f(n) = n^2
n^(log_b a) = n^(log_2 7) = n^2.81
Now, apply the cases of Master's theorem :
Case 1 : f(n) = O(n^(log_2 7 − ε)) = O(n^(2.81 − 0.81)) = O(n^2)
Hence, case 1 of Master's theorem is satisfied.
Thus, T(n) = Θ(n^(log_b a)) = Θ(n^2.81)
To make A′ asymptotically faster, choose a so that recurrence (1.6.2) has a
smaller solution. Let a = 16 :
T′(n) = 16T′(n/4) + n^2
a = 16, b = 4, f(n) = n^2
n^(log_b a) = n^(log_4 16) = n^2, so f(n) = Θ(n^(log_b a))
Hence, case 2 of Master's theorem is satisfied and
T′(n) = Θ(n^2 log n)
Since n^2 log n grows more slowly than n^2.81, this shows that A′ is asymptotically
faster than A when a = 16.
Que 1.7. The recurrence T(n) = 7T(n/3) + n^2 describes the running
time of an algorithm A. Another competing algorithm B has a
running time of S(n) = aS(n/9) + n^2. What is the smallest value of a
such that B is asymptotically faster than A ?
AKTU 2018-19, Marks 10
Answer
Given that :
T(n) = 7T(n/3) + n^2                              ...(1.7.1)
S(n) = aS(n/9) + n^2                              ...(1.7.2)
Here, eq. (1.7.1) defines the running time of algorithm A and eq. (1.7.2)
defines the running time of algorithm B. To compare them we solve each
recurrence by the Master method.
Now, compare eq. (1.7.1) with T(n) = aT(n/b) + f(n) :
we get, a = 7, b = 3, f(n) = n^2
n^(log_b a) = n^(log_3 7) = n^1.77
Now, apply the cases of Master's theorem :
Case 3 : f(n) = Ω(n^(log_3 7 + ε)) = Ω(n^(1.77 + 0.23)) = Ω(n^2), and the regularity
condition holds, since 7·(n/3)^2 = (7/9)n^2 ≤ c·n^2 for c = 7/9 < 1.
Hence, case 3 of Master's theorem is satisfied.
Thus, T(n) = Θ(f(n)) = Θ(n^2)
Now consider eq. (1.7.2) with a = 81, b = 9 :
n^(log_b a) = n^(log_9 81) = n^2, so f(n) = n^2 = Θ(n^(log_b a))
Hence, case 2 of Master's theorem is satisfied and
S(n) = Θ(n^2 log n)
Since n^2 log n grows faster than n^2, a = 81 does not make B faster than A;
it is exactly the threshold at which B becomes strictly slower. For any a < 81
we have log_9 a < 2, case 3 applies and S(n) = Θ(n^2), which is the same order
as A. Therefore B can at best match A asymptotically, and no value of a makes
B strictly asymptotically faster than A.
Que 1.8. Solve the recurrence relation T(n) = T(√n) + O(log n).
Answer
T(n) = T(√n) + O(log n)                           ...(1.8.1)
Let n = 2^m, so that m = log n and
√n = 2^(m/2)                                      ...(1.8.2)
Putting this value of n in eq. (1.8.1), we get
T(2^m) = T(2^(m/2)) + O(log 2^m) = T(2^(m/2)) + O(m)   ...(1.8.3)
Let x(m) = T(2^m)                                 ...(1.8.4)
Putting the value of x(m) in eq. (1.8.3),
x(m) = x(m/2) + O(m)                              ...(1.8.5)
Eq. (1.8.5) is solved by the Master method with a = 1, b = 2, f(m) = O(m) :
m^(log_b a) = m^(log_2 1) = m^0 = 1, and f(m) = Ω(m^(0 + ε)) with ε = 1,
so case 3 applies and x(m) = Θ(m).
Hence T(n) = Θ(log n).
Que 1.9.
i. Solve the recurrence T(n) = 2T(n/2) + n^2 + 2n + 1.
ii. Prove that the worst case running time of any comparison sort
is Ω(n log n). AKTU 2019-20, Marks 07
Answer
i. T(n) = 2T(n/2) + n^2 + 2n + 1 = 2T(n/2) + Θ(n^2)
Compare it with T(n) = aT(n/b) + f(n) :
we have, a = 2, b = 2, f(n) = Θ(n^2)
Now, we apply the cases of Master's theorem.
n^(log_b a) = n^(log_2 2) = n
f(n) = Ω(n^(log_b a + ε)) = Ω(n^(1 + 1)) = Ω(n^2), where ε = 1.
This satisfies case 3 of Master's theorem; we also check the regularity
condition a·f(n/b) ≤ c·f(n) :
2·(n/2)^2 = n^2/2 ≤ c·n^2                         ...(1.9.1)
eq. (1.9.1) is true for c = 1/2 < 1.
Therefore T(n) = Θ(f(n)) = Θ(n^2).
ii. Let T(n) be the time taken by merge sort to sort any array of n elements.
Therefore, T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + g(n)
where g(n) ∈ Θ(n).
This recurrence, which becomes :
T(n) = 2T(n/2) + g(n)
when n is even, is a special case of our general analysis of divide-and-conquer
algorithms.
Compare the above given recurrence with T(n) = aT(n/b) + f(n) :
we get a = 2, b = 2, f(n) = g(n)
Now we find, n^(log_b a) = n^(log_2 2) = n^1 = n
f(n) = Θ(n)
i.e., case 2 of Master's theorem applies, then
T(n) = Θ(n^(log_b a) log n)
T(n) = Θ(n log n)
Hence, the worst case running time of merge sort is Θ(n log n).
Answer
1. Recursion is a process in which a function calls itself to perform a
specific operation.
2. Indirect recursion occurs when one function calls another function that
then calls the first function.
3. Suppose P is a procedure containing either a call statement to itself or
a call statement to a second procedure that may eventually result in a
call statement back to the original procedure P. Then P is called recursive
procedure.
4. A recursive procedure must have the following two properties :
a. There must be certain criteria, called base criteria, for which the
procedure does not call itself.
b. Each time the procedure does call itself, it must be closer to the
base criteria.
5. A recursive procedure with these two properties is said to be well-
defined.
For example :
The factorial function may also be defined as follows :
a. If n = 0, then n! = 1.
Here, the value of n! is explicitly given when n = 0 (thus 0 is the
base value).
b. If n > 0, then n! = n. (n – 1)!
Here, the value of n! for arbitrary n is defined in terms of a smaller
value of n which is closer to the base value 0.
Observe that this definition of n! is recursive, since it refers to itself when it
uses (n – 1)!
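The recursive definition of n! above translates directly into code; a minimal Python sketch of the idea :

def factorial(n: int) -> int:
    """Recursive factorial following the definition above."""
    if n == 0:                        # base criteria : the procedure does not call itself
        return 1
    return n * factorial(n - 1)       # each call moves closer to the base value 0

print(factorial(5))   # 120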
Answer
1. Recursion tree is a pictorial representation of the iteration method, in
the form of a tree where at each level nodes are expanded.
2. In a recursion tree, each node represents the cost of a single subproblem.
3. Recursion trees are particularly useful when the recurrence describes
the running time of a divide and conquer algorithm.
4. A recursion tree is best used to generate a good guess, which is then
verified by the substitution method.
5. It is a method to analyze the complexity of an algorithm by diagramming
the recursive function calls in the form of tree.
Answer
T(n) = T(n – 1) + T(n – 2) + 1
In the recursion tree, the root contributes cost 1 (= 2^0), the next level has at
most 2 nodes (2^1), the next at most 4 (2^2), and so on; at the k-th level T(1) is
reached when n – k = 1, i.e., k = n – 1.
Total cost = 2^0 + 2^1 + 2^2 + ...... + 2^k = 2^0 + 2^1 + ...... + 2^(n–1)
Sum of n terms of a geometric progression = a(r^n – 1)/(r – 1) = (2^n – 1)/(2 – 1)
= 2^n – 1 = O(2^n)
[Fig. : recursion tree with T(n) at the root, T(n – 1) and T(n – 2) as its children, and at most 2^k nodes at level k.]
Answer
T(n) = n + (7/8)n + (7/8)^2·n + ........ (at most about log n levels)
Since each level of the recursion tree costs at most n and there are about
log n levels, T(n) = O(n log n).
[Fig. : recursion tree — the root costs n, its three children cost n/2, n/4 and n/8 (together 7n/8), and each further level sums to 7/8 of the level above.]
Answer
T(n) = T(αn) + T((1 – α)n) + cn
[Fig. : recursion tree — the root costs cn and its children cost αcn and (1 – α)cn, so every complete level again sums to cn.]
The longest root-to-leaf path has about log_{1/(1 – α)} n levels, say (k + 1) levels.
So, total cost = cn + cn + .... ((k + 1) times) = cn(k + 1) ≈ cn · log_{1/(1 – α)} n
= Θ(n log n)
Answer
T(n) = n + n + n + ................ (about log n levels) = Θ(n log n)
[Fig. : recursion tree for T(n) = T(n/5) + T(4n/5) + n — the root costs n, its children cost n/5 and 4n/5, so every complete level again sums to n, and the depth is about log_{5/4} n.]
Questions-Answers
Answer
1. Shell sort is a highly efficient sorting algorithm and is based on insertion
sort algorithm and we can code it easily.
2. It roughly sorts the data first, moving large elements towards one end
and small elements towards the other.
3. In shell sort several passes over the data is performed.
4. After the final pass, the data is fully sorted.
5. The shell sort does not sort the data itself; it increases the efficiency of
other sorting algorithms.
Algorithm :
Input : An array a of length n with array elements numbered 0 to n – 1.
1. inc ← round(n/2)
2. while inc > 0
3.   for i = inc to n – 1
       temp ← a[i]
       j ← i
       while j ≥ inc and a[j – inc] > temp
         a[j] ← a[j – inc]
         j ← j – inc
       a[j] ← temp
4.   inc ← round(inc/2.2)
For example :
45 36 75 20 05 90 80 65 30 50 10 75 85
The distance between the elements to be compared is 3. The subfiles generated
with the distance of 3 are as follows :
Subfile 1 a[0] a[3] a[6] a[9] a[12]
Subfile 2 a[1] a[4] a[7] a[10]
Subfile 3 a[2] a[5] a[8] a[11]
Input to pass 1 with distance = 3
45 36 75 20 05 90 80 65 30 50 10 75 85
(a)
Output of pass 1 is input to pass 2 and distance = 2
20 05 30 45 10 75 50 36 75 80 65 90 85
(b)
Output of pass 2 is input to pass 3 and distance = 1
10 05 20 36 30 45 50 75 65 80 75 90 85
(c)
Output of pass 3
05 10 20 30 36 45 50 65 75 75 80 85 90
(d)
Fig. 1.16.1.
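A runnable Python sketch of the shell sort pseudocode above (0-based indexing; the initial gap n//2 and the shrink factor 2.2 follow steps 1 and 4) :

def shell_sort(a):
    """Shell sort : insertion sort over progressively smaller gaps."""
    n = len(a)
    inc = n // 2                                   # initial gap, round(n/2)
    while inc > 0:
        for i in range(inc, n):                    # gapped insertion sort
            temp = a[i]
            j = i
            while j >= inc and a[j - inc] > temp:
                a[j] = a[j - inc]
                j -= inc
            a[j] = temp
        inc = 1 if inc == 2 else int(inc / 2.2)    # shrink the gap (2.2, as in step 4)
    return a

print(shell_sort([45, 36, 75, 20, 5, 90, 80, 65, 30, 50, 10, 75, 85]))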
Que 1.17. Write the algorithms for the following sorting techniques :
i. Selection sort
ii. Insertion sort
Answer
i. Selection sort (A) :
1. n ← length[A]
2. for j ← 1 to n – 1
3.   smallest ← j
4.   for i ← j + 1 to n
5.     if A[i] < A[smallest]
6.       then smallest ← i
7.   exchange A[j] ↔ A[smallest]
ii. Insertion_Sort (A) :
1. for j ← 2 to length[A]
2.   do key ← A[j]
3.     // Insert A[j] into the sorted sequence A[1 .. j – 1]
4.     i ← j – 1
5.     while i > 0 and A[i] > key
6.       do A[i + 1] ← A[i]
7.         i ← i – 1
8.     A[i + 1] ← key
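The two routines above, written as a minimal Python sketch (0-based indexing instead of the 1-based indexing of the pseudocode; function names are illustrative) :

def selection_sort(a):
    n = len(a)
    for j in range(n - 1):
        smallest = j
        for i in range(j + 1, n):          # find the minimum of a[j .. n-1]
            if a[i] < a[smallest]:
                smallest = i
        a[j], a[smallest] = a[smallest], a[j]
    return a

def insertion_sort(a):
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:       # shift larger elements one place right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(selection_sort([5, 2, 4, 6, 1, 3]))
print(insertion_sort([5, 2, 4, 6, 1, 3]))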
Que 1.18. Write non-deterministic algorithm for sorting.
Answer
Non-deterministic algorithms are algorithms that, even for the same input,
can exhibit different behaviours on different runs or executions.
N SORT(A, B) :
1. for i = 1 to n do
2. j = choice(1 . . . n)
3. if B[j] != 0 then failure
4. B[j] = A[i]
5. endfor
6. for i = 1 to n – 1 do
7. if B[i] > B[i + 1] then failure
8. endfor
9. print(B)
10. success
Que 1.19. Explain the concepts of quick sort method and analyze its time complexity.
Answer
Quick sort :
Quick sort works by partitioning a given array A[p ... r] into two non-empty
subarray A[p ... q – 1] and A [q + 1 ... r] such that every key in A[p ... q – 1] is
less than or equal to every key in A[q + 1 ... r]. Then the two subarrays are
sorted by recursive calls to quick sort.
Quick_Sort (A, p, r)
1. if p < r then
2.   q ← Partition (A, p, r)
3.   Recursive call to Quick_Sort (A, p, q – 1)
4.   Recursive call to Quick_Sort (A, q + 1, r)
As a first step, Quick sort chooses as pivot one of the items in the array to be
sorted. Then array is partitioned on either side of the pivot. Elements that
are less than or equal to pivot will move toward the left and elements that are
greater than or equal to pivot will move toward the right.
Partition (A, p, r)
1. x ← A[r]
2. i ← p – 1
3. for j ← p to r – 1
4.   do if A[j] ≤ x
5.     then i ← i + 1
6.       exchange A[i] ↔ A[j]
7. exchange A[i + 1] ↔ A[r]
8. return i + 1
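A minimal Python sketch of Quick_Sort and Partition as given above (the pivot is the last element, as in the pseudocode; the worked example below instead pivots on the first element, which is only a difference of presentation) :

def partition(a, p, r):
    x = a[r]                          # pivot = last element
    i = p - 1
    for j in range(p, r):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

def quick_sort(a, p, r):
    if p < r:
        q = partition(a, p, r)
        quick_sort(a, p, q - 1)
        quick_sort(a, q + 1, r)

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9]
quick_sort(data, 0, len(data) - 1)
print(data)   # [1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 8, 9, 9]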
Example : Given array to be sorted
3 1 4 1 5 9 2 6 5 3 5 8 9
Sort the array A using quick sort algorithm.
Step 1 : The array is pivoted about its first element, i.e., pivot (P) = 3
3 1 4 1 5 9 2 6 5 3 5 8 9
P
Step 2 : Find first element larger then pivot (make underline) and find
element not larger than pivot from end make over line.
3 1 4 1 5 9 2 6 5 3 5 8 9
P
Underline Overline
Step 3 : Swap these element and scan again.
3 1 3 1 5 9 2 6 5 4 5 8 9
P
Array after swapping
3 1 3 1 5 9 2 6 5 4 5 8 9
P
Underline Overline
Apply swapping,
3 1 3 1 2 9 5 6 5 4 5 8 9
Again apply scanning,
3 1 3 1 2 9 5 6 5 4 5 8 9
Overline Underline
The pointers have crossed
i.e., overline on left of underlined
Then, in this situation swap pivot with overline.
2 1 3 1 3 9 5 6 5 4 5 8 9
P
Now, pivoting process is complete.
Step 4 : Recursively sort subarrays on each side of pivot.
Subarray 1 : 2 1 3 1
Subarray 2 : 9 5 6 5 4 5 8 9
First apply Quick sort for subarray 1.
2 1 3 1
P
Underline Overline
2 1 1 3
P
Underline Overline
The pointers have crossed.
i.e., overline on left of underlined.
Swap pivot with overline
1 1 2 3
Sorted array
Now, for subarray 2 we apply Quick sort procedure.
9 5 6 5 4 5 8 9
P
Overline Underline
The pointer has crossed. Then swap pivot with overline.
8 5 6 5 4 5 9 9
Subarray 4
Subarray 3
8 5 6 5 4 5
P Overline
Swap overline with pivot.
5 5 6 5 4 8
P
5 5 6 5 4
Underline Overline
5 5 4 5 6
Overline on left of underlined.
Swap pivot with overline.
4 5 5 5 6
P
Now combine all the subarrays
Sorted array 1 1 2 3 3 4 5 5 5 6 8 9 9
Analysis of complexity :
i. Worst case :
1. Let T(n) be the worst case time for quick sort on an input of size n. We
have the recurrence
T(n) = max over 0 ≤ q ≤ n – 1 of [T(q) + T(n – q – 1)] + Θ(n)     ...(1.19.1)
where q ranges from 0 to n – 1, since the partition produces two
regions whose sizes add up to n – 1.
2. Now we assume that T(n) ≤ c·n^2 for some constant c.
Substituting our assumption in eq. (1.19.1) we get
T(n) ≤ max over 0 ≤ q ≤ n – 1 of [c·q^2 + c·(n – q – 1)^2] + Θ(n)
Since q^2 + (n – q – 1)^2 is largest at the endpoints q = 0 or q = n – 1, the
right-hand side is at most c·n^2 for a large enough c; hence the worst case
running time of quick sort is O(n^2).
Que 1.20. Discuss the best case and worst case complexities of
quick sort algorithm in detail.
Answer
Best case :
1. The best thing that could happen in quick sort would be that each
partitioning stage divides the array exactly in half.
2. In other words, the pivot turns out to be the median of the keys in
A[p .. r] every time procedure 'Partition' is called.
3. The procedure 'Partition' then always splits the array to be sorted into
two equal sized subarrays.
4. If the procedure 'Partition' produces two regions of size n/2, then the
recurrence relation is :
T(n) ≤ T(n/2) + T(n/2) + Θ(n) = 2T(n/2) + Θ(n)
And from case 2 of the Master theorem,
T(n) = Θ(n log n)
Worst case : Refer Q. 1.19, Page 1–16B, Unit-1.
PART-4
Merge Sort.
Questions-Answers
10, 25        5, 16
5, 10, 16, 25
3. Consider the second half : 35, 48, 8; again divide it into two sub-arrays :
35, 48        8
8, 35, 48
4. Merge these two sorted sub-arrays :
5, 10, 16, 25 and 8, 35, 48, giving the sorted array 5, 8, 10, 16, 25, 35, 48.
Que 1.22. Determine the best case time complexity of merge sort
algorithm.
Answer
1. The best case of merge sort occurs when the largest element of one
array is smaller than any element in the other array.
2. For this case only n/2 comparisons of array elements are made.
3. Merge sort comparisons are obtained by the recurrence equation of
the recursive calls used in merge sort.
4. As it divides the array in half, the recurrence function is defined
as :
T(n) = T(n/2) + T(n/2) + n = 2T(n/2) + n          ...(1.22.1)
5. By using a variable k to indicate the depth of the recursion, we get
T(n) = 2^k T(n/2^k) + kn                          ...(1.22.2)
6. For the best case there are only n/2 comparisons at each merge, hence
eq. (1.22.2) can be written as
T(n) = 2^k T(n/2^k) + k·(n/2)
7. At the last level of the recursion tree
2^k = n, i.e., k = log_2 n
8. So the recurrence function gives :
T(n) = 2^(log_2 n) T(n/2^(log_2 n)) + (n/2) log_2 n
     = n·T(1) + (n/2) log_2 n = (n/2) log_2 n + n
T(n) = O(n log2 n)
Hence, the best case complexity of merge sort is O(n log2 n).
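A minimal Python sketch of merge sort, matching the recurrence T(n) = 2T(n/2) + n analysed above (the list-slicing style is an illustrative choice) :

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])        # two subproblems of size n/2
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # O(n) merge step
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([10, 25, 5, 16, 35, 48, 8]))   # [5, 8, 10, 16, 25, 35, 48]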
PART-5
Heap Sort.
Questions-Answers
Answer
1. Heap sort is a comparison based sorting technique based on binary heap
data structure.
2. Heap sort finds the largest element and puts it at the end of array, then
the second largest item is found and this process is repeated for all other
elements.
3. The general approach of heap sort is as follows :
a. From the given array, build the initial max heap.
b. Interchange the root (maximum) element with the last element.
c. Use repetitive downward operation from root node to rebuild the
heap of size one less than the starting.
d. Repeat steps b and c until only one element remains.
Analysis of heap sort :
Complexity of heap sort for all cases is O(n log2 n).
MAX-HEAPIFY (A, i) :
1. l ← left[i]
2. r ← right[i]
3. if l ≤ heap-size[A] and A[l] > A[i]
4.   then largest ← l
5.   else largest ← i
6. if r ≤ heap-size[A] and A[r] > A[largest]
7.   then largest ← r
8. if largest ≠ i
9.   then exchange A[i] ↔ A[largest]
10.   MAX-HEAPIFY (A, largest)
HEAP-SORT (A) :
1. BUILD-MAX-HEAP (A)
2. for i ← length[A] down to 2
3.   do exchange A[1] ↔ A[i]
4.     heap-size[A] ← heap-size[A] – 1
5.     MAX-HEAPIFY (A, 1)
BUILD-MAX-HEAP (A) :
1. heap-size[A] ← length[A]
2. for i ← ⌊length[A]/2⌋ down to 1 do
3.   MAX-HEAPIFY (A, i)
We can build a heap from an unordered array in linear time.
Average case and worst case complexity :
1. We have seen that the running time of BUILD-HEAP is O(n).
2. The heap sort algorithm makes a call to BUILD-HEAP for creating a
(max) heap, which will take O(n) time and each of the (n – 1) calls to
MAX-HEAPIFY to fix up the new heap (which is created after
exchanging the root and by decreasing the heap size).
3. We know ‘MAX-HEAPIFY’ takes time O(log n).
4. Thus the total running time for the heap sort is O(n log n).
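A minimal Python sketch of the three procedures above (0-based indexing, so left(i) = 2i + 1 and right(i) = 2i + 2) :

def max_heapify(a, i, heap_size):
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # BUILD-MAX-HEAP
        max_heapify(a, i, n)
    for end in range(n - 1, 0, -1):       # move the current maximum to the end
        a[0], a[end] = a[end], a[0]
        max_heapify(a, 0, end)
    return a

print(heap_sort([23, 9, 18, 45, 5, 9, 1, 17, 6]))   # [1, 5, 6, 9, 9, 17, 18, 23, 45]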
Que 1.24. How will you sort following array A of element using
heap sort : A = (23, 9, 18, 45, 5, 9, 1, 17, 6).
AKTU 2018-19, Marks 10
Answer
Given array : 23 9 18 45 5 9 1 17 6
First we call BUILD-MAX-HEAP with heap-size[A] = 9, i.e., MAX-HEAPIFY (A, i) for i = 4 down to 1 :
i = 4 : A[4] = 45 is already larger than its children A[8] = 17 and A[9] = 6, so nothing changes.
i = 3 : A[3] = 18 is larger than its children 9 and 1, so nothing changes.
i = 2 : A[2] = 9 is exchanged with its larger child 45, and then with 17 at the next level.
i = 1 : A[1] = 23 is exchanged with its larger child 45.
After BUILD-MAX-HEAP the array is the max-heap
45 23 18 17 5 9 1 9 6
Now the root (maximum) is repeatedly exchanged with the last element of the
current heap, the heap size is decreased by one, and MAX-HEAPIFY (A, 1)
restores the heap property. The successive maxima extracted are 45, 23, 18,
17, 9, 9, 6, 5, giving the sorted array :
1 5 6 9 9 17 18 23 45
[Figs. : the intermediate heap trees after each MAX-HEAPIFY call and each exchange are drawn in the original.]
Que 1.25. What is heap sort ? Apply heap sort algorithm for
sorting 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. Also deduce time complexity of heap
sort. AKTU 2015-16, Marks 10
Answer
Heap sort and its time complexity : Refer Q. 1.23, Page 1–23B, Unit-1.
Numerical : The given sequence 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 is already sorted, so
heap sort leaves the order unchanged and the sorted output is the same sequence.
Answer
Heap sort : Refer Q. 1.23, Page 1–23B, Unit-1.
Numerical :
Originally the given array is : [6, 14, 3, 25, 2, 10, 20, 7, 6]
First we call BUILD-MAX-HEAP with heap-size[A] = 9, i.e., MAX-HEAPIFY (A, i) for i = 4 down to 1 :
i = 4 : A[4] = 25 is already larger than its children 7 and 6, so nothing changes.
i = 3 : A[3] = 3 is exchanged with its larger child 20.
i = 2 : A[2] = 14 is exchanged with its larger child 25.
i = 1 : A[1] = 6 is exchanged with 25, then with 14, then with 7.
So the final tree after BUILD-MAX-HEAP corresponds to the array
25 14 20 7 2 10 3 6 6
Now the root is repeatedly exchanged with the last element of the current
heap, the heap size is decreased by one, and MAX-HEAPIFY (A, 1) restores
the max-heap. The maxima extracted in turn are 25, 20, 14, 10, 7, 6, 6, 3,
giving the sorted array :
2 3 6 6 7 10 14 20 25
[Figs. (i)–(viii) : the intermediate heap trees for each step are drawn in the original.]
PART-6
Comparison of Sorting Algorithms, Sorting in Linear Time.
Questions-Answers
Answer
Counting sort is a linear time sorting algorithm used to sort items when they
belong to a fixed and finite set.
Algorithm :
Counting_Sort(A, B, k)
1. let C[0..k] be a new array
2. for i 0 to k
3. do C[i] 0
4. for j 1 to length[A]
5. do C[A[j]] C[A[j]] + 1
// C[i] now contains the number of elements equal to i.
6. for i 1 to k
7. do C[i] C[i] + C[i – 1]
// C[i] now contains the number of elements less than or equal to i.
8. for j length[A] down to 1
9. do B[C[A[j]]] A[j]
10. C[A[j]] C[A[j]] – 1
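A minimal Python sketch of the same algorithm (0-based output array; k is the largest key and the keys are assumed to be non-negative integers) :

def counting_sort(a, k):
    """Stable counting sort of non-negative integers in the range 0..k."""
    c = [0] * (k + 1)
    for x in a:                      # count occurrences of each key
        c[x] += 1
    for i in range(1, k + 1):        # prefix sums : c[i] = number of elements <= i
        c[i] += c[i - 1]
    b = [0] * len(a)
    for x in reversed(a):            # place from the right to keep the sort stable
        c[x] -= 1
        b[c[x]] = x
    return b

print(counting_sort([1, 6, 3, 3, 4, 5, 6, 3, 4, 5], 6))   # [1, 3, 3, 3, 4, 4, 5, 5, 6, 6]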
Que 1.29. What is the time complexity of counting sort ? Illustrate
the operation of counting sort on array A = {1, 6, 3, 3, 4, 5, 6, 3, 4, 5}.
Answer
Time complexity of counting sort is O(n + k), which is linear, i.e., O(n), when k = O(n).
Given array :
1 2 3 4 5 6 7 8 9 10
A 1 6 3 3 4 5 6 3 4 5
Step 1 : For i = 0 to k, where k = 6 (the largest element in array A), set C[i] ← 0.
0 1 2 3 4 5 6
C 0 0 0 0 0 0 0
Step 2 : j = 1 to 10 ( length [A] = 10)
C[A[j]] C[A[j]] + 1
For j = 1
0 1 2 3 4 5 6
C[A[1]] C[1] + 1 = 0 + 1 = 1 C 0 1 0 0 0 0 0
C[1] 1
For j = 2
0 1 2 3 4 5 6
C[A[2]] C[6] + 1 = 0 + 1 = 1 C 0 1 0 0 0 0 1
C[6] 1
                                         0 1 2 3 4 5 6
Similarly, after j = 3, 4, ......, 10    C 0 1 0 3 2 2 2
Step 3 :
For i = 1 to 6
C[i] C[i] + C[i – 1]
For i = 1
0 1 2 3 4 5 6
C[1] C[1] + C[0] C 0 1 0 3 2 2 2
C[1] 1 + 0 = 1
For i = 2
                                         0 1 2 3 4 5 6
C[2] ← C[2] + C[1]                       C 0 1 1 3 2 2 2
C[2] ← 0 + 1 = 1
                                         0 1 2 3 4 5 6
Similarly for i = 3, 4, 5, 6             C 0 1 1 4 6 8 10
Step 4 :
For j = 10 to 1
B[C[A[j]]] A [j]
C[A[j ]] C[A[j] – 1
j    A[j]    C[A[j]]    B[C[A[j]]] ← A[j]    C[A[j]] ← C[A[j]] – 1
10 5 8 B [8] 5 C[5] 7
9 4 6 B [6] 4 C[4] 5
8 3 4 B [4] 3 C[3] 3
7 6 10 B [10] 6 C[6] 9
6 5 7 B [7] 5 C[5] 6
5 4 5 B [5] 4 C[4] 4
4 3 3 B [3] 3 C[3] 2
3 3 2 B [2] 3 C[3] 1
2 6 9 B [9] 6 C[6] 8
1 1 1 B [1] 1 C[1] 0
1 2 3 4 5 6 7 8 9 10
B 1 3 3 3 4 4 5 5 6 6
Answer
1. The bucket sort is used to divide the interval [0, 1] into n equal-sized
sub-intervals, or bucket, and then distribute the n-input numbers into
the bucket.
2. Since the inputs are uniformly distributed over [0, 1], we do not expect
many numbers to fall into each bucket.
3. To produce the output, simply sort the numbers in each bucket and then
go through the bucket in order, listing the elements in each.
4. The code assumes that input is in n-element array A and each element
in A satisfies 0 A[i] 1. We also need an auxiliary array B[0 ... n – 1] for
linked-list (buckets).
BUCKET_SORT (A)
1. n ← length[A]
2. for i ← 1 to n
3.   do insert A[i] into list B[⌊n·A[i]⌋]
4. for i ← 0 to n – 1
5.   do sort list B[i] with insertion sort
6. Concatenate the lists B[0], B[1], …, B[n – 1] together in order.
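A minimal Python sketch of BUCKET_SORT for inputs uniformly distributed over [0, 1) (Python's built-in sorted() stands in for the insertion sort of step 5) :

def bucket_sort(a):
    n = len(a)
    buckets = [[] for _ in range(n)]       # B[0 .. n-1]
    for x in a:                            # element x goes into bucket floor(n*x)
        buckets[int(n * x)].append(x)
    result = []
    for b in buckets:                      # sort each bucket, then concatenate in order
        result.extend(sorted(b))
    return result

print(bucket_sort([0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]))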
Answer
1. Radix sort is a sorting algorithm which consists of list of integers or
words and each has d-digit.
2. We can start sorting on the least significant digit or on the most significant
digit.
3. On the first pass the numbers are sorted on the least significant digit
and combined back into a single array.
4. On the second pass, the numbers are sorted on the next least significant
digit and combined into an array again, and so on for all d digits.
RADIX_SORT (A, d)
1. for i ← 1 to d do
2. use a stable sort to sort array A on digit i
// counting sort will do the job
The code for radix sort assumes that each element in the n-element
array A has d-digits, where digit 1 is the lowest-order digit and d is the
highest-order digit.
Analysis :
1. The running time depends on the stable sort used as the intermediate
sorting algorithm.
2. When each digit is in the range 1 to k, and k is not too large,
COUNTING_SORT is the obvious choice.
3. In case of counting sort, each pass over n d-digit numbers takes
Θ(n + k) time.
4. There are d passes, so the total time for radix sort is Θ(d(n + k))
= Θ(dn + dk). When d is constant and k = O(n), radix sort runs in
linear time.
For example : This example shows how radix sort operates on seven 3-digit
numbers.
Table 1.32.1.
Input 1st pass 2nd pass 3rd pass
329 720 720 329
457 355 329 355
657 436 436 436
839 457 839 457
436 657 355 657
720 329 457 720
355 839 657 839
In Table 1.32.1, the first column is the input and the remaining columns
show the list after successive sorts on increasingly significant digit
positions.
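A minimal Python sketch of RADIX_SORT with counting sort as the stable intermediate sort, run on the seven 3-digit numbers of Table 1.32.1 (function names are illustrative) :

def counting_sort_by_digit(a, exp):
    """Stable sort of a by the decimal digit selected by exp (1, 10, 100, ...)."""
    count = [0] * 10
    for x in a:
        count[(x // exp) % 10] += 1
    for i in range(1, 10):
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):
        d = (x // exp) % 10
        count[d] -= 1
        out[count[d]] = x
    return out

def radix_sort(a, d):
    exp = 1
    for _ in range(d):                 # one stable pass per digit, lowest order first
        a = counting_sort_by_digit(a, exp)
        exp *= 10
    return a

print(radix_sort([329, 457, 657, 839, 436, 720, 355], 3))
# [329, 355, 436, 457, 657, 720, 839]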
Que 1.33. Among Merge sort, Insertion sort and quick sort which
sorting technique is the best in worst case. Apply the best one among
these algorithms to sort the list E, X, A, M, P, L, E in alphabetic order.
AKTU 2019-20, Marks 07
Answer
Merge sort technique is best in worst case because of its time complexity
O(n log n).
Numerical :
Given : E, X, A, M, P, L, E
Pass 1 : Merge each pair of element to obtain sorted list :
E X A M P L E
After sorting each pair, we get
E X A M L P E
Pass 2 : Merge each pair to obtain the list :
A E M X E L P
Pass 3 : Again merge the two sub arrays to obtain the list :
A E E L M P X
2 Advanced Data Structure
CONTENTS
Part-1 : Red-Black Trees
Questions-Answers
Answer
Red-black tree :
A red-black tree is a binary tree where each node has colour as an extra
attribute, either red or black. It is a self-balancing Binary Search Tree (BST)
where every node follows following properties :
1. Every node is either red or black.
2. The root is black.
3. Every leaf (NIL) is black.
4. If a node is red, then both its children are black.
5. For each node, all paths from the node to its descendant leaves contain
the same number of black nodes.
Insertion :
i. We begin by adding the node as we do in a simple binary search tree
and colouring it red.
RB-INSERT (T, z)
1. y ← nil[T]
2. x ← root[T]
3. while x ≠ nil[T]
4.   do y ← x
5.     if key[z] < key[x]
6.       then x ← left[x]
7.       else x ← right[x]
8. p[z] ← y
9. if y = nil[T]
10.   then root[T] ← z
11.   else if key[z] < key[y]
12.     then left[y] ← z
13.     else right[y] ← z
14. left[z] ← nil[T]
15. right[z] ← nil[T]
16. colour[z] ← RED
17. RB-INSERT-FIXUP (T, z)
Case 1 : z's uncle y is red :
a. Recolour z's parent and uncle black and z's grandparent red.
b. Move z up to its grandparent and repeat the fix-up.
[Fig. 2.1.1 : example tree before and after the case 1 recolouring — the violation of property 4 occurs because z's uncle y is red, so case 1 is applied.]
Case 2 : z’s uncle is black, z is the right of its parent :
a. Change z to z’s parent.
b. Rotate z’s parent left to make case 3.
[Fig. 2.1.2 : example tree before and after the case 2 left rotation.]
Case 3 : z’s uncle is black, z is the left child of its parent :
a. Set z’s parent black.
b. Set z’s grandparent to red.
c. Rotate z’s grandparent right.
[Fig. 2.1.3 : example tree before and after the case 3 right rotation and recolouring.]
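The fix-up cases above depend on left and right rotations. Below is a minimal Python sketch of a left rotation on a pointer-based tree (the Node class and the use of None for nil[T] are illustrative assumptions; the right rotation is symmetric) :

class Node:
    def __init__(self, key, colour="R"):
        self.key, self.colour = key, colour
        self.left = self.right = self.parent = None

def left_rotate(root, x):
    """Rotate x down-left; x's right child y takes x's place. Returns the tree root."""
    y = x.right
    x.right = y.left                  # y's left subtree becomes x's right subtree
    if y.left:
        y.left.parent = x
    y.parent = x.parent               # link x's former parent to y
    if x.parent is None:
        root = y
    elif x is x.parent.left:
        x.parent.left = y
    else:
        x.parent.right = y
    y.left = x                        # put x on y's left
    x.parent = y
    return root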
Que 2.2. What are the advantages of red-black tree over binary
search tree ? Write an algorithm to insert a key in a red-black tree,
and insert the following sequence of keys into an empty red-black
tree : 1, 2, 3, 4, 5, 5.
Answer
Advantages of RB-tree over binary search tree :
1. The main advantage of red-black trees over AVL trees is that a single
top-down pass may be used in both insertion and deletion operations.
2. Red-black trees are self-balancing while on the other hand, simple binary
search trees are unbalanced.
3. It is particularly useful when inserts and/or deletes are relatively frequent.
4. Time complexity of red-black tree is O(log n) while on the other hand, a
simple BST has time complexity of O(n).
Algorithm to insert a key in a red-black tree : Refer Q. 2.1, Page 2–2B,
Unit-2.
Numerical :
Insert 1 : 1 becomes the black root.
Insert 2 : 2 is added as the red right child of 1.
Insert 3 : 3 is added as the red right child of 2; a left rotation about 1 gives black root 2 with red children 1 and 3.
Insert 4 : 4 is added as the red right child of 3; its uncle 1 is red, so 1 and 3 are recoloured black (root 2 stays black) and 4 stays red.
Insert 5 : 5 is added as the red right child of 4; its uncle is nil (black), so a left rotation about 3 puts 4 (black) in its place with red children 3 and 5.
Insert 5 (duplicate) : the second 5 is added as a red child of the existing 5; its uncle 3 is red, so 3 and 5 are recoloured black and 4 red.
(Fig. 2.2.1 shows the tree after each insertion; the final tree has black root 2, black child 1, red child 4, black grandchildren 3 and 5, and the duplicate 5 as a red leaf.)
Que 2.3. Explain red-black tree. Show steps of inserting the keys
41, 38, 31, 12, 19, 8 into initially empty red-black tree.
OR
What is red-black tree ? Write an algorithm to insert a node in an
empty red-black tree explain with suitable example.
AKTU 2017-18, Marks 10
Answer
Red-black tree and insertion algorithm : Refer Q. 2.1, Page 2–2B, Unit-2.
Numerical :
Insert 41 : 41 becomes the black root.
Insert 38 : 38 is added as the red left child of 41.
Insert 31 : 31 is added as the red left child of 38; case 3 applies, so a right rotation about 41 gives black root 38 with red children 31 and 41.
Insert 12 : 12 is added as the red left child of 31; its uncle 41 is red, so case 1 recolours 31 and 41 black (root 38 stays black) and 12 stays red.
Insert 19 : 19 is added as the red right child of 12; cases 2 and 3 apply (a left rotation about 12 followed by a right rotation about 31 with recolouring), so 19 becomes black with red children 12 and 31.
Insert 8 : 8 is added as the red left child of 12; its uncle 31 is red, so case 1 recolours 12 and 31 black and 19 red.
Thus the final tree is :
root 38 (black); children 19 (red) and 41 (black); 19's children 12 (black) and 31 (black); 8 is a red left child of 12.
Answer
Insertion in red-black tree : Refer Q. 2.1, Page 2–2B, Unit-2.
Insert 1 : 1 becomes the black root.
Insert 2 : 2 is added as the red right child of 1.
Insert 3 : case 2 — a left rotation about 1 gives black root 2 with red children 1 and 3.
Insert 4 : the uncle 1 is red, so a case 1 recolouring makes 1 and 3 black; 4 stays red under 3.
Insert 5 : case 2 — a left rotation about 3 makes 4 black with red children 3 and 5.
Insert 6 : the uncle 3 is red, so a case 1 recolouring makes 3 and 5 black and 4 red; 6 stays red under 5.
Insert 7 : case 2 — a left rotation about 5 makes 6 black with red children 5 and 7.
Insert 8 : the uncle 5 is red, so a case 1 recolouring makes 5 and 7 black and 6 red, and the fix-up continues towards the root.
Insert 9 : case 2 — a left rotation about 7 makes 8 black with red children 7 and 9.
[Figs. : the tree after each insertion is drawn in the original; every even key triggers a case 1 recolouring and every odd key from 3 onwards triggers a left rotation, keeping the tree balanced.]
Que 2.5. How to remove a node from RB-tree ? Discuss all cases
and write down the algorithm.
Answer
To remove a node from RB-tree RB-DELETE procedure is used. In
RB-DELETE procedure, after splitting out a node, it calls an auxiliary
procedure RB-DELETE-FIXUP that changes colours and performs rotations
to restore the red-black properties.
RB-DELETE (T, z)
1. if left[z] = nil[T] or right[z] = nil[T]
2.   then y ← z
3.   else y ← TREE-SUCCESSOR(z)
4. if left[y] ≠ nil[T]
5.   then x ← left[y]
6.   else x ← right[y]
7. p[x] ← p[y]
8. if p[y] = nil[T]
9.   then root[T] ← x
10.   else if y = left[p[y]]
11.     then left[p[y]] ← x
12.     else right[p[y]] ← x
13. if y ≠ z
14.   then key[z] ← key[y]
15.     copy y's satellite data into z
16. if colour[y] = BLACK
17.   then RB-DELETE-FIXUP (T, x)
18. return y
RB-DELETE-FIXUP (T, x)
1. while x ≠ root[T] and colour[x] = BLACK
2.   do if x = left[p[x]]
3.     then w ← right[p[x]]
4.       if colour[w] = RED
5.         then colour[w] ← BLACK                          case 1
6.           colour[p[x]] ← RED                            case 1
7.           LEFT-ROTATE(T, p[x])                          case 1
8.           w ← right[p[x]]                               case 1
9.       if colour[left[w]] = BLACK and colour[right[w]] = BLACK
10.         then colour[w] ← RED                           case 2
11.           x ← p[x]                                     case 2
12.         else if colour[right[w]] = BLACK
13.           then colour[left[w]] ← BLACK                 case 3
14.             colour[w] ← RED                            case 3
15.             RIGHT-ROTATE(T, w)                         case 3
16.             w ← right[p[x]]                            case 3
17.           colour[w] ← colour[p[x]]                     case 4
18.           colour[p[x]] ← BLACK                         case 4
19.           colour[right[w]] ← BLACK                     case 4
20.           LEFT-ROTATE(T, p[x])                         case 4
21.           x ← root[T]                                  case 4
22.   else (same as then clause with "right" and "left" exchanged)
23. colour[x] ← BLACK
Cases of RB-tree for deletion :
Case 1 : x’s sibling w is red :
1. It occurs when node w the sibling of node x, is red.
2. Since w must have black children, we can switch the colours of w
and p[x] and then perform a left-rotation on p[x] without violating
any of the red-black properties.
3. The new sibling of x, which is one of w’s children prior to the
rotation, is now black, thus we have converted case 1 into case 2, 3
or 4.
4. Case 2, 3 and 4 occur when node w is black. They are distinguished
by colours of w’s children.
[Fig. 2.5.1 : case 1 — w is recoloured black, p[x] red, and a left rotation is performed on p[x], giving x a new black sibling.]
Case 2 : x’s sibling w is black, and both of w’s children are black :
1. Both of w’s children are black. Since w is also black, we take one
black of both x and w, leaving x with only one black and leaving w
red.
2. For removing one black from x and w, we add an extra black to
p[x], which was originally either red or black.
3. We do so by repeating the while loop with p[x] as the new node x.
4. If we enter in case 2 through case 1, the new node x is red and
black, the original p[x] was red.
5. The value c of the colour attribute of the new node x is red, and
the loop terminates when it tests the loop condition. The new
node x is then coloured black.
[Fig. 2.5.2 : case 2 — w is recoloured red and x moves up to p[x].]
Case 3 : x’s sibling w is black, w’s left child is red, and w’s right child
is black :
1. Case 3 occurs when w is black, its left child is red and its right child
is black.
2. We can switch the colours of w and its left child left[w] and then
perform a right rotation on w without violating any of the red-
black properties, the new sibling w of x is a black node with a red
right child and thus we have transformed case 3 into case 4.
[Fig. 2.5.3 : case 3 — the colours of w and left[w] are switched and a right rotation is performed on w, producing case 4.]
Case 4 : x’s sibling w is black, and w’s right child is red :
1. When node x’s sibling w is black and w’s right child is red.
2. By making some colour changes and performing a left rotation on
p[x], we can remove the extra black on x, making it singly black,
without violating any of the red-black properties.
[Fig. 2.5.4 : case 4 — after the colour changes and a left rotation on p[x], the extra black on x is removed.]
Que 2.6. Insert the nodes 15, 13, 12, 16, 19, 23, 5, 8 in empty
red-black tree and delete in the reverse order of insertion.
AKTU 2016-17, Marks 10
Answer
Insertion :
Insert 15 : 15 becomes the black root.
Insert 13 : 13 is added as the red left child of 15.
Insert 12 : a right rotation about 15 gives black root 13 with red children 12 and 15.
Insert 16 : the uncle 12 is red, so a recolouring makes 12 and 15 black; 16 stays red under 15.
Insert 19 : a left rotation about 15 makes 16 black with red children 15 and 19.
Insert 23 : the uncle 15 is red, so a recolouring makes 15 and 19 black and 16 red; 23 stays red under 19.
Insert 5 : the parent 12 is black, so 5 is simply added as a red left child of 12.
Insert 8 : the uncle of 8 is nil (black), so a left rotation about 5 followed by a right rotation about 12 makes 8 black with red children 5 and 12.
The tree after all insertions is : root 13 (black); children 8 (black) and 16 (red); 8's children 5 (red) and 12 (red); 16's children 15 (black) and 19 (black); 23 is a red right child of 19.
[Figs. : the tree after each insertion is drawn in the original.]
Deletions :
Delete 8 : 8 is an internal node; it is replaced by its successor 12 (a red leaf), which is removed, so 12 (black) now has the single red child 5.
Delete 5 : 5 is a red leaf, so it is simply removed.
Delete 23 : 23 is a red leaf, so it is simply removed.
Delete 19 : 19 is a black leaf; removing it triggers case 2 of the fix-up, which recolours its sibling 15 red and the parent 16 black.
Delete 16 : 16 has the single red child 15, which takes its place and is coloured black.
Delete 12 : 12 is a black leaf; case 2 of the fix-up recolours its sibling 15 red, and the root 13 stays black.
Delete 13 : the root 13 is replaced by its only child 15, which becomes the black root.
Delete 15 : the tree becomes empty.
[Figs. : the tree after each deletion is drawn in the original.]
Answer
Insert 12 : 12 becomes the black root.
Insert 9 : 9 is added as the red left child of 12.
Insert 81 : 81 is added as the red right child of 12.
Insert 76 : the uncle 9 is red, so case 1 recolours 9 and 81 black; 76 stays red under 81.
Insert 23 : case 3 — a right rotation about 81 makes 76 black with red children 23 and 81.
Insert 43 : the uncle 81 is red, so case 1 recolours 23 and 81 black and 76 red; 43 stays red under 23.
Insert 65 : case 3 — a rotation about 23 makes 43 black with red children 23 and 65.
Insert 88 : the parent 81 is black, so 88 is simply added as a red right child of 81.
Insert 76 (duplicate) : the new node is placed in the right subtree and the fix-up propagates upwards, after which 76 becomes the black root with 12 and 81 as the roots of its left and right subtrees.
Insert 32 : 32 is added below 23; the resulting double red is repaired by recolouring, leaving 43 red with black children 23 and 65.
Insert 54 : the parent 65 is black, so 54 is simply added as a red child of 65.
Delete 23 : 23 is removed and its position is taken over by its red child 32, which is recoloured black.
Delete 81 : 81 is removed and its position is taken over by its successor 88; the red-black properties are restored by recolouring.
[Figs. : the tree after every insertion and after each deletion is drawn in the original; the final tree has black root 76, left subtree rooted at 12 (with 9 and 43 below it) and right subtree rooted at 88.]
Answer
Properties of red-black tree : Refer Q. 2.1, Page 2–2B, Unit-2.
1. By property 5 of an RB-tree, every root-to-leaf path in the tree has the
same number of black nodes; let this number be B.
2. So there are no leaves in this tree at depth less than B, which means the
tree has at least as many internal nodes as a complete binary tree of
height B.
3. Therefore, n ≥ 2^B – 1. This implies B ≤ log_2(n + 1).
4. By property 4 of an RB-tree, at most every other node on a root-to-leaf
path is red; therefore, h ≤ 2B.
Putting these together, we have
h ≤ 2 log_2(n + 1).
Que 2.9. Insert the elements 8, 20, 11, 14, 9, 4, 12 in a Red-Black
tree and delete 12, 4, 9, 14 respectively.
AKTU 2018-19, Marks 10
Answer
Insert 8 : 8 becomes the black root.
Insert 20 : 20 is added as the red right child of 8.
Insert 11 : Since the parent of node 11 is red, check the colour of the uncle of
node 11. Since the uncle of node 11 is nil, do rotation and recolouring.
After the rotation and recolouring, 11 becomes the black root with red children 8 and 20.
Insert 14 : Uncle of node 14 is red. Recolour the parent of node 14 i.e.,
20 and uncle of node 14 i.e., 8. No rotation is required.
The tree is now : black root 11, black children 8 and 20, with 14 as a red left child of 20.
Insert 9 : Parent of node 9 is black. So no rotation and no recolouring.
9 is added as a red right child of 8; the rest of the tree is unchanged.
Insert 4 : Parent of node 4 is black. So no rotation and no recolouring.
4 is added as a red left child of 8.
Insert 12 : 12 is added as a red left child of 14; its uncle is nil (black), so a right rotation about 20 with recolouring makes 14 black with red children 12 and 20. The tree is now : root 11 (black), children 8 (black) and 14 (black); 8's children are 4 and 9 (red), and 14's children are 12 and 20 (red).
Delete 12 : Node 12 is red and leaf node. So simply delete node 12.
The tree now has 14 with only the red child 20.
Delete 4 : Node 4 is red and leaf node. So simply delete node 4.
The tree now has 8 with only the red child 9.
Delete 9 : Node 9 is a red leaf, so it is simply removed.
Delete 14 : Node 14 is an internal node; replace node 14 with node 20 and do
not change the colour.
The final tree is : root 11 (black) with children 8 (black) and 20 (black).
PART-2
B-Trees.
Questions-Answers
Answer
A B-tree of order m is an m-ary search tree with the following properties :
1. The root is either leaf or has atleast two children.
2. Each node, except for the root and the leaves, has between ⌈m/2⌉ and m
children.
3. Each path from the root to a leaf has the same length.
4. The root, each internal node and each leaf is typically a disk block.
5. Each internal node has upto (m – 1) key values and upto m children.
B-TREE-SEARCH (x, k)
1. i ← 1
2. while i ≤ n[x] and k > key_i[x]
3.   do i ← i + 1
4. if i ≤ n[x] and k = key_i[x]
5.   then return (x, i)
6. if leaf[x]
7.   then return NIL
8.   else DISK-READ(c_i[x])
9.     return B-TREE-SEARCH (c_i[x], k)
B-TREE-INSERT (T, k)
1. r ← root[T]
2. if n[r] = 2t – 1
3.   then s ← ALLOCATE-NODE( )
4.     root[T] ← s
5.     leaf[s] ← FALSE
6.     n[s] ← 0
7.     c_1[s] ← r
8.     B-TREE-SPLIT-CHILD (s, 1, r)
9.     B-TREE-INSERT-NONFULL (s, k)
10. else B-TREE-INSERT-NONFULL (r, k)
B-TREE-SPLIT-CHILD (x, i, y)
1. z ← ALLOCATE-NODE( )
2. leaf[z] ← leaf[y]
3. n[z] ← t – 1
4. for j ← 1 to t – 1
5.   do key_j[z] ← key_{j+t}[y]
6. if not leaf[y]
7.   then for j ← 1 to t
8.     do c_j[z] ← c_{j+t}[y]
9. n[y] ← t – 1
10. for j ← n[x] + 1 down to i + 1
11.   do c_{j+1}[x] ← c_j[x]
12. c_{i+1}[x] ← z
13. for j ← n[x] down to i
14.   do key_{j+1}[x] ← key_j[x]
15. key_i[x] ← key_t[y]
16. n[x] ← n[x] + 1
17. DISK-WRITE(y)
18. DISK-WRITE(z)
19. DISK-WRITE(x)
The CPU time used by B-TREE-SPLIT-CHILD is Θ(t). The procedure performs
O(1) disk operations.
B-TREE-INSERT-NONFULL (x, k)
1. i ← n[x]
2. if leaf[x]
3.   then while i ≥ 1 and k < key_i[x]
4.     do key_{i+1}[x] ← key_i[x]
5.       i ← i – 1
6.   key_{i+1}[x] ← k
7.   n[x] ← n[x] + 1
8.   DISK-WRITE(x)
9. else while i ≥ 1 and k < key_i[x]
10.   do i ← i – 1
11.   i ← i + 1
12.   DISK-READ(c_i[x])
13.   if n[c_i[x]] = 2t – 1
14.     then B-TREE-SPLIT-CHILD (x, i, c_i[x])
15.       if k > key_i[x]
16.         then i ← i + 1
17.   B-TREE-INSERT-NONFULL (c_i[x], k)
The total CPU time used is O(t·h) = O(t log_t n).
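A minimal Python sketch of B-TREE-SEARCH over an in-memory node structure (the BTreeNode class is an illustrative assumption; DISK-READ is omitted because every node is already in memory) :

class BTreeNode:
    def __init__(self, keys=None, children=None, leaf=True):
        self.keys = keys or []              # sorted key list
        self.children = children or []      # child pointers (len(keys) + 1 when internal)
        self.leaf = leaf

def btree_search(x, k):
    """Return (node, index) of key k, or None if k is absent."""
    i = 0
    while i < len(x.keys) and k > x.keys[i]:
        i += 1
    if i < len(x.keys) and k == x.keys[i]:
        return x, i
    if x.leaf:
        return None
    return btree_search(x.children[i], k)   # descend into the i-th child

leaf1 = BTreeNode([1, 3])
leaf2 = BTreeNode([7, 9])
root = BTreeNode([5], [leaf1, leaf2], leaf=False)
print(btree_search(root, 7))                # (node containing [7, 9], 0)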
Que 2.11. What are the characteristics of B-tree ? Write down the
steps for insertion operation in B-tree.
Answer
Characteristic of B-tree :
1. Each node of the tree, except the root node and leaves has at least m/2
subtrees and no more than m subtrees.
2. Root of tree has at least two subtree unless it is a leaf node.
3. All leaves of the tree are at same level.
Insertion operation in B-tree :
In a B-tree, the new element must be added only at leaf node. The insertion
operation is performed as follows :
Step 1 : Check whether tree is empty.
Step 2 : If tree is empty, then create a new node with new key value and
insert into the tree as a root node.
Step 3 : If tree is not empty, then find a leaf node to which the new key value
can be added using binary search tree logic.
Step 4 : If that leaf node has an empty position, then add the new key value
to that leaf node by maintaining ascending order of key value within the
node.
Step 5 : If that leaf node is already full, then split that leaf node by sending
middle value to its parent node. Repeat the same until sending value is fixed
into a node.
Step 6 : If the splitting is occurring to the root node, then the middle value
becomes new root node for the tree and the height of the tree is increased by
one.
Answer
There are three possible cases for deletion in B-tree as follows :
Let k be the key to be deleted, x be the node containing the key.
Case 1 : If the key is already in a leaf node, and removing it does not cause
that leaf node to have too few keys, then simply remove the key to be
deleted. Key k is in node x and x is a leaf, simply delete k from x.
Case 2 : If key k is in node x and x is an internal node, there are three cases
to consider :
a. If the child y that precedes k in node x has at least t keys (more than the
minimum), then find the predecessor key k in the subtree rooted at y.
Recursively delete k and replace k with k in x.
b. Symmetrically, if the child z that follows k in node x has at least t keys,
find the successor k and delete and replace as before.
c. Otherwise, if both y and z have only t – 1 (minimum number) keys,
merge k and all of z into y, so that both k and the pointer to z are
removed from x, y now contains 2t – 1 keys, and subsequently k is
deleted.
Case 3 : If key k is not present in an internal node x, determine the root of
the appropriate subtree that must contain k. If the root has only t – 1 keys,
execute either of the following two cases to ensure that we descend to a node
containing at least t keys. Finally, recurse to the appropriate child of x.
a. If the root of that subtree has only t – 1 keys but has an immediate sibling
with at least t keys, give it an extra key by moving a key from x down into
it, moving a key from its immediate left or right sibling up into x, and
moving the appropriate child pointer from the sibling into it.
b. If the root and all of its siblings have t – 1 keys, merge the root with one
sibling. This involves moving a key down from x into the new merged
node to become the median key for that node.
Que 2.13. How B-tree differs with other tree structures ?
Answer
1. In B-tree, the maximum number of child nodes a non-terminal node
can have is m where m is the order of the B-tree. On the other hand,
other tree can have at most two subtrees or child nodes.
2. B-tree is used when data is stored in disk whereas other tree is used
when data is stored in fast memory like RAM.
3. B-tree is employed in code indexing data structure in DBMS, while,
other tree is employed in code optimization, Huffman coding, etc.
4. The maximum height of a B-tree is log_m n (m is the order of the tree and
n is the number of nodes), and the maximum height of the other trees is
log_2 n (base 2 because they are binary).
5. A binary tree is allowed to have zero nodes whereas any other tree
must have atleast one node. Thus binary tree is really a different kind
of object than any other tree.
Answer
In a 2-3-4 B-tree, a non-leaf node can have a minimum of 2 keys and a maximum
of 4 keys, so the order of the tree is 5.
Insert 40, 35, 22, 90 :
22 35 40 90
Insert 12 :
35
12 22 40 90
Insert 45, 58 :
35
12 22 40 45 58 90
Insert 78 :
35 58
12 22 40 45 78 90
Insert 67, 60 :
35 58
12 22 40 45 60 67 78 90
Delete 35 :
40 58
12 22 45 60 67 78 90
Delete 22 :
The leaf [12] is left with too few keys and its sibling [45] has only one key, so
the leaf, the separator 40 and the sibling are merged :
58
12 40 45        60 67 78 90
A C D E I K N O R S T U V Y Z
Fig. 2.15.1.
Answer
B-tree : Refer Q. 2.10, Page 2–20B, Unit-2.
Numerical :
Insertion :
G M P X
A C D E I K N O R S T U V Y Z
Assuming, order of B-tree = 5
G M P T X
A C D E I K N O R S U V Y Z
G M T X
A C D E I K N O R S U V Y Z
Insert B :
G M T X
A B C D E I K N O R S U V Y Z
C G M T X
A B D E I K N O R S U V Y Z
Insert Q :
C G M T X
A B D E I K N O Q R S U V Y Z
Insert L :
P
C G M T X
A B D E I K L N O Q R S U V Y Z
Insert F :
P
C G M T X
A B D E F I K L N O Q R S U V Y Z
Deletion :
Delete F :
C G M T X
A B D E I K L N O Q R S U V Y Z
Delete M :
C G L T X
A B D E I K N O Q R S U V Y Z
Delete G :
P
C L T X
A B D E I K N O Q R S U V Y Z
Delete D :
C L P T X
A B E I K N O Q R S U V Y Z
Delete B :
E L P T X
A C I K NO Q R S UV Y Z
Answer
Assume that t=3
2t – 1 = 2 × 3 – 1 = 6 – 1 = 5
and t–1= 3–1=2
So, maximum of 5 keys and minimum of 2 keys can be inserted in a node.
Now, apply insertion process as :
Insert F : F
Insert S : F S
Insert Q : F Q S
Insert K : F K Q S
Insert C : C F K Q S
Insert L : C F K L Q S
As, there are more than 5 keys in this node.
Find the median : n[x] = 6 (even), so median = n[x]/2 = 6/2 = 3.
Now, median = 3,
So, we split the node by 3rd key.
K
C F K L Q S
Median
(splitting point) C F L Q S
Move up
Insert H, T :
K
C F H L Q S T
Insert V :
K
C F H L Q S T V
Insert W :
K
C F H L Q S T V W
More than 5 keys split node from Median.
n[x] = 6 (even), so median = n[x]/2 = 6/2 = 3 (i.e., the 3rd key moves up).
K S
C F H L Q T V W
Insert M, R, N :
K S
CF H L MNQR T V W
Insert P :
K S
C F H LM N P QR T V W
More than 5 key
split the node
n[x] = 6 (even), so median = n[x]/2 = 6/2 = 3 (i.e., the 3rd key moves up).
K N S
C F H LM P QR T V W
Insert A, B :
KN S
A B CF H LM PQ R T V W
Insert X, Y :
K N S
A B CF H LM PQR TV WXY
Insert D :
K N S
A B CD FH LM PQ R TV WXY
More than 5 key
split the node
n[x] = 6 (even), so median = n[x]/2 = 6/2 = 3 (i.e., the 3rd key moves up).
C K NS
AB D FH LM PQR TV WXY
Insert Z, E :
CK NS
A B D E FH L M PQR TV X YZ
Insert G, I :
N
CK SW
A B DEFG HI LM PQR TV X YZ
N
CF K SW
A B DE GHI LM PQR TV X YZ
Fig. 2.16.1. Inserted all given information with degree t = 3.
Answer
Insert 10, 25, 20, 35 :
20
Split
10 20 25 35 10 25 35
Insert 30 :
20
10 25 30 35
Insert 55 :
20 20 30
Split
10 25 30 35 55 10 25 35 55
Insert 40, 45 :
20 30 20 30 40
Split
10 25 35 40 45 55 10 25 35 45 55
Insert 50, 55 :
20 30 40 20 30 40 50
Split
10 25 35 45 50 55 55 10 25 35 45 55 55
Split
30
20 40 50
10 25 35 45 55 55
Insert 60, 75 :
30
20 40 50 Split
10 25 35 45 55 55 60 75
30
20 40 50 55
10 25 35 45 55 60 75
Insert 70, 65 :
30
20 40 50 55 Split
10 25 35 45 55 60 65 70 75
30
40 50 55 65 Split
20
10 25 35 45 55 60 70 75
30 50
20 40 55 65
10 25 35 45 55 60 70 75
Insert 80, 85 :
30 50
20 40 55 65
Split
10 25 35 45 55 60 70 75 80 85
30 50
20 40 55 65 75
10 25 35 45 55 60 70 80 85
Insert 90 :
30 50
20 40 55 65 75
10 25 35 45 55 60 70 80 85 90
PART-3
Binomial Heaps.
Questions-Answers
Answer
Binomial heap :
1. Binomial heap is a type of data structure which keeps data sorted and
allows insertion and deletion in amortized time.
2. A binomial heap is implemented as a collection of binomial tree.
Properties of binomial tree :
1. The total number of nodes in a binomial tree of order k is 2^k.
2. The height of the tree is k.
3. There are exactly C(k, i), i.e., kCi, nodes at depth i for i = 0, 1, …, k (this is
why the tree is called a "binomial" tree).
4. The root has degree k (children), and its children are the roots of B_{k–1},
B_{k–2}, …, B_0 from left to right.
Answer
Binomial heap : Refer Q. 2.18, Page 2–33B, Unit-2.
Union of binomial heap :
1. The BINOMIAL-HEAP-UNION procedure repeatedly links binomial
trees where roots have the same degree.
2. The following procedure links the B_{k–1} tree rooted at node y to the
B_{k–1} tree rooted at node z, that is, it makes z the parent of y. Node z thus
becomes the root of a B_k tree.
BINOMIAL-LINK (y, z)
i. p[y] ← z
ii. sibling[y] ← child[z]
iii. child[z] ← y
iv. degree[z] ← degree[z] + 1
3. The BINOMIAL-HEAP-UNION procedure has two phases :
a. The first phase, performed by the call of BINOMIAL-HEAP-
MERGE, merges the root lists of binomial heaps H1 and H2 into a
single linked list H that is sorted by degree into monotonically
increasing order.
b. The second phase links roots of equal degree until at most one root
remains of each degree. Because the linked list H is sorted by
degree, we can perform all the link operations quickly.
BINOMIAL-HEAP-UNION (H1, H2)
1. H ← MAKE-BINOMIAL-HEAP( )
2. head[H] ← BINOMIAL-HEAP-MERGE(H1, H2)
3. Free the objects H1 and H2 but not the lists they point to
4. if head[H] = NIL
5.   then return H
6. prev-x ← NIL
7. x ← head[H]
8. next-x ← sibling[x]
9. while next-x ≠ NIL
10.   do if (degree[x] ≠ degree[next-x]) or
      (sibling[next-x] ≠ NIL and degree[sibling[next-x]] = degree[x])
11.     then prev-x ← x                              case 1 and 2
12.       x ← next-x                                 case 1 and 2
13.     else if key[x] ≤ key[next-x]
14.       then sibling[x] ← sibling[next-x]          case 3
15.         BINOMIAL-LINK(next-x, x)                 case 3
16.       else if prev-x = NIL                       case 4
17.         then head[H] ← next-x                    case 4
18.         else sibling[prev-x] ← next-x            case 4
19.         BINOMIAL-LINK(x, next-x)                 case 4
20.         x ← next-x                               case 4
21.   next-x ← sibling[x]
22. return H
BINOMIAL-HEAP-MERGE(H1, H2)
1. a ← head[H1]
2. b ← head[H2]
3. head[H1] ← min-degree(a, b)
4. if head[H1] = NIL
5. return
6. if head[H1] = b
7. then b ← a
8. a ← head[H1]
9. while b ≠ NIL
Case 1 : It occurs when degree[x] ≠ degree[next-x]; the pointers simply move one
position further down the root list.
Fig. 2.19.1. (a) Before and (b) after case 1.
Case 2 : It occurs when x is the first of three roots of equal degree, that is,
degree[x] = degree[next-x] = degree[sibling[next-x]]; the pointers again
move one position further down the list, and the next iteration executes either
case 3 or case 4.
Fig. 2.19.2. (a) Before and (b) after case 2 (three consecutive roots of equal degree).
Case 3 : If degree[x] = degree[next-x] ≠ degree[sibling[next-x]] and
key[x] ≤ key[next-x], we remove next-x from the root list and link it to x,
creating a B(k+1) tree.
Fig. 2.19.3. (a) Before and (b) after case 3 (next-x is linked to x, forming a B(k+1) tree).
Case 4 : If degree[x] = degree[next-x] ≠ degree[sibling[next-x]] and
key[next-x] ≤ key[x], we remove x from the root list and link it to next-x,
again creating a B(k+1) tree.
Fig. 2.19.4. (a) Before and (b) after case 4 (x is linked to next-x, forming a B(k+1) tree).
The time complexity of the union of two binomial heaps is O(log n).
Answer
Properties of binomial heap : Refer Q. 2.18, Page 2–33B, Unit-2.
Algorithm for union of binomial heap : Refer Q. 2.19, Page 2–33B,
Unit-2.
Minimum key :
BINOMIAL-HEAP-EXTRACT-MIN (H) :
1. Find the root x with the minimum key in the root list of H, and remove
x from the root list of H.
2. H' ← MAKE-BINOMIAL-HEAP( ).
3. Reverse the order of the linked list of x’s children, and set head[H'] to
point to the head of the resulting list.
4. H ← BINOMIAL-HEAP-UNION(H, H').
5. Return x
Since each of lines 1-4 takes O(log n) time if H has n nodes, BINOMIAL-
HEAP-EXTRACT-MIN runs in O(log n) time.
Que 2.21. Construct the binomial heap for the following sequence
of number 7, 2, 4, 17, 1, 11, 6, 8, 15.
Answer
Numerical :
Insert 7 :
Head [H]
7
Insert 2 :
x next-x
7 2
Head [H]
prev-x = NIL
degree[x] = 0. So, degree[x] ≠ degree[next-x] is false.
degree[next-x] = 0 and sibling[next-x] = NIL
So, case 1 and 2 are false here.
Now key[x] = 7 and key[next-x] = 2, so key[x] ≤ key[next-x] is false (case 4).
Now prev-x = NIL,
then Head[H] ← next-x and
i.e.,
Head [H]
7 2
and BINOMIAL-LINK (x, next-x)
i.e.,
2
7
Now
Head [H] 2 x
7
So, after inserting 2, binomial heap is
Head [H] 2
7
Insert 4 :
x next-x
4 2
7
degree[x] ≠ degree[next-x]
So, now next-x becomes x and x becomes prev-x.
Now next-x = NIL
So, after inserting 4, final binomial heap is :
Head [H] 4 2
7
Insert 17 :
After Binomial-Heap-Merge, we get
x next-x
4 17 2
7
degree[x] = degree[next-x]
degree[sibling[next-x]] ≠ degree[x]
key[x] ≤ key[next-x]
4 ≤ 17 [True]
So,
4 17 2
and call Binomial-Link [next-x, x]
7
We get
x next-x
4 2
7
17
degree[x] = degree[next-x]
sibling[next-x] = NIL
key[x] ≤ key[next-x] [False]
prev-x = NIL, then
Head[H] ← next-x
BINOMIAL-LINK(x, next-x)
x next-x
Head [H]
2 x
4
7
next-x = NIL
17
Head [H] 2 x
4
7
17
Insert 1 :
x next-x
Head [H] 1 2
4
7
17
degree[x] ≠ degree[next-x]
So, next-x becomes x and next-x = NIL
and after inserting 1, binomial heap is :
Head [H] 1 2
4
7
17
Insert 11 :
After Binomial-Heap-Merge, we get
x next-x
Head [H] 1 11 2
4
7
17
degree[x] = degree[next-x]
degree[sibling[next-x]] ≠ degree[x]
key[x] ≤ key[next-x] [True]
So,
x next-x
1 2
4 7
11
17
degree[x] ≠ degree[next-x]
So, next-x becomes x and next-x = NIL
and final binomial heap after inserting 11 is
Head [H] 1 2
4 7
11
17
Insert 6 :
x next-x
Head [H] 6 1 2
4 7
11
17
degree[x] ≠ degree[next-x]
So, next-x becomes x
and sibling[next-x] becomes next-x.
i.e.,
x next-x
6 1 2
4 7
11
17
Insert 8 :
x next-x
6 8 1 2
4 7
11
17
degree[x] = degree[next-x]
degree[sibling[next-x]] ≠ degree[x]
key[x] ≤ key[next-x] [True]
So,
x next-x
6 1 2
8 11 4 7
17
degree[x] = degree[next-x]
degree[sibling[next-x]] ≠ degree[x]
key[x] ≤ key[next-x] [False]
prev-x = NIL
So,
x next-x
1 2
6 4
11 7
8
17
degree[x] = degree[next-x]
sibling[next-x] = NIL
key[x] ≤ key[next-x] [True]
So, sibling[x] = NIL (case 3 : BINOMIAL-LINK(next-x, x))
and
Head [H]
1
2
6 11
4 7
8
17
next-x = NIL
So, this is the final binomial heap after inserting 8.
Insert 15 :
x next-x
Head [H] 15 1
2
6 11
8 4 7
17
degree[x] ≠ degree[next-x]
So, no change and this is the final binomial heap after inserting 15.
Answer
Deletion of key from binomial heap :
The operation BINOMIAL-HEAP-DECREASE (H, x, k) assigns a new key ‘k’
to a node ‘x’ in a binomial heap H.
BINOMIAL-HEAP-DECREASE-KEY (H, x, k)
1. if k > key[x]
2. then message “error : new key is greater than current key”
3. key[x] ← k
4. y ← x
5. z ← p[y]
6. while z ≠ NIL and key[y] < key[z]
7. do exchange key[y] ↔ key[z]
8. y ← z
9. z ← p[y]
Deleting a key : The operation BINOMIAL-HEAP-DELETE
(H, x) is used to delete a node x’s key from the given binomial heap H. The
following implementation assumes that no node currently in the binomial
heap has a key of – ∞.
BINOMIAL-HEAP-DELETE (H, x)
1. BINOMIAL-HEAP-DECREASE-KEY (H, x, – ∞)
2. BINOMIAL-HEAP-EXTRACT-MIN(H)
For example : Operation of BINOMIAL-HEAP-EXTRACT-MIN (H) on the
following given binomial heap :
Suppose a binomial heap H is as follows :
Head[H] 37 10 1
41 28 13 6 16 12 25
77 8 14 29 26 23 18
11 17 38 42
27
The root x with minimum key is 1. x is removed from the root list of H. i.e.,
x
Head[H] 37 10 1
41 28 13 6 16 12 25
77 8 14 29 26 23 18
11 17 38 42
27
Now, the linked list of x’s children is reversed and head[H'] is set to point to the
head of the resulting list, i.e., another binomial heap H'.
Head[H'] 25 12 16 6
18 26 23 8 14 29
42 11 17 38
27
37 18 10 8 14 29
41 16 28 13 11 17 38
26 23 77 27
42
PART-4
Fibonacci Heaps.
Questions-Answers
Answer
1. A Fibonacci heap is a set of min-heap-ordered trees.
2. The trees are rooted but need not be ordered binomial trees, as shown in Fig. 2.23.1.
Fig. 2.23.1. A Fibonacci heap consisting of a collection of min-heap-ordered trees.
3. Fibonacci heap H is accessed by a pointer min[H] to the root of a tree
containing a minimum key. This node is called the minimum node.
4. If Fibonacci heap H is empty, then min[H] = NIL.
Applications of Fibonacci heap :
1. Fibonacci heap is used for Dijkstra’s algorithm because it improves the
asymptotic running time of this algorithm.
2. It is used in finding shortest paths. These algorithms run in O(n^2)
time if the storage for nodes is maintained as a linear array.
Answer
Fibonacci heap : Refer Q. 2.23, Page 2–44B, Unit-2.
CONSOLIDATE operation :
CONSOLIDATE(H)
1. for i ← 0 to D(n[H])
2. do A[i] ← NIL
3. for each node w in the root list of H
4. do x ← w
5. d ← degree[x]
6. while A[d] ≠ NIL
7. do y ← A[d] // Another node with the same degree as x.
8. if key[x] > key[y]
9. then exchange x ↔ y
10. FIB-HEAP-LINK(H, y, x)
11. A[d] ← NIL
12. d ← d + 1
13. A[d] ← x
14. min[H] ← NIL
15. for i ← 0 to D(n[H])
16. do if A[i] ≠ NIL
17. then add A[i] to the root list of H
18. if min[H] = NIL or key[A[i]] < key[min[H]]
19. then min[H] ← A[i]
FIB-HEAP-LINK(H, y, x)
1. remove y from the root list of H
2. make y a child of x, incrementing degree[x]
3. mark[y] ← FALSE
Answer
Fibonacci heap : Refer Q. 2.23, Page 2–44B, Unit-2.
Structure of Fibonacci heap :
1. Node structure :
a. The field “mark” is True if the node has lost a child since the node
became a child of another node.
b. The field “degree” contains the number of children of this node.
The structure contains a doubly-linked list of sibling nodes.
Each node stores the fields : key, degree, mark, parent, child, left and right.
Fig. 2.25.2. Heap structure (node fields with a circular, doubly-linked root list).
Function for uniting two Fibonacci heap :
Make-Heap :
MAKE-FIB-HEAP( )
allocate(H)
min[H] ← NIL
n[H] ← 0
FIB-HEAP-UNION(H1, H2)
1. H ← MAKE-FIB-HEAP( )
2. min[H] ← min[H1]
3. Concatenate the root list of H2 with the root list of H
4. if (min[H1] = NIL) or (min[H2] ≠ NIL and key[min[H2]] < key[min[H1]])
5. then min[H] ← min[H2]
6. n[H] ← n[H1] + n[H2]
7. Free the objects H1 and H2
8. return H
Answer
i. Make-Heap : Refer Q. 2.25, Page 2–46B, Unit-2.
ii. Insert : FIB-HEAP-INSERT (H, x)
1. degree[x] ← 0
2. p[x] ← NIL
3. child[x] ← NIL
4. left[x] ← x
5. right[x] ← x
6. mark[x] ← FALSE
7. concatenate the root list containing x with the root list of H
8. if min[H] = NIL or key[x] < key[min[H]]
9. then min[H] ← x
10. n[H] ← n[H] + 1
To determine the amortized cost of FIB-HEAP-INSERT, let H be the
input Fibonacci heap and H' be the resulting Fibonacci heap, then
t(H') = t(H) + 1 and m(H') = m(H), and the increase in potential is,
(t(H) + 1) + 2m(H) – (t(H) + 2m(H)) = 1
Since the actual cost is O(1), the amortized cost is O(1) + 1 = O(1).
iii. Minimum :
The minimum node of a Fibonacci heap H is always the root node given
by the pointer min[H], so we can find the minimum node in O(1) actual
time. Because the potential of H does not change, the amortized cost of
this operation is equal to its O(1) actual cost.
iv. FIB-HEAP-EXTRACT-MIN(H)
1. z ← min[H]
2. if z ≠ NIL
3. then for each child x of z
4. do add x to the root list of H
5. p[x] ← NIL
6. remove z from the root list of H
7. if z = right[z]
8. then min[H] ← NIL
9. else min[H] ← right[z]
10. CONSOLIDATE (H)
11. n[H] ← n[H] – 1
12. return z
PART-5
Tries, Skip List.
Questions-Answers
Answer
1. A trie (also called a digital tree, radix tree or prefix tree) is a kind of search tree, i.e., an
ordered tree data structure that is used to store a dynamic set or
associative array where the keys are usually strings.
2. Unlike a binary search tree, no node in the tree stores the key associated
with that node; instead, its position in the tree defines the key with
which it is associated.
3. All the descendants of a node have a common prefix of the string
associated with that node, and the root is associated with the empty
string.
4. Values are not necessarily associated with every node. Rather, values
tend only to be associated with leaves, and with some inner nodes that
correspond to keys of interest.
Properties of a trie :
1. Trie is a multi-way tree.
2. Each node has from 1 to d children.
3. Each edge of the tree is labeled with a character.
4. Each leaf node corresponds to the stored string, which is a concatenation
of characters on a path from the root to this node.
Answer
Search a key in trie :
Trie-Search(t, P[k..m]) // searches for string P in trie t
1. if t is leaf then return true
2. else if t.child(P[k]) = nil then return false
3. else return Trie-Search(t.child(P[k]), P[k + 1..m])
Insert a key in trie :
Trie-Insert(t, P[k..m])
1. if t is not leaf then //otherwise P is already present
2. if t.child(P[k]) = nil then
//Create a new child of t and a “branch” starting with that child and
storing P[k..m]
3. else Trie-Insert(t.child(P[k]), P[k + 1..m])
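A small C sketch of these two trie operations for lowercase strings is given below (the node layout, the alphabet size D = 26 and the helper names are assumptions made for illustration). Searching and inserting a string of length m both take O(m) time, independent of the number of stored keys.
#include <stdlib.h>

#define D 26                              /* alphabet size (lowercase a-z) */

struct tnode {
    int is_end;                           /* a stored string ends here     */
    struct tnode *child[D];
};

struct tnode *tnode_new(void)
{
    return calloc(1, sizeof(struct tnode));
}

/* Insert string P into the trie rooted at t. */
void trie_insert(struct tnode *t, const char *P)
{
    for (; *P; P++) {
        int c = *P - 'a';
        if (t->child[c] == NULL)          /* create a new branch */
            t->child[c] = tnode_new();
        t = t->child[c];
    }
    t->is_end = 1;
}

/* Return 1 if string P is present in the trie rooted at t, else 0. */
int trie_search(const struct tnode *t, const char *P)
{
    for (; *P; P++) {
        int c = *P - 'a';
        if (t->child[c] == NULL)
            return 0;
        t = t->child[c];
    }
    return t->is_end;
}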
Answer
1. A skip list is built in layers.
2. The bottom layer is an ordinary ordered linked list.
3. Each higher layer acts as an “express lane”, where an element in layer
i appears in layer (i + 1) with some fixed probability p (two commonly
used values for p are ½ and ¼.).
4. On average, each element appears in 1/(1 – p) lists, and the tallest element
(usually a special head element at the front of the skip list) appears in all the lists.
5. The skip list contains about log1/p n levels (i.e., logarithm base 1/p of n).
Properties of skip list :
1. Some elements, in addition to pointing to the next element, also point to
elements even further down the list.
2. A level k element is a list element that has k forward pointers.
3. The first pointer points to the next element in the list, the second pointer
points to the next level 2 element, and in general, the ith pointer points
to the next level i element.
Answer
Insertion in skip list :
1. We will start from highest level in the list and compare key of next node
of the current node with the key to be inserted.
2. If key of next node is less than key to be inserted then we keep on
moving forward on the same level.
3. If key of next node is greater than the key to be inserted then we store
the pointer to current node i at update[i] and move one level down and
continue our search.
At the level 0, we will definitely find a position to insert given key.
Insert(list, searchKey)
1. local update[0...MaxLevel+1]
2. x := list→header
3. for i := list→level downto 0 do
4.     while x→forward[i]→key < searchKey do
5.         x := x→forward[i]
6.     update[i] := x
7. x := x→forward[0]
8. lvl := randomLevel()
9. if lvl > list→level then
10.     for i := list→level + 1 to lvl do
11.         update[i] := list→header
12.     list→level := lvl
13. x := makeNode(lvl, searchKey, value)
14. for i := 0 to lvl do
15.     x→forward[i] := update[i]→forward[i]
16.     update[i]→forward[i] := x
Searching in skip list :
Search(list, searchKey)
1. x := list→header
2. -- loop invariant : x→key < searchKey
3. for i := list→level downto 0 do
4.     while x→forward[i]→key < searchKey do
5.         x := x→forward[i]
6. x := x→forward[0]
7. if x→key = searchKey then return x→value
8. else return failure
Deletion in skip list :
Delete(list, searchKey)
1. local update[0..MaxLevel+1]
2. x := list→header
3. for i := list→level downto 0 do
4.     while x→forward[i]→key < searchKey do
5.         x := x→forward[i]
6.     update[i] := x
7. x := x→forward[0]
8. if x→key = searchKey then
9.     for i := 0 to list→level do
10.        if update[i]→forward[i] ≠ x then break
11.        update[i]→forward[i] := x→forward[i]
12.    free(x)
13.    while list→level > 0 and list→header→forward[list→level] = NIL do
14.        list→level := list→level – 1
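The pointer structure assumed by these procedures can be sketched in C as follows (MAX_LEVEL, the node layout and a randomLevel() with p = 1/2 are illustrative assumptions) :
#include <stdlib.h>

#define MAX_LEVEL 16                           /* assumed maximum number of levels     */

struct snode {
    int key;
    int value;
    struct snode *forward[MAX_LEVEL + 1];      /* forward[i] = next level-i node       */
};

struct slist {
    int level;                                 /* current highest level in use         */
    struct snode *header;
};

/* randomLevel() with p = 1/2 : keep flipping a coin until it comes up tails. */
int random_level(void)
{
    int lvl = 0;
    while ((rand() & 1) && lvl < MAX_LEVEL)
        lvl++;
    return lvl;
}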
Answer
Function to calculate xn with time complexity O(log n) :
int power(int x, unsigned int y)
{
int temp;
if(y == 0)
return 1;
temp = power(x, y/2);
if (y%2 == 0)
return temp * temp;
else
return x * temp * temp;
}
3 Graph Algorithms
CONTENTS
Part-1 : Divide and Conquer with ........................ 3–2B to 3–10B
Examples such as Sorting,
Matrix Multiplication, Convex
Hull, Searching
PART-1
Divide and Conquer with Examples such as
Sorting, Matrix Multiplication, Convex Hull, Searching.
Questions-Answers
Que 3.1. Write short note on divide and conquer. Write algorithm
for merge sort and quick sort.
Answer
Divide and conquer :
1. Divide and conquer is an algorithm design paradigm based on multi-
branched recursion.
2. A divide-and-conquer algorithm works by recursively breaking down a
problem into two or more sub-problems of the same or related type,
until these become simple enough to be solved directly.
3. The solutions to the sub-problems are then combined to give a solution
to the original problem.
4. Divide and conquer technique can be divided into the following three
parts :
a. Divide : This involves dividing the problem into smaller sub-problems.
b. Conquer : Recursively solving the sub-problems until they become small
enough to be solved directly.
c. Combine : This involves combining the solutions of the sub-problems to
obtain the solution of the original problem.
Merge Sort : Refer Q. 1.21, Page 1–20B, Unit-1.
Quick Sort : Refer Q. 1.19, Page 1–16B, Unit-1.
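As a small illustration of the divide-conquer-combine pattern applied to searching, a recursive binary search over a sorted array can be sketched in C as follows (a generic example, not taken from the referred questions) :
/* Search key in sorted a[lo..hi]; return its index or -1 (divide and conquer). */
int binary_search(const int a[], int lo, int hi, int key)
{
    if (lo > hi)
        return -1;                                   /* sub-problem is empty  */
    int mid = lo + (hi - lo) / 2;                    /* divide at the middle  */
    if (a[mid] == key)
        return mid;
    if (key < a[mid])
        return binary_search(a, lo, mid - 1, key);   /* conquer left half     */
    return binary_search(a, mid + 1, hi, key);       /* conquer right half    */
}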
Answer
1. Matrix chain multiplication (or Matrix Chain Ordering Problem, MCOP)
is an optimization problem that can be solved using dynamic
programming.
2. MCOP helps to find the most efficient way to multiply given matrices.
Answer
Strassen’s matrix multiplication algorithm has four steps :
1. Divide the input matrices A and B into n/2 × n/2 sub-matrices.
2. Using Θ(n^2) scalar additions and subtractions, compute 14
n/2 × n/2 matrices A1, B1, A2, B2, ..., A7, B7.
3. Recursively compute the seven matrix products.
Pi = AiBi for i = 1, 2, 3, ..., 7
Answer
1. A graph G consists of a set of vertices V and a set of edges E.
2. Graphs are important because any binary relation is a graph, so graphs
can be used to represent essentially any relationship.
1 2
3 4
Fig. 3.4.1.
Various representations of graphs :
1. Matrix representation : Matrices are commonly used to represent
graphs for computer processing. Advantage of representing the graph
in matrix lies in the fact that many results of matrix algebra can be
readily applied to study the structural properties of graph from an
algebraic point of view.
a. Adjacency matrix :
i. Representation of undirected graph :
The adjacency matrix of a graph G with n vertices and no
parallel edges is a n × n matrix A = [aij] whose elements are
given by
aij = 1, if there is an edge between ith and jth vertices
= 0, if there is no edge between them
ii. Representation of directed graph :
The adjacency matrix of a digraph D, with n vertices is the
matrix
A = [aij]n×n in which
aij = 1 if arc (vi, vj) is in D
= 0 otherwise
b. Incidence matrix :
i. Representation of undirected graph :
Consider an undirected graph G = (V, E) which has n vertices
and m edges all labelled. The incidence matrix I(G) = [bij], is
then n × m matrix, where
bij = 1 when edge ej is incident with vi
= 0 otherwise
ii.
Representation of directed graph :
The incidence matrix I(D) = [bij] of digraph D with n vertices
and m edges is the n × m matrix in which.
bij = 1 if arc j is directed away from vertex vi
= – 1 if arc j is directed towards vertex vi
= 0 otherwise.
2. Linked representation :
a. In linked representation, two node structures are used :
i. For non-weighted graph,
INFO Adj-list
ii. For weighted graph,
Weight INFO Adj-list
Where Adj-list is the adjacency list i.e., the list of vertices which are
adjacent for the corresponding node.
b. The header nodes in each list maintain a list of all adjacent vertices
of that node for which the header node is meant.
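A minimal C sketch of the linked (adjacency-list) representation described above is given below (the structure and function names are assumptions for illustration) :
#include <stdlib.h>

/* Linked (adjacency-list) representation of a non-weighted graph. */
struct adjnode {                     /* one entry of an Adj-list            */
    int vertex;
    struct adjnode *next;
};

struct graph {
    int n;                           /* number of vertices                  */
    struct adjnode **adj;            /* adj[v] = list of vertices adjacent to v */
};

void add_edge(struct graph *g, int u, int v)   /* directed edge u -> v      */
{
    struct adjnode *node = malloc(sizeof *node);
    node->vertex = v;
    node->next = g->adj[u];          /* insert at the head of u's list      */
    g->adj[u] = node;
}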
Answer
Depth First Search Algorithm :
1. Algorithm starts at a specific vertex S in G, which becomes current
vertex.
2. Then the algorithm traverses the graph along any edge (u, v) incident to the current
vertex u.
3. If the edge (u, v) leads to an already visited vertex v, then we backtrack
to current vertex u.
4. If, on other hand, edge (u, v) leads to an unvisited vertex v, then we go
to v and v becomes our current vertex.
5. We proceed in this manner until we reach to “dead end”. At this point we
start backtracking.
6. The process terminates when backtracking leads back to the start vertex.
7. Edges that lead to a new vertex are called discovery (tree) edges and edges
that lead to an already visited vertex are called back edges.
Algorithm :
In DFS, each vertex v has two timestamps : the first timestamp d[v] i.e.,
discovery time records when v is first discovered i.e., grayed, and the second
timestamp f[v] i.e. finishing time records when the search finishes examining
v’s adjacency list i.e., blacked. For every vertex d[u] < f[u].
DFS(G) :
1. for each vertex u ∈ V[G]
2. do color[u] ← WHITE
3. π[u] ← NIL
4. time ← 0
5. for each vertex u ∈ V[G]
6. do if color[u] = WHITE
7. then DFS-VISIT(u)
DFS-VISIT(u) :
1. color[u] ← GRAY // White vertex u has just been discovered.
2. time ← time + 1
3. d[u] ← time
4. for each v ∈ Adj[u] // Explore edge (u, v)
5. do if color[v] = WHITE
6. then π[v] ← u
7. DFS-VISIT (v)
8. color[u] ← BLACK // Blacken u, it is finished.
9. f[u] ← time ← time + 1
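A compact C sketch of DFS-VISIT over an adjacency matrix is shown below (the global arrays and graph layout are illustrative assumptions; the timestamps follow the pseudocode above) :
#define MAXV 100

int n;                              /* number of vertices                 */
int adj[MAXV][MAXV];                /* adjacency matrix                   */
int color[MAXV];                    /* 0 = WHITE, 1 = GRAY, 2 = BLACK     */
int d[MAXV], f[MAXV], pi[MAXV];
int timer = 0;

void dfs_visit(int u)
{
    color[u] = 1;                   /* u has just been discovered (GRAY)  */
    d[u] = ++timer;                 /* discovery time                     */
    for (int v = 0; v < n; v++)
        if (adj[u][v] && color[v] == 0) {     /* explore edge (u, v)      */
            pi[v] = u;
            dfs_visit(v);
        }
    color[u] = 2;                   /* finished (BLACK)                   */
    f[u] = ++timer;                 /* finishing time                     */
}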
Que 3.6. Explain Breadth First Search (BFS). Give its algorithm.
Answer
Breadth first search :
1. The general idea behind a breadth first search is as follows :
a. First we examine the starting node A.
b. Then, we examine all the neighbours of A, and so on.
2. Naturally, we need to keep track of the neighbours of a node, and we
need to guarantee that no node is processed more than once.
3. This is accomplished by using a queue to hold nodes that are waiting to
be processed.
Algorithm :
BFS (G, s) :
1. for each vertex u ∈ V[G] – {s}
2. do color[u] ← WHITE
3. d[u] ← ∞
4. π[u] ← NIL
5. color[s] ← GRAY
6. d[s] ← 0
7. π[s] ← NIL
8. Q ← ∅
9. ENQUEUE(Q, s)
10. while Q ≠ ∅
11. do u ← DEQUEUE(Q)
12. for each v ∈ Adj[u]
13. do if color[v] = WHITE
14. then color[v] ← GRAY
15. d[v] ← d[u] + 1
16. π[v] ← u
17. ENQUEUE(Q, v)
18. color[u] ← BLACK
Answer
Test-connected (G) :
1. Choose a vertex x
2. Make a list L of vertices reachable from x,
and another list K of vertices to be explored.
3. Initially, L = K = x.
4. while K is non-empty
5. Find and remove some vertex y in K
6. for each edge (y, z)
7. if (z is not in L)
8. Add z to both L and K
9. if L has fewer than n items
10. return disconnected
11. else return connected.
Answer
1. The Strongly Connected Components (SCC) of a directed graph G are
its maximal strongly connected subgraphs.
2. If each strongly connected component is contracted to a single vertex, the
resulting graph is a directed acyclic graph called as condensation of G.
3. Kosaraju’s algorithm is an algorithm to find the strongly connected
components of a directed graph.
Kosaraju’s algorithm :
1. Let G be a directed graph and S be an empty stack.
2. While S does not contain all vertices :
i. Choose an arbitrary vertex v not in S. Perform a depth first search
starting at v.
ii. Each time that depth first search finishes expanding a vertex u,
push u onto S.
3. Reverse the direction of all arcs to obtain the transpose graph.
4. While S is non-empty :
i. Pop the top vertex v from S. Perform a depth first search starting at
v.
ii. The set of visited vertices will give the strongly connected component
containing v; record this and remove all these vertices from the
graph G and the stack S.
Answer
1. The convex hull of a set S of points in the plane is defined as the smallest
convex polygon containing all the points of S.
2. The vertices of the convex hull of a set S of points form a (not necessarily
proper) subset of S.
3. To check whether a particular point p ∈ S is extreme, examine each possible
triplet of points and check whether p lies in the triangle formed by these
three points.
Fig. 3.9.1.
4. If p lies in any triangle then it is not extreme, otherwise it is.
5. We denote the convex hull of S by CH(S). Convex hull is a convex set
because the intersection of convex sets is convex and convex hull is also
a convex closure.
Graham-Scan algorithm :
The procedure GRAHAM-SCAN takes as input a set Q of points, where
|Q| ≥ 3. It calls the function TOP(S), which returns the point on top of stack
S without changing S, and NEXT-TO-TOP(S), which returns the point one
entry below the top of stack S without changing S.
GRAHAM-SCAN(Q)
1. Let p0 be the point in Q with the minimum y-coordinate, or the leftmost
such point in case of a tie.
2. Let <p1, p2, ...., pm> be the remaining points in Q, sorted by polar angle in
counter clockwise order around po (if more than one point has the same
angle remove all but the one that is farthest from po).
3. TOP[S] ← 0
4. PUSH (p0, S)
5. PUSH (p1, S)
6. PUSH (p2, S)
7. for i 3 to m
8. do while the angle formed by points NEXT-To-TOP(S), Top(S), and
pi makes a non left turn.
9. do POP(S)
10. PUSH (pi, S)
11. return S
Time complexity :
The worst case running time of GRAHAM-SCAN is
T(n) = O(n) + O(n log n) + O(1) + O(n) = O(n log n)
where n = |Q|.
Graham’s scan running time depends only on the size of the input; it is
independent of the size of the output.
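The “non-left turn” test of line 8 is normally implemented with a cross product; a hedged C sketch is given below (the point type is an assumed illustration) :
struct point { double x, y; };

/* Cross product of the vectors (p1 - p0) and (p2 - p0).
   > 0 : left turn, < 0 : right turn, = 0 : collinear. */
double cross(struct point p0, struct point p1, struct point p2)
{
    return (p1.x - p0.x) * (p2.y - p0.y) - (p1.y - p0.y) * (p2.x - p0.x);
}

/* Non-left turn test used by GRAHAM-SCAN : pop from the stack while this is true. */
int non_left_turn(struct point a, struct point b, struct point c)
{
    return cross(a, b, c) <= 0;
}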
PART-2
Greedy Methods with Examples such as Optimal
Reliability Allocation.
Questions-Answers
Answer
1. Greedy algorithms are simple and straight forward.
2. Greedy algorithms are shortsighted in their approach in the sense that
they take decisions on the basis of information at hand without worrying
about the effect these decisions may have in the future.
3. Greedy algorithms are easy to invent, easy to implement and most of
the time quite efficient.
4. Many problems cannot be solved correctly by greedy approach.
5. Greedy algorithms are used to solve optimization problems.
Answer
The greedy algorithm consists of four functions :
i. A function that checks whether chosen set of items provide a solution.
ii. A function that checks the feasibility of a set.
iii. The selection function tells which of the candidates are most promising.
iv. An objective function, which does not appear explicitly, gives the value
of a solution.
Structure of greedy algorithm :
i. Initially the set of chosen items is empty i.e., solution set.
ii. At each step
a. Item will be added in a solution set by using selection function.
b. If the set would no longer be feasible
Reject items under consideration (and is never consider again).
c. Else if set is still feasible
add the current item.
Que 3.12. Define activity selection problem and give its solution
by using greedy approach with its correctness.
Answer
1. An activity selection is the problem of scheduling a resource among
several competing activity. Given a set S = {1, 2, …, n} of n activities.
2. Each activity has si a start time, and fi a finish time.
3. If activity i is selected, the resource is occupied in the interval [si, fi). We
say i and j are compatible activities if their start and finish times do not
overlap, i.e., i and j are compatible if si ≥ fj or sj ≥ fi.
4. The activity selection problem is, to select a maximal sized subset of
mutually compatible activities.
Here we maximize the number of activities selected, but if the profit were
proportional to fi – si, this would not maximize the profit.
i   :  1   2   3   4   5   6   7    8    9    10   11
si  :  1   3   0   5   3   5   6    8    8    2    12
fi  :  4   5   6   7   8   9   10   11   12   13   14
Fig. 3.12.1. Timeline of the activities; the mutually compatible set selected is {1, 4, 8, 11}.
Greedy algorithm :
Assume that f1 ≤ f2 ≤ … ≤ fn
Greedy-Activity-Selector (s, f)
1. n ← length[s]
2. A ← {a1}
3. i ← 1
4. for m ← 2 to n
5. do if sm ≥ fi
6. then A ← A ∪ {am}
7. i ← m
8. return A
The algorithm starts with {a1} and checks which activity can be added after it,
updating the “finishing time” and comparing it with each start time.
The activity picked is always the first one that is compatible.
Correctness : Greedy algorithms do not always produce optimal solutions,
but GREEDY-ACTIVITY-SELECTOR does.
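A direct C translation of GREEDY-ACTIVITY-SELECTOR is sketched below (the activities are assumed to be already sorted by finish time, as the pseudocode requires); on the data of Fig. 3.12.1 it prints a1 a4 a8 a11.
#include <stdio.h>

/* s[], f[] : start and finish times sorted by f[]; prints the chosen activities. */
void greedy_activity_selector(const int s[], const int f[], int n)
{
    int i = 0;                          /* last selected activity         */
    printf("a1 ");                      /* activity 1 is always selected  */
    for (int m = 1; m < n; m++)
        if (s[m] >= f[i]) {             /* compatible with the last one   */
            printf("a%d ", m + 1);
            i = m;
        }
    printf("\n");
}

int main(void)
{
    int s[] = {1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12};
    int f[] = {4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14};
    greedy_activity_selector(s, f, 11);      /* prints : a1 a4 a8 a11 */
    return 0;
}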
Answer
Greedy algorithm : Refer Q. 3.10, Page 3–11B, Unit-3.
Greedy approach : Greedy approach works by making the decision that
seems most promising at any moment it never reconsiders this decision,
whatever situation may arise later. Activity selection problem uses greedy
approach.
Algorithm for greedy activity selection : Refer Q. 3.12, Page 3–12B,
Unit-3.
Numerical :
Sorted activities a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11
Starting time 2 1 3 0 5 3 5 6 8 8 12
Finish time 3 4 5 6 7 8 9 10 11 12 14
We select first activity a1
(2, 3)
Check for activity a2
Starting time of a2 < finish time of a1
a2 is not selected
Check for activity a3
Starting time of a3 ≥ finish time of a1
a3 is selected
Check for activity a4
Starting time of a4 < finish time of a3
a4 is not selected
Check for activity a5
Starting time of a5 ≥ finish time of a3
a5 is selected
Check for activity a6
Starting time of a6 < finish time of a5
a6 is not selected
Check for activity a7
Starting time of a7 < finish time of a5
a7 is not selected
Check for activity a8
Starting time of a8 < finish time of a5
a8 is not selected
Check for activity a9
Starting time of a9 ≥ finish time of a5
a9 is selected
Check for activity a10
Starting time of a10 < finish time of a9
a10 is not selected
Check for activity a11
Starting time of a11 ≥ finish time of a9
a11 is selected.
Therefore selected activities are :
a1 : (2, 3)
a3 : (3, 5)
a5 : (5, 7)
a9 : (8, 11)
a11 : (12, 14)
Answer
Greedy algorithm : Refer Q. 3.10, Page 3–11B, Unit-3.
Greedy algorithm defined in two different forms :
Answer
1. An optimization problem is the problem of finding the best solution
from all feasible solutions.
2. Optimization problems can be divided into two categories depending
on whether the variables are continuous or discrete.
3. There is no way in general that one can specify if a greedy algorithm
will solve a particular optimization problem.
4. However if the following properties can be demonstrated, then it is
probable to use greedy algorithm :
a. Greedy choice property : A globally optimal solution can be
arrived at by making a locally optimal greedy choice. That is, when
we are considering which choice to make, we make the choice
that looks best in the current problem, without considering results
from sub-problems. ·
b. Optimal substructure : A problem exhibits optimal substructure
if an optimal solution to the problem contains within it optimal
solutions to sub-problems.
PART-3
Knapsack.
Questions-Answers
Answer
1. The knapsack problem is a problem in combinatorial optimization.
2. Given a set of items, each with a weight and a value, determine the
number of each item to include in a collection so that the total weight is
less than or equal to a given limit and the total value is as large as
possible.
Approach use to solve the problem :
1. In knapsack problem, we have to fill the knapsack of capacity W, with
a given set of items I1, I2... In having weight w1, w2...wn in such a manner
that the total weight of items cannot exceed the capacity of knapsack
and maximum possible value (v) can be obtained.
2. Using branch and bound approach, we have a bound that none of the
items can have total sum more than the capacity of knapsack and must
give maximum possible value.
3. The implicit tree for this problem is a binary tree in which left branch
implies inclusion and right implies exclusion.
4. The upper bound of a node can be calculated as :
ub = v + (W – w) (vi+1 / wi+1)
where v and w are the value and weight accumulated so far.
Answer
Greedy algorithm for the discrete knapsack problem :
1. Compute value/weight ratio vi /wi for all items.
2. Sort the items in non-increasing order of the ratios vi /wi.
3. Repeat until no item is left in sorted list using following steps :
a. If current item fits, use it.
b. Otherwise skip this item, and proceed to next item.
For example : Knapsack problem for the following instance using greedy
approach. The item can be selected or skipped completely.
Item    Weight    Value
1       7         ₹ 49
2       3         ₹ 12
3       4         ₹ 42
4       5         ₹ 30
Consider W = 10.
Solution : This is also called 0/1 knapsack : either we completely select an
item or skip it. First of all we compute the value-to-weight ratio and arrange
the items in non-increasing order of the ratio.
Item    Weight    Value     Value/Weight
3       4         ₹ 42      10.5
1       7         ₹ 49      7
4       5         ₹ 30      6
2       3         ₹ 12      4
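A minimal C sketch of the greedy procedure above, run on this instance with the items pre-sorted by value/weight ratio, is shown below (the variable names are assumptions); it selects items 3 and 4 for a total value of ₹ 72.
#include <stdio.h>

/* Greedy discrete knapsack : items already sorted by value/weight ratio. */
int main(void)
{
    int id[]     = {3, 1, 4, 2};         /* item numbers from the table above */
    int weight[] = {4, 7, 5, 3};
    int value[]  = {42, 49, 30, 12};
    int n = 4, W = 10;

    int load = 0, total = 0;
    for (int i = 0; i < n; i++)
        if (load + weight[i] <= W) {     /* take the item if it still fits */
            load  += weight[i];
            total += value[i];
            printf("take item %d\n", id[i]);
        }
    printf("total value = %d\n", total); /* 42 + 30 = 72 here */
    return 0;
}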
Answer
The 0/1-knapsack problem is defined as follows :
1. Given, a knapsack of capacity c and n items of weights {w1, w2, ..., wn}
and profits {p1, p2, ....., pn}, the objective is to choose a subset of n
objects that fits into the knapsack and that maximizes the total profit.
2. Consider a knapsack (bag) with a capacity of c.
3. We select items from a list of n items.
Answer
We can use 0/1-knapsack problem when the items cannot be divided into
parts and fractional knapsack problem when the items can be divided into
fractions.
First arrange in non-increasing order of value/weight :
Item ID Weight Value Value/Weight
E 10 10 1
B 50 35 0.7
F 10 6 0.6
C 40 20 0.5
A 100 40 0.4
D 20 4 0.2
According to 0/1-knapsack problem, either we select an item or reject. So the
item will be selected according to value per weight.
E is selected W = 10 < 100
B is selected W = 10 + 50
= 60 < 100
F is selected W = 60 + 10
= 70 < 100
C cannot be selected because
W = 70 + 40 = 110 > 100
Hence we select D
W = 70 + 20 = 90 < 100
Total value = 10 + 35 + 6 + 4 = 55
According to fractional knapsack problem, we can select fraction of any item.
E is selected W = 10 < 100
B is selected W = 10 + 50
= 60 < 100
F is selected W = 60 + 10
= 70 < 100
If we select C W = 70 + 40
= 110 > 100
Hence we select a fraction of item C as
fraction of C = (100 – W) / (weight of C) = (100 – 70)/40 = 30/40 = 0.75
So, weight taken from C = 0.75 × 40 = 30
W = 70 + 30 = 100
Total value = 10 + 35 + 6 + 0.75 (20)
= 10 + 35 + 6 + 15 = 66
Que 3.20. Consider the weight and values of item listed below.
Note that there is only one unit of each item. The task is to pick a
subset of these items such that their total weight is no more than
11 kgs and their total value is maximized. Moreover, no item may
be split. The total value of items picked by an optimal algorithm is
denoted by Vopt. A greedy algorithm sorts the items by their value-
to-weight rations in descending order and packs them greedily,
starting from the first item in the ordered list. The total value of
items picked by the greedy algorithm is denoted by V greedy. Find
the value of V opt – V greedy.
Item I1 I2 I3 I4
W 10 7 4 2
V 60 28 20 24
Answer
For Vgreedy :
Item W V
I1 10 60
I2 7 28
I3 4 20
I4 2 24
Arrange the items by V/W ratio in descending order :
Item W V V/W
I4 2 24 12
I1 10 60 6
I3 4 20 5
I2 7 28 4
Total weight W = 11 kg
I4 is picked so W = 11 – 2 = 9 kg
I1 cannot picked 10 > 9
I3 is picked, W = 9 – 4 = 5 kg
I2 cannot be picked 7 > 5
I4 and I3 are picked so
Vgreedy = V(I4) + V(I3) = 24 + 20 = 44
For Vopt : For calculating Vopt we use 0/1 knapsack problem, so only item 1 is
picked. Hence, Vopt = 60
So, Vopt – Vgreedy = 60 – 44 = 16
Answer
N=8
W = {1, 11, 21, 23, 33, 43, 45, 55}
P = {11, 21, 31, 33, 43, 53, 55, 65}
M = 110
Now, arrange the value of Pi in decreasing order
N Wi Pi V i = Wi × P i
1 1 11 11
2 11 21 231
3 21 31 651
4 23 33 759
5 33 43 1419
6 43 53 2279
7 45 55 2475
8 55 65 3575
Now, fill the knapsack according to decreasing value of Pi. First we choose
item N = 1 whose weight is 1.
Then choose item N = 2 whose weight is 11.
Then choose item N = 3 whose weight is 21.
Now, choose item N = 4 whose weight is 23.
Then choose item N = 5 whose weight is 33.
Total weight in knapsack is = 1 + 11 + 21 + 23 + 33 = 89
Now, the next item is N = 6 and its weight is 43, but we can take only 21 because
M = 110.
So, we choose a fractional part of it, i.e.,
the value of the fractional part of N = 6 is
(2279/43) × 21 = 1113
[Knapsack contents : weights 1, 11, 21, 23, 33 and 21 (the fraction of item 6); total = 110.]
Maximum value = 11 + 231 + 651 + 759 + 1419 + 1113 = 4184
Answer
Numerical :
w = {2, 11, 22, 15}
c = 40
p = {11, 21, 31, 33}
Initially,
Item wi pi
I1 2 11
I2 11 21
I3 22 31
I4 15 33
Computing the total value of each item, i.e., wi × pi :
Item    wi    pi    wi × pi
I1      2     11    22
I2      11    21    231
I3      22    31    682
I4      15    33    495
Arranging the items in decreasing order of wi × pi :
Item    wi    pi    wi × pi
I3      22    31    682
I4      15    33    495
I2      11    21    231
I1      2     11    22
Select I3 (weight 22) and I4 (weight 15) : total weight = 22 + 15 = 37,
remaining capacity = 40 – 37 = 3.
The value of the fractional part of I2 is
(231/11) × 3 = 63
Thus, the maximum value is
= 682 + 495 + 63 = 1240
PART-4
Minimum Spanning Trees-Prim’s and Kruskal’s Algorithm, Single.
Questions-Answers
Answer
Spanning tree :
1. A spanning tree of a graph is a subgraph that contains all the vertices
and is a tree.
2. A spanning tree of a connected graph G contains all the vertices and the
edges which connect all the vertices. So, the number of edges will be
one less than the number of nodes.
3. If the graph is not connected, no spanning tree is possible; in particular, a
graph with n vertices and fewer than n – 1 edges cannot have a spanning tree.
4. A graph may have many spanning trees.
Minimum spanning tree :
1. Given a connected weighted graph G, it is often desired to create a
spanning tree T for G such that the sum of the weights of the tree edges
in T is as small as possible.
2. Such a tree is called a minimum spanning tree and represents the
‘‘cheapest’’ way of connecting all the nodes in G.
3. There are number of techniques for creating a minimum spanning tree
for a weighted graph but the most famous methods are Prim’s and
Kruskal’s algorithm.
Answer
i. In this algorithm, we choose an edge of G which has smallest weight
among the edges of G which are not loops.
ii. This algorithm gives an acyclic subgraph T of G and the theorem given
below proves that T is minimal spanning tree of G. Following steps are
required :
Step 1 : Choose e1, an edge of G, such that weight of e1, w(e1) is as small
as possible and e1 is not a loop.
Step 2 : If edges e1, e2, ........., ei have been selected then choose an edge
ei+1 not already chosen such that
i. the induced subgraph
G[{e1 ..... ei+1}] is acyclic and
ii. w(ei+1) is as small as possible
Step 3 : If G has n vertices, stop after n – 1 edges have been chosen.
Otherwise repeat step 2.
If G is a weighted connected graph in which the weights of the edges are all
non-negative numbers, and T is the subgraph of G obtained by Kruskal’s
algorithm, then T is a minimal spanning tree.
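A compact C sketch of Kruskal’s algorithm using a simple union-find structure is given below (the edge-list format and helper names are illustrative assumptions) :
#include <stdlib.h>

struct edge { int u, v, w; };

int parent[100];

int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }

int cmp(const void *a, const void *b)
{
    return ((const struct edge *)a)->w - ((const struct edge *)b)->w;
}

/* Returns the total weight of a minimum spanning tree of an n-vertex graph. */
int kruskal(struct edge e[], int m, int n)
{
    int cost = 0, taken = 0;
    for (int i = 0; i < n; i++) parent[i] = i;
    qsort(e, m, sizeof(struct edge), cmp);        /* step 1 : sort edges by weight */
    for (int i = 0; i < m && taken < n - 1; i++) {
        int ru = find(e[i].u), rv = find(e[i].v);
        if (ru != rv) {                           /* edge keeps the subgraph acyclic */
            parent[ru] = rv;
            cost += e[i].w;
            taken++;
        }
    }
    return cost;
}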
Answer
Spanning tree : Refer Q. 3.23, Page 3–23B, Unit-3.
i. Kruskal’s algorithm : Refer Q. 3.24, Page 3–24B, Unit-3.
ii. Prim’s algorithm :
First it chooses a vertex and then chooses an edge with smallest weight
incident on that vertex. The algorithm involves following steps :
Step 1 : Choose any vertex V1 of G.
Step 2 : Choose an edge e1 = V1V2 of G such that V2 ≠ V1 and e1 has the
smallest weight among the edges of G incident with V1.
Step 3 : If edges e1, e2, ......., ei have been chosen involving end points
V1, V2, ............., Vi+1, choose an edge ei+1 = VjVk with Vj ∈ {V1, ......., Vi+1}
and Vk ∉ {V1, ..........., Vi+1} such that ei+1 has the smallest weight among the
edges of G with precisely one end in {V1, ..............., Vi+1}.
Step 4 :Stop after n – 1 edges have been chosen. Otherwise goto
step 3.
Comparison :
S. No.  Kruskal’s algorithm                              Prim’s algorithm
1.      Kruskal’s algorithm initiates with an edge.      Prim’s algorithm initializes with a node.
2.      Kruskal’s algorithm selects the edges in a       Prim’s algorithm spans from one node
        way that the position of the edge is not         to another.
        based on the last step.
3.      Kruskal’s can be used on disconnected            In Prim’s algorithm, the graph must be
        graphs.                                          a connected graph.
4.      Kruskal’s time complexity in the worst           Prim’s algorithm has a time complexity
        case is O(E log E).                              in the worst case of O(E log V).
Answer
Minimum spanning tree : Refer Q. 3.23, Page 3–23B, Unit-3.
Prim’s algorithm : Refer Q. 3.25, Page 3–24B, Unit-3.
For example :
According to algorithm we choose vertex A from the set {A, B, C, D, E, F, G,
H, I, J}.
4 C
4 B 2 1
A E
4 F
2
1 10
D 3
G 5
6 3
5 4
H J
2 I
3
Now edge with smallest weight incident on A is e = AD
4
B C
4 2
1
A E
4 F
2 3
1 10
D G
5
5 6 4 3
H J I
2 3
Now we look on weight
W(A, B) = 4
W(D, B) = 4 W(D, H) = 5
W(D, J) = 6
We choose e = AB, since it is minimum.
W(D, B) can also be chosen because it has same value.
4
B C
4 2
1
A 4 E
2 F
1 10 3
D G
5
5 6 4 3
H J I
2 3
Again, W(B, C) = 4
W(B, J) = 10
W(D, H) = 5
W(D, J) = 6
We choose e = BC, since it has minimum value.
B 4 C
4 2 1
A 4 E F
1 10 2 3
D G
5 6 4 3 5
H J I
2 3
Now, W(B, J) = 10
W(C, E) = 2
W(C, F) = 1
We choose e = CF, since it has minimum value.
B 4 C
4 2 1
A 4 E F
1 10 2 3
D G
5 6 4 3 5
H J I
2 3
Now, W(C, E) = 2
W(F, G) = 3
W(F, I) = 5
We choose e = CE, since it has minimum value.
B 4 C
4 2 1
A 4 E F
1 10 2 3
D G
5 6 4 3 5
H J I
2 3
W(E, G) = 2
W(F, G) = 3
W(F, I) = 5
We choose e = EG, since it has minimum value.
B 4 C
4 2 1
A 4 F
E
2 3
1 10
D G
5 6 4 3 5
H J I
2 3
W(G, J) = 4
W(G, I) = 3
W(F, I) = 5
We choose e = GI, since it has minimum value.
B 4 C
4 2 1
A 4 F
E
2 3
1 10
D G
5 6 4 3 5
H J I
2 3
W(I, J) = 3
W(G, J) = 4
We choose e = IJ, since it has minimum value.
W(J, H) = 2
Hence, e = JH will be chosen.
The final minimal spanning tree is given as :
B 4 C
4 2 1
A E F
1 2
D G
4
3
H J I
2 3
Fig. 3.26.1.
7 1 6
2
2 4
6 6
3 3
4 5
7
5 6
7
Fig. 3.27.1.
Answer
Minimum spanning tree : Refer Q. 3.23, Page 3–23B, Unit-3.
Kruskal’s algorithm : Refer Q. 3.24, Page 3–24B, Unit-3.
Numerical :
Step 1 : Arrange the edge of graph according to weight in ascending order.
Edge     Weight        Edge     Weight
(1, 3)   2             (3, 2)   6
(4, 6)   3             (1, 7)   7
(2, 5)   4             (3, 5)   7
(3, 6)   5             (5, 6)   7
(3, 4)   6
(4, 1)   6
Step 2 : Now draw the vertices as given in graph,
2 4
3
5 6
Now draw the edge according to the ascending order of weight. If any edge
forms cycle, leave that edge.
2 2 4
3
5 6
Step 4 : Select edge 46
1
2 2 4
3
3
5 6
Step 5 : Select edge 25
1
2 2 4
4 3
3
5 6
Step 6 : Select edge 36
1
2 2 4
4 3
3
5
5 6
Step 7 : Select edge 23
2 2 4
6
4 3
3
5
5 6
All the remaining edges, such as (3, 4), (4, 1), (1, 2), (3, 5) and (5, 6), are rejected
because they would form a cycle.
All the vertices are covered in this tree. So, the final tree with minimum cost
of given graph is
2 2 4
6
4 3
3
5
5 6
Minimum cost = 2 + 3 + 4 + 5 + 6 = 20
Time complexity : Time complexity is O(|E| log |E|).
Answer
Minimum spanning tree : Refer Q. 3.23, Page 3–23B, Unit-3.
Prim’s algorithm : Refer Q. 3.25, Page 3–24B, Unit-3.
Numerical :
6 7
b c d
4 2 9
4
a 11 i 14 e
6
5 7 10
h g f
1 2
Fig. 3.28.2.
Let a be the source node. Select edge (a, b) as distance between edge (a, b)
is minimum.
b
4
a
a
Now, select edge (c, i)
6
b c
4
2
a i
Now, select edge (i, g)
6
b c
4
2
a i
6
g
Now, select edge (g, h)
6
b c
4
2
a i
6
h g
1
Now, select edge (g, f)
6
b c
4
2
a i
6
1 2
h g f
Now, select edge (f, e)
6
b c
4
2
a i e
6
10
1 g 2
h f
h 1 g 2 f e
PART-5
Source Shortest Paths-Dijkstra’s and Bellman-Ford Algorithm.
Questions-Answers
Que 3.29. Prove that the weights on the edge of the connected
undirected graph are distinct then there is a unique minimum
spanning tree. Give an example in this regard. Also discuss prim’s
minimum spanning tree algorithm in detail.
AKTU 2018-19, Marks 07
Answer
Proof :
1. Suppose we have an algorithm that finds an MST (which we will call A) based
on the structure of the graph and the order of the edges when ordered
by weight.
2. Assume the MST A is not unique.
3. Then there is another spanning tree with equal weight, say MST B.
4. Let e1 be an edge that is in A but not in B.
5. Then B ∪ {e1} must contain a cycle, and this cycle includes at least one
edge e2 that is not in A.
6. Since all edge weights are distinct, assume the weight of e1 is less than that of e2.
7. Replacing e2 with e1 in B yields the spanning tree B ∪ {e1} – {e2}, which has
a smaller weight than B.
8. This contradicts the assumption that B is an MST.
So, the MST of an undirected graph with distinct edge weights is unique.
Example :
1
1 7
2 2 4
4 3 3
8 5
5 6
6
Fig. 3.29.1.
Step 1 : Arrange the edge of graph according to weight in ascending
order.
Edge     Weight        Edge     Weight
(1, 2)   1             (1, 4)   7
(1, 3)   2             (3, 5)   8
(4, 6)   3
(2, 5)   4
(3, 6)   5
(5, 6)   6
Step 2 : Now draw the vertices as given in graph,
2 4
3
5 6
Now draw the edge according to the ascending order of weight. If any
edge forms cycle, leave that edge.
Step 3 :
1 1
2 4
3
5 6
Step 4 :
1 1
2 2 4
3
5 6
Step 5 :
1 1
2 2 4
3
3
5 6
Step 6 :
1 1
2 2 4
4 3
3
5 6
Step 7 :
1 1
2 2 4
4 3
3
5
5 6
All the remaining edges, such as (1, 4), (3, 5) and (5, 6), are rejected because they
would form a cycle.
All the vertices are covered in this tree. So, the final tree with minimum
cost of given graph is
1 1
2 2 4
4 3
3
5
5 6
Prim’s algorithm : Refer Q. 3.25, Page 3–24B, Unit-3.
Answer
1. Dijkstra’s algorithm is a greedy algorithm that solves the single source
shortest path problem for a directed graph G = (V, E) with non-negative
edge weights, i.e., we assume that w(u, v) ≥ 0 for each edge (u, v) ∈ E.
2. Dijkstra’s algorithm maintains a set S of vertices whose final shortest
path weights from the source s have already been determined.
3. That is, for all vertices v ∈ S, we have d[v] = δ(s, v).
4. The algorithm repeatedly selects the vertex u ∈ V – S with the minimum
shortest path estimate, inserts u into S, and relaxes all edges leaving u.
5. We maintain a priority queue Q that contains all the vertices in
V – S, keyed by their d values.
6. Graph G is represented by an adjacency list.
7. Dijkstra’s algorithm always chooses the “lightest” or “closest” vertex in V – S to
insert into set S; this is its greedy strategy.
Dijkstra’s algorithm :
DIJKSTRA (G, w, s)
1. INITIALIZE-SINGLE-SOURCE (G, s)
2. S ← ∅
3. Q ← V[G]
4. while Q ≠ ∅
5. do u ← EXTRACT-MIN (Q)
6. S ← S ∪ {u}
7. for each vertex v ∈ Adj[u]
8. do RELAX (u, v, w)
RELAX (u, v, w) :
1. if d[u] + w(u, v) < d[v]
2. then d[v] ← d[u] + w(u, v)
3. π[v] ← u
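A simple array-based C sketch of this procedure, an O(V^2) version without a heap, is given below (the adjacency-matrix layout and the INF convention are assumptions) :
#include <limits.h>

#define MAXV 100
#define INF  (INT_MAX / 2)

/* w[u][v] = edge weight (INF if no edge); fills d[] with shortest distances from s. */
void dijkstra(int n, int w[MAXV][MAXV], int s, int d[MAXV], int pi[MAXV])
{
    int inS[MAXV] = {0};
    for (int v = 0; v < n; v++) { d[v] = INF; pi[v] = -1; }
    d[s] = 0;

    for (int k = 0; k < n; k++) {
        int u = -1;
        for (int v = 0; v < n; v++)           /* EXTRACT-MIN over V - S     */
            if (!inS[v] && (u == -1 || d[v] < d[u]))
                u = v;
        if (u == -1 || d[u] == INF) break;
        inS[u] = 1;                           /* S <- S U {u}               */
        for (int v = 0; v < n; v++)           /* RELAX every edge leaving u */
            if (w[u][v] < INF && d[u] + w[u][v] < d[v]) {
                d[v] = d[u] + w[u][v];
                pi[v] = u;
            }
    }
}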
Que 3.31. Find the shortest path in the below graph from the
source vertex 1 to all other vertices by using Dijkstra’s algorithm.
50
2 3
10
10
1 30 20
100
5 4
60
Answer
Initialize :
50 S:{}
2 3
10 Q: 1 2 3 4 5
10 0
0 1 30 20
100
5 4
60
Extract min (1) :
50 S : {1}
2 3
10 Q: 1 2 3 4 5
10 0
0 1 30 20
100
5 4
60
All edges leaving (1) :
10
50 S : {1}
2 3
10 Q: 1 2 3 4 5
10 0
0 1 30 20
10 30 100
100
5 4
60
100 30
Extract min(2) :
50 60 S : {1, 2}
2 3
10 Q:1 2 3 4 5
10 0
1 30 20
10 60 30 100
100
5 4
60
All edges leaving (2) :
50 60 S : {1, 2}
2 3
10 Q:1 2 3 4 5
10 0
1 30 20
10 60 30 100
100
5 4 60 30 100
60 30
100
Extract min(4) :
10 60
50 S : {1, 2, 4}
2 3
10 Q:1 2 3 4 5
10 0
0 1 30 20
10 60 30 100
100
5 4 60 30 100
60
100 30
All edges leaving (4) :
50 50 S : {1, 2, 4}
2 3 Q:1 2 3 4 5
10
0
10
1 30 20 10 60 30 100
100 60 30 100
5 4 50
60
100 30
Extract min(3) :
10 S : {1, 2, 4, 3}
50
2 3 50 Q:1 2 3 4 5
10
0
10
0 1 30 20 10 30 100
100 60 30 100
5 4 30
60 50
100
Shortest path
2 3
10
1 30 10 20
5 4
Answer
1. Bellman-Ford algorithm finds all shortest path lengths from a source
s ∈ V to all v ∈ V or determines that a negative-weight cycle exists.
2. Bellman-Ford algorithm solves the single source shortest path problem
in the general case in which edges of a given digraph G can have
negative weight as long as G contains no negative cycles.
3. This algorithm uses the technique of edge relaxation but does not use
the greedy method.
4. The algorithm returns boolean TRUE if the given digraph contains no
negative cycles that are reachable from the source vertex, otherwise it
returns boolean FALSE.
Bellman-Ford (G, w, s) :
1. INITIALIZE-SINGLE-SOURCE (G, s)
2. for i ← 1 to |V[G]| – 1
3. do for each edge (u, v) in E[G]
4. do RELAX (u, v, w)
5. for each edge (u, v) in E[G]
6. do if d[u] + w(u, v) < d[v]
7. then return FALSE
8. return TRUE
RELAX (u, v, w) :
1. if d[u] + w(u, v) < d[v]
2. then d[v] ← d[u] + w(u, v)
3. π[v] ← u
If Bellman-Ford returns TRUE, the predecessor pointers π form a shortest path
tree; otherwise there exists a negative weight cycle reachable from the source.
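A hedged C sketch of BELLMAN-FORD over an edge list is shown below (the edge structure and the INF convention are illustrative assumptions) :
#include <limits.h>

#define INF (INT_MAX / 2)

struct edge { int u, v, w; };

/* Returns 1 (TRUE) if no negative cycle is reachable from s, else 0 (FALSE). */
int bellman_ford(int n, struct edge e[], int m, int s, int d[], int pi[])
{
    for (int v = 0; v < n; v++) { d[v] = INF; pi[v] = -1; }
    d[s] = 0;

    for (int i = 1; i <= n - 1; i++)          /* |V| - 1 relaxation passes */
        for (int j = 0; j < m; j++)
            if (d[e[j].u] < INF && d[e[j].u] + e[j].w < d[e[j].v]) {  /* RELAX */
                d[e[j].v] = d[e[j].u] + e[j].w;
                pi[e[j].v] = e[j].u;
            }

    for (int j = 0; j < m; j++)               /* check for a negative cycle */
        if (d[e[j].u] < INF && d[e[j].u] + e[j].w < d[e[j].v])
            return 0;
    return 1;
}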
Answer
Dijkstra’s algorithm fails to find shortest paths when the graph contains
negative edge weights.
Bellman-Ford algorithm fails to find shortest paths when the graph contains a
negative weight cycle.
No, Bellman-Ford cannot detect all negative weight cycles in a graph; it detects
only those that are reachable from the source vertex.
Numerical :
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G
2 C 11
D E 1
7
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G
1
2 C 11
D E 1
7
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G
2 C 11
D E 1
7 12
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G 13
2 C 11
D E 1
7 12
15
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G 13
2 C 11
D E 1
7 12
15 18
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G 13
2 C 11
D E 1
7
4 Dynamic Programming
and Backtracking
CONTENTS
Part-1 : Dynamic Programming with .................... 4–2B to 4–8B
Examples such as Knapsack
Questions-Answers
Answer
1. Dynamic programming is a stage-wise search method suitable for
optimization problems whose solutions may be viewed as the result of a
sequence of decisions.
2. It is used when the sub-problems are not independent.
3. Dynamic programming takes advantage of the duplication and arranges
to solve each sub-problem only once, saving the solution (in table or
something) for later use.
4. Dynamic programming can be thought of as being the reverse of
recursion. Recursion is a top-down mechanism i.e., we take a problem,
split it up, and solve the smaller problems that are created. Dynamic
programming is a bottom-up mechanism i.e., we solve all possible small
problems and then combine them to obtain solutions for bigger problems.
Difference :
1. In recursion, sub-problems are solved multiple times but in dynamic
programming sub-problems are solved only one time.
2. Recursion is slower than dynamic programming.
For example :
Consider the example of calculating nth Fibonacci number.
fibo(n) = fibo(n – 1) + fibo(n – 2)
fibo(n – 1) = fibo(n – 2) + fibo(n – 3)
fibo(n – 2) = fibo(n – 3) + fibo(n – 4)
.................................
................................
................................
fibo(2) = fibo(1) + fibo(0)
In the first three steps, it can be clearly seen that fibo(n – 3) is calculated
twice. If we use recursion, we calculate the same sub-problems again
and again but with dynamic programming we calculate the sub-problems
only once.
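A small C sketch contrasting the two approaches (plain recursion versus a memo table that stores each sub-problem once) :
long long memo[91];          /* memo[i] = fibo(i); 0 means "not computed yet" */

long long fibo(int n)        /* plain recursion : exponential time            */
{
    return n <= 1 ? n : fibo(n - 1) + fibo(n - 2);
}

long long fibo_dp(int n)     /* dynamic programming : each sub-problem once   */
{
    if (n <= 1) return n;
    if (memo[n] == 0)
        memo[n] = fibo_dp(n - 1) + fibo_dp(n - 2);
    return memo[n];
}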
Answer
Principle of optimality : Principle of optimality states that whatever the
initial state is, remaining decisions must be optimal with regard the state
following from the first decision.
Approaches in dynamic programming :
There are two approaches of solving dynamic programming problems :
1. Bottom-up approach : Bottom-up approach simply means storing the
results of certain calculations, which are then re-used later because the
same calculation is a sub-problem in a larger calculation.
2. Top-down approach : Top-down approach involves formulating a
complex calculation as a recursive series of simpler calculations.
Answer
Following are the elements of dynamic programming :
1. Optimal sub-structure : Optimal sub-structure holds if optimal solution
contains optimal solutions to sub-problems. It is often easy to show the
optimal sub-problem property as follows :
i. Split problem into sub-problems.
ii. Sub-problems must be optimal; otherwise the optimal splitting would
not have been optimal.
There is usually a suitable “space” of sub-problems. Some spaces are
more “natural” than others. For matrix chain multiply we choose sub-
problems as sub-chains.
2. Overlapping sub-problem :
i. Overlapping sub-problem is found in those problems where bigger
problems share the same smaller problems. This means, while
solving larger problems through their sub-problems we find the
same sub-problems more than once. In these cases a sub-problem
is usually found to be solved previously.
ii. Overlapping sub-problems can be found in Matrix Chain
Multiplication (MCM) problem.
[Recursion tree of the matrix-chain sub-problems for 1..4, showing repeated
sub-problems such as 1..1, 2..2, 3..3, 4..4 and 2..3.]
Answer
LCS-Length (X, Y) :
1. m ← length[X]
2. n ← length[Y]
3. for i ← 1 to m
4. do c[i, 0] ← 0
5. for j ← 0 to n
6. do c[0, j] ← 0
7. for i ← 1 to m
8. do for j ← 1 to n
9. do if xi = yj
10. then c[i, j] ← c[i – 1, j – 1] + 1
11. b[i, j] ← “↖”
12. else if c[i – 1, j] ≥ c[i, j – 1]
13. then c[i, j] ← c[i – 1, j]
14. b[i, j] ← “↑”
15. else c[i, j] ← c[i, j – 1]
16. b[i, j] ← “←”
Answer
Dynamic 0/1-knapsack(v, w, n, W) :
1. for (w = 0 to W) V[0, w] = 0
2. for (i = 1 to n)
3. for (w = 0 to W)
4. if (w[i] ≤ w) then
5. V[i, w] = max{V[i – 1, w], v[i] + V[i – 1, w – w[i]]};
6. else V[i, w] = V[i – 1, w];
7. return V[n, W];
Now, as we know that V[n, W] is the total value of the selected items that can
be placed in the knapsack, the following steps are used repeatedly to recover
the actual knapsack items.
Let i = n and k = W, then
while (i > 0 and k > 0)
{
if (V[i, k] ≠ V[i – 1, k]) then
mark the ith item as in the knapsack
i = i – 1 and k = k – wi // selection of ith item
else
i = i – 1 // do not select ith item
}
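A C sketch of this table-filling procedure is given below (array bounds are assumptions; indices are 1-based as in the pseudocode). On the instance of Que. 4.6 below (P = <1, 6, 18, 22, 28>, w = <1, 2, 5, 6, 7>, capacity 10) it evaluates to 35.
#define MAXN 100
#define MAXW 1000

int V[MAXN + 1][MAXW + 1];

/* w[1..n], v[1..n] : weights and values; returns the best value for capacity W. */
int knapsack01(int n, int W, const int w[], const int v[])
{
    for (int j = 0; j <= W; j++) V[0][j] = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= W; j++) {
            V[i][j] = V[i - 1][j];                      /* skip item i */
            if (w[i] <= j && v[i] + V[i - 1][j - w[i]] > V[i][j])
                V[i][j] = v[i] + V[i - 1][j - w[i]];    /* take item i */
        }
    return V[n][W];
}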
Que 4.6. Differentiate between dynamic programming and greedy
approach. What is 0/1 knapsack problem ? Solve the following
instance using dynamic programming. Write the algorithm also.
Knapsack Capacity = 10, P = < 1, 6, 18, 22, 28 > and w = < 1, 2, 5, 6, 7>.
Answer
Answer
Knapsack problem with res pect to dynamic programming
approach : Refer Q. 4.5, Page 4–5B, Unit-4.
Numerical :
w = {5, 10, 15, 20}
W = 25
v = {50, 60, 120, 100}
Initially,
Item wi vi
I1 5 50
I2 10 60
I3 15 120
I4 20 100
Select I1 (weight 5) and I3 (weight 15) : total weight = 5 + 15 = 20,
remaining capacity = 25 – 20 = 5.
The value of the fractional part of I2 is
(60/10) × 5 = 30
Thus, the maximum value is
= 50 + 120 + 30 = 200
Answer
PART-2
All Pair Shortest Paths : Warshall’s and Floyd’s Algorithm,
Resource Allocation Problem.
Questions-Answers
Answer
1. Floyd-Warshall algorithm is a graph analysis algorithm for finding
shortest paths in a weighted, directed graph.
2. A single execution of the algorithm will find the shortest path between
all pairs of vertices.
3. It does so in Θ(V^3) time, where V is the number of vertices in the graph.
4. Negative-weight edges may be present, but we shall assume that there
are no negative-weight cycles.
5. The algorithm considers the “intermediate” vertices of a shortest path,
where an intermediate vertex of a simple path p = (v1, v2, ..., vm) is any
vertex of p other than v1 or vm, that is, any vertex in the set {v2, v3, ...,
vm–1}.
6. Let the vertices of G be V = {1, 2, ..., n}, and consider a subset {1, 2, ...,
k} of vertices for some k.
7. For any pair of vertices i, j V, consider all paths from i to j whose
intermediate vertices are all drawn from {1, 2, ..., k}, and let p be a
minimum-weight path from among them.
8. Let dij(k) be the weight of a shortest path from vertex i to vertex j with
all intermediate vertices in the set {1, 2, ..., k}.
A recursive definition is given by
dij(k) = wij                                         if k = 0
dij(k) = min( dij(k–1), dik(k–1) + dkj(k–1) )        if k ≥ 1
Floyd-Warshall (W) :
1. n ← rows[W]
2. D(0) ← W
3. for k ← 1 to n
4. do for i ← 1 to n
5. do for j ← 1 to n
6. do dij(k) ← min( dij(k–1), dik(k–1) + dkj(k–1) )
7. return D(n)
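A direct C sketch of this triple loop is shown below (INF marks the absence of an edge; the matrix size is an assumption) :
#include <limits.h>

#define MAXV 50
#define INF  (INT_MAX / 2)

/* d[][] initially holds the weight matrix W (INF where there is no edge,
   0 on the diagonal); on return d[i][j] is the shortest i -> j distance. */
void floyd_warshall(int n, int d[MAXV][MAXV])
{
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
}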
Que 4.10. Define Floyd Warshall algorithm for all pair shortest
path and apply the same on following graph :
Fig. 4.10.1. (weighted directed graph on the vertices 1 to 5)
AKTU 2019-20, Marks 07
Answer
Floyd Warshall algorithm : Refer Q. 4.9, Page 4–9B, Unit-4.
Numerical :
dij(k) = min( dij(k–1), dik(k–1) + dkj(k–1) )
1 2 3 4 5
1 0 6 3
2 3 0
D(0) =
3 0 2
4 1 1 0
5 4 2 0
1 2 3 4 5
1 0 4 6 3
2 3 0 9 6
D(1) =
3 6 4 0 2
4 4 1 1 0
5 7 4 3 2 0
1 2 3 4 5
1 0 4 6 3
2 3 0 7 6
D(2) = 3 6 3 0 2
4 4 1 1 0
5 6 3 3 2 0
Now, if we find D(3), D(4) and D(5) there will be no change in the entries.
Answer
Floyd-Warshall algorithm : Refer Q. 4.9, Page 4–9B, Unit-4.
Time complexity of Floyd-Warshall algorithm is O(n³).
Example :
Fig. 4.11.1. (directed graph on vertices 1–5 with edge weights :
(1, 2) = 3, (1, 3) = 8, (1, 5) = –4, (2, 4) = 1, (2, 5) = 7,
(3, 2) = 4, (4, 1) = 2, (4, 3) = –5, (5, 4) = 6)
D(0) =   0    3    8    ∞   –4        Π(0) = NIL   1    1   NIL   1
         ∞    0    ∞    1    7               NIL  NIL  NIL   2    2
         ∞    4    0    ∞    ∞               NIL   3   NIL  NIL  NIL
         2    ∞   –5    0    ∞                4   NIL   4   NIL  NIL
         ∞    ∞    ∞    6    0               NIL  NIL  NIL   5   NIL

D(1) =   0    3    8    ∞   –4        Π(1) = NIL   1    1   NIL   1
         ∞    0    ∞    1    7               NIL  NIL  NIL   2    2
         ∞    4    0    ∞    ∞               NIL   3   NIL  NIL  NIL
         2    5   –5    0   –2                4    1    4   NIL   1
         ∞    ∞    ∞    6    0               NIL  NIL  NIL   5   NIL

D(2) =   0    3    8    4   –4        Π(2) = NIL   1    1    2    1
         ∞    0    ∞    1    7               NIL  NIL  NIL   2    2
         ∞    4    0    5   11               NIL   3   NIL   2    2
         2    5   –5    0   –2                4    1    4   NIL   1
         ∞    ∞    ∞    6    0               NIL  NIL  NIL   5   NIL

D(3) =   0    3    8    4   –4        Π(3) = NIL   1    1    2    1
         ∞    0    ∞    1    7               NIL  NIL  NIL   2    2
         ∞    4    0    5   11               NIL   3   NIL   2    2
         2   –1   –5    0   –2                4    3    4   NIL   1
         ∞    ∞    ∞    6    0               NIL  NIL  NIL   5   NIL

D(4) =   0    3   –1    4   –4        Π(4) = NIL   1    4    2    1
         3    0   –4    1   –1                4   NIL   4    2    1
         7    4    0    5    3                4    3   NIL   2    1
         2   –1   –5    0   –2                4    3    4   NIL   1
         8    5    1    6    0                4    3    4    5   NIL

D(5) =   0    1   –3    2   –4        Π(5) = NIL   3    4    5    1
         3    0   –4    1   –1                4   NIL   4    2    1
         7    4    0    5    3                4    3   NIL   2    1
         2   –1   –5    0   –2                4    3    4   NIL   1
         8    5    1    6    0                4    3    4    5   NIL
PART-3
Backtracking, Branch and Bound with Examples such as
Travelling Salesman Problem.
Questions-Answers
Answer
1. Backtracking is a general algorithm for finding all solutions to some
computational problems.
2. Backtracking is an important tool for solving constraint satisfaction
problems, such as crosswords, verbal arithmetic, and many other
puzzles.
3. It is often the most convenient (if not the most efficient) technique for
parsing, for the knapsack problem and other combinatorial optimization
problems.
4. It can be applied only for problems which admit the concept of a ‘‘partial
candidate solution’’ and a relatively quick test of whether it can possibly
be completed to a valid solution.
Iterative backtracking algorithm :
algorithm ibacktrack (n)
// Iterative backtracking process
// All solutions are generated in x[1 : n] and printed as soon as they are
found
{
k = 1;
while (k != 0)
{
if ( there remains an untried x[k] in T(x[1], x[2], ..., x[k – 1])
and B_k(x[1], ..., x[k] ) is true )
{
if (x[1], ..., x[k] is a path to an answer node )
write (x[1 : k]);
k = k + 1; // Consider the next set
}
else
k = k – 1; // Backtrack to the previous set
}
}
Answer
Travelling Salesman Problem (TSP) :
Travelling salesman problem is the problem to find the shortest possible
route for a given set of cities and distance between the pair of cities that
visits every city exactly once and returns to the starting point.
Backtracking approach is used to solve TSP problem.
Backtracking algorithm for the TSP :
1. Let G be the given complete graph with positive weights on its edges.
2. Use a search tree that generates all permutations of V = {1 ... n},
specifically the one illustrated in Fig. 4.13.1 for the case n = 3.
Fig. 4.13.1. (search tree generating all permutations of {1, 2, 3} : the root
branches on the first city, each child on the next unused city, and so on
down to the leaves)
3. A node at depth i of this tree (the root is at depth 0) stores an
i-permutation of {1, ..., n}. A leaf stores a permutation of {1, ..., n}, which
is equivalent to saying that it stores a particular Hamiltonian cycle (tour)
of G.
4. For the travelling salesman problem, we will not do any static pruning
on this tree, we will do dynamic pruning, during the search.
Proof :
1. At some point during the search, let v be a non-leaf node of this tree that
is just being visited, and let w be the weight of the shortest tour found to
this point.
2. Let (π1, π2, ..., πk) be the k-permutation of {1 ... n} stored at v.
Answer
For solving the travelling salesman problem we represent the solution space
by a state space tree. We define three cost functions c, l and u where
l(i) ≤ c(i) ≤ u(i) for all the nodes i of the state space tree.
Step 1 : First find the cost matrix from the given costs on the edges of the graph.
Step 2 : Now, find the reduced matrix by subtracting the smallest element
of row i (column j) from every entry of row i (column j), which introduces a
zero into row i (column j). Repeating this process as often as needed, the
cost matrix can be reduced.
Add the total amount subtracted from the columns and rows and make this
the cost of the root of the state space tree.
Step 3 : Let M be the reduced cost matrix for node A and let B be a child of
A such that the tree edge (A, B) corresponds to inclusion of edge (i, j) in the
tour. If B is not a leaf, then the reduced cost matrix for B may be obtained
by applying the following steps :
a. Change all the entries in row i and column j of M to ∞. This excludes
the use of any more edges leaving vertex i or entering vertex j.
b. Set M(j, 1) = ∞. This excludes the use of the edge (j, 1).
c. Reduce all the rows and columns in the resulting matrix except
for rows and columns containing only ∞.
Suppose T is the total amount subtracted in step (c), then
l(B) = l(A) + M(i, j) + T
For leaf nodes, l = c is easily computed as each leaf defines a unique tour.
For the upper bound u, we assume u(i) = ∞ for all nodes i.
Step 4 : Find the cost of the root node by combining the total amounts subtracted
from the cost matrix while obtaining the reduced cost matrix M.
Answer
Travelling salesman problem : Refer Q. 4.13, Page 4–13B, Unit-4.
Numerical :
                  ∞   20   30   10   11
                 15    ∞   16    4    2
Cost matrix =     3    5    ∞    2    4
                 19    6    6    ∞    3
                 16    4    7   16    ∞
1. Reduce each row and column by subtracting the minimum value from
each element in that row and column.
      After row reduction                 After column reduction (= M)
    ∞   10   20    0    1               ∞   10   17    0    1
   13    ∞   14    2    0              12    ∞   11    2    0
    1    3    ∞    0    2               0    3    ∞    0    2
   16    3    3    ∞    0              15    3    0    ∞    0
   12    0    3   12    ∞              11    0    0   12    ∞
2. So, total expected cost is : 10 + 2 + 2 + 3 + 4 + 1 + 3 = 25.
3. We have discovered the root node V1 so the next node to be expanded
will be V2, V3, V4, V5. Obtain cost of expanding using cost matrix for node
2.
4. Change all the elements in 1st row and 2nd column.
12 11 2 0
M2 = 0 0 2
15 0 0
11 0 12
5. Now, reducing M2 in rows and columns, we get :
12 11 2 0
M2 = 0 0 2
15 0 0
11 0 12
Total cost for M2 = 25 + 10 + 0 = 35
6. Similarly, for node 3, we have :
12 2 0
M3 = 0 3 0 2
15 3 0
11 0 12
7. Now, reducing M3, we get :
12 2 0
M3 = 0 3 0 2
15 3 0
11 0 12
Total cost for M3 = 25 + 17 + 0 = 42
8. Similarly, for node 4, we have :
12 11 0
M4 = 0 3 2
15 3 0 0
11 0 0
9. Now, reducing M4, we get :
12 11 0
M4 = 0 3 2
15 3 0 0
11 0 0
Total cost = 25 + 0 + 0 = 25
10. Similarly, for node 5, we have :
12 11 2
M5 = 0 3 0
15 3 0
11 0 0 12
11. Now, reducing M5, we get :
10 9 0
M5 = 0 3 0
15 3 0
11 0 0 12
Total cost = 25 + 1 + 2 = 28
1 V1 = 25
2 3 4 5 V 5 = 28
V2 = 35 V3 = 42 V4 = 25
Fig. 4.15.1.
12. Now, the promising node is V4 = 25. Now, we can expand V2, V3 and V5.
Now, the input matrix will be M4.
13. Change all the elements in 4th row and 2nd column.
12 11 0
M6 = 0 2
11 0
14. On reducing M6, we get :
12 11 0
M6 = 0 2
11 0
Total cost = 25 + 3 + 0 = 28
15. Similarly, for node 3, we have :
12 0
M7 = 0 3 2
3 0
11 0
16. On reducing M7, we get :
12 0
M7 = 0 3 2
3 0
11 0
Total cost = 25 + 0 + 0 = 25
17. Similarly, for node 5, we have :
12 11
M8 = 0 3
11 0 0
18. On reducing M8, we get :
1 0
M8 = 0 3
11 0 0
Total cost = 25 + 0 + 11 = 36
2 3 4 5
2 3 5
V2 = 28 V 3 = 25 V 5 = 36
Fig. 4.15.2.
19. Now, promising node is V3 = 25. Now, we can expand V2 and V5. Now,
the input matrix will be M7.
20. Change all the elements in 3rd row and 2nd column.
12 0
M9 =
0
11
21. On reducing M9, we get :
1 0
M9 =
0
0
Total cost = 25 + 3 + 0 = 28
22. Similarly, for node 5, we have :
12
M10 =
3
11 0
23. On reducing M10, we get :
0
M10 =
0
11 0
Total cost = 25 + 2 + 12 + 3 = 42.
1
2 3 4 5
2 3 5
2 5
V 2 = 28 V 5 = 42
Fig. 4.15.3.
24. Here V2 is the most promising node so next we are going to expand this
node further. Now, we are left with only one node not yet traversed
which is V5.
10 6 5 2 16
V1 V4 V3 V2 V5 V1
So, total cost of traversing the graph is :
10 + 6 + 5 + 2 + 16 = 39
PART-4
Graph Colouring, N-Queen Problem.
Questions-Answers
Answer
1. Graph colouring is a simple way of labelling graph components such as
vertices, edges, and regions under some constraints.
2. In a graph, no two adjacent vertices, adjacent edges, or adjacent regions
are coloured with the same colour, and the minimum number of colours
needed to do this is called the chromatic number; such a graph is called a
properly coloured graph.
3. While graph colouring, the constraints that are set on the graph are
colours, order of colouring, the way of assigning colour, etc.
4. A colouring is given to a vertex or a particular region. Thus, the vertices
or regions having same colours form independent sets.
For example :
Blue
Yellow Red
Blue Yellow
Fig. 4.16.1. Properly coloured graph.
Answer
1. In N-Queens problem, the idea is to place queens one by one in different
columns, starting from the leftmost column.
2. When we place a queen in a column, we check for clashes with already
placed queens.
3. In the current column, if we find a row for which there is no clash, we
mark this row and column as part of the solution.
4. If we do not find such a row due to clashes then we backtrack and
return false.
Procedure for solving N-Queens problem :
1. Start from the leftmost column.
2. If all queens are placed return true.
3. Try all rows in the current column. Do following for every tried row :
a. If the queen can be placed safely in this row then mark this [row,
column] as part of the solution and recursively check if placing
queen here leads to a solution.
b. If placing queen in [row, column] leads to a solution then return
true.
c. If placing queen does not lead to a solution then unmark this [row,
column] (backtrack) and go to step (a) to try other rows.
4. If all rows have been tried and nothing worked, return false to trigger
backtracking.
Algorithm/pseudocode for N-Queens problem :
N-Queens are to be placed on an n × n chessboard so that no two attack i.e.,
no two Queens are on the same row, column or diagonal.
PLACE (k, i)
1. for j ← 1 to k – 1
2.    do if (x[j] = i) or (Abs(x[j] – i) = Abs(j – k))
3.       then return false
4. return true
Place (k, i) returns true if a queen can be placed in the kth row and ith column,
otherwise it returns false.
x[ ] is a global array whose first k – 1 values have been set. Abs(r) returns the
absolute value of r.
N-Queens (k, n)
1. for i ← 1 to n
2.    do if PLACE (k, i)
3.       then x[k] ← i
4.          if k = n, then print x[1 .... n]
5.          else N-Queens (k + 1, n)
[Note : For the 8-Queens problem put n = 8 in the algorithm.]
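The PLACE test and the recursive procedure translate directly into Python (an illustrative sketch with our own names; rows and columns are 1-indexed as in the pseudocode). Calling solve_n_queens(4) returns the two placements <2, 4, 1, 3> and <3, 1, 4, 2> discussed in the next answer.

def solve_n_queens(n):
    x = [0] * (n + 1)              # x[k] = column of the queen placed in row k
    solutions = []

    def place(k, i):
        # True if a queen can go in row k, column i given rows 1 .. k-1.
        for j in range(1, k):
            if x[j] == i or abs(x[j] - i) == abs(j - k):
                return False
        return True

    def n_queens(k):
        for i in range(1, n + 1):
            if place(k, i):
                x[k] = i
                if k == n:
                    solutions.append(x[1:])    # record a complete placement
                else:
                    n_queens(k + 1)

    n_queens(1)
    return solutions

# Example: solve_n_queens(4) -> [[2, 4, 1, 3], [3, 1, 4, 2]]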
Answer
Algorithm for N-Queens problem : Refer Q. 4.17, Page 4–21B, Unit-4.
4-Queens problem :
1. Suppose we have 4 × 4 chessboard with 4-queens each to be placed in
non-attacking position.
1 2 3 4
1
2
3
4
Fig. 4.18.1.
2. Now, we will place each queen on a different row such that no two
queens attack each other.
3. We place the queen q1 in the very first acceptable position (1, 1).
4. Now if we place queen q2 in columns 1 or 2, a dead end is
encountered.
5. Thus, the first acceptable position for queen q2 is column 3 i.e., (2, 3) but
then no position is left for placing queen q3 safely. So, we backtrack one
step and place the queen q2 in (2, 4).
6. Now, we obtain the position for placing queen q3 which is (3, 2). But later
this position leads to a dead end, as no place is found where queen q4 can
be placed safely.
1 2 3 4
1 q1
2 q2
3 q3
Fig. 4.18.2.
7. Then we have to backtrack till queen q1 and place it to (1, 2) and then all
the other queens are placed safely by moving queen q2 to (2, 4), queen q3
to (3, 1) and queen q4 to (4, 3) i.e., we get the solution < 2, 4, 1, 3>. This
is one possible solution for 4-queens problem.
1 2 3 4
1 q1
2 q2
3 q3
4 q4
Fig. 4.18.3.
8. For other possible solution the whole method is repeated for all partial
solutions. The other solution for 4-queens problem is <3, 1, 4, 2> i.e.,
1 2 3 4
1 q1
2 q
2
3 q3
4 q4
Fig. 4.18.4.
9. Now, the implicit tree for 4-queen for solution <2, 4, 1, 3> is as follows :
10. Fig. 4.18.5 shows the complete state space for 4-queens problem. But we
can use backtracking method to generate the necessary node and stop
if next node violates the rule i.e., if two queens are attacking.
Fig. 4.18.5. (implicit state space tree for the 4-queens problem ; branches
marked × are abandoned as soon as two queens attack each other)
Answer
Branch and bound technique :
1. It is a systematic method for solving optimization problems.
2. It is used where backtracking and greedy method fails.
3. It is sometimes also known as best first search.
4. In this approach we calculate bound at each stage and check whether it
is able to give answer or not.
5. Branch and bound procedure requires two tools :
a. The first one is a way of covering the feasible region by several
smaller feasible sub-regions. This is known as branching.
b. Another tool is bounding, which is a fast way of finding upper and
lower bounds for the optimal solution within a feasible sub-region.
Solution to 4-Queens problem :
Basically, we have to ensure 4 things :
1. No two queens share a column.
2. No two queens share a row.
3. No two queens share a top-right to left-bottom diagonal.
4. No two queens share a top-left to bottom-right diagonal.
Number 1 is automatic because of the way we store the solution. For numbers
2, 3 and 4, we can perform the updates in O(1) time; the whole update
process is shown in Fig. 4.19.1.
Fig. 4.19.1. (step-by-step placement of the four queens on the 4 × 4 board,
showing the partial placements explored before the final solution)
PART-5
Hamiltonian Cycles and Sum of Subsets.
Questions-Answers
Answer
Backtracking : Refer Q. 4.12, Page 4–13B, Unit-4.
Sum of subset problem with example :
In the subset-sum problem we have to find a subset s of the given set
S = (S1, S2, S3, ......, Sn), where the elements of the set S are n positive integers,
in such a manner that s ⊆ S and the sum of the elements of subset ‘s’ is equal
to some positive integer ‘X’.
Algorithm for subset-sum problem :
Subset-Sum (S, t)
1. C ← ∅
2. Z ← S
3. K ← 0
4. t1 ← t
5. while (Z ≠ ∅) do
6.    K ← max(Z)
7.    if (K ≤ t1) then
8.       Z ← Z – {K}
9.       t1 ← t1 – K
10.      C ← C ∪ {K}
11.   else Z ← Z – {K}
12. print C      // subset of S whose sum is equal to t
This procedure selects those elements of S whose sum is equal to t. Each
time the maximum element of Z is found; if it is not larger than the remaining
target t1, then this element is moved into C and subtracted from t1, otherwise
it is simply removed from Z.
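A Python sketch of this selection procedure is given below (illustrative only; the name subset_sum_greedy is ours, and the test K ≤ remaining target is used so that an exact fit is accepted). Being greedy, it is only a heuristic — when it fails, the backtracking search of Que 4.21 is still required.

def subset_sum_greedy(S, t):
    # Repeatedly take the largest remaining element that still fits the target.
    Z = sorted(S)                 # candidate elements
    C = []                        # chosen subset
    remaining = t
    while Z and remaining > 0:
        K = Z.pop()               # max(Z)
        if K <= remaining:        # select K
            C.append(K)
            remaining -= K
        # otherwise K is simply discarded
    return C if remaining == 0 else None   # None: this greedy pass found no subset

# Example: subset_sum_greedy([1, 2, 5, 7, 8, 10, 15, 20, 25], 35) -> [25, 10]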
For example :
Given S = <1, 2, 5, 7, 8, 10, 15, 20, 25> & m = 35
Z ← S, t1 ← m = 35
K ← max[Z] = 25
K ≤ t1, so 25 is selected :
C = {25}, Z = <1, 2, 5, 7, 8, 10, 15, 20>
New t1 = t1(old) – K = 35 – 25 = 10
In the next step,
K ← max[Z] = 20
K > t1, so 20 is discarded :
Z = <1, 2, 5, 7, 8, 10, 15>
In the next step,
K ← max[Z] = 15
K > t1, so 15 is discarded :
Z = <1, 2, 5, 7, 8, 10>
In the next step,
K ← max[Z] = 10
K ≤ t1, so 10 is selected :
C = {25, 10}
New t1 = 10 – 10 = 0
The remaining target is now 0, so the procedure stops and prints
C = {25, 10}, whose sum is 25 + 10 = 35 = m.
Que 4.21. Solve the subset sum problem using backtracking, where
Answer
n=4
m = 18
w{4} = {5, 10, 8, 13}
Sorted order : w{4} = {5, 8, 10, 13}
Now we construct the state-space tree (I = include the next weight,
E = exclude it). Starting from sum 0, each level decides about the next
weight in the sorted order {5, 8, 10, 13}; a branch is killed (×) as soon as its
partial sum exceeds m = 18 or can no longer reach 18. Two leaves reach the
target sum 18 : 5 + 13 = 18 and 8 + 10 = 18.
First Subset S1 = {5, 13}
Similarly, S2 = {8, 10}
Answer
Difference between backtracking and branch and bound technique :
S. No.   Backtracking                                   Branch and bound
1.       It is a methodological way of trying out       It is a systematic method for solving
         various sequences of decisions.                optimization problems.
2.       It is applied in dynamic programming           It is applied where greedy and dynamic
         technique.                                     programming techniques fail.
3.       It is sometimes called depth first search.     It is also known as best first search.
4.       This approach is effective for decision        This approach is effective for
         problems.                                      optimization problems.
5.       This algorithm is simple and easy to           This algorithm is difficult to
         understand.                                    understand.
Fig. 4.23.1. (undirected graph on the vertices a, b, c, d, e, f)
Answer
Hamiltonian circuit problem :
1. Given a graph G = (V, E), we have to find the Hamiltonian circuit using
backtracking approach.
2. We start our search from any arbitrary vertex, say ‘a’. This vertex ‘a’
becomes the root of our implicit tree.
3. The first element of our partial solution is the first intermediate vertex
of the Hamiltonian cycle that is to be constructed.
4. The next adjacent vertex is selected on the basis of alphabetical (or
numerical) order.
5. If at any stage any arbitrary vertex makes a cycle with any vertex other
than vertex ‘a’ then we say that dead end is reached.
6. In this case we backtrack one step, the search begins again by selecting
another vertex, and the element that caused the dead end is removed
from the partial solution.
7. The search using backtracking is successful if a Hamiltonian cycle is
obtained.
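The search described above can be sketched in Python (illustrative only; the adjacency list in the usage comment is our assumed reading of Fig. 4.23.1, since the printed figure is only partly legible) :

def hamiltonian_cycle(adj, start):
    # adj: dict mapping each vertex to its neighbours, listed in the order we
    # want to try them (e.g. alphabetical); start is the root vertex 'a'.
    path = [start]

    def extend():
        if len(path) == len(adj):
            # All vertices used: a cycle exists if the last vertex reaches start.
            return start in adj[path[-1]]
        for v in adj[path[-1]]:
            if v not in path:          # try the next unvisited adjacent vertex
                path.append(v)
                if extend():
                    return True
                path.pop()             # dead end: backtrack
        return False

    return path + [start] if extend() else None

# Assumed adjacency for Fig. 4.23.1 :
# adj = {'a': ['b', 'c', 'd'], 'b': ['a', 'c', 'e'], 'c': ['a', 'b', 'd', 'e'],
#        'd': ['a', 'c', 'e', 'f'], 'e': ['b', 'c', 'd', 'f'], 'f': ['d', 'e']}
# hamiltonian_cycle(adj, 'a') -> ['a', 'b', 'c', 'e', 'f', 'd', 'a']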
Numerical :
1. Firstly, we start our search with vertex ‘a’, this vertex ‘a’ becomes the
root of our implicit tree.
a Root
2. Next, we choose vertex ‘b’ adjacent to ‘a’ as it comes first in lexicographical
order (b, c, d).
a Root
b c d
3. Next, we select ‘c’ adjacent to ‘b’
a Root
b c d
c e
a Root
b c d
c e
d e
a Root
b c d
c e
d e
e f
6. Next, we select vertex ‘f ’ adjacent to ‘e’. The vertex adjacent to ‘f ’ are ‘d’
and ‘e’ but they have already visited. Thus, we get the dead end and we
backtrack one step and remove the vertex ‘f ’ from partial solution.
a Root
b c d
c e
d e
e f
f
Dead end
7. From backtracking, the vertex adjacent to ‘e’ are ‘b’, ‘c’, ‘d’, ‘f ’ from which
vertex ‘f ’ has already been checked and ‘b’, ‘c’, ‘d’ have already visited.
So, again we backtrack one step. Now, the vertex adjacent to ‘d’ are ‘e’,
‘f ’ from which ‘e’ has already been checked and adjacent of ‘f ’ are ‘d’ and
‘e’. If ‘e’ vertex visited then again we get dead state. So again we backtrack
one step.
8. Now, adjacent to ‘c’ is ‘e’, adjacent to ‘e’ is ‘f ’, adjacent to ‘f ’ is ‘d’
and adjacent to ‘d’ is ‘a’. Here, we get the Hamiltonian cycle, as every
vertex other than the start vertex ‘a’ is visited only once (a – b – c – e – f
– d – a).
a Root
b c d
Backtrack
c e
d e
e f
f c b d
Dead end
a Root
b c d
Backtrack
c e
d e
e f
Dead end
d e
Dead end
Again backtrack
0
a Root
1
b c d
Backtrack
2
c e
d e 3
Dead end
f 4 d
d 5 e
a Solution
5 Selected Topics
CONTENTS
Part-1 : Algebraic Computation, ..................... 5–2B to 5–4B
Fast Fourier Transform
Questions-Answers
Answer
1. The Fast Fourier Transform (FFT) is an algorithm that computes the
Discrete Fourier Transform (DFT) of an n-length vector in O(n log n)
time.
2. In the FFT algorithm, we apply the divide and conquer approach to
polynomial evaluation by observing that if n is even, we can divide a
degree-(n – 1) polynomial
A(x) = a0 + a1x + a2x2 + ... + an–1 xn–1
into two degree-(n/2 – 1) polynomials
A[0](x) = a0 + a2x + a4x2 + ... + an–2 xn/2–1
A[1](x) = a1 + a3x + a5x2 + ... + an–1 xn/2–1
where A[0] contains all the even index coefficients of A and A[1] contains
all the odd index coefficients, and we can combine these two polynomials
into A using the equation,
A(x) = A[0](x2) + x A[1](x2)                        ...(5.1.1)
So the problem of evaluating A(x) at ωnk, where k = 0, 1, 2, ..., n – 1,
reduces to,
i. Evaluating the degree-(n/2 – 1) polynomials A[0](x) and A[1](x) at the
points (ωnk)2, i.e., at
(ωn0)2, (ωn1)2, ..., (ωn(n–1))2
because we know that if ωnk is a complex nth root of unity then (ωnk)2 is
a complex (n/2)th root of unity. Thus, we can evaluate each of A[0](x) and
A[1](x) at these (ωnk)2 values.
ii. Combining the results according to the equation (5.1.1). This
observation is the basis for the following procedure, which computes
the DFT of an n-element vector a = (a0, a1,..., an–1) where for sake of
simplicity, we assume that n is a power of 2.
FFT (a, ω) :
1. n ← length[a]                    ▹ n is a power of 2.
2. if n = 1
3.    then return a
4. ωn ← e2πi/n
5. x ← 1                            ▹ x will store powers of ωn ; initially x = 1.
6. a[0] ← (a0, a2, ..., an–2)
7. a[1] ← (a1, a3, ..., an–1)
8. y[0] ← FFT(a[0], ω2)             ▹ recursive calls with ω2 as the (n/2)th root of unity.
9. y[1] ← FFT(a[1], ω2)
10. for k ← 0 to (n/2) – 1
11.    do yk ← yk[0] + x · yk[1]
12.       yk+(n/2) ← yk[0] – x · yk[1]
13.       x ← x · ωn
14. return y
Lines 2-3 represent the basis of the recursion; the DFT of one element is the
element itself, since in this case
y0 = a0 · ω10 = a0 · 1 = a0
Lines 6-7 define the coefficient vectors for the polynomials A[0] and A[1].
Lines 8-9 perform the recursive DFTn/2 computations, setting, for
k = 0, 1, 2, ..., n/2 – 1,
yk[0] = A[0](ωn2k), yk[1] = A[1](ωn2k)
where ωn2k = ωn/2k.
Answer
Application of Fast Fourier Transform :
1. Signal processing.
2. Image processing.
3. Fast multiplication of large integers.
4. Solving Poisson’s equation nearly optimally.
Recursive algorithm : Refer Q. 5.1, Page 5–2B, Unit-5.
PART-2
String Matching.
Questions-Answers
Answer
String matching is a process of finding one or more occurrences of a pattern
in a text.
String matching problem :
Given a text array T[1 .. n] of n characters and a pattern array P[1 .. m] of m
characters, the problem is to find an integer s, called a valid shift, where
0 ≤ s ≤ n – m and T[s + 1 .. s + m] = P[1 .. m].
We further assume that the elements of P and T are characters drawn from
a finite alphabet such as {0, 1} or {A, B, …, Z, a, b, …, z}.
String : A string is traditionally a sequence of characters, either as a literal
constant or as some kind of variable.
Substring : Given a string T[1 .. n], a substring is defined as T[i .. j] for
some 1 ≤ i ≤ j ≤ n, that is, the string formed by the characters in T from
index i to index j, inclusive. This means that a string is a substring of itself
(simply take i = 1 and j = n).
Proper substring : A proper substring of the string T[1 .. n] is T[i .. j] for
some 1 ≤ i ≤ j ≤ n such that either i > 1 or j < n.
Using these definitions, the substrings of any string T[1 .. n] are exactly the
strings T[i .. j] with 1 ≤ i ≤ j ≤ n.
Answer
Basic types of string matching algorithms are :
Fig. 5.4.1. (classification of string matching algorithms)
Naive string matching :
The Naive approach simply tests all the possible placements of pattern P[1 .. m]
relative to text T[1 .. n]. Specifically, we try the shifts s = 0, 1, …, n – m
successively and, for each shift s, compare T[s + 1 .. s + m] to P[1 .. m].
Naive string matcher (T, P)
1. n ← length[T]
2. m ← length[P]
3. for s ← 0 to n – m
4.    do if P[1 .. m] = T[s + 1 .. s + m]
5.       then print “pattern occurs with shift” s
The Naive string matching procedure can be interpreted graphically as
sliding the pattern P[1 .. m] over the text T[1 .. n] and noting for which shifts all
of the characters in the pattern match the corresponding characters in the
text.
To analyze the time of Naive matching, the given algorithm is implemented
as follows. Note that in this implementation, we use the notation P[i .. j] to
denote the substring of P from index i to index j, that is, P[i .. j] = P[i]
P[i + 1] … P[j].
Naive string matcher (T, P)
1. n ← length[T]
2. m ← length[P]
3. for s ← 0 to n – m do
4.    j ← 1
5.    while j ≤ m and T[s + j] = P[j] do
6.       j ← j + 1
7.    if j > m then
8.       return valid shift s
9. return no valid shift exists      // i.e., there is no substring of T matching P.
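A Python sketch of the Naive matcher is given below (illustrative only; shifts are 0-based, so on the data of Que 5.5 below it reports the single valid shift s = 4) :

def naive_string_matcher(T, P):
    # Returns every valid shift s at which P occurs in T.
    n, m = len(T), len(P)
    shifts = []
    for s in range(n - m + 1):         # try shifts s = 0, 1, ..., n - m
        if T[s:s + m] == P:            # compare T[s+1 .. s+m] with P[1 .. m]
            shifts.append(s)
    return shifts

# Example: naive_string_matcher("0000100010010", "10001") -> [4]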
Que 5.5. Show the comparisons that Naive string matcher makes
for the pattern P = {10001} in the text T = {0000100010010}
Answer
Given, P = 10001
T = 0000100010010
i. 0 0 0 0 1 0 0 0 1 0 0 1 0
S=0
1 0 0 0 1
ii 0 0 0 0 1 0 0 0 1 0 0 1 0
S=1
1 0 0 0 1
iii. 0 0 0 0 1 0 0 0 1 0 0 1 0
S=2
1 0 0 0 1
iv. 0 0 0 0 1 0 0 0 1 0 0 1 0
S=3
1 0 0 0 1
v. 0 0 0 0 1 0 0 0 1 0 0 1 0
S=4
1 0 0 0 1
vi. 0 0 0 0 1 0 0 0 1 0 0 1 0
S=5
1 0 0 0 1
vii. 0 0 0 0 1 0 0 0 1 0 0 1 0
S=6
1 0 0 0 1
viii. 0 0 0 0 1 0 0 0 1 0 0 1 0
S=7
1 0 0 0 1
ix. 0 0 0 0 1 0 0 0 1 0 0 1 0
S=8
1 0 0 0 1
Answer
Knuth-Morris-Pratt algorithm for string matching :
COMPUTE-PREFIX-FUNCTION (P)
1. m ← length[P]
2. π[1] ← 0
3. k ← 0
4. for q ← 2 to m
5.    do while k > 0 and P[k + 1] ≠ P[q]
6.       do k ← π[k]
7.    if P[k + 1] = P[q]
8.       then k ← k + 1
9.    π[q] ← k
10. return π
KMP-MATCHER calls the auxiliary procedure COMPUTE-PREFIX-
FUNCTION to compute π.
KMP-MATCHER (T, P)
1. n ← length[T]
2. m ← length[P]
3. π ← COMPUTE-PREFIX-FUNCTION (P)
4. q ← 0
5. for i ← 1 to n
6.    do while q > 0 and P[q + 1] ≠ T[i]
7.       do q ← π[q]
8.    if P[q + 1] = T[i]
9.       then q ← q + 1
10.   if q = m
11.      then print “pattern occurs with shift” i – m
12.           q ← π[q]
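Both procedures can be sketched in Python as follows (illustrative only; the arrays are 0-indexed, so pi[q] here corresponds to π[q + 1] of the pseudocode) :

def compute_prefix_function(P):
    m = len(P)
    pi = [0] * m               # pi[q] = length of the longest proper prefix of
    k = 0                      # P[0..q] that is also a suffix of it
    for q in range(1, m):
        while k > 0 and P[k] != P[q]:
            k = pi[k - 1]
        if P[k] == P[q]:
            k += 1
        pi[q] = k
    return pi

def kmp_matcher(T, P):
    n, m = len(T), len(P)
    pi = compute_prefix_function(P)
    q = 0                      # number of characters of P matched so far
    shifts = []
    for i in range(n):
        while q > 0 and P[q] != T[i]:
            q = pi[q - 1]
        if P[q] == T[i]:
            q += 1
        if q == m:             # all of P matched
            shifts.append(i - m + 1)
            q = pi[q - 1]
    return shifts

# Example: compute_prefix_function("ababababca") -> [0, 0, 1, 2, 3, 4, 5, 6, 0, 1]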
Prefix function of the string ababababca :
m ← length[P] = 10
Initially, π[1] = 0, k = 0
for q ← 2 to 10 :
for q = 2, k = 0 and P[0 + 1] ≠ P[2]       ⇒ π[2] = 0
for q = 3, k = 0 and P[0 + 1] = P[3]       ⇒ k ← k + 1 = 1, π[3] = 1
for q = 4, k > 0 and P[1 + 1] = P[4]       ⇒ k ← 1 + 1 = 2, π[4] = 2
for q = 5, k > 0 and P[2 + 1] = P[5]       ⇒ k ← 2 + 1 = 3, π[5] = 3
for q = 6, k > 0 and P[3 + 1] = P[6]       ⇒ k ← 3 + 1 = 4, π[6] = 4
for q = 7, k > 0 and P[4 + 1] = P[7]       ⇒ k ← 4 + 1 = 5, π[7] = 5
for q = 8, k > 0 and P[5 + 1] = P[8]       ⇒ k ← 5 + 1 = 6, π[8] = 6
for q = 9, k > 0 and P[6 + 1] ≠ P[9] (‘a’ ≠ ‘c’), so k ← π[6] = 4 ;
   P[5] ≠ P[9], so k ← π[4] = 2 ; P[3] ≠ P[9], so k ← π[2] = 0 ;
   P[1] ≠ P[9]                             ⇒ π[9] = 0
for q = 10, k = 0 and P[0 + 1] = P[10]     ⇒ k ← 1, π[10] = 1

String   a  b  a  b  a  b  a  b  c  a
i        1  2  3  4  5  6  7  8  9  10
π[i]     0  0  1  2  3  4  5  6  0  1
Que 5.7. Compute the prefix function for the pattern P = abacab
using KNUTH-MORRIS-PRATT algorithm. Also explain Naive string
matching algorithm. AKTU 2017-18, Marks 10
Answer
Prefix function of the string abacab :
m ← length[P] = 6
Initially, π[1] = 0, k = 0
for q ← 2 to 6
for q = 2, k = 0
   and P[0 + 1] ≠ P[2]
   π[2] = 0
for q = 3, k = 0
   and P[0 + 1] = P[3]
   k ← k + 1 = 1
   and π[3] = 1
for q = 4, k > 0
   and P[1 + 1] ≠ P[4]
   k ← π[1] = 0
   P[1] ≠ P[4]
   and π[4] = 0
for q = 5, k = 0
   and P[0 + 1] = P[5]
   k ← 0 + 1 = 1
   and π[5] = 1
for q = 6, k > 0
   and P[1 + 1] = P[6]
   k ← 1 + 1 = 2
   and π[6] = 2

String   a  b  a  c  a  b
i        1  2  3  4  5  6
π[i]     0  0  1  0  1  2
Naive string matching algorithm : Refer Q. 5.4, Page 5–5B, Unit-5.
Que 5.8. Describe in detail Knuth-Morris-Pratt string matching
algorithm. Compute the prefix function for the pattern
ababbabbabbababbabb when the alphabet is = {a, b}.
AKTU 2019-20, Marks 07
Answer
Knuth-Morris-Pratt string matching algorithm : Refer Q. 5.6,
Page 5–7B, Unit-5.
Numerical :
Pattern P = ababbabbabbababbabb, length m = 19.
Initially, π[1] = 0 and k = 0.
For q ← 2 to 19 :
For q = 2 : P[0 + 1] ≠ P[2] (a ≠ b)                           ⇒ π[2] = 0
For q = 3 : P[0 + 1] = P[3] (a = a)                           ⇒ k = 1, π[3] = 1
For q = 4 : k > 0, P[1 + 1] = P[4] (b = b)                    ⇒ k = 2, π[4] = 2
For q = 5 : k > 0, P[2 + 1] ≠ P[5], k ← π[2] = 0 ; P[1] ≠ P[5] ⇒ π[5] = 0
For q = 6 : P[0 + 1] = P[6]                                   ⇒ k = 1, π[6] = 1
For q = 7 : k > 0, P[1 + 1] = P[7]                            ⇒ k = 2, π[7] = 2
For q = 8 : k > 0, P[2 + 1] ≠ P[8], k ← π[2] = 0 ; P[1] ≠ P[8] ⇒ π[8] = 0
For q = 9 : P[0 + 1] = P[9]                                   ⇒ k = 1, π[9] = 1
For q = 10 : k > 0, P[1 + 1] = P[10]                          ⇒ k = 2, π[10] = 2
For q = 11 : k > 0, P[2 + 1] ≠ P[11], k ← π[2] = 0 ; P[1] ≠ P[11] ⇒ π[11] = 0
For q = 12 : P[0 + 1] = P[12]                                 ⇒ k = 1, π[12] = 1
For q = 13 : k > 0, P[1 + 1] = P[13]                          ⇒ k = 2, π[13] = 2
For q = 14 : k > 0, P[2 + 1] = P[14] (a = a)                  ⇒ k = 3, π[14] = 3
For q = 15 : k > 0, P[3 + 1] = P[15] (b = b)                  ⇒ k = 4, π[15] = 4
For q = 16 : k > 0, P[4 + 1] = P[16] (b = b)                  ⇒ k = 5, π[16] = 5
For q = 17 : k > 0, P[5 + 1] = P[17] (a = a)                  ⇒ k = 6, π[17] = 6
For q = 18 : k > 0, P[6 + 1] = P[18] (b = b)                  ⇒ k = 7, π[18] = 7
For q = 19 : k > 0, P[7 + 1] = P[19] (b = b)                  ⇒ k = 8, π[19] = 8

String   a b a b b a b b a b  b  a  b  a  b  b  a  b  b
i        1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
π[i]     0 0 1 2 0 1 2 0 1 2  0  1  2  3  4  5  6  7  8
Answer
The Knuth-Morris-Pratt algorithm can be used to determine whether a string T′ is a
cyclic rotation of another string T (for example, by searching for T′ in the text TT).
Knuth-Morris-Pratt algorithm : Refer Q. 5.6, Page 5–7B, Unit-5.
PART-3
Theory of NP – Completeness.
Questions-Answers
Answer
P : Class P are the problems which can be solved in polynomial time, which
take time like O(n), O(n2), O(n3).
Example : Finding maximum element in an array or to check whether a
string is palindrome or not. So, there are many problems which can be
solved in polynomial time.
NP : Class NP consists of the problems for which no polynomial time algorithm is
known, like TSP (travelling salesman problem), but whose solutions can be verified
in polynomial time.
Example : The subset sum problem is a good example of a problem in NP : given a
set of numbers, does there exist a subset whose sum is zero ? NP problems are
checkable in polynomial time, meaning that given a claimed solution of a problem,
we can check whether the solution is correct or not in polynomial time.
NP-complete : The group of problems which are both in NP and NP-hard
are known as NP-complete problem.
Now suppose we have a NP-complete problem R and it is reducible to Q
then Q is at least as hard as R and since R is an NP-hard problem, therefore
Q will also be at least NP-hard, it may be NP-complete also.
Fig. 5.12.1. Relationship among P, NP, NP-complete
and NP-hard problems.
2. It is also often used to define the complexity class NP-complete which is
the intersection of NP and NP-hard.
3. Consequently class NP-hard can be understood as the class of problems
that are NP-complete or harder.
4. There are no polynomial time algorithms for NP-hard problems.
5. A problem being in NP means that the problem is “easy” (in a certain
specific sense), whereas a problem being NP-hard means that the
problem is “difficult” (in another specific sense).
6. A problem being in NP and a problem being NP-hard are not mutually
exclusive. When a problem is both in NP and NP-hard, we say that the
problem is NP-complete.
7. All problems in NP can be solved deterministically in time O(2n).
8. An example of an NP-hard problem is the decision problem subset-sum.
Given a set of integers, does any non-empty subset of them add up to
zero ? i.e., a yes or no question, and happens to be NP-complete.
9. There are, also decision problems that are NP-hard but not NP-complete.
10. For example, the halting problem — “given a program and its input, will
it run forever ?” — is a yes/no question, so it is a decision problem. It can be
proved that the halting problem is NP-hard but not NP-complete.
Answer
A language L ⊆ {0, 1}* is NP-complete if it satisfies the following two properties :
i. L ∈ NP ; and
ii. For every L′ ∈ NP, L′ ≤p L.
NP-hard : If a language L satisfies property (ii), but not necessarily property
(i), we say that L is NP-hard.
NP-complete : We use the notation L ∈ NPC to denote that L is NP-complete.
Theorem : If any NP-complete problem is polynomial time solvable, then
P = NP. Equivalently, if any problem in NP is not polynomial time solvable,
then no NP-complete problem is polynomial time solvable.
Proof : Suppose that L ∈ P and also that L ∈ NPC. For any L′ ∈ NP, we have
L′ ≤p L by property (ii) of the definition of NP-completeness. We know that if
L′ ≤p L and L ∈ P, then L′ ∈ P, which proves the first statement.
To prove the second statement, suppose that there exists an L′ ∈ NP such
that L′ ∉ P. Let L ∈ NPC be any NP-complete language, and for the purpose
of contradiction, assume that L ∈ P. But then we have L′ ≤p L and thus L′ ∈ P,
a contradiction.
Answer
NP-hard problem :
1. We say that a decision problem Pi is NP-hard if every problem in NP is
polynomial time reducible to Pi.
2. In symbols, Pi is NP-hard if, for every Pj ∈ NP, Pj ≤poly Pi.
3. This does not require Pi to be in NP.
4. Highly informally, it means that Pi is ‘as hard as’ all the problems in NP.
5. If Pi can be solved in polynomial time, then all problems in NP can be
solved in polynomial time.
6. Existence of a polynomial time algorithm for an NP-hard problem implies
the existence of a polynomial time solution for every problem in NP.
NP-complete problem :
1. There are many problems for which no polynomial time algorithms is
known.
2. Some of these problems are travelling salesman problem, optimal graph
colouring, the Knapsack problem, Hamiltonian cycles, integer
programming, finding the longest simple path in a graph, and satisfying
a Boolean formula.
3. These problems belong to an interesting class of problems called the
‘‘NP-complete’’ problems, whose status is unknown.
4. The NP-complete problems appear to be intractable, i.e., to require
super-polynomial time.
Polynomial time problem :
An algorithm is said to be solvable in polynomial time if the number of steps
required to complete the algorithm for a given input is O(nk) for some non-
negative integer k, where n is the size of the input.
Polynomial time verifiable algorithm : A polynomial time algorithm A
is said to be polynomial time verifiable if it has following properties :
1. The input to A consists of an instance I of X (X is a decision problem) and
a string S such that the length of S is bounded by some polynomial in the
size of I.
2. The output of A is either yes or no.
3. If I is a negative instance of X, then the output of A is “no” regardless of
the value of S.
4. If I is a positive instance of X, then there is at least one choice of S for
which A output “yes”.
Procedure to solve NP-problems :
1. The class NP is the set of all decision problems that have instances that
are solvable in polynomial time using a non-deterministic turing machine.
2. In a non-deterministic turing machine, in contrast to a deterministic
turing machine, for each state, several rules with different actions can
be applied.
3. Non-deterministic turing machine branches into many copies that are
represented by a computational tree in which there are different
computational paths.
4. The class NP corresponds to a non-deterministic turing machine that
guesses the computational path that represents the solution.
5. By doing so, it guesses the instances of the decision problem.
6. In the second step, a deterministic turing machine verifies whether the
guessed instance leads to a “yes” answer.
7. It is easy to verify whether a solution is valid or not. This statement does
not mean that finding a solution is easy.
Answer
NP-complete problem : Refer Q. 5.14, Page 5–15B, Unit-5.
Minimum vertex cover problem :
1. A vertex cover of a graph is a set of vertices such that each edge of the
graph is incident to at least one vertex of the set.
2. The problem of finding a minimum vertex cover is a classical optimization
problem in computer science and is a typical example of an NP-hard
optimization problem that has an approximation algorithm.
3. Its decision version, the vertex cover problem, was one of Karp's 21 NP-
complete problems and is therefore a classical NP-complete problem in
computational complexity theory.
4. Furthermore, the vertex cover problem is fixed-parameter tractable
and a central problem in parameterized complexity theory.
5. The minimum vertex cover problem can be formulated as a half-integral
linear program whose dual linear program is the maximum matching
problem.
6. Formally, a vertex cover V′ of an undirected graph G = (V, E) is a subset of
V such that for every edge (u, v) ∈ E, u ∈ V′ or v ∈ V′, i.e., it is a set of
vertices V′ where every edge has at least one endpoint in the vertex cover V′.
Such a set is said to cover the edges of G.
Example : Fig. 5.16.1 shows examples of vertex covers, with some
vertex cover V marked in dark.
Fig. 5.16.1.
7. A minimum vertex cover is a vertex cover of smallest possible size.
8. The vertex cover number τ is the size of a minimum vertex cover, i.e., τ = |V′| for a minimum cover V′.
The Fig. 5.16.2 shows examples of minimum vertex covers in the previous
graphs.
Fig. 5.16.2.
Answer
Types of NP-complete problems :
Fig. 5.17.1. (types of NP-complete problems — e.g., the Hamiltonian cycle
problem and the clique problem discussed below)
Hamiltonian cycle problem :
1. A Hamiltonian cycle of an undirected graph G = (V, E) is a simple cycle
that contains each vertex in V.
2. A graph that contains a Hamiltonian cycle is said to be Hamiltonian,
otherwise it is said to be non-Hamiltonian.
The clique problem :
1. A clique in an undirected graph G = (V, E) is a subset V′ ⊆ V of vertices,
each pair of which is connected by an edge in E.
2. The size of a clique is the number of vertices it contains.
CLIQUE = {<G, K> : G is a graph with a clique of size K }
3. The clique problem is the optimization problem of finding a clique of
maximum size in a graph.
4. As a decision problem, we ask simply whether a clique of a given size k
exists in the graph.
Answer
Theorem : Hamiltonian circuit (HC) is NP-complete.
Proof :
1. Let us define a non-deterministic algorithm A that takes, as input, a
graph G encoded as an adjacency list in binary notation, with the vertices
numbered 1 to N.
2. We define A to first iteratively call the choose method to determine a
sequence S of N + 1 numbers from 1 to N.
3. Then, we have A to check that each number from 1 to N appears exactly
once in S (for example, by sorting S), except for the first and last numbers
in S, which should be the same.
Answer
Problem : The CLIQUE problem is defined as {< G, k >|, G is a graph with
a k-clique}. Show that CLIQUE is NP-complete.
Proof :
1. First, to show that CLIQUE is in NP. Given an instance of < G, k > and
a k-clique we can easily verify in O(n2) time that we do, in fact, have a
k-clique.
2. Now, we want to show that 3-SAT ≤p CLIQUE. Let F be a boolean
formula in CNF.
3. For each literal in F we will make a vertex in the graph, i.e.,
(x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ ¬x3) has 6 vertices.
Let k be the number of clauses in F.
4. We will connect each vertex to all of the other vertices that are logically
compatible except for the ones that are in the same clause.
5. Now, if we have a satisfiable assignment we will have a k-clique because
the satisfying vertices will all be connected to one another.
6. Thus, we can use CLIQUE to solve 3-SAT so CLIQUE is NP-complete.
Answer
Different complexity classes :
There are some complexity classes involving randomized algorithms :
1. Randomized polynomial time (RP) : The class RP consists of all
languages L that have a randomized algorithm A running in worst case
polynomial time such that for any input x in Σ* :
x ∈ L ⇒ P[A(x) accepts] ≥ 1/2
x ∉ L ⇒ P[A(x) accepts] = 0
Independent repetitions of the algorithm can be used to reduce the
probability of error to an exponentially small quantity.
2. Zero-error probabilistic polynomial time (ZPP) : The class ZPP is
the class of languages which have Las Vegas algorithms running in
expected polynomial time.
ZPP = RP ∩ co-RP
where a language L is in co-X, where X is a complexity class, if and only
if its complement Σ* – L is in X.
3. Probabilistic polynomial time (PP) : The class PP consists of all
languages L that have a randomized algorithm A running in worst case
polynomial time such that for any input x in Σ* :
x ∈ L ⇒ P[A(x) accepts] > 1/2
x ∉ L ⇒ P[A(x) accepts] < 1/2
Here it is not enough to repeat the algorithm several times on the same
input and produce the output which occurs in the majority of those trials,
because the gap away from 1/2 may be extremely small.
4. Bounded-error probabilistic polynomial time (BPP) : The class
BPP consists of all languages that have a randomized algorithm A
running in worst case polynomial time such that for any input x in Σ* :
x ∈ L ⇒ P[A(x) accepts] ≥ 3/4
x ∉ L ⇒ P[A(x) accepts] ≤ 1/4
For this class of algorithms, the error probability can be reduced to 1/2n
with only a polynomial number of iterations.
Given a graph G = (V, E) and a number k, does there exist a tour
C on G such that the sum of the edge weights for edges in C is less than
or equal to k ?
Proof :
Part 1 : TSP is in NP.
Proof :
1. Let a hint S be a sequence of vertices V = v1,..., vn.
2. We then check two things :
a. First we check that every edge traversed by adjacent vertices is an
edge in G, such that the sum of these edge weights is less than or
equal to k.
b. Secondly we check that every vertex in G is in V, which assures
that every node has been traversed.
3. We accept S if and only if S satisfies these two questions, otherwise
reject.
4. Both of these checks are clearly polynomial, thus our algorithm forms a
verifier with hint S, and TSP is consequently in NP.
Answer
1. To show that the problem is in NP : given a graph G(V, E) and a colouring
c, we can check in O(n2) time whether c is a proper colouring by checking
whether the end points of every edge e ∈ E have different colours.
2. To show that 3-COLOURING is NP-hard, we give a polynomial time
reduction from 3-SAT to 3-COLOURING.
3. That is, given an instance φ of 3-SAT, we will construct an instance of
3-COLOURING (i.e., a graph G(V, E)) where G is 3-colourable iff φ is
satisfiable.
4. Let φ be a 3-SAT instance and C1, C2, ..., Cm be the clauses of φ, defined
over the variables {x1, x2, ..., xn}.
5. The graph G(V, E) that we will construct needs to capture two things :
a. Somehow establish the truth assignment for x1, x2, ..., xn via the
colours of the vertices of G; and
b. Somehow capture the satisfiability of every clause Ci in φ.
6. To achieve these two goals, we will first create a triangle in G with three
vertices {T, F, B} where T stands for True, F for False and B for Base.
7. Consider {T, F, B} as the set of colours that we will use to colour (label)
the vertices of G.
8. Since this triangle is part of G, we need 3 colours to colour G.
9. Now we add two vertices vi and ¬vi for every variable xi and create a
triangle {B, vi, ¬vi} for each of them, as shown in Fig. 5.21.1.
Fig. 5.21.1. (each pair vi, ¬vi forms a triangle with the Base vertex B of the
T–F–B triangle)
10. This construction captures the truth assignment of the literals.
11. Since if G is 3-colourable, then either vi or ¬vi gets the colour T, and we
interpret this as the truth assignment to xi.
12. Now we need to add constraints to G to capture the satisfiability of the
clauses of φ.
13. To do so, we introduce the Clause Satisfiability Gadget (the OR-gadget).
For a clause Ci = (a ∨ b ∨ c), we need to express the OR of its literals using
our colours {T, F, B}.
14. We achieve this by creating a small gadget graph that we connect to the
literals of the clause. The OR-gadget is constructed as shown in Fig. 5.21.2.
Fig. 5.21.2. (OR-gadget with input nodes a, b, c, an intermediate node a ∨ b,
and an output node a ∨ b ∨ c)
15. Consider this gadget graph as a circuit whose output is the node labelled
a ∨ b ∨ c. We basically want this node to be coloured T if Ci is satisfied and
F otherwise.
16. This is a two step construction : the node labelled a ∨ b captures the
output of (a ∨ b) and we repeat the same operation for ((a ∨ b) ∨ c). If we
play around with some assignments to a, b, c, we will notice that the
gadget satisfies the following properties :
a. If a, b, c are all coloured F in a 3-colouring, then the output node of
the OR-gadget has to be coloured F, thus capturing the
unsatisfiability of the clause Ci = (a ∨ b ∨ c).
b. If one of a, b, c is coloured T, then there exists a valid
3-colouring of the OR-gadget where the output node is coloured T,
thus again capturing the satisfiability of the clause.
17. Once we add the OR-gadget of every Ci in φ, we connect the output node
of every gadget to the Base vertex and to the False vertex of the initial
triangle, as shown in Fig. 5.21.3.
Fig. 5.21.3. (the output node a ∨ b ∨ c of each OR-gadget is joined to both
the B and F vertices of the initial T–F–B triangle)
18. Now we prove that our initial 3-SAT instance φ is satisfiable if and only if
the graph G as constructed above is 3-colourable. Suppose φ is satisfiable
and let (x1*, x2*, ..., xn*) be the satisfying assignment.
19. If xi* is assigned True, we colour vi with T and ¬vi with F (recall they are
connected to the Base vertex, coloured B, so this is a valid colouring).
20. Since φ is satisfiable, every clause Ci = (a ∨ b ∨ c) must be satisfied, i.e.,
at least one of a, b, c is set to True. By the property of the OR-gadget, we
know that the gadget corresponding to Ci can be 3-coloured so that the
output node is coloured T.
21. And because the output node is adjacent to the False and Base vertices
of the initial triangle only, this is a proper 3-colouring.
22. Conversely, suppose G is 3-colourable. We construct an assignment of
the literals of φ by setting xi to True if vi is coloured T and vice versa.
23. Now suppose this assignment is not a satisfying assignment for φ; then
this means there exists at least one clause Ci = (a ∨ b ∨ c) that is not
satisfied.
24. That is, all of a, b, c were set to False. But if this is the case, then the
output node of the corresponding OR-gadget of Ci must be coloured F.
25. But this output node is adjacent to the False vertex coloured F, thus
contradicting the 3-colourability of G.
26. To conclude, we have shown that 3-COLOURING is in NP and that it is
NP-hard by giving a reduction from 3-SAT.
27. Therefore 3-COLOURING is NP-complete.
Que 5.22. Prove that P is the subset of NP.
Answer
To prove : P is the subset of NP.
Proof :
1. If L ∈ P, then L ∈ NP, since if there is a polynomial time algorithm to
decide L, the algorithm can easily be converted into a two-argument
verification algorithm that simply ignores any certificate and accepts
exactly those input strings it determines to be in L.
2. Thus, P ⊆ NP.
(Either P ⊂ NP, or P = NP ; which of the two holds is not known.)
PART-4
Approximation Algorithm.
Questions-Answers
Answer
Approximation algorithm :
1. An approximation algorithm is a way of dealing with NP-completeness
for optimization problem. This technique does not guarantee the best
solution.
2. The goal of an approximation algorithm is to come as close as possible to
the optimum value in a reasonable amount of time, which is at most
polynomial time.
Proof :
TSP is 2-approximate :
Let H* denote the optimal tour. Observe that an optimal tour with one edge
removed is a spanning tree (not necessarily an MST).
It implies that the weight of the MST ‘T ’ is a lower bound on the cost of an
optimal tour :
c(T) ≤ c(H*)
A “full” walk, W, traverses every edge of the MST, T, exactly twice. That is,
c(W) = 2c(T)
which means
c(W) ≤ 2c(H*)
and we have
c(W) / c(H*) ≤ p(n) = 2
That is, the cost of the walk, c(W), of the solution produced by the algorithm is
within a factor of p(n) = 2 of the cost c(H*) of an optimal solution.
Answer
A vertex cover of an undirected graph G = (V, E) is a subset V′ ⊆ V such
that if edge (u, v) ∈ E then u ∈ V′ or v ∈ V′ (or both).
Problem : Find a vertex cover of minimum size in a given undirected graph.
This optimal vertex cover is the optimization version of an NP-complete
problem, but it is not too hard to find a vertex cover that is near optimal.
Approx-Vertex-Cover (G : Graph)
1. c ← ∅
2. E′ ← E[G]
3. while E′ ≠ ∅
4.    do let (u, v) be an arbitrary edge of E′
5.       c ← c ∪ {u, v}
6.       remove from E′ every edge incident on either u or v
7. return c
Analysis : It is easy to see that the running time of this algorithm is
O(V + E), using an adjacency list to represent E′.
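A Python sketch of Approx-Vertex-Cover over an edge-list representation (illustrative only; the function name and the list-based removal of covered edges are our own) :

def approx_vertex_cover(edges):
    # edges: iterable of (u, v) pairs describing the undirected graph.
    C = set()
    remaining = list(edges)
    while remaining:
        u, v = remaining[0]                  # pick an arbitrary edge
        C.update((u, v))                     # put both endpoints in the cover
        remaining = [(a, b) for (a, b) in remaining
                     if a not in (u, v) and b not in (u, v)]
    return C

# Example: approx_vertex_cover([(1, 2), (2, 3), (3, 4)]) may return {1, 2, 3, 4},
# while an optimal cover is {2, 3}; the result is always within a factor of 2.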
Que 5.25. Describe approximation algorithm in detail. What is
the approximation ratio ? Show that vertex cover problem is
2-approximate.
Answer
Approximation algorithm : Refer Q. 5.23, Page 5–24B, Unit-5.
Proof :
Vertex cover problem is 2-approximate :
Goal : Since this is a minimization problem, we are interested in the smallest
possible ratio c / c*. Specifically, we want to show c / c* ≤ 2 = p(n).
In other words, we want to show that the Approx-Vertex-Cover algorithm returns
a vertex cover that is at most twice the size of an optimal cover.
Proof : Let the sets c and c* be the covers output by Approx-Vertex-Cover and
Optimal-Vertex-Cover respectively. Also, let A be the set of edges picked by
the algorithm.
Because we have added both endpoints of each picked edge, we get |c| = 2|A|,
but Optimal-Vertex-Cover would have needed to add only one of the two, so
c / c* ≤ p(n) = 2.
Formally, since no two edges in A are covered by the same vertex from c*, we
have the lower bound
|c*| ≥ |A|                      ...(5.25.1)
on the size of an Optimal-Vertex-Cover.
Now, we pick both end points, yielding an upper bound on the size of the
vertex cover :
|c| ≤ 2|A|
Since the upper bound is exact in this case, we have
|c| = 2|A|                      ...(5.25.2)
Take |c| / 2 = |A| and put it in equation (5.25.1) :
|c*| ≥ |c| / 2
|c| / |c*| ≤ 2 = p(n)
Hence the theorem is proved.
Answer
Problem : Given a complete graph with weights on the edges, find a cycle of
least total weight that visits each vertex exactly once. When the cost function
satisfies the triangle inequality, we can design an approximate algorithm for
TSP that returns a tour whose cost is not more than twice the cost of an
optimal tour.
APPROX-TSP-TOUR (G, c) :
1. Select a vertex r ∈ V[G] to be a “root” vertex.
2. Compute a minimum spanning tree T for G from root r using MST-
PRIM (G, c, r).
3. Let L be the list of vertices visited in a pre-order tree walk of T.
4. Return the Hamiltonian cycle H that visits the vertices in the order L.
Outline of an approx-TSP tour : First, compute a MST (minimum spanning
tree) whose weight is a lower bound on the length of an optimal TSP tour.
Then, use MST to build a tour whose cost is no more than twice that of MST’s
weight as long as the cost function satisfies triangle inequality.
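A Python sketch of APPROX-TSP-TOUR is given below (illustrative only; it uses a simple O(n²) array version of MST-PRIM and assumes dist is a symmetric cost matrix satisfying the triangle inequality) :

def approx_tsp_tour(dist, root=0):
    # dist: symmetric n x n cost matrix satisfying the triangle inequality.
    n = len(dist)
    in_tree = [False] * n
    best = [float('inf')] * n       # cheapest edge connecting each vertex to the tree
    parent = [None] * n
    best[root] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):              # MST-PRIM, O(n^2) version
        v = min((u for u in range(n) if not in_tree[u]), key=lambda u: best[u])
        in_tree[v] = True
        if parent[v] is not None:
            children[parent[v]].append(v)
        for u in range(n):
            if not in_tree[u] and dist[v][u] < best[u]:
                best[u], parent[u] = dist[v][u], v
    tour = []
    def preorder(v):                # pre-order walk of the MST gives the order L
        tour.append(v)
        for c in children[v]:
            preorder(c)
    preorder(root)
    return tour + [root]            # Hamiltonian cycle H

# Example: for 4 points on a line with dist[i][j] = |i - j|,
# approx_tsp_tour(dist) returns [0, 1, 2, 3, 0].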
Answer
i. Nearest neighbour :
The following well-known greedy algorithm is based on the nearest-
neighbour heuristic i.e., always go next to the nearest unvisited city.
Step 1 : Choose an arbitrary city as the start.
Step 2 : Repeat the following operation until all the cities have been
visited : go to the unvisited city nearest the one visited last (ties can be
broken arbitrarily).
Step 3 : Return to the starting city.
Example :
1. For the instance represented by the graph in Fig. 5.27.1, with a as the
starting vertex, the nearest-neighbour algorithm yields the tour
(Hamiltonian circuit) sa: a – b – c – d – a of length 10.
2. The optimal solution, as can be easily checked by exhaustive search, is
the tour s* : a – b – d – c – a of length 8. Thus, the accuracy ratio of this
approximation is
r(sa) = f(sa) / f(s*) = 10 / 8 = 1.25
i.e., tour sa is 25 % longer than the optimal tour s*.
Fig. 5.27.1. Instance of the travelling salesman problem
(edge weights : a–b = 1, b–c = 2, c–d = 1, a–d = 6, a–c = 3, b–d = 3).
3. Unfortunately, except for its simplicity, not many good things can be
said about the nearest-neighbour algorithm.
4. In particular, nothing can be said in general about the accuracy of
solutions obtained by this algorithm because it can force us to traverse
a very long edge on the last leg of the tour.
5. Indeed, if we change the weight of edge (a, d) from 6 to an arbitrarily
large number w ≥ 6 in the given example, the algorithm will still yield the
tour a – b – c – d – a of length 4 + w, and the optimal solution will still be
a – b – d – c – a of length 8. Hence,
r(sa) = f(sa) / f(s*) = (4 + w) / 8
which can be made as large as we wish by choosing an appropriately
large value of w. Hence, RA = ∞ for this algorithm.
ii. Multifragment heuristic :
Another natural greedy algorithm for the traveling salesman problem
considers it as the problem of finding a minimum-weight collection of
edges in a given complete weighted graph so that all the vertices have
degree 2.
Step 1 : Sort the edges in increasing order of their weights. (Ties can be
broken arbitrarily.) Initialize the set of tour edges to be constructed to
the empty set.
Step 2 : Repeat this step n times, where n is the number of cities in the
instance being solved : add the next edge on the sorted edge list to the
set of tour edges, provided this addition does not create a vertex of
degree 3 or a cycle of length less than n; otherwise, skip the edge.
Step 3 : Return the set of tour edges.
Example :
1. Applying the algorithm to the graph in Fig. 5.27.1 yields the tour edges
{(a, b), (c, d), (b, c), (a, d)}.
Answer
Approximation algorithm : Refer Q. 5.23, Page 5–24B, Unit-5.
p(n) approximation algorithm : A is a p(n)-approximate algorithm if and
only if for every instance of size n, the algorithm achieves an approximation
ratio of p(n). It is applied to both maximization (0 < C(i) ≤ C*(i)) and
minimization (0 < C*(i) ≤ C(i)) problems, because costs are positive. p(n) is
always at least 1.
Approximation algorithm for Travelling Salesman Problem (TSP) :
1. The key to designing approximation algorithm is to obtain a bound on
the optimal value (OPT).
2. In the case of TSP, the minimum spanning tree gives a lower bound on
OPT.
3. The cost of a minimum spanning tree is not greater than the cost of an
optimal tour.
The algorithm is as follows :
1. Find a minimum spanning tree of G.
2. Duplicate each edge in the minimum spanning tree to obtain a Eulerian
graph.
3. Find a Eulerian tour (J) of the Eulerian graph.
4. Convert J to a tour T by going through the vertices in the same order as
J, skipping vertices that were already visited.
PART-5
Randomized Algorithm.
Questions-Answers
Answer
Approximation algorithm : Refer Q. 5.23, Page 5–24B, Unit-5.
Randomized algorithm :
1. A randomized algorithm is defined as an algorithm that is allowed to
access a source of independent, unbiased bits and it is then allowed to
use these random bits to influence its computation.
2. An algorithm is randomized if its output is determined by the input as
well as the values produced by a random number generator.
3. A randomized algorithm makes use of a randomizer such as a random
number generator.
4. The execution time of a randomized algorithm could also vary from
run to run for the same input.
5. The algorithm typically uses the random bits as an auxiliary input to
guide its behaviour in the hope of achieving good performance in the
“average case”.
6. Randomized algorithms are particularly useful when it faces a malicious
attacker who deliberately tries to feed a bad input to the algorithm.
Randomized algorithm are categorized into two classes :
i. Las Vegas algorithm : This algorithm always produces the same
output for the same input. The execution time of Las Vegas algorithm
depends on the output of the randomizer.
ii. Monte Carlo algorithm :
a. In this algorithm output might differ from run to run for the same
input.
b. Consider any problem for which there are only two possible
answers, say yes and no.
c. If a Monte Carlo algorithm is used to solve such a problem then
the algorithm might give incorrect answers depending on the
output of the randomizer.
d. Then the requirement is that the probability of an incorrect answer
from a Monte Carlos algorithm be low.
Que 5.30. Write a short note on randomized algorithm. Write its
merits, and applications.
Answer
Randomized algorithm : Refer Q. 5.29, Page 5–30B, Unit-5.
Merits :
1. Simple.
2. High efficiency.
3. Better complexity bounds.
4. Random selection of good and bad choices.
5. Cost efficient.
Applications :
1. Randomized quick sort algorithm
2. Randomized minimum-cut algorithm
3. Randomized algorithm for N-Queens problem
4. Randomized algorithm for majority element
Que 5.31. Write the EUCLID’S GCD algorithm. Compute gcd (99,
78) with EXTENDED-EUCLID.
Answer
Euclid’s GCD algorithm :
The inputs a and b are arbitrary non-negative integers.
EUCLID (a, b) {
if (b == 0)
then return a;
else return EUCLID (b, a mod b); }
EXTEUCLID (a, b) {
// returns a triple (d, x, y) such that d = gcd (a, b)
// and d == (a × x + b × y)
if (b == 0) return (a, 1, 0);
(d1, x1, y1) = EXTEUCLID (b, a % b);
d = d1;
x = y1;
y = x1 – (a div b) × y1; // div = integer division
return (d, x, y);
}
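A direct Python transcription of EXTEUCLID (an illustrative sketch mirroring the pseudocode above) can be used to check the numerical answer that follows :

def ext_euclid(a, b):
    """Return (d, x, y) with d = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return (a, 1, 0)
    d1, x1, y1 = ext_euclid(b, a % b)
    return (d1, y1, x1 - (a // b) * y1)

# ext_euclid(99, 78) returns (3, -11, 14), and 99*(-11) + 78*14 = 3.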
Numerical :
Let a = 99 and b = 78
a     b    ⌊a/b⌋    d     x      y
99    78     1      3   – 11    14
78    21     3      3     3    – 11
21    15     1      3   – 2      3
15     6     2      3     1    – 2
 6     3     2      3     0      1
 3     0     –      3     1      0
i. In the 5th recursive call (a = 6, b = 3), the values returned by the 6th call (b = 0) are
d1 = 3, x1 = 1 and y1 = 0. Still within the 5th call we calculate
d = d1 = 3, x = y1 = 0 and y = x1 – (a div b) × y1 = 1 – 2 × 0 = 1.
ii. In the 4th recursive call (a = 15, b = 6), we receive d1 = 3, x1 = 0 and y1 = 1 from
the 5th call, then compute x = y1 = 1 and
y = x1 – (a div b) × y1 = 0 – 2 × 1 = – 2.
iii. In the 3rd recursive call (a = 21, b = 15), x = – 2 and
y = x1 – (a div b) × y1 = 1 – (1 × – 2) = 3.
iv. In the 2nd recursive call (a = 78, b = 21), x = 3 and
y = x1 – (a div b) × y1 = (– 2) – 3 × 3 = – 11.
v. In the 1st call (a = 99, b = 78), x = – 11 and
y = x1 – (a div b) × y1 = 3 – 1 × (– 11) = 14.
vi. The call EXTEUCLID (99, 78) returns (3, – 11, 14), so gcd (99, 78) = 3 and
3 = 99 × (– 11) + 78 × 14.
1 Introduction
(2 Marks Questions)
Ans.
S. No.   Time complexity                               Space complexity
1.       Time complexity is the amount of time         Space complexity is the amount of
         required for an algorithm to complete         memory needed for an algorithm to
         its process.                                  solve the problem.
2.       It is expressed using Big Oh (O),             It is expressed only using Big Oh (O)
         theta (Θ) and omega (Ω) notations.            notation.
c = 4/3 ; hence, T(n) = Θ(n2).
[Recursion-tree figure residue from another 2-marks answer ; summing the per-level costs
over the log n levels, that answer concludes T(n) = Θ(n log n).]
2 Advanced
Data Structures
(2 Marks Questions)
2.5. Draw BSTs of height 2, 3 and 4 on the set of keys {10, 4, 5, 16,
1, 17, 21}
Ans. Keys = {10, 4, 5, 16, 1, 17, 21}
BST of height 2 :
10
4 17
1 5 16 21
BST of height 3 :
5
4 10
1 17
16 21
BST of height 4 :
5
4 10
1 16
17
16
Insert 12 : 10 12
Insert 13 : 10 12 13
12
Split,
10 13
12
Insert 14 :
10 13 14
12
Insert 15 :
10 13 14 15
12 14
Split,
10 13 15
4. So t^h ≤ (n + 1)/2, as required.
Taking log (base t) on both sides we get,
h ≤ log_t ((n + 1)/2)
3 Graph Algorithm
(2 Marks Questions)
[Fragment of an adjacency-matrix / adjacency-list answer ; rows for vertices 3 and 4.]
Dynamic Programming,
4 Backtracking and
Branch and Bound
(2 Marks Questions)
4.6. What are the constraints used for solving the problem in
backtracking algorithm ?
Ans. There are two types of constraints used to solve the problem
in backtracking :
i. Explicit constraints
ii. Implicit constraints
5 Selected Topics
(2 Marks Questions)
B. Tech.
(SEM. V) ODD SEMESTER THEORY
EXAMINATION, 2015-16
DESIGN AND ANALYSIS OF ALGORITHMS
SECTION – A
Note : Attempt all parts. All parts carry equal marks. Write answer
of each part in short. (2 × 10 = 20)
SECTION – B
Note : Attempt any five questions from this section : (10 × 5 = 50)
2. Explain insertion in red-black tree. Show steps for inserting
1, 2, 3, 4, 5, 6, 7, 8 and 9 into empty RB-tree.
3. Discuss knapsack problem with respect to dynamic
programming approach. Find the optimal solution for
given problem, w (weight set) = {5, 10, 15, 20} and W
(Knapsack size) = 25 and v = {50, 60, 120, 100}.
G M P X
A C D E I K N O R S T U V Y Z
Fig. 1.
SECTION – C
Note : Attempt any two questions from this section : (15 × 2 = 30)
SOLUTION OF PAPER (2015-16)
SECTION – A
Note : Attempt all parts. All parts carry equal marks. Write answer
of each part in short. (2 × 10 = 20)
SECTION – B
Note : Attempt any five questions from this section : (10 × 5 = 50)
2. Explain insertion in red-black tree. Show steps for inserting
1, 2, 3, 4, 5, 6, 7, 8 and 9 into empty RB-tree.
Ans. Insertion :
i. We begin by adding the node as we do in a simple binary search
tree and colouring it red.
RB-INSERT(T, z)
1. y ← nil[T]
2. x ← root[T]
3. while x ≠ nil[T]
4.    do y ← x
5.       if key[z] < key[x]
6.          then x ← left[x]
7.          else x ← right[x]
8. p[z] ← y
9. if y = nil[T]
10.   then root[T] ← z
11.   else if key[z] < key[y]
12.           then left[y] ← z
13.           else right[y] ← z
14. left[z] ← nil[T]
15. right[z] ← nil[T]
16. colour[z] ← RED
17. RB-INSERT-FIXUP(T, z)
ii. Now, for any colour violation, RB-INSERT-FIXUP procedure is
used.
RB-INSERT-FIXUP(T, z)
1. while colour[p[z]] = RED
2.    do if p[z] = left[p[p[z]]]
3.          then y ← right[p[p[z]]]
4.               if colour[y] = RED
5.                  then colour[p[z]] ← BLACK          // case 1
6.                       colour[y] ← BLACK             // case 1
7.                       colour[p[p[z]]] ← RED         // case 1
8.                       z ← p[p[z]]                   // case 1
9.                  else if z = right[p[z]]
10.                         then z ← p[z]              // case 2
11.                              LEFT-ROTATE(T, z)     // case 2
12.                      colour[p[z]] ← BLACK          // case 3
13.                      colour[p[p[z]]] ← RED         // case 3
14.                      RIGHT-ROTATE(T, p[p[z]])      // case 3
15.   else (same as then clause with “right” and “left” exchanged)
16. colour[root[T]] ← BLACK
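RB-INSERT-FIXUP relies on LEFT-ROTATE (and its mirror RIGHT-ROTATE), which is not written out in this answer. A minimal Python sketch of a left rotation is given below ; the node layout (left/right/parent attributes, None in place of nil[T]) is an assumption made only for illustration.

def left_rotate(T, x):
    """Rotate left about x : x's right child y takes x's place, x becomes y's left child."""
    y = x.right
    x.right = y.left                  # turn y's left subtree into x's right subtree
    if y.left is not None:
        y.left.parent = x
    y.parent = x.parent               # link x's parent to y
    if x.parent is None:
        T.root = y
    elif x is x.parent.left:
        x.parent.left = y
    else:
        x.parent.right = y
    y.left = x                        # put x on y's left
    x.parent = y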
Numerical :
[Figures : the red-black tree after each insertion of 1, 2, ..., 9 in turn, showing the
recolourings (case 1) and rotations (cases 2 and 3) performed by RB-INSERT-FIXUP
at each step.]
For the knapsack instance of Q. 3 (w = {5, 10, 15, 20}, W = 25, v = {50, 60, 120, 100}),
items I1 and I3 are taken whole and the remaining capacity of 5 is filled with a fraction of I2.
The value of the fractional part of I2 is
(60 / 10) × 5 = 30
Thus, the maximum value is
50 + 120 + 30 = 200
G M P X
A C D E I K N O R S T U V Y Z
Fig. 1.
Ans. B-tree :
A B-tree of order m is an m-ary search tree with the following
properties :
1. The root is either a leaf or has at least two children.
2. Each node, except for the root and the leaves, has between ⌈m/2⌉
and m children.
3. Each path from the root to a leaf has the same length.
4. The root, each internal node and each leaf is typically a disk block.
5. Each internal node has up to (m – 1) key values and up to m children.
Numerical :
Insertion :
G M P X
A C D E I K N O R S T U V Y Z
G M P T X
A C D E I K N O R S U V Y Z
G M T X
A C D E I K N O R S U V Y Z
Insert B :
G M T X
A B C D E I K N O R S U V Y Z
C G M T X
A B D E I K N O R S U V Y Z
Insert Q :
C G M T X
A B D E I K N O Q R S U V Y Z
Insert L :
P
C G M T X
A B D E I K L N O Q R S U V Y Z
Insert F :
C G M T X
A B D E F I K L N O Q R S U V Y Z
Deletion :
Delete F :
C G M T X
A B D E I K L N O Q R S U V Y Z
Delete M :
C G L T X
A B D E I K N O Q R S U V Y Z
Delete G :
P
C L T X
A B D E I K N O Q R S U V Y Z
Delete D :
C L P T X
A B E I K N O Q R S U V Y Z
Delete B :
E L P T X
A C I K NO Q R S UV Y Z
1 2 3 4
1
2
3
4
Fig. 2.
2. Now, we will place each queen on a different row such that no two
queens attack each other.
3. We place the queen q1 in the very first acceptable position (1, 1).
4. Now if we place queen q2 in column 1 and 2 then the dead end is
encountered.
5. Thus, the first acceptable position for queen q2 is column 3 i.e., (2, 3)
but then no position is left for placing queen q3 safely. So, we
backtrack one step and place the queen q2 in (2, 4).
6. Now, we obtain the position for placing queen q3 which is (3, 2). But
later this position leads to a dead end and no place is found where
queen q4 can be placed safely.
1 2 3 4
1 q1
2 q2
3 q3
Fig. 3.
7. Then we have to backtrack till queen q1 and place it to (1, 2) and
then all the other queens are placed safely by moving queen q2 to
(2, 4), queen q3 to (3, 1) and queen q4 to (4, 3) i.e., we get the solution
< 2, 4, 1, 3>. This is one possible solution for 4-queens problem.
1 2 3 4
1 q1
2 q2
3 q3
4 q4
Fig. 4.
8. For other possible solution the whole method is repeated for all
partial solutions. The other solution for 4-queens problem is <3, 1,
4, 2> i.e.,
1 2 3 4
1 q1
2 q
2
3 q3
4 q4
Fig. 5.
9. Now, the implicit tree for 4-queen for solution <2, 4, 1, 3> is as
follows :
10. Fig. 6 shows the complete state space for 4-queens problem. But we
can use backtracking method to generate the necessary node and
stop if next node violates the rule i.e., if two queens are attacking.
0
1 5
q1 q1
× × × × ×
2 3
q1 q1 q1
q2 q2 q2
q1 q1
q2 q2
q3 q3
× × q1
q2
q3
q4
Fig. 6.
7. Explain greedy single source shortest path algorithm with
example.
Ans.
1. Dijkstra’s algorithm is a greedy algorithm that solves the single
source shortest path problem for a directed graph G = (V, E) with
non-negative edge weights, i.e., we assume that w(u, v) ≥ 0 for each
edge (u, v) ∈ E.
2. Dijkstra’s algorithm maintains a set S of vertices whose final
shortest path weights from the source s have already been
determined.
3. That is, for all vertices v ∈ S, we have d[v] = δ(s, v).
4. The algorithm repeatedly selects the vertex u ∈ V – S with the
minimum shortest path estimate, inserts u into S, and relaxes all
edges leaving u.
5. We maintain a priority queue Q that contains all the vertices in
V – S, keyed by their d values.
6. Graph G is represented by an adjacency list.
7. Dijkstra’s algorithm always chooses the “lightest” or “closest” vertex in V – S
to insert into set S ; this is its greedy strategy.
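A minimal Python sketch of this procedure using a binary-heap priority queue is given below ; it is an illustrative implementation (the adjacency-list format is an assumption), not the book's pseudocode. Its output agrees with the worked example that follows.

import heapq

def dijkstra(adj, s):
    """adj[u] = list of (v, w) with w >= 0 ; returns the shortest distances from s."""
    d = {u: float('inf') for u in adj}
    d[s] = 0
    pq = [(0, s)]                      # priority queue keyed by the d values
    S = set()                          # vertices whose shortest path is final
    while pq:
        du, u = heapq.heappop(pq)
        if u in S:
            continue
        S.add(u)
        for v, w in adj[u]:            # relax all edges leaving u
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

# For the example graph below :
# dijkstra({1: [(2, 10), (4, 30), (5, 100)], 2: [(3, 50)], 3: [(5, 10)],
#           4: [(3, 20), (5, 60)], 5: []}, 1)
# gives {1: 0, 2: 10, 3: 50, 4: 30, 5: 60}.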
For example :
Consider the directed graph with vertices {1, 2, 3, 4, 5} and edge weights
1 → 2 = 10, 1 → 4 = 30, 1 → 5 = 100, 2 → 3 = 50, 4 → 3 = 20, 4 → 5 = 60, 3 → 5 = 10.
Running Dijkstra’s algorithm from source 1 :

Step                                S                  d[1]  d[2]  d[3]  d[4]  d[5]
Initialize                          { }                 0     ∞     ∞     ∞     ∞
Extract min (1), relax its edges    {1}                 0    10     ∞    30    100
Extract min (2), relax its edges    {1, 2}              0    10    60    30    100
Extract min (4), relax its edges    {1, 2, 4}           0    10    50    30     90
Extract min (3), relax its edges    {1, 2, 4, 3}        0    10    50    30     60
Extract min (5)                     {1, 2, 4, 3, 5}     0    10    50    30     60

Shortest paths from vertex 1 : d[2] = 10 (1 → 2), d[4] = 30 (1 → 4),
d[3] = 50 (1 → 4 → 3), d[5] = 60 (1 → 4 → 3 → 5).
Valid matching
9 3 8 4 4 4 4 10 9 2 3 1 9 2 5
Spurious
hit
The number of spurious hits is 3.
SECTION – C
Note : Attempt any two questions from this section : (15 × 2 = 30)
12 7 4 8
14 15 9 10 11 6
20
Fig. 7.
CONSOLIDATE operation :
CONSOLIDATE(H)
1. for i 0 to D(n[H])
2. do A[i] NIL
3. for each node w in the root list of H
4. do x w
5. d degree[x]
6. while A[d] NIL
7. do y A[d] Another node with the same degree as x.
8. if key[x] > key[y]
9. then exchange x y
10. FIB-HEAP-LINK(H, y, x)
11. A[d] NIL
12. dd+1
13. A[d] x
14. min[H] NIL
15. for i 0 to D(n[H])
16. do if A[i] NIL
17. then add A[i] to the root list of H
18. if min[H] = NIL or key [A[i]] < key[min[H]]
19. then min[H] A[i]
FIB-HEAP-LINK(H, y, x)
1. remove y from the root list of H
2. make y a child of x, incrementing degree[x]
3. mark[y] FALSE
11. What is minimum spanning tree ? Explain Prim’s algorithm
and find MST of graph Fig. 8.
6 7
b c d
4 9
2
a 11 i 14 e
4
7 6
5 10
h g f
1 2
Fig. 8.
Ans. Minimum spanning tree :
1. Given a connected weighted graph G, it is often desired to create a
spanning tree T for G such that the sum of the weights of the tree
edges in T is as small as possible.
2. Such a tree is called a minimum spanning tree and represents the
‘‘cheapest’’ way of connecting all the nodes in G.
3. There are number of techniques for creating a minimum spanning
tree for a weighted graph but the most famous methods are Prim’s
and Kruskal’s algorithm.
Prim’s algorithm :
First it chooses a vertex and then chooses an edge with smallest
weight incident on that vertex. The algorithm involves following
steps :
Step 1 : Choose any vertex V1 of G.
Step 2 : Choose an edge e1 = V1V2 of G such that V2 ≠ V1 and e1 has
the smallest weight among the edges of G incident with V1.
Step 3 : If edges e1, e2, ......., ei have been chosen involving end
points V1, V2, ............., Vi+1, choose an edge ei+1 = VjVk with Vj ∈ {V1,
......., Vi+1} and Vk ∉ {V1, ......., Vi+1} such that ei+1 has the smallest
weight among the edges of G with precisely one end in {V1, ...............,
Vi+1}.
Step 4 : Stop after n – 1 edges have been chosen. Otherwise go to
step 3.
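A minimal Python sketch of these steps is given below ; it is illustrative only (the adjacency-list edge format is an assumption), and simply grows the tree by the lightest edge with exactly one end already in the tree.

import heapq

def prim_mst(adj, start):
    """adj[u] = list of (v, w) for an undirected weighted graph ; returns MST edges."""
    visited = {start}
    mst = []
    pq = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(pq)
    while pq and len(visited) < len(adj):
        w, u, v = heapq.heappop(pq)     # lightest edge with exactly one end in the tree
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        for x, wx in adj[v]:
            if x not in visited:
                heapq.heappush(pq, (wx, v, x))
    return mst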
Numerical :
6 7
b c d
4 2 9
4
a 11 i 14 e
6
5 7 10
h g f
1 2
Fig. 9.
Let a be the source node. Select edge (a, b) as distance between
edge (a, b) is minimum.
b
4
a
Now, select edge (b, c)
6
b c
4
a
Now, select edge (c, i)
6
b c
4
2
a i
Now, select edge (i, g)
6
b c
4
2
a i
6
h g
1
Now, select edge (g, f)
6
b c
4
2
a i
6
1 2
h g f
h 1 g 2 f e
City B City C
35
If the salesman's starting city is A, then a TSP tour in the graph is
A → B → D → C → A.
Cost of the tour = 10 + 25 + 30 + 15 = 80 units.
Backtracking approach is used to solve TSP problem.
Backtracking algorithm for the TSP :
1. Let G be the given complete graph with positive weights on its
edges.
2. Use a search tree that generates all permutations of V = {1 ... n},
specifically the one illustrated in Fig. 10 for the case n = 3.
0
1 2 3
1 2 3
2 3 1 3 1 2
3 2 3 1 2 1
Fig. 10.
3. A node at depth i of this tree (the root is at depth 0) stores an
i-permutation of {1, ..., n}. A leaf stores a permutation of {1, ..., n},
which is equivalent to saying that it stores a particular Hamiltonian
cycle (tour) of G.
4. For the travelling salesman problem, we will not do any static
pruning on this tree, we will do dynamic pruning, during the search.
B. Tech.
(SEM. V) ODD SEMESTER THEORY
EXAMINATION, 2016-17
DESIGN AND ANALYSIS OF ALGORITHMS
Section-A
1. Attempt all parts. All parts carry equal marks. Write answer of
each part in short. (2 × 10 = 20)
Section-B
Note : Attempt any five questions from this section. (10 × 5 = 50)
2. Explain the concepts of quick sort method and analyze its
complexity with suitable example.
4. Insert the nodes 15, 13, 12, 16, 19, 23, 5, 8 in empty red-black
tree and delete in the reverse order of insertion.
5. Write short note on Dijkstra’s algorithm shortest paths
problems.
Section-C
Note : Attempt any two questions from this section. (15 × 2 = 30)
10. i. State Bellman-Ford algorithm.
ii. Consider following instance for simple knapsack problem.
Find the solution using greedy method.
N=8
P = {11, 21, 31, 33, 43, 53, 55, 65}
W = {1, 11, 21, 23, 33, 43, 45, 55}
M = 110
SOLUTION OF PAPER (2016-17)
Section-A
1. Attempt all parts. All parts carry equal marks. Write answer of
each part in short. (2 × 10 = 20)
Section-B
Note : Attempt any five questions from this section. (10 × 5 = 50)
2. Explain the concepts of quick sort method and analyze its
complexity with suitable example.
Ans. Quick sort :
Quick sort works by partitioning a given array A[p ... r] into two
non-empty subarray A[p ... q – 1] and A [q + 1 ... r] such that every
key in A[p ... q – 1] is less than or equal to every key in A[q + 1 ... r].
Then the two subarrays are sorted by recursive calls to quick sort.
Quick_Sort (A, p, r)
1. if p < r then
2.    q ← Partition (A, p, r)
3.    Recursive call to Quick_Sort (A, p, q – 1)
4.    Recursive call to Quick_Sort (A, q + 1, r)
As a first step, Quick sort chooses as pivot one of the items in the
array to be sorted. Then array is partitioned on either side of the
pivot. Elements that are less than or equal to pivot will move toward
the left and elements that are greater than or equal to pivot will
move toward the right.
Partition (A, p, r)
1. x ← A[r]
2. i ← p – 1
3. for j ← p to r – 1
4.    do if A[j] ≤ x
5.       then i ← i + 1
6.            exchange A[i] ↔ A[j]
7. exchange A[i + 1] ↔ A[r]
8. return i + 1
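An equivalent Python sketch of this partition scheme (pivoting on the last element) and of quick sort is given below ; it is illustrative only. Note that the worked example which follows pivots on the first element instead, but the recursive structure is the same.

def partition(A, p, r):
    """Partition A[p..r] around the pivot A[r] ; return the pivot's final index."""
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def quick_sort(A, p, r):
    if p < r:
        q = partition(A, p, r)
        quick_sort(A, p, q - 1)       # sort the left subarray
        quick_sort(A, q + 1, r)       # sort the right subarray

# Example : A = [3, 1, 4, 1, 5, 9, 2, 6]; quick_sort(A, 0, len(A) - 1)  ->  A is sorted.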
Example : Given array to be sorted
3 1 4 1 5 9 2 6 5 3 5 8 9
Sort the array A using quick sort algorithm.
Step 1 : The array is Pivoted about it first element i.e., Pivot (P) = 3
3 1 4 1 5 9 2 6 5 3 5 8 9
P
Step 2 : Find first element larger then pivot (make underline) and
find element not larger than pivot from end make over line.
3 1 4 1 5 9 2 6 5 3 5 8 9
P
Underline Overline
Step 3 : Swap these element and scan again.
3 1 3 1 5 9 2 6 5 4 5 8 9
P
Array after swapping
3 1 3 1 5 9 2 6 5 4 5 8 9
P
Underline Overline
Apply swapping,
3 1 3 1 2 9 5 6 5 4 5 8 9
Again apply scanning,
3 1 3 1 2 9 5 6 5 4 5 8 9
Overline Underline
The pointers have crossed
i.e., overline on left of underlined
Then, in this situation swap pivot with overline.
2 1 3 1 3 9 5 6 5 4 5 8 9
P
Now, pivoting process is complete.
Step 4 : Recursively sort subarrays on each side of pivot.
Subarray 1 : 2 1 3 1
Subarray 2 : 9 5 6 5 4 5 8 9
First apply Quick sort for subarray 1.
2 1 3 1
P
Underline Overline
2 1 1 3
P
Underline Overline
The pointers have crossed.
i.e., overline on left of underlined.
Swap pivot with overline
1 1 2 3
Sorted array
Now, for subarray 2 we apply Quick sort procedure.
9 5 6 5 4 5 8 9
P
Overline Underline
The pointer has crossed. Then swap pivot with overline.
8 5 6 5 4 5 9 9
Subarray 4
Subarray 3
8 5 6 5 4 5
P Overline
Swap overline with pivot.
5 5 6 5 4 8
P
5 5 6 5 4
Underline Overline
5 5 4 5 6
Overline on left of underlined.
Swap pivot with overline.
4 5 5 5 6
P
Now combine all the subarrays
Sorted array 1 1 2 3 3 4 5 5 5 6 8 9 9
Pivot
Analysis of complexity :
i. Worst case :
1. Let T(n) be the worst case time for quick sort on input size n. We
have the recurrence
T(n) = max_{0 ≤ q ≤ n – 1} (T(q) + T(n – q – 1)) + Θ(n)      ...(1)
where q ranges from 0 to n – 1, since the partition produces two
regions whose sizes sum to n – 1.
2. Now we assume that T(n) ≤ cn2 for some constant c.
Substituting our assumption in eq. (1) we get
T(n) ≤ max_{0 ≤ q ≤ n – 1} (cq2 + c(n – q – 1)2) + Θ(n)
10, 25 5, 16
5, 10, 16, 25
3. Consider the second half : 35, 48, 5 again divide into two sub-
arrays
35 , 48 8
35, 48 8
8, 35, 48
4. Merge these two sorted sub-arrays,
5, 10, 16, 25 8, 35, 48
4. Insert the nodes 15, 13, 12, 16, 19, 23, 5, 8 in empty red-black
tree and delete in the reverse order of insertion.
Ans. Insertion :
B
Insert 15 : 15
B
R 15
Insert 13 : 13
Insert 12 :
B
B
15 13
R
R 13 12 15
12 R R
Insert 16 :
B B
13 Case 3 13
12 15 12 15
R R B B 16
16
R R
Insert 19 :
B B
13 Case 2 13
B B B
12 15 12 16
R B R
16 15 19
R
19 R
Insert 23 :
B B
13 13
B B B R
12 16 12 16
R
15 19 15 19
R
R 23 B B 23
R
Insert 5 :
B
B 13
R
12 16
B
5 15 19
R B
23
R
Insert 8 :
B B
13 13
B R
B
B 12 16 8 16
R R
2nd 15 19 B
R 5 5 12 15 19
B B
23 B R
1st 8 23
R
R
Deletions :
Delete 8 :
B B
13 13
B R B R
8 16 12 16
R
5 12 15 19 B 5 R B 15 19 B
R B
R 23 23 R
Delete 5 :
B
B
13 13
R R B
R
12 16 12 16
R B B
5 15 19 15 19 B
B
23 R 23 R
Delete 23 :
B B
13 13
B 12 R B 12 R
16 16
B 15 19 B B 15 19 B
23 R
Delete 19 :
B B B
13 13 13
R Case 2 B
B 12 16 B 12 16 12 16 B
R
B 15 19 B B 15 15 R
Delete 16 :
B B
13 13
B B
B 12 16 B 12 15
15 R
Delete 12 :
B B
13 13
B 12 15 B 15 B
Delete 13 :
B B
13 15
15 B
Delete 15 :
No tree
Section-C
Note : Attempt any two questions from this section. (15 × 2 = 30)
10. i. State Bellman-Ford algorithm.
ii. Consider following instance for simple knapsack problem.
Find the solution using greedy method.
N= 8
P = {11, 21, 31, 33, 43, 53, 55, 65}
W = {1, 11, 21, 23, 33, 43, 45, 55}
M = 110
Ans.
i.
1. Bellman-Ford algorithm finds all shortest path length from a source
s V to all v V or determines that a negative-weight cycle exists.
2. Bellman-Ford algorithm solves the single source shortest path
problem in the general case in which edges of a given digraph G
can have negative weight as long as G contains no negative cycles.
3. This algorithm, uses the notation of edge relaxation but does not
use with greedy method.
4. The algorithm returns boolean TRUE if the given digraph contains
no negative cycles that are reachable from source vertex otherwise
it returns boolean FALSE.
Bellman-Ford (G, w, s) :
1. INITIALIZE-SINGLE-SOURCE (G, s)
2. for i ← 1 to |V[G]| – 1
3.    do for each edge (u, v) in E[G]
4.       do RELAX (u, v, w)
5. for each edge (u, v) in E[G]
6.    do if d[u] + w(u, v) < d[v]
7.       then return FALSE
8. return TRUE
RELAX (u, v, w) :
1. if d[u] + w(u, v) < d[v]
2.    then d[v] ← d[u] + w(u, v)
3.         π[v] ← u
If Bellman-Ford returns TRUE, the d and π values form a shortest path tree of G ;
otherwise there exists a negative weight cycle reachable from the source.
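A minimal Python sketch of Bellman-Ford is given below ; it is illustrative only (the edge-list representation is an assumption, not the book's pseudocode).

def bellman_ford(vertices, edges, s):
    """edges = list of (u, v, w) ; returns (True, d) or (False, None) on a negative cycle."""
    d = {v: float('inf') for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):       # |V| - 1 passes of edge relaxation
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pi[v] = u
    for u, v, w in edges:                    # one extra pass detects a negative cycle
        if d[u] + w < d[v]:
            return False, None
    return True, d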
ii.
N=8
W = {1, 11, 21, 23, 33, 43, 45, 55}
P = {11, 21, 31, 33, 43, 53, 55, 65}
M = 110
Now, arrange the value of Pi in decreasing order
N Wi Pi V i = Wi × P i
1 1 11 11
2 11 21 231
3 21 31 651
4 23 33 759
5 33 43 1419
6 43 53 2279
7 45 55 2475
8 55 65 3575
Now, fill the knapsack according to decreasing value of Pi. First we
choose item N = 1 whose weight is 1.
Then choose item N = 2 whose weight is 11.
Then choose item N = 3 whose weight is 21.
Now, choose item N = 4 whose weight is 23.
Then choose item N = 5 whose weight is 33.
Total weight in knapsack is = 1 + 11 + 21 + 23 + 33 = 89
Now, the next item is N = 6 and its weight is 43, but we want only
21 because M = 110.
So, we choose a fractional part of it, i.e.,
the value of the fractional part of N = 6 is
(2279 / 43) × 21 = 1113
[Figure and matrix residue : the knapsack filled to capacity M = 110, followed by the
original and reduced cost matrices for the branch and bound TSP question answered below.]
2. So, total expected cost is : 10 + 2 + 2 + 3 + 4 + 1 + 3 = 25.
3. We have discovered the root node V1 so the next node to be expanded
will be V2, V3, V4, V5. Obtain cost of expanding using cost matrix for
node 2.
4. Change all the elements in 1st row and 2nd column.
12 11 2 0
M2 = 0 0 2
15 0 0
11 0 12
5. Now, reducing M2 in rows and columns, we get :
12 11 2 0
M2 = 0 0 2
15 0 0
11 0 12
Total cost for M2 = 25 + 10 + 0 = 35
6. Similarly, for node 3, we have :
12 2 0
M3 = 0 3 0 2
15 3 0
11 0 12
7. Now, reducing M3, we get :
12 2 0
M3 = 0 3 0 2
15 3 0
11 0 12
Total cost for M3 = 25 + 17 + 0 = 42
8. Similarly, for node 4, we have :
12 11 0
M4 = 0 3 2
15 3 0 0
11 0 0
9. Now, reducing M4, we get :
12 11 0
M4 = 0 3 2
15 3 0 0
11 0 0
Total cost = 25 + 0 + 0 = 25
10. Similarly, for node 5, we have :
12 11 2
M5 = 0 3 0
15 3 0
11 0 0 12
11. Now, reducing M5, we get :
10 9 0
M5 = 0 3 0
15 3 0
11 0 0 12
Total cost = 25 + 1 + 2 = 28
1 V1 = 25
2 3 4 5 V 5 = 28
V2 = 35 V3 = 42 V4 = 25
Fig. 1.
12. Now, the promising node is V4 = 25. Now, we can expand V2, V3 and
V5. Now, the input matrix will be M4.
13. Change all the elements in 4th row and 2nd column.
12 11 0
M6 = 0 2
11 0
14. On reducing M6, we get :
12 11 0
M6 = 0 2
11 0
Total cost = 25 + 3 + 0 = 28
15. Similarly, for node 3, we have :
12 0
M7 = 0 3 2
3 0
11 0
16. On reducing M7, we get :
12 0
M7 = 0 3 2
3 0
11 0
Total cost = 25 + 0 + 0 = 25
17. Similarly, for node 5, we have :
12 11
M8 = 0 3
11 0 0
18. On reducing M8, we get :
1 0
M8 = 0 3
11 0 0
Total cost = 25 + 0 + 11 = 36
2 3 4 5
2 3 5
V2 = 28 V 3 = 25 V 5 = 36
Fig. 2.
19. Now, promising node is V3 = 25. Now, we can expand V2 and V5.
Now, the input matrix will be M7.
20. Change all the elements in 3rd row and 2nd column.
12 0
M9 =
0
11
21. On reducing M9, we get :
1 0
M9 =
0
0
Total cost = 25 + 3 + 0 = 28
22. Similarly, for node 5, we have :
12
M10 =
3
11 0
23. On reducing M10, we get :
0
M10 =
0
11 0
Total cost = 25 + 2 + 12 + 3 = 42.
1
2 3 4 5
2 3 5
2 5
V 2 = 28 V 5 = 42
Fig. 3.
24. Here V2 is the most promising node so next we are going to expand
this node further. Now, we are left with only one node not yet
traversed which is V5.
10 6 5 2 16
V1 V4 V3 V2 V5 V1
So, total cost of traversing the graph is :
10 + 6 + 5 + 2 + 16 = 39
T F
vn
B
v1
vn
v1
v2
v2
Fig. 4.
10. This construction captures the truth assignment of the literals.
11. Since if G is 3-colourable, then either vi or ¬vi gets the colour T, and
we interpret this as the truth assignment to vi.
12. Now we need to add constraints to G to capture the satisfiability of
the clauses of φ.
a ab
abc
b
c F
Fig. 6.
18. Now we prove that our initial 3-SAT instance φ is satisfiable if and
only if the graph G as constructed above is 3-colourable. Suppose φ is
satisfiable and let (x1*, x2*, ..., xn*) be the satisfying assignment.
19. If xi* is assigned True, we colour vi with T and ¬vi with F (recall
they are connected to the Base vertex, coloured B, so this is a valid
colouring).
20. Since φ is satisfiable, every clause Ci = (a ∨ b ∨ c) must be satisfied,
i.e., at least one of a, b, c is set to True. By the property of the OR-
gadget, we know that the gadget corresponding to Ci can be 3-
coloured so that the output node is coloured T.
21. And because the output node is adjacent to the False and Base
vertices of the initial triangle only, this is a proper 3-colouring.
22. Conversely, suppose G is 3-colourable. We construct an assignment
of the literals of by setting xi to True if vi is coloured T and vice
versa.
23. Now suppose this assignment is not a satisfying assignment of φ ;
then there exists at least one clause Ci = (a ∨ b ∨ c) that
is not satisfied.
24. That is, all of a, b, c were set to False. But if this is the case, then the
output node of corresponding OR-gadget of Ci must be coloured F.
25. But this output node is adjacent to the False vertex coloured F; thus
contradicting the 3-colourability of G.
26. To conclude, we have shown that 3-COLOURING is in NP and that
it is NP-hard by giving a reduction from 3-SAT.
27. Therefore 3-COLOURING is NP-complete.
B.Tech.
(SEM. V) ODD SEMESTER THEORY
EXAMINATION, 2017-18
DESIGN AND ANALYSIS OF ALGORITHM
Time : 3 Hours Max. Marks : 100
Section-A
SECTION-B
7 1 6
2
2 4
6 6
3 3
4 5
7
5 6
7
Fig. 2.
SECTION-C
b. Find the shortest path in the below graph from the source
vertex 1 to all other vertices by using Dijkstra’s algorithm.
50
2 3
10
10
1 30 20
100
5 4
60
Fig. 2.
b. Given the six items in the table below and a knapsack with
weight 100, what is the solution to the knapsack problem in
all concepts. i.e., explain greedy all approaches and find the
optimal solution.
Item ID Weight Value Value/Weight
A 100 40 .4
B 50 35 .7
C 40 20 .5
D 20 4 .2
E 10 10 1
F 10 6 .6
SOLUTION OF PAPER (2017-18)
Section-A
[Recursion-tree figure residue ; the answer concludes T(n) = Θ(n log n).]
SECTION-B
[Recursion-tree figure residue for a recurrence of the form T(n) = T(n/5) + T(4n/5) + n.]
Insert S : F S
Insert Q : F Q S
Insert K : F K Q S
Insert C : C F K Q S
Insert L : C F K L Q S
As there are more than 5 keys in this node, we split it.
Find the median : n[x] = 6 (even), so
median = n[x]/2 = 6/2 = 3.
Now, median = 3,
so we split the node at the 3rd key.
K
C F K L Q S
Median
(splitting point) C F L Q S
Move up
Insert H, T :
K
C F H L Q S T
Insert V :
K
C F H L Q S T V
Insert W :
K
C F H L Q S T V W
More than 5 keys split node from Median.
n[x] = 6 (even) ; median = n[x]/2 = 6/2 = 3 (i.e., the 3rd key moves up)
K S
C F H L Q T V W
Insert M, R, N :
K S
CF H L MNQR T V W
Insert P :
K S
C F H LM N P QR T V W
More than 5 key
split the node
n[x] = 6 (even) ; median = n[x]/2 = 6/2 = 3 (i.e., the 3rd key moves up)
K N S
C F H LM P QR T V W
Insert A, B :
KN S
A B CF H LM PQ R T V W
Insert X, Y :
K N S
A B CF H LM PQR TV WXY
Insert D :
K N S
A B CD FH LM PQ R TV WXY
More than 5 key
split the node
n[x] = 6 (even) ; median = n[x]/2 = 6/2 = 3 (i.e., the 3rd key moves up)
C K NS
AB D FH LM PQR TV WXY
Insert Z, E :
CK NS
A B D E FH L M PQR TV X YZ
Insert G, I :
N
CK SW
A B DEFG HI LM PQR TV X YZ
CF K SW
A B DE GHI LM PQR TV X YZ
Fig. 1. Inserted all given information with degree t = 3.
7 1 6
2
2 4
6 6
3 3
4 5
7
5 6
7
Fig. 2.
Ans. Minimum spanning tree (Minimum cost spanning tree) :
1. Given a connected weighted graph G, it is often desired to create a
spanning tree T for G such that the sum of the weights of the tree
edges in T is as small as possible.
2. Such a tree is called a minimum spanning tree and represents the
‘‘cheapest’’ way of connecting all the nodes in G.
3. There are number of techniques for creating a minimum spanning
tree for a weighted graph but the most famous methods are Prim’s
and Kruskal’s algorithm.
Kruskal’s algorithm :
i. In this algorithm, we choose an edge of G which has smallest weight
among the edges of G which are not loops.
ii. This algorithm gives an acyclic subgraph T of G and the theorem
given below proves that T is minimal spanning tree of G. Following
steps are required :
Step 1 :Choose e1, an edge of G, such that weight of e1, w(e1) is as
small as possible and e1 is not a loop.
Step 2 :If edges e1, e2, ........., ei have been selected then choose an
edge ei+1 not already chosen such that
i. the induced subgraph
G[{e1 ..... ei+1}] is acyclic and
ii. w(ei+1) is as small as possible
Step 3 :If G has n vertices, stop after n – 1 edges have been chosen.
Otherwise repeat step 2.
If G is a weighted connected graph in which the edge weights are all
non-negative numbers, and T is the subgraph of G obtained by Kruskal's
algorithm, then T is a minimal spanning tree.
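A minimal Python sketch of Kruskal's algorithm using a disjoint-set (union-find) structure to detect cycles is given below ; it is an illustrative implementation, not the book's pseudocode.

def kruskal_mst(n, edges):
    """n vertices labelled 1..n ; edges = list of (w, u, v) ; returns list of MST edges."""
    parent = {v: v for v in range(1, n + 1)}

    def find(x):                       # set representative, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # edges in ascending order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # the edge joins two components, so no cycle
            parent[ru] = rv
            mst.append((u, v, w))
        if len(mst) == n - 1:
            break
    return mst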
Numerical :
Step 1 : Arrange the edge of graph according to weight in ascending
order.
Edges Weight Edge Weight
13 2 32 6
46 3 17 7
25 4 35 7
36 5 56 7
34 6
41 6
Step 2 : Now draw the vertices as given in graph,
2 4
3
5 6
Now draw the edge according to the ascending order of weight. If
any edge forms cycle, leave that edge.
Step 3 : Select edge 13
1
2 2 4
3
5 6
Step 4 : Select edge 46
1
2 2 4
3
3
5 6
Step 5 : Select edge 25
1
2 2 4
4 3
3
5 6
Step 6 : Select edge 36
1
2 2 4
4 3
3
5
5 6
Step 7 : Select edge 23
1
2 2 4
6
4 3
3
5
5 6
All the remaining edges, such as 34, 41, 12, 35, 56 are rejected
because they form cycle.
All the vertices are covered in this tree. So, the final tree with
minimum cost of given graph is
2 2 4
6
4 3
3
5
5 6
Minimum cost = 2 + 3 + 4 + 5 + 6 = 20
Time complexity : Time complexity is O(|E| log |E|).
d. What is red-black tree ? Write an algorithm to insert a node
in an empty red-black tree explain with suitable example.
Ans. Red-black tree :
a. A red-black tree is a binary tree where each node has colour as an
extra attribute, either red or black.
b. It is a self-balancing Binary Search Tree (BST) where every node
follows following properties :
1. Every node is either red or black.
2. The root is black.
3. Every leaf (NIL) is black.
4. If a node is red, then both its children are black.
5. For each node, all paths from the node to its descendant leaves
contain the same number of black nodes.
Insertion algorithm :
i. We begin by adding the node as we do in a simple binary search
tree and colouring it red.
RB-INSERT(T, z)
1. y ← nil[T]
2. x ← root[T]
3. while x ≠ nil[T]
4.    do y ← x
5.       if key[z] < key[x]
6.          then x ← left[x]
7.          else x ← right[x]
8. p[z] ← y
9. if y = nil[T]
10.   then root[T] ← z
11.   else if key[z] < key[y]
12.           then left[y] ← z
13.           else right[y] ← z
14. left[z] ← nil[T]
15. right[z] ← nil[T]
16. colour[z] ← RED
17. RB-INSERT-FIXUP(T, z)
ii. Now, for any colour violation, RB-INSERT-FIXUP procedure is
used.
RB-INSERT-FIXUP(T, z)
1. while colour[p[z]] = RED
2.    do if p[z] = left[p[p[z]]]
3.          then y ← right[p[p[z]]]
4.               if colour[y] = RED
5.                  then colour[p[z]] ← BLACK          // case 1
6.                       colour[y] ← BLACK             // case 1
7.                       colour[p[p[z]]] ← RED         // case 1
8.                       z ← p[p[z]]                   // case 1
9.                  else if z = right[p[z]]
10.                         then z ← p[z]              // case 2
11.                              LEFT-ROTATE(T, z)     // case 2
12.                      colour[p[z]] ← BLACK          // case 3
13.                      colour[p[p[z]]] ← RED         // case 3
14.                      RIGHT-ROTATE(T, p[p[z]])      // case 3
15.   else (same as then clause with “right” and “left” exchanged)
16. colour[root[T]] ← BLACK
For example :
B
Insert 41 : 41
B
41
Insert 38 : R
38
Insert 31 :
B B
41
R 38
Case 3 R R
38
R 31 41
31
Insert 12 :
B
38 B
B R 38
Case 1 R R
31 41
R 31 41
12 R
12
Insert 19 :
B B
38
B B 38
Case 2, 3 B B
31 41
R 19 41
12 R R
R 12 31
19
Insert 8 :
B B
38 38
B B R B
Case 1
19 41 19 41
R R B B
12 31 12 31
R R
8 8
Thus final tree is
B
38
R B
19 41
B
B
12 31
R
8
25 2 10 20
7 6
so, i = 4 to 1, call MAX-HEAPIFY (A, i)
i.e., first we call MAX-HEAPIFY (A, 4)
Az [l] = 7, A [i] = A [4] = 25, A [r] = 6
l left [4] = 8
r right [4] = 9
8 9 and 7 > 25 (False)
Then, largest 4
9 9 and 6 > 25 (False)
Then, largest = 4
A [i] A [4]
Now call MAX-HEAPIFY (A, 2)
6
14 3
i=4
25 2 10 20
l=8
7 6 r=9
(i)
Similarly for i = 3, 2, 1 we get the following heap tree.
6 i 6
i
14 3 14 r 20
l
25 2 10 20 25 2 10 3
l r
7 6 7 6
(ii) (iii)
i
6 i 25
r
l 25 20 6 r 20
l
14 2 10 3 14 2 10 3
7 6 7 6
(iv) (v)
25 25
14 20 14 20
i
6 2 10 3 7 2 10 3
l 7 6 r 6 6
(vi) (vii)
So final tree after BUILD-MAX-HEAP is
25
14 20
7 2 10 3
6 6
(viii)
25 6
14 20 14 20
7 2 10 3 7 2 10 3
6 6 6 25
6 14 20 7 2 10 3 6 25
Now call MAX-HEAPIFY (A, 1) we get
Now exchange A [1] and A [8] and size = 8 – 1 = 7
20 6
14 10 14 10
7 2 6 3 7 2 3
6
6 20
6 14 10 7 2 6 3 20
14 3
7 10 7 10
6 2 6 3 6 2 6 14
3 7 10 6 2 6 14
10 3
7 6 7 6
6 2 3 6 2 10
3 7 6 6 2 10
3 2 3 7
2 6 6 3 7
Again, call MAX-HEAPIFY (A, 1)
exchange A [1] and A [4] and size = 4 – 1 = 3
6 3
6 2 6 2
3 6
3 6 2 6
6 2
3 2 3 6
2 3 6
3 2
2 3
2 3
2 3 6 6 7 10 14 20 25
SECTION-C
Fig. 3.
4. If p lies in any triangle then it is not extreme, otherwise it is.
5. We denote the convex hull of S by CH(S). Convex hull is a convex
set because the intersection of convex sets is convex and convex
hull is also a convex closure.
Graham-Scan algorithm :
The procedure GRAHAM-SCAN takes as input a set Q of points,
where |Q| ≥ 3. It calls the function TOP(S), which returns the point
on top of stack S without changing S, and NEXT-TO-TOP(S),
which returns the point one entry below the top of stack S without
changing S.
GRAHAM-SCAN(Q)
1. Let p0 be the point in Q with the minimum y-coordinate, or the
leftmost such point in case of a tie.
2. Let <p1, p2, ...., pm> be the remaining points in Q, sorted by polar
angle in counter-clockwise order around p0 (if more than one point
has the same angle, remove all but the one that is farthest from p0).
3. Top[S] ← 0
4. PUSH (p0, S)
5. PUSH (p1, S)
6. PUSH (p2, S)
7. for i ← 3 to m
8.    do while the angle formed by points NEXT-TO-TOP(S), TOP(S) and
         pi makes a non-left turn
9.       do POP(S)
10.   PUSH (pi, S)
11. return S
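The "non-left turn" test in line 8 is usually implemented with a cross product. A minimal Python sketch of this helper (an illustrative addition, not part of the book's pseudocode) :

def cross(o, a, b):
    """Cross product of vectors o->a and o->b ; > 0 means a left (counter-clockwise) turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

# In the scan loop, pop while cross(next_to_top, top, p_i) <= 0, i.e., a non-left turn.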
b. Find the shortest path in the below graph from the source
vertex 1 to all other vertices by using Dijkstra’s algorithm.
50
2 3
10
10
1 30 20
100
5 4
60
Fig. 4.
Ans. The graph of Fig. 4 has vertices {1, 2, 3, 4, 5} and edge weights
1 → 2 = 10, 1 → 4 = 30, 1 → 5 = 100, 2 → 3 = 50, 4 → 3 = 20, 4 → 5 = 60, 3 → 5 = 10.
Running Dijkstra’s algorithm from source vertex 1 :

Step                                S                  d[1]  d[2]  d[3]  d[4]  d[5]
Initialize                          { }                 0     ∞     ∞     ∞     ∞
Extract min (1), relax its edges    {1}                 0    10     ∞    30    100
Extract min (2), relax its edges    {1, 2}              0    10    60    30    100
Extract min (4), relax its edges    {1, 2, 4}           0    10    50    30     90
Extract min (3), relax its edges    {1, 2, 4, 3}        0    10    50    30     60
Extract min (5)                     {1, 2, 4, 3, 5}     0    10    50    30     60

Shortest paths from vertex 1 : d[2] = 10 (1 → 2), d[4] = 30 (1 → 4),
d[3] = 50 (1 → 4 → 3), d[5] = 60 (1 → 4 → 3 → 5).
NP-hard
P
Fig. 5. Relationship among P, NP, NP-complete and NP-hard problems.
2. It is also often used to define the complexity class NP-complete
which is the intersection of NP and NP-hard.
3. Consequently class NP-hard can be understood as the class of
problems that are NP-complete or harder.
4. No polynomial time algorithms are known for NP-hard problems.
5. A problem being in NP means that the problem is “easy” (in a
certain specific sense), whereas a problem being NP-hard means
that the problem is “difficult” (in another specific sense).
6. A problem being in NP and a problem being NP-hard are not
mutually exclusive. When a problem is both in NP and NP-hard, we
say that the problem is NP-complete.
7. All problems in NP can be solved deterministically in exponential time.
8. An example of an NP-hard problem is the decision problem subset-
sum. Given a set of integers, does any non-empty subset of them
add up to zero ? i.e., a yes or no question, and happens to be NP-
complete.
9. There are, also decision problems that are NP-hard but not NP-
complete.
10. For example, the halting problem – “given a program and its input,
will it run forever ?” – is a yes or no question, so it is a decision
problem. It is easy to prove that the halting problem is NP-hard but
not NP-complete.
b. Given the six items in the table below and a knapsack with
weight 100, what is the solution to the knapsack problem in
all concepts. i.e., explain greedy all approaches and find the
optimal solution.
Item ID Weight Value Value/Weight
A 100 40 .4
B 50 35 .7
C 40 20 .5
D 20 4 .2
E 10 10 1
F 10 6 .6
Ans. We can use 0/1-knapsack problem when the items cannot be divided
into parts and fractional knapsack problem when the items can be
divided into fractions.
First arrange in non-increasing order of value/weight :
Item ID Weight Value Value/Weight
E 10 10 1
B 50 35 0.7
F 10 6 0.6
C 40 20 0.5
A 100 40 0.4
D 20 4 0.2
According to 0/1-knapsack problem, either we select an item or
reject. So the item will be selected according to value per weight.
E is selected W = 10 < 100
B is selected W = 10 + 50
= 60 < 100
F is selected W = 60 + 10
= 70 < 100
C cannot be selected because
W = 70 + 40 = 110 > 100
Hence we select D
W = 70 + 20 = 90 < 100
Total value = 10 + 35 + 6 + 4 = 55
According to fractional knapsack problem, we can select fraction of
any item.
E is selected W = 10 < 100
B is selected W = 10 + 50
= 60 < 100
F is selected W = 60 + 10
= 70 < 100
If we select C W = 70 + 40
= 110 > 100
Hence we select the fraction of item C as
(100 – W) / (weight of C) = (100 – 70) / 40 = 30/40 = 0.75
So, weight taken from C = 0.75 × 40 = 30
W = 70 + 30 = 100
Total value = 10 + 35 + 6 + 0.75 (20)
= 10 + 35 + 6 + 15 = 66
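A minimal Python sketch of the fractional (greedy) strategy used above is given below ; the item tuples (id, weight, value) are just an illustrative input format.

def fractional_knapsack(items, capacity):
    """Greedy by value/weight ; returns (total_value, list of (id, fraction_taken))."""
    chosen, total = [], 0.0
    for item_id, weight, value in sorted(items, key=lambda t: t[2] / t[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)            # whole item if it fits, else a fraction
        chosen.append((item_id, take / weight))
        total += value * take / weight
        capacity -= take
    return total, chosen

# Example : fractional_knapsack([('A', 100, 40), ('B', 50, 35), ('C', 40, 20),
#           ('D', 20, 4), ('E', 10, 10), ('F', 10, 6)], 100) gives total value 66.0.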
Pattern P :  a  b  a  c  a  b
i         :  1  2  3  4  5  6
π[i]      :  0  0  1  0  1  2
Naive string matching algorithm :
1. The Naive approach simply test all the possible placement of pattern
P[1 .. m] relative to text T[1 .. n]. Specifically, we try shifts s = [0, 1,
…., n – m], successively and for each shift, s, compare T[s + 1 .. s +
m] to P[1 .. m].
2. The naive string matching procedure can be interpreted graphically
as sliding a pattern P[1 .. m] over the text T[1 .. n] and noting for
which shifts all of the characters in the pattern match the
corresponding characters in the text.
NAIVE-STRING-MATCHER (T, P)
1. n ← length [T]
2. m ← length [P]
3. for s ← 0 to n – m do
4.    j ← 1
5.    while j ≤ m and T[s + j] = P[j] do
6.       j ← j + 1
7.    if j > m then
8.       return valid shift s
9. return no valid shift exists   // i.e., there is no substring of T matching P.
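An equivalent Python sketch of the naive matcher is given below (illustrative, 0-indexed rather than the 1-indexed pseudocode above, and returning every valid shift) :

def naive_string_matcher(T, P):
    """Return every shift s at which P occurs in T (0-indexed)."""
    n, m = len(T), len(P)
    shifts = []
    for s in range(n - m + 1):               # try every placement of P over T
        if T[s:s + m] == P:
            shifts.append(s)
    return shifts

# Example : naive_string_matcher("abacabab", "ab") -> [0, 4, 6]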
Note : 1. Attempt all sections. If require any missing data; then choose
suitable.
2. Any special paper specific instruction.
Section-A
SECTION-B
SECTION-C
SOLUTION OF PAPER (2018-19)
Note : 1. Attempt all sections. If require any missing data; then choose
suitable.
2. Any special paper specific instruction.
Section-A
4. So th (n + 1)/2 as required.
Taking log both sides we get,
h logt (n + 1)/2
3 0 0 1 0 1 3 4 / 2 /
4 1 0 0 1 0 4 3 / 0 /
SECTION-B
[Recursion-tree fragment : each level costs cn and there are k + 1 = log_{1/(1 – α)}(cn) levels.]
So, total cost = cn + cn + .... ((k + 1) times) = cn(k + 1)
= cn × log_{1/(1 – α)}(cn)
Time complexity = O(n log_{1/(1 – α)}(cn)) = O(n log_{1/(1 – α)} n)
b. Define BNP, NP hard and NP complete problems. Prove that
travelling salesman problem is NP complete.
Ans. P (BNP) : Class P are the problems which can be solved in
polynomial time, which take time like O(n), O(n2), O(n3).
Example : Finding maximum element in an array or to check
whether a string is palindrome or not. So, there are many problems
which can be solved in polynomial time.
NP : Class NP consists of the problems for which no polynomial time
algorithm is known but whose solutions can be verified in polynomial
time ; TSP (travelling salesman problem) is an example.
Example : The subset sum problem is a classic example of NP : given a
set of numbers, does there exist a subset whose sum is zero ? NP problems
are checkable in polynomial time, meaning that given a proposed solution
of a problem, we can check whether the solution is correct in polynomial time.
NP-complete : The group of problems which are both in NP and
NP-hard are known as NP-complete problem.
Now suppose we have a NP-complete problem R and it is reducible
to Q then Q is at least as hard as R and since R is an NP-hard
problem, therefore Q will also be at least NP-hard, it may be NP-
complete also.
NP hard and NP complete :
1. We say that a decision problem Pi is NP-hard if every problem in NP
is polynomial time reducible to Pi.
2. In symbols, Pi is NP-hard if, for every Pj ∈ NP, Pj ≤_poly Pi.
3. This does not require Pi to be in NP.
4. Highly informally, it means that Pi is ‘as hard as’ all the problem in
NP.
5. If Pi can be solved in polynomial time, then so can all problems in NP.
6. Existence of a polynomial time algorithm for an NP-hard problem
implies the existence of polynomial solution for every problem in
NP.
NP-complete problem :
1. There are many problems for which no polynomial time algorithms
is known.
2. Some of these problems are travelling salesman problem, optimal
graph colouring, the Knapsack problem, Hamiltonian cycles, integer
programming, finding the longest simple path in a graph, and
satisfying a Boolean formula.
3. These problems belongs to an interesting class of problems called
the ‘‘NP-complete’’ problems, whose status is unknown.
4. The NP-complete problems are believed to be intractable, i.e., to
require super-polynomial time.
Proof :
Part 1 : TSP is in NP.
Proof :
1. Let a hint S be a sequence of vertices V = v1,..., vn.
2. We then check two things :
a. First we check that every edge traversed by adjacent vertices is an
edge in G, such that the sum of these edge weights is less than or
equal to k.
b. Secondly we check that every vertex in G is in V, which assures
that every node has been traversed.
3. We accept S if and only if S satisfies these two questions, otherwise
reject.
4. Both of these checks are clearly polynomial, thus our algorithm
forms a verifier with hint S, and TSP is consequently in NP.
Part 2 : TSP is NP-Hard.
Proof :
1. To show that TSP is NP-Hard, we must show that every problem y
in NP reduces to TSP in polynomial time.
2. To do this, consider the decision version of Hamiltonian Cycle (HC).
3. Take G = (V, E), set all edge weights equal to 1, and let k = |V|= n,
that is, k equals the number of nodes in G.
4. Any edge not originally in G then receives a weight of 2 (traditionally
TSP is on a complete graph, so we need to add in these extra edges).
5. Then pass this modified graph into TSP, asking if there exists a tour
on G with cost at most k. If the answer to TSP is YES, then HC is
YES. Likewise if TSP is NO, then HC is NO.
First direction : HC has a YES answer => TSP has a YES answer.
Proof :
1. If HC has a YES answer, then there exists a simple cycle C that
visits every node exactly once, thus C has n edges.
2. Since every edge has weight 1 in the corresponding TSP instance
for the edges that are in the HC graph, there is a Tour of weight n.
Since k = n, and given that there is a tour of weight n, it follows that
TSP has a YES answer.
Second direction : HC has a NO answer => TSP has a NO answer.
Proof :
1. If HC has a NO answer, then there does not exist a simple cycle C
in G that visits every vertex exactly once. Now suppose TSP has a
YES answer.
2. Then there is a tour that visits every vertex once with weight at
most k.
3. Since the tour requires every node be traversed, there are n edges,
and since k = n, every edge traversed must have weight 1, implying
that these edges are in the HC graph. Then take this tour and
traverse the same edges in the HC instance. This forms a
Hamiltonian Cycle, a contradiction.
This concludes Part 2. Since we have shown that TSP is both in NP
and NP-Hard, we have that TSP is NP-Complete.
12 22 40 90
Insert 45, 58 :
35
12 22 40 45 58 90
Insert 78 :
35 58
12 22 40 45 78 90
Insert 67, 60 :
35 58
12 22 40 45 60 67 78 90
Delete 35 :
40 58
12 22 45 60 67 78 90
Delete 22 :
40 58
12 25 60 67 78 90
e. Prove that the weights on the edge of the connected
undirected graph are distinct then there is a unique
minimum spanning tree. Give an example in this regard.
Also discuss prim’s minimum spanning tree algorithm in
detail.
Ans. Proof :
1. Let we have an algorithm that finds an MST (which we will call A)
based on the structure of the graph and the order of the edges
when ordered by weight.
2. Assume MST A is not unique.
3. There is another spanning tree with equal weight, say MST B.
4. Let e1 be an edge that is in A but not in B.
5. Then, B should include at least one edge e2 that is not in A.
6. Assume the weight of e1 is less than that of e2.
7. As B is an MST, {e1} ∪ B must contain a cycle.
8. Replacing e2 with e1 in B yields the spanning tree ({e1} ∪ B) – {e2},
which has a smaller weight than B.
9. This contradicts the assumption that B is an MST.
So, MST of undirected graph with distinct edge is unique.
Example :
1
1 7
2 2 4
4 3 3
8 5
5 6
6
Fig. 1.
Step 1 : Arrange the edge of graph according to weight in ascending
order.
Edges Weight Edge Weight
12 1 14 7
13 2 35 8
46 3
25 4
36 5
56 6
Step 2 : Now draw the vertices as given in graph,
1
2 4
3
5 6
Now draw the edge according to the ascending order of weight. If
any edge forms cycle, leave that edge.
Step 3 :
1 1
2 4
3
5 6
Step 4 :
1 1
2 2 4
3
5 6
Step 5 :
1 1
2 2 4
3
3
5 6
Step 6 :
1 1
2 2 4
4 3
3
5 6
Step 7 :
1 1
2 2 4
4 3
3
5
5 6
All the remaining edges, such as : 14, 35, 56 are rejected because
they form cycle.
All the vertices are covered in this tree. So, the final tree with
minimum cost of given graph is
1 1
2 2 4
4 3
3
5
5 6
Prim’s algorithm :
First it chooses a vertex and then chooses an edge with smallest
weight incident on that vertex. The algorithm involves following
steps :
Step 1 : Choose any vertex V1 of G.
Step 2 : Choose an edge e1 = V1V2 of G such that V2 ≠ V1 and e1 has
the smallest weight among the edges of G incident with V1.
Step 3 : If edges e1, e2, ......., ei have been chosen involving end
points V1, V2, ............., Vi+1, choose an edge ei+1 = VjVk with Vj ∈ {V1,
......., Vi+1} and Vk ∉ {V1, ......., Vi+1} such that ei+1 has the smallest
weight among the edges of G with precisely one end in {V1, ...............,
Vi+1}.
Step 4 : Stop after n – 1 edges have been chosen. Otherwise go to
step 3.
SECTION-C
9 18
i=4
45 5 9 1
l=8 r=9
17 6
so i = 4 to 1 call MAX HEAPIFY (A, i)
i.e., first we call MAX HEAPIFY (A, 4)
A[l] = 7, A[i] = A[4] = 45, A[r] = 6
l left [4] = 2 × 4 = 8
r right[4] = 2 × 4 + 1 = 9
8 9 and A[8] = 17 < 45 (False)
Then, largest 4.
Similarly for i = 3, 2, 1 we get the following heap tree :
i 23
23 9 18
i=3 l r
9 18
l=6 r=7 45 5 9 1
45 5 9 1
17 6
17 6
45
i
23 23 18
r
i
45 l 18
9 1
5 9
l
9 5 9 1 r
17 6
17 6
45
23 18
17 5 9 1
9 6
45 6
23 18 23 18
17 5 9 1 17 5 9 1
9 6 9 45
45
9 5 9 1 9 5 9 1
6 23
23 45
Now, call MAX HEAPIFY (A, 1), exchange A[1] and A[4] and size =
5–1=4
9 9
5 6 5 6
1 1
9 9 17 18 23 45
Now, call MAX HEAPIFY (A, 1), exchange A[1] and A[3] and size =
4–1=3
9 9
1 6 1 6
6 9 9 17 18 23 45
Exchange A[1] and A[2] size 3 – 1 = 2
5 5
1 1 1
5 6 9 9 17 18 23 45
The sorted array
1 5 6 9 9 17 18 23 45
Now call MAX HEAPIFY (A, 1) and exchange A[1] and A[7]
18 1
9 17 9 17
6 5 9 1 6 5 9 18
Now call MAX HEAPIFY (A, 1) and size = 7 – 1 = 6 exchange A[1]
and A[6]
17 17
6 9 6 9
1 5 9 1 5 9
Exchange A[1] and A[5] and size = 6 – 1 = 5
9 5
6 9 6 9
1 5 1 9
Bk BL Case1 Bk BL
(a) (b )
Fig. 2.
Case 2 : It occurs when x is the first of three roots of equal degree,
that is, degree[x] = degree[next-x] = degree[sibling[next-x]], then
again pointer move one position further down the list, and next
iteration executes either case 3 or case 4.
sibling
prev-x x next-x [next-x] prev-x x next-x
a b c d a b c d
Bk Bk Bk Case 2 Bk Bk Bk
(a) (b )
Fig. 3.
Case 3 : If degree[x] = degree[next-x] degree [sibling[next-x]]
and key[x] key[next-x], we remove next-x from the root list and
link it to x, creating Bk+1 tree.
sibling
prev-x x next-x [next-x] prev-x x next-x
a b c d a b d
Bk Bk BL Case 3
Bk BL
c
key [x] key [next-x]
(a) Bk
Bk+1
(b )
Fig. 4.
Case 4 : degree[x] = degree[next-x] degree[sibling[next-x] and
key[next-x] key x, we remove x from the root list and link it to
next-x, again creating a Bk+1 tree.
sibling
prev-x x next-x [next-x] prev-x x next-x
a b c d a c d
Bk Bk BL Case 4
b Bk BL
key [x] key [next-x]
Bk
(a) Bk+1
(b )
Fig. 5.
Algorithm for union of binomial heap :
1. The BINOMIAL-HEAP-UNION procedure repeatedly links binomial
trees where roots have the same degree.
2. The following procedure links the Bk–1 tree rooted at node y to the
Bk–1 tree rooted at node z, that is, it makes z the parent of y. Node z
thus becomes the root of a Bk tree.
BINOMIAL-LINK (y, z)
i. p[y] ← z
ii. sibling[y] ← child[z]
iii. child[z] ← y
iv. degree[z] ← degree[z] + 1
3. The BINOMIAL-HEAP-UNION procedure has two phases :
a. The first phase, performed by the call of BINOMIAL-HEAP-
MERGE, merges the root lists of binomial heaps H1 and H2
into a single linked list H that is sorted by degree into
monotonically increasing order.
b. The second phase links root of equal degree until at most one
root remains of each degree. Because the linked list H is sorted
by degree, we can perform all the like operations quickly.
BINOMIAL-HEAP-UNION(H1, H2)
1. H MAKE-BINOMIAL-HEAP ( )
2. head[H] BINOMIAL-HEAP-MERGE(H1, H2)
3. Free the objects H1 and H2 but not the lists they point to
4. if head[H] = NIL
5. then return H
6. prev-x NIL
7. x head[H]
8. next-x sibling[x]
9. while next-x NIL
10. do if (degree[x] degree[next-x]) or
(sibling[next-x] NIL and degree[sibling[next-x]] = degree[x])
11. then prev-x x case 1 and 2
12. x next-x case 1 and 2
13. else if key[x] key[next-x]
14. then sibling[x] sibling[next-x] case 3
15. BINOMIAL-LINK(next-x, x) case 3
16. else if prev-x = NIL
case 4
17. then head[H] next-x
case 4
18. else sibling[prev-x] next-x case 4
19. BINOMIAL-LINK(x, next-x) case 4
20. x next-x case 4
21. next-x sibling[x]
22. return H
BINOMIAL-HEAP-MERGE(H1, H2)
1. a head[H1]
2. b head[H2]
3. head[H1] min-degree (a, b)
4. if head[H1] = NIL
5. return
6. if head[H1] = b
7. then b a
8. a head[H1]
9. while b NIL
10. do if sibling[a] = NIL
11. then sibling[a] b
12. return
13. else if degree [sibling[a]] < degree[b]
14. then a sibling[a]
15. else c sibling[b]
16. sibling[b] sibling[a]
17. sibling[a] b
18. a sibling[a]
19. bc
Time complexity of union of two binomial heap is O(log n).
B B
8 8 Left B
R rotation 11
20 R R
R 11 R 8 20
11
R
20
R 14 R 14
B B
11 11
B Right B
B B
8 20 rotation 8 14
R
R 4 R 9 14 R 4 R 9 12 R 20 R
12 R
Delete 12 : Node 12 is red and leaf node. So simply delete node 12.
B B
11 11
B B B B
8 14 8 14
R 4 R 9 12 R 20 R R 4 R 9 20 R
R 4 9 R 20 R R 9 20 R
9 R 20 R 20 R
Delete 14 : Node 14 is internal node replace node 14 with node 20
and do not change the colour.
B B
11 11
B B B B
8 14 8 20
R
20
Ans. Dijkstra algorithm fails to find a shortest path when the graph
contains negative edges.
Bellman Ford algorithm fails to find a shortest path when the graph
contain negative weight cycle.
No, Bellman Ford cannot detect all negative weight cycle in a graph.
Numerical :
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G
2 C 11
D E 1
7
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G
1
2 C 11
D E 1
7
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G
2 C 11
D E 1
7 12
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G 13
2 C 11
D E 1
7 12
15
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G 13
2 C 11
D E 1
7 12
15 18
0 2 2 3
B F H
A
1 1
5 10 2
4 9 3 G 13
2 C 11
D E 1
7
5 0
I2 I2 E2
E2 8 E3 0
15
I3 ×
I3
5 8
23 18
E3 ×
× E3 I3
15 5
× I4 E4
15
I4 E4 18 5
28 15 ×
× ×
First Subset S1 = {5, 13}
Similarly, S2 = {8, 10}
(Tables: the sequence of distance matrices D(0) to D(5), each shown with its predecessor matrix, produced by the successive iterations of the Floyd-Warshall algorithm; D(5) gives the final all-pairs shortest-path distances.)
B. Tech.
(SEM. V) ODD SEMESTER THEORY
EXAMINATION, 2019-20
DESIGN AND ANALYSIS OF ALGORITHMS
Note : Attempt all sections. If any data is missing, assume it suitably.
Section-A
Section-B
Section-C
SOLUTION OF PAPER (2019-20)
Note : Attempt all sections. If any data is missing, assume it suitably.
Section-A
Section-B
f(n) = Ω(n^(log_b a + ε)) = Ω(n^(1 + ε))
     = Ω(n^(1 + 1)), where ε = 1
     = Ω(n^2)
Again, check the regularity condition a·f(n/b) ≤ c·f(n) for some c < 1 :
2·f(n/2) = 2·(n/2)^2 = n^2/2 ≤ c·f(n)        ...(1)
Eq. (1) is true for c = 1/2 < 1, hence by case 3 of Master's theorem
T(n) = Θ(f(n))
T(n) = Θ(n^2)
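As a quick sanity check (not part of the original solution), the recurrence T(n) = 2T(n/2) + n^2 can be evaluated exactly in Python; the ratio T(n)/n^2 settles near a constant, as the Θ(n^2) bound predicts. Names are illustrative.

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 2 T(n//2) + n**2 with T(1) = 1."""
    return 1 if n <= 1 else 2 * T(n // 2) + n * n

for n in (2**k for k in range(4, 21, 4)):
    print(n, round(T(n) / n**2, 3))   # ratio tends to a constant (about 2)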
ii. Let T(n) be the time taken by merge sort to sort an array of n
elements.
Therefore, T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + g(n), where g(n) ∈ Θ(n).
When n is even this recurrence becomes
T(n) = 2T(n/2) + g(n),
which is a special case of the general divide-and-conquer recurrence.
Compare the above recurrence with T(n) = aT(n/b) + f(n);
we get a = 2
b = 2
f(n) = g(n)
Now we find n^(log_b a) = n^(log_2 2) = n^1 = n and
f(n) = Θ(n),
i.e., case 2 of Master's theorem applies; then
T(n) = Θ(n^(log_b a) · log n)
T(n) = Θ(n log n)
Hence, the worst case running time of merge sort is Θ(n log n).
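For concreteness, a short Python merge sort whose structure matches this recurrence — two recursive calls on halves plus a Θ(n) merge — is sketched below (a minimal illustration; names are not from the text).

def merge_sort(a):
    """Sort list a; the cost follows T(n) = 2T(n/2) + Θ(n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])   # two subproblems of size n/2
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):                   # Θ(n) merge, the g(n) term
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]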
(Figures: red-black tree obtained by successively inserting 9, 81, 76 (Case 1), 23 (Case 3), 43 (Case 1), 65 (Case 3), 88, 76 (Case 2), 32 and 54 into the tree rooted at 12, with the recolouring/rotation fix-up shown after each insertion, followed by the trees obtained after Delete 23 and Delete 81.)
Section-C
The cost at level 1 of the recursion tree is n/2 + n/4 + n/8 = 7n/8;
the height of the tree is log2 n;
the cost at level 2 is n/4 + n/8 + n/16 + n/8 + n/16 + n/32 + n/16 + n/32 + n/64 = 49n/64 = (7/8)^2 n.
(Figures: B-tree built by inserting the keys 30, 55, 40, 45, 50, 55, 60, 75, 70, 65, 80, 85 and 90 into the tree initially containing 10, 20, 25, 35, with each node split shown as it occurs.)
Number of node-splitting operations = 9.
b. Explain the algorithm to delete a given element in a binomial
heap. Give an example for the same.
Ans. Deletion of key from binomial heap :
The operation BINOMIAL-HEAP-DECREASE-KEY (H, x, k) assigns a
new key k to a node x in a binomial heap H.
BINOMIAL-HEAP-DECREASE-KEY (H, x, k)
1. if k > key[x]
2.    then message "error : new key is greater than current key"
3. key[x] ← k
4. y ← x
5. z ← p[y]
6. while z ≠ NIL and key[y] < key[z]
7.    do exchange key[y] ↔ key[z]
8.       y ← z
9.       z ← p[y]
Deleting a key : The operation BINOMIAL-HEAP-DELETE
(H, x) is used to delete node x's key from the given binomial heap
H. The following implementation assumes that no node currently
in the binomial heap has a key of –∞.
BINOMIAL-HEAP-DELETE (H, x)
1. BINOMIAL-HEAP-DECREASE-KEY (H, x, –∞)
2. BINOMIAL-HEAP-EXTRACT-MIN (H)
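A minimal Python sketch of the key-exchange loop used by BINOMIAL-HEAP-DECREASE-KEY (the Node class and the demo chain are illustrative; satellite data is omitted). Deletion then amounts to decreasing the key to −∞ and performing an extract-min on the heap.

class Node:
    def __init__(self, key, parent=None):
        self.key, self.parent = key, parent

def decrease_key(x, k):
    """Lower the key of node x to k and bubble it up towards the root."""
    if k > x.key:
        raise ValueError("new key is greater than current key")
    x.key = k
    y, z = x, x.parent
    while z is not None and y.key < z.key:
        y.key, z.key = z.key, y.key      # exchange keys (satellite data would move too)
        y, z = z, z.parent

# Tiny demo: a chain 6 -> 12 -> 25 (parent pointers point upward).
root = Node(6); mid = Node(12, root); leaf = Node(25, mid)
decrease_key(leaf, 7)
print(root.key, mid.key, leaf.key)   # 6 7 12
# Deleting a node would be decrease_key(node, float("-inf")) followed by extract-min.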
For example : operation of BINOMIAL-HEAP-EXTRACT-MIN (the second
step of deletion) on a binomial heap :
Suppose a binomial heap H is as follows :
(Figure: binomial heap H with root list 37 → 10 → 1; the root with minimum key is 1.)
The root x with minimum key is 1; x is removed from the root list
of H, i.e.,
(Figure: the heap with the minimum root x = 1 detached from the root list of H.)
Now, the linked list of x's children is reversed and head[H′] is set to
point to the head of the resulting list, i.e., another binomial heap H′.
(Figure: the reversed child list of x forms binomial heap H′ with root list 25 → 12 → 16 → 6; uniting H and H′ yields the final binomial heap. Fig. 1.)
4. If p lies in any triangle then it is not extreme, otherwise it is.
5. We denote the convex hull of S by CH(S). The convex hull is a convex
set, because the intersection of convex sets is convex; it is also called
the convex closure of S.
Graham-Scan algorithm :
The procedure GRAHAM-SCAN takes as input a set Q of points,
where |Q| ≥ 3. It calls the function TOP(S), which returns the point
on top of stack S without changing S, and NEXT-TO-TOP(S),
which returns the point one entry below the top of stack S without
changing S.
GRAHAM-SCAN(Q)
1. Let p0 be the point in Q with the minimum y-coordinate, or the
leftmost such point in case of a tie.
2. Let <p1, p2, ...., pm> be the remaining points in Q, sorted by polar
angle in counter-clockwise order around p0 (if more than one point
has the same angle, remove all but the one that is farthest from p0).
3. TOP[S] ← 0
4. PUSH (p0, S)
5. PUSH (p1, S)
6. PUSH (p2, S)
7. for i ← 3 to m
8.    do while the angle formed by points NEXT-TO-TOP(S), TOP(S) and
      pi makes a non-left turn
9.       do POP(S)
10.   PUSH (pi, S)
11. return S
Time complexity :
The worst case running time of GRAHAM-SCAN is
T(n) = O(n) + O(n log n) + O(1) + O(n) = O(n log n)
where n = |Q|.
Graham scan's running time depends only on the size of the input;
it is independent of the size of the output.
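A compact Python sketch of the same idea, using a cross-product test instead of explicit angle comparisons to decide whether three points make a left turn (the point set and function names are illustrative).

from math import atan2

def cross(o, a, b):
    """> 0 for a counter-clockwise (left) turn o->a->b, < 0 for clockwise, 0 if collinear."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    p0 = min(points, key=lambda p: (p[1], p[0]))          # lowest, then leftmost point
    rest = sorted((p for p in points if p != p0),
                  key=lambda p: (atan2(p[1] - p0[1], p[0] - p0[0]),
                                 (p[0] - p0[0]) ** 2 + (p[1] - p0[1]) ** 2))
    hull = [p0]
    for p in rest:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:   # pop on non-left turns
            hull.pop()
        hull.append(p)
    return hull

pts = [(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]
print(graham_scan(pts))   # [(0, 0), (2, 0), (2, 2), (1, 3), (0, 2)]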
The value of the fractional part of I1 is
(232/11) × 3 ≈ 63
Thus, the maximum value is
682 + 495 + 63 = 1190
b. Define Floyd Warshall algorithm for all pair shortest path
and apply the same on following graph :
(Fig. 2 : weighted graph on the five vertices 1 to 5.)
Ans. Floyd Warshall algorithm :
1. Floyd-Warshall algorithm is a graph analysis algorithm for finding
shortest paths in a weighted, directed graph.
2. A single execution of the algorithm will find the shortest path
between all pairs of vertices.
3. It does so in Θ(V^3) time, where V is the number of vertices in the
graph.
4. Negative-weight edges may be present, but we shall assume that
there are no negative-weight cycles.
5. The algorithm considers the “intermediate” vertices of a shortest
path, where an intermediate vertex of a simple path p = (v1, v2, ...,
vm) is any vertex of p other than v1 or vm, that is, any vertex in the
set {v2, v3, ..., vm–1}.
6. Let the vertices of G be V = {1, 2, ..., n}, and consider a subset {1, 2,
..., k} of vertices for some k.
7. For any pair of vertices i, j V, consider all paths from i to j whose
intermediate vertices are all drawn from {1, 2, ..., k}, and let p be a
minimum-weight path from among them.
8. Let d_ij^(k) be the weight of a shortest path from vertex i to vertex j
with all intermediate vertices in the set {1, 2, ..., k}.
A recursive definition is given by
d_ij^(k) = w_ij                                           if k = 0
d_ij^(k) = min( d_ij^(k–1), d_ik^(k–1) + d_kj^(k–1) )     if k ≥ 1
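A minimal Python sketch of this recurrence, applied in place to a weight matrix (INF marks a missing edge); the demo matrix is illustrative and is not the graph of Fig. 2.

INF = float("inf")

def floyd_warshall(w):
    """w: n x n weight matrix with w[i][i] = 0 and INF for missing edges."""
    n = len(w)
    d = [row[:] for row in w]                      # D(0) = W
    for k in range(n):                             # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]    # d_ij^(k) = min(d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1))
    return d

# Small illustrative instance (not the graph of Fig. 2):
W = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]
for row in floyd_warshall(W):
    print(row)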
Numerical :
Using d_ij^(k) = min( d_ij^(k–1), d_ik^(k–1) + d_kj^(k–1) ) :
(Tables: the distance matrices D(0), D(1) and D(2) obtained from the successive Floyd-Warshall iterations on the graph of Fig. 2.)
Now, if we find D(3), D(4) and D(5) there will be no change in the
entries.