DAA-22-23

Printed Pages: 02 Sub Code:KCS-503

Paper Id: 2 3 1 2 3 6 Roll No.

B.TECH.
(SEM V) THEORY EXAMINATION
2022-23 DESIGN & ANALYSIS OF
ALGORITHM
Time: 3 Hours Total Marks: 100
Note: Attempt all Sections. If you require any missing data, then choose suitably.

SECTION A

1. Attempt all questions in brief. 2 x 10 = 20


(a) Discuss the basic steps in the complete development of an algorithm.
An algorithm is a plan for solving a problem.
There are many ways to write an algorithm. Some are very informal, some are quite formal
and mathematical in nature, and some are quite graphical. The development of an
algorithm (a plan) is a key step in solving a problem. Once we have an algorithm, we
can translate it into a computer program in some programming language. Our
algorithm development process consists of five major steps.
Step 1: Obtain a description of the problem.
Step 2: Analyze the problem.
Step 3: Develop a high-level algorithm.
Step 4: Refine the algorithm by adding more detail.
Step 5: Review the algorithm.

(b) Explain and compare best and worst time complexity of Quick Sort.
Quick sort is one of the most widely used sorting algorithms. It follows the divide and conquer paradigm, and the implementation is usually recursive. In each recursive call, a pivot is chosen, then the array is partitioned so that all the elements less than the pivot lie to its left and all the elements greater than the pivot lie to its right. After every call, the chosen pivot occupies the position it would hold in the sorted array. Each step thus splits the problem into two smaller subproblems, which is what makes quicksort quick.

Time Complexity

 Partitioning the n elements takes O(n) time.

 In quicksort, the problem is divided into two subproblems at each step.

 Best Time Complexity: O(n log n)

 Average Time Complexity: O(n log n)

 Worst Time Complexity: O(n^2)

 The worst case occurs when the array is already sorted (with the first or last element always chosen as pivot).

QP23DP1_032 | 10-01-2023 15:13:03 | 117.55.241.162
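The two cases above can be seen directly in code. Below is a minimal quicksort sketch in C++ (the function names are illustrative) using the Lomuto partition scheme with the last element as pivot; with this fixed pivot choice, an already sorted input triggers the O(n^2) worst case.

```cpp
#include <utility>
#include <vector>

// Lomuto partition: move the last element (the pivot) to its final sorted position.
int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[hi];
    int i = lo - 1;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot) std::swap(a[++i], a[j]);
    std::swap(a[i + 1], a[hi]);
    return i + 1;
}

void quickSort(std::vector<int>& a, int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);   // pivot lands at its sorted position
        quickSort(a, lo, p - 1);        // sort elements smaller than the pivot
        quickSort(a, p + 1, hi);        // sort elements greater than the pivot
    }
}
```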

(c) Discuss Skip list and its operations.

A skip list is a probabilistic data structure. It stores a sorted list of elements using a linked list, and it allows the elements to be accessed efficiently. In a single step it can skip over several elements of the list, which is why it is known as a skip list. The skip list is an extended version of the linked list: it allows the user to search for, remove, and insert elements very quickly. It consists of a base list holding the set of elements, together with a hierarchy of linked lists that skip over subsequent elements.

Skip List Basic Operations

There are the following types of operations in the skip list:

o Insertion operation: used to add a new node at a particular location in a specific situation.
o Deletion operation: used to delete a node in a specific situation.
o Search operation: used to search for a particular node in a skip list.
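A minimal skip-list sketch in C++ illustrating insertion and search (deletion is analogous); the node layout, the level cap MAXLVL, and the coin-flip level generator are illustrative choices, not a fixed specification.

```cpp
#include <climits>
#include <cstdlib>
#include <vector>

// One node; next[i] is the successor at level i.
struct Node {
    int key;
    std::vector<Node*> next;
    Node(int k, int levels) : key(k), next(levels, nullptr) {}
};

struct SkipList {
    static constexpr int MAXLVL = 8;
    Node head{INT_MIN, MAXLVL};   // sentinel smaller than every key

    // Coin flips: each extra level is kept with probability 1/2.
    int randomLevel() {
        int lvl = 1;
        while (lvl < MAXLVL && std::rand() % 2) lvl++;
        return lvl;
    }

    void insert(int key) {
        std::vector<Node*> update(MAXLVL, &head);
        Node* x = &head;
        for (int i = MAXLVL - 1; i >= 0; i--) {        // predecessor at each level
            while (x->next[i] && x->next[i]->key < key) x = x->next[i];
            update[i] = x;
        }
        Node* n = new Node(key, randomLevel());
        for (int i = 0; i < (int)n->next.size(); i++) { // splice in at each level
            n->next[i] = update[i]->next[i];
            update[i]->next[i] = n;
        }
    }

    bool search(int key) {
        Node* x = &head;
        for (int i = MAXLVL - 1; i >= 0; i--)          // drop down level by level
            while (x->next[i] && x->next[i]->key < key) x = x->next[i];
        x = x->next[0];
        return x && x->key == key;
    }
};
```

Search runs in O(log n) expected time because each level roughly halves the number of candidate nodes.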

(d) Discuss the properties of binomial trees.


A Binomial Tree of order 0 has exactly 1 node. A Binomial Tree of order k can be constructed by taking two binomial trees of order k-1 and making one the leftmost child of the other. A Binomial Tree of order k has the following properties:
 It has exactly 2^k nodes.
 It has depth k.
 There are exactly C(k, i) nodes at depth i for i = 0, 1, ..., k.
 The root has degree k, and the children of the root are themselves Binomial Trees of order k-1, k-2, ..., 0 from left to right.
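The property that depth i holds C(k, i) nodes can be checked numerically: summing C(k, i) over all depths must give 2^k, the total node count. A small helper (name illustrative) using the multiplicative formula for binomial coefficients:

```cpp
// Multiplicative formula for C(k, i); the division is exact at every step
// because the running product is itself a binomial coefficient.
long long binom(int k, int i) {
    long long r = 1;
    for (int j = 0; j < i; j++)
        r = r * (k - j) / (j + 1);
    return r;
}
```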

(e) Illustrate the applications of Graph Coloring Problem


Graph coloring is the procedure of assigning colors to each vertex of a graph G such that no two adjacent vertices get the same color. The objective is to minimize the number of colors used. The smallest number of colors required to color a graph G is called its chromatic number. The graph coloring problem is NP-complete.
Method to Color a Graph
The steps required to color a graph G with n vertices are as follows −
Step 1 − Arrange the vertices of the graph in some order.
Step 2 − Choose the first vertex and color it with the first color.



Step 3 − Choose the next vertex and color it with the lowest-numbered color that has not been used on any vertex adjacent to it. If all previously used colors appear on adjacent vertices, assign a new color to it. Repeat this step until all the vertices are colored.
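The three steps above amount to the standard greedy coloring heuristic. A compact C++ sketch (function name illustrative), assuming the graph is given as an adjacency list:

```cpp
#include <vector>

// Greedy colouring: scan vertices in order, give each the lowest colour
// not already used by one of its coloured neighbours.
std::vector<int> greedyColoring(const std::vector<std::vector<int>>& adj) {
    int n = adj.size();
    std::vector<int> color(n, -1);
    for (int v = 0; v < n; v++) {
        std::vector<bool> used(n, false);
        for (int u : adj[v])
            if (color[u] != -1) used[color[u]] = true;
        int c = 0;
        while (used[c]) c++;          // lowest-numbered free colour
        color[v] = c;
    }
    return color;
}
```

On an odd cycle such as C5 this heuristic needs three colors, matching the chromatic number.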

(f) Define the principle of optimality.

A problem is said to satisfy the Principle of Optimality if the subsolutions of an optimal solution of the problem are themselves optimal solutions for their subproblems.

Examples:

 The shortest path problem satisfies the Principle of Optimality: if a, x1, x2, ..., xn, b is a shortest path from node a to node b in a graph, then the portion from xi to xj on that path is a shortest path from xi to xj.

 The longest path problem, on the other hand, does not satisfy the Principle of Optimality. Take for example the undirected graph G with nodes a, b, c, d, and e, and edges (a,b), (b,c), (c,d), (d,e) and (e,a); that is, G is a ring. The longest (noncyclic) path from a to d is a, b, c, d. The sub-path from b to c on that path is simply the edge b,c. But that is not the longest path from b to c; rather, b, a, e, d, c is the longest path. Thus, a subpath of a longest path is not necessarily a longest path.

(j) Differentiate Backtracking and Branch and Bound Techniques.

Approach: Backtracking is used to find all possible solutions available to a problem. When it realises that it has made a bad choice, it undoes the last choice by backing up, and it searches the state space tree until it has found a solution. Branch-and-Bound is used to solve optimisation problems. When it realises that it already has a better optimal solution than the one the current pre-solution leads to, it abandons that pre-solution; it searches the state space tree completely to get the optimal solution.

Problems: Backtracking is used for solving decision problems. Branch-and-Bound is used for solving optimisation problems.

Searching: In backtracking, the state space tree is searched only until a solution is obtained. In Branch-and-Bound, the optimum solution may be present anywhere in the state space tree, so the tree needs to be searched completely.

Efficiency: Backtracking is more efficient. Branch-and-Bound is less efficient.

Applications: Backtracking is useful in solving the N-Queen problem, sum of subsets, Hamiltonian cycle problem, and graph coloring problem. Branch-and-Bound is useful in solving the Knapsack problem and the Travelling Salesman Problem.

(k) Discuss backtracking problem solving approach.


Backtracking is an algorithmic technique for solving problems recursively by trying to build a solution incrementally, one piece at a time, removing those partial solutions that fail to satisfy the constraints of the problem at any point in time (time here refers to the time elapsed until reaching any level of the search tree). Backtracking can also be seen as an improvement over the brute force approach. The idea behind the backtracking technique is that it searches for a solution to a problem among all the available options. Initially, we start the backtracking from one possible option; if the problem is solved with that selected option, we return the solution, else we backtrack and select another option from the remaining available options. There might also be a case where none of the options gives a solution, and then we understand that backtracking will not yield a solution for that particular problem. We can also say that backtracking is a form of recursion, because the process of finding a solution from the various options available is repeated recursively until we find the solution or we reach the final state.

(l) Define NP, NP-hard and NP-complete. Give an example of each.


NP Problem:
NP is the set of problems whose solutions may be hard to find but are easy to verify, and which can be solved by a non-deterministic machine in polynomial time.
Example: Boolean satisfiability (SAT), graph isomorphism.

NP-Hard Problem:
A problem X is NP-Hard if there is an NP-Complete problem Y such that Y is reducible to X in polynomial time. NP-Hard problems are at least as hard as NP-Complete problems. An NP-Hard problem need not be in the class NP.
Examples:
1. The Halting problem.
2. The optimization version of the Travelling Salesman Problem.

NP-Complete Problem:
A problem X is NP-Complete if X is in NP and every problem Y in NP is reducible to X in polynomial time; equivalently, a problem is NP-Complete if it is in both NP and NP-Hard. A non-deterministic Turing machine can solve an NP-Complete problem in polynomial time.
Examples:
1. Boolean satisfiability (SAT).
2. The Hamiltonian cycle problem (decision version).

(m) Explain Randomized algorithms.

An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a Randomized Algorithm. For example, in Randomized Quick Sort, we use a random number to pick the next pivot (or we randomly shuffle the array). Typically, this randomness is used to reduce the time complexity or space complexity of otherwise standard algorithms.
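A randomized quicksort sketch in C++ along these lines, where the pivot index is drawn uniformly from the current subarray; the function name and the use of std::mt19937 are illustrative choices. With a random pivot, the O(n^2) behaviour becomes unlikely for any fixed input ordering, including sorted inputs.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Randomized quicksort: pick the pivot uniformly at random, then Lomuto partition.
void randomizedQuickSort(std::vector<int>& a, int lo, int hi, std::mt19937& rng) {
    if (lo >= hi) return;
    std::uniform_int_distribution<int> pick(lo, hi);
    std::swap(a[pick(rng)], a[hi]);          // random pivot moved to the end
    int pivot = a[hi], i = lo - 1;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot) std::swap(a[++i], a[j]);
    std::swap(a[i + 1], a[hi]);              // pivot now at index i + 1
    randomizedQuickSort(a, lo, i, rng);
    randomizedQuickSort(a, i + 2, hi, rng);
}
```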

SECTION B

Attempt any three of the following: 10x3 = 30


(a) Explain the Merge sort algorithm and sort the following sequence {23, 11, 5, 15, 68, 31, 4, 17} using merge sort.

Merge sort is similar to the quick sort algorithm in that it uses the divide and conquer approach to sort the elements. It is one of the most popular and efficient sorting algorithms. It divides the given list into two equal halves, calls itself on the two halves, and then merges the two sorted halves. We have to define the merge() function to perform the merging. The sub-lists are divided again and again into halves until a list cannot be divided further. Then we combine pairs of one-element lists into two-element lists, sorting them in the process. The sorted two-element pairs are merged into four-element lists, and so on, until we get the fully sorted list.

Algorithm

In the following algorithm, arr is the given array, beg is the starting element, and end is the
last element of the array.

MERGE_SORT(arr, beg, end)

if beg < end


set mid = (beg + end)/2
MERGE_SORT (arr, beg, mid)
MERGE_SORT (arr, mid + 1, end)
MERGE (arr, beg, mid, end)
end of if

END MERGE_SORT



The important part of the merge sort is the MERGE function. This function performs the merging
of two sorted sub-arrays that are A[beg…mid] and A[mid+1…end], to build one sorted
array A[beg…end]. So, the inputs of the MERGE function are A[], beg, mid, and end.

The implementation of the MERGE function is given as follows -

/* Function to merge the sorted subarrays a[beg..mid] and a[mid+1..end] */
void merge(int a[], int beg, int mid, int end)
{
    int i, j, k;
    int n1 = mid - beg + 1;
    int n2 = end - mid;

    int LeftArray[n1], RightArray[n2]; /* temporary arrays */

    /* copy data to the temp arrays */
    for (i = 0; i < n1; i++)
        LeftArray[i] = a[beg + i];
    for (j = 0; j < n2; j++)
        RightArray[j] = a[mid + 1 + j];

    i = 0;   /* initial index of first sub-array */
    j = 0;   /* initial index of second sub-array */
    k = beg; /* initial index of merged sub-array */

    while (i < n1 && j < n2)
    {
        if (LeftArray[i] <= RightArray[j])
            a[k++] = LeftArray[i++];
        else
            a[k++] = RightArray[j++];
    }

    /* copy any remaining elements */
    while (i < n1)
        a[k++] = LeftArray[i++];
    while (j < n2)
        a[k++] = RightArray[j++];
}
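Putting the pieces together, a self-contained C++ version of the same scheme can be used to sort the sequence from the question; the expected result is {4, 5, 11, 15, 17, 23, 31, 68}.

```cpp
#include <vector>

// Recursive merge sort over a[beg..end], mirroring MERGE_SORT/MERGE above.
void mergeSort(std::vector<int>& a, int beg, int end) {
    if (beg >= end) return;
    int mid = (beg + end) / 2;
    mergeSort(a, beg, mid);
    mergeSort(a, mid + 1, end);
    // merge the two sorted halves a[beg..mid] and a[mid+1..end]
    std::vector<int> L(a.begin() + beg, a.begin() + mid + 1);
    std::vector<int> R(a.begin() + mid + 1, a.begin() + end + 1);
    int i = 0, j = 0, k = beg;
    while (i < (int)L.size() && j < (int)R.size())
        a[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < (int)L.size()) a[k++] = L[i++];
    while (j < (int)R.size()) a[k++] = R[j++];
}
```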

(b) What are the various differences between Binomial and Fibonacci Heaps? Explain.

Feature — Binomial Heap | Fibonacci Heap

Structure — Collection of binomial trees | Collection of min-heap-ordered trees
Insertion — O(log n) | O(1) amortized
Deletion (extract-min) — O(log n) | O(log n) amortized
Decrease-key — O(log n) | O(1) amortized
Merging — O(log n) | O(1) amortized
Space efficiency — Less overhead, due to the rigid binomial-tree structure | Requires more space for the extra pointers and mark bits
Key advantage — Good worst-case bounds and efficient merging | Extremely fast amortized insertion, merging and decrease-key operations
Applications — Priority queues, graph algorithms | Advanced algorithms (e.g. Dijkstra, Prim) that rely on fast decrease-key operations

(c) Prove that if the weights on the edges of a connected undirected graph are distinct, then there is a unique Minimum Spanning Tree. Give an example in this regard. Also discuss Kruskal's Minimum Spanning Tree algorithm in detail.
Proof that distinct edge weights give a unique MST:
1. Suppose, for contradiction, that the graph has two different minimum spanning trees A and B.
2. Let e1 be the minimum-weight edge that lies in exactly one of the two trees; say e1 is in A but not in B. (Since the weights are distinct, e1 is unique.)
3. Adding e1 to B creates a cycle, and this cycle must contain some edge e2 that is not in A (otherwise A would contain a cycle).
4. Both e1 and e2 lie in exactly one of the two trees, and e1 was chosen as the minimum such edge, so w(e1) < w(e2).
5. Replacing e2 by e1 in B yields a spanning tree of weight strictly less than that of B, contradicting the minimality of B. Hence the MST is unique.
MST using Kruskal’s algorithm
Below are the steps for finding MST using Kruskal’s algorithm:
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so far. If
the cycle is not formed, include this edge. Else, discard it.
3. Repeat step#2 until there are (V-1) edges in the spanning tree.
Input Graph:



The graph contains 9 vertices and 14 edges. So, the minimum spanning tree formed will be
having (9 – 1) = 8 edges.

Step 1: Pick edge 7-6. No cycle is formed, include it.

Step 2: Pick edge 8-2. No cycle is formed, include it.

Step 3: Pick edge 6-5. No cycle is formed, include it.

Step 4: Pick edge 0-1. No cycle is formed, include it.

Step 5: Pick edge 2-5. No cycle is formed, include it.


Step 6: Pick edge 8-6. Since including this edge results in the cycle, discard it. Pick edge 2-
3: No cycle is formed, include it.

Step 7: Pick edge 7-8. Since including this edge results in the cycle, discard it. Pick edge 0-
7. No cycle is formed, include it.

Step 8: Pick edge 1-2. Since including this edge results in the cycle, discard it. Pick edge 3-
4. No cycle is formed, include it.
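The steps above can be sketched with a disjoint-set union (union-find) structure for cycle detection. The edge weights below are assumed to be those of the standard 9-vertex, 14-edge example this walkthrough appears to follow; with them, Kruskal's algorithm picks exactly the eight edges listed above, for a total MST weight of 37.

```cpp
#include <algorithm>
#include <array>
#include <numeric>
#include <vector>

// Disjoint-set union with path compression, used for cycle detection.
struct DSU {
    std::vector<int> parent;
    DSU(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;   // this edge would close a cycle
        parent[a] = b;
        return true;
    }
};

// Kruskal: sort edges {weight, u, v} and greedily add every non-cycle edge.
int kruskalMSTWeight(int V, std::vector<std::array<int, 3>> edges) {
    std::sort(edges.begin(), edges.end());
    DSU dsu(V);
    int total = 0, used = 0;
    for (const auto& e : edges)
        if (dsu.unite(e[1], e[2])) {
            total += e[0];
            if (++used == V - 1) break;  // spanning tree complete
        }
    return total;
}
```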

(d) Discuss the LCS algorithm to compute the Longest Common Subsequence of two given strings, with time complexity analysis.
Given two strings, S1 and S2, the task is to find the length of the Longest Common Subsequence,
i.e. longest subsequence present in both of the strings.
A longest common subsequence (LCS) is defined as the longest subsequence which is common in
all given input sequences.
Examples:
Input: S1 = “AGGTAB”, S2 = “GXTXAYB”
Output: 4
Explanation: The longest subsequence which is present in both strings is “GTAB”.
Input: S1 = “BD”, S2 = “ABCD”
Output: 2
Explanation: The longest subsequence which is present in both strings is “BD”.
Generate all the possible subsequences and find the longest among them that is present in both
strings using recursion.
Follow the below steps to implement the idea:
 Create a recursive function [say lcs(m, n)] that works on the prefixes X[1..m] and Y[1..n].
 Compare the last characters of the two prefixes that have not yet been processed.
 If they match, add 1 and recurse on both prefixes shortened by one; otherwise take the maximum over dropping one character from either string.
 Return the length of the LCS received as the answer.

int lcs(string X, string Y, int m, int n)
{
    if (m == 0 || n == 0)
        return 0;
    if (X[m - 1] == Y[n - 1])
        return 1 + lcs(X, Y, m - 1, n - 1);
    else
        return max(lcs(X, Y, m, n - 1),
                   lcs(X, Y, m - 1, n));
}

Time Complexity: O(2^(m+n)) in the worst case, since each call can branch two ways and each branch shortens the combined input by one character.
Auxiliary Space: O(m + n) for the recursion stack.
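The exponential recursion is usually replaced by bottom-up dynamic programming, which runs in O(m·n) time and space. A C++ sketch (function name illustrative):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Bottom-up DP: dp[i][j] = length of the LCS of prefixes X[0..i) and Y[0..j).
int lcsLength(const std::string& X, const std::string& Y) {
    int m = X.size(), n = Y.size();
    std::vector<std::vector<int>> dp(m + 1, std::vector<int>(n + 1, 0));
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            dp[i][j] = (X[i - 1] == Y[j - 1])
                           ? dp[i - 1][j - 1] + 1        // characters match
                           : std::max(dp[i - 1][j], dp[i][j - 1]);
    return dp[m][n];
}
```

On the question's examples this gives 4 for "AGGTAB"/"GXTXAYB" and 2 for "BD"/"ABCD".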

(e) Explain and write the Naïve string matching algorithm. Suppose the given pattern P = aab and the given text T = a c a a b c. Apply the Naïve string matching algorithm on the above pattern (P) and text (T) to find the number of occurrences of P in T.

The naïve approach tests all the possible placements of the pattern P[1..m] relative to the text T[1..n]. We try the shifts s = 0, 1, ..., n-m successively, and for each shift s compare T[s+1..s+m] to P[1..m].

The naïve algorithm finds all valid shifts using a loop that checks the condition P[1..m] = T[s+1..s+m] for each of the n - m + 1 possible values of s.

NAIVE-STRING-MATCHER (T, P)
1. n ← length [T]
2. m ← length [P]
3. for s ← 0 to n - m
4.     do if P[1..m] = T[s + 1..s + m]
5.         then print "Pattern occurs with shift" s

Analysis: The for loop in lines 3 to 5 executes n - m + 1 times (we need at least m characters remaining at the end), and each iteration performs up to m character comparisons. So the total complexity is O((n - m + 1)m).

For T = acaabc and P = aab (n = 6, m = 3), the shifts s = 0, 1, 2, 3 are tried and the pattern matches only at shift s = 2, so P occurs exactly once in T.
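A direct C++ rendering of the matcher (0-based shifts, names illustrative); on the question's input T = acaabc, P = aab it reports the single occurrence at shift s = 2.

```cpp
#include <string>
#include <vector>

// Try every shift s = 0..n-m and compare the window T[s..s+m) with P.
std::vector<int> naiveMatch(const std::string& T, const std::string& P) {
    std::vector<int> shifts;
    int n = T.size(), m = P.size();
    for (int s = 0; s + m <= n; s++)
        if (T.compare(s, m, P) == 0)   // up to m character comparisons
            shifts.push_back(s);
    return shifts;
}
```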

SECTION C
2. Attempt any one part of the following: 10x1 = 10
(a) Examine the following recurrence relations:
(i) T(n) = T(n-1) + n^4
Expanding the recurrence gives T(n) = 1^4 + 2^4 + ... + n^4 = Θ(n^5).

(ii) T(n) = T(n/4) + T(n/2) + n^2
Guessing T(n) ≤ c·n^2 and substituting: c(n/4)^2 + c(n/2)^2 + n^2 = (5c/16 + 1)n^2 ≤ c·n^2 for any c ≥ 16/11. Since T(n) ≥ n^2 trivially, T(n) = Θ(n^2).

(b) Explain algorithm for counting sort. Illustrate the operation


of counting sort on the following array:
A={0,1,3,0,3,2,4,5,2,4,6,2,2,3}.

This sorting technique doesn't sort by comparing elements. It sorts by counting the objects having each distinct key value, somewhat like hashing, and then performs some arithmetic to calculate each object's index position in the output sequence. Counting sort is not used as a general-purpose sorting algorithm.

Counting sort is effective when the range of keys is not much greater than the number of objects to be sorted. With an index offset, it can also be adapted to sort negative input values.

1. countingSort(array, n) // 'n' is the size of array


2. max = find maximum element in the given array
3. create count array with size maximum + 1
4. Initialize count array with all 0's
5. for i = 0 to n
6. find the count of every unique element and
7. store that count at ith position in the count array
8. for j = 1 to max
9. Now, find the cumulative sum and store it in count array
10. for i = n to 1
11. Restore the array elements
12. Decrease the count of every restored element by 1
13. end countingSort
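A C++ sketch of the pseudocode above for non-negative keys (function name illustrative); on the array from the question it produces 0 0 1 2 2 2 2 3 3 3 4 4 5 6.

```cpp
#include <algorithm>
#include <vector>

// Stable counting sort for non-negative integer keys.
std::vector<int> countingSort(const std::vector<int>& a) {
    if (a.empty()) return {};
    int mx = *std::max_element(a.begin(), a.end());
    std::vector<int> count(mx + 1, 0);
    for (int x : a) count[x]++;                              // key frequencies
    for (int j = 1; j <= mx; j++) count[j] += count[j - 1];  // cumulative sums
    std::vector<int> out(a.size());
    for (int i = (int)a.size() - 1; i >= 0; i--)             // backwards keeps it stable
        out[--count[a[i]]] = a[i];
    return out;
}
```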



3. Attempt any one part of the following: 10 *1 = 10



Discuss the various cases for insertion of a key in a red-black tree for the given sequence of keys into an empty red-black tree: {15, 13, 12, 16, 19, 23, 5, 8}. Also show that a red-black tree with n internal nodes has height at most 2lg(n+1).
A Red-Black Tree is a self-balancing Binary Search Tree (BST) in which every node follows these rules:

Red Black Tree Insertion Rules

1- If the tree is empty, create the new node as the root node with color black.
2- If the tree is not empty, create the new node as a leaf node with color red.
3- If the parent of the new node is black, then exit.
4- If the parent of the new node is red, then check the color of the parent's sibling (the uncle) of the new node:
   a. If its color is black or null, then do a suitable rotation and recolor.
   b. If its color is red, then recolor; also, if the grandparent of the new node is not the root node, recolor it and recheck.
Insertion:

1. Insert the new node the way it is done in Binary Search Trees.
2. Color the node red.
3. If an inconsistency arises for the red-black tree, fix the tree according to the type of discrepancy.
4. If imbalance occurs in a red-black tree, two methods are used to remove it:

a) Recoloring
b) Rotation

Height bound: no red node has a red child, so on any root-to-leaf path at least half the nodes are black, giving black-height bh ≥ h/2. A subtree of black-height bh contains at least 2^bh - 1 internal nodes, so n ≥ 2^(h/2) - 1, which rearranges to h ≤ 2lg(n + 1).



(a) Explain and write an algorithm for union of two binomial heaps
and write its time complexity.

It is the most important operation performed on a binomial heap. Two binomial trees of the same degree are merged by comparing the keys at their roots: the root with the larger key becomes the child of the root with the smaller key. The time complexity of finding the union is O(log n).
The function to merge two trees is given as follows -

1. function merge(a, b)
2.     if a.root.key ≤ b.root.key
3.         return a.add(b)
4.     else
5.         return b.add(a)
To perform the union of two binomial heaps, we have to consider the below cases:

Case 1: If degree[x] is not equal to degree[next x], then move pointer ahead.

Case 2: if degree[x] = degree[next x] = degree[sibling(next x)] then,Move the pointer ahead.

Case 3: If degree[x] = degree[next x] but not equal to degree[sibling[next x]], and key[x] < key[next x], then remove [next x] from the root list and attach it to x.

Case 4: If degree[x] = degree[next x] but not equal to degree[sibling[next x]], and key[x] > key[next x], then remove x from the root list and attach it to [next x].

Example: Consider two binomial heaps -



We can see that there are two binomial heaps, so, first, we have to combine both heaps. To combine
the heaps, first, we need to arrange their binomial trees in increasing order.

In the above heap first, the pointer x points to the node 12 with degree B0, and the pointer next[x]
points the node 18 with degree B0. Node 7 with degree B1 is the sibling of 18, therefore, it is
represented as sibling[next[x]].

Now, first apply Case1 that says 'if degree[x] ≠ degree[next x] then move pointer ahead' but in the
above example, the degree[x] = degree[next[x]], so this case is not valid.

Now, apply Case2 that says 'if degree[x] = degree[next x] = degree[sibling(next x)] then Move
pointer ahead'. So, this case is also not applied in the above heap.

Now, apply Case3 that says ' If degree[x] = degree[next x] ≠ degree[sibling[next x]] and key[x] <
key[next x] then remove [next x] from root and attached to x'. We will apply this case because
the above heap follows the conditions of case 3 -



degree[x] = degree[next x] ≠ degree[sibling[next x]] {as B0 = B0 ≠ B1} and key[x] < key[next x] {as 12 < 18}.

So, remove the node 18 and attach it to 12 as shown below -

x = 12, next[x] = 7, sibling[next[x]] = 3, and degree[x] = B1, dgree[next[x]] = B1,


degree[sibling[next[x]]] = B1

Now we will reapply the cases in the above binomial heap. First, we will apply case 1. Since x is
pointing to node 12 and next[x] is pointing to node 7, the degree of x is equal to the degree of next x;
therefore, case 1 is not valid.

Here, case 2 is valid as the degree of x, next[x], and sibling[next[x]] is equal. So, according to the
case, we have to move the pointer ahead.

Therefore, x = 7, next[x] = 3, sibling[next[x]] = 15, and degree[x] = B1, dgree[next[x]] = B1,


degree[sibling[next[x]]] = B2

Now, let's try to apply case 3, here, first condition of case3 is satisfied as degree[x] = degree[next[x]]
≠ degree[sibling[next[x]]], but second condition (key[x] < key[next x]) of case 3 is not satisfied.

Now, let's try to apply case 4. So, first condition of case4 is satisfied and second condition (key[x] >
key[next x]) is also satisfied. Therefore, remove x from the root and attach it to [next[x]].

Now, the pointer x points to node 3, next[x] points to node 15, and sibling[next[x]] points to the node
6. Since, the degree of x is equal to the degree of next[x] but not equal to the degree[sibling[next[x]]],
and the key value of x is less than the key value of next[x], so we have to remove next[x] and attach
it to x as shown below -
Now, x points to node 3, and next[x] points to node 6. Since the degrees of x and next[x] are not equal, case 1 is valid; therefore, move the pointer ahead, so that x points to node 6. B4 is the last binomial tree in the heap, which leads to the termination of the loop. The above tree is the final tree after the union of the two binomial heaps.

Time Complexity:

A binomial heap with n nodes contains O(log n) binomial trees, so merging the two root lists and then linking the trees of equal degree takes O(log n) time in total. Hence the time complexity of the union operation is O(log n).

4. Attempt any one part of the following: 10*1 = 10


(a) Explain the greedy algorithm. Write its pseudo code to prove that the fractional Knapsack problem has the greedy-choice property.
A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment.
Fractional Knapsack Problem
Given the weights and profits of N items, in the form of {profit, weight} put these items in a
knapsack of capacity W to get the maximum total profit in the knapsack. In Fractional Knapsack,
we can break items for maximizing the total value of the knapsack.
Input: arr[] = {{60, 10}, {100, 20}, {120, 30}}, W = 50
Output: 240
Explanation: By taking items of weight 10 and 20 kg and 2/3 fraction of 30 kg.
Hence total price will be 60+100+(2/3)(120) = 240
Input: arr[] = {{500, 30}}, W = 10
Output: 166.667
Fractional Knapsack Problem using Greedy algorithm:
An efficient solution is to use the greedy approach. The basic idea is to calculate the ratio profit/weight for each item and sort the items in decreasing order of this ratio. Then take the item with the highest ratio and add as much of it as possible (the whole item, or a fraction of it).



This will always give the maximum profit because, in each step it adds an element such that this is
the maximum possible profit for that much weight.
Illustration:
Check the below illustration for a better understanding:
Consider the example: arr[] = {{100, 20}, {60, 10}, {120, 30}}, W = 50.
Sorting: Initially sort the array based on the profit/weight ratio. The sorted array will be {{60, 10},
{100, 20}, {120, 30}}.
Iteration:
 For i = 0, weight = 10 which is less than W. So add this element in the knapsack. profit
= 60 and remaining W = 50 – 10 = 40.
 For i = 1, weight = 20 which is less than W. So add this element too. profit = 60 + 100 =
160 and remaining W = 40 – 20 = 20.
 For i = 2, weight = 30 is greater than W. So add 20/30 fraction = 2/3 fraction of the
element. Therefore profit = 2/3 * 120 + 160 = 80 + 160 = 240 and
remaining W becomes 0.
So the final profit becomes 240 for W = 50.
Follow the given steps to solve the problem using the above approach:
 Calculate the ratio (profit/weight) for each item.
 Sort all the items in decreasing order of the ratio.
 Initialize res = 0, curr_cap = given_cap.
 Do the following for every item i in the sorted order:
 If the weight of the current item is less than or equal to the remaining
capacity then add the value of that item into the result
 Else add the current item as much as we can and break out of the loop.
 Return res.
Time Complexity: O(N * logN)
Auxiliary Space: O(N)
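The steps above can be sketched in C++ as follows; items are {profit, weight} pairs and the function name is illustrative. On the first example it returns 240.

```cpp
#include <algorithm>
#include <vector>

// Greedy fractional knapsack: items are {profit, weight} pairs.
double fractionalKnapsack(std::vector<std::pair<double, double>> items, double W) {
    // sort by profit/weight ratio, highest ratio first
    std::sort(items.begin(), items.end(),
              [](const std::pair<double, double>& a, const std::pair<double, double>& b) {
                  return a.first / a.second > b.first / b.second;
              });
    double profit = 0.0;
    for (const auto& it : items) {
        if (W <= 0) break;
        double take = std::min(it.second, W);  // whole item, or the fitting fraction
        profit += it.first * take / it.second;
        W -= take;
    }
    return profit;
}
```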

(b) What are single source shortest paths? Write down Dijkstra’s algorithm for it.
The Single-Source Shortest Path (SSSP) problem consists of finding the shortest paths between a
given vertex v and all other vertices in the graph. Algorithms such as Breadth-First-Search (BFS) for
unweighted graphs or Dijkstra [1] solve this problem.
Dijkstra's Algorithm is a graph algorithm that finds the shortest path from a source vertex to all other vertices in the graph (single source shortest path). It is a type of greedy algorithm that only works on weighted graphs with non-negative weights. The time complexity of Dijkstra's Algorithm is O(V^2) with the adjacency matrix representation of the graph. This can be reduced to O((V + E) log V) with an adjacency list representation and a binary heap, where V is the number of vertices and E is the number of edges in the graph.
Dijkstra's Algorithm with an Example

The following is the step that we will follow to implement Dijkstra's Algorithm:

Step 1: First, we will mark the source node with a current distance of 0 and set the rest of the nodes
to INFINITY.



Step 2: We will then set the unvisited node with the smallest current distance as the current node,
suppose X.

Step 3: For each neighbor N of the current node X: We will then add the current distance of X with
the weight of the edge joining X-N. If it is smaller than the current distance of N, set it as the new
current distance of N.

Step 4: We will then mark the current node X as visited.

Step 5: We will repeat the process from 'Step 2' if there is any node unvisited left in the graph.

Let us now understand the implementation of the algorithm with the help of an example:

Figure 6: The Given Graph

1. We will use the above graph as the input, with node A as the source.
2. First, we will mark all the nodes as unvisited.
3. We will set the path to 0 at node A and INFINITY for all the other nodes.
4. We will now mark source node A as visited and access its neighboring nodes.
Note: We have only accessed the neighboring nodes, not visited them.
5. We will now update the path to node B by 4 with the help of relaxation because the path to
node A is 0 and the path from node A to B is 4, and the minimum((0 + 4), INFINITY) is 4.
6. We will also update the path to node C by 5 with the help of relaxation because the path to
node A is 0 and the path from node A to C is 5, and the minimum((0 + 5), INFINITY) is 5.
Both the neighbors of node A are now relaxed; therefore, we can move ahead.
7. We will now select the next unvisited node with the least path and visit it. Hence, we will
visit node B and perform relaxation on its unvisited neighbors. After performing relaxation,
the path to node C will remain 5, whereas the path to node E will become 11, and the path to
node D will become 13.
8. We will now visit node E and perform relaxation on its neighboring nodes B, D, and F. Since
only node F is unvisited, it will be relaxed. Thus, the path to node B will remain as it is, i.e., 4,
the path to node D will also remain 13, and the path to node F will become 14 (8 + 6).
9. Now we will visit node D, and only node F will be relaxed. However, the path to node F will
remain unchanged, i.e., 14.
10. Since only node F is remaining, we will visit it but not perform any relaxation as all its
neighboring nodes are already visited.
11. Once all the nodes of the graphs are visited, the program will end.

The final paths we conclude are:

1. A=0
2. B = 4 (A -> B)
3. C = 5 (A -> C)
4. D = 4 + 9 = 13 (A -> B -> D)
5. E = 5 + 3 = 8 (A -> C -> E)
6. F = 5 + 3 + 6 = 14 (A -> C -> E -> F)

Pseudocode:

1. function Dijkstra_Algorithm(Graph, source_node)
2.     // set the distance of every node to INFINITY
3.     for each node N in Graph:
4.         distance[N] = INFINITY
5.         previous[N] = NULL
6.         if N != source_node, add N to Priority Queue G
7.     // the source node starts at distance 0
8.     distance[source_node] = 0
9.
10.    // iterate until the Priority Queue G is empty
11.    while G is NOT empty:
12.        // select the node Q with the least distance and mark it visited
13.        Q = node in G with the least distance[]
14.        mark Q visited
15.
16.        // relax each unvisited neighboring node N of Q
17.        for each unvisited neighbor N of Q:
18.            temporary_distance = distance[Q] + distance_between(Q, N)
19.            // if the temporary distance is smaller, keep it as the new distance
20.            if temporary_distance < distance[N]:
21.                distance[N] := temporary_distance
22.                previous[N] := Q
23.
24.    // returning the final lists of distances and predecessors
25.    return distance[], previous[]
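A concrete C++ version using std::priority_queue; the graph below encodes the worked example with A..F mapped to 0..5. The edge weights B-E = 7, E-D = 5 and D-F = 2 are assumptions inferred from the walkthrough, which states only the final distances.

```cpp
#include <limits>
#include <queue>
#include <vector>

// Dijkstra with a min-priority queue; adj[u] holds {neighbour, weight} pairs.
std::vector<int> dijkstra(int src,
                          const std::vector<std::vector<std::pair<int, int>>>& adj) {
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> dist(adj.size(), INF);
    using State = std::pair<int, int>;            // {distance, node}
    std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;                // stale queue entry, skip it
        for (auto [v, w] : adj[u])
            if (d + w < dist[v]) {                // relaxation step
                dist[v] = d + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}
```

With the assumed weights, the computed distances match the walkthrough: A=0, B=4, C=5, D=13, E=8, F=14.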

5. Attempt any one part of the following: 10*1 = 10


(a) What is the sum of subsets problem? Let w = {5, 7, 10, 12, 15, 18, 20}
and m = 35. Find all possible subsets of w that sum to m using a
recursive backtracking algorithm. Draw the portion of the
state-space tree that is generated.
The subset sum problem is to find the subsets of elements, selected from a given set, whose
sum adds up to a given number m. We assume the set contains only non-negative values and
that the input set is unique (no duplicates are present).
Examples:
Input: set[] = {1, 2, 1}, sum = 3
Output: [1, 2], [2, 1]
Explanation: Taking either copy of 1 together with 2 gives a subset with sum 3.
Input: set[] = {3, 34, 4, 12, 5, 2}, sum = 30
Output: []
Explanation: There is no subset that adds up to 30.
Algorithm:
Let S be a set of elements and m the expected sum of subsets. Then:

1. Start with an empty subset.
2. Add the next element from the list to the subset.
3. If the subset now has sum m, stop and report that subset as a solution.
4. If the subset is infeasible (its sum exceeds m) or we have reached the end of the set, backtrack through the subset until we find the most suitable value to change.
5. If the subset is feasible, repeat step 2.
6. If we have considered all the elements without finding a suitable subset and no backtracking is possible, stop without a solution.
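The steps above can be sketched as a recursive include/exclude search. Each call either takes or skips the next element, and a branch is pruned as soon as the running sum exceeds m (valid because all weights are non-negative):

```python
def subset_sum(w, m):
    """Recursive backtracking: at index i, either include w[i] or skip it."""
    solutions = []

    def backtrack(i, chosen, total):
        if total == m:                 # feasible subset found; record it
            solutions.append(list(chosen))
            return
        if i == len(w) or total > m:   # dead end: out of elements or sum overshot
            return
        chosen.append(w[i])            # branch 1: include w[i]
        backtrack(i + 1, chosen, total + w[i])
        chosen.pop()                   # branch 2: exclude w[i]
        backtrack(i + 1, chosen, total)

    backtrack(0, [], 0)
    return solutions

solutions = subset_sum([5, 7, 10, 12, 15, 18, 20], 35)
```

For w = {5, 7, 10, 12, 15, 18, 20} and m = 35 this finds the four subsets {5, 10, 20}, {5, 12, 18}, {7, 10, 18}, and {15, 20}.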

Example: (the state-space tree for w = {5, 7, 10, 12, 15, 18, 20} and m = 35 is not reproduced here; the subsets it yields are {5, 10, 20}, {5, 12, 18}, {7, 10, 18}, and {15, 20})

(b) Illustrate the n-queens problem. Examine the 4-queens problem using the backtracking method.
N-Queens Problem

The n-queens problem is to place n queens on an n x n chessboard in such a manner that no two queens attack each other by being in the same row, column, or diagonal.

It can be seen that for n = 1 the problem has a trivial solution, and no solution exists for n = 2 or
n = 3. So we first consider the 4-queens problem and then generalize it to the n-queens problem.

Given a 4 x 4 chessboard, number the rows and columns of the chessboard 1 through 4.

We have to place 4 queens q1, q2, q3, and q4 on the chessboard such that no two queens attack
each other. Under this condition, each queen must be placed on a different row, i.e., we put
queen qi on row i.

Now, we place queen q1 in the very first acceptable position (1, 1). Next, we put queen q2 so that the
two queens do not attack each other. We find that if we place q2 in column 1 or 2, a dead end is
encountered. Thus the first acceptable position for q2 is column 3, i.e. (2, 3), but then no
position is left for placing queen q3 safely. So we backtrack one step and place queen q2 in (2, 4),
the next best possible position. We then obtain the position (3, 2) for placing q3, but this
position also leads to a dead end, and no place is found where q4 can be placed safely. We then
have to backtrack all the way to q1, move it to (1, 2), after which all the other queens can be placed
safely: q2 at (2, 4), q3 at (3, 1), and q4 at (4, 3). That is, we get the solution (2, 4, 1, 3). This is one
possible solution for the 4-queens problem. For another possible solution, the whole method is
repeated for the remaining partial solutions. The other solution of the 4-queens problem is (3, 1, 4, 2).



The implicit tree for 4 - queen problem for a solution (2, 4, 1, 3) is as follows:



The figure shows the complete state space for the 4-queens problem. Using the backtracking
method, we generate only the necessary nodes, stopping the expansion of a node as soon as it
violates the rule, i.e., as soon as two queens attack each other.




4 - Queens solution space with nodes numbered in DFS


It can be seen that all the solutions to the 4 queens problem can be represented as 4 - tuples (x1, x2, x3,
x4) where xi represents the column on which queen "qi" is placed.
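The backtracking procedure described above can be sketched in Python. Each solution is returned as the tuple (x1, ..., xn) of column positions, the same notation used above:

```python
def solve_n_queens(n):
    """Backtracking: place one queen per row; cols[r] holds the column of row r's queen."""
    solutions = []
    cols = []  # partial solution: columns (1-indexed) of queens placed so far

    def safe(row, col):
        # A position is safe if no earlier queen shares its column or a diagonal.
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def place(row):
        if row == n:                        # all n queens placed: record the tuple
            solutions.append(tuple(cols))
            return
        for col in range(1, n + 1):         # try each column in the current row
            if safe(row, col):
                cols.append(col)
                place(row + 1)
                cols.pop()                  # backtrack and try the next column

    place(0)
    return solutions

four = solve_n_queens(4)
```

For n = 4 this yields exactly the two solutions discussed above, (2, 4, 1, 3) and (3, 1, 4, 2), in the order the backtracking search discovers them.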

6. Attempt any one part of the following: 10*1 = 10


(a) What is string matching algorithm? Explain Rabin-Karp method with
examples.

A string matching algorithm is also called a "string searching algorithm." This is a vital class
of string algorithms: a method to find the place(s) where one or more strings (patterns) occur
within a larger string (text).

Given a text array T[1...n] of n characters and a pattern array P[1...m] of m characters, the
problem is to find an integer s, called a valid shift, where 0 ≤ s ≤ n − m and
T[s+1...s+m] = P[1...m]. In other words, we determine whether P occurs in T, i.e., whether P
is a substring of T. The characters of P and T are drawn from some finite alphabet such as
{0, 1} or {A, B, ..., Z, a, b, ..., z}.

Algorithms used for String Matching:

Several different methods are used to find a pattern within a string:

1. The Naive String Matching Algorithm


2. The Rabin-Karp-Algorithm
3. Finite Automata
4. The Knuth-Morris-Pratt Algorithm
5. The Boyer-Moore Algorithm

The Rabin-Karp-Algorithm

The Rabin-Karp string matching algorithm calculates a hash value for the pattern, as well as for each
m-character subsequence of the text to be compared. If the hash values are unequal, the algorithm
computes the hash value for the next m-character sequence. If the hash values are equal, the algorithm
compares the pattern and the m-character sequence character by character. In this way, there is only
one hash comparison per text subsequence, and full character matching is required only when the
hash values match.

RABIN-KARP-MATCHER (T, P, d, q)
1. n ← length[T]
2. m ← length[P]
3. h ← d^(m-1) mod q
4. p ← 0
5. t0 ← 0
6. for i ← 1 to m
7.     do p ← (d·p + P[i]) mod q
8.        t0 ← (d·t0 + T[i]) mod q
9. for s ← 0 to n − m
10.    do if p = t_s
11.       then if P[1...m] = T[s+1...s+m]
12.            then print "Pattern occurs with shift" s
13.       if s < n − m
14.          then t_(s+1) ← (d·(t_s − T[s+1]·h) + T[s+m+1]) mod q

Example: For string matching, working modulo q = 11, how many spurious hits does the Rabin-Karp
matcher encounter in the text T = 31415926535?

1. T = 31415926535
2. P = 26
3. Here length[T] = 11 and q = 11
4. p = P mod q = 26 mod 11 = 4
5. Now find every window of T whose hash mod q also equals 4.

Solution:

The hash of each 2-digit window of T, taken mod 11, is:
31 → 9, 14 → 3, 41 → 8, 15 → 4, 59 → 4, 92 → 4, 26 → 4, 65 → 10, 53 → 9, 35 → 2
Four windows hash to 4. Of these, only the window 26 (shift s = 6) actually matches P, so the
matcher encounters 3 spurious hits (at 15, 59, and 92) and 1 valid match.

Complexity:

The running time of RABIN-KARP-MATCHER in the worst case is O((n − m + 1)·m), but it
has a good average-case running time. If the expected number of valid shifts is small (O(1)) and the
prime q is chosen to be quite large, then the Rabin-Karp algorithm can be expected to run in time
O(n + m) plus the time required to process spurious hits.
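The matcher translates directly to Python for decimal strings (d = 10); the rehashing of line 14 in the pseudocode becomes the rolling-hash update below:

```python
def rabin_karp(text, pattern, d=10, q=11):
    """Return (valid_shifts, spurious_hits) for strings of decimal digits."""
    n, m = len(text), len(pattern)
    h = pow(d, m - 1, q)                 # h = d^(m-1) mod q
    p = t = 0
    for i in range(m):                   # hash the pattern and the first window
        p = (d * p + int(pattern[i])) % q
        t = (d * t + int(text[i])) % q
    shifts, spurious = [], 0
    for s in range(n - m + 1):
        if p == t:                       # hash hit: verify character by character
            if text[s:s + m] == pattern:
                shifts.append(s)
            else:
                spurious += 1            # equal hashes but unequal strings
        if s < n - m:                    # roll the hash to the next window
            t = (d * (t - int(text[s]) * h) + int(text[s + m])) % q
    return shifts, spurious

shifts, spurious = rabin_karp("31415926535", "26")
```

On the example above this reports the single valid shift s = 6 and the 3 spurious hits at the windows 15, 59, and 92.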

(b) Explain approximation algorithms. Explore the set cover problem using an approximation algorithm.
An approximation algorithm is a way of dealing with NP-completeness for an optimization
problem. This technique does not guarantee the best solution. The goal of an approximation
algorithm is to come as close as possible to the optimal solution in polynomial time. Such
algorithms are called approximation algorithms or heuristic algorithms.
Features of Approximation Algorithms:
Here, we will discuss the features of approximation algorithms as follows.
 An approximation algorithm is guaranteed to run in polynomial time, though it does not
guarantee the optimal solution.
 An approximation algorithm comes with a provable bound on solution quality, i.e., on how
far its answer can be from the optimum.
 Approximation algorithms are used to get an answer near the (optimal) solution of an
optimization problem in polynomial time.
Set Cover Algorithm
The set cover problem takes a collection of sets as input and returns the minimum-cost collection of
sets required to include all the universal elements.
The set cover problem is NP-hard; the greedy algorithm below is an approximation algorithm with
ratio H(n) ≈ ln n, where n is the number of universal elements.
Algorithm
Step 1 − Initialize Output = {}, where Output represents the output set of elements.
Step 2 − While the Output set does not include all the elements in the universal set, do the following:

 Find the cost-effectiveness of every subset using the formula Cost(Si) / |Si − Output|,
i.e., the cost of the set divided by the number of elements it would newly cover.
 Find the subset with minimum cost-effectiveness in each iteration performed, and add that
subset to the Output set.
Step 3 − Repeat Step 2 until there are no elements left in the universe. The Output achieved is the
final Output set.
Pseudocode
APPROX-GREEDY-SET_COVER(X, S)
U = X
OUTPUT = ф
while U ≠ ф
    select Si Є S which has maximum |Si ∩ U|
    U = U − Si
    OUTPUT = OUTPUT ∪ {Si}
return OUTPUT
(This unweighted variant picks the set covering the most uncovered elements; when sets have
costs, we instead select the Si minimizing cost(Si) / |Si ∩ U|, as in the example below.)



Analysis
Assuming the overall number of elements equals the overall number of sets (|X| = |S|), the code runs
in time O(|X|³).
Example

Let us look at an example that describes the approximation algorithm for the set covering problem in
more detail
S1 = {1, 2, 3, 4} cost(S1) = 5
S2 = {2, 4, 5, 8, 10} cost(S2) = 10
S3 = {1, 3, 5, 7, 9, 11, 13} cost(S3) = 20
S4 = {4, 8, 12, 16, 20} cost(S4) = 12
S5 = {5, 6, 7, 8, 9} cost(S5) = 15
Step 1
The output set, Output = ф. The universal set is the union of all the subsets:
X = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 16, 20}
Find the cost-effectiveness of each set, cost(Si) / |Si − Output|:
S1 = 5 / |{1, 2, 3, 4}| = 5 / 4 = 1.25
S2 = 10 / |{2, 4, 5, 8, 10}| = 10 / 5 = 2
S3 = 20 / |{1, 3, 5, 7, 9, 11, 13}| = 20 / 7 ≈ 2.86
S4 = 12 / |{4, 8, 12, 16, 20}| = 12 / 5 = 2.4
S5 = 15 / |{5, 6, 7, 8, 9}| = 15 / 5 = 3
The minimum cost-effectiveness in this iteration is achieved at S1; therefore, Output = {S1},
covering the elements {1, 2, 3, 4}.
Step 2
Find the cost-effectiveness of each set over the elements it would newly cover:
S2 = 10 / |{5, 8, 10}| = 10 / 3 ≈ 3.33
S3 = 20 / |{5, 7, 9, 11, 13}| = 20 / 5 = 4
S4 = 12 / |{8, 12, 16, 20}| = 12 / 4 = 3
S5 = 15 / |{5, 6, 7, 8, 9}| = 15 / 5 = 3
S4 and S5 tie at 3; choosing S4 gives Output = {S1, S4}, covering {1, 2, 3, 4, 8, 12, 16, 20}.
Step 3
S2 = 10 / |{5, 10}| = 10 / 2 = 5
S3 = 20 / |{5, 7, 9, 11, 13}| = 20 / 5 = 4
S5 = 15 / |{5, 6, 7, 9}| = 15 / 4 = 3.75
The minimum is achieved at S5; therefore, Output = {S1, S4, S5}, covering
{1, 2, 3, 4, 5, 6, 7, 8, 9, 12, 16, 20}.
Step 4
S2 = 10 / |{10}| = 10
S3 = 20 / |{11, 13}| = 20 / 2 = 10
S2 and S3 tie at 10; choosing S2 gives Output = {S1, S4, S5, S2}, covering everything except {11, 13}.
Step 5
S3 = 20 / |{11, 13}| = 20 / 2 = 10
Adding S3 gives Output = {S1, S4, S5, S2, S3}, which covers the entire universal set.
The final output that covers all the elements of the universal set is Output = {S1, S4, S5, S2, S3},
with total cost 5 + 12 + 15 + 10 + 20 = 62. Note that the cover {S2, S3, S4, S5} costs only 57, so the
greedy answer is not optimal, but it lies within the H(n) ≈ ln n approximation guarantee.
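A minimal Python sketch of the weighted greedy rule (cost(Si) / |Si − Output|), using the sets from the example; breaking ties by dictionary order is an implementation choice, not part of the algorithm:

```python
def greedy_set_cover(universe, subsets, costs):
    """Weighted greedy: repeatedly pick the set minimizing cost / newly-covered count."""
    covered, chosen = set(), []
    while covered != universe:
        best, best_ratio = None, float('inf')
        for name, s in subsets.items():
            newly = s - covered                      # elements this set would add
            if newly and costs[name] / len(newly) < best_ratio:
                best, best_ratio = name, costs[name] / len(newly)
        if best is None:                             # universe is not coverable
            break
        chosen.append(best)
        covered |= subsets[best]
    return chosen

subsets = {'S1': {1, 2, 3, 4}, 'S2': {2, 4, 5, 8, 10},
           'S3': {1, 3, 5, 7, 9, 11, 13}, 'S4': {4, 8, 12, 16, 20},
           'S5': {5, 6, 7, 8, 9}}
costs = {'S1': 5, 'S2': 10, 'S3': 20, 'S4': 12, 'S5': 15}
universe = set().union(*subsets.values())

cover = greedy_set_cover(universe, subsets, costs)
```

On this instance the greedy cover uses all five sets (total cost 62), even though {S2, S3, S4, S5} alone covers the universe at cost 57, illustrating the approximation gap.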
