MCA Assignment
ALGORITHMS
1. Write the Insertion Sort algorithm. Determine its complexity in the best,
average and worst case. Sort the following sequence in increasing
order using Insertion Sort: 35, 37, 18, 15, 40, 12.
Ans 1:
#include <stdio.h>
int main()
{
    int n, array[1000], c, d, t;
    scanf("%d", &n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);
    for (c = 1; c < n; c++) {        /* grow a sorted prefix one element at a time */
        t = array[c];
        d = c - 1;
        while (d >= 0 && array[d] > t) {
            array[d + 1] = array[d]; /* shift larger elements right */
            d--;
        }
        array[d + 1] = t;            /* insert t into its final slot */
    }
    for (c = 0; c < n; c++)
        printf("%d ", array[c]);
    return 0;
}
Complexity: best case O(n), when the input is already sorted and the inner while loop never
iterates; average and worst case O(n^2), since in the worst case (a reverse-sorted input)
every element is shifted past all the elements before it.
Sorting 35, 37, 18, 15, 40, 12 pass by pass:
Insert 37: 35, 37, 18, 15, 40, 12
Insert 18: 18, 35, 37, 15, 40, 12
Insert 15: 15, 18, 35, 37, 40, 12
Insert 40: 15, 18, 35, 37, 40, 12
Insert 12: 12, 15, 18, 35, 37, 40
2.
Give a divide and conquer based algorithm (write a pseudo-code) to perform the following:
(i) Find the smallest element in an array of size n. Derive the
running time complexity of your algorithm.
(ii) Find the position of an element in an array of n numbers.
Estimate the number of key comparisons made by your algorithm.
ANS 2)
(i) Pseudo code:
minIndex = 1
min = A[1]
i = 2
while (i <= n)
    if A[i] < min
        min = A[i]
        minIndex = i
    i += 1
The loop examines each of the remaining n - 1 elements once, making one key comparison
per element, so the running time is O(n).
#include <stdio.h>
int main() {
    int a[30], i, num, smallest;
    printf("Enter no of elements : ");
    scanf("%d", &num);
    for (i = 0; i < num; i++) scanf("%d", &a[i]);
    smallest = a[0];
    for (i = 1; i < num; i++)
        if (a[i] < smallest) smallest = a[i];   /* track the minimum seen so far */
    printf("Smallest Element : %d\n", smallest);
    return 0;
}
OUTPUT:
Enter no of elements : 5
11 44 22 55 99
Smallest Element : 11
(ii) Since the array is sorted, we first locate a range [l, h] that must contain the key by
repeatedly doubling the high index, and then run a binary search inside that range:

#include <iostream>
using namespace std;

// Standard binary search on arr[l..h]
int binarySearch(int arr[], int l, int h, int key)
{
    while (l <= h) {
        int mid = l + (h - l) / 2;
        if (arr[mid] == key) return mid;
        if (arr[mid] < key) l = mid + 1;
        else h = mid - 1;
    }
    return -1;
}

int findPos(int arr[], int key)
{
    int l = 0, h = 1;
    int val = arr[0];
    // Find h to do binary search
    while (val < key)
    {
        l = h;        // store previous high
        h = 2*h;      // double high index
        val = arr[h]; // update new val
    }
    return binarySearch(arr, l, h, key);
}

// Driver program
int main()
{
    int arr[] = {3, 5, 7, 9, 10, 90, 100, 130,
                 140, 160, 170};
    int ans = findPos(arr, 10);
    if (ans == -1)
        cout << "Element not found";
    else
        cout << "Element found at index " << ans;
    return 0;
}

Note that findPos can read past the end of the array if the key is larger than every element;
a more careful version would also pass the array size and clamp h.
Output:
Element found at index 4
If the element is at position p, the doubling loop performs O(log p) key comparisons and the
binary search over the resulting range performs O(log p) more, so the total number of key
comparisons is O(log p).
Consider first the naive recursive implementation of the Fibonacci function:
int fibo(int n)
{
    if (n <= 2)
        return 1;
    else
        return fibo(n-2) + fibo(n-1);
}
The complexity of the above program is exponential (O(2^n)), since each call spawns two
further recursive calls. Now let us solve this problem dynamically, with memoization.
#include <string.h>

int memory[500];

int fibo(int n) {
    if (n <= 2)
        return 1;
    if (memory[n] != -1)   /* already computed: return the saved result */
        return memory[n];
    int s = fibo(n-1) + fibo(n-2);
    memory[n] = s;         /* save the result before returning */
    return s;
}

Before the first call, initialize the table from inside a function with
memset(memory, -1, sizeof(memory)); using sizeof(memory) fills the whole array,
whereas a length of 500 would only cover the first 500 bytes (125 ints).
We have n possible inputs to the function: 1, 2, ..., n. Each input will either:
1. be computed, and the result saved
2. be returned from the memory
Each input will be computed at most once. Time complexity is O(n × k), where k is the time
complexity of computing an input if we assume that the recursive calls are returned directly
from memory (O(1)).
Since we are only doing a constant amount of work to compute the answer to an input, k = O(1).
Total time complexity is O(n).
Algorithm To Determine Optimal Parenthesization of a Product of n Matrices
1) Input the n matrices.
2) Separate the sequence into two sub-sequences.
3) Find the minimum cost of multiplying each sub-sequence.
4) Calculate the sum of these costs, and add in the cost of multiplying the two resulting matrices.
5) Repeat the above steps for every possible position at which the sequence of matrices can
be split, and take the minimum cost.
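In symbols, these steps are the standard matrix-chain recurrence. Writing m[i][j] for the
minimum number of scalar multiplications needed to compute the product Ai...Aj, where
matrix Ai has dimensions p[i-1] × p[i]:

m[i][i] = 0
m[i][j] = min over i <= k < j of ( m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j] )

The value of k achieving the minimum is the position of the outermost split; the
MatrixChainOrder code later in this document fills exactly this table.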
Optimality Principle
The influence of Richard Bellman is seen in algorithms throughout the computer science
literature. Bellman’s principle of optimality has been used to develop highly efficient
dynamic programming solutions to many important and difficult problems. The paradigm is
now well entrenched as one of the most successful algorithm design tools employed by
computer scientists.
The optimality principle was given a broad and general statement by Bellman [23], making it
applicable to problems of diverse types. Since computer programs are often employed to
implement solutions based on the principle of optimality, Bellman’s impact on computing in
general has been immense. In this paper we wish to focus in particular on the influence of
Bellman’s work on the area of computer science known as algorithm design and analysis. A
primary goal of algorithm design and analysis is to discover theoretical properties of classes
of algorithms (e.g., how efficient they are, when they are applicable) and thus learn how to
better apply the algorithms to new problems. From the perspective of algorithm design and
analysis, combinatorial optimization problems form the class of problems on which the
principle of optimality has had its greatest impact. Problem decomposition is a basic
technique for attacking problems of this type-the solution to a large problem is obtained by
combining solutions to smaller subproblems. The trick of this approach, of course, is to
define an efficient decomposition procedure which assures that combining optimal solutions
to subproblems will result in an optimal solution to the larger problem. As a standard course
of action, computer scientists attempt to define a decomposition based on Bellman’s
principle of optimality. Problem decompositions based on the principle of optimality not only
are at the heart of dynamic programming algorithms, but are also integral parts of the
strategies of other important classes of algorithms, such as branch and bound.
Matrix Multiplication
A1*A2 => 30*35 => 1050
A2*A3 => 35*15 => 525
A3*A1 => 15*5  => 75

        A2      A3      A1
A1    (1050)
A2            (525)
A3                    (75)
5. Differentiate Between
(i) Greedy technique and Dynamic programming technique
(ii) NP-Complete & NP Hard Problems
(iii) Decidable & Un-decidable problems
(iv) Context free & Context sensitive Language
(v) Strassen’s Algorithm & Chain Matrix Multiplication algorithm
Ans 5 i)
Greedy technique and Dynamic programming technique

Greedy technique: A greedy algorithm makes a local choice of the subproblem that will lead
to an optimal answer; it finds an optimal solution at each and every stage with the hope of
finding the global optimum at the end.

Dynamic programming technique: Dynamic programming solves all dependent subproblems
and then selects the one that leads to an optimal solution. A dynamic programming algorithm
is applicable to problems that exhibit the overlapping subproblems and optimal substructure
properties.
Ans 5 ii)
NP-complete: A decision problem X is NP-complete if it is in NP and every problem Y in NP
is reducible to X in polynomial time. Intuitively this means that we can solve Y quickly if we
know how to solve X quickly.
Precisely, Y is reducible to X if there is a polynomial time algorithm f to transform
instances y of Y to instances x = f(y) of X in polynomial time, with the property that the
answer to y is yes if and only if the answer to f(y) is yes.
Example
3-SAT. This is the problem wherein we are given a conjunction (ANDs) of 3-clause
disjunctions (ORs), statements of the form (x_v11 OR x_v21 OR x_v31) AND
(x_v12 OR x_v22 OR x_v32) AND
... AND
(x_v1n OR x_v2n OR x_v3n)
where each x_vij is a boolean variable or the negation of a variable from a finite predefined
list (x_1, x_2, ... x_n).
It can be shown that every NP problem can be reduced to 3-SAT. The proof of this is
technical and requires use of the technical definition of NP (based on non-deterministic
Turing machines). This is known as Cook's theorem.
What makes NP-complete problems important is that if a deterministic polynomial time
algorithm can be found to solve one of them, every NP problem is solvable in
polynomial time (one problem to rule them all).
NP-hard
Intuitively, these are the problems that are at least as hard as the NP-complete
problems. Note that NP-hard problems do not have to be in NP, and they do not have
to be decision problems.
The precise definition here is that a problem X is NP-hard if there is an NP-complete
problem Y such that Y is reducible to X in polynomial time.
But since any NP-complete problem can be reduced to any other NP-complete problem
in polynomial time, all NP-complete problems can be reduced to any NP-hard problem
in polynomial time. Then, if there is a solution to one NP-hard problem in polynomial
time, there is a solution to all NP problems in polynomial time.
Example
The halting problem is an NP-hard problem. This is the problem that given a program
P and input I, will it halt? This is a decision problem but it is not in NP. It is clear that
any NP-complete problem can be reduced to this one. As another example, any
NP-complete problem is NP-hard.
#include <stdio.h>
#include <limits.h>

// Matrix Ai has dimensions p[i-1] x p[i] for i = 1..n-1.
// Returns the minimum number of scalar multiplications
// needed to compute the product A1*A2*...*A(n-1).
int MatrixChainOrder(int p[], int n)
{
    // m[i][j] = minimum cost of computing Ai..Aj
    int m[n][n];
    int i, j, k, L, q;

    for (i = 1; i < n; i++)
        m[i][i] = 0;   // a single matrix needs no multiplication

    // L is chain length.
    for (L=2; L<n; L++)
    {
        for (i=1; i<n-L+1; i++)
        {
            j = i+L-1;
            m[i][j] = INT_MAX;
            for (k=i; k<=j-1; k++)
            {
                // q = cost/scalar multiplications
                q = m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j];
                if (q < m[i][j])
                    m[i][j] = q;
            }
        }
    }
    return m[1][n-1];
}

int main()
{
    int arr[] = {1, 2, 3, 4};
    int size = sizeof(arr)/sizeof(arr[0]);
    printf("Minimum number of multiplications is %d\n",
           MatrixChainOrder(arr, size));   // prints 18 for this chain
    getchar();
    return 0;
}
Big-O is a measure of the longest amount of time it could possibly take for the algorithm to
complete, i.e. an upper bound.
If f(n) ≤ c·g(n) for all n ≥ n0 and some constant c > 0, where f(n) and g(n) are non-negative
functions, then g(n) is an upper bound of f(n) and f(n) is Big O of g(n). This is denoted
as "f(n) = O(g(n))".
Big Omega describes the best that can happen for a given data size, i.e. a lower bound.
If f(n) ≥ c·g(n) for all n ≥ n0 and some constant c > 0, then g(n) is a lower bound function,
denoted "f(n) = Ω(g(n))".
Theta is basically saying that the function, f(n) is bounded both from the top and bottom by
the same function, g(n).
f(n) is theta of g(n) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n))
This is denoted as "f(n) = Θ(g(n))"
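For example, f(n) = 3n^2 + 5n satisfies 3n^2 ≤ f(n) ≤ 4n^2 for all n ≥ 5, so f(n) = O(n^2)
(take c = 4), f(n) = Ω(n^2) (take c = 3), and therefore f(n) = Θ(n^2).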
Some Divide and Conquer algorithms are Merge Sort, Quick Sort, Binary Search, etc.
Dynamic Programming is similar to Divide and Conquer when it comes to dividing a large
problem into sub-problems. But here, each sub-problem is solved only once. The key in
dynamic programming is remembering: we store the result of sub-problems in a table so
that we don't have to compute the result of the same sub-problem again and again (either
top-down with recursion plus memoization, as in the Fibonacci example above, or bottom-up
with an iterative table).
Some problems that are solved using Dynamic Programming are Matrix
Chain Multiplication, the 0/1 Knapsack problem, the Longest Common Subsequence, etc.
Another difference between Dynamic Programming and Divide and Conquer approach
is that -
In Divide and Conquer, the sub-problems are independent of each other while in case of
Dynamic Programming, the sub-problems are not independent of each other (Solution
of one sub-problem may be required to solve another sub-problem).
Ans 6iii)
Knapsack Problem
Given a set of items, each with a weight and a value, determine a subset of items to include
in a collection so that the total weight is less than or equal to a given limit and the total
value is as large as possible.
The knapsack problem is a combinatorial optimization problem. It appears as a
subproblem in many, more complex mathematical models of real-world problems. One
general approach to difficult problems is to identify the most restrictive constraint, ignore the
others, solve a knapsack problem, and somehow adjust the solution to satisfy the ignored
constraints.
Applications
In many cases of resource allocation under constraints, the problem can be posed in a way
similar to the Knapsack problem. Examples include finding the least wasteful way to cut raw
materials, and portfolio optimization.
Fractional Knapsack
In this case, items can be broken into smaller pieces, hence the thief can select fractions of
items.
According to the problem statement,
· There are n items in the store
· Weight of the i-th item is wi > 0
· Profit of the i-th item is pi > 0 and
· Capacity of the Knapsack is W
In this version of the Knapsack problem, items can be broken into smaller pieces. So, the
thief may take only a fraction xi of the i-th item, where
0 ≤ xi ≤ 1
The i-th item contributes the weight xi·wi to the total weight in the knapsack
and profit xi·pi to the total profit.
Hence, the objective of this algorithm is to
maximize Σ (i = 1 to n) xi·pi
subject to the constraint
Σ (i = 1 to n) xi·wi ≤ W
It is clear that an optimal solution must fill the knapsack exactly, otherwise we could add a
fraction of one of the remaining items and increase the overall profit. Thus, an optimal
solution can be obtained by
Σ (i = 1 to n) xi·wi = W
In this context, first we need to sort the items according to the ratio pi/wi, so
that p(i+1)/w(i+1) ≤ pi/wi. Here, x is an array to store the fraction taken of each item.
Let the capacity of the Knapsack be W = 60, and consider the following items.

Item           A    B    C    D
Profit         280  100  120  120
Weight         40   10   20   24
Ratio (pi/wi)  7    10   6    5

As the provided items are not sorted based on pi/wi, we sort them. After sorting, the items
are as shown in the following table.

Item           B    A    C    D
Profit         100  280  120  120
Weight         10   40   20   24
Ratio (pi/wi)  10   7    6    5
Solution
After sorting all the items according to pi/wi, first all of B is chosen, as the weight of B is less
than the capacity of the knapsack. Next, item A is chosen, as the available capacity of the
knapsack is greater than the weight of A. Now, C is chosen as the next item. However, the
whole item cannot be chosen, as the remaining capacity of the knapsack is less than the
weight of C.
Hence, a fraction of C (i.e. (60 − 50)/20 = 1/2) is chosen.
Now, the total weight of the selected items equals the capacity of the Knapsack. Hence, no
more items can be selected.
The total weight of the selected items is 10 + 40 + 20 * (10/20) = 60
And the total profit is 100 + 280 + 120 * (10/20) = 380 + 60 = 440
This is the optimal solution. We cannot gain more profit selecting any different combination
of items.
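A minimal C sketch of this greedy procedure (the struct, the function name
fractionalKnapsack, and the pre-sorted item order are our own illustration, not from any
library):

#include <stdio.h>

/* One item: its total profit and weight. */
struct Item { double profit, weight; };

/* Greedy fractional knapsack. Assumes items are already sorted by
   profit/weight ratio in decreasing order, as in the table above. */
double fractionalKnapsack(struct Item it[], int n, double W)
{
    double total = 0.0;
    for (int i = 0; i < n && W > 0; i++) {
        if (it[i].weight <= W) {      /* take the whole item */
            total += it[i].profit;
            W -= it[i].weight;
        } else {                      /* take only the fraction that fits */
            total += it[i].profit * (W / it[i].weight);
            W = 0;
        }
    }
    return total;
}

int main(void)
{
    /* items B, A, C, D from the example, sorted by ratio 10, 7, 6, 5 */
    struct Item it[] = { {100, 10}, {280, 40}, {120, 20}, {120, 24} };
    printf("Maximum profit = %.0f\n", fractionalKnapsack(it, 4, 60.0));
    return 0;   /* prints: Maximum profit = 440 */
}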
ANS 6 iv)
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data
structures. One starts at the root (selecting some arbitrary node as the root in the case of a
graph) and explores as far as possible along each branch before backtracking.
DFS pseudocode (recursive implementation)
The pseudocode for DFS is shown below.
In the init() function, notice that we run the DFS function on every node. This is because the
graph might have two or more disconnected parts, so to make sure that we cover every
vertex, we run the DFS algorithm on every node.

DFS(G, u)
    u.visited = true
    for each v ∈ G.Adj[u]
        if v.visited == false
            DFS(G, v)

init() {
    For each u ∈ G
        u.visited = false
    For each u ∈ G
        DFS(G, u)
}
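A runnable C rendering of this pseudocode, using an adjacency-matrix representation; the
graph, the vertex count N, and the visit-order printing are our own choices for illustration:

#include <stdio.h>
#define N 5                        /* number of vertices (assumed) */

int adj[N][N];                     /* adjacency matrix */
int visited[N];

/* Recursive DFS from vertex u, printing vertices in visit order. */
void dfs(int u)
{
    visited[u] = 1;
    printf("%d ", u);
    for (int v = 0; v < N; v++)
        if (adj[u][v] && !visited[v])
            dfs(v);
}

int main(void)
{
    /* a small assumed graph: edges 0-1, 0-2, 1-3; vertex 4 isolated */
    adj[0][1] = adj[1][0] = 1;
    adj[0][2] = adj[2][0] = 1;
    adj[1][3] = adj[3][1] = 1;
    /* as in init(), start a DFS from every unvisited vertex so
       that disconnected parts of the graph are also covered */
    for (int u = 0; u < N; u++)
        if (!visited[u])
            dfs(u);
    return 0;   /* prints: 0 1 3 2 4 */
}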
BFS vs DFS
1. BFS stands for "Breadth First Search". DFS stands for "Depth First Search".
2. BFS starts traversal from the root node and explores the search level by level, i.e. as
close as possible to the root node. DFS starts traversal from the root node and explores the
search as far as possible from the root node, i.e. depth-wise.
3. Breadth First Search can be done with the help of a queue, i.e. a FIFO implementation.
Depth First Search can be done with the help of a stack, i.e. a LIFO implementation.
4. BFS works in a single stage: the visited vertices are removed from the queue and then
displayed at once. DFS works in two stages: in the first stage the visited vertices are pushed
onto the stack, and later on, when there is no vertex further to visit, they are popped off.
5. BFS is slower than DFS. DFS is faster than BFS.
6. BFS requires more memory compared to DFS. DFS requires less memory compared to BFS.
Applications of BFS: finding the shortest path; single source & all pairs shortest paths;
spanning tree; connectivity.
Applications of DFS: cycle detection; connectivity testing; finding a path between V and W in
the graph; finding spanning trees & forests.
BFS is useful in finding the shortest path: it can be used to find the shortest distance between
some starting node and the remaining nodes of the graph. DFS is not so useful in finding the
shortest path; it is used to perform a traversal of a general graph, and the idea of DFS is to
make a path as long as possible, and then go back (backtrack) to add branches that are also
as long as possible.
Time Complexity:
DFS:
Time complexity is again O(|V|): you need to traverse all nodes.
Space complexity depends on the implementation; a recursive implementation can have
O(h) space complexity in the worst case, where h is the maximal depth of your tree.
Using an iterative solution with a stack is actually the same as BFS, just using a stack
instead of a queue, so you get both O(|V|) time and space complexity.
(Note that the space and time complexity are a bit different for a tree than for a general
graph, because you do not need to maintain a visited set for a tree, and |E| = O(|V|), so
the |E| factor is actually redundant; for a general graph the time is O(|V| + |E|).)
Ans 6 (v)
Prim’s algorithm is also a Greedy algorithm. It starts with an empty spanning tree. The idea is
to maintain two sets of vertices. The first set contains the vertices already included in the MST,
the other set contains the vertices not yet included. At every step, it considers all the edges
that connect the two sets, and picks the minimum weight edge from these edges.
After picking the edge, it moves the other endpoint of the edge to the set containing MST.
A group of edges that connects two sets of vertices in a graph is called a cut in graph theory.
So, at every step of Prim's algorithm, we find a cut (of two sets, one containing the vertices
already included in the MST and the other containing the rest of the vertices), pick the
minimum weight edge from the cut, and include the new vertex in the MST set (the set that
contains the already included vertices).
How does Prim’s Algorithm Work? The idea behind Prim’s algorithm is simple, a spanning
tree means all vertices must be connected. So the two disjoint subsets (discussed above)
of vertices must be connected to make a Spanning Tree. And they must be connected with
the minimum weight edge to make it a Minimum Spanning Tree.
Algorithm
1) Create a set mstSet that keeps track of vertices already included in MST.
2) Assign a key value to all vertices in the input graph. Initialize all key values as INFINITE.
Assign key value as 0 for the first vertex so that it is picked first.
3) While mstSet doesn't include all vertices:
3.a) Pick a vertex u which is not in mstSet and has the minimum key value.
3.b) Include u in mstSet.
3.c) Update the key value of all adjacent vertices of u. To update the key values, iterate
through all adjacent vertices. For every adjacent vertex v, if the weight of edge u-v is less
than the previous key value of v, update the key value to the weight of u-v.
The idea of using key values is to pick the minimum weight edge from cut. The key values
are used only for vertices which are not yet included in MST, the key value for these
vertices indicate the minimum weight edges connecting them to the set of vertices included
in MST.
Let us understand with the following example:
The set mstSet is initially empty and keys assigned to vertices are {0, INF, INF, INF, INF, INF,
INF, INF} where INF indicates infinite. Now pick the vertex with minimum key value. The
vertex 0 is picked, include it in mstSet. So mstSet becomes {0}. After including it
in mstSet, update key values of adjacent vertices. Adjacent vertices of 0 are 1 and 7. The
key values of 1 and 7 are updated as 4 and 8.
Pick the vertex with minimum key value and not already included in MST (not in mstSet).
The vertex 1 is picked and added to mstSet. So mstSet now becomes {0, 1}. Update the
key values of adjacent vertices of 1. The key value of vertex 2 becomes 8.
Pick the vertex with minimum key value and not already included in MST (not in mstSet).
We can either pick vertex 7 or vertex 2, let vertex 7 is picked. So mstSet now becomes {0,
1, 7}. Update the key values of adjacent vertices of 7. The key value of vertex 6 and 8
becomes finite (7 and 1 respectively).
Pick the vertex with minimum key value and not already included in MST (not in mstSet).
Vertex 6 is picked. So mstSet now becomes {0, 1, 7, 6}. Update the key values of adjacent
vertices of 6. The key value of vertex 5 and 8 are updated.
We repeat the above steps until mstSet includes all vertices of the given graph; the edges
picked along the way form the minimum spanning tree.
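A compact C sketch of the algorithm above, using an adjacency matrix; the 5-vertex example
graph in main() is our own and is not the graph from the walkthrough:

#include <stdio.h>
#include <limits.h>
#define V 5   /* number of vertices in the assumed example graph */

/* Pick the vertex with minimum key value among those not yet in the MST. */
int minKey(int key[], int inMST[])
{
    int min = INT_MAX, idx = -1, v;
    for (v = 0; v < V; v++)
        if (!inMST[v] && key[v] < min) { min = key[v]; idx = v; }
    return idx;
}

void primMST(int graph[V][V])
{
    int key[V], parent[V], inMST[V], v, count;
    for (v = 0; v < V; v++) { key[v] = INT_MAX; inMST[v] = 0; }
    key[0] = 0;                      /* start from vertex 0 */
    parent[0] = -1;
    for (count = 0; count < V - 1; count++) {
        int u = minKey(key, inMST);  /* step 3.a */
        inMST[u] = 1;                /* step 3.b */
        for (v = 0; v < V; v++)      /* step 3.c: update key values */
            if (graph[u][v] && !inMST[v] && graph[u][v] < key[v]) {
                parent[v] = u;
                key[v] = graph[u][v];
            }
    }
    for (v = 1; v < V; v++)
        printf("%d - %d : %d\n", parent[v], v, graph[v][parent[v]]);
}

int main(void)
{
    int graph[V][V] = { {0, 2, 0, 6, 0},
                        {2, 0, 3, 8, 5},
                        {0, 3, 0, 0, 7},
                        {6, 8, 0, 0, 9},
                        {0, 5, 7, 9, 0} };
    primMST(graph);   /* prints the MST edges: 0-1, 1-2, 0-3, 1-4 */
    return 0;
}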
Ans 7(ii)
Set of strings of a's and b's ending with the string abb.
So L = {abb, aabb, babb, aaabb, ababb, ...}
The corresponding regular expression is (a + b)*abb.
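A DFA accepting this language can be given by the following transition table (a standard
construction; the state names q0..q3 are our own, q0 is the start state and q3 is the
accepting state; each state records how much of the suffix abb has just been read):

State        On a   On b
q0           q1     q0
q1           q1     q2
q2           q1     q3
q3 (final)   q1     q0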
Ans 7 (iii)
According to Noam Chomsky, there are four types of grammars − Type 0,
Type 1, Type 2, and Type 3. The following table shows how they differ from
each other −

Grammar Type   Grammar Accepted            Language Accepted                 Automaton
Type 0         Unrestricted grammar        Recursively enumerable language   Turing machine
Type 1         Context-sensitive grammar   Context-sensitive language        Linear-bounded automaton
Type 2         Context-free grammar        Context-free language             Pushdown automaton
Type 3         Regular grammar             Regular language                  Finite state automaton

In scope, each type contains the next: Type 3 ⊂ Type 2 ⊂ Type 1 ⊂ Type 0.
Type - 3 Grammar
Type-3 grammars generate regular languages. Type-3 grammars must have a single non-
terminal on the left-hand side and a right-hand side consisting of a single terminal or
single terminal followed by a single non-terminal.
The productions must be in the form X → a or X → aY where X, Y ∈ N (Non terminal) and a ∈ T
(Terminal)
The rule S → ε is allowed if S does not appear on the right side of any rule.
Example
X→ε
X → a | aY
Y→b
Type - 2 Grammar
Type-2 grammars generate context-free languages. The productions must be
in the form A → γ where A ∈ N (Non terminal)
and γ ∈ (T ∪ N)* (String of terminals and non-terminals).
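As with Type-3 above, a short (standard textbook) example:
Example
S → aSb | ε
This grammar generates the language {a^n b^n | n ≥ 0}, which is context-free but cannot be
generated by any Type-3 (regular) grammar.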
Ans 7(iv)
In formal language theory, a context-free language is a language generated by some
context-free grammar. The set of all context-free languages is identical to the set of
languages accepted by pushdown automata.
Proof:
If L1 and L2 are context-free languages, then each of them has a context-free grammar;
call the grammars G1 and G2.
Our proof requires that the grammars have no non-terminals in common. So we shall
subscript all of G1’s non-terminals with a 1 and subscript all of G2’s non-terminals with a 2.
Now, we combine the two grammars into one grammar that will generate the union of the
two languages. To do this, we add one new non-terminal, S, and two new productions:
S -> S1 | S2
S is the starting non-terminal for the new union grammar and can be replaced either by the
starting non-terminal for G1 or for G2, thereby generating either a string from L1 or from L2.
Since the non-terminals of the two original grammars are completely different, once we
begin using one of the original grammars, we must complete the derivation using only the
rules from that original grammar. Note that there is no need for the alphabets of the two
languages to be the same. Therefore, it is proved that if L1 and L2 are context-free
languages, then L1 ∪ L2 is also a context-free language.
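As a concrete illustration of this construction (a toy example of our own): let G1 have the
productions S1 → aS1b | ε, so that L1 = {a^n b^n}, and let G2 have S2 → cS2 | ε, so that
L2 = {c^n}. The subscripts keep the non-terminals disjoint, and the combined grammar with
start symbol S and productions S → S1 | S2 generates exactly L1 ∪ L2.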
Ans 7 (v)
8. Write note on each of the following:
Post Correspondence Problem: Consider the following two lists.

        i = 1   i = 2   i = 3
M (xi)  abb     aa      aaa
N (yi)  bba     aaa     aa
Here,
x2x1x3 = ‘aaabbaaa’
and y2y1y3 = ‘aaabbaaa’
We can see that
x2x1x3 = y2y1y3
Hence, the solution is i = 2, j = 1, and k = 3.
iv) K-Colourability Problem:
Vertex coloring is the most common graph coloring problem. The problem is, given m
colors, find a way of coloring the vertices of a graph such that no two adjacent vertices are
colored using same color. The other graph coloring problems like Edge Coloring (No
vertex is incident to two edges of same color) and Face Coloring (Geographical Map
Coloring) can be transformed into vertex coloring.
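As a sketch of how such a coloring can be computed, here is a simple greedy vertex-coloring
routine in C (our own illustration; greedy coloring uses at most d + 1 colors, where d is the
maximum degree, and does not always achieve the chromatic number):

#include <stdio.h>
#define N 4                        /* number of vertices (assumed) */

/* Greedy vertex coloring: give each vertex the smallest color
   not already used by one of its colored neighbours. */
void greedyColoring(int adj[N][N], int color[N])
{
    int used[N];
    color[0] = 0;
    for (int v = 1; v < N; v++) color[v] = -1;   /* uncolored */
    for (int v = 1; v < N; v++) {
        for (int c = 0; c < N; c++) used[c] = 0;
        for (int u = 0; u < N; u++)
            if (adj[v][u] && color[u] != -1)
                used[color[u]] = 1;              /* neighbour's color is taken */
        int c = 0;
        while (used[c]) c++;                     /* smallest free color */
        color[v] = c;
    }
}

int main(void)
{
    /* assumed example: a triangle 0-1-2 plus a pendant vertex 3 */
    int adj[N][N] = { {0,1,1,0}, {1,0,1,0}, {1,1,0,1}, {0,0,1,0} };
    int color[N];
    greedyColoring(adj, color);
    for (int v = 0; v < N; v++)
        printf("Vertex %d -> color %d\n", v, color[v]);
    return 0;   /* the triangle forces 3 colors: 0, 1, 2; vertex 3 reuses 0 */
}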
Chromatic Number: The smallest number of colors needed to color a graph G is
called its chromatic number. For example, any graph containing a triangle needs a
minimum of 3 colors.
(v) Independent Set
Independent sets come in two forms:
· An independent edge set (matching): no two edges are adjacent to each other, i.e. there is
no common vertex between any two edges.
· An independent vertex set: no two vertices are adjacent to each other, i.e. there is no
common edge between any two vertices.
For example, in the path a-b-c, {a, c} is an independent vertex set.