
Design & Analysis of Algorithms (MCS-031) Assignment



Question 1: Explain, with an example of each, the following sorting methods. Further, count the number of
operations performed by each sorting method. (20 Marks)
The insertion sort algorithm somewhat resembles selection sort. The array is imaginarily divided into two
parts: a sorted part and an unsorted part. At the beginning, the sorted part contains the first element of the
array and the unsorted part contains the rest. At every step, the algorithm takes the first element of the
unsorted part and inserts it into the right place in the sorted part. When the unsorted part
becomes empty, the algorithm stops.
· Sort: 34 8 64 51 32 21
· 34 8 64 51 32 21
The algorithm sees that 8 is smaller than 34, so it moves 8 in front of 34.
· 8 34 64 51 32 21
51 is smaller than 64, so it is inserted before 64.
· 8 34 51 64 32 21
The algorithm sees 32 as another smaller number and moves it to its appropriate location
between 8 and 34.
· 8 32 34 51 64 21
· The algorithm sees 21 as another smaller number and moves it between 8 and 32.
· Final sorted numbers:
8 21 32 34 51 64
No. of operations = no. of comparisons + no. of assignments (including control variables)
For the above example: 25 + 26 = 51 operations.
No. of operations = no. of comparisons + no. of assignments (excluding control variables)
For the above example: 12 + 13 = 25 operations.

Selection Sort: We swap the element at index 2 with that at index 4. (The intermediate arrays appeared as figures that are not reproduced here.)

We reduce the effective size of the array to 4, making the highest index in the effective array
now 3. The largest element in this effective array (indexes 0 to 3) is at index 1, so we swap the elements
at index 1 and index 3:

The next two steps complete the sort.
No. of operations = no. of comparisons + no. of assignments (excluding control variables)

For the above example: 10 + 15 = 25 operations.
No. of operations = no. of comparisons + no. of assignments (including control variables)
For the above example: 20 + 25 = 45 operations.
Heap Sort:

Given an array of 6 elements: 15, 19, 10, 7, 17, 16, sort it in ascending order using heap sort.
Steps:
1. Consider the values of the elements as priorities and build the heap tree.
2. Start deleteMin operations, storing each deleted element at the end of the heap array.
After performing step 2, the order of the elements will be opposite to the order in the heap tree.
Hence, if we want the elements sorted in ascending order, we need to build the heap tree
in descending order: the greatest element will have the highest priority.
Note that we use only one array, treating its parts differently:
a. when building the heap tree, part of the array is considered the heap, and the rest is still the original array;
b. when sorting, part of the array is the heap, and the rest is the sorted array.
In the original figures this was indicated by colours: white for the original array, blue for the heap, and red for the
sorted array.
Here is another example array: 8, 10, 5, 12
A. Construction of the heap tree

B. Destruction of the heap
No. of operations = no. of comparisons + no. of assignments (excluding control variables)
For the above example: 4 + 13 = 17 operations.
No. of operations = no. of comparisons + no. of assignments (including control variables)
For the above example: 8 + 17 = 25 operations.

Merge Sort:

Merge sort produces a sorted sequence by recursively sorting the two halves of the array and
merging them.
Quick Sort:

The divide-and-conquer strategy is used in quicksort. The recursion step is described below:

1. Choose a pivot value. We take the value of the middle element as the pivot, but it can be
any value in the range of the values being sorted, even one not present in the array.

2. Partition. Rearrange the elements so that all elements less than the pivot
go to the left part of the array and all elements greater than the pivot go to the right part.
Values equal to the pivot can stay in either part. Notice that the array may be
divided into non-equal parts.

3. Sort both parts. Apply quicksort algorithm recursively to the left and the right parts.

Partition algorithm in detail

There are two indices, i and j. At the very beginning of the partition algorithm, i points to the
first element of the array and j to the last. The algorithm then moves i forward until an
element with a value greater than or equal to the pivot is found; j is moved backward until an
element with a value less than or equal to the pivot is found. If i ≤ j, the elements are swapped, i steps
to the next position (i + 1) and j to the previous one (j - 1). The algorithm stops when i becomes
greater than j.

After partition, all values before the i-th element are less than or equal to the pivot, and all values after
the j-th element are greater than or equal to it.
Example. Sort {1, 12, 5, 26, 7, 14, 3, 7, 2} using quicksort.

Question 2:
(a) Write a randomized algorithm to find an order statistic in a set of n elements (Select). (8 Marks)

Definition: A randomized algorithm is an algorithm that can make calls to a random number

generator during the execution of the algorithm.

These calls will be of the form x := Random(a, b), where a, b are integers, a ≤ b.

A call to Random(a, b) returns an integer in [a, b], with every integer in the range being

equally likely to occur.

Successive calls to Random( ) are assumed to be mutually independent.

Algorithm:

Randomize-in-place(A)

Input: an array A[1..n] of n elements.
Output: a rearrangement of the elements of array A, with every permutation of the n
elements equally likely to occur.

for i := 1 to n do
    swap A[i] and A[Random(i, n)]

(b) Write a recursive procedure to compute the factorial of a number.

(5 Marks)
FUNCTION FACTORIAL (N: INTEGER): INTEGER;
BEGIN
  IF N <= 0 THEN
    FACTORIAL := 1
  ELSE
    FACTORIAL := N * FACTORIAL(N - 1)
END;

(c) Design a Turing Machine that increments a binary number which is

stored on the input tape. (7 Marks)

The binary increment function adds one to a binary number. The x state takes us to the rightmost
digit; then we switch to the i state: if a zero is read, we change it to 1 and halt; if a 1 is read, we
change it to a 0 and "carry one" (move left and repeat). Be sure to test your machine on the input
111.

Examples of binary increment:

1001 -> 1010

111-> 1000

The x state requires three different clauses.

• R: move one position to the right.

• L: move one position to the left.

• HALT: halt.

Each transition is written as (state, symbol read, symbol written, head move, next state); in the two halting rules the head movement no longer matters, so it is written as 0 (no move):

x,1,1,R,x
x,0,0,R,x
x, , ,L,i
i,0,1,0,HALT
i,1,0,L,i
i, ,1,0,HALT

Question 3:

(a) Consider Best first search technique and Breadth first search technique. Answer the following
with respect to these techniques. Give justification for your answer in each case. (10 Marks)

(i) Which algorithm has some knowledge of problem space?

Best-first search has some knowledge of the problem space.

Best-first search is a search algorithm which explores a graph by expanding the most promising
node chosen according to a specified rule. It estimates the promise of a node n by a heuristic
evaluation function f(n) which, in general, may depend on the description of n, the description of
the goal, the information gathered by the search up to that point and, most important, on any
extra knowledge about the problem domain.

The heuristic attempts to predict how close the end of a path is to a solution, so that paths
judged to be closer to a solution are extended first.
(ii) Which algorithm has the property that if a wrong path is chosen, it can be corrected
afterwards?

Breadth-first search is guaranteed to find the correct path even if a wrong path is selected
first, since it explores all paths until the target is found. Searching takes place as follows:

First, all nodes one edge away from the start node are visited, then those two edges away, and so on
until all nodes are visited. This way we find the path from the start to the goal with the
minimum number of traversed edges. Another way to word it: visit the neighbour
nodes, then the neighbours' neighbour nodes, and so on until the goal node is found. In a
breadth-first search the nodes are visited, and can be numbered, in exactly this order.

(b) Write Kruskal's algorithm and use it to find a minimal cost spanning tree of the following
graph. (10 Marks)

(Show the intermediate steps).


Kruskal's algorithm is an algorithm in graph theory that finds a minimum spanning tree for a
connected weighted graph. This means it finds a subset of the edges that forms a tree
including every vertex, such that the total weight of all the edges in the tree is minimized. If the
graph is not connected, it finds a minimum spanning forest.

Steps: (the given graph and the intermediate spanning-tree figures are not reproduced here).

Question 4:

(a) What is Randomized Quicksort? Analyse the expected running time

of Randomized Quicksort, with the help of a suitable example. (5 Marks)


In randomized quicksort, a random element is chosen as the pivot. Because the pivot is random, no single input can consistently produce unbalanced partitions, and the expected running time is O(n log n) on every input.

Algorithm:
static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

int RandPartition(int Array[], int l, int r) {
    int piv = l + rand() % (r - l + 1);   /* random pivot index; rand() from <stdlib.h> */
    swap(&Array[l], &Array[piv]);         /* move the pivot to the front */
    int i = l + 1;
    int j = r;
    while (1) {
        while (i < r && Array[i] <= Array[l]) ++i;
        while (j > l && Array[j] >= Array[l]) --j;
        if (i >= j) {
            swap(&Array[j], &Array[l]);   /* place pivot at its final index */
            return j;
        }
        swap(&Array[i], &Array[j]);
    }
}

void RandQuickSort(int Array[], int l, int r) {
    if (l < r) {
        int p = RandPartition(Array, l, r);
        RandQuickSort(Array, l, p - 1);
        RandQuickSort(Array, p + 1, r);
    }
}

(b) Explain the Greedy Structure algorithm. Give an example in which

the Greedy technique fails to deliver an optimal solution. (5 Marks)

A greedy algorithm is any algorithm that follows the problem solving metaheuristic of making
the locally optimal choice at each stage with the hope of finding the global optimum.

For example, applying the greedy strategy to the traveling salesman problem yields the following
algorithm: "At each stage visit the unvisited city nearest to the current city".

In general, greedy algorithms have five pillars:

1. A candidate set, from which a solution is created

2. A selection function, which chooses the best candidate to be added to the solution

3. A feasibility function, that is used to determine if a candidate can be used to contribute to a


solution

4. An objective function, which assigns a value to a solution, or a partial solution, and

5. A solution function, which will indicate when we have discovered a complete solution

Greedy algorithms produce good solutions on some mathematical problems, but not on others.
Most problems for which they work well have two properties: the greedy choice property (a
globally optimal solution can be reached by a sequence of locally optimal choices) and optimal
substructure (an optimal solution to the problem contains optimal solutions to its subproblems).

For many other problems, greedy algorithms fail to produce the optimal solution, and may even
produce the unique worst possible solution. One example is the nearest-neighbour algorithm
mentioned above: for each number of cities, there is an assignment of distances between the cities
for which the nearest-neighbour heuristic produces the unique worst possible tour.

Imagine the coin example with only 25-cent, 10-cent, and 4-cent coins. The greedy algorithm
would not be able to make change for 41 cents, since after committing to use one 25-cent coin
and one 10-cent coin it would be impossible to use 4-cent coins for the balance of 6 cents.
Whereas a person, or a more sophisticated algorithm, could make change for 41 cents with
one 25-cent coin and four 4-cent coins.

(c) Describe the two properties that characterize a good dynamic

programming Problem. (5 Marks)

There are a number of characteristics that are common to all dynamic programming problems.
These are:

1. The problem can be divided into stages, with a decision required at each stage.
In the capital budgeting problem the stages were the allocations to a single plant, and the decision
was how much to spend. In the shortest path problem, the stages were defined by the structure of the
graph, and the decision was where to go next.

2. Each stage has a number of states associated with it.

The states for the capital budgeting problem corresponded to the amount spent at that point in
time. The states for the shortest path problem were the nodes reached.

3. The decision at one stage transforms one state into a state in the next stage.

The decision of how much to spend gave a total amount spent for the next stage. The decision of
where to go next defined where you arrived in the next stage.

4. Given the current state, the optimal decision for each of the remaining states does not depend
on the previous states or decisions.

In the budgeting problem, it is not necessary to know how the money was spent in previous
stages, only how much was spent. In the path problem, it was not necessary to know how you got
to a node, only that you did.

5. There exists a recursive relationship that identifies the optimal decision for stage j, given that
stage j+1 has already been solved.

6. The final stage must be solvable by itself.

The last two properties are tied up in the recursive relationships given above.

The big skill in dynamic programming, and the art involved, is to take a problem and determine
stages and states so that all of the above hold. If you can, then the recursive relationship makes
finding the values relatively easy. Because of the difficulty in identifying stages and states, we
will do a fair number of examples.

(d) Define an NP-complete problem. Give examples of two such

problems. (5 Marks)

The complexity class NP-complete (abbreviated NP-C or NPC), is a class of problems having
two properties:

1. Any given solution to the problem can be verified quickly (in polynomial time); the set of
problems with this property is called NP (nondeterministic polynomial time).
2. If the problem can be solved quickly (in polynomial time), then so can every problem in NP.

An interesting example is the graph isomorphism problem, the graph theory problem of
determining whether a graph isomorphism exists between two graphs. Two graphs are
isomorphic if one can be transformed into the other simply by renaming vertices. Consider these
two problems:
Graph Isomorphism: Is graph G1 isomorphic to graph G2?
Subgraph Isomorphism: Is graph G1 isomorphic to a subgraph of graph G2?

The Subgraph Isomorphism problem is NP-complete, whereas Graph Isomorphism is in NP but is neither known to be in P nor known to be NP-complete.

In the mathematical field of graph theory, the Hamiltonian path problem and the Hamiltonian
cycle problem are the problems of determining whether a Hamiltonian path or a Hamiltonian cycle
exists in a given graph (whether directed or undirected). Both problems are NP-complete.
