
18. List the operations performed on priority Queue.

19. Differentiate between binary tree and binary search tree.



20. Differentiate between general tree and binary tree.

PartB

1. (a) Write an algorithm to find an element from binary search tree.

(b) Write a program to insert and delete an element from binary search tree.

2. Write a routine to generate the AVL tree.

3. What are the different tree traversal techniques? Explain with examples.

4. Write a function to perform insertion and deletemin in a binary heap.

5. Define Hash function. Write routines to find and insert an element in separate chaining.

SORTING

4.1 PRELIMINARIES

A sorting algorithm is an algorithm that puts the elements of a list in a certain order. The most used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms that require sorted lists to work correctly and for producing human-readable output.

Sorting algorithms are often classified by:

* Computational complexity (worst, average and best case) in terms of the size of the list (N). For typical sorting algorithms, good behaviour is O(N log N) and bad behaviour is O(N²).
* Memory utilization.
* Stability - maintaining the relative order of records with equal keys.
* Number of comparisons.
* Methods applied, like insertion, exchange, selection, merging, etc.

Sorting is a process of linear ordering of a list of objects. Sorting techniques are categorized into

=> Internal Sorting
=> External Sorting

Internal sorting takes place in the main memory of a computer.

e.g.: Bubble sort, Insertion sort, Shell sort, Quick sort, Heap sort, etc.

External sorting takes place in the secondary memory of a computer, since the number of objects to be sorted is too large to fit in main memory.

e.g.: Merge sort, Multiway merge, Polyphase merge.

4.2 INSERTION SORT

Insertion sort works by taking elements from the list one by one and inserting them into their correct position in a sorted sublist. Insertion sort consists of N - 1 passes, where N is the number of elements to be sorted. The ith pass of insertion sort will insert the ith element A[i] into its rightful place among A[1], A[2], ..., A[i - 1]. After doing this insertion, the records occupying A[1], ..., A[i] are in sorted order.

INSERTION SORT PROCEDURE

void InsertionSort(int A[], int N)
{
    int i, j, Temp;
    for (i = 1; i < N; i++)
    {
        Temp = A[i];
        for (j = i; j > 0 && A[j - 1] > Temp; j--)
            A[j] = A[j - 1];     /* shift the larger elements one position right */
        A[j] = Temp;             /* drop the saved element into its place */
    }
}

Example

Consider the following unsorted array:

20 10 60 40 30 15

PASSES OF INSERTION SORT

ORIGINAL       20 10 60 40 30 15    POSITIONS MOVED
After i = 1    10 20 60 40 30 15    1
After i = 2    10 20 60 40 30 15    0
After i = 3    10 20 40 60 30 15    1
After i = 4    10 20 30 40 60 15    2
After i = 5    10 15 20 30 40 60    4
Sorted Array   10 15 20 30 40 60

ANALYSIS OF INSERTION SORT

WORST CASE ANALYSIS   - O(N²)
BEST CASE ANALYSIS    - O(N)
AVERAGE CASE ANALYSIS - O(N²)

LIMITATIONS OF INSERTION SORT:

* It is relatively efficient only for small lists and mostly sorted lists.
* It is expensive because of shifting all following elements by one.
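As a quick check of the routine above, a minimal driver (our own illustration; the PrintArray helper is not part of the original text) can sort the example array used in the passes table:

#include <stdio.h>

void InsertionSort(int A[], int N);          /* routine given above */

static void PrintArray(const int A[], int N)
{
    int i;
    for (i = 0; i < N; i++)
        printf("%d ", A[i]);
    printf("\n");
}

int main(void)
{
    int A[] = {20, 10, 60, 40, 30, 15};
    int N = sizeof(A) / sizeof(A[0]);

    InsertionSort(A, N);                      /* sorts the array in place */
    PrintArray(A, N);                         /* prints: 10 15 20 30 40 60 */
    return 0;
}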

4.3 SHELL SORT

Shell sort was invented by Donald Shell. It improves upon bubble sort and insertion sort by moving out of order elements more than one position at a time. It works by arranging the data sequence in a two - dimensional array and then sorting the columns of the array using insertion sort.

In shell sort the whole array is first fragmented into K segments, where K is preferably a prime number. After the first pass the whole array is partially sorted. In the next pass, the value of K is reduced, which increases the size of each segment and reduces the number of segments. The next value of K is chosen so that it is relatively prime to its previous value. The process is repeated until K = 1, at which point the array is sorted. The insertion sort is applied to each segment, so each successive segment is partially sorted. Shell sort is also called the Diminishing Increment Sort, because the value of K decreases continuously.

SHELL SORT ROUTINE

void ShellSort(int A[], int N)
{
    int i, j, k, temp;
    for (k = N / 2; k > 0; k = k / 2)      /* diminishing increments */
        for (i = k; i < N; i++)
        {
            temp = A[i];
            for (j = i; j >= k && A[j - k] > temp; j = j - k)
                A[j] = A[j - k];
            A[j] = temp;
        }
}
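To watch the diminishing increments at work, the routine can be instrumented to print the array after each gap. The sketch below is our own illustration; note that it follows the routine's gap sequence N/2, N/4, ..., 1 (here 5, 2, 1), whereas the worked example that follows uses the gaps 5, 3, 1.

#include <stdio.h>

/* Shell sort as above, printing the array after each gap value. */
void ShellSortTrace(int A[], int N)
{
    int i, j, k, temp;
    for (k = N / 2; k > 0; k = k / 2)
    {
        for (i = k; i < N; i++)
        {
            temp = A[i];
            for (j = i; j >= k && A[j - k] > temp; j = j - k)
                A[j] = A[j - k];
            A[j] = temp;
        }
        printf("after gap %d:", k);
        for (i = 0; i < N; i++)
            printf(" %d", A[i]);
        printf("\n");
    }
}

int main(void)
{
    int A[] = {81, 94, 11, 96, 12, 35, 17, 95, 28, 58};
    ShellSortTrace(A, 10);
    return 0;
}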

Example
Consider an unsorted array as follows.
81 94 11 96 12 35 17 95 28 58
Here N = 10, so in the first pass K = 5 (10/2).

81 94 11 96 12 35 17 95 28 58

After the first pass:

35 17 11 28 12 81 94 95 96 58

In the second pass, K is reduced to 3.

After the second pass:

28 12 11 35 17 81 58 95 96 94

In the third pass, K is reduced to 1.

28 12 11 35 17 81 58 95 96 94

The final sorted array is

11 12 17 28 35 58 81 94 95 96

ANALYSIS OF SHELL SORT:

WORST CASE ANALYSIS   - O(N²)
BEST CASE ANALYSIS    - O(N log N)
AVERAGE CASE ANALYSIS - O(N^1.5)
ADVANTAGES OF SHELL SORT:

* It is one of the fastest algorithms for sorting a small number of elements.
* It requires a relatively small amount of memory.

4.4 HEAP SORT

In heap sort the array is interpreted as a binary tree. This method has two phases. In phase 1 a binary heap is constructed, and in phase 2 the DeleteMin routine is performed.

Phase 1 :

Two properties of a binary heap, namely the structure property and the heap order property, are achieved in phase 1.

Structure Property

For any element in array position i, the left child is in position 2i, the right child is in 2i + 1, (ie) the cell after the left child.

Heap Order Property

The key value in the parent node is smaller than or equal to the key value of any of its child nodes. To build the heap, apply the heap order property starting from the rightmost non-leaf node at the bottom level.

Phase 2:

The array elements are sorted using deletemin operation, which gives the elements arranged in descending order.

For example, consider the following array.

Phase 1:

Binary heap satisfying the structure property.

Binary heap satisfying the heap order property.

Phase 2:

Remove the smallest element from the heap and place it in the array.

Find the smallest child and swap it with the root. Reconstruct the heap till it satisfies the heap order property.

Remove the smallest element and place it in the array. Reconstruct the heap till it satisfies the heap order property.

Swap the minimum element (4) with the last element (10) and place the value (4) in the array.

Similarly, apply the procedure for the other elements.


Final array sorted in descending order using heapsort.

After the last DeleteMin, the array will contain the elements in descending order. To get the elements sorted in ascending order, the DeleteMax routine of the heap is applied, in which the heap order property is changed, i.e. the parent has a larger key value than its children.

Consider the same example.

Now the binary heap satisfies the (max) heap order property.

Now the root node contains the maximum key value, so apply the DeleteMax routine to this heap.

Remove the maximum element from the heap and place it in the array.

Reconstruct the heap till it satisfies the heap order property.

Similarly, apply the procedure for the other elements. The deletion of the last element in the heap gives the array in sorted order.

HEAP SORT USING DELETEMAX ROUTINE

void DeleteMax(int A[], int root, int bottom)
{
    int maxchild, temp;
    while (root * 2 <= bottom)
    {
        if (root * 2 == bottom)
            maxchild = root * 2;
        else if (A[root * 2] > A[root * 2 + 1])
            maxchild = root * 2;
        else
            maxchild = root * 2 + 1;
        if (A[root] < A[maxchild])
        {
            swap(&A[root], &A[maxchild]);   /* move the larger child up */
            root = maxchild;
        }
        else
            break;
    }
}

void HeapSort(int A[], int N)
{
    int i;
    for (i = N / 2; i >= 0; i--)            /* phase 1: build the max heap */
        DeleteMax(A, i, N - 1);
    for (i = N - 1; i > 0; i--)
    {
        swap(&A[0], &A[i]);                 /* move the current maximum to the end */
        DeleteMax(A, 0, i - 1);             /* phase 2: restore the heap property */
    }
}
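The routines above rely on a swap helper that exchanges two integers through pointers; it is not defined in the text. A minimal sketch of that helper (to be placed before the routines) together with a small driver, both our own additions with an arbitrary example array, is:

#include <stdio.h>

static void swap(int *x, int *y)
{
    int t = *x;
    *x = *y;
    *y = t;
}

void DeleteMax(int A[], int root, int bottom);   /* as given above */
void HeapSort(int A[], int N);

int main(void)
{
    int A[] = {21, 8, 10, 4, 7, 6, 2};
    int i, N = sizeof(A) / sizeof(A[0]);

    HeapSort(A, N);
    for (i = 0; i < N; i++)
        printf("%d ", A[i]);                     /* printed in ascending order */
    printf("\n");
    return 0;
}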

ANALYSIS OF HEAP SORT

WORST CASE ANALYSIS   - O(N log N)
BEST CASE ANALYSIS    - O(N log N)
AVERAGE CASE ANALYSIS - O(N log N)

ADVANTAGES OF HEAP SORT:

1. It is efficient for sorting a large number of elements.
2. It has the advantage of a worst case O(N log N) running time.

LIMITATIONS:

* It is not a stable sort.
* It requires more processing time.

4.5 MERGE SORT

The most common algorithm used in external sorting is merge sort. This algorithm follows the divide and conquer strategy. In the dividing phase, the problem is divided into smaller problems and solved recursively. In the conquering phase, the partitioned array is merged together recursively. Merge sort is applied to the first half and the second half of the array. This gives two sorted halves, which can then be recursively merged together using the merging algorithm.

The basic merging algorithm takes two input arrays A and B and an output array C. The first elements of array A and array B are compared; the smaller element is stored in the output array C and the corresponding pointer is incremented. When either input array is exhausted, the remainder of the other array is copied to the output array C.

MERGE SORT ROUTINE

void MergeSort(int A[], int temp[], int n)
{
    MSort(A, temp, 0, n - 1);
}

void MSort(int A[], int temp[], int left, int right)
{
    int center;
    if (left < right)
    {
        center = (left + right) / 2;
        MSort(A, temp, left, center);             /* sort the first half  */
        MSort(A, temp, center + 1, right);        /* sort the second half */
        Merge(A, temp, left, center + 1, right);  /* merge the two halves */
    }
}

MERGE ROUTINE

void Merge(int A[], int temp[], int left, int center, int right)
{
    int i, left_end, num_elements, tmp_pos;
    left_end = center - 1;
    tmp_pos = left;
    num_elements = right - left + 1;
    while ((left <= left_end) && (center <= right))
    {
        if (A[left] <= A[center])
        {
            temp[tmp_pos] = A[left];
            tmp_pos++;
            left++;
        }
        else
        {
            temp[tmp_pos] = A[center];
            tmp_pos++;
            center++;
        }
    }
    while (left <= left_end)            /* copy the rest of the first half  */
    {
        temp[tmp_pos] = A[left];
        left++;
        tmp_pos++;
    }
    while (center <= right)             /* copy the rest of the second half */
    {
        temp[tmp_pos] = A[center];
        center++;
        tmp_pos++;
    }
    for (i = 0; i < num_elements; i++)  /* copy temp back into A */
    {
        A[right] = temp[right];
        right--;
    }
}
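Merge sort needs a temporary array of the same size as the input. A minimal driver (our own illustration) that allocates the scratch array and sorts the example used below:

#include <stdio.h>
#include <stdlib.h>

void MergeSort(int A[], int temp[], int n);   /* routines given above */

int main(void)
{
    int A[] = {24, 13, 26, 1, 2, 27, 38, 15};
    int n = sizeof(A) / sizeof(A[0]);
    int *temp = malloc(n * sizeof(int));       /* scratch space used by Merge */
    int i;

    if (temp == NULL)
        return 1;
    MergeSort(A, temp, n);
    for (i = 0; i < n; i++)
        printf("%d ", A[i]);                   /* 1 2 13 15 24 26 27 38 */
    printf("\n");
    free(temp);
    return 0;
}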

Example

For instance, to sort the eight-element array 24, 13, 26, 1, 2, 27, 38, 15, we recursively sort the first four and last four elements, obtaining 1, 13, 24, 26, 2, 15, 27, 38. These two sorted halves are then combined using the merging algorithm to get the final sorted array.

Now the merging algorithm is applied as follows.

Let us consider the first 4 elements 1, 13, 24, 26 as the A array and the next four elements 2, 15, 27, 38 as the B array.

A array: 1 13 24 26    (Aptr at 1)
B array: 2 15 27 38    (Bptr at 2)
C array: (empty)       (Cptr at the first position)

First, the element 1 from the A array and the element 2 from the B array are compared; the smaller element 1 from the A array is copied to the output array C. Then the pointers Aptr and Cptr are incremented by one.


Next, 13 and 2 are compared and the smaller element 2 from the B array is copied to the C array; the pointers Bptr and Cptr are incremented by one. This proceeds until the A array and B array are exhausted and all the elements are copied to the output array C.

Since the A array is exhausted first, the remaining elements of the B array are then copied to the C array.

C array: 1 2 13 15 24 26 27 38

ANALYSIS OF MERGE SORT

WORST CASE ANALYSIS   - O(N log N)
BEST CASE ANALYSIS    - O(N log N)
AVERAGE CASE ANALYSIS - O(N log N)

Limitations of Merge Sort

* For larger amounts of data, merge sort requires making use of an external storage device.
* It requires extra memory space.

Advantages

* It has better cache performance.
* Merge sort is a stable sort.
* It is simpler to understand than heapsort.

4.6 QUICK SORT

Quick sort is the most efficient internal sorting technique. It possesses a very good average case behaviour among all the sorting techniques. It is also called partition sort, and it uses the divide and conquer technique.

Quick sort works by partitioning the array A[1], A[2], ..., A[n] by picking some key value in the array as a pivot element. The pivot element is used to rearrange the elements in the array. The pivot can be the first element of the array, and the rest of the elements are moved so that the elements on the left side of the pivot are smaller than the pivot, whereas those on the right side are greater than the pivot. The pivot element is then placed in its correct position. The quicksort procedure is then applied to the left array and the right array in a recursive manner.

QUICK SORT ROUTINE

void QSort(int A[], int left, int right)
{
    int i, j, pivot, temp;
    if (left < right)
    {
        pivot = left;
        i = left + 1;
        j = right;
        do
        {
            while (i <= right && A[pivot] >= A[i])  /* advance i while A[i] <= pivot */
                i = i + 1;
            while (A[pivot] < A[j])                 /* retreat j while A[j] > pivot  */
                j = j - 1;
            if (i < j)
            {
                temp = A[i];                        /* swap A[i] and A[j] */
                A[i] = A[j];
                A[j] = temp;
            }
        } while (i < j);
        temp = A[pivot];                            /* swap A[pivot] and A[j] */
        A[pivot] = A[j];
        A[j] = temp;
        QSort(A, left, j - 1);
        QSort(A, j + 1, right);
    }
}
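A short driver for the routine above (our own addition) sorts the example array by passing the indices of the first and last elements:

#include <stdio.h>

void QSort(int A[], int left, int right);   /* routine given above */

int main(void)
{
    int A[] = {40, 20, 70, 14, 60, 61, 97, 30};
    int i, n = sizeof(A) / sizeof(A[0]);

    QSort(A, 0, n - 1);          /* sort the whole array in place */
    for (i = 0; i < n; i++)
        printf("%d ", A[i]);     /* 14 20 30 40 60 61 70 97 */
    printf("\n");
    return 0;
}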

Example :-

Consider an unsorted array as follows,

40 20 70 14 60 61 97 30

Here PIVOT = 40, i points to 20 and j points to 30.

=> The value of i is incremented as long as a[i] <= pivot, and the value of j is decremented as long as a[j] > pivot; this process is repeated while i < j.

=> If a[i] > pivot and a[j] < pivot, and also i < j, then swap a[i] and a[j].

=> If i > j, then swap a[j] and a[pivot].

Once the correct location for the pivot is found, partition the array into a left subarray and a right subarray, where the left subarray contains all the elements less than the pivot and the right subarray contains all the elements greater than the pivot.

PASSES OF QUICK SORT

(40) 20 70 14 60 61 97 30       Pivot = 40; i stops at 70, j stops at 30

As i < j, swap (a[i], a[j]), i.e. swap (70, 30)

(40) 20 30 14 60 61 97 70       i stops at 60, j stops at 14

As i > j, swap (a[j], a[pivot]), i.e. swap (14, 40)

14 20 30 40 60 61 97 70

Now the pivot element has reached its correct position. The elements smaller than the pivot, {14, 20, 30}, are considered as the left subarray. The elements greater than the pivot, {60, 61, 97, 70}, are considered as the right subarray. Then the QSort procedure is applied recursively to both these arrays.

ANALYSIS OF QUICK SORT

WORST CASE ANALYSIS   - O(N²)
BEST CASE ANALYSIS    - O(N log N)
AVERAGE CASE ANALYSIS - O(N log N)

Advantages of Quick Sort

1. It is faster than other O(N log N) algorithms.
2. It has better cache performance and high speed.

Limitation

* It requires more memory space.

4.7 External Sorting

External sorting is used for sorting methods that are employed when the data to be sorted is too large to fit in primary memory.

Need for external sorting

=> During the sorting, some of the data must be stored externally, such as on tape or disk.
=> The cost of accessing data is significantly greater than either bookkeeping or comparison costs.
=> If tape is used as the external memory, then the items must be accessed sequentially.

Steps to be followed

The basic external sorting algorithm uses the merge routine from merge sort.

1. Divide the file into runs such that the size of a run is small enough to fit into main memory.
2. Sort each run in main memory.
3. Merge the resulting runs together into successively bigger runs. Repeat the steps until the file is sorted.

4.7.1 The Simple Algorithm (2 way Merge)

Let us consider four tapes Ta1, Ta2, Tb1, Tb2, which are two input and two output tapes. The a and b tapes can act as either input tapes or output tapes, depending upon the point in the algorithm.

Let the size of the run (M) be taken as 3 to sort the following set of values.

Ta1: 44 80 12 35 45 58 75 60 24 48 92 98 85
Ta2:
Tb1:
Tb2:

Initial Run Construction

Step 1: Read M records at a time from the input tape Ta1.
Step 2: Sort the records internally and write the resulting runs alternately to Tb1 and Tb2.

Ta1:
Ta2:
Tb1: 12 44 80 | 24 60 75 | 85
Tb2: 35 45 58 | 48 92 98

The first 3 records from the input tape Ta1 are read, sorted internally as (12, 44, 80) and placed on Tb1.

Then the next 3 records (35, 45, 58) are read and the sorted run is placed on Tb2.

Similarly, the rest of the records are placed alternately on Tb1 and Tb2. Now Tb1 and Tb2 contain groups of runs.

number of runs = 4.

First Pass

The first runs of Tb1 and Tb2 are merged and the sorted records are placed on Ta1.

Similarly, the second runs of Tb1 and Tb2 are merged and the sorted records are placed on Ta2.

Ta1: 12 35 44 45 58 80 | 85
Ta2: 24 48 60 75 92 98
Tb1:
Tb2:

Here the number of runs is reduced, but the size of each run is increased.

Second Pass

The first runs of Ta1 and Ta2 are merged and the sorted records are placed on Tb1, and the second run of Ta1 is placed on Tb2.

Ta1:
Ta2:
Tb1: 12 24 35 44 45 48 58 60 75 80 92 98
Tb2: 85

Third Pass

In the third pass, the runs from Tb1 and Tb2 are merged and the sorted records are placed on Ta1.

Ta1: 12 24 35 44 45 48 58 60 75 80 85 92 98
Ta2:
Tb1:
Tb2:

This algorithm will require ⌈log2(N/M)⌉ passes, plus the initial run-constructing pass.
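As a rough check of this formula for the example above (13 records with run size M = 3), the following small sketch (our own illustration) computes the number of 2-way merge passes:

#include <math.h>
#include <stdio.h>

int main(void)
{
    int N = 13, M = 3;                     /* records and run size from the example */
    int runs = (N + M - 1) / M;            /* initial runs after run construction   */
    int passes = (int)ceil(log2(runs));    /* 2-way merge passes needed             */

    printf("runs = %d, merge passes = %d\n", runs, passes);   /* runs = 5, passes = 3 */
    return 0;
}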

4.7.2 MULTIWAY MERGE

The number of passes required to sort an input can be reduced by increasing the number of tapes. This can be done by extending the 2-way merge to a k-way merge. The only difference is that it is more complicated to find the smallest of the k elements, which can be overcome by using priority queues.

For the same input,

44 80 12 35 45 58 75 60 24 48 92 98 85

Let us consider 6 tapes Ta1, Ta2, Ta3, Tb1, Tb2, Tb3 and M = 3.

Initial run-constructing pass

Ta1:
Ta2:
Ta3:
Tb1: 12 44 80 | 48 92 98
Tb2: 35 45 58 | 85
Tb3: 24 60 75

In the first pass, the first runs of Tb1, Tb2 and Tb3 are merged and the sorted records are placed on Ta1. Similarly, the second runs from Tb1 and Tb2 are merged and the sorted records are then placed on Ta2.

FIRST PASS

Ta1: 12 24 35 44 45 58 60 75 80
Ta2: 48 85 92 98
Ta3:
Tb1:
Tb2:
Tb3:

In the second pass, the runs from Ta1 and Ta2 are merged and the sorted records are placed on Tb1, which contains the final sorted records.

SECOND PASS

Ta1:
Ta2:
Ta3:
Tb1: 12 24 35 44 45 48 58 60 75 80 85 92 98
Tb2:
Tb3:

For the same example, the 2-way merge requires 4 passes to get the sorted elements, whereas with the multiway merge this is reduced to 3 passes, which also includes the initial run-constructing pass.

Note: After the initial run construction phase, the number of passes required using k-way merging is ⌈log_k(N/M)⌉, because the runs get k times as large in each pass.

4.7.3 POLYPHASE MERGE

The k-way merge strategy requires 2k tapes to perform the sorting. In some applications it is possible to get by with only k + 1 tapes.

For example, let us consider 3 tapes T1, T2 and T3, and an input file on T1 that produces 8 runs.

The distribution of the runs on the tapes can vary as follows:

1. Equal distribution (4 & 4)
2. Unequal distribution (7 & 1)
3. Fibonacci numbers (3 & 5)

Equal Distribution

Put 4 runs on each of the tapes T2 and T3. After applying the merge routine, the resultant tape T1 has 4 runs, whereas the other tapes T2 and T3 are empty; this leads to adding an extra half pass for every pass.

Tapes | Run Construction | After T2 + T3 | After Splitting | After T1 + T2 | After Split | After T2 + T3
T1    |        0         |       4       |        2        |       0       |      0      |      1
T2    |        4         |       0       |        2        |       0       |      1      |      0
T3    |        4         |       0       |        0        |       2       |      1      |      0

In the first pass, all the runs (4) end up on one tape, so they are logically divided and half of the runs (2) are placed on one of the other tapes.

UNEQUAL DISTRIBUTION

For instance, if 7 runs are placed on T2 and 1 run on T3, then after the first merge T1 will hold 1 run and T2 will hold 6 runs. Since only one run is produced at each merge, the process gets slower, resulting in more passes.

FIBONACCI NUMBERS

If the number of runs is a Fibonacci number F(N), then the runs are distributed as the two Fibonacci numbers F(N - 1) and F(N - 2).

Here the number of runs is 8, a Fibonacci number, so it can be distributed as 3 runs on tape T2 and 5 runs on T3.

Tape | Run Construction | After T2 + T3 | After T1 + T3 | After T1 + T2 | After T2 + T3
T1   |        0         |       3       |       1       |       0       |      1
T2   |        3         |       0       |       2       |       1       |      0
T3   |        5         |       2       |       0       |       1       |      0

This method of distributing runs gives the optimal result, i.e. fewer passes to sort the records than the other two methods.

Questions

PartA

1. Define Sorting.

2. Differentiate Internal Sorting and External Sorting.

3. Give examples for Internal Sorting and External Sorting.

4. Sort the sequence 3, 4, 1, 5, 9, 2, 6 using insertion sort.

5. Write the routine for Insertion Sort.

6. What is Diminishing Increment Sort?

7. What is the average number of comparisons used to sort N distinct items using Heap Sort?

8. Name the sorting technique which uses the divide and conquer strategy.

9. What are the best case and average case analyses for quicksort?

10. What is the need for external sorting?

11. Compare two-way merge and multiway merge.

12. Write a note on polyphase merge.

13. What is the running time of heap sort for presorted input?

14. Determine the running time of merge sort for sorted input.

15. Sort 3, 4, 5, 9, 2, 6 using merge sort.

PART - B

1. Write down the shell sort algorithm. Using this algorithm, trace the following numbers: 10 9 8 2 7 5 6 4.

2. Explain the quicksort algorithm with an example.

3. Write the procedure for heap sort and trace the given numbers: 18 10 8 5 15 13.

4. What is external sorting? Give an example for the multiway merge and polyphase merging strategies with three tapes T1, T2 and T3.

Weighted Graph

A graph is said to be a weighted graph if every edge in the graph is assigned a weight or value.

5 GRAPH

A graph G = (V, E) consists of a set of vertices V and a set of edges E.

Vertices are referred to as nodes, and the arcs between the nodes are referred to as edges. Each edge is a pair (v, w), where v, w ∈ V.

Fig. 5.1.1

Here V1, V2, V3, V4 are the vertices and (V1, V2), (V2, V3), (V3, V4), (V4, V1), (V2, V4), (V1, V3) are the edges.

5.1 BASIC TERMINOLOGIES

Directed Graph (or) Digraph

A directed graph is a graph which consists of directed edges, where each edge in E is unidirectional. It is also referred to as a digraph. If (v, w) is a directed edge, then (v, w) ≠ (w, v).

Fig. 5.1.2

Undirected Graph

An undirected graph is a graph which consists of undirected edges. If (v, w) is an undirected edge, then (v, w) = (w, v).

Fig. 5.1.3

Fig. 5.1.4 (a)

Fig. 5.1.4 (b)

Complete Graph

A complete graph is a graph in which there is an edge between every pair of vertices. A complete graph with n vertices will have n(n - 1)/2 edges.

Fig. 5.1.5

Fig. 5.1.5 (a) Vertices of a graph

In Fig. 5.1.5 the number of vertices is 4 and the number of edges is 6.

There is a path from every vertex to every other vertex. A complete digraph is a strongly connected graph.

Strongly Connected Graph

If there is a path from every vertex to every other vertex in a directed graph, then it is said to be a strongly connected graph. Otherwise, it is said to be a weakly connected graph.

Fig. 5.1.6 Strongly Connected Graph

Fig. 5.1.7 Weakly Connected Graph

Path

A path in a graph is a sequence of vertices w1, w2, ..., wn such that (wi, wi+1) ∈ E for 1 ≤ i < N.

Referring to Fig. 5.1.7, the path from V1 to V3 is V1, V2, V3.

Length

The length of a path is the number of edges on the path, which is equal to N - 1, where N represents the number of vertices on the path.

The length of the above path from V1 to V3 is 2, i.e. (V1, V2), (V2, V3).

If there is a path from a vertex to itself with no edges, then the path length is 0.

Loop

If the graph contains an edge (v, v) from a vertex to itself, then the path is referred to as a loop.

Simple Path

A simple path is a path such that all vertices on the path, except possibly the first and the last, are distinct.

A simple cycle is a simple path of length at least one that begins and ends at the same vertex.

Cycle

A cycle in a graph is a path in which the first and last vertices are the same.

Fig. 5.1.8

A graph which has cycles is referred to as a cyclic graph.

Degree

The number of edges incident on a vertex determines its degree. The degree of the vertex V is written as degree(V).

The indegree of the vertex V is the number of edges entering the vertex V.

Similarly, the outdegree of the vertex V is the number of edges exiting from the vertex V.

Fig. 5.1.9

In Fig. 5.1.9:

Indegree (V1) = 2
Outdegree (V1) = 1

Acyclic Graph

A directed graph which has no cycles is referred to as an acyclic graph. It is abbreviated as DAG - Directed Acyclic Graph.

Fig. 5.1.10

5.2 Representation of Graph

A graph can be represented by an adjacency matrix or an adjacency list. One simple way to represent a graph is the adjacency matrix.

The adjacency matrix A for a graph G = (V, E) with n vertices is an n x n matrix such that

Aij = 1, if there is an edge from Vi to Vj
Aij = 0, if there is no edge.

Adjacency Matrix For Directed Graph

     V1 V2 V3 V4
V1    0  1  1  0
V2    0  0  0  1
V3    0  1  0  0
V4    0  0  1  0

Fig. 5.2.1

Fig. 5.2.2

Example: A1,2 = 1, since there is an edge from V1 to V2.

Similarly, A1,3 = 1, since there is an edge from V1 to V3.

A1,1 and A1,4 = 0, since there is no edge.

Adjacency Matrix For Undirected Graph

Fig. 5.2.3

0 1 1 0
1 0 1 1
1 1 0 1
0 1 1 0

Fig. 5.2.4

Adjacency Matrix For Weighted Graph

To solve some graph problems, the adjacency matrix can be constructed as

Aij = Cij, if there exists an edge from Vi to Vj
Aij = 0, if there is no edge and i = j.

If there is no arc from i to j, assume C[i, j] = ∞ where i ≠ j.

V1 V2 V3 V4

Fig. 5.2.5

0  ∞  3  9
∞  0  ∞  7
∞  1  0  ∞
4  ∞  8  0

Advantage
4 00 1 8 0 Advantage

* Simple to implement.

Disadvantage

* It takes O(n²) space to represent the graph.

* It takes O(n²) time to solve most of the problems.

Adjacency List Representation

In this representation, we store the graph as a linked structure. We store all vertices in a list, and then for each vertex we keep a linked list of its adjacent vertices.

Fig. 5.2.6

Adjacency List

Fig. 5.2.8

Disadvantage

* It takes O(n) time to determine whether there is an arc from vertex i to vertex j, since there can be O(n) vertices on the adjacency list for vertex i.

5.3 Topological Sort

A topological sort is a linear ordering of the vertices in a directed acyclic graph such that if there is a path from Vi to Vj, then Vj appears after Vi in the linear ordering.

Topological ordering is not possible if the graph has a cycle, since for two vertices v and w on the cycle, v precedes w and w precedes v.

To implement the topological sort, perform the following steps.

Step 1: Find the indegree for every vertex.

Step 2: Place the vertices whose indegree is 0 on the initially empty queue.

Step 3: Dequeue a vertex V and decrement the indegrees of all its adjacent vertices.

Step 4: Enqueue a vertex on the queue if its indegree falls to zero.

Step 5: Repeat from Step 3 until the queue becomes empty.

Step 6: The topological ordering is the order in which the vertices are dequeued.

R.outine to perform Topological Sort

/* Assume that the graph is read into an adjacency matrix and that the indegrees are computed for every vertex and placed in an array (i.e. Indegree[ ]) */


void Topsort(Graph G)
{
    Queue Q;
    int counter = 0;
    Vertex V, W;

    Q = CreateQueue(NumVertex);
    MakeEmpty(Q);
    for each vertex V
        if (Indegree[V] == 0)
            Enqueue(V, Q);
    while (!IsEmpty(Q))
    {
        V = Dequeue(Q);
        TopNum[V] = ++counter;            /* assign the next topological number */
        for each W adjacent to V
            if (--Indegree[W] == 0)
                Enqueue(W, Q);
    }
    if (counter != NumVertex)
        Error("Graph has a cycle");
    DisposeQueue(Q);                      /* free the memory */
}

Note:

Enqueue(V, Q) inserts a vertex V into the queue Q. Dequeue(Q) deletes a vertex from the queue Q. TopNum[V] is an array entry holding the topological numbering of V.
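The routine above is written in the book's pseudo-C style ("for each vertex V"). A concrete sketch using plain arrays and the 4-vertex adjacency matrix of the illustration that follows can look like this (the array-based queue and the variable names are our own, not from the text):

#include <stdio.h>

#define NV 4   /* vertices a, b, c, d */

int main(void)
{
    /* adjacency matrix of Fig. 5.3.1: a->b, a->c, b->c, b->d, c->d */
    int adj[NV][NV] = {
        {0, 1, 1, 0},
        {0, 0, 1, 1},
        {0, 0, 0, 1},
        {0, 0, 0, 0}
    };
    int indegree[NV] = {0}, queue[NV], front = 0, rear = 0;
    int v, w, counter = 0;

    for (v = 0; v < NV; v++)                 /* compute indegrees (column sums) */
        for (w = 0; w < NV; w++)
            indegree[w] += adj[v][w];

    for (v = 0; v < NV; v++)                 /* enqueue vertices of indegree 0 */
        if (indegree[v] == 0)
            queue[rear++] = v;

    while (front < rear)
    {
        v = queue[front++];                  /* dequeue the next vertex */
        printf("%c ", 'a' + v);              /* printed in topological order */
        counter++;
        for (w = 0; w < NV; w++)
            if (adj[v][w] && --indegree[w] == 0)
                queue[rear++] = w;
    }
    if (counter != NV)
        printf("\nGraph has a cycle\n");
    else
        printf("\n");                        /* prints: a b c d */
    return 0;
}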

Illustration

     a  b  c  d
a    0  1  1  0
b    0  0  1  1
c    0  0  0  1
d    0  0  0  0

Fig. 5.3.1

Adjacency Matrix

Step 1

The number of 1's present in each column of the adjacency matrix represents the indegree of the corresponding vertex.

In Fig. 5.3.1: Indegree [a] = 0, Indegree [b] = 1, Indegree [c] = 2, Indegree [d] = 2.

Step 2

Enqueue the vertices whose indegree is 0.

In Fig. 5.3.1 the indegree of vertex 'a' is 0, so place it on the queue.

Step 3

Dequeue the vertex 'a' from the queue and decrement the indegrees of its adjacent vertices 'b' and 'c'.

Hence, Indegree [b] = 0 and Indegree [c] = 1.

Now, enqueue the vertex 'b' as its indegree becomes zero.

Step 4

Dequeue the vertex 'b' from Q and decrement the indegrees of its adjacent vertices 'c' and 'd'.

Hence, Indegree [c] = 0 and Indegree [d] = 1.

Now, enqueue the vertex 'c' as its indegree falls to zero.

Step 5

Dequeue the vertex 'c' from Q and decrement the indegree of its adjacent vertex 'd'.

Hence, Indegree [d] = 0.

Now, enqueue the vertex 'd' as its indegree falls to zero.

Step 6

Dequeue the vertex 'd'.

Step 7

As the queue becomes empty, the topological ordering is complete; it is simply the order in which the vertices were dequeued.

             INDEGREE BEFORE DEQUEUE #
VERTEX     1     2     3     4
a         [0]    0     0     0
b          1    [0]    0     0
c          2     1    [0]    0
d          2     2     1    [0]
ENQUEUE    a     b     c     d
DEQUEUE    a     b     c     d

RESULT OF APPLYING TOPOLOGICAL SORT TO THE GRAPH IN FIG. 5.3.1

Example

Adjacency Matrix :-

       V1 V2 V3 V4 V5 V6 V7
V1      0  1  1  1  0  0  0
V2      0  0  0  1  1  0  0
V3      0  0  0  0  0  1  0
V4      0  0  1  0  0  1  1
V5      0  0  0  1  0  0  1
V6      0  0  0  0  0  0  0
V7      0  0  0  0  0  1  0
INDEGREE 0 1  2  3  1  3  2

Indegree [V1] = 0    Indegree [V4] = 3
Indegree [V2] = 1    Indegree [V5] = 1
Indegree [V3] = 2    Indegree [V6] = 3
Indegree [V7] = 2

             INDEGREE BEFORE DEQUEUE #
VERTEX     1     2     3     4     5     6     7
V1        [0]    0     0     0     0     0     0
V2         1    [0]    0     0     0     0     0
V3         2     1     1     1    [0]    0     0
V4         3     2     1    [0]    0     0     0
V5         1     1    [0]    0     0     0     0
V6         3     3     3     3     2     1    [0]
V7         2     2     2     1     0    [0]    0
ENQUEUE   V1    V2    V5    V4   V3,V7         V6
DEQUEUE   V1    V2    V5    V4    V3    V7    V6

RESULT OF APPLYING TOPOLOGICAL SORT TO THE GRAPH IN FIG.

The topological order is V1, V2, V5, V4, V3, V7, V6.

Analysis

The running time of this algorithm is O(|E| + |V|), where E represents the edges and V represents the vertices of the graph.

5.4 Shortest Path Algorithm

The shortest path algorithm determines the minimum cost of the path from a source to every other vertex.

The cost of the path V1, V2, ..., VN is the sum of C(i, i+1) for i = 1 to N - 1. This is referred to as the weighted path length. The unweighted path length is merely the number of edges on the path, namely N - 1.

Two types of shortest path problems exist, namely:

1. The single source shortest path problem

2. The all pairs shortest path problem

The single source shortest path algorithm finds the minimum cost from a single source vertex to all other vertices. Dijkstra's algorithm, which follows the greedy technique, is used to solve this problem.

The all pairs shortest path problem finds the shortest distance from each vertex to all other vertices. To solve this problem, a dynamic programming technique known as Floyd's algorithm is used.

These algorithms are applicable to both directed and undirected weighted graphs, provided that they do not contain a cycle of negative length.

Single Source Shortest Path

Given an input graph G = (V, E) and a distinguished vertex S, find the shortest path from S to every other vertex in G. This problem could be applied to both weighted and unweighted graph.

5.4.1 Unweighted Shortest Path

In the unweighted shortest path problem, all the edges are assigned a weight of 1. For each vertex, the following three pieces of information are maintained.

Algorithm for an unweighted graph

known: Specifies whether the vertex is processed or not. It is set to 1 after it is processed; otherwise it is 0. Initially all vertices are marked unknown, i.e. 0.

dv: Specifies the distance from the source 's'. Initially all vertices are unreachable except for s, whose path length is 0.

pv: Specifies the bookkeeping variable which will allow us to print the actual path, i.e. the vertex which caused the change in dv.

To implement the unweighted shortest path, perform the following steps:

Step 1: Assign the source node as 's' and enqueue 's'.

Step 2: Dequeue the vertex 's' from the queue, mark it as known, and find its adjacent vertices.

Step 3: If the distance of an adjacent vertex is equal to infinity, then change the distance of that vertex to the distance of its source vertex incremented by 1 and enqueue the vertex.

Step 4: Repeat from Step 2 until the queue becomes empty.

ROUTINE FOR UNWEIGHTED SHORTEST PATH

void Unweighted(Table T)
{
    Queue Q;
    Vertex V, W;

    Q = CreateQueue(NumVertex);
    MakeEmpty(Q);
    /* Enqueue the start vertex s */
    Enqueue(s, Q);
    while (!IsEmpty(Q))
    {
        V = Dequeue(Q);
        T[V].Known = True;            /* not needed anymore */
        for each W adjacent to V
            if (T[W].Dist == INFINITY)
            {
                T[W].Dist = T[V].Dist + 1;
                T[W].Path = V;
                Enqueue(W, Q);
            }
    }
    DisposeQueue(Q);                  /* free the memory */
}

Illustrations

Source vertex 'a' is initially assigned a path length 0.

Fig. 5.4.1 (a)

Fig. 5.4.1 (b)

V    KNOWN    dv    pv
a      0       0     0
b      0       ∞     0
c      0       ∞     0
d      0       ∞     0
Queue: a

INITIAL CONFIGURATION

After finding all vertices whose path length from 'a' is 1:

Fig. 5.4.1 (c)

'a' is dequeued.

V    KNOWN    dv    pv
a      1       0     0
b      0       1     a
c      0       1     a
d      0       ∞     0
Queue: b, c

After finding all vertices whose path length from 'a' is 2:

Fig. 5.4.1 (d)

'b' is dequeued.

V    KNOWN    dv    pv
a      1       0     0
b      1       1     a
c      0       1     a
d      0       2     b
Queue: c, d

'c' is dequeued.

V    KNOWN    dv    pv
a      1       0     0
b      1       1     a
c      1       1     a
d      0       2     b
Queue: d

'd' is dequeued.

V    KNOWN    dv    pv
a      1       0     0
b      1       1     a
c      1       1     a
d      1       2     b
Queue: empty

The shortest paths from the source vertex 'a' to the other vertices are:

a -> b is 1
a -> c is 1
a -> d is 2

Illustrations 2

Fig. 5.4.2 (a) An unweighted directed graph G. V3 is taken as the source node and its path length is initialized to 0.

Fig. 5.4.2 (b) Graph after marking the start node as reachable in zero edges.

Fig. 5.4.2 (c) Graph after finding all vertices whose path length from the source vertex is 1.

Fig. 5.4.2 (d) Graph after finding all vertices whose path length from the source vertex is 2.

Fig. 5.4.2 (e) Final shortest paths.

V     Known    dv    pv
V1      0       ∞     0
V2      0       ∞     0
V3      0       0     0
V4      0       ∞     0
V5      0       ∞     0
V6      0       ∞     0
V7      0       ∞     0

Initial configuration of the table used in the unweighted shortest path computation.

          Initial State       V3 Dequeued        V1 Dequeued
V       known  dv  pv       known  dv  pv       known  dv  pv
V1        0    ∞    0         0     1   V3        1     1   V3
V2        0    ∞    0         0     ∞    0        0     2   V1
V3        0    0    0         1     0    0        1     0    0
V4        0    ∞    0         0     ∞    0        0     2   V1
V5        0    ∞    0         0     ∞    0        0     ∞    0
V6        0    ∞    0         0     1   V3        0     1   V3
V7        0    ∞    0         0     ∞    0        0     ∞    0
Q:            V3                V1, V6             V6, V2, V4

In general, when a vertex 'V' is dequeued and the distance of its adjacent vertex 'W' is infinity, the distance and path of 'W' are calculated as follows:

T[W].Dist = T[V].Dist + 1
T[W].Path = V

When V3 is dequeued, known is set to 1 for that vertex, and the distances of its adjacent vertices V1 and V6 are updated if they are INFINITY.

The path length of V1 is calculated as

T[V1].Dist = T[V3].Dist + 1 = 0 + 1 = 1

and its actual path is also recorded as

T[V1].Path = V3

Similarly,

T[V6].Dist = T[V3].Dist + 1 = 1
T[V6].Path = V3

Similarly, when the other vertices are dequeued, the table is updated as above.

          V6 Dequeued         V2 Dequeued        V4 Dequeued
V       known  dv  pv       known  dv  pv       known  dv  pv
V1        1     1   V3        1     1   V3        1     1   V3
V2        0     2   V1        1     2   V1        1     2   V1
V3        1     0    0        1     0    0        1     0    0
V4        0     2   V1        0     2   V1        1     2   V1
V5        0     ∞    0        0     3   V2        0     3   V2
V6        1     1   V3        1     1   V3        1     1   V3
V7        0     ∞    0        0     ∞    0        0     3   V4
Q:          V2, V4              V4, V5             V5, V7

          V5 Dequeued         V7 Dequeued
V       known  dv  pv       known  dv  pv
V1        1     1   V3        1     1   V3
V2        1     2   V1        1     2   V1
V3        1     0    0        1     0    0
V4        1     2   V1        1     2   V1
V5        1     3   V2        1     3   V2
V6        1     1   V3        1     1   V3
V7        0     3   V4        1     3   V4
Q:            V7                 empty

Data changes during the unweighted shortest path algorithm.

The shortest distances from the source vertex V3 to all other vertices are listed below:

V3 -> V1 is 1
V3 -> V2 is 2
V3 -> V4 is 2
V3 -> V5 is 3
V3 -> V6 is 1
V3 -> V7 is 3

Note: The running time of this algorithm is O(|E| + |V|) if an adjacency list is used ('E' represents the edges and 'V' represents the vertices of the graph).

5.4.2 Dijkstra's Algorithm

The general method to solve the single source shortest path problem is known as Dijkstra's algorithm. It is applied to a weighted graph G.

Dijkstra's algorithm is the prime example of the greedy technique, which generally solves a problem in stages by doing what appears to be the best thing at each stage. This algorithm proceeds in stages, just like the unweighted shortest path algorithm. At each stage it selects a vertex V which has the smallest dv among all the unknown vertices, declares the shortest path from S to V as known, and marks V as known. We then set dw = dv + Cvw if the new value for dw would be an improvement.

ROUTINE FOR DIJKSTRA'S ALGORITHM

void Dijkstra(Graph G, Table T)
{
    int i;
    Vertex V, W;

    ReadGraph(G, T);                  /* read graph from adjacency list */
    /* Table initialization */
    for (i = 0; i < NumVertex; i++)
    {
        T[i].Known = False;
        T[i].Dist = Infinity;
        T[i].Path = NotAVertex;
    }
    T[Start].Dist = 0;
    for (;;)
    {
        V = smallest unknown distance vertex;
        if (V == NotAVertex)
            break;
        T[V].Known = True;
        for each W adjacent to V
            if (!T[W].Known)
            {
                T[W].Dist = Min(T[W].Dist, T[V].Dist + Cvw);
                T[W].Path = V;
            }
    }
}
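A concrete sketch of the same algorithm over an adjacency cost matrix is shown below. The matrix values are assumed from the small worked example that follows (edges a->b of cost 2, a->d of cost 1 and d->c of cost 1); the INF sentinel and the names are our own.

#include <stdio.h>

#define NV  4
#define INF 1000000   /* stands in for "infinity" */

int main(void)
{
    /* cost matrix assumed from the example: a->b = 2, a->d = 1, d->c = 1 */
    int C[NV][NV] = {
        {  0,   2, INF,   1},
        {INF,   0, INF, INF},
        {INF, INF,   0, INF},
        {INF, INF,   1,   0}
    };
    int dist[NV], known[NV] = {0}, path[NV];
    int i, v, w;

    for (i = 0; i < NV; i++) { dist[i] = INF; path[i] = -1; }
    dist[0] = 0;                              /* source vertex 'a' */

    for (;;)
    {
        v = -1;                               /* pick smallest unknown distance vertex */
        for (i = 0; i < NV; i++)
            if (!known[i] && (v == -1 || dist[i] < dist[v]))
                v = i;
        if (v == -1 || dist[v] == INF)
            break;
        known[v] = 1;
        for (w = 0; w < NV; w++)              /* relax the edges out of v */
            if (!known[w] && C[v][w] != INF && dist[v] + C[v][w] < dist[w])
            {
                dist[w] = dist[v] + C[v][w];
                path[w] = v;
            }
    }
    for (i = 0; i < NV; i++)
        printf("%c: %d\n", 'a' + i, dist[i]); /* a:0 b:2 c:2 d:1 */
    return 0;
}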

Example 1 :

Figure 5.4.3 The directed graph G

Fig. 5.4.3 (a)

V    known    dv    pv
a      0       0     0
b      0       ∞     0
c      0       ∞     0
d      0       ∞     0

Initial Configuration

Vertex 'a' is chosen as the source and is declared a known vertex.

Then the adjacent vertices of 'a' are found and their distances are updated as follows:

T[b].Dist = Min [T[b].Dist, T[a].Dist + C a,b] = Min [∞, 0 + 2] = 2

T[d].Dist = Min [T[d].Dist, T[a].Dist + C a,d] = Min [∞, 0 + 1] = 1

V    known    dv    pv
a      1       0     0
b      0       2     a
c      0       ∞     0
d      0       1     a

After 'a' is declared known

Fig. 5.4.3 (b)

Now select the vertex with the minimum distance which is not yet known and mark it as visited. Here 'd' is the next minimum distance vertex. The vertex adjacent to 'd' is 'c'; therefore the distance of 'c' is updated as follows:

T[c].Dist = Min [T[c].Dist, T[d].Dist + C d,c] = Min [∞, 1 + 1] = 2

V    known    dv    pv
a      1       0     0
b      0       2     a
c      0       2     d
d      1       1     a

After 'd' is declared known

Fig. 5.4.3 (c)

The next minimum vertex is 'b'; mark it as visited.

Since the adjacent vertex 'd' is already visited, select the next minimum vertex 'c' and mark it as visited.

V    known    dv    pv
a      1       0     0
b      1       2     a
c      0       2     d
d      1       1     a

After 'b' is declared known

V    known    dv    pv
a      1       0     0
b      1       2     a
c      1       2     d
d      1       1     a

After 'c' is declared known and the algorithm terminates

Fig. 5.4.3 (d)

Fig. 5.4.3 (e)

Example: 2

Fig. 5.4.4 The directed graph G

Fig. 5.4.4 (a)

V     known    dv    pv
V1      0       0     0
V2      0       ∞     0
V3      0       ∞     0
V4      0       ∞     0
V5      0       ∞     0
V6      0       ∞     0
V7      0       ∞     0

INITIAL CONFIGURATION

V1 is taken as the source vertex and is marked known. Then the dv and pv of its adjacent vertices are updated.

Fig. 5.4.4 (b)

V     known    dv    pv
V1      1       0     0
V2      0       2    V1
V3      0       ∞     0
V4      0       1    V1
V5      0       ∞     0
V6      0       ∞     0
V7      0       ∞     0

After V1 is declared known

Fig. 5.4.4 (c)

Fig. 5.4.4 (d)

Fig. 5.4.4 (e)

V     known    dv    pv
V1      1       0     0
V2      0       2    V1
V3      0       3    V4
V4      1       1    V1
V5      0       3    V4
V6      0       9    V4
V7      0       5    V4

After V4 is declared known

V     known    dv    pv
V1      1       0     0
V2      1       2    V1
V3      0       3    V4
V4      1       1    V1
V5      0       3    V4
V6      0       9    V4
V7      0       5    V4

After V2 is declared known

V     known    dv    pv
V1      1       0     0
V2      1       2    V1
V3      0       3    V4
V4      1       1    V1
V5      1       3    V4
V6      0       9    V4
V7      0       5    V4

After V5 is declared known

Fig. 5.4.4 (f)

Fig. 5.4.4 (h)

V     known    dv    pv
V1      1       0     0
V2      1       2    V1
V3      1       3    V4
V4      1       1    V1
V5      1       3    V4
V6      0       8    V3
V7      0       5    V4

After V3 is declared known

V     known    dv    pv
V1      1       0     0
V2      1       2    V1
V3      1       3    V4
V4      1       1    V1
V5      1       3    V4
V6      0       6    V7
V7      1       5    V4

After V7 is declared known

V     known    dv    pv
V1      1       0     0
V2      1       2    V1
V3      1       3    V4
V4      1       1    V1
V5      1       3    V4
V6      1       6    V7
V7      1       5    V4

After V6 is declared known and the algorithm terminates

The shortest distances from the source vertex V1 to all other vertices are listed below:

V1 -> V2 is 2    V1 -> V3 is 3
V1 -> V4 is 1    V1 -> V5 is 3
V1 -> V6 is 6    V1 -> V7 is 5

Note: The total running time of this algorithm is O(|E| + |V|²) = O(|V|²).

5.5 Minimum Spanning Tree

A spanning tree of a connected graph is a connected acyclic subgraph that contains all the vertices of the graph.

A minimum spanning tree of a weighted connected graph G is a spanning tree of the smallest weight, where the weight of a tree is defined as the sum of the weights of all its edges.

The total number of edges in a minimum spanning tree (MST) is |V| - 1, where V is the number of vertices. A minimum spanning tree exists if and only if G is connected. For any spanning tree T, if an edge e that is not in T is added, a cycle is created. The removal of any edge on the cycle reinstates the spanning tree property.

Fig. 5.5.1 Connected Graph G

Spanning trees of G with costs 7, 8, 5 and 9.

The spanning tree with the minimum cost (Cost = 5) is considered the Minimum Spanning Tree.

5.5.1 Prim's Algorithm

Prim's algorithm is one of the ways to compute a minimum spanning tree; it uses a greedy technique. The algorithm begins with a set U initialised to {1}. It then grows a spanning tree one edge at a time. At each step it finds a shortest edge (u, v) such that the cost of (u, v) is the smallest among all edges, where u is in the minimum spanning tree and v is not.

SKETCH OF PRIM'S ALGORITHM

void Prim(Graph G)
{
    MSTTree T;
    Vertex u, v;
    Set of vertices V;
    Set of tree vertices U;

    T = NULL;
    /* Initialization: the tree begins with vertex 1 */
    U = {1};
    while (U != V)
    {
        Let (u, v) be the lowest cost edge such that u is in U and v is in V - U;
        T = T ∪ {(u, v)};
        U = U ∪ {v};
    }
}

ROUTINE FOR PRIMS ALGORITHM

void Prims(Table T)
{
    int i; Vertex V, W;
    /* Table initialization */
    for (i = 0; i < NumVertex; i++)
    {
        T[i].Known = False;
        T[i].Dist = Infinity;
        T[i].Path = 0;
    }
    T[V].Dist = 0;                    /* V is the start vertex */
    for (;;)
    {
        V = smallest unknown distance vertex;
        if (V == NotAVertex)
            break;
        T[V].Known = True;
        for each W adjacent to V
            if (!T[W].Known)
            {
                T[W].Dist = Min(T[W].Dist, Cvw);
                T[W].Path = V;
            }
    }
}

Example :-
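When the loop finishes, the Dist and Path fields describe the tree: for every vertex W other than the start vertex, (Path[W], W) is an edge of the minimum spanning tree and Dist[W] is its cost. A small sketch (our own illustration, using plain arrays and the values from the worked example of Fig. 5.5.2 below) that lists the edges and totals the cost:

#include <stdio.h>

int main(void)
{
    /* Path and Dist values taken from the worked example of Fig. 5.5.2:
       b's parent is a (cost 2), c's parent is d (cost 1), d's parent is a (cost 1). */
    char vertex[] = {'a', 'b', 'c', 'd'};
    int  path[]   = {-1,  0,   3,   0};   /* index of each vertex's parent, -1 = start */
    int  dist[]   = { 0,  2,   1,   1};   /* cost of the edge to the parent            */
    int  i, total = 0;

    for (i = 0; i < 4; i++)
        if (path[i] != -1)
        {
            printf("(%c, %c) cost %d\n", vertex[path[i]], vertex[i], dist[i]);
            total += dist[i];
        }
    printf("total cost = %d\n", total);   /* total cost = 4 */
    return 0;
}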


Fig. 5.5.2 (a)

V    known    dv    pv
a      0       0     0
b      0       ∞     0
c      0       ∞     0
d      0       ∞     0

INITIAL CONFIGURATION

Here 'a' is taken as the source vertex and marked as visited. Then the distances of its adjacent vertices are updated as follows:

T[b].Dist = Min [T[b].Dist, C a,b] = Min (∞, 2) = 2

T[d].Dist = Min [T[d].Dist, C a,d] = Min (∞, 1) = 1

T[c].Dist = Min [T[c].Dist, C a,c] = Min (∞, 3) = 3

Fig. 5.5.2 (b)

V    known    dv    pv
a      1       0     0
b      0       2     a
c      0       3     a
d      0       1     a

After 'a' is marked visited

Next, vertex 'd' with the minimum distance is marked as visited, and the distances of its unknown adjacent vertices are updated.

T[b].Dist = Min [T[b].Dist, C d,b] = Min (2, 2) = 2

T[c].Dist = Min [T[c].Dist, C d,c] = Min (3, 1) = 1

V    known    dv    pv
a      1       0     0
b      0       2     a
c      0       1     d
d      1       1     a

Fig. 5.5.2 (c)

After the vertex 'd' is marked visited

Next, the vertex with the minimum cost, 'c', is marked as visited.

V    known    dv    pv
a      1       0     0
b      0       2     a
c      1       1     d
d      1       1     a

Fig. 5.5.2 (d)

After 'c' is marked visited

Since there is no unknown vertex adjacent to 'c', there is no update to the distances. Finally, the vertex 'b', which is not yet visited, is marked.

V    known    dv    pv
a      1       0     0
b      1       2     a
c      1       1     d
d      1       1     a

Fig. 5.5.2 (e)

After 'b' is marked visited, the algorithm terminates. The minimum cost of this spanning tree is 4 (i.e. C a,b + C a,d + C d,c).

Fig. 5.5.2 (f)

MINIMUM SPANNING TREE

Example 2 :

Undirected Graph G

V     Known    dv    pv
V1      0       0     0
V2      0       ∞     0
V3      0       ∞     0
V4      0       ∞     0
V5      0       ∞     0
V6      0       ∞     0
V7      0       ∞     0

Initial configuration of the table used in Prim's algorithm

Consider V1 as the source vertex and proceed from there.

Fig. 5.5.3 (a)

Vertex V1 is marked as visited and then the distances of its adjacent vertices are updated as follows.

T[V2].Dist = Min [T[V2].Dist, C v1,v2] = Min [∞, 2] = 2
T[V4].Dist = Min [T[V4].Dist, C v1,v4] = Min [∞, 1] = 1
T[V3].Dist = Min [T[V3].Dist, C v1,v3] = Min [∞, 4] = 4

V     Known    dv    pv
V1      1       0     0
V2      0       2    V1
V3      0       4    V1
V4      0       1    V1
V5      0       ∞     0
V6      0       ∞     0
V7      0       ∞     0

Fig. 5.5.3 (b) The table after V1 is declared known

Vertex V4 is marked as visited and then the distances of its adjacent vertices are updated.

Fig. 5.5.3 (c)

V     Known    dv    pv
V1      1       0     0
V2      0       2    V1
V3      0       2    V4
V4      1       1    V1
V5      0       7    V4
V6      0       8    V4
V7      0       4    V4

The table after V4 is declared known

Vertex V2 is visited and the distances of its adjacent vertices are updated.

T[V4].Dist = Min [T[V4].Dist, C v2,v4] = Min [1, 3] = 1
T[V5].Dist = Min [T[V5].Dist, C v2,v5] = Min [7, 10] = 7

Fig. 5.5.3 (e)

V     Known    dv    pv
V1      1       0     0
V2      1       2    V1
V3      0       2    V4
V4      1       1    V1
V5      0       7    V4
V6      0       8    V4
V7      0       4    V4

The table after V2 is declared known

T[V6].Dist = Min [T[V6].Dist, C v3,v6] = Min [8, 5] = 5

V     Known    dv    pv
V1      1       0     0
V2      1       2    V1
V3      1       2    V4
V4      1       1    V1
V5      0       7    V4
V6      0       5    V3
V7      0       4    V4

The table after V3 is declared known

T[V6].Dist = Min [T[V6].Dist, C v7,v6] = Min [5, 1] = 1

T[V5].Dist = Min [T[V5].Dist, C v7,v5] = Min [7, 6] = 6

Fig. 5.5.3 (f)

Fig. 5.5.3 (g)

V     Known    dv    pv
V1      1       0     0
V2      1       2    V1
V3      1       2    V4
V4      1       1    V1
V5      0       6    V7
V6      0       1    V7
V7      1       4    V4

The table after V7 is declared known

V     Known    dv    pv
V1      1       0     0
V2      1       2    V1
V3      1       2    V4
V4      1       1    V1
V5      0       6    V7
V6      1       1    V7
V7      1       4    V4

The table after V6 is declared known

V     Known    dv    pv
V1      1       0     0
V2      1       2    V1
V3      1       2    V4
V4      1       1    V1
V5      1       6    V7
V6      1       1    V7
V7      1       4    V4

Fig. 5.5.3 (h)

The table after V5 is declared known

The minimum cost of this spanning tree is 16.

Fig. 5.5.3 (i)

5.6 Depth First Search

Depth-first search works by selecting one vertex V of G as a start vertex; V is marked visited. Then each unvisited vertex adjacent to V is searched in turn, using depth-first search recursively. This process continues until a dead end, i.e. a vertex with no adjacent unvisited vertices, is encountered. At a dead end, the algorithm backs up one edge to the vertex it came from and tries to continue visiting unvisited vertices from there.

The algorithm eventually halts after backing up to the starting vertex, with the latter being a dead end. By then, all the vertices in the same connected component as the starting vertex have been visited. If unvisited vertices still remain, the depth-first search must be restarted at any one of them.

To implement depth-first search, perform the following steps:

Step 1: Choose any node in the graph. Designate it as the search node and mark it as visited.

Step 2: Using the adjacency matrix of the graph, find a node adjacent to the search node that has not been visited yet. Designate this as the new search node and mark it as visited.

Step 3: Repeat Step 2 using the new search node. If no node satisfying (2) can be found, return to the previous search node and continue from there.

Step 4: When a return to the previous search node in (3) is impossible, the search from the originally chosen search node is complete.

Step 5: If the graph still contains unvisited nodes, choose any node that has not been visited and repeat steps (1) through (4).

ROUTINE FOR DEPTH FIRST SEARCH

void DFS(Vertex V)
{
    Visited[V] = True;
    for each W adjacent to V
        if (!Visited[W])
            DFS(W);
}
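A concrete version of this routine over an adjacency matrix can be sketched as follows; the matrix is the one of the example below, and the Visited array, vertex labels and driver are our own additions.

#include <stdio.h>

#define NV 4   /* vertices A, B, C, D */

/* adjacency matrix of the example graph */
int adj[NV][NV] = {
    {0, 1, 1, 1},
    {1, 0, 0, 1},
    {1, 0, 0, 1},
    {1, 1, 1, 0}
};
int visited[NV];

void DFS(int v)
{
    int w;
    visited[v] = 1;
    printf("%c ", 'A' + v);            /* visit order */
    for (w = 0; w < NV; w++)
        if (adj[v][w] && !visited[w])
            DFS(w);
}

int main(void)
{
    DFS(0);          /* start the search at vertex A; prints: A B D C */
    return 0;
}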

Example: -

Fig. 5.6

Adjacency Matrix

     A  B  C  D
A    0  1  1  1
B    1  0  0  1
C    1  0  0  1
D    1  1  1  0

Implementation

1. Let 'A' be the source vertex. Mark it as visited.

2. The unvisited vertex adjacent to 'A' is 'B'. Mark it as visited.

3. From 'B' the next unvisited adjacent vertex is 'D'. Mark it as visited.

4. From 'D' the next unvisited vertex is 'C'. Mark it as visited.

Depth First Spanning Tree

Applications of Depth First Search

1. To check whether an undirected graph is connected or not.
2. To check whether a connected undirected graph is biconnected or not.
3. To check the acyclicity of a directed graph.

5.6.1 Undirected Graphs

An undirected graph is 'connected' if and only if a depth-first search starting from any node visits every node.

An Undirected Graph

Adjacency Matrix

     A  B  C  D  E
A    0  1  0  1  1
B    1  0  1  1  0
C    0  1  0  1  1
D    1  1  1  0  0
E    1  0  1  0  0

Implementation

We start at vertex 'A', mark A as visited and call DFS(B) recursively. DFS(B) marks B as visited and calls DFS(C) recursively.

Fig. 5.6.1 (a)

DFS(C) marks C as visited and calls DFS(D) recursively. No recursive call is made to DFS(B), since B is already visited.

Fig. 5.6.1 (b)

DFS(D) marks D as visited. DFS(D) sees A, B and C as marked, so no recursive call is made there, and DFS(D) returns back to DFS(C).

DFS(C) calls DFS(E), where E is the unseen vertex adjacent to C.

Fig. 5.6.1 (c)

Fig. 5.6.1 (d)

Since all the vertices starting from 'A' are visited, the above graph is said to be connected. If the graph is not connected, then processing all nodes requires several calls to DFS, and each call generates a tree. This entire collection is a depth-first spanning forest.

5.6.2 Biconnectivity

A connected undirected graph is biconnected if there are no vertices whose removal disconnects the rest of the graph.

Articulation Points

The vertices whose removal would disconnect the graph are known as articulation points.

Fig. 5.6.2 Connected Undirected Graph

Here the removal of vertex 'C' will disconnect G from the graph.

Similarly, the removal of vertex 'D' will disconnect E and F from the graph. Therefore 'C' and 'D' are articulation points.

Fig. 5.6.2 (a) Removal of vertex 'C'

Fig. 5.6.2 (b) Removal of vertex 'D'

The graph is not biconnected if it has articulation points.

Depth-first search provides a linear time algorithm to find all articulation points in a connected graph.

Steps to find Articulation Points:

Step 1: Perform a depth-first search, starting at any vertex.

Step 2: Number the vertices as they are visited, as Num(v).

Step 3: For every vertex v in the depth-first spanning tree, compute the lowest-numbered vertex, which we call Low(v), that is reachable from v by taking zero or more tree edges and then possibly one back edge. By definition, Low(v) is the minimum of

(i) Num(v),
(ii) the lowest Num(w) among all back edges (v, w),
(iii) the lowest Low(w) among all tree edges (v, w).

Step 4:

(i) The root is an articulation point if and only if it has more than one child.
(ii) Any vertex v other than the root is an articulation point if and only if v has some child w such that Low(w) >= Num(v).

The time taken to compute this on a graph is O(|E| + |V|).

Note

For any edge (v, w) we can tell whether it is a tree edge or a back edge merely by checking Num(v) and Num(w).

If Num(w) > Num(v), then the edge (v, w) is a tree (forward) edge; otherwise it is a back edge.

Fig. 5.6.3 Num(v) = 1, Num(w) = 2; back edge (w, v)

ROUTINE TO COMPUTE LOW AND TEST FOR ARTICULATION POINTS

void AssignLow(Vertex V)
{
    Vertex W;

    Low[V] = Num[V];                               /* Rule 1 */
    for each W adjacent to V
    {
        if (Num[W] > Num[V])                       /* forward (tree) edge */
        {
            AssignLow(W);
            if (Low[W] >= Num[V])
                printf("%v is an articulation point\n", V);
            Low[V] = Min(Low[V], Low[W]);          /* Rule 3 */
        }
        else
            if (Parent[V] != W)                    /* back edge */
                Low[V] = Min(Low[V], Num[W]);      /* Rule 2 */
    }
}

Fig. 5.6.4 Depth-first tree for Fig. 5.6.2 with Num and Low

Low can be computed by performing a postorder traversal of the depth-first spanning tree, i.e.

Low (F) = Min (Num (F), Num (D))       /* there is no tree edge and only one back edge */
        = Min (6, 4) = 4

Low (E) = Min (Num (E), Low (F))       /* there is no back edge */
        = Min (5, 4) = 4

Low (D) = Min (Num (D), Low (E), Num (A))
        = Min (4, 4, 1) = 1

Low (G) = Min (Num (G)) = 7            /* there is no tree edge and no back edge */

Low (C) = Min (Num (C), Low (D), Low (G))
        = Min (3, 1, 7) = 1

Similarly, Low (B) = Min (Num (B), Low (C)) = Min (2, 1) = 1

Low (A) = Min (Num (A), Low (B)) = Min (1, 1) = 1

From Fig. 5.6.4 it is clear that Low (G) >= Num (C), i.e. 7 >= 3 (if Low (W) >= Num (V), then V is an articulation point). Therefore 'C' is an articulation point.

Similarly, Low (E) = Num (D); hence D is an articulation point.

5.7.2 NP-Complete Problems

A decision problem D is said to be NP-complete if

1. It belongs to class NP.
2. Every problem in NP is polynomially reducible to D.

A problem P1 can be reduced to P2 as follows:

Provide a mapping so that any instance of P1 can be transformed into an instance of P2. Solve P2 and then map the answer back to the original. As an example, numbers are entered into a pocket calculator in decimal. The decimal numbers are converted to binary, and all calculations are performed in binary. Then the final answer is converted back to decimal for display. For P1 to be polynomially reducible to P2, all the work associated with the transformations must be performed in polynomial time.

The reason that NP-complete problems are the hardest NP problems is that a problem that is NP-complete can essentially be used as a subroutine for any problem in NP, with only a polynomial amount of overhead. Suppose we have an NP-complete problem P1. Suppose P2 is known to be in NP. Suppose further that P1 polynomially reduces to P2, so that we can solve P1 by using P2 with only a polynomial time penalty. Since P1 is NP-complete, every problem in NP polynomially reduces to P1. By applying the closure property of polynomials, we see that every problem in NP is polynomially reducible to P2: we reduce the problem to P1 and then reduce P1 to P2. Thus, P2 is NP-complete. The travelling salesman problem is NP-complete. It is easy to see that a solution can be checked in polynomial time, so it is certainly in NP. The Hamiltonian cycle problem can be polynomially reduced to the travelling salesman problem.

Hamiltonian Cycle Problem transformed to Travelling Salesman Problem.

Some examples of NP-complete problems are the Hamiltonian circuit, travelling salesman, knapsack, graph colouring, bin packing and partition problems.

UNIT- V

PART - A Questions

1 . Define a graph

2. Compare directed graph and undirected graph

3. Define path, degree and cycle in a graph

4. What is an adjacency matrix?

5. Give the adjacency list for the following graph

6. Define Topological Sort

7. Define Shortest path problem. Give examples

8. Define Minimum Spanning Tree and write its properties.

9. What is DAG? Write its purpose

10. What are the different ways of traversing a graph?

11. What are the various applications of depth first search?

12. What is an articulation point?

13. When is a graph said to be biconnected?

14. Write down the recursive routine for depth first search.

15. Write a procedure to check the biconnectivity of a graph using DFS.

16. Define Class NP

17. What is meant by an NP-complete problem?

PART - B

1. What is Topological Sort? Write down the pseudocode to perform topological sort and apply it to the following graph.

2. Explain the Dijkstra's algorithm and find the shortest path from A to all other vertices in the following graph.

3. Explain Prim's algorithm in detail and find the minimum spanning tree for the following graph.

4. Find all the articulation points in the given graph. Show the depth-first spanning tree and the values of Num and Low for each vertex.


APPENDIX I

REFERENCE BOOKS

"Data Structures and Algorithm Analysis in C" by Mark Allen Weiss.
"Data Structures and Algorithms" by Alfred Aho, John E. Hopcroft and Jeffrey D. Ullman.
"Data Structures and Algorithms in C++" by Adam Drozdek.
"Introduction to Data Structures" by Bhagat Singh and Thomas L. Naps.
"Computer Algorithms" by Sara Baase and Allen Van Gelder.
"Introduction to Design and Analysis" by Sara Baase and Allen Van Gelder.
"How to Solve It by Computer" by Dromey.

B.E./B.Tech. DEGREE EXAMINATION, NOVEMBER/DECEMBER 2005.

Second Semester

Computer Science and Engineering

CS 1151- DATA STRUCTURES

(Common to Information Technology and B.E. (Part-Time) First Semester Computer Science and Engineering (Regulations 2005))

Time: Three hours

Maximum: 100 marks

Answer ALL questions.

PART - A (10 x 2 = 20 marks)

1. Define the top-down design strategy.

Top-down design starts from the overall problem and refines the solution step by step towards the goal, breaking the problem into smaller subproblems.

2. Define the worst case and average case complexities of an algorithm.

The worst case complexity for a given problem size n corresponds to the maximum complexity encountered among all problems of size n. The average case complexity of the algorithm is averaged over all possible problems of size n.

3. Swap two adjacent elements by adjusting only the pointers (and not the data) using a singly linked list.

void SwapWithNext( Position BeforeP, List L )
{
    Position P, AfterP;

    P = BeforeP->Next;
    AfterP = P->Next;            /* swap P with the node that follows it */

    P->Next = AfterP->Next;
    BeforeP->Next = AfterP;
    AfterP->Next = P;
}

4. Define a queue model.

The model of a queue is shown below:

    Dequeue(Q)  <---  [   Queue Q   ]  <---  Enqueue(Q)

The two basic operations are Enqueue, which inserts an element at the rear of the queue, and Dequeue, which deletes (and returns) the element at the front.

5. A full node is a node with two children. Prove that the number of full nodes plus one is equal to the number of leaves in a non-empty binary tree.

Let N = number of nodes, F = number of full nodes, L = number of leaves, and H = number of half nodes (nodes with one child). Clearly N = F + H + L. Further, counting edges, 2F + H = N - 1. Subtracting the second equation from the first gives L - F = 1, i.e., L = F + 1.

6. Suppose that we replace the deletion function, which finds, returns, and removes the minimum element in the priority queue, with FindMin. Can both Insert and FindMin be implemented in constant time?

Yes; when an element is inserted, we compare it to the current minimum and change the minimum if the new element is smaller. Deletion operations, however, become expensive in this scheme.
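A minimal sketch of this idea, assuming an unsorted array plus a cached minimum (the names and the fixed capacity are illustrative only, not part of the printed answer):

#define MaxPQ 1000

typedef struct
{
    int Items[ MaxPQ ];
    int Size;           /* number of elements currently stored */
    int Min;            /* cached minimum, valid when Size > 0 */
} ConstTimePQ;

void Insert( ConstTimePQ *Q, int X )     /* O(1), overflow checks omitted */
{
    Q->Items[ Q->Size++ ] = X;
    if( Q->Size == 1 || X < Q->Min )
        Q->Min = X;                      /* update the cached minimum */
}

int FindMin( const ConstTimePQ *Q )      /* O(1) */
{
    return Q->Min;
}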

7. What is the running time of insertion sort if all keys are equal?

O(N), because the inner while loop terminates immediately. Of course, accidentally changing the test to include equalities raises the running time to quadratic for this type of input.

8. Determine the average running time of quicksort.

The average running time of quicksort is O(n log n).

9. What is the space requirement of an adjacency list representation of a graph?

O(|E| + |V|), where E is the set of edges and V is the set of vertices.

10. Define an NP-complete problem.

A decision problem D is said to be NP-complete if
(i) it belongs to the class NP;
(ii) every problem in NP is polynomially reducible to D.

PART B - (5 x 16 = 80 marks)

11. (i) Find the average search cost of the binary search algorithm. (8)

The average search cost of a binary search procedure with n keys is

Average search cost
    = (1/n) Σ (i + 1) · 2^i        (sum taken for i = 0 to ⌊log2 n⌋)
    = (1/n) { Σ i · 2^i + n }

Note that i · x^i = x · d/dx (x^i) and 1 + x + x^2 + ... + x^k = (x^(k+1) - 1)/(x - 1), so Σ i · 2^i can be evaluated by differentiating the geometric series and setting x = 2. This gives

    (1/n) Σ i · 2^i = (1/n) [ (n + 1)(⌊log2 n⌋ - 1) + 2 ],

so the average search cost is (1/n)[(n + 1)(⌊log2 n⌋ - 1) + 2] + 1, which is approximately log2 n for large n.
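For reference, a typical binary search routine of the kind being analysed can be written as follows (a sketch; ElementType and NotFound follow the conventions used elsewhere in this book):

#define NotFound ( -1 )

/* Return the index of X in the sorted array A[ 0 .. N-1 ], or NotFound. */
int BinarySearch( const ElementType A[ ], ElementType X, int N )
{
    int Low = 0, High = N - 1, Mid;

    while( Low <= High )
    {
        Mid = ( Low + High ) / 2;
        if( A[ Mid ] < X )
            Low = Mid + 1;
        else if( A[ Mid ] > X )
            High = Mid - 1;
        else
            return Mid;       /* found */
    }
    return NotFound;          /* not present */
}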

11. (ii) Design an algorithm to compute the sum of the squares of n numbers, that is,

    S = a1^2 + a2^2 + ... + an^2.    (8)

Algorithm description:

1. Input n.
2. Set the sum s to 0.
3. Setting i = 1, input the value of a_i.
4. Add a_i * a_i to s.
5. Increment the value of i by 1.
6. Repeat steps 3 to 5 n times.
7. Output the sum s.
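A direct C translation of these steps might look like this (interactive input via scanf is assumed; the variable names are illustrative):

#include <stdio.h>

int main( void )
{
    int n, i;
    double a, s = 0.0;                  /* step 2: set the sum s to 0 */

    scanf( "%d", &n );                  /* step 1: input n */
    for( i = 1; i <= n; i++ )           /* steps 3-6: repeat n times */
    {
        scanf( "%lf", &a );             /* step 3: input a_i */
        s += a * a;                     /* step 4: add a_i * a_i to s */
    }
    printf( "Sum of squares = %f\n", s );   /* step 7: output the sum s */
    return 0;
}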

12. (a) Give a procedure to convert the infix expression a + b * c + (d * e + f) * g to postfix notation.

Algorithm description:

1. First, the symbol a is read, so it is passed through to the output. Then + is read and pushed onto the stack. Next b is read and passed through to the output. The state of affairs at this point is that the stack holds + and the output so far is ab.
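A compact stack-based converter along the same lines is sketched below; it handles single-letter operands, +, * and parentheses only, and its names are illustrative rather than taken from the answer.

static int Prec( char Op )                   /* precedence: * above + */
{
    return Op == '*' ? 2 : Op == '+' ? 1 : 0;
}

void InfixToPostfix( const char *In, char *Out )
{
    char Stack[ 100 ];
    int Top = -1, i;
    char C;

    for( i = 0; In[ i ] != '\0'; i++ )
    {
        C = In[ i ];
        if( C == ' ' )
            continue;
        else if( C == '(' )
            Stack[ ++Top ] = C;              /* always push '(' */
        else if( C == ')' )
        {
            while( Top >= 0 && Stack[ Top ] != '(' )
                *Out++ = Stack[ Top-- ];     /* pop until '(' */
            Top--;                           /* discard the '(' */
        }
        else if( C == '+' || C == '*' )
        {
            while( Top >= 0 && Prec( Stack[ Top ] ) >= Prec( C ) )
                *Out++ = Stack[ Top-- ];     /* pop operators of >= precedence */
            Stack[ ++Top ] = C;
        }
        else
            *Out++ = C;                      /* operand: pass straight to output */
    }
    while( Top >= 0 )
        *Out++ = Stack[ Top-- ];             /* pop any remaining operators */
    *Out = '\0';
}

For the expression above this produces abc*+de*f+g*+.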

12. (b) (ii) Write a routine to insert an element in a linked list.

Routine Insert:

void Insert( ElementType X, List L, Position P )
{
    Position TmpCell;

    TmpCell = malloc( sizeof( struct Node ) );
    if( TmpCell == NULL )
        FatalError( "Out of space!!!" );

    TmpCell->Element = X;
    TmpCell->Next = P->Next;
    P->Next = TmpCell;
}
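The routine above assumes the usual singly linked list declarations; a plausible version (not part of the printed answer) is:

struct Node
{
    ElementType Element;
    struct Node *Next;
};
typedef struct Node *PtrToNode;
typedef PtrToNode List;          /* the list is identified by a header node */
typedef PtrToNode Position;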

13. (a) (i) How do you insert an element in a binary search tree? (8)

Insertion routine for a binary search tree:

SearchTree Insert( ElementType X, SearchTree T )
{
    if( T == NULL )
    {
        /* Create and return a one-node tree */
        T = malloc( sizeof( struct TreeNode ) );
        if( T == NULL )
            FatalError( "Out of space!!!" );
        else
        {
            T->Element = X;
            T->Left = T->Right = NULL;
        }
    }
    else if( X < T->Element )
        T->Left = Insert( X, T->Left );
    else if( X > T->Element )
        T->Right = Insert( X, T->Right );
    /* Else X is already in the tree; do nothing */

    return T;
}
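The declarations assumed by this routine would be roughly as follows (standard, but not printed with the answer):

struct TreeNode
{
    ElementType Element;
    struct TreeNode *Left;
    struct TreeNode *Right;
};
typedef struct TreeNode *SearchTree;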

13. (a) (ii) Show that for the perfect binary tree of height h containing 2^(h+1) - 1 nodes, the sum of the heights of the nodes is 2^(h+1) - 1 - (h + 1). (8)

The tree consists of 1 node at height h, 2 nodes at height h - 1, 2^2 nodes at height h - 2, and in general 2^i nodes at height h - i. The sum of the heights of all nodes is

    S = Σ 2^i (h - i),  for i = 0 to h
      = h + 2(h - 1) + 4(h - 2) + ... + 2^(h-1)(1).

Multiplying by 2 gives

    2S = 2h + 4(h - 1) + 8(h - 2) + ... + 2^h(1).

Subtracting the first equation from the second (note that 2h - 2(h - 1) = 2, 4(h - 1) - 4(h - 2) = 4, and so on), we obtain

    S = -h + 2 + 4 + 8 + ... + 2^(h-1) + 2^h = (2^(h+1) - 1) - (h + 1).

13. (b) Given input (4371, 1323, 6173, 4199, 4344, 9679, 1989) and a hash function h(X) = X mod 10, show the resulting:

(i) separate chaining hash table
(ii) open addressing hash table using linear probing
(iii) open addressing hash table using quadratic probing
(iv) open addressing hash table with second hash function h2(X) = 7 - (X mod 7).

On the assumption that we add collisions to the end of the list (which is the easier way if the hash table is being built by hand), the separate chaining hash table that results is shown below.

(i) Separate chaining table

    0 :
    1 : 4371
    2 :
    3 : 1323 -> 6173
    4 : 4344
    5 :
    6 :
    7 :
    8 :
    9 : 4199 -> 9679 -> 1989

(ii) Open addressing hash table using linear probing

    0 : 9679
    1 : 4371
    2 : 1989
    3 : 1323
    4 : 6173
    5 : 4344
    6 :
    7 :
    8 :
    9 : 4199

(iii) Open addressing hash table using quadratic probing

    0 : 9679
    1 : 4371
    2 :
    3 : 1323
    4 : 6173
    5 : 4344
    6 :
    7 :
    8 : 1989
    9 : 4199

(iv) Open addressing hash table with second hash function h2(X) = 7 - (X mod 7)

1989 cannot be inserted into the table because h2(1989) = 6, and the alternative locations 5, 1, 7 and 3 are already taken. The table at this point is as follows:

    0 :
    1 : 4371
    2 :
    3 : 1323
    4 : 6173
    5 : 9679
    6 :
    7 : 4344
    8 :
    9 : 4199
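As an illustration of how table (ii) is produced, a linear-probing insertion routine could look like this (the HashEntry structure, the fixed TableSize and the assumption that the table never fills are all part of this sketch, not of the answer):

#define TableSize 10

enum KindOfEntry { Empty, Legitimate };

struct HashEntry
{
    int Key;
    enum KindOfEntry Info;       /* all cells start out Empty */
};

static struct HashEntry Cells[ TableSize ];

void InsertLinear( int Key )     /* h(X) = X mod 10, linear probing */
{
    int Pos = Key % TableSize;

    while( Cells[ Pos ].Info == Legitimate )
        Pos = ( Pos + 1 ) % TableSize;       /* probe the next cell */

    Cells[ Pos ].Key = Key;
    Cells[ Pos ].Info = Legitimate;
}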

14. (a) Write down the mergesort algorithm and give its worst case, best case and average case analysis.

Merge Routine:

void Merge( ElementType A[ ], ElementType TmpArray[ ], int Lpos, int Rpos, int RightEnd )
{
    int i, LeftEnd, NumElements, TmpPos;

    LeftEnd = Rpos - 1;
    TmpPos = Lpos;
    NumElements = RightEnd - Lpos + 1;

    /* Main loop */
    while( Lpos <= LeftEnd && Rpos <= RightEnd )
        if( A[ Lpos ] <= A[ Rpos ] )
            TmpArray[ TmpPos++ ] = A[ Lpos++ ];
        else
            TmpArray[ TmpPos++ ] = A[ Rpos++ ];

    while( Lpos <= LeftEnd )        /* Copy rest of first half */
        TmpArray[ TmpPos++ ] = A[ Lpos++ ];
    while( Rpos <= RightEnd )       /* Copy rest of second half */
        TmpArray[ TmpPos++ ] = A[ Rpos++ ];

    /* Copy TmpArray back */
    for( i = 0; i < NumElements; i++, RightEnd-- )
        A[ RightEnd ] = TmpArray[ RightEnd ];
}

Mergesort Routine:

void MSort( ElementType A[ ], ElementType TmpArray[ ], int Left, int Right )
{
    int Center;

    if( Left < Right )
    {
        Center = ( Left + Right ) / 2;
        MSort( A, TmpArray, Left, Center );
        MSort( A, TmpArray, Center + 1, Right );
        Merge( A, TmpArray, Left, Center + 1, Right );
    }
}
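The answer gives only Merge and MSort; a top-level driver along the following lines (assuming ElementType and FatalError as used elsewhere in this book) would allocate the temporary array and start the recursion:

#include <stdlib.h>

void MergeSort( ElementType A[ ], int N )
{
    ElementType *TmpArray;

    TmpArray = malloc( N * sizeof( ElementType ) );
    if( TmpArray != NULL )
    {
        MSort( A, TmpArray, 0, N - 1 );
        free( TmpArray );
    }
    else
        FatalError( "No space for tmp array!!!" );
}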

Analysis of mergesort:

For n = 1, the time to mergesort is constant, which we will denote by 1. Otherwise, the time to mergesort n numbers is equal to the time to do two recursive mergesorts of size n/2, plus the time to merge, which is linear. Therefore the running time is given by the recurrence

    T(1) = 1
    T(n) = 2T(n/2) + n.

Expanding, 2T(n/2) = 2(2T(n/4) + n/2) = 4T(n/4) + n. Continuing in this manner, we obtain

    T(n) = 2^k T(n/2^k) + k·n.

Using 2^k = n (i.e., k = log2 n), we obtain T(n) = n + n log2 n.

The running time of mergesort is O(n log2 n); because the algorithm always splits in half and always merges, this bound holds in the worst, best and average cases.

Or

14. (b) Show how heap sort processes the input 142, 543, 123, 65, 453, 879, 572, 434, 111, 242, 811, 102.

The input is read in as 142, 543, 123, 65, 453, 879, 572, 434, 111, 242, 811, 102.

After the build-heap phase the (max) heap is 879, 811, 572, 434, 543, 123, 142, 65, 111, 242, 453, 102.

879 is removed from the heap and placed at the end. 102 is placed in the hole and percolated down, obtaining

    811, 543, 572, 434, 453, 123, 142, 65, 111, 242, 102, 879.

Continuing the process, we obtain

    572, 543, 142, 434, 453, 123, 102, 65, 111, 242, 811, 879
    543, 453, 142, 434, 242, 123, 102, 65, 111, 572, 811, 879
    453, 434, 142, 111, 242, 123, 102, 65, 543, 572, 811, 879

and, after the remaining steps, the sorted array

    65, 102, 111, 123, 142, 242, 434, 453, 543, 572, 811, 879.
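The trace above can be produced by a routine of roughly the following form (a sketch using 0-based array indices; PercDown restores the max-heap property below position i):

#define LeftChild( i )  ( 2 * ( i ) + 1 )

static void PercDown( ElementType A[ ], int i, int N )
{
    int Child;
    ElementType Tmp;

    for( Tmp = A[ i ]; LeftChild( i ) < N; i = Child )
    {
        Child = LeftChild( i );
        if( Child != N - 1 && A[ Child + 1 ] > A[ Child ] )
            Child++;                                     /* pick the larger child */
        if( Tmp < A[ Child ] )
            A[ i ] = A[ Child ];
        else
            break;
    }
    A[ i ] = Tmp;
}

void Heapsort( ElementType A[ ], int N )
{
    int i;
    ElementType Tmp;

    for( i = N / 2; i >= 0; i-- )                        /* build-heap phase */
        PercDown( A, i, N );
    for( i = N - 1; i > 0; i-- )
    {
        Tmp = A[ 0 ]; A[ 0 ] = A[ i ]; A[ i ] = Tmp;     /* move the maximum to the end */
        PercDown( A, 0, i );                             /* re-heap the remaining i elements */
    }
}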

15. (a) Find a minimum spanning tree for the graph using both Prim's and Kruskal's algorithms.

Prim's Algorithm:

Prim( G = (N, E) )
{
    T = { }
    B = { an arbitrary member of N }
    while ( B != N ) do
    {
        find an edge e = (u, v) of minimum length such that u is in B and v is not in B
        T = T U { e }
        B = B U { v }
    }
    return T
}

Steps 1 to 9: [figures showing the partial spanning tree after each edge is added; the diagrams are not reproduced here.]

Kruskal's Algorithm:

Kruskal( G = (N, E) )
{
    T = { }
    while ( (T has fewer than n - 1 edges) and (E is not empty) ) do
    {
        choose an edge (u, v) of lowest cost from E
        delete (u, v) from E
        if ( (u, v) does not create a cycle in T )
            add (u, v) to T
        else
            discard (u, v)
    }
    return T
}

Steps 1 to 9: [figures showing the edges being examined in increasing order of cost and added or discarded; the diagrams are not reproduced here.]

15. (b) (i) Write down the Dijkstra's algorithm for the shortest path. (8)

Dijkstra's Algorithm:

void Dijkstra( Table T )
{
    Vertex V, W;

    for( ; ; )
    {
        V = smallest unknown distance vertex;
        if( V == NotAVertex )
            break;

        T[ V ].Known = True;

        for each W adjacent to V
            if( !T[ W ].Known )
                if( T[ V ].Dist + Cvw < T[ W ].Dist )
                {   /* Update W */
                    Decrease( T[ W ].Dist to T[ V ].Dist + Cvw );
                    T[ W ].Path = V;
                }
    }
}

15. (b) (ii) Explain how to modify Dijkstra's algorithm to produce a count of the number of different minimum paths from v to w. (8)

Use an array Count such that for any vertex x, Count[x] is the number of distinct shortest paths from s to x known so far. When a vertex v is marked as known, its adjacency list is traversed. Let w be a vertex on the adjacency list.

If dv + cv,w = dw, then increment Count[w] by Count[v], because all shortest paths from s to v with last edge (v, w) give a shortest path to w. If dv + cv,w < dw, then pw and dw get updated. All previously known shortest paths to w are now invalid, but all shortest paths to v now lead to shortest paths to w, so set Count[w] equal to Count[v].
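Sketched against the Dijkstra routine above, the modified relaxation step would look roughly as follows (Count is an extra per-vertex counter, initialised to 1 for the source and 0 elsewhere; it is an assumption of this sketch, not code from the answer):

/* Inside the loop over each W adjacent to the newly known vertex V */
if( !T[ W ].Known )
{
    if( T[ V ].Dist + Cvw < T[ W ].Dist )
    {
        T[ W ].Dist = T[ V ].Dist + Cvw;     /* strictly shorter path found */
        T[ W ].Path = V;
        Count[ W ] = Count[ V ];             /* old shortest paths to W are invalid */
    }
    else if( T[ V ].Dist + Cvw == T[ W ].Dist )
        Count[ W ] += Count[ V ];            /* one more family of minimum paths to W */
}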
