
DESIGN AND ANALYSIS OF ALGORITHMS (III - CSE) – II SEM

UNIT II

Divide and Conquer: General Method, Defective chessboard, Binary Search, finding the
maximum and minimum, Merge sort, Quick sort.

The Greedy Method: The general Method, container loading, knapsack problem, Job sequencing
with deadlines, minimum-cost spanning Trees.

Divide and Conquer


General Method:

In the divide and conquer approach, a problem is divided into smaller problems; the smaller
problems are solved independently, and finally the solutions of the smaller problems are combined
into a solution for the original problem.

1. Divide the original problem into a set of subproblems.


2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to the whole
problem.

Divide and conquer is a design strategy well known for breaking down efficiency
barriers. When the method applies, it often leads to a large improvement in time complexity,
for example from O(n²) to O(n log n) for sorting the elements.

Divide and Conquer is one of the best-known general algorithm design techniques. It works according
to the following general plan:

• Given a function to compute on n inputs, the divide-and-conquer strategy suggests splitting
the inputs into k distinct subsets, 1 < k ≤ n, yielding k sub problems.
• These sub problems must be solved, and then a method must be found to combine the sub
solutions into a solution of the whole.
• If the sub problems are still relatively large, the divide-and-conquer strategy can
be reapplied.
• Often the sub problems resulting from a divide-and-conquer design are of the same type as
the original problem. In those cases the reapplication of the divide-and-conquer principle
is naturally expressed by a recursive algorithm.


Control Abstraction of Divide and Conquer

A control abstraction is a procedure whose flow of control is clear but whose primary
operations are specified by other procedures whose precise meanings are left undefined. The
control abstraction for the divide and conquer technique is DAndC(P), where P is the problem to be
solved.
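The original pseudocode figure for DAndC(P) is not reproduced here. Below is a minimal Python sketch of the control abstraction; the four problem-specific functions are supplied by the caller and their names are placeholders for illustration, matching the roles described below.

    def d_and_c(P, small, solve, divide, combine):
        # Generic divide-and-conquer control abstraction (a sketch).
        if small(P):                    # answer computable directly?
            return solve(P)             # the function 'S' of the notes
        # otherwise split P into subproblems and recur on each
        return combine([d_and_c(Q, small, solve, divide, combine)
                        for Q in divide(P)])

    # Example use: summing a list by divide and conquer.
    total = d_and_c(
        [3, 1, 4, 1, 5, 9],
        small=lambda p: len(p) <= 1,
        solve=lambda p: p[0] if p else 0,
        divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
        combine=sum,
    )
    print(total)   # 23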

In the above specification,


• Initially DAndC(P) is invoked, where P is the problem to be solved.

• Small(P) is a Boolean-valued function that determines whether the input size is small
enough that the answer can be computed without splitting. If so, the function S is
invoked. Otherwise, the problem P is divided into smaller sub problems
P1, P2, ..., Pk, which are solved by recursive applications of DAndC.

• Combine is a function that determines the solution to P using the solutions to the k sub
problems.


Recurrence equation for Divide and Conquer

If an instance of size n is split into a instances of size n/b, the running time is described by
a recurrence of the form T(n) = aT(n/b) + f(n), where f(n) accounts for the time spent dividing
the problem and combining the sub solutions. The recurrence can be solved by i) the substitution
method or ii) the master theorem.

1. Substitution Method - This method repeatedly substitutes the right-hand side for each
occurrence of the function T until all such occurrences disappear.
2. Master Theorem - The efficiency analysis of many divide-and-conquer algorithms is greatly
simplified by the master theorem. It states that, in the recurrence equation T(n) = aT(n/b) + f(n),
if f(n) ∈ Θ(n^d) where d ≥ 0, then

    T(n) ∈ Θ(n^d)            if a < b^d,
    T(n) ∈ Θ(n^d log n)      if a = b^d,
    T(n) ∈ Θ(n^(log_b a))    if a > b^d.

For example, merge sort's recurrence T(n) = 2T(n/2) + n has a = 2, b = 2, d = 1; since a = b^d,
T(n) ∈ Θ(n log n).


Binary Search:

Binary Search is a searching algorithm for finding an element's position in a sorted
array. At each step, the element is searched for in the middle of the current portion of the array.
Binary search can be applied only to a sorted list of items; if the elements are not already
sorted, we need to sort them first.

Problem definition: Let ai, 1 ≤ i ≤ n, be a list of elements sorted in non-decreasing order.
The problem is to find whether a given element x is present in the list or not. If x is present, we
have to determine a value j (the element's position) such that aj = x. If x is not in the list, then j
is set to zero.


Solution:
Let P = (n, ai, ..., al, x) denote an arbitrary instance of the search problem, where n is the number of
elements in the list, ai, ..., al is the list of elements, and x is the key element to be searched for in the
given list. Binary search on the list is done as follows:

Step 1: Pick the index q in the middle of the range [i, l], i.e. q = ⌊(i + l)/2⌋, and compare x with aq.
Step 2: If x = aq, i.e. the key element equals the mid element, the problem is immediately solved.
Step 3: If x < aq, x has to be searched for only in the sub-list ai, ai+1, ..., aq−1.
Therefore the problem reduces to (q − i, ai, ..., aq−1, x).
Step 4: If x > aq, x has to be searched for only in the sub-list aq+1, ..., al. Therefore the problem
reduces to (l − q, aq+1, ..., al, x).

The algorithm can be implemented recursively or iteratively.

Recursive binary search algorithm:
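The pseudocode figure is not reproduced here; the following is a minimal Python sketch of the recursive version. Positions are 1-based and 0 means "not found", as in the problem definition; the sample array is illustrative, not necessarily the one from the notes.

    def bin_search(a, i, l, x):
        # Search for x in a[i..l] (1-based, inclusive).
        if i > l:
            return 0                      # x is not present
        q = (i + l) // 2                  # middle index
        if x == a[q - 1]:                 # a is a 0-based Python list
            return q
        if x < a[q - 1]:
            return bin_search(a, i, q - 1, x)
        return bin_search(a, q + 1, l, x)

    a = [-15, -6, 0, 7, 9, 23, 54, 82, 101]
    print(bin_search(a, 1, len(a), 82))   # 8
    print(bin_search(a, 1, len(a), 42))   # 0 (unsuccessful search)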


Example for Binary Search


Let us illustrate binary search on the following 9 elements:

The number of comparisons required for searching different elements is as follows:


1. Searching for x = 101

Number of comparisons = 4

2. Searching for x = 82

Number of comparisons = 3

3. Searching for x = 42

Number of comparisons = 4


Continuing in this manner, the number of element comparisons needed to find the element at
each of the nine positions is:

Position:    1 2 3 4 5 6 7 8 9
Comparisons: 3 2 3 4 1 3 2 3 4

No element requires more than 4 comparisons to be found. Summing the comparisons needed to
find all nine items and dividing by 9 yields 25/9, or approximately 2.77 comparisons per
successful search on the average. There are ten possible ways that an unsuccessful search may
terminate, depending upon the value of x.

Analysis:
In binary search the basic operation is key comparison. Binary Search can be analyzed
with the best, worst, and average case number of comparisons. The numbers of comparisons for
the recursive and iterative versions of Binary Search are the same, if comparison counting is
relaxed slightly. For Recursive Binary Search, count each pass through the if-then-else block as
one comparison. For Iterative Binary Search, count each pass through the while block as one
comparison. Let us find out how many such key comparison does the algorithm make on an
array of n elements.

Best case - Θ(1). In the best case, the key is the middle element of the array. A constant number of
comparisons (actually just 1) is required.

Worst case - Θ(log₂ n). In the worst case, the key does not exist in the array at all. Through each
recursion or iteration of binary search, the size of the admissible range is halved. This halving
can be done ⌈log₂ n⌉ times, so ⌈log₂ n⌉ comparisons are required.
Sometimes a successful search may also take this maximum number of comparisons,
so the worst-case complexity of a successful binary search is likewise Θ(log₂ n).

Average case - Θ (log2n) To find the average case, take the sum of the product of number of
comparisons required to find each element and the probability of searching for that element. To
simplify the analysis, assume that no item which is not in array will be searched for, and that the
probabilities of searching for each element are uniform.


Space Complexity -
The space requirements for the recursive and iterative versions of binary search are different.
Iterative binary search requires only a constant amount of space, while recursive binary search
requires space proportional to the depth of recursion, Θ(log n), to maintain the recursion stack.


Finding the maximum and minimum

The Max-Min problem is to find the maximum and the minimum element of a given array. We can
solve it efficiently using the divide and conquer approach.

A simple and straightforward algorithm to achieve this is sketched below.
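The StraightMaxMin pseudocode figure is not reproduced here; a minimal Python sketch of the straightforward scan follows. Each element is compared twice, giving the 2(n − 1) comparisons discussed in the explanation.

    def straight_max_min(a):
        # Straightforward scan: two independent comparisons per element.
        mx = mn = a[0]
        for x in a[1:]:
            if x > mx:
                mx = x
            if x < mn:
                mn = x
        return mx, mn

    print(straight_max_min([22, 13, -5, -8, 15, 60, 17, 31, 47]))  # (60, -8)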

Explanation:

• StraightMaxMin requires 2(n − 1) comparisons in the best, average and worst cases.

• An improvement comes from noticing that when a[i] > max is true, a[i] cannot be the
minimum, so the comparison with min can be skipped. Hence we can replace the contents of
the for loop by:

    if (a[i] > max) then max := a[i];
    else if (a[i] < min) then min := a[i];

• On the average, a[i] is greater than max half the time, so the average number of comparisons is 3n/2 − 1.

Algorithm based on Divide and Conquer strategy

Let P = (n, a[i], ..., a[j]) denote an arbitrary instance of the problem. Here n is the number of
elements in the list a[i], ..., a[j], and we are interested in finding the maximum and minimum of
the list. If the list has more than 2 elements, P has to be divided into smaller instances.

For example, we might divide P into the 2 instances

P1 = (⌊n/2⌋, a[1], ..., a[⌊n/2⌋])

P2 = (n − ⌊n/2⌋, a[⌊n/2⌋ + 1], ..., a[n])

After having divided P into 2 smaller sub problems, we can solve them by recursively invoking
the same divide-and-conquer algorithm.


The divide and conquer approach for the Max-Min problem works in three stages (a sketch follows the list):

• If a1 is the only element in the array, a1 is both the maximum and the minimum.

• If the array contains only two elements a1 and a2, then a single comparison between the two
elements decides the minimum and maximum of them.
• If there are more than two elements, the algorithm divides the array from the middle and
creates two sub problems. Both sub problems are treated as independent problems and
the same recursive process is applied to them. This division continues until the sub problem
size becomes one or two.
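The MaxMin pseudocode figure is not reproduced here; a minimal Python sketch of the three-stage recursive algorithm follows (the sample data is illustrative).

    def max_min(a, i, j):
        # Divide-and-conquer MaxMin on a[i..j] (0-based, inclusive).
        if i == j:                          # one element
            return a[i], a[i]
        if j == i + 1:                      # two elements: one comparison
            return (a[j], a[i]) if a[i] < a[j] else (a[i], a[j])
        mid = (i + j) // 2                  # divide from the middle
        max1, min1 = max_min(a, i, mid)
        max2, min2 = max_min(a, mid + 1, j)
        return max(max1, max2), min(min1, min2)

    a = [22, 13, -5, -8, 15, 60, 17, 31, 47]
    print(max_min(a, 0, len(a) - 1))        # (60, -8)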


Analysis – Time Complexity:
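The derivation figure is not reproduced here; in outline, for n a power of 2 the number of comparisons T(n) made by the divide-and-conquer MaxMin satisfies

    T(1) = 0,  T(2) = 1,  T(n) = 2T(n/2) + 2  for n > 2,

since each instance makes two recursive calls and then two comparisons to combine their results. Unrolling the recurrence with n = 2^k:

    T(n) = 2T(n/2) + 2 = 4T(n/4) + 4 + 2 = ... = 2^(k−1)·T(2) + (2^k − 2) = n/2 + n − 2 = 3n/2 − 2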


Compared with the straightforward method (2n − 2 comparisons), this method saves 25% of the comparisons.
Space Complexity:
Compared to the straightforward method, the MaxMin method requires extra stack space for i, j,
max, min, max1 and min1. Given n elements there will be ⌊log₂ n⌋ + 1 levels of recursion, and we
need to save seven values for each recursive call (6 variables + 1 return address).

Merge sort

Merge sort, like quick sort, uses the divide and conquer approach to
sort the elements, and it is one of the most popular and efficient sorting algorithms. It divides the
given list into two equal halves, calls itself for the two halves, and then merges the two sorted
halves. A merge() function performs the merging.

The sub-lists are divided again and again into halves until each list can be divided no further. Then
we combine pairs of one-element lists into two-element lists, sorting them in the process. The
sorted two-element lists are merged into four-element lists, and so on, until we get the fully sorted
list.
The time complexity of merge sort in the best case, worst case and average case is O(n log n),
and the number of comparisons used is nearly optimal.


Algorithm:

The merging of two sorted arrays can be done as follows.

• Two pointers (array indices) are initialized to point to the first elements of the arrays
being merged.
• The elements pointed to are compared, and the smaller of them is added to the new array
being constructed.


After that, the index of the smaller element is incremented to point to its immediate successor in
the array it was copied from. This operation is repeated until one of the two given arrays is
exhausted, and then the remaining elements of the other array are copied to the end of the new
array.
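The merge sort pseudocode figure is not reproduced here; a minimal Python sketch of the algorithm, using the two-pointer merge just described, follows.

    def merge(b, c):
        # Two-pointer merge of the sorted lists b and c.
        out, i, j = [], 0, 0
        while i < len(b) and j < len(c):
            if b[i] <= c[j]:                # copy the smaller element
                out.append(b[i]); i += 1
            else:
                out.append(c[j]); j += 1
        out.extend(b[i:])                   # copy the leftover tail
        out.extend(c[j:])
        return out

    def merge_sort(a):
        if len(a) <= 1:                     # 0 or 1 items: already sorted
            return a
        mid = len(a) // 2                   # divide into two equal halves
        return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]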

Example:


Analysis of Time Complexity:
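The analysis figure is not reproduced here; in outline, for n a power of 2 the number of key comparisons C(n) satisfies C(1) = 0 and C(n) = 2C(n/2) + Cmerge(n), where the merge step needs at most n − 1 comparisons. By the master theorem (a = b = 2, d = 1, so a = b^d), C(n) ∈ Θ(n log n).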

Quick sort
Quicksort is another important sorting algorithm based on the divide-and-conquer
approach. Unlike mergesort, which divides its input elements according to their position in the
array, quicksort divides (or partitions) them according to their value.

A partition is an arrangement of the array’s elements so that all the elements to the left of some
element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater
than or equal to it:


In quick sort, the entire work happens in the division stage, with no work required to combine the
solutions to the sub problems.

The function partition() makes use of two pointers ‘i’ and ‘j’ which are moved toward each other in
the following fashion:


The pointers start at the two ends of the (sub)array and move toward each other, comparing elements with the pivot P:

• If A[i] < P, move i to the right.
• If A[j] > P, move j to the left.
• If i < j, swap A[i] and A[j] and continue the scans.
• Otherwise (i ≥ j), swap A[j] with the pivot P; the pivot is now in its final position.
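The partition and quicksort pseudocode figures are not reproduced here; a minimal Python sketch following the pointer rules above (pivot P = first element of the subarray) is given below.

    def partition(a, lo, hi):
        # Partition a[lo..hi] around pivot P = a[lo]; return the
        # pivot's final index s.
        p = a[lo]
        i, j = lo + 1, hi
        while True:
            while i <= hi and a[i] < p:     # move i to the right
                i += 1
            while a[j] > p:                 # move j to the left
                j -= 1
            if i < j:
                a[i], a[j] = a[j], a[i]     # swap A[i] and A[j]
                i += 1; j -= 1
            else:
                a[lo], a[j] = a[j], a[lo]   # swap pivot into place
                return j

    def quick_sort(a, lo, hi):
        if lo < hi:
            s = partition(a, lo, hi)        # all work is in the division
            quick_sort(a, lo, s - 1)
            quick_sort(a, s + 1, hi)

    data = [65, 70, 75, 80, 85, 60, 55, 50, 45]
    quick_sort(data, 0, len(data) - 1)
    print(data)    # [45, 50, 55, 60, 65, 70, 75, 80, 85]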


Time Complexity for Best and Average case:
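The derivation figures are not reproduced here; in outline, in the best case every pivot splits its subarray into two (nearly) equal halves, so the comparison count satisfies C(n) = 2C(n/2) + n, and by the master theorem C_best(n) ∈ Θ(n log n). In the average case, with all split positions equally likely, the standard recurrence solves to C_avg(n) ≈ 2n ln n ≈ 1.39 n log₂ n, so the average case is also Θ(n log n).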


Time Complexity for Worst case:
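The derivation figure is not reproduced here; in outline, the worst case occurs when every partition is maximally unbalanced, e.g. for an already sorted array with the first element chosen as pivot. Then one subproblem always has size n − 1, giving C(n) = C(n − 1) + (n + 1), which sums to (n + 1) + n + ... + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n²).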



Greedy Method
GENERAL METHOD

Greedy is the most straightforward design technique. Most of these problems have n
inputs and require us to obtain a subset that satisfies some constraints. Any subset
that satisfies these constraints is called a feasible solution. We need to find a feasible
solution that either maximizes or minimizes a given objective function. A feasible solution
that does this is called an optimal solution.

The greedy method is a simple strategy of progressively building up a solution, one
element at a time, by choosing the best possible element at each stage. At each stage,
a decision is made regarding whether or not a particular input is in an optimal solution.
This is done by considering the inputs in an order determined by some selection
procedure. If the inclusion of the next input into the partially constructed optimal
solution would result in an infeasible solution, then this input is not added to the partial
solution. The selection procedure itself is based on some optimization measure. Several
optimization measures are plausible for a given problem; most of them, however, will
result in algorithms that generate sub-optimal solutions. This version of the greedy
technique is called the subset paradigm. Problems like knapsack, job sequencing
with deadlines and minimum-cost spanning trees are based on the subset paradigm.

For problems that make decisions by considering the inputs in some order, where each
decision is made using an optimization criterion that can be computed from decisions
already made, this version of the greedy method is called the ordering paradigm. Problems like
optimal storage on tapes, optimal merge patterns and single-source shortest paths are
based on the ordering paradigm.

CONTROL ABSTRACTION

Algorithm Greedy (a, n)
// a[1 : n] contains the n inputs.
{
    solution := ∅;                    // initialize the solution to empty
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}

Procedure Greedy describes the essential way that a greedy-based algorithm will look
once a particular problem is chosen and the functions Select, Feasible and Union are
properly implemented.

The function Select selects an input from a[], removes it, and assigns its value to x.
Feasible is a Boolean-valued function that determines whether x can be included in the
solution vector. The function Union combines x with the solution and updates the objective
function.

KNAPSACK PROBLEM

Let us apply the greedy method to solve the knapsack problem. We are given n
objects and a knapsack. Object i has a weight wi and the knapsack has a capacity
m. If a fraction xi, 0 ≤ xi ≤ 1, of object i is placed into the knapsack, then a profit of pi·xi
is earned. The objective is to fill the knapsack so as to maximize the total profit earned.

Since the knapsack capacity is m, we require the total weight of all chosen objects to
be at most m. The problem is stated as:

maximize    Σ (i = 1 to n) pi·xi

subject to  Σ (i = 1 to n) wi·xi ≤ m,  where 0 ≤ xi ≤ 1 and 1 ≤ i ≤ n.

The profits and weights are positive numbers.

Algorithm

If the objects have already been sorted into non-increasing order of p[i]/w[i], then the
algorithm given below obtains solutions corresponding to this strategy.

Algorithm GreedyKnapsack (m, n)
// p[1 : n] and w[1 : n] contain the profits and weights respectively of the
// objects, ordered so that p[i]/w[i] ≥ p[i + 1]/w[i + 1].
// m is the knapsack size and x[1 : n] is the solution vector.
{
    for i := 1 to n do x[i] := 0.0;    // initialize x
    U := m;
    for i := 1 to n do
    {
        if (w[i] > U) then break;
        x[i] := 1.0;  U := U − w[i];
    }
    if (i ≤ n) then x[i] := U / w[i];
}

Running time:

The objects are to be sorted into non-increasing order of the pi/wi ratio. If we
disregard the time to initially sort the objects, the algorithm requires only O(n) time.

Example:

Consider the following instance of the knapsack problem: n = 3, m = 20, (p1, p2, p3) =
(25, 24, 15) and (w1, w2, w3) = (18, 15, 10).

1. First, we try to fill the knapsack by selecting the objects in some arbitrary order:

   x1 = 1/2, x2 = 1/3, x3 = 1/4
   Σ wi·xi = 18 × 1/2 + 15 × 1/3 + 10 × 1/4 = 16.5
   Σ pi·xi = 25 × 1/2 + 24 × 1/3 + 15 × 1/4 = 24.25

2. Select the object with the maximum profit first (p = 25). So x1 = 1 and the profit
earned is 25. Now only 2 units of capacity are left, so select the object with the next largest
profit (p = 24), giving x2 = 2/15.

   x1 = 1, x2 = 2/15, x3 = 0
   Σ wi·xi = 18 × 1 + 15 × 2/15 = 20
   Σ pi·xi = 25 × 1 + 24 × 2/15 = 28.2

3. Consider the objects in order of non-decreasing weights wi:

   x1 = 0, x2 = 2/3, x3 = 1
   Σ wi·xi = 15 × 2/3 + 10 × 1 = 20
   Σ pi·xi = 24 × 2/3 + 15 × 1 = 31

4. Consider the objects in order of non-increasing ratio pi/wi:

   p1/w1 = 25/18 ≈ 1.4,  p2/w2 = 24/15 = 1.6,  p3/w3 = 15/10 = 1.5

Select the object with the maximum pi/wi ratio, so x2 = 1 and the profit earned is 24. Now only 5
units of capacity are left, so select the object with the next largest pi/wi ratio, giving x3 = 1/2 and a
further profit of 7.5.

   x1 = 0, x2 = 1, x3 = 1/2
   Σ wi·xi = 15 × 1 + 10 × 1/2 = 20
   Σ pi·xi = 24 × 1 + 15 × 1/2 = 31.5

This solution is the optimal solution.
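The strategy is easy to check in code. The following minimal Python sketch (not the pseudocode above verbatim) sorts object indices by profit/weight ratio and fills greedily; on this instance it reproduces x = (0, 1, 1/2) with profit 31.5.

    def greedy_knapsack(p, w, m):
        # Consider objects in non-increasing p[i]/w[i] order.
        order = sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True)
        x, u = [0.0] * len(p), m
        for i in order:
            if w[i] > u:
                x[i] = u / w[i]            # take a fraction of object i
                break
            x[i] = 1.0                     # take object i whole
            u -= w[i]
        return x, sum(pi * xi for pi, xi in zip(p, x))

    print(greedy_knapsack([25, 24, 15], [18, 15, 10], 20))
    # ([0.0, 1.0, 0.5], 31.5)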

OPTIMAL STORAGE ON TAPES

There are n programs that are to be stored on a computer tape of length L. Each
program i is of length li, 1 ≤ i ≤ n. All the programs can be stored on the tape if and
only if the sum of the lengths of the programs is at most L.

We shall assume that whenever a program is to be retrieved from this tape, the tape is
initially positioned at the front. If the programs are stored in the order I = i1, i2, ..., in,
the time tj needed to retrieve program ij is proportional to

    tj = Σ (1 ≤ k ≤ j) l(ik)

If all the programs are retrieved equally often, then the expected or mean retrieval time
(MRT) is:

    MRT = (1/n) · Σ (1 ≤ j ≤ n) tj

For the optimal storage on tape problem, we are required to find the permutation of
the n programs so that when they are stored on the tape in this order the MRT is
minimized. This is equivalent to minimizing

    d(I) = Σ (j = 1 to n) Σ (k = 1 to j) l(ik)

Example

Let n = 3 and (l1, l2, l3) = (5, 10, 3). Find the optimal ordering.

Solution:

There are n! = 6 possible orderings. They are:

Ordering I d(I)
1, 2, 3 5 + (5 +10) +(5 + 10 + 3) = 38
1, 3, 2 5 + (5 + 3) + (5 + 3 + 10) = 31
2, 1, 3 10 + (10 + 5) + (10 + 5 + 3) = 43
2, 3, 1 10 + (10 + 3) + (10 + 3 + 5) = 41
3, 1, 2 3 + (3 + 5) + (3 + 5 + 10) = 29
3, 2, 1 3 + (3 + 10) + (3 + 10 + 5) = 34

From the above, the optimal ordering is 3, 1, 2 with d(I) = 29: it simply stores the programs in
non-decreasing (increasing) order of their lengths. This ordering can be obtained with an efficient
sorting algorithm, e.g. heap sort, in O(n log n) time.

The tape storage problem can be extended to several tapes. If there are m > 1 tapes
T0, ..., Tm−1, then the programs are to be distributed over these tapes.

The total retrieval time (RT) is

    RT = Σ (j = 0 to m − 1) d(Ij)

The objective is to store the programs in such a way as to minimize RT.

The programs are sorted in non-decreasing order of their lengths li, i.e. l1 ≤ l2 ≤ ... ≤ ln.
The first m programs are assigned to tapes T0, ..., Tm−1 respectively, the next m
programs to T0, ..., Tm−1 respectively, and so on. The general rule is that
program i is stored on tape T(i mod m).

Algorithm:

The algorithm for assigning programs to tapes is as follows:

Algorithm Store (n, m)
// n is the number of programs and m the number of tapes.
{
    j := 0;                       // next tape to store on
    for i := 1 to n do
    {
        Print("append program", i, "to permutation for tape", j);
        j := (j + 1) mod m;
    }
}

On any given tape, the programs are stored in non-decreasing order of their lengths.
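As a sketch of the two ideas above, the following Python fragment computes d(I) for a given ordering and distributes programs over m tapes by the i mod m rule; it reproduces the example's optimal value d = 29.

    def d(lengths, ordering):
        # Total retrieval time of one tape: the sum of prefix sums.
        total = prefix = 0
        for i in ordering:
            prefix += lengths[i]
            total += prefix
        return total

    def store(lengths, m):
        # Sort programs by length, then assign round-robin to m tapes.
        order = sorted(range(len(lengths)), key=lambda i: lengths[i])
        tapes = [[] for _ in range(m)]
        for j, i in enumerate(order):
            tapes[j % m].append(i)         # program goes to tape j mod m
        return tapes

    lengths = [5, 10, 3]                   # (l1, l2, l3) of the example
    print(d(lengths, [2, 0, 1]))           # ordering 3, 1, 2 -> 29
    print(store(lengths, 2))               # [[2, 1], [0]] (0-based indices)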

JOB SEQUENCING WITH DEADLINES

We are given a set of n jobs. Associated with each job i is a deadline di > 0 and a
profit pi > 0. For any job i the profit pi is earned iff the job is completed by its
deadline. Only one machine is available for processing jobs, and each job takes one unit of
processing time on it. An optimal solution is a feasible solution with maximum profit.

Consider the jobs in non-increasing order of their profits, and maintain the set J of accepted
jobs ordered by deadline: J[1 : k] with d(J[1]) ≤ d(J[2]) ≤ ... ≤ d(J[k]). The array d[1 : n]
stores the deadlines. To test whether J ∪ {i} is feasible, we have just to insert i into J
preserving the deadline ordering and then verify that d(J[r]) ≥ r for 1 ≤ r ≤ k + 1.

Example:

Let n = 4, (p1, p2, p3, p4) = (100, 10, 15, 27) and (d1, d2, d3, d4) = (2, 1, 2, 1). The
feasible solutions and their values are:

S. No | Feasible solution | Processing sequence | Value | Remarks
  1   | 1, 2              | 2, 1                | 110   |
  2   | 1, 3              | 1, 3 or 3, 1        | 115   |
  3   | 1, 4              | 4, 1                | 127   | OPTIMAL
  4   | 2, 3              | 2, 3                | 25    |
  5   | 3, 4              | 4, 3                | 42    |
  6   | 1                 | 1                   | 100   |
  7   | 2                 | 2                   | 10    |
  8   | 3                 | 3                   | 15    |
  9   | 4                 | 4                   | 27    |

Algorithm:

The algorithm constructs an optimal set J of jobs that can be processed by their
deadlines.

Algorithm GreedyJob (d, J, n)
// J is a set of jobs that can be completed by their deadlines.
{
    J := {1};
    for i := 2 to n do
    {
        if (all jobs in J ∪ {i} can be completed by their deadlines)
            then J := J ∪ {i};
    }
}
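A minimal Python sketch of the greedy strategy follows (job indices are 0-based, so job 1 of the example is index 0; each job is assumed to take one unit of time).

    def job_sequencing(profits, deadlines):
        # Consider jobs in non-increasing profit order; keep job i only
        # if J u {i}, ordered by deadline, satisfies d(J[r]) >= r.
        order = sorted(range(len(profits)), key=lambda i: -profits[i])
        J = []
        for i in order:
            trial = sorted(J + [i], key=lambda k: deadlines[k])
            if all(deadlines[k] >= r + 1 for r, k in enumerate(trial)):
                J = trial                  # feasible: accept job i
        return J, sum(profits[k] for k in J)

    print(job_sequencing([100, 10, 15, 27], [2, 1, 2, 1]))
    # ([3, 0], 127) -- jobs 4 and 1, the optimal solution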

OPTIMAL MERGE PATERNS

Given n sorted files, there are many ways to pairwise merge them into a single sorted
file. As different pairings require different amounts of computing time, we want to
determine an optimal way (i.e., one requiring the fewest record moves) to pairwise
merge n sorted files together. This type of merging is called a 2-way merge pattern.
To merge an n-record file and an m-record file requires possibly n + m record moves,
so the obvious choice is: at each step, merge the two smallest files together. The
two-way merge patterns can be represented by binary merge trees.

Algorithm to Generate Two-way Merge Tree:

struct treenode
{
    treenode *lchild;
    treenode *rchild;
};

Algorithm TREE (n)
// list is a global list of n single-node binary trees.
{
    for i := 1 to n − 1 do
    {
        pt := new treenode;
        (pt → lchild) := Least(list);   // merge the two trees with
        (pt → rchild) := Least(list);   // the smallest lengths
        (pt → weight) := ((pt → lchild) → weight) + ((pt → rchild) → weight);
        Insert(list, pt);
    }
    return Least(list);                 // the tree left in list is the merge tree
}
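Algorithm TREE only ever needs the two least-weight trees, so a binary heap of weights computes the total cost directly. A minimal Python sketch that reproduces the two examples that follow:

    import heapq

    def merge_cost(sizes):
        # Sum of the weights of all internal merge-tree nodes.
        heap = list(sizes)
        heapq.heapify(heap)
        total = 0
        while len(heap) > 1:
            a = heapq.heappop(heap)        # two least-weight trees
            b = heapq.heappop(heap)
            total += a + b                 # weight of the new root
            heapq.heappush(heap, a + b)
        return total

    print(merge_cost([30, 20, 10]))          # 90  (Example 1, second case)
    print(merge_cost([20, 30, 10, 5, 30]))   # 205 (Example 2)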

Example 1:

Suppose we have three sorted files X1, X2 and X3 of lengths 30, 20 and 10 records
respectively. Merging of the files can be carried out as follows:

S.No | First merge  | Record moves | Second merge | Record moves | Total record moves
  1  | X1 & X2 = T1 | 50           | T1 & X3      | 60           | 50 + 60 = 110
  2  | X2 & X3 = T1 | 30           | T1 & X1      | 60           | 30 + 60 = 90

The second case is optimal.

Example 2:

Given five files (X1, X2, X3, X4, X5) with sizes (20, 30, 10, 5, 30), apply the greedy rule to
find an optimal way of pairwise merging, using the binary merge tree representation.

Solution:

The initial files and their sizes: X1 = 20, X2 = 30, X3 = 10, X4 = 5, X5 = 30.

Merge X4 and X3 (the two smallest) to get 15 record moves. Call this Z1 (weight 15).
Remaining: X1 = 20, X2 = 30, Z1 = 15, X5 = 30.

Merge Z1 and X1 to get 35 record moves. Call this Z2 (weight 35).
Remaining: X2 = 30, Z2 = 35, X5 = 30.

Merge X2 and X5 to get 60 record moves. Call this Z3 (weight 60).
Remaining: Z2 = 35, Z3 = 60.

Finally, merge Z2 and Z3 to get 95 record moves. Call this Z4 (weight 95); it is the root of the
binary merge tree, with Z2 and Z3 as its subtrees.

Therefore the total number of record moves is 15 + 35 + 60 + 95 = 205. This is an
optimal merge pattern for the given problem.

Huffman Codes

Another application of Greedy Algorithm is file compression.

Suppose that we have a file containing only the characters a, e, i, s, t, spaces and newlines,
where a appears 10 times, e fifteen times, i twelve times, s three times, t four times, and there
are thirteen blanks and one newline.

Using a standard coding scheme with 3 bits for each of the 58 characters, the
file requires 174 bits to represent. This is shown in the table below.

Character | Code | Frequency | Total bits
a         | 000  | 10        | 30
e         | 001  | 15        | 45
i         | 010  | 12        | 36
s         | 011  | 3         | 9
t         | 100  | 4         | 12
space     | 101  | 13        | 39
newline   | 110  | 1         | 3

This code can be represented by a binary tree.

[Figure: a binary tree whose leaves, left to right, are a, e, i, s, t, sp, nl.]

The representation of each character can be found by starting at the root and recording
the path, using a 0 to indicate the left branch and a 1 to indicate the right branch.

If the character ci is at depth di and occurs fi times, the cost of the code is equal to

    Σ di·fi

With this representation the total number of bits is 3×10 + 3×15 + 3×12 + 3×3 + 3×4 +
3×13 + 3×1 = 174.

A better code can be obtained with the following representation.

[Figure: an alternative binary tree over the characters a, e, i, s, t, sp, nl.]

The basic problem is to find the full binary tree of minimal total cost. This can be done
by using Huffman coding (1952).

Huffman's Algorithm:

Huffman's algorithm can be described as follows. We maintain a forest of trees; the
weight of a tree is equal to the sum of the frequencies of its leaves. If the number of
characters is c, then c − 1 times we select the two trees T1 and T2 of smallest weight and
form a new tree with subtrees T1 and T2. Repeating the process, we get an optimal
Huffman coding tree.

Example:

The initial forest with the weight of each tree is as follows:

    a:10  e:15  i:12  s:3  t:4  sp:13  nl:1

The two trees with the lowest weight (s and nl) are merged together, creating a new tree with
root T1; the total weight of the new tree is the sum of the weights of the old trees, 3 + 1 = 4.
The forest after the first merge:

    a:10  e:15  i:12  t:4  sp:13  T1:4    (T1 = (s, nl))

We again select the two trees of smallest weight. This happens to be T1 and t, which
are merged into a new tree with root T2 and weight 8:

    a:10  e:15  i:12  sp:13  T2:8    (T2 = (T1, t))

In the next step we merge T2 and a, creating T3 with weight 10 + 8 = 18:

    e:15  i:12  sp:13  T3:18    (T3 = (T2, a))

After the third merge, the two trees of lowest weight are the single-node trees representing
i and the blank space. These trees are merged into a new tree with root T4 and weight 25:

    e:15  T4:25  T3:18    (T4 = (i, sp))

The fifth step is to merge the trees with roots e and T3, creating T5 with weight 15 + 18 = 33:

    T4:25  T5:33    (T5 = (T3, e))

Finally, the optimal tree is obtained by merging the two remaining trees into a tree with
root T6 and weight 58.

[Figure: the final Huffman tree T6, with 0 labeling each left branch and 1 each right branch;
its leaves and the resulting codes are listed in the table below.]

The full binary tree of minimal total cost, where all characters are stored in the
leaves, uses only 146 bits:

Character | Code  | Frequency | Total bits (code bits × frequency)
a         | 001   | 10        | 30
e         | 01    | 15        | 30
i         | 10    | 12        | 24
s         | 00000 | 3         | 15
t         | 0001  | 4         | 16
space     | 11    | 13        | 26
newline   | 00001 | 1         | 5
Total: 146
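A minimal Python sketch of Huffman's algorithm, building the codes by repeatedly merging the two lightest trees (ties are broken arbitrarily, so individual codes may differ from the table above, but the 146-bit total is the same):

    import heapq

    def huffman_codes(freq):
        # Each heap entry is (weight, tiebreak, {symbol: code-so-far}).
        heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        n = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)   # two lightest trees
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: '0' + c for s, c in c1.items()}
            merged.update({s: '1' + c for s, c in c2.items()})
            heapq.heappush(heap, (f1 + f2, n, merged))
            n += 1
        return heap[0][2]

    freq = {'a': 10, 'e': 15, 'i': 12, 's': 3, 't': 4, 'sp': 13, 'nl': 1}
    codes = huffman_codes(freq)
    print(sum(freq[s] * len(c) for s, c in codes.items()))   # 146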

GRAPH ALGORITHMS

Basic Definitions:

• A graph G is a pair (V, E), where V is a finite set (the set of vertices) and E is a finite
set of pairs from V (the set of edges). We will often denote n := |V|, m := |E|.

• A graph G can be directed, if E consists of ordered pairs, or undirected, if E
consists of unordered pairs. If (u, v) ∈ E, then the vertices u and v are adjacent.

• We can assign a weight function to the edges: wG(e) is the weight of edge e ∈ E.
A graph which has such a function assigned is called weighted.

• The degree of a vertex v is the number of vertices u for which (u, v) ∈ E (denoted
deg(v)). The number of incoming edges to a vertex v is called the in-degree of
the vertex (denoted indeg(v)). The number of outgoing edges from a vertex is
called the out-degree (denoted outdeg(v)).

Representation of Graphs:

Consider a graph G = (V, E), where V = {v1, v2, ..., vn}.

An adjacency matrix represents the graph as an n × n matrix A = (ai,j), where

    ai,j = 1 if (vi, vj) ∈ E, and 0 otherwise.

The matrix is symmetric in the case of an undirected graph, while it may be asymmetric if
the graph is directed.

We may consider various modifications. For example, for weighted graphs we may have

    ai,j = w(vi, vj) if (vi, vj) ∈ E, and default otherwise,

where default is some sensible value based on the meaning of the weight function
(for example, if the weight function represents length, then default can be ∞, meaning a
value larger than any other value).

Adjacency List: An array Adj[1 . . n] of pointers, where for 1 ≤ v ≤ n, Adj[v]
points to a linked list containing the vertices which are adjacent to v (i.e. the vertices
that can be reached from v by a single edge). If the edges have weights, then these
weights may also be stored in the linked list elements.

Paths and Cycles:

A path is a sequence of vertices (v1, v2, ..., vk), where (vi, vi+1) ∈ E for all i. A
path is simple if all vertices in the path are distinct.

A (simple) cycle is a sequence of vertices (v1, v2, ..., vk, vk+1 = v1), where
(vi, vi+1) ∈ E for all i and all vertices in the cycle are distinct except the pair v1, vk+1.

Subgraphs and Spanning Trees:

Subgraphs: A graph G′ = (V′, E′) is a subgraph of a graph G = (V, E) iff V′ ⊆ V and E′ ⊆ E.

The undirected graph G is connected, if for every pair of vertices u, v there exists a
path from u to v. If a graph is not connected, the vertices of the graph can be divided
into connected components. Two vertices are in the same connected component iff
they are connected by a path.

A tree is a connected acyclic graph. A spanning tree of a graph G = (V, E) is a tree
that contains all vertices of V and is a subgraph of G. A single graph can have multiple
spanning trees.

Lemma 1: Let T be a spanning tree of a graph G. Then


1. Any two vertices in T are connected by a unique simple path.
2. If any edge is removed from T, then T becomes disconnected.
3. If we add any edge into T, then the new graph will contain a cycle.
4. Number of edges in T is n-1.

Minimum Spanning Trees (MST):

A spanning tree for a connected graph is a tree whose vertex set is the same as the
vertex set of the given graph, and whose edge set is a subset of the edge set of the
given graph. i.e., any connected graph will have a spanning tree.

Weight of a spanning tree w (T) is the sum of weights of all edges in T. The Minimum
spanning tree (MST) is a spanning tree with the smallest possible weight.

Here are some examples:

[Figure: a graph G and three (of many possible) spanning trees of G; a weighted graph G
and the minimal spanning tree of that weighted graph.]

To explain further upon the Minimum Spanning Tree, and what it applies to, let's
consider a couple of real-world examples:

1. One practical application of a MST would be in the design of a network. For


instance, a group of individuals, who are separated by varying distances, wish
to be connected together in a telephone network. Although MST cannot do
anything about the distance from one connection to another, it can be used to
determine the least cost paths with no cycles in this network, thereby
connecting everyone at a minimum cost.

2. Another useful application of MST would be finding airline routes. The vertices of
the graph would represent cities, and the edges would represent routes between
the cities. Obviously, the further one has to travel, the more it will cost, so MST
can be applied to optimize airline routes by finding the least costly paths with no
cycles.

To explain how to find a Minimum Spanning Tree, we will look at two algorithms: Kruskal's
algorithm and Prim's algorithm. Both algorithms differ in their methodology,
but both eventually end up with the MST. Kruskal's algorithm uses edges, and Prim's
algorithm uses vertex connections in determining the MST.

Kruskal’s Algorithm

This is a greedy algorithm. A greedy algorithm chooses some local optimum (i.e.
picking an edge with the least weight in a MST).

Kruskal's algorithm works as follows: Take a graph with 'n' vertices, keep on adding the
shortest (least cost) edge, while avoiding the creation of cycles, until (n - 1) edges
have been added. Sometimes two or more edges may have the same cost. The order in
which the edges are chosen, in this case, does not matter. Different MSTs may result,
but they will all have the same total cost, which will always be the minimum cost.

Algorithm:

The algorithm for finding the MST using Kruskal's method is as follows:

Algorithm Kruskal (E, cost, n, t)
// E is the set of edges in G. G has n vertices. cost[u, v] is the
// cost of edge (u, v). t is the set of edges in the minimum-cost spanning tree.
// The final cost is returned.
{
    Construct a heap out of the edge costs using Heapify;
    for i := 1 to n do parent[i] := −1;   // each vertex is in a different set
    i := 0; mincost := 0.0;
    while ((i < n − 1) and (heap not empty)) do
    {
        Delete a minimum cost edge (u, v) from the heap
        and re-heapify using Adjust;
        j := Find(u); k := Find(v);
        if (j ≠ k) then
        {
            i := i + 1;
            t[i, 1] := u; t[i, 2] := v;
            mincost := mincost + cost[u, v];
            Union(j, k);
        }
    }
    if (i ≠ n − 1) then write("no spanning tree");
    else return mincost;
}

Running time:

• The number of finds is at most 2e, and the number of unions at most n − 1.
Including the initialization time for the trees, this part of the algorithm has a
complexity that is just slightly more than O(n + e).

• We can add at most n − 1 edges to the tree T, so the total time for operations on T is
O(n).

Summing up the various components of the computing times, we get O(n + e log e) as the
asymptotic complexity.
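A minimal Python sketch of Kruskal's algorithm, using a union-find structure for Find/Union and a sorted edge list in place of the heap; run on the example that follows, it returns the spanning tree of cost 105.

    def kruskal(n, edges):
        # edges: (cost, u, v) triples over vertices 1..n.
        parent = list(range(n + 1))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        t, mincost = [], 0
        for cost, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:                        # different sets: no cycle
                parent[ru] = rv                 # union the two components
                t.append((u, v))
                mincost += cost
            if len(t) == n - 1:
                break
        return (t, mincost) if len(t) == n - 1 else None

    edges = [(10, 1, 2), (15, 3, 6), (20, 4, 6), (25, 2, 6), (30, 1, 4),
             (35, 3, 5), (40, 2, 5), (45, 1, 5), (50, 2, 3), (55, 5, 6)]
    print(kruskal(6, edges))
    # ([(1, 2), (3, 6), (4, 6), (2, 6), (3, 5)], 105)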

Example 1:

[Figure: a weighted graph on vertices 1..6; its edges and costs are listed in the table below.]

Arrange all the edges in the increasing order of their costs:

Cost 10 15 20 25 30 35 40 45 50 55
Edge (1, 2) (3, 6) (4, 6) (2, 6) (1, 4) (3, 5) (2, 5) (1, 5) (2, 3) (5, 6)

The edge set T together with the vertices of G define a graph that has up to n
connected components. Let us represent each component by a set of vertices in it.
These vertex sets are disjoint. To determine whether the edge (u, v) creates a cycle,
we need to check whether u and v are in the same vertex set. If so, then a cycle is
created. If not then no cycle is created. Hence two Finds on the vertex sets suffice.
When an edge is included in T, two components are combined into one and a union is
to be performed on the two sets.

Edge   | Cost | Edge sets after the step            | Remarks
-      | -    | {1}, {2}, {3}, {4}, {5}, {6}        | initially each vertex is its own component
(1, 2) | 10   | {1, 2}, {3}, {4}, {5}, {6}          | vertices 1 and 2 are in different sets, so the edge is included
(3, 6) | 15   | {1, 2}, {3, 6}, {4}, {5}            | vertices 3 and 6 are in different sets, so the edge is included
(4, 6) | 20   | {1, 2}, {3, 4, 6}, {5}              | vertices 4 and 6 are in different sets, so the edge is included
(2, 6) | 25   | {1, 2, 3, 4, 6}, {5}                | vertices 2 and 6 are in different sets, so the edge is included
(1, 4) | 30   | (unchanged)                         | vertices 1 and 4 are in the same set, so the edge is rejected
(3, 5) | 35   | {1, 2, 3, 4, 5, 6}                  | vertices 3 and 5 are in different sets, so the edge is included

The spanning tree is now complete, with cost 10 + 15 + 20 + 25 + 35 = 105.

MINIMUM-COST SPANNING TREES: PRIM'S ALGORITHM

A given graph can have many spanning trees. From these many spanning trees, we
have to select a cheapest one. This tree is called as minimal cost spanning tree.

A minimal cost spanning tree of a connected undirected graph G, in which each edge is
labeled with a number (edge labels may signify lengths or weights other than costs), is a
spanning tree for which the sum of the edge labels is as small as possible.

The slight modification of the spanning tree algorithm yields a very simple algorithm for
finding an MST. In the spanning tree algorithm, any vertex not in the tree but
connected to it by an edge can be added. To find a Minimal cost spanning tree, we
must be selective - we must always add a new vertex for which the cost of the new
edge is as small as possible.

This simple modified algorithm of spanning tree is called prim's algorithm for finding an
Minimal cost spanning tree.

Prim's algorithm is an example of a greedy algorithm.

Algorithm Prim (E, cost, n, t)
// E is the set of edges in G. cost[1:n, 1:n] is the cost
// adjacency matrix of an n-vertex graph such that cost[i, j] is
// either a positive real number or ∞ if no edge (i, j) exists.
// A minimum spanning tree is computed and stored as a set of
// edges in the array t[1:n−1, 1:2]. (t[i, 1], t[i, 2]) is an edge in
// the minimum-cost spanning tree. The final cost is returned.
{
    Let (k, l) be an edge of minimum cost in E;
    mincost := cost[k, l];
    t[1, 1] := k; t[1, 2] := l;
    for i := 1 to n do               // initialize near
        if (cost[i, l] < cost[i, k]) then near[i] := l;
        else near[i] := k;
    near[k] := near[l] := 0;
    for i := 2 to n − 1 do           // find n − 2 additional edges for t
    {
        Let j be an index such that near[j] ≠ 0 and
        cost[j, near[j]] is minimum;
        t[i, 1] := j; t[i, 2] := near[j];
        mincost := mincost + cost[j, near[j]];
        near[j] := 0;
        for k := 1 to n do           // update near[]
            if ((near[k] ≠ 0) and (cost[k, near[k]] > cost[k, j]))
                then near[k] := j;
    }
    return mincost;
}
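A minimal Python sketch of Prim's algorithm; for brevity it uses a lazy-deletion heap rather than the near[] array of the pseudocode above. Run on the Example 2 graph below, it returns a spanning tree of cost 11.

    import heapq

    def prim(n, adj, start=1):
        # adj[u] = list of (cost, v) pairs for an undirected graph.
        in_tree = [False] * (n + 1)
        heap = [(0, start, start)]         # (edge cost, vertex, tree end)
        t, mincost = [], 0
        while heap:
            cost, u, frm = heapq.heappop(heap)
            if in_tree[u]:
                continue                   # stale heap entry, skip
            in_tree[u] = True
            mincost += cost
            if u != start:
                t.append((frm, u))         # edge joining u to the tree
            for c, v in adj[u]:
                if not in_tree[v]:
                    heapq.heappush(heap, (c, v, u))
        return t, mincost

    edges = [(1, 2, 4), (1, 3, 9), (1, 4, 8), (2, 3, 4),
             (2, 4, 1), (3, 4, 3), (3, 5, 3), (4, 5, 4)]
    adj = {u: [] for u in range(1, 6)}
    for u, v, c in edges:
        adj[u].append((c, v))
        adj[v].append((c, u))
    print(prim(5, adj))   # ([(1, 2), (2, 4), (4, 3), (3, 5)], 11)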

Running time:

We do the same set of operations with dist as in Dijkstra's algorithm (initialize the
structure, m times decrease a value, n − 1 times select the minimum). Therefore we get
O(n²) time when we implement dist with an array, and O(n + |E| log n) when we implement it
with a heap.

For each vertex u in the graph we dequeue it and check all its neighbors in Θ(1 + deg(u))
time. Therefore the running time is

    Θ( Σ (v ∈ V) (1 + deg(v)) ) = Θ( n + Σ (v ∈ V) deg(v) ) = Θ(n + m)

EXAMPLE 1:

Use Prim's algorithm to find a minimal spanning tree for the graph shown below,
starting with the vertex A.

[Figure: a weighted graph on vertices A..G; its cost adjacency matrix is given below.]

SOLUTION:

The cost adjacency matrix is:

        A  B  C  D  E  F  G
    A   0  3  6  ∞  ∞  ∞  ∞
    B   3  0  2  4  ∞  ∞  ∞
    C   6  2  0  1  4  2  ∞
    D   ∞  4  1  0  2  ∞  4
    E   ∞  ∞  4  2  0  2  1
    F   ∞  ∞  2  ∞  2  0  1
    G   ∞  ∞  ∞  4  1  1  0

The stepwise progress of Prim's algorithm is as follows (Status 0 means the vertex has been
added to the tree; Dist is the cost of the cheapest edge from the vertex to the tree; Next is the
tree vertex that edge leads to):

Step 1:
Vertex  A B C D E F G
Status  0 1 1 1 1 1 1
Dist.   0 3 6 ∞ ∞ ∞ ∞
Next    * A A A A A A

Step 2:
Vertex  A B C D E F G
Status  0 0 1 1 1 1 1
Dist.   0 3 2 4 ∞ ∞ ∞
Next    * A B B A A A

Step 3:
Vertex  A B C D E F G
Status  0 0 0 1 1 1 1
Dist.   0 3 2 1 4 2 ∞
Next    * A B C C C A

Step 4:
Vertex  A B C D E F G
Status  0 0 0 0 1 1 1
Dist.   0 3 2 1 2 2 4
Next    * A B C D C D

Step 5:
Vertex  A B C D E F G
Status  0 0 0 0 0 1 1
Dist.   0 3 2 1 2 2 1
Next    * A B C D C E

Step 6:
Vertex  A B C D E F G
Status  0 0 0 0 0 1 0
Dist.   0 3 2 1 2 1 1
Next    * A B C D G E

Step 7:
Vertex  A B C D E F G
Status  0 0 0 0 0 0 0
Dist.   0 3 2 1 2 1 1
Next    * A B C D G E

EXAMPLE 2:

Consider the following graph; find the minimal spanning tree using Prim's algorithm.

[Figure: a weighted graph on vertices 1..5; its cost adjacency matrix is given below.]

The cost adjacency matrix is:

        1  2  3  4  5
    1   ∞  4  9  8  ∞
    2   4  ∞  4  1  ∞
    3   9  4  ∞  3  3
    4   8  1  3  ∞  4
    5   ∞  ∞  3  4  ∞

The minimal spanning tree obtained is:

Vertex 1 | Vertex 2
   2     |    4
   3     |    4
   5     |    3
   1     |    2

[Figure: the spanning tree with edges (2, 4), (3, 4), (5, 3) and (1, 2).]

The cost of the minimal spanning tree is 1 + 3 + 3 + 4 = 11.

The steps, as per the algorithm, are as follows.

In the algorithm, near(j) = k means that the nearest tree vertex to j is k.

The algorithm starts by selecting the minimum cost edge from the graph, which is (2, 4):

k = 2, l = 4
mincost = cost(2, 4) = 1
t[1, 1] = 2
t[1, 2] = 4

for i = 1 to 5 (initialize near[]; the edge (2, 4) has been added to the tree):

i = 1: is cost(1, 4) < cost(1, 2)? 8 < 4, No. So near(1) = 2.    near = [2, -, -, -, -]
i = 2: is cost(2, 4) < cost(2, 2)? 1 < ∞, Yes. So near(2) = 4.   near = [2, 4, -, -, -]
i = 3: is cost(3, 4) < cost(3, 2)? 3 < 4, Yes. So near(3) = 4.   near = [2, 4, 4, -, -]
i = 4: is cost(4, 4) < cost(4, 2)? ∞ < 1, No. So near(4) = 2.    near = [2, 4, 4, 2, -]
i = 5: is cost(5, 4) < cost(5, 2)? 4 < ∞, Yes. So near(5) = 4.   near = [2, 4, 4, 2, 4]

end

Then near[k] = near[l] = 0, i.e. near(2) = near(4) = 0, giving near = [2, 0, 4, 0, 4].

for i = 2 to n − 1 (= 4) do:

i = 2:

for j = 1 to 5:
j = 1: near(1) ≠ 0 (it is 2), and cost(1, near(1)) = cost(1, 2) = 4.
j = 2: near(2) = 0, skip.
j = 3: near(3) ≠ 0 (it is 4), and cost(3, near(3)) = cost(3, 4) = 3.
j = 4: near(4) = 0, skip.
j = 5: near(5) ≠ 0 (it is 4), and cost(5, near(5)) = cost(5, 4) = 4.

Select the minimum of the costs obtained above, which is 3, with corresponding j = 3.

mincost = 1 + cost(3, 4) = 1 + 3 = 4
t[2, 1] = 3; t[2, 2] = 4

near(3) = 0, giving near = [2, 0, 0, 0, 4].

for k = 1 to n:
k = 1: near(1) ≠ 0 (it is 2); is cost(1, 2) > cost(1, 3)? 4 > 9, No.
k = 2, 3, 4: near = 0, skip.
k = 5: near(5) ≠ 0 (it is 4); is cost(5, 4) > cost(5, 3)? 4 > 3, Yes, so near(5) = 3.

near = [2, 0, 0, 0, 3]

i = 3:

for j = 1 to 5:
j = 1: near(1) ≠ 0 (it is 2); cost(1, 2) = 4.
j = 2, 3, 4: near = 0, skip.
j = 5: near(5) ≠ 0 (it is 3); cost(5, 3) = 3.

Choosing the minimum cost from the above, which is 3, the corresponding j = 5.

mincost = 4 + cost(5, 3) = 4 + 3 = 7
t[3, 1] = 5; t[3, 2] = 3

near(5) = 0, giving near = [2, 0, 0, 0, 0].

for k = 1 to 5:
k = 1: near(1) ≠ 0 (it is 2); is cost(1, 2) > cost(1, 5)? 4 > ∞, No.
k = 2, 3, 4, 5: near = 0, skip.

i = 4:

for j = 1 to 5:
j = 1: near(1) ≠ 0 (it is 2); cost(1, 2) = 4.
j = 2, 3, 4, 5: near = 0, skip.

The only remaining cost is 4, with corresponding j = 1.

mincost = 7 + cost(1, 2) = 7 + 4 = 11
t[4, 1] = 1; t[4, 2] = 2

near(1) = 0, giving near = [0, 0, 0, 0, 0].

for k = 1 to 5: near(k) = 0 for all k, so nothing is updated.

End. The minimum-cost spanning tree has edges (2, 4), (3, 4), (5, 3), (1, 2) and cost 11.

The Single Source Shortest-Path Problem: DIJKSTRA'S ALGORITHMS

In the previously studied graphs the edge labels were called costs, but here we think of
them as lengths. In a labeled graph, the length of a path is defined to be the sum of
the lengths of its edges.

In the single-source, all-destinations shortest path problem, we must find a shortest
path from a given source vertex to each of the vertices (called destinations) in the
graph to which there is a path.

Dijkstra's algorithm is similar to Prim's algorithm for finding minimal spanning trees.
Dijkstra's algorithm takes a labeled graph and a pair of vertices P and Q, and finds the
shortest path between them (or one of the shortest paths, if there is more than one).
The principle of optimality is the basis for Dijkstra's algorithm.

Dijkstra's algorithm does not work for negative edge lengths at all.

The figure below lists the shortest paths from vertex 1 for a five-vertex weighted digraph.

[Figure: a five-vertex weighted digraph and the shortest paths from vertex 1.]

Algorithm:

Algorithm ShortestPaths (v, cost, dist, n)
// dist[j], 1 ≤ j ≤ n, is set to the length of the shortest path
// from vertex v to vertex j in the digraph G with n vertices.
// dist[v] is set to zero. G is represented by its
// cost adjacency matrix cost[1:n, 1:n].
{
    for i := 1 to n do
    {
        S[i] := false;                // initialize S
        dist[i] := cost[v, i];
    }
    S[v] := true; dist[v] := 0.0;     // put v in S
    for num := 2 to n − 1 do          // determine n − 1 paths from v
    {
        Choose u from among those vertices not in S such that dist[u] is minimum;
        S[u] := true;                 // put u in S
        for (each w adjacent to u with S[w] = false) do
            if (dist[w] > dist[u] + cost[u, w]) then   // update distances
                dist[w] := dist[u] + cost[u, w];
    }
}

Running time:

Depends on implementation of data structures for dist.

 Build a structure with n elements A


 at most m = E  times decrease the value of an item mB
 „n‟ times select the smallest value nC
 For array A = O (n); B = O (1); C = O (n) which gives O (n 2) total.
 For heap A = O (n); B = O (log n); C = O (log n) which gives O (n + m log n)
total.
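A minimal Python sketch of the same idea using a heap for dist (stale entries are skipped when popped); run on the Example 1 graph below, it reproduces the final distance row of the step tables.

    import heapq

    def dijkstra(adj, src):
        # adj[u] = list of (w, v); returns shortest distances from src.
        dist = {u: float('inf') for u in adj}
        dist[src] = 0
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue                   # stale entry
            for w, v in adj[u]:
                if dist[v] > d + w:        # relax edge (u, v)
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    edges = [('A', 'B', 3), ('A', 'C', 6), ('B', 'C', 2), ('B', 'D', 4),
             ('C', 'D', 1), ('C', 'E', 4), ('C', 'F', 2), ('D', 'E', 2),
             ('D', 'G', 4), ('E', 'F', 2), ('E', 'G', 1), ('F', 'G', 1)]
    adj = {v: [] for v in 'ABCDEFG'}
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    print(dijkstra(adj, 'A'))
    # {'A': 0, 'B': 3, 'C': 5, 'D': 6, 'E': 8, 'F': 7, 'G': 8}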

Example 1:

Use Dijkstra's algorithm to find the shortest path from A to each of the other six
vertices in the graph below.

[Figure: the same weighted graph on vertices A..G used in the Prim example; its cost
adjacency matrix is repeated below.]

Solution:

0 3 6     
 
3 0 2 4    
6 2 0 1 4 2  
 
The cost adjacency matrix is  4 1 0 2  4 
 4 2 0 2 
1 

  2  2 0 1 


  4 1 1 0 

The problem is solved by maintaining the following information:

• Status[v] is either '0', meaning that the shortest path from v to v0 has definitely been
found, or '1', meaning that it hasn't.

• Dist[v] is a number representing the length of the shortest path from v to v0 found so far.

• Next[v] is the first vertex on the way to v0 along the shortest path found so far from v to v0.

The progress of Dijkstra's algorithm on the graph shown above is as follows:

Step 1:
Vertex  A B C D E F G
Status  0 1 1 1 1 1 1
Dist.   0 3 6 ∞ ∞ ∞ ∞
Next    * A A A A A A

Step 2:
Vertex  A B C D E F G
Status  0 0 1 1 1 1 1
Dist.   0 3 5 7 ∞ ∞ ∞
Next    * A B B A A A

Step 3:
Vertex  A B C D E F G
Status  0 0 0 1 1 1 1
Dist.   0 3 5 6 9 7 ∞
Next    * A B C C C A

Step 4:
Vertex  A B C D E F G
Status  0 0 0 0 1 1 1
Dist.   0 3 5 6 8 7 10
Next    * A B C D C D

Step 5:
Vertex  A B C D E F G
Status  0 0 0 0 1 0 1
Dist.   0 3 5 6 8 7 8
Next    * A B C D C F

Step 6:
Vertex  A B C D E F G
Status  0 0 0 0 0 0 1
Dist.   0 3 5 6 8 7 8
Next    * A B C D C F

Step 7:
Vertex  A B C D E F G
Status  0 0 0 0 0 0 0
Dist.   0 3 5 6 8 7 8
Next    * A B C D C F
