
DEPARTMENT OF CS & IT

ANALYSIS AND DESIGN OF ALGORITHMS


(21BCA5C01)

Module 4 : Greedy Algorithms -I

Prepared by
Dr.A.Kannagi
UNIT 4: Greedy Algorithms -I

1. The greedy strategy


2. Greedy methods & optimization
3. Minimum cost spanning trees - Prim's algorithm
4. Minimum cost spanning trees - Kruskal’s algorithm
5. Huffman codes
6. Single source shortest paths- Dijkstra’s algorithm
7. Knapsack

1. The greedy strategy


A greedy algorithm is an approach for solving a problem by selecting the
best option available at the moment, without worrying whether the current best
choice will lead to the overall optimal result.
The algorithm never reverses an earlier decision, even if that choice turns out to
be wrong. It works in a top-down manner.
This approach may not produce the best result for every problem, because it
always makes the locally best choice in the hope of reaching the globally best
result. However, we can determine whether a problem is suitable for the greedy
approach by checking whether it has the following two properties:
1. Greedy Choice Property
If an optimal solution to the problem can be found by choosing the best
choice at each step without reconsidering the previous steps once chosen, the
problem can be solved using a greedy approach. This property is called greedy
choice property.
2. Optimal Substructure
If the optimal overall solution to the problem corresponds to the optimal
solution to its subproblems, then the problem can be solved using a greedy
approach. This property is called optimal substructure.

2. Greedy methods & optimization

The components that can be used in the greedy algorithm are:

o Candidate set: The set of elements from which a solution is created.
o Selection function: Chooses the best candidate to be added to the solution.
o Feasibility function: Determines whether a candidate can be used to
contribute to the solution.
o Objective function: Assigns a value to a solution or a partial solution.
o Solution function: Indicates whether a complete solution has been reached.
Applications of Greedy Algorithm
o It is used in finding the shortest path.
o It is used to find the minimum spanning tree using Prim's algorithm or
Kruskal's algorithm.
o It is used in job sequencing with deadlines.
o It is also used to solve the fractional knapsack problem.
Pseudo code of Greedy Algorithm
Algorithm Greedy(a, n)
{
    solution := Ø;
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}

The above is the general greedy algorithm. Initially, the solution is empty. We
pass the array and the number of elements to the algorithm. Inside the for loop,
we select the elements one by one and check whether adding each one keeps the
solution feasible. If it does, we perform the union.
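As a concrete illustration of this select/feasible/union template, the sketch below applies it to making change with coins. This is a hypothetical example (the class and method names are our own, not from the notes); note that for arbitrary denominations the greedy choice is not always optimal, which matches the caveat above, but for the canonical denominations used here it happens to be.

```java
// A minimal concrete instance of the greedy template: making change.
// select() = the largest remaining denomination; feasible() = the coin does
// not exceed the amount still owed; union() = append the coin to the solution.
import java.util.*;

public class GreedyChange {
    static List<Integer> makeChange(int amount, int[] denominations) {
        int[] coins = denominations.clone();
        Arrays.sort(coins);                      // ascending order
        List<Integer> solution = new ArrayList<>();
        // scan denominations from largest to smallest
        for (int i = coins.length - 1; i >= 0; i--) {
            while (coins[i] <= amount) {         // feasibility check
                solution.add(coins[i]);          // union step
                amount -= coins[i];
            }
        }
        return solution;
    }

    public static void main(String[] args) {
        // 63 = 25 + 25 + 10 + 1 + 1 + 1 with canonical denominations
        System.out.println(makeChange(63, new int[]{1, 5, 10, 25}));
    }
}
```

The algorithm never reconsiders a coin once taken, exactly as the greedy strategy prescribes.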

Let's understand through an example.

Suppose there is a problem 'P'. I want to travel from A to B shown as below:

P:A→B

The problem is that we have to travel from A to B. There are various ways
to go from A to B: by walking, car, bike, train, aeroplane, etc. There is a
constraint on the journey: we must complete it within 12 hrs. Only the train and
the aeroplane can cover the distance within 12 hrs, so although there are many
solutions to this problem, only two of them satisfy the constraint.

Suppose we also require that the journey be covered at minimum cost. Since
we must minimise the cost, this is known as a minimization problem. So far we
have two feasible solutions: one by train and one by air. Travelling by train leads
to the minimum cost, so it is the optimal solution. An optimal solution is a
feasible solution that provides the best result; here, that is the solution with the
minimum cost, and there is only one optimal solution.

A problem that asks for either a minimum or a maximum result is known as
an optimization problem. The greedy method is one of the strategies used for
solving optimization problems.

Disadvantages of using Greedy algorithm

A greedy algorithm makes decisions based only on the information available
at each step, without considering the broader problem. So there is a possibility
that the greedy solution does not give the best solution for every problem.

3. Minimum Cost Spanning Trees - Prim's algorithm


Prim’s algorithm is also a Greedy algorithm. It starts with an empty spanning tree.
The idea is to maintain two sets of vertices. The first set contains the vertices already
included in the MST, the other set contains the vertices not yet included. At every step, it
considers all the edges that connect the two sets, and picks the minimum weight edge
from these edges. After picking the edge, it moves the other endpoint of the edge to the
set containing MST.
A group of edges that connects two sets of vertices in a graph is called a cut in
graph theory. So, at every step of Prim's algorithm, we find a cut between the two sets
(one containing the vertices already included in the MST and the other containing the
rest of the vertices), pick the minimum weight edge crossing the cut, and add its endpoint
to the MST set (the set of already included vertices).
How does Prim’s Algorithm Work?
The idea behind Prim’s algorithm is simple, a spanning tree means all vertices must
be connected. So the two disjoint subsets (discussed above) of vertices must be connected
to make a Spanning Tree. And they must be connected with the minimum weight edge to
make it a Minimum Spanning Tree.
Algorithm
1) Create a set mstSet that keeps track of vertices already included in MST.

2) Assign a key value to all vertices in the input graph. Initialize all key values as INFINITE.
Assign key value as 0 for the first vertex so that it is picked first.
3) While mstSet doesn’t include all vertices
 Pick a vertex u which is not there in mstSet and has minimum key value.
 Include u in mstSet
 Update key value of all adjacent vertices of u. To update the key values, iterate through
all adjacent vertices. For every adjacent vertex v, if weight of edge u-v is less than the
previous key value of v, update the key value as weight of u-v
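The three steps above can be sketched in Java as follows. This is an illustrative adjacency-matrix version assuming a connected graph; the class name, method name, and sample graph are our own, not the example graph from the notes.

```java
// Illustrative sketch of Prim's algorithm (adjacency-matrix version, O(V^2)).
public class PrimMST {
    static int primMST(int[][] graph) {
        int n = graph.length;
        int[] key = new int[n];              // min edge weight connecting each vertex to the MST
        boolean[] inMST = new boolean[n];    // the mstSet of step 1
        java.util.Arrays.fill(key, Integer.MAX_VALUE);
        key[0] = 0;                          // step 2: first vertex gets key 0 so it is picked first
        int total = 0;
        for (int count = 0; count < n; count++) {
            // step 3a: pick the vertex not in mstSet with the minimum key value
            int u = -1;
            for (int v = 0; v < n; v++)
                if (!inMST[v] && (u == -1 || key[v] < key[u])) u = v;
            inMST[u] = true;                 // step 3b: include u in mstSet
            total += key[u];
            // step 3c: update key values of u's adjacent vertices (0 = no edge)
            for (int v = 0; v < n; v++)
                if (graph[u][v] != 0 && !inMST[v] && graph[u][v] < key[v])
                    key[v] = graph[u][v];
        }
        return total;                        // total weight of the MST
    }

    public static void main(String[] args) {
        int[][] g = {
            { 0, 2, 0, 6, 0 },
            { 2, 0, 3, 8, 5 },
            { 0, 3, 0, 0, 7 },
            { 6, 8, 0, 0, 9 },
            { 0, 5, 7, 9, 0 }
        };
        System.out.println(primMST(g));      // MST weight of this sample graph: 16
    }
}
```

The key[] array plays the role of the key values in step 2, and the two inner scans implement the pick-and-update of step 3; this matrix version runs in O(V^2).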
Let us understand with the following example:

Consider the following graph as an example for which we need to find the
Minimum Spanning Tree (MST).

Example of a graph

Step 1: Firstly, we select an arbitrary vertex that acts as the starting vertex of the
Minimum Spanning Tree. Here we have selected vertex 0 as the starting vertex.

0 is selected as starting vertex

Step 2: All the edges connecting the incomplete MST and other vertices are the
edges {0, 1} and {0, 7}. Between these two the edge with minimum weight is {0,
1}. So include the edge and vertex 1 in the MST.

1 is added to the MST

Step 3: The edges connecting the incomplete MST to other vertices are {0, 7}, {1,
7} and {1, 2}. Among these edges the minimum weight is 8 which is of the edges
{0, 7} and {1, 2}. Let us here include the edge {0, 7} and the vertex 7 in the MST.
[We could have also included edge {1, 2} and vertex 2 in the MST].

7 is added in the MST

Step 4: The edges that connect the incomplete MST with the fringe vertices are
{1, 2}, {7, 6} and {7, 8}. Add the edge {7, 6} and the vertex 6 in the MST as it has
the least weight (i.e., 1).

6 is added in the MST

Step 5: The connecting edges now are {7, 8}, {1, 2}, {6, 8} and {6, 5}. Include
edge {6, 5} and vertex 5 in the MST as the edge has the minimum weight (i.e., 2)
among them.

Include vertex 5 in the MST

Step 6: Among the current connecting edges, the edge {5, 2} has the minimum
weight. So include that edge and the vertex 2 in the MST.

Include vertex 2 in the MST

Step 7: The connecting edges between the incomplete MST and the other edges
are {2, 8}, {2, 3}, {5, 3} and {5, 4}. The edge with minimum weight is edge {2, 8}
which has weight 2. So include this edge and the vertex 8 in the MST.

Add vertex 8 in the MST

Step 8: See here that the edges {7, 8} and {2, 3} both have the same minimum
weight. But vertices 7 and 8 are both already part of the MST, so edge {7, 8}
would form a cycle. So we will consider the edge {2, 3} and include that edge
and vertex 3 in the MST.
Include vertex 3 in MST

Step 9: Only the vertex 4 remains to be included. The minimum weighted edge
from the incomplete MST to 4 is {3, 4}.

Include vertex 4 in the MST

The final structure of the MST is as follows and the weight of the edges of the
MST is (4 + 8 + 1 + 2 + 4 + 2 + 7 + 9) = 37.

4. Minimum Cost Spanning Trees - Kruskal’s algorithm
A minimum spanning tree (MST) or minimum weight spanning tree for a weighted,
connected, undirected graph is a spanning tree with a weight less than or equal to the
weight of every other spanning tree.
Introduction to Kruskal’s Algorithm:

In Kruskal’s algorithm, all edges of the given graph are first sorted in increasing
order of weight. The algorithm then keeps adding edges (and the nodes they connect) to
the MST as long as the newly added edge does not form a cycle. It considers the minimum
weighted edge first and the maximum weighted edge last. Thus we can say that it makes
a locally optimal choice at each step in order to find the optimal solution. Hence this is a
Greedy Algorithm.

How to find MST using Kruskal’s algorithm?

Below are the steps for finding MST using Kruskal’s algorithm:

1. Sort all the edges in non-decreasing order of their weight.


2. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so
far. If no cycle is formed, include this edge. Else, discard it.
3. Repeat step 2 until there are (V-1) edges in the spanning tree.
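The steps above can be sketched in Java using a simple union-find structure to detect cycles. This is an illustrative version; the class name, method names, and the sample edge list in main are our own, not the figure from the notes.

```java
// Illustrative sketch of Kruskal's algorithm with union-find cycle detection.
import java.util.*;

public class KruskalMST {
    static int[] parent;

    static int find(int x) {                 // find set representative (path compression)
        if (parent[x] != x) parent[x] = find(parent[x]);
        return parent[x];
    }

    static int kruskalMST(int vertices, int[][] edges) {
        // each edge is {weight, u, v}; step 1: sort by non-decreasing weight
        Arrays.sort(edges, Comparator.comparingInt(e -> e[0]));
        parent = new int[vertices];
        for (int i = 0; i < vertices; i++) parent[i] = i;
        int total = 0, used = 0;
        for (int[] e : edges) {
            int ru = find(e[1]), rv = find(e[2]);
            if (ru != rv) {                  // step 2: endpoints in different components, no cycle
                parent[ru] = rv;             // merge the two components
                total += e[0];
                if (++used == vertices - 1) break;  // step 3: (V-1) edges, MST complete
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] edges = {
            {2, 0, 1}, {3, 1, 2}, {5, 1, 4}, {6, 0, 3},
            {7, 2, 4}, {8, 1, 3}, {9, 3, 4}
        };
        System.out.println(kruskalMST(5, edges));  // MST weight of this sample graph: 16
    }
}
```

find() returns a component representative; an edge is included only when its endpoints lie in different components, which is exactly the cycle check of step 2.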

Kruskal’s algorithm to find the minimum cost spanning tree uses the greedy
approach. The Greedy Choice is to pick the smallest weight edge that does not cause a
cycle in the MST constructed so far. Let us understand it with an example:

Illustration:

Below is the illustration of the above approach:

Input Graph:

The graph contains 9 vertices and 14 edges. So, the minimum spanning tree formed
will have (9 – 1) = 8 edges.
After sorting:

Weight   Source   Destination

1        7        6
2        8        2
2        6        5
4        0        1
4        2        5
6        8        6
7        2        3
7        7        8
8        0        7
8        1        2
9        3        4
10       5        4
11       1        7
14       3        5

Now pick all edges one by one from the sorted list of edges
Step 1: Pick edge 7-6. No cycle is formed, include it.

Add edge 7-6 in the MST

Step 2: Pick edge 8-2. No cycle is formed, include it.

Add edge 8-2 in the MST


Step 3: Pick edge 6-5. No cycle is formed, include it.

Add edge 6-5 in the MST
Step 4: Pick edge 0-1. No cycle is formed, include it.

Add edge 0-1 in the MST


Step 5: Pick edge 2-5. No cycle is formed, include it.

Add edge 2-5 in the MST
Step 6: Pick edge 8-6. Since including this edge results in a cycle, discard it. Pick edge 2-
3. No cycle is formed, include it.

Add edge 2-3 in the MST


Step 7: Pick edge 7-8. Since including this edge results in a cycle, discard it. Pick edge 0-
7. No cycle is formed, include it.

Add edge 0-7 in MST
Step 8: Pick edge 1-2. Since including this edge results in a cycle, discard it. Pick edge 3-
4. No cycle is formed, include it.

Add edge 3-4 in the MST


Note: Since the number of edges included in the MST equals (V – 1), the algorithm
stops here.

5. Huffman codes
Huffman Coding is a technique for compressing data to reduce its size without
losing any of the details. It was first developed by David Huffman.
Huffman Coding is generally useful for compressing data in which some
characters occur frequently.
How does Huffman Coding work?
Suppose the string below is to be sent over a network.

Initial string

Each character occupies 8 bits. There are a total of 15 characters in the above string.
Thus, a total of 8 * 15 = 120 bits are required to send this string.
Using the Huffman Coding technique, we can compress the string to a smaller size.
Huffman coding first uses the frequencies of the characters and then generates a
code for each character.
Once the data is encoded, it has to be decoded. Decoding is done using the same
tree.
Huffman Coding prevents any ambiguity in the decoding process by using the
concept of prefix codes, i.e., the code associated with one character is never a prefix of
the code of any other character. The tree created above helps maintain this property.
Huffman coding is done with the help of the following steps.
1. Calculate the frequency of each character in the string.

Frequency of string
2. Sort the characters in increasing order of the frequency. These are stored in a priority
queue Q.

Characters sorted according to the frequency


3. Make each unique character as a leaf node.
4. Create an empty node z. Assign the minimum frequency to the left child of z and assign
the second minimum frequency to the right child of z. Set the value of the z as the sum
of the above two minimum frequencies.

Getting the sum of the least numbers
5. Remove these two minimum frequencies from Q and add the sum into the list of
frequencies (* denote the internal nodes in the figure above).
6. Insert node z into the tree.
7. Repeat steps 3 to 5 for all the characters.


8. For each non-leaf node, assign 0 to the left edge and 1 to the right edge.

20
Freque
Character Code Size
ncy

A 5 11 5*2 = 10
Assign 0
to the B 1 100 1*3 = 3 left edge
and 1 to the
C 6 0 6*1 = 6
right edge
For D 3 101 3*3 = 9 sending
the above
4 * 8 = 32 bits 15 bits 28 bits
string over a
network, we have to send the tree as well as the above compressed-code. The total size
is given by the table below.

Without encoding, the total size of the string was 120 bits. After encoding, the size is
reduced to 32 + 15 + 28 = 75 bits.
Decoding the code

For decoding the code, we can take the code and traverse through the tree to find
the character.
Suppose 101 is to be decoded; we can traverse from the root as in the figure below.

Decoding
Huffman Coding Algorithm
create a priority queue Q consisting of each unique character.
sort them in ascending order of their frequencies.
for all the unique characters:
create a newNode
extract minimum value from Q and assign it to leftChild of newNode
extract minimum value from Q and assign it to rightChild of newNode
calculate the sum of these two minimum values and assign it to the value of newNode
insert this newNode into the tree
return rootNode
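The algorithm above can be sketched in Java with a priority queue. This is an illustrative version (the class and method names are our own); the frequencies in main mirror the A/B/C/D example from these notes.

```java
// Illustrative sketch of Huffman code construction using a priority queue.
import java.util.*;

public class HuffmanDemo {
    static class Node {
        char ch; int freq; Node left, right;
        Node(char ch, int freq) { this.ch = ch; this.freq = freq; }
        Node(Node l, Node r) { this.freq = l.freq + r.freq; left = l; right = r; }
        boolean isLeaf() { return left == null && right == null; }
    }

    // build the Huffman tree and return a map from character to code
    static Map<Character, String> buildCodes(Map<Character, Integer> freq) {
        PriorityQueue<Node> q = new PriorityQueue<>(Comparator.comparingInt(n -> n.freq));
        for (Map.Entry<Character, Integer> e : freq.entrySet())
            q.add(new Node(e.getKey(), e.getValue()));
        while (q.size() > 1) {               // merge the two least-frequent nodes
            Node left = q.poll(), right = q.poll();
            q.add(new Node(left, right));    // internal node holds the sum of frequencies
        }
        Map<Character, String> codes = new HashMap<>();
        assign(q.poll(), "", codes);         // 0 on each left edge, 1 on each right edge
        return codes;
    }

    static void assign(Node n, String code, Map<Character, String> codes) {
        if (n == null) return;
        if (n.isLeaf()) { codes.put(n.ch, code.isEmpty() ? "0" : code); return; }
        assign(n.left, code + "0", codes);
        assign(n.right, code + "1", codes);
    }

    public static void main(String[] args) {
        // frequencies from the notes' example: A=5, B=1, C=6, D=3
        Map<Character, Integer> freq = Map.of('A', 5, 'B', 1, 'C', 6, 'D', 3);
        Map<Character, String> codes = buildCodes(freq);
        int bits = 0;
        for (Map.Entry<Character, String> e : codes.entrySet())
            bits += freq.get(e.getKey()) * e.getValue().length();
        System.out.println(bits);            // total encoded length in bits
    }
}
```

For these frequencies the encoded string length comes to 28 bits, matching the table in the notes (the exact 0/1 assignment may differ by left/right orientation, but the code lengths do not).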

6. Single source shortest paths- Dijkstra’s algorithm


Dijkstra's Algorithm is a Graph algorithm that finds the shortest path from a
source vertex to all other vertices in the Graph (single source shortest path). It is a type of
Greedy Algorithm that only works on Weighted Graphs having positive weights.
The time complexity of Dijkstra's Algorithm is O(V²) with the adjacency matrix
representation of the graph. This can be reduced to O((V + E) log V) with an adjacency
list representation, where V is the number of vertices and E is the number of edges in
the graph.

Fundamentals of Dijkstra's Algorithm
Dijkstra's Algorithm was conceived by computer scientist Edsger W. Dijkstra in 1956.

The following are the basic concepts of Dijkstra's Algorithm:

1. Dijkstra's Algorithm begins at the node we select (the source node), and it examines
the graph to find the shortest path between that node and all the other nodes in
the graph.
2. The Algorithm keeps a record of the currently known shortest distance from each
node to the source node, and it updates these values if it finds any shorter
path.
3. Once the Algorithm has retrieved the shortest path between the source and another
node, that node is marked as 'visited' and included in the path.
4. The procedure continues until all the nodes in the graph have been included in the
path. In this manner, we have a path connecting the source node to all other nodes,
following the shortest possible path to reach each node.

Working of Dijkstra's Algorithm


Highlights:
 Greedy Algorithm
 Relaxation
Dijkstra's Algorithm requires a graph and a source vertex to work. The algorithm is
purely based on the greedy approach and thus finds the locally optimal choice (the local
minimum in this case) at each step of the algorithm.
Each Vertex in this Algorithm will have two properties defined for it:
1. Visited Property
2. Path Property
Let us understand these properties in brief.
Visited Property:
1. The 'visited' property signifies whether or not the node has been visited.

2. We are using this property so that we do not revisit any node.
3. A node is marked visited only when the shortest path has been found.
Path Property:
1. The 'path' property stores the value of the current minimum path to the node.
2. The current minimum path implies the shortest way we have reached this node till
now.
3. This property is revised when any neighbor of the node is visited.
4. This property is significant because it will store the final answer for each node.

Initially, we mark all the vertices, or nodes, unvisited as they have yet to be
visited. The path to all the nodes is also set to infinity apart from the source node.
Moreover, the path to the source node is set to zero (0).

We then select the source node and mark it as visited. After that, we access all
the neighboring nodes of the source node and perform relaxation on every node.

Relaxation is the process of lowering the cost of reaching a node with the
help of another node.

In the process of relaxation, the path of each node is revised to the minimum of the
node's current path and the sum of the path to the previous node plus the weight of the
edge from the previous node to the current node.

Let us suppose that p[n] is the value of the current path for node n, p[m] is the value
of the path up to the previously visited node m, and w is the weight of the edge between
the current node and previously visited one (edge weight between n and m).

In the mathematical sense, relaxation can be exemplified as:

p[n] = minimum(p[n], p[m] + w)

We then mark an unvisited node with the least path as visited in every subsequent
step and update its neighbor's paths.
We repeat this procedure until all the nodes in the graph are marked visited.

Whenever we add a node to the visited set, the path to all its neighboring nodes
also changes accordingly.

If any node is left unreachable (disconnected component), its path remains 'infinity'.
In case the source itself is a separate component, then the path to all other nodes remains
'infinity'.

Understanding Dijkstra's Algorithm with an Example

The following are the steps that we will follow to implement Dijkstra's Algorithm:

Step 1: First, we will mark the source node with a current distance of 0 and set the rest of
the nodes to INFINITY.

Step 2: We will then set the unvisited node with the smallest current distance as the
current node, suppose X.

Step 3: For each neighbor N of the current node X: We will then add the current distance
of X with the weight of the edge joining X-N. If it is smaller than the current distance of N,
set it as the new current distance of N.

Step 4: We will then mark the current node X as visited.

Step 5: We will repeat the process from 'Step 2' if there is any node unvisited left in the
graph.

Let us now understand the implementation of the algorithm with the help of an
example:

Figure 6: The Given Graph

1. We will use the above graph as the input, with node A as the source.
2. First, we will mark all the nodes as unvisited.
3. We will set the path to 0 at node A and INFINITY for all the other nodes.
4. We will now mark source node A as visited and access its neighboring nodes.
Note: We have only accessed the neighboring nodes, not visited them.
5. We will now update the path to node B by 4 with the help of relaxation because the
path to node A is 0 and the path from node A to B is 4, and the minimum((0 + 4),
INFINITY) is 4.
6. We will also update the path to node C by 5 with the help of relaxation because the
path to node A is 0 and the path from node A to C is 5, and the minimum((0 + 5),
INFINITY) is 5. Both the neighbors of node A are now relaxed; therefore, we can
move ahead.

7. We will now select the next unvisited node with the least path and visit it. Hence, we
will visit node B and perform relaxation on its unvisited neighbors. After performing
relaxation, the path to node C will remain 5, whereas the path to node E will
become 11, and the path to node D will become 13.
8. Next, we will visit node C (path 5) and relax its unvisited neighbor E: the path to
node E becomes minimum(11, 5 + 3) = 8.
9. We will now visit node E and perform relaxation on its neighboring nodes B, D,
and F. Since only node F is unvisited, it will be relaxed. Thus, the path to node B will
remain as it is, i.e., 4, the path to node D will also remain 13, and the path to
node F will become 14 (8 + 6).
10. Now we will visit node D, and only node F will be relaxed. However, the path to
node F will remain unchanged, i.e., 14.
11. Since only node F is remaining, we will visit it but not perform any relaxation as all
its neighboring nodes are already visited.
12. Once all the nodes of the graphs are visited, the program will end.

Hence, the final paths we concluded are:

A=0
B = 4 (A -> B)
C = 5 (A -> C)
D = 4 + 9 = 13 (A -> B -> D)
E = 5 + 3 = 8 (A -> C -> E)
F = 5 + 3 + 6 = 14 (A -> C -> E -> F)
Pseudocode for Dijkstra's Algorithm

We will now understand a pseudocode for Dijkstra's Algorithm.

o We have to maintain a record of the path distance of every node. Therefore, we can
store the path distance of each node in an array of size n, where n is the total
number of nodes.

o Moreover, we want to retrieve the shortest path along with the length of that path.
To overcome this problem, we will map each node to the node that last updated its
path length.
o Once the algorithm is complete, we can backtrack the destination node to the
source node to retrieve the path.
o We can use a minimum Priority Queue to retrieve the node with the least path
distance in an efficient way.

Let us now implement a pseudocode of the above illustration:

Pseudocode:

function Dijkstra_Algorithm(Graph, source_node)
    // set the distance of every node to INFINITY and its predecessor to NULL
    for each node N in Graph:
        distance[N] = INFINITY
        previous[N] = NULL
        if N != source_node, add N to Priority Queue G
    // the source node is at distance 0 from itself
    distance[source_node] = 0
    // iterate until the Priority Queue G is empty
    while G is NOT empty:
        // select the node Q with the least distance and mark it as visited
        Q = node in G with the least distance[]
        mark Q as visited
        // relax each unvisited neighboring node N of Q
        for each unvisited neighbor N of Q:
            temporary_distance = distance[Q] + distance_between(Q, N)
            // if the temporary distance is less than the current distance of N,
            // update N's distance and predecessor
            if temporary_distance < distance[N]:
                distance[N] := temporary_distance
                previous[N] := Q
    // return the final lists of distances and predecessors
    return distance[], previous[]

Explanation:

In the above pseudocode, we have defined a function that accepts multiple


parameters - the Graph consisting of the nodes and the source node. Inside this function,
we have iterated through each node in the Graph, set their initial distance to INFINITY,
and set the previous node value to NULL. We have also checked whether any selected
node is not a source node and added the same into the Priority Queue. Moreover, we have
set the distance of the source node to 0. We then iterated through the nodes in the
priority queue, selected the node with the least distance, and marked it as visited. We then
iterated through the unvisited neighboring nodes of the selected node and performed
relaxation accordingly. At last, we have compared both the distances (original and
temporary distance) between the source node and the destination node, updated the
resultant distance with the minimum value and previous node information, and returned
the final list of distances with their previous node information.

Code for Dijkstra's Algorithm in Java

The following is the implementation of Dijkstra's Algorithm in the Java Programming


Language:

File: DijkstraAlgorithm.java

// Implementation of Dijkstra's Algorithm in Java

// defining the public class for Dijkstra's Algorithm
public class DijkstraAlgorithm
{
    // defining the method to implement Dijkstra's Algorithm
    public void dijkstraAlgorithm(int[][] graph, int source)
    {
        // number of nodes
        int nodes = graph.length;
        boolean[] visited_vertex = new boolean[nodes];
        int[] dist = new int[nodes];
        for (int i = 0; i < nodes; i++)
        {
            visited_vertex[i] = false;
            dist[i] = Integer.MAX_VALUE;
        }
        // Distance of the source vertex to itself is zero
        dist[source] = 0;
        for (int i = 0; i < nodes; i++)
        {
            // Picking the unvisited vertex nearest to the source
            int u = find_min_distance(dist, visited_vertex);
            visited_vertex[u] = true;
            // Updating the distances of all the neighboring vertices
            for (int v = 0; v < nodes; v++)
            {
                if (!visited_vertex[v] && graph[u][v] != 0 && (dist[u] + graph[u][v] < dist[v]))
                {
                    dist[v] = dist[u] + graph[u][v];
                }
            }
        }
        for (int i = 0; i < dist.length; i++)
        {
            System.out.println(String.format("Distance from Vertex %s to Vertex %s is %s", source, i, dist[i]));
        }
    }

    // defining the method to find the minimum distance
    private static int find_min_distance(int[] dist, boolean[] visited_vertex)
    {
        int minimum_distance = Integer.MAX_VALUE;
        int minimum_distance_vertex = -1;
        for (int i = 0; i < dist.length; i++)
        {
            if (!visited_vertex[i] && dist[i] < minimum_distance)
            {
                minimum_distance = dist[i];
                minimum_distance_vertex = i;
            }
        }
        return minimum_distance_vertex;
    }

    // main function
    public static void main(String[] args)
    {
        // declaring the adjacency matrix of the graph (0 means no edge)
        int[][] graph = new int[][] {
            { 0, 1, 1, 2, 0, 0, 0 },
            { 0, 0, 2, 0, 0, 3, 0 },
            { 1, 2, 0, 1, 3, 0, 0 },
            { 2, 0, 1, 0, 2, 0, 1 },
            { 0, 0, 3, 0, 0, 2, 0 },
            { 0, 3, 0, 0, 2, 0, 1 },
            { 0, 2, 0, 1, 0, 1, 0 }
        };
        // instantiating the DijkstraAlgorithm class
        DijkstraAlgorithm test = new DijkstraAlgorithm();
        // calling the dijkstraAlgorithm() method to find the shortest distance from the
        // source node to every other node
        test.dijkstraAlgorithm(graph, 0);
    }
}

Output

Distance from Vertex 0 to Vertex 0 is 0


Distance from Vertex 0 to Vertex 1 is 1
Distance from Vertex 0 to Vertex 2 is 1
Distance from Vertex 0 to Vertex 3 is 2
Distance from Vertex 0 to Vertex 4 is 4
Distance from Vertex 0 to Vertex 5 is 4
Distance from Vertex 0 to Vertex 6 is 3

Explanation:

In the above snippet of code, we have defined a public class


as DijkstraAlgorithm(). Inside this class, we have defined a public method
as dijkstraAlgorithm() to find the shortest distance from the source vertex to the
destination vertex. Inside this method, we have defined a variable to store the number of
nodes. We have then defined a Boolean array to store the information regarding the
visited vertices and an integer array to store their respective distances. Initially, we declared
the values in both the arrays as False and MAX_VALUE, respectively. We have also set the
distance of the source vertex as zero and used the for-loop to update the distance
between the source vertex and destination vertices with the minimum distance. We have
then updated the distances of the neighboring vertices of the selected vertex by
performing relaxation and printed the shortest distances for every vertex. We have then
defined a method to find the minimum distance from the source vertex to the destination
vertex. We then defined the main function where we declared the vertices of the graph and
instantiated the DijkstraAlgorithm() class. Finally, we have called
the dijkstraAlgorithm() method to find the shortest distance between the source vertex
and the destination vertices.
As a result, the required shortest possible paths for every node from the source
node are printed for the users.
Reference video link :
https://www.youtube.com/watch?v=XB4MIexjvY0

7. Fractional Knapsack Problem:


The fractional knapsack problem is one of the variants of the knapsack problem.
In the fractional knapsack, items may be broken into parts in order to maximize the
profit. The variant in which we may break an item is known as the fractional knapsack
problem.
Fractional Knapsack problem is defined as, “Given a set of items having some
weight and value/profit associated with it. The knapsack problem is to find the set of items
such that the total weight is less than or equal to a given limit (size of knapsack) and the
total value/profit earned is as large as possible.”
Knapsack problem has two variants.
 Binary or 0/1 knapsack : Item cannot be broken down into parts.
 Fractional knapsack : Item can be divided into parts.
Given a set of items, each having some weight and value/profit associated with it. The
goal is to find the set of items such that the total weight is less than or equal to a given
limit (size of knapsack) and the total value/profit earned is as large as possible.
 The knapsack problem is an optimization problem and is useful for solving
resource allocation problems.

 Let X = <x1, x2, x3, . . . , xn> be the set of n items. W = <w1, w2, w3, . . . , wn> and
V = <v1, v2, v3, . . . , vn> are the sets of weights and values associated with each item
in X, respectively. The knapsack capacity is M.
 Select items one by one from the set of items x and fill the knapsack such that it
would maximize the value. Knapsack problem has two variants. 0/1 knapsack does
not allow breaking of items. Either add entire item in a knapsack or reject it. It is
also known as a binary knapsack. Fractional knapsack allows the breaking of items.
So profit will also be considered accordingly.
 Knapsack problem can be formulated as follows:
Maximize  Σ (i = 1 to n) vi·xi
subject to  Σ (i = 1 to n) wi·xi ≤ M
xi ∈ {0, 1} for the binary knapsack
xi ∈ [0, 1] for the fractional knapsack
Algorithm
Algorithm GREEDY_FRACTIONAL_KNAPSACK(X, V, W, M)
// Description : Solve the knapsack problem using greedy approach

// Input:
X: An array of n items
V: An array of profit associated with each item
W: An array of weight associated with each item
M: Capacity of knapsack

// Output :
SW: Weight of selected items
SP: Profit of selected items
// Items are presorted in decreasing order of pi = vi / wi ratio

S←Φ // Set of selected items, initially empty


SW ← 0 // weight of selected items
SP ← 0 // profit of selected items
i←1

while i ≤ n do
if (SW + w[i]) ≤ M then
S ← S ∪ X[i]
SW ← SW + W[i]
SP ← SP + V[i]
else
frac ← (M - SW) / W[i]
S ← S ∪ X[i] * frac // Add fraction of item X[i]
SP ← SP + V[i] * frac // Add fraction of profit
SW ← SW + W[i] * frac // Add fraction of weight
end
i←i+1
end
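The algorithm above can be sketched in Java as follows. This is an illustrative version (the class and method names are our own); the data in main is the worked example from these notes with n = 3, M = 20, V = (24, 25, 15) and W = (18, 15, 20).

```java
// Illustrative sketch of the greedy fractional-knapsack procedure.
import java.util.*;

public class FractionalKnapsack {
    // returns the maximum total profit achievable with the given capacity
    static double maxProfit(int[] values, int[] weights, int capacity) {
        int n = values.length;
        Integer[] idx = new Integer[n];
        for (int i = 0; i < n; i++) idx[i] = i;
        // sort item indices by decreasing profit density v[i] / w[i]
        Arrays.sort(idx, (a, b) -> Double.compare(
            (double) values[b] / weights[b], (double) values[a] / weights[a]));
        double profit = 0;
        int remaining = capacity;
        for (int i : idx) {
            if (weights[i] <= remaining) {        // item fits entirely: take all of it
                remaining -= weights[i];
                profit += values[i];
            } else {                              // take only the fraction that fits
                profit += values[i] * ((double) remaining / weights[i]);
                break;                            // knapsack is now full
            }
        }
        return profit;
    }

    public static void main(String[] args) {
        int[] V = { 24, 25, 15 };
        int[] W = { 18, 15, 20 };
        System.out.printf("%.2f%n", maxProfit(V, W, 20)); // prints 31.67
    }
}
```

Sorting by profit density takes the place of the presorting assumption in the pseudocode, and the final fractional step mirrors its else branch; the result 31.67 matches the first worked example below.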
Complexity Analysis

 For one item there are two choices, either to select or reject. For 2 items we have
four choices:
 Select both items
 Reject both items
 Select first and reject second
 Reject first and select second
 In general, for n items, the knapsack has 2^n choices. So the brute force approach
runs in O(2^n) time.
 We can improve performance by sorting the items in advance. Using merge sort or
heap sort, n items can be sorted in O(n log2 n) time. Merge sort and heap sort are
non-adaptive and their running time is the same in the best, average and worst cases.
 To select the items, we need one scan of this sorted list, which takes O(n) time.
 So the total time required is
T(n) = O(n log2 n) + O(n) = O(n log2 n).
Examples of Fractional Knapsack

Problem: Consider the following instances of the fractional knapsack problem: n = 3,


M = 20, V = (24, 25, 15) and W = (18, 15, 20) find the feasible solutions.

Solution:

Let us arrange items by decreasing order of profit density. Assume that items are
labeled as X = (I1, I2, I3), have profit V = {24, 25, 15} and weight W = {18, 15, 20}.

Item (xi) Value (vi) Weight (wi) pi = vi / wi

I2 25 15 1.67

I1 24 18 1.33

I3 15 20 0.75

We shall select one by one item from Table. If the inclusion of an item does not
cross the knapsack capacity, then add it. Otherwise, break the current item and select only
the portion of item equivalent to remaining knapsack capacity. Select the profit
accordingly. We should stop when knapsack is full or all items are scanned.

Initialize, Weight of selected items, SW = 0,

Profit of selected items, SP = 0,

Set of selected items, S = { },

Here, Knapsack capacity M = 20.

Iteration 1 : SW= (SW + w2) = 0 + 15 = 15

SW ≤ M, so select I2

S = { I2 }, SW = 15, SP = 0 + 25 = 25

Iteration 2 : SW + w1 > M, so break down item I1.

The remaining capacity of the knapsack is 5 units, so select only 5 units of item I1.

frac = (M – SW) / W[i] = (20 – 15) / 18 = 5 / 18

S = { I2, I1 * 5/18 }

SP = SP + v1 * frac = 25 + (24 * (5/18)) = 25 + 6.67 = 31.67

SW = SW + w1 * frac = 15 + (18 * (5/18)) = 15 + 5 = 20

The knapsack is full. Fractional Greedy algorithm selects items { I2, I1 * 5/18 }, and it gives a
profit of 31.67 units.

Problem: Find the optimal solution for knapsack problem (fraction) where knapsack
capacity = 28, P = {9, 5, 2, 7, 6, 16, 3} and w = {2, 5, 6, 11, 1, 9, 1}.

Solution:

Arrange items in decreasing order of profit to weight ratio

Item   Profit pi   Weight wi   Ratio vi/wi

I5     6           1           6.00
I1     9           2           4.50
I7     3           1           3.00
I6     16          9           1.78
I2     5           5           1.00
I4     7           11          0.64
I3     2           6           0.33

Initialize, Weight = 0, P = 0, M = 28, S = { }

where S is the solution set, P and W are the profit and weight of the included items,
respectively, and M is the capacity of the knapsack.

Iteration 1

(Weight + w5) ≤ M, so select I5

So, S = { I5 }, Weight = 0 + 1 = 1, P = 0 + 6= 6

Iteration 2

(Weight + w1) ≤ M, so select I1

So, S = {I5 ,I1 }, Weight = 1 + 2 = 3, P = 6 + 9= 15

Iteration 3

(Weight + w7) ≤ M, so select I7

So, S = {I5, I1, I7 }, Weight = 3 + 1 = 4, P = 15 + 3= 18

Iteration 4

(Weight + w6) ≤ M, so select I6

So, S = {I5, I1, I7, I6 }, Weight = 4 + 9 = 13, P = 18 + 16= 34


Iteration 5

(Weight + w2) ≤ M, so select I2

So, S = {I5, I1, I7, I6, I2 }, Weight = 13 + 5 = 18, P = 34 + 5= 39

Iteration 6

(Weight + w4) > M, So I4 must be broken down into two parts x and y such that x =
capacity left in knapsack and y = I4 – x.

Available knapsack capacity is 10 units, so we can select only (28 – 18) / 11 = 0.91 unit of I4.

So S = {I5, I1, I7, I6, I2, 0.91 * I4 }, Weight = 18 + 0.91 * 11 = 28, P = 39 + 0.91 * 7 = 45.37

https://www.youtube.com/watch?v=oTTzNMHM05I
