

DC Experiments

Experiment 1
Aim:
Program to demonstrate a datagram socket for a chat application using Java.

Objectives:
● To understand fundamental concepts of computer communication
● To understand sockets and ports

Outcomes:
From this experiment, the student will be able to
● Demonstrate knowledge of the basic elements and concepts related to
distributed system technologies
● Understand different models of a distributed system and discuss the
challenges and opportunities faced by them.

Hardware/Software Required:
JDK 1.6

Theory:

Socket: An interface between an application process and the transport layer. The
application process can send/receive messages to/from another application process
(local or remote) via a socket.
Client/Server Communication

At a basic level, network-based systems consist of a server, a client, and media for
communication, as shown in Figure 1. A computer running a program that makes
requests for services is called a client machine. A computer running a program that
offers the requested services to one or more clients is called a server machine. The
media for communication can be a wired or wireless network.

Procedure:

TCP client algorithm:

1. Find the IP address and protocol port number of the server


2. Allocate a socket
3. Specify that the connection needs an arbitrary, unused protocol port on the local
machine and allow TCP to select one
4. Connect the socket to the server
5. Communicate with the server using application-level protocol
6. Close the connection

TCP Server Algorithm:

1. Create a socket and bind it to the well-known address for the service being
offered

2. Place the socket in passive mode


3. Accept the next connection request from the socket, and obtain a new socket for
the connection
4. Repeatedly read a request from the client, formulate a response, and send a
reply back to the client according to the application protocol
5. When finished with a particular client, close the connection and return to step 3
to accept a new connection

Additional Learning:
A socket is one endpoint of a two-way communication link between two programs
running on the network. The socket mechanism provides a means of inter-process
communication (IPC) by establishing named contact points between which the
communication takes place.

Output Analysis:
1. It listens for connections from clients. When a client connects, the server calls
accept() to accept, or complete, the connection.
2. The client calls connect() to establish a connection to the server and initiate
the three-way handshake. The handshake step is important since it ensures
that each side of the connection is reachable in the network, in other words,
the client can reach the server and vice-versa.
Conclusion and Discussion:
Hence we conclude the following about TCP socket communication:
● Connection based
● Guaranteed reliable and ordered
● Automatically breaks your data up into packets for you
● Makes sure it doesn't send data too fast for the connection to handle (flow
control)
● Easy to use; you just read and write data as if it were a file

Program:

Server.java
package exp1;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Scanner;

public class Server {


public static void main(String[] args){
final ServerSocket serverSocket ;
final Socket clientSocket ;
final BufferedReader in;
final PrintWriter out;
final Scanner sc=new Scanner(System.in);

try {
serverSocket = new ServerSocket(5000);
clientSocket = serverSocket.accept();
out = new PrintWriter(clientSocket.getOutputStream());
in = new BufferedReader (new
InputStreamReader(clientSocket.getInputStream()));

Thread sender= new Thread(new Runnable() {


String msg; // holds the line of data written by the user
@Override // annotation to override the run method
public void run() {
while(true){
msg = sc.nextLine(); // reads data from the user's keyboard
out.println(msg); // write data stored in msg in the clientSocket
out.flush(); // forces the sending of the data
}
}
});
sender.start();

Thread receive= new Thread(new Runnable() {


String msg ;
@Override
public void run() {
try {
msg = in.readLine();
// while the client is connected
while(msg!=null){
System.out.println("Client : "+msg);
msg = in.readLine();
}

System.out.println("Client déconecté");

out.close();
clientSocket.close();
serverSocket.close();
} catch (IOException e) {
e.printStackTrace();
}
}
});
receive.start();
} catch (IOException e) {
e.printStackTrace();
}
}
}

Client.java
package exp1;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.Scanner;

public class Client {


public static void main(String[] args){
final Socket clientSocket; // socket used by the client to send and receive data from the server
final BufferedReader in; // object to read data from the socket
final PrintWriter out; // object to write data into the socket
final Scanner sc = new Scanner(System.in); // object to read data from the user's keyboard
try {
clientSocket = new Socket("127.0.0.1",5000);
out = new PrintWriter(clientSocket.getOutputStream());
in = new BufferedReader(new
InputStreamReader(clientSocket.getInputStream()));
Thread sender = new Thread(new Runnable() {
String msg;
@Override
public void run() {
while(true){
msg = sc.nextLine();
out.println(msg);
out.flush();
}
}
});
sender.start();
Thread receiver = new Thread(new Runnable() {
String msg;
@Override
public void run() {
try {
msg = in.readLine();
while(msg!=null){
System.out.println("Server : "+msg);
msg = in.readLine();
}
System.out.println("Server out of service");
out.close();
clientSocket.close();
} catch (IOException e) {
e.printStackTrace();
}
}
});
receiver.start();
}catch (IOException e){
e.printStackTrace();
}
}
}
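
Note that the Server/Client pair above communicates over TCP stream sockets,
matching the TCP algorithms in the Procedure. For the datagram sockets named in
the Aim, a minimal UDP exchange would look like the following sketch; the port 5000
and the loopback address are assumptions carried over from the TCP version:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpEchoSketch {
    public static void main(String[] args) throws Exception {
        // Receiver side: bind a datagram socket to port 5000 and wait for one packet.
        DatagramSocket receiver = new DatagramSocket(5000);

        // Sender side: datagrams are connectionless, so each packet carries its destination.
        DatagramSocket sender = new DatagramSocket();
        byte[] out = "hello".getBytes();
        sender.send(new DatagramPacket(out, out.length,
                InetAddress.getByName("127.0.0.1"), 5000));

        byte[] in = new byte[1024];
        DatagramPacket packet = new DatagramPacket(in, in.length);
        receiver.receive(packet); // blocks until a datagram arrives
        System.out.println(new String(packet.getData(), 0, packet.getLength()));

        sender.close();
        receiver.close();
    }
}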
Output:
Experiment 2
Aim:
Program to implement RMI Application using Java.
Objectives:
● The RMI (Remote Method Invocation) is an API that provides a mechanism to
create distributed applications in java.
● The RMI allows an object to invoke methods on an object running in another
JVM.

Outcomes:
From this experiment, the student will be able to
● Implement the middleware technologies that support distributed applications
using RPC, RMI and object-based middleware.

Hardware/Software Required:
JDK 1.6

Theory:

Java Remote Method Invocation (Java RMI) enables the programmer to create
distributed Java-to-Java applications, in which the methods of remote Java objects
can be invoked from other Java virtual machines, possibly on different hosts. RMI
uses object serialization to marshal and unmarshal parameters and does not
truncate types, supporting true object-oriented polymorphism.

Java RMI is a mechanism that allows one to invoke a method on an object that exists
in another address space. The other address space could be on the same machine
or on a different one. The RMI mechanism is basically an object-oriented RPC
mechanism.

There are three processes that participate in supporting RMI:

1. The Client is the process that is invoking a method on a remote object.

2. The Server is the process that owns the remote object. A remote object is an
ordinary object in the address space of the server process.

3. The Object Registry is a name server that relates remote objects with names,
so that clients can look them up.

Procedure:
Create Registry

1. Create your own interface which extends Remote interface.


2. Declare all the method signatures in it

3. Save and start registry.

Create Server Program

1. Define a class Server which extends the UnicastRemoteObject class and
implements the interface.

2. Define all the methods declared in own interface.

3. Create registry and bind server with it.

Create Client Program

1. Define a class Client.

2. Create a reference of the interface type.

3. Look up the registry for the server.

4. Make calls to the methods using the interface reference.

Additional Learning:
In computing, the Java Remote Method Invocation (Java RMI) is a Java API that
performs remote method invocation, the object-oriented equivalent of remote
procedure calls (RPC), with support for direct transfer of serialized Java classes and
distributed garbage collection.

Output Analysis:
We have created our own interface which extends the Remote interface, registered
the remote object with the RMI registry, and created server and client programs,
thereby implementing RMI in Java.

Conclusion and Discussion:


Hence we have studied and implemented Remote Method Invocation, which supports
distributed computing in Java.
Program:

Multiplication.java
import java.rmi.*;

public interface Multiplication extends Remote {


public int mul(int x, int y) throws RemoteException;
}

MulRemote.java
import java.rmi.*;
import java.rmi.server.*;

public class MulRemote extends UnicastRemoteObject implements Multiplication {
MulRemote() throws RemoteException {
super();
}

public int mul(int x, int y) {


return x * y;
}
}

Server.java
import java.rmi.*;
import java.rmi.registry.*;

public class Server {


public static void main(String args[]) {
try {
Multiplication m1 = new MulRemote();
Naming.rebind("rmi://localhost:5000/xyz", m1);
} catch (Exception e) {
System.out.println(e);
}
}
}
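
Note that Naming.rebind above assumes a registry is already listening on port 5000
(the "Save and start registry" step, e.g. via the rmiregistry 5000 command-line tool).
Alternatively, the server can start one in-process just before binding; a minimal
sketch, assuming the additional import java.rmi.registry.LocateRegistry:

// inside Server.main, before Naming.rebind(...):
LocateRegistry.createRegistry(5000); // starts an RMI registry on port 5000 in this JVM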

Client.java
import java.rmi.*;

public class Client {


public static void main(String args[]) {
try {
Multiplication m1 = (Multiplication)
Naming.lookup("rmi://localhost:5000/xyz");
System.out.println(m1.mul(4, 3));
} catch (Exception e) {
System.out.println(e); // report lookup/remote errors instead of swallowing them
}
}
}
Output:
Experiment 3
Aim:
Program to demonstrate Bully Election Algorithm using Java.

Objectives:
● To understand the election algorithm.
● To understand the principle and application of the bully algorithm

Outcomes:
From this experiment, the student will be able to learn to
● Analyze the various techniques used for clock synchronization and mutual
exclusion

Hardware/Software Required:
JDK 1.6

Theory:

Communication in networks is implemented by a process on one machine
communicating with a process on another machine. A distributed algorithm is an
algorithm, run on a distributed system, that does not assume the previous existence
of a central coordinator. A distributed system is a collection of processors that do
not share a memory or a clock. Each processor has its own memory, and the
processors communicate via communication networks. We consider two problems
requiring distributed algorithms: the coordinator election problem and the value
agreement problem (the Byzantine generals problem).

Election Algorithms

The coordinator election problem is to choose a process from among a group of
processes on different processors in a distributed system to act as the central
coordinator.
An election algorithm is an algorithm for solving the coordinator election problem.
By the nature of the coordinator election problem, any election algorithm must be a
distributed algorithm.
● A group of processes on different machines need to choose a coordinator.
● Peer-to-peer communication: every process can send messages to every
other process.
● Assume that processes have unique IDs, such that one is highest.
● Assume that the priority of process Pi is i.

Procedure:
Any process Pi sends a message to the current coordinator; if no response in T time
units, Pi tries to elect itself as a leader.

Algorithm for process Pi that detected the lack of coordinator

1. Process Pi sends an "Election" message to every process with higher priority.
2. If no other process responds, process Pi starts the coordinator code running and
sends a message to all processes with lower priorities saying "Elected Pi".
3. Else, Pi waits for T' time units to hear from the new coordinator, and if there is
no response, it starts from step (1) again.

Algorithm for other processes (each also called Pi)

If Pi is not the coordinator, then Pi may receive either of these messages from Pj:

● If Pj sends "Elected Pj" [this message is only received if i < j], Pi updates its
records to say that Pj is the coordinator.
● Else, if Pj sends an "Election" message (i > j), Pi sends a response to Pj saying
it is alive, and then Pi starts an election of its own.

For example, if P5 is the failed coordinator and P2 detects the failure, P2 sends
"Election" to P3, P4 and P5; P3 and P4 answer and run their own elections, and
eventually P4, the highest-priority live process, gets no response from P5 and
broadcasts "Elected P4".

Additional Learning:
The bully algorithm is a type of election algorithm which is mainly used for choosing
a coordinator. In a distributed system, we need election algorithms such as bully
and ring to get a coordinator that performs functions needed by the other
processes.
Output Analysis:
We have implemented and understood the Bully Election algorithm, which is
primarily used to elect a new process coordinator. The election to elect a new
coordinator can be carried out by any of the existing processes.

Conclusion and Discussion:


Hence, from the above experiment, the student understood the working of the Bully
election algorithm for coordinator election in a distributed system. This method
requires at most five stages, and the probability of detecting a crashed process
during the execution of the algorithm is lower than in other algorithms.

Program:
import java.io.*;
import java.util.Scanner;

class Bully {
static int n;
static int p[] = new int[100];
static int s[] = new int[100];
static int c;

public static void main(String args[]) throws IOException {


System.out.println("Enter no of process");
Scanner in = new Scanner(System.in);
n = in.nextInt();
int i, j, k, l, m;
for (i = 0; i < n; i++) {
System.out.println("For process" + (i + 1) + ":");
System.out.println("Status:");
s[i] = in.nextInt();
System.out.println("Priority");
p[i] = in.nextInt();
}
System.out.println("Which process will intiate the election?");
int ele = in.nextInt();
election(ele);
System.out.println("final co-ordinator is:" + c);
}

static void election(int ele) {


ele = ele - 1;
c = ele + 1;
for (int i = 0; i < n; i++) {
if (p[ele] < p[i]) {
System.out.println("Election message is sent from" + (ele
+ 1) + "to" + (i + 1));
if (s[i] == 1) {
election(i + 1);
}
}
}
}
}
Output:
Experiment 4
Aim:
Program to demonstrate Berkeley Clock Synchronization Algorithm using
C++

Objectives:
Students will understand
● Clock synchronization deals with understanding the temporal ordering of
events produced by concurrent processes.

Outcomes:
From this experiment, the student will be able to learn to
● Analyze the various techniques used for clock synchronization and mutual
exclusion

Hardware/Software Required:
C++

Theory:
The Berkeley algorithm is a method of clock synchronisation in distributed
computing which assumes no machine has an accurate time source. Computer
systems normally avoid rewinding their clock when they receive a negative clock
alteration from the master. Doing so would break the property of monotonic time,
which is a fundamental assumption in certain algorithms in the system itself or in
programs such as make. A simple solution to this problem is to halt the clock for the
duration specified by the master, but this simplistic solution can also cause
problems, although they are less severe. For minor corrections, most systems slow
the clock (known as "clock slew"), applying the correction over a longer period of
time.

Procedure:

Unlike Cristian's algorithm, the server process in the Berkeley algorithm, called the
master, periodically polls the other slave processes. Generally speaking, the
algorithm is as follows:
1. A master is chosen via an election process such as Chang and Roberts
algorithm.
2. The master polls the slaves who reply with their time in a similar way to
Cristian's algorithm.
3. The master observes the round-trip time (RTT) of the messages and
estimates the time of each slave and its own.
4. The master then averages the clock times, ignoring any values it receives far
outside the values of the others.
5. Instead of sending the updated current time back to the other process, the
master then sends out the amount (positive or negative) that each slave
must adjust its clock. This avoids further uncertainty due to RTT in the slave
processes.
With this method, the average cancels out individual clock's tendencies to drift.
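
As a worked example of steps 3 to 5: if the master reads 3:00 and two slaves report
3:25 and 2:50, the fault-free average is 3:05, so the adjustments sent out are +5,
-20 and +15 minutes. A minimal Java sketch of this averaging, with the clock
readings as illustrative assumptions:

public class BerkeleySketch {
    public static void main(String[] args) {
        // Clock readings in minutes past midnight; index 0 is the master,
        // the rest are slaves (the values are illustrative assumptions).
        int[] clocks = {180, 205, 170}; // 3:00, 3:25, 2:50

        int sum = 0;
        for (int c : clocks) sum += c;
        int avg = sum / clocks.length; // step 4: average = 185, i.e. 3:05

        // step 5: send each node its signed adjustment rather than the new time
        for (int i = 0; i < clocks.length; i++)
            System.out.println("node " + i + " adjusts by " + (avg - clocks[i]) + " min");
        // prints adjustments 5, -20 and 15
    }
}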

Additional Learning:
Berkeley’s Algorithm is a clock synchronization technique used in distributed
systems. The algorithm assumes that each machine node in the network either
doesn’t have an accurate time source or doesn’t possess a UTC server.

Output Analysis:
We have implemented Berkeley clock synchronization technique where the master
periodically polls the slave processes. The master sends out the amount (positive or
negative) that each slave must adjust its clock to be in sync.

Conclusion and Discussion:


A common physical clock is not present in a distributed system for synchronization.
To achieve synchronization, the timestamps of all the machines are collected and
the server broadcasts the appropriate adjustment to each of them using the
Berkeley algorithm.

Program:
#include <bits/stdc++.h>
using namespace std;

int main()
{
int n, i, h, m, t, p[100], d[100], sum = 0, avg = 0;
cout << "Enter no. of computers in the system: ";
cin >> n;
cout << endl
<< "Enter the clock time for Time Server";
cout << endl
<< "Enter hour and minute: ";
cin >> h >> m;
t = m + 60 * h;
for (i = 1; i <= n; i++)
{
cout << endl
<< "For process " << i;
cout << endl
<< "Enter hour and minute: ";
cin >> h >> m;
p[i] = m + 60 * h;
d[i] = p[i] - t;
cout << endl
<< "Difference is : " << d[i];
}
for (i = 1; i <= n; i++)
{
sum = sum + d[i];
}

avg = sum / (n + 1);


t = t + avg;
h = t / 60;
m = t % 60;

cout << endl
<< "So final time of time server is : " << h << " hh: " << m << " mm ";
for (i = 1; i <= n; i++)
{
cout << endl
<< "So for computer " << i << " time is adjusted to " << h << " hh : " << m << " mm ";
}
return 0;
}
Output:
Experiment 5
Aim:
Program to implement a non-token-based algorithm (Ricart–Agrawala) for distributed mutual exclusion.

Objectives:
Understand the different approaches to achieving mutual exclusion in a distributed
system.

Outcomes:
From this experiment, the student will be able to learn
● Analyze the various techniques used for clock synchronization and mutual
exclusion

Hardware/Software Required:
Python 3.6

Theory:
Mutual exclusion:
● Concurrent access of processes to a shared resource or data is executed in a
mutually exclusive manner.
● Only one process is allowed to execute the critical section (CS) at any given
time.
● In a distributed system, shared variables (semaphores) or a local kernel
cannot be used to implement mutual exclusion.
● Two basic approaches for distributed mutual exclusion:
1. Token-based approach
2. Non-token-based approach

Token-based approach:

● A unique token is shared among the sites.


● A site is allowed to enter its CS if it possesses the token.
● Mutual exclusion is ensured because the token is unique

Non-token-based approach:

● Two or more successive rounds of messages are exchanged among the sites
to determine which site will enter the CS next.

The Ricart–Agrawala algorithm is an algorithm for mutual exclusion in a distributed
system proposed by Glenn Ricart and Ashok Agrawala. This algorithm is an
extension and optimization of Lamport's non-token-based distributed mutual
exclusion algorithm. Like Lamport's algorithm, it also follows a permission-based
approach to ensure mutual exclusion.
In this algorithm:
● Two types of messages (REQUEST and REPLY) are used, and communication
channels are assumed to follow FIFO order.
● A site sends a REQUEST message to all other sites to get their permission to
enter the critical section.
● A site sends a REPLY message to another site to give its permission to enter
the critical section.
● A timestamp is given to each critical section request using Lamport's logical
clock.
● Timestamps are used to determine the priority of critical section requests: a
smaller timestamp gets higher priority than a larger timestamp. Critical
section requests are always executed in the order of their timestamps.

Procedure:
● To enter the critical section:
○ When a site Si wants to enter the critical section, it sends a
timestamped REQUEST message to all other sites.
○ When a site Sj receives a REQUEST message from site Si, it sends a
REPLY message to site Si if and only if
■ site Sj is neither requesting nor currently executing the critical
section, or
■ in case site Sj is requesting, the timestamp of site Si's request is
smaller than its own request.
■ Otherwise the request is deferred by site Sj (see the sketch after
this list).
● To execute the critical section:
○ Site Si enters the critical section once it has received a REPLY message
from all other sites.
● To release the critical section:
○ Upon exiting, site Si sends a REPLY message to all the deferred requests.
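
The REPLY decision above fits in a few lines of code. A minimal Java sketch, where
breaking timestamp ties by site id is an added assumption (the procedure above
compares only timestamps):

public class RicartAgrawalaRule {
    // REPLY decision at site Sj when a REQUEST from site Si arrives.
    // Timestamps are Lamport clock values.
    static boolean shouldReplyNow(boolean requesting, boolean inCriticalSection,
                                  long myTs, int myId, long reqTs, int reqId) {
        if (inCriticalSection) return false; // defer: Sj is executing the CS
        if (!requesting) return true;        // Sj neither requesting nor executing
        // both are requesting: the smaller timestamp (higher priority) wins
        return reqTs < myTs || (reqTs == myTs && reqId < myId);
    }

    public static void main(String[] args) {
        // Sj is requesting with timestamp 7; Si's REQUEST carries timestamp 5,
        // so Si has priority and receives an immediate REPLY (prints true).
        System.out.println(shouldReplyNow(true, false, 7, 2, 5, 1));
    }
}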

Additional Learning:
Non-token-based algorithms use timestamps to order requests for the critical
section, whereas sequence numbers are used in token-based algorithms. Each
request for the critical section contains a sequence number. This sequence number
is used to distinguish old and current requests.
Output Analysis:
We have implemented the non-token-based Ricart–Agrawala algorithm for
distributed mutual exclusion. Here, REQUEST and REPLY messages are used to allow
sites to enter critical sections.

Conclusion and Discussion:


In a distributed environment there is no centralized control over the distributed
resources. In token-based schemes, the machine which wants to acquire a resource
must hold a token, and if the token is lost it must be regenerated using an election
algorithm. Here we have studied the working of the Ricart–Agrawala algorithm,
which requires the invocation of 2(N - 1) messages per critical section execution.
These 2(N - 1) messages involve (N - 1) REQUEST messages and (N - 1) REPLY
messages.

Program:
n = int(input())
cp = list(map(int, input().split()))
critical = 0
d = {}  # d[site] = [replies received, queue of deferred sites]
for i in range(n):
    d[i] = [0, []]
i = 0
while i < len(cp):
    # sites later in cp are still requesting, so their replies are deferred
    replies = n - len(cp[:i+1])
    waiting = cp[i+1:].copy()
    d[cp[i]] = [replies, waiting]
    for j in range(n):
        if j != cp[i]:
            print("P"+str(cp[i])+" sent request message to "+"P"+str(j))
    print()
    for j in range(n):
        if j != cp[i] and j not in cp[:i+1]:
            print("P"+str(cp[i])+" got reply message from "+"P"+str(j))
    print()
    print()
    i += 1
print()
for i in range(len(cp)):
    critical = cp[i]
    l = d[critical].copy()
    l1 = l[1].copy()
    print("Queue of P"+str(critical)+":", end="")
    print(l1)
    print()
    print("P"+str(critical)+" enters critical section")
    print(".")
    print(".")
    print("P"+str(critical)+" leaves critical section")
    print()
    # on release, reply to every deferred request in the queue
    for j in range(len(l1)):
        temp = l1[j]
        queue = d[temp].copy()
        replies = queue[0]
        queue[0] = replies + 1
        d[temp] = queue
        print("P"+str(critical)+" replies to "+"P"+str(temp))
        print()
    print()
    d[critical] = [0, []]
Output:
Experiment 6
Aim:
Program to Simulate Load Balancing Algorithm using Java

Objectives:
Understand the different approaches to load balancing in a distributed system.

Outcomes:
From this experiment, the student will be able to learn
● Analyze the various techniques used for load balancing in distributed system

Hardware/Software Required:
JDK 1.6

Theory:

Load balancing is the way of distributing load units (jobs or tasks) across a set of
processors which are connected to a network and which may be distributed across
the globe. The excess load, or remaining unexecuted load, from a processor is
migrated to other processors which have load below the threshold load. The
threshold load is the amount of load beyond which no further load should be
assigned to a processor. In a system with multiple nodes there is a very high chance
that some nodes will be idle while others will be overloaded. So the processors in a
system can be classified according to their present load as heavily loaded processors
(enough jobs are waiting for execution), lightly loaded processors (fewer jobs are
waiting) and idle processors (have no job to execute). By a load balancing strategy it
is possible to make every processor equally busy and to finish the work at
approximately the same time. A load balancing operation consists of three rules:
the location rule, the distribution rule and the selection rule.

Benefits of load balancing


a) Load balancing improves the performance of each node and hence the overall
system performance
b) Load balancing reduces the job idle time
c) Small jobs do not suffer from long starvation
d) Maximum utilization of resources
e) Response time becomes shorter
f) Higher throughput
g) Higher reliability
h) Low cost but high gain
i) Extensibility and incremental growth

Procedure:
routine Load_balance(n, p)
// We have a list of n nodes, initialized to 0, which is returned at the end of the algorithm.
// Round Robin is used to balance the load, with a time quantum of 1 process.
● Create a list of n nodes with each node having 0 processes allocated currently.
● Consider p processes, and assign j <- 0.
● While p is not 0, do:
    add a process to the j-th node (1 process = one Round Robin time quantum)
    j <- (j + 1) % n
    decrement p
● Return the list.

Main routine:
● User inputs the n nodes and p processes.
● Call routine Load_balance(n, p) to get the balanced list.
● MENU:
    add a new node: call routine Load_balance(n + 1, p)
    remove a node: call routine Load_balance(n - 1, p)
    add a new process: call routine Load_balance(n, p + 1)
    remove a process: call routine Load_balance(n, p - 1)
● QUIT
● Display the returned list.

Additional Learning:
The load-balancing algorithms make transfer decisions using the information about
the current system state. The distributed monitor executed by every processor in
the system maintains its local state and broadcasts this information to all of the
remote processors.

Output Analysis:
We have implemented a Round Robin load balancing algorithm where nodes are
added and removed. This improves the performance of each node and reduces job
idle time

Conclusion and Discussion:


In computing load balancing is a technique which improves the workload
distribution through multiple resources, like computers, clusters, servers, and disks.
Thus we have studied and implemented of load balancing technique to optimize the
use of resources available, maximize throughput, minimize response time, and
avoid overload of any single resource.

Program:
import java.util.*;

public class LoadBalance {

    static void printLoad(int servers, int processes) {
        int each = processes / servers;
        int extra = processes % servers;
        int i = 0;
        // the first 'extra' servers carry one extra process each
        for (i = 0; i < extra; i++) {
            System.out.println("Server " + (i + 1) + " has " + (each + 1) + " Processes");
        }
        for (; i < servers; i++) {
            System.out.println("Server " + (i + 1) + " has " + each + " Processes");
        }
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.print("Enter the number of Servers: ");
        int servers = sc.nextInt();
        System.out.print("Enter the number of Processes: ");
        int processes = sc.nextInt();
        while (true) {
            printLoad(servers, processes);
            System.out.println("1.Add Servers 2.Remove Servers 3.Add Processes 4.Remove Processes 5.Exit");
            switch (sc.nextInt()) {
                case 1:
                    System.out.println("How many more servers to add ? ");
                    servers += sc.nextInt();
                    break;
                case 2:
                    System.out.println("How many servers to remove ? ");
                    servers -= sc.nextInt();
                    break;
                case 3:
                    System.out.println("How many more Processes to add ? ");
                    processes += sc.nextInt();
                    break;
                case 4:
                    System.out.println("How many Processes to remove ? ");
                    processes -= sc.nextInt();
                    break;
                case 5:
                    return;
            }
        }
    }
}
Output:
Experiment 7
Aim:
Program to implement Group Communication.

Objectives:
Understand the different approaches to achieving group communication in a
distributed system.

Outcomes:
From this experiment, the student will be able to learn
● Analyze the various techniques used for group communication in distributed
environment

Hardware/Software Required:
Python 3.6

Theory:

Group Communication

Remote procedure calls assume the existence of two parties: a client and a server.
This, as well as the socket-based communication we looked at earlier, is an
example of point-to-point, or unicast, communication. Sometimes, however, we
want one-to-many, or group, communication.

Groups are generally dynamic (Figure 1). They may be created and destroyed.
Processes may join or leave groups and processes may belong to multiple groups.
An analogy to group communication is the concept of a mailing list. A sender sends
a message to one party (the mailing list) and multiple users (members of the list)
receive the message. Groups allow processes to deal with collections of processes
as one abstraction. Ideally, a process should only send a message to a group and
need not know or care who its members are.
A group is an operating system abstraction for a collective of related processes. A
set of cooperative processes may, for example, form a group to provide an
extendable, efficient, available and reliable service. The group abstraction allows
member processes to perform
computation on different hosts while providing support for communication and
synchronization between them.

The term multicast means the use of a single communication primitive to send a
message to a specific set of processes rather than using a collection of individual
point to point message primitives. This is in contrast with the term broadcast
which means the message is addressed to every host or process.
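
As a concrete illustration of a multicast primitive (distinct from the chat program
listed later, which simulates group communication over TCP connections), a minimal
Java sketch using java.net.MulticastSocket follows; the group address 230.0.0.1 and
port 4446 are arbitrary choices:

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastSketch {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("230.0.0.1");
        MulticastSocket socket = new MulticastSocket(4446);
        socket.joinGroup(group); // become a member of the group

        // one send reaches every member of the group, including this process
        byte[] out = "hello group".getBytes();
        socket.send(new DatagramPacket(out, out.length, group, 4446));

        byte[] in = new byte[256];
        DatagramPacket packet = new DatagramPacket(in, in.length);
        socket.receive(packet); // blocks until a group message arrives
        System.out.println(new String(packet.getData(), 0, packet.getLength()));

        socket.leaveGroup(group);
        socket.close();
    }
}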

A consensus protocol allows a group of participating processes to reach a
common decision, based on their initial inputs, despite failures.

A reliable multicast protocol allows a group of processes to agree on a set of
messages received by the group. Each message should be received by all members
of the group or by none. The order of these messages may be important for some
applications. A reliable multicast protocol is not concerned with message ordering,
only message delivery guarantees. Ordered delivery protocols can be
implemented on top of a reliable multicast service.

Multicast algorithms can be built on top of lower-level communication primitives
such as point-to-point sends and receives or perhaps by availing of specific network
mechanisms designed for this purpose.

The management of a group needs an efficient and reliable multicast
communication mechanism to allow clients to obtain services from the group and to
ensure consistency among servers in the presence of failures. Consider the
following two scenarios:

A client wishes to obtain a service which can be performed by any member of the
group without affecting the state of the service.
A client wishes to obtain a service which must be performed by each member of the
group.
In the first case, the client can accept a response to its multicast from any member
of the group as long as at least one responds. The communication system need
only guarantee delivery of the multicast to a non-faulty process of the group on a
best-effort basis. In the second case, the all-or-none atomic delivery requirement
means that the multicast must be buffered until it is committed and subsequently
delivered to the application process, and so incurs additional latency.
Failure may occur during a multicast at the recipient processes, the communication
links or the originating process.

Failures at the recipient processes and on the communication links can be detected
by the originating process using standard time-out mechanisms or message
acknowledgements. The multicast can be aborted by the originator, or the service
group membership may be dynamically adjusted to exclude the failed processes
and the multicast can be continued.

If the originator fails during the multicast, there are two possible outcomes. Either
the message has not arrived at any destination or it has arrived at some. In the first
case, no process can be aware of the originator's intention and so the multicast
must be aborted. In the second case, it may be possible to complete the multicast
by selecting one of the recipients as the new originator. The recipients would have
to buffer messages until safe for delivery in case they were called on for this role.

A reliable multicast protocol imposes no restriction on the order in which messages
are delivered to group processes. Given that multicasts may be in progress by a
number of originators simultaneously, the messages may arrive at different
processes in a group in different orders. Also, a single originator may have a
number of simultaneous multicasts in progress or may have issued a sequence of
multicast messages whose ordering we might like preserved at the recipients.
Ideally, multicast messages should be delivered instantaneously in the real-time
order they were sent, but this is unrealistic as there is no global time and message
transmission has a possibly significant and variable latency.

A number of possible scenarios are given below which may require different levels
of ordering semantics. G and s represent groups and message sources. s may be
inside or outside a group. Note that group membership may overlap with other
groups, that is, processes may be members of more than one group.
Additional Learning:
Communication between two processes in a distributed system is required to
exchange various data, such as code or a file, between the processes. When one
source process tries to communicate with multiple processes at once, it is called
Group Communication. A group is a collection of interconnected processes treated
as a single abstraction.

Output Analysis:
We have implemented group communication in which a message sent by one
member is delivered to all the other members of the group.

Conclusion and Discussion:


Group communication can be implemented in several ways. Hardware support for
multicasting allows the software to request the hardware to join a multicast group.
Messages sent to the multicast address will be received by all network cards
listening on that group(s). Another implementation option is to simulate
multicasting completely in software. The sending process can know all the members
of the group and send the same message to each group member Alternatively,
some process on some computer can be designated as a group coordinator: a
central point for group membership information. The sender will send one message
to the group coordinator, which then iterates over each group member and sends
the message to each member.

Program:

server.py

import socket
import select
import sys
from _thread import *

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

if len(sys.argv) != 3:
    print("Correct usage: script, IP address, port number")
    exit()

IP_address = str(sys.argv[1])
Port = int(sys.argv[2])
server.bind((IP_address, Port))
server.listen(100)

list_of_clients = []

def clientthread(conn, addr):
    conn.send("Welcome to this chatroom!".encode())
    while True:
        try:
            message = conn.recv(2048).decode()
            if message:
                print("<" + addr[0] + "> " + message)
                message_to_send = "<" + addr[0] + "> " + message
                broadcast(message_to_send, conn)
            else:
                # an empty read means the peer closed the connection
                remove(conn)
        except:
            continue

def broadcast(message, connection):
    # send the message to every connected client except the sender
    for clients in list_of_clients:
        if clients != connection:
            try:
                clients.send(message.encode())
            except:
                clients.close()
                remove(clients)

def remove(connection):
    if connection in list_of_clients:
        list_of_clients.remove(connection)

while True:
    conn, addr = server.accept()
    list_of_clients.append(conn)
    print(addr[0] + " connected")
    start_new_thread(clientthread, (conn, addr))

conn.close()
server.close()

client.py
# Python program to implement client side of chat room.
import socket
import select
import sys

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

if len(sys.argv) != 3:
    print("Correct usage: script, IP address, port number")
    exit()

IP_address = str(sys.argv[1])
Port = int(sys.argv[2])
server.connect((IP_address, Port))

while True:

    # maintains a list of possible input streams
    sockets_list = [sys.stdin, server]

    read_sockets, write_socket, error_socket = select.select(sockets_list, [], [])

    for socks in read_sockets:
        if socks == server:
            # a message arrived from the server: display it
            message = socks.recv(2048).decode()
            print(message)
        else:
            # the user typed a line: send it and echo it locally
            message = sys.stdin.readline()
            server.send(message.encode())
            sys.stdout.write("<You>")
            sys.stdout.write(message)
            sys.stdout.flush()

server.close()
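
To try the chat room, start the server with a listening address and port, then attach
any number of clients; the address and port below are examples:

python3 server.py 127.0.0.1 8081
python3 client.py 127.0.0.1 8081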
Output:
Experiment 8
Aim:
Program to implement Deadlock management in distributed system.

Objectives:
● To equip students with skills to analyze and design distributed applications
● To study deadlock situation in operating system.
● To understand the Banker's algorithm for deadlock avoidance and detection.
● Construct a resource allocation graph for a deadlock condition and verify
using the simulator.

Outcomes:
From this experiment, the student will be able to learn
● Demonstrate the concepts of Consistency and Replication Management

Hardware/Software Required:
Python 3.6

Theory:

A deadlock is a condition in a system where a set of processes (or threads) have
requests for resources that can never be satisfied. Essentially, a process cannot
proceed because it needs to obtain a resource held by another process; but it itself
is holding a resource that the other process needs. There are four conditions to be
met for a deadlock to occur in a system:

1. Mutual exclusion: A resource can be held by at most one process.

2. Hold and wait: Processes that already hold resources can wait for another
resource.
3. Non-preemption: A resource, once granted, cannot be taken away.
4. Circular wait: Two or more processes are waiting for resources held by one of
the other processes.
The Banker's algorithm is a resource allocation and deadlock avoidance algorithm,
developed by Edsger Dijkstra and used in distributed systems. It tests for safety by
simulating the allocation of predetermined maximum possible amounts of all
resources, and then makes an "s-state" check to test for possible deadlock
conditions for all other pending activities, before deciding whether allocation should
be allowed to continue. The Banker's algorithm is run by the operating system
whenever a process requests resources. The algorithm avoids deadlock by denying
or postponing the request if it determines that accepting the request could put the
system in an unsafe state. When a new process enters a system, it must declare the
maximum number of instances of each resource type that it may ever claim;
clearly, that number may not exceed the total number of resources in the system.
Also, when a process gets all its requested resources it must return them in a finite
amount of time.

Procedure:
The Banker’s Algorithm is as follows:
STEP 1: initialize
    Work := Available;
    for i = 1,2,...,n
        Finish[i] = false
STEP 2: find i such that both
    a. Finish[i] is false
    b. Need_i <= Work
    if no such i, goto STEP 4
STEP 3:
    Work := Work + Allocation_i
    Finish[i] = true
    goto STEP 2
STEP 4:
    if Finish[i] = true for all i, the system is in a safe state
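
The safety test above translates almost line-for-line into code. A minimal Java
sketch, where the Need/Allocation matrices and the Available vector are hard-coded
illustrative assumptions:

public class SafetyCheck {
    public static void main(String[] args) {
        // Illustrative assumptions: 2 processes, 2 resource types.
        int[][] need  = {{1, 1}, {0, 2}};  // Need_i = Max_i - Allocation_i
        int[][] alloc = {{1, 0}, {2, 1}};  // Allocation_i
        int[]   work  = {1, 2};            // STEP 1: Work := Available
        boolean[] finish = new boolean[need.length];

        boolean progress = true;
        while (progress) {
            progress = false;
            // STEP 2: find i with Finish[i] false and Need_i <= Work
            for (int i = 0; i < need.length; i++) {
                if (!finish[i] && leq(need[i], work)) {
                    // STEP 3: Work := Work + Allocation_i; Finish[i] := true
                    for (int j = 0; j < work.length; j++)
                        work[j] += alloc[i][j];
                    finish[i] = true;
                    progress = true;
                }
            }
        }
        boolean safe = true; // STEP 4: safe iff Finish[i] is true for all i
        for (boolean f : finish) safe &= f;
        System.out.println(safe ? "safe state" : "unsafe state");
    }

    static boolean leq(int[] a, int[] b) { // componentwise a <= b
        for (int i = 0; i < a.length; i++)
            if (a[i] > b[i]) return false;
        return true;
    }
}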
Procedure:
1. Enter the number of processes with their needs.
2. Find out whether the available resources are sufficient for the required
resources.
3. If they are, the system is in a safe state; otherwise it is in an unsafe state.

Additional Learning:
Deadlock is a state of a database system having two or more transactions when
each transaction is waiting for a data item that is being locked by some other
transaction. A deadlock can be indicated by a cycle in the wait-for-graph.

Output Analysis:
We have implemented the Banker's algorithm for avoiding process deadlocks. This
algorithm tests for safety by simulating the allocation of the predetermined
maximum possible amounts of all resources, and then makes an "s-state" check to
test for possible deadlock conditions, before deciding whether allocation should be
allowed to continue.

Conclusion and Discussion:


● Visualize and investigate deadlocks in a distributed system.
● Apply various techniques for avoiding, preventing or resolving process
deadlocks, which will allow the system to run as efficiently as possible.

Program:
def main():
    processes = int(input("number of processes : "))
    resources = int(input("number of resources : "))
    max_resources = [int(i) for i in input("maximum resources : ").split()]

    print("\n-- allocated resources for each process --")
    currently_allocated = [[int(i) for i in input(
        f"process {j + 1} : ").split()] for j in range(processes)]

    print("\n-- maximum resources for each process --")
    max_need = [[int(i) for i in input(
        f"process {j + 1} : ").split()] for j in range(processes)]

    allocated = [0] * resources
    for i in range(processes):
        for j in range(resources):
            allocated[j] += currently_allocated[i][j]
    print(f"\ntotal allocated resources : {allocated}")

    available = [max_resources[i] - allocated[i] for i in range(resources)]
    print(f"total available resources : {available}\n")

    running = [True] * processes
    count = processes
    safe = False
    while count != 0:
        safe = False
        for i in range(processes):
            if running[i]:
                executing = True
                for j in range(resources):
                    # a process can run only if its remaining need fits in available
                    if max_need[i][j] - currently_allocated[i][j] > available[j]:
                        executing = False
                        break
                if executing:
                    print(f"process {i + 1} is executing")
                    running[i] = False
                    count -= 1
                    safe = True
                    # a finished process releases everything it held
                    for j in range(resources):
                        available[j] += currently_allocated[i][j]
                    break
        if not safe:
            print("the processes are in an unsafe state.")
            break

    if safe:
        print(f"the process is in a safe state.\navailable resources : {available}\n")


if __name__ == '__main__':
    main()
Output:
Experiment 9
Aim:
Program to implement Name Resolution using Java.

Objectives:
● To understand the basic terminologies of naming system in Distributed
Environment.
● To understand the mechanism of name resolution.

Outcomes:
From this experiment, the student will be able to learn
● Apply the knowledge of Distributed File Systems to analyze various file
systems like NFS and AFS, and gain experience in building large-scale
distributed applications.

Hardware/Software Required:
JDK 1.6

Theory:

The naming facility of a distributed operating system enables users and programs to
assign character-string names to objects and subsequently use these names to refer
to those objects. The locating facility, which is an integral part of the naming
facility, maps an object's name to the object's location in a distributed system. The
naming and locating facilities jointly form a naming system that provides the users
with an abstraction of an object that hides the details of how and where an object is
actually located in the network. It provides a further level of abstraction when
dealing with object replicas. Given an object name, it returns a set of the locations
of the object's replicas.
The naming system plays a very important role in achieving the goal of
● location transparency,
● facilitating transparent migration and replication of objects,
● and object sharing.

The figure above shows a simple naming model based on these two types of names.
In this naming model, a human-oriented name is first mapped (translated) to a
system-oriented name that is then mapped to the physical locations of the
corresponding object's replicas.

Name resolution is the process of looking up a name. It uses a closure mechanism:
knowing where and how to start name resolution.

Name resolution can be iterative or recursive. Recursive name resolution puts a
higher performance demand on each name server:

● too high for global-layer name servers.

Advantages of recursive name resolution:

● Caching is more effective
● Communication costs may be reduced

(Figures, omitted here, illustrate the principle of iterative name resolution, the
principle of recursive name resolution, and the comparison between recursive and
iterative name resolution with respect to communication cost.)
Procedure:
1. When a DNS name resolution request is forwarded to a DNS server, the DNS
server examines its local DNS cache for the IP address.
2. If the IP address is not in the DNS server's cache, it checks its Hosts file
(the Hosts file is rarely used, however).
3. If the DNS server is not authoritative and is configured for forwarding, the DNS
server forwards the request to a higher-level DNS server.
4. If the DNS server can't forward the request, or if forwarding fails, the DNS
server uses its Root Hints file (also known as Cache DNS). The Root Hints file
lists the 13 root DNS servers.
5. The root DNS server responds with the address of a com, edu, net or other
DNS server type (depending on the request).
6. The DNS server forwards the request to the high-level DNS server, which can
respond with a variety of IP addresses.
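
Step 6 implies that a single name may resolve to several IP addresses. In Java, the
full answer set can be inspected with InetAddress.getAllByName; the host name
below is a placeholder:

import java.net.InetAddress;

public class AllRecords {
    public static void main(String[] args) throws Exception {
        // prints every address record returned for the name
        for (InetAddress a : InetAddress.getAllByName("www.example.com"))
            System.out.println(a.getHostAddress());
    }
}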

Additional Learning:
Name resolution is the process of determining the actual entity that a name refers to.
In distributed settings, the naming system is often provided by a number of sites. To
operate on an entity, we need to access it at an access point. An entity can offer
more than one access point.
Output Analysis:
We have implemented a program for DNS name resolution. Here, requests are sent
to the DNS servers, and the DNS server examines its local DNS cache for the IP
address.

Conclusion and Discussion:


In a distributed system, the user-defined name of an object is mapped to a system
name which is stored on a name server. Thus we have studied different techniques of
name resolution and are able to distinguish among them.

Program:
import java.io.*;
import java.net.*;
import java.util.*;

class NameRes {
public static void main(String args[]) {
Scanner sc = new Scanner(System.in);
String url;
try {
System.out.println("Enter Host Name:");
url = sc.next();
InetAddress ip = InetAddress.getByName(url);
System.out.println("IP Adress:" + ip.getHostAddress());
} catch (Exception e) {
System.out.println(e);
}
}
}
Output:
Experiment 10
Aim:
Case study of Hadoop file system.

Objectives:
To understand the issues and the approaches for designing a Hadoop file system

Outcomes:
Students will learn to
● Apply the knowledge of Distributed File System to analyze various file
systems like NFS, AFS and the experience in building large-scale distributed
applications.

Hardware/Software Required:
Microsoft Office, Internet

Theory:
A Distributed File System (DFS) enables programs to store and access remote files
exactly as they do local ones, allowing users to access files from any computer on a
network. A file system provides a service for clients. The server interface is the
normal set of file operations: create, read, etc.

Hadoop Distributed File System (HDFS):

It is an open-source distributed file system, modeled on Google's GFS and developed
at Yahoo. HDFS is designed to run over commodity hardware. It has many
similarities with existing distributed file systems. The significant difference of HDFS
from other distributed file systems is that HDFS is highly fault-tolerant. HDFS
provides high-throughput access to application data and is especially designed for
applications that have large data sets.

Goals:

● As HDFS is designed for batch processing rather than interactive use by
users, the emphasis is on high throughput of data access rather than low
latency of data access.
● HDFS has to provide high aggregate data bandwidth and it has to scale to
hundreds of nodes in a single cluster.
● It should support tens of millions of files in a single instance.
● Detection of faults, and quick and automatic recovery from them, is a core
architectural goal of HDFS.

Features: Highly fault-tolerant, High throughput, Suitable for applications with large
data sets, streaming access to file system data, Can be built out of commodity
hardware.

MapReduce File System:

MapReduce is a programming model. Users specify the computation in terms of a
map and a reduce function; the underlying runtime system automatically parallelizes
the computation across large-scale clusters of machines, and also handles machine
failures, efficient communications, and performance issues.

MapReduce is simplified processing for large data sets (peta- and exabytes). Data is
written once and read many times, which allows for parallelism without mutexes.
Map and Reduce are the main operations, with simple code. There are other
supporting operations such as combine and partition.

All of the map tasks should be completed before the reduce operation starts. Map
and reduce operations are typically performed by the same physical processor. The
numbers of map tasks and reduce tasks are configurable. Operations are provisioned
near the data. The runtime takes care of splitting and moving data for operations. A
special distributed file system is used.
(Figures illustrating the MAP and Reduce operations are omitted.)
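
As a concrete illustration of the model, a minimal word-count sketch against the
Hadoop MapReduce API (org.apache.hadoop.mapreduce); the Job wiring, input/output
formats and the main driver are omitted, so this is a sketch rather than a complete
job:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: emit (word, 1) for every word in an input line.
class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
        for (String tok : value.toString().split("\\s+")) {
            word.set(tok);
            ctx.write(word, ONE); // shuffled and grouped by word by the runtime
        }
    }
}

// Reduce: sum the counts for each word; runs only after all maps finish.
class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) sum += v.get();
        ctx.write(key, new IntWritable(sum));
    }
}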
Additional Learning:
The Hadoop Distributed File System (HDFS) is the primary data storage system used
by Hadoop applications. HDFS employs a NameNode and DataNode architecture to
implement a distributed file system that provides high-performance access to data
across highly scalable Hadoop clusters.

Conclusion and Discussion:

Distributed file management is one of the major issues in distributed systems, as
large numbers of resources are available in them. From this experiment, we
understood the concept of different file systems along with their features.
Experiment 11
Aim:
Case Study of Google File System

Objectives:
To understand the approaches for designing the Google File System and to
understand its architecture.

Outcomes:
Students will learn to
● Apply the knowledge of Distributed File System to analyse various file systems and
experience in building large-scale distributed applications.

Hardware/Software Required:
Microsoft Office, Internet

Theory:
The Google File System, developed in the late 1990s, uses thousands of storage
systems built from inexpensive commodity components to provide petabytes of
storage to a large user community with diverse needs.
Some of the most important aspects of this analysis reflected in the GFS design are:
● Scalability and reliability are critical features of the system; they must be
considered from the beginning, rather than at later design stages.
● The vast majority of files range in size from a few GB to hundreds of TB.
● The most common operation is to append to an existing file; random write
operations to a file are extremely infrequent.
● Sequential read operations are the norm.
● Users process the data in bulk and are less concerned with the response
time.
● To simplify the system implementation, the consistency model should be
relaxed without placing an additional burden on the application developers.
As a result of this analysis several design decisions were made:
1. Segment a file into large chunks.
2. Implement an atomic file append operation allowing multiple applications
operating concurrently to append to the same file.
3. Build the cluster around a high-bandwidth rather than low-latency
interconnection network. Separate the flow of control from the data flow; schedule
the high-bandwidth data flow by pipelining the data transfer over TCP connections
to reduce the response time. Exploit network topology by sending data to the
closest node in the network.
4. Eliminate caching at the client site; caching increases the overhead for
maintaining consistency among cashed copies at multiple client sites and it is not
likely to improve performance.
5. Ensure consistency by channeling critical file operations through a master
controlling the entire system.
6. Minimize master’s involvement in file access operations to avoid hot-spot
contention and to ensure scalability.
7. Support efficient checkpointing and fast recovery mechanisms.
8. Support efficient garbage collection mechanisms.

GFS files are collections of fixed-size segments called chunks; at the time of file
creation each chunk is assigned a unique chunk handle. A chunk consists of 64 KB
blocks and each block has a 32-bit checksum. Chunks are stored on Linux file
systems and are replicated on multiple sites; a user may change the number of
replicas from the standard value of three to any desired value. The chunk size is
64 MB; this choice is motivated by the desire to optimize the performance for large
files and to reduce the amount of metadata maintained by the system.

Figure 11.1 Architecture of Google File System


The architecture of a GFS cluster is illustrated in Figure 11.1. To access a file, an
application first contacts the master, which responds with the chunk handle and the
location of the chunk. Then the application communicates directly with the chunk
server to carry out the desired file operation. The consistency model is very
effective and scalable. Operations, such as file creation, are atomic and are handled
by the master. To ensure scalability, the master has minimal involvement in file
mutations, operations such as write or append which occur frequently. In such
cases the master grants a lease for a particular chunk to one of the chunk servers,
called the primary; then, the primary creates a serial order for the updates of that
chunk. When the data of a write operation straddles a chunk boundary, two
operations are carried out, one for each chunk.

The following steps of a write request illustrate the process which buffers data and
decouples the control flow from the data flow for efficiency:
1. The client contacts the master which assigns a lease to one of the chunk servers
for the particular chunk, if no lease for that chunk exists; then, the master replies
with the Ids of the primary and the secondary chunk servers holding replicas of the
chunk. The client caches this information.
2. The client sends the data to all chunk servers holding replicas of the chunk; each
one of the chunk servers stores the data in an internal LRU buffer and then sends
an acknowledgement to the client.
3. The client sends the write request to the primary chunk server once it has
received the acknowledgements from all chunk servers holding replicas of the
chunk. The primary chunk server identifies mutations by consecutive sequence
numbers.
4. The primary chunk server sends the write requests to all secondaries.
5. Each secondary chunk server applies the mutations in the order of the sequence
numbers and then sends an acknowledgement to the primary chunk server.
6. Finally, after receiving the acknowledgements from all secondaries, the primary
informs the client.
The system supports an efficient checkpointing procedure based on copy-on-write to
construct system snapshots.

Additional Learning:
The Google File System (GFS) is a scalable distributed file system (DFS) created by Google
Inc. and developed to accommodate Google’s expanding data processing requirements. GFS
provides fault tolerance, reliability, scalability, availability and performance to large networks and
connected nodes. GFS is made up of several storage systems built from low-cost commodity
hardware components. It is optimized to accommodate Google's different data use and storage
needs, such as its search engine, which generates huge amounts of data that must be stored.
Conclusion:
From this experiment, we have studied the concept of GFS which demonstrates the
qualities essential for supporting large scale data processing workloads on
commodity hardware.

Experiment 12: Suzuki–Kasami Algorithm


Objective: To implement the Suzuki–Kasami algorithm

Theory:

If a site wants to enter the CS and it does not have the token, it broadcasts a
REQUEST message for the token to all other sites. A site which possesses the
token sends it to the requesting site upon the receipt of its REQUEST message. If
a site receives a REQUEST message when it is executing the CS, it sends the
token only after it has completed the execution of the CS.

This algorithm must efficiently address the following two design issues:

(1) How to distinguish an outdated REQUEST message from a current REQUEST
message:

Due to variable message delays, a site may receive a token request message
after the corresponding request has been satisfied. If a site cannot determine
whether the request corresponding to a token request has been satisfied, it may
dispatch the token to a site that does not need it. This does not violate
correctness; however, it may seriously degrade the performance.

(2) How to determine which site has an outstanding request for the CS: After a
site has finished the execution of the CS, it must determine what sites have an
outstanding request for the CS so that the token can be dispatched to one of
them.

Implementation:

Requesting the critical section


(a) If requesting site Si does not have the token, then it increments its sequence
number, RNi [i], and sends a REQUEST(i, sn) message to all other sites. (‘sn’ is
the updated value of RNi [i].)

(b) When a site Sj receives this message, it sets RNj [i] to max(RNj [i], sn). If Sj
has the idle token, then it sends the token to Si if RNj [i]=LN[i]+1.

Executing the critical section

(c) Site Si executes the CS after it has received the token.

Releasing the critical section Having finished the execution of the CS, site Si
takes the following actions:

(d) It sets LN[i] element of the token array equal to RNi [i].

(e) For every site Sj whose id is not in the token queue, it appends its id to the
token queue if RNi [j]=LN[j]+1.

(f) If the token queue is nonempty after the above update, Si deletes the top site
id from the token queue and sends the token to the site indicated by the id.
Program :

#include<iostream>
#include<cstdio>
#include<queue>
#include<string>
#include<algorithm>

#define PR 6

using namespace std;

int token_holder;

//Description of the token
class Token
{
public:
    int id;              //Id of the site having the token
    queue<int> token_q;  //Token queue
    int ln[PR];          //Token array of sequence numbers

    void init()          //Initializing the token
    {
        id = 0;
        for(int i = 0; i < PR; i++)
            ln[i] = 0;
    }
} token;

//Description of each site
class Site
{
public:
    int rn[PR];    //Site's array of sequence numbers
    bool exec;     //Whether the site is executing the CS
    bool isreq;    //Whether the site is requesting
    bool hastoken; //Whether the site has the token

    void init()    //Initializing the site
    {
        exec = 0;
        isreq = 0;
        hastoken = 0;
        for(int i = 0; i < PR; i++)
            rn[i] = 0;
    }

    void req(int pid, int seqno);
} site[PR];

//For a site to handle the request of site pid with sequence number seqno
void Site::req(int pid, int seqno)
{
    rn[pid] = max(rn[pid], seqno);
    if(hastoken == 1)
    {
        //Idle token: hand it over if the request is current
        if(exec == 0 && token.ln[pid] + 1 == rn[pid])
        {
            hastoken = 0;
            token_holder = pid;
        }
        //Busy token: queue the current request
        else if(token.ln[pid] + 1 == rn[pid])
        {
            token.token_q.push(pid);
        }
    }
}

//Initialize the token and all sites
void initialize()
{
    token.init();
    for(int i = 0; i < PR; i++)
        site[i].init();
}

//For a site with id pid to request the C.S.
void request(int pid)
{
    int i, seqno;
    seqno = ++site[pid].rn[pid];

    //Checking whether it has already requested
    if(site[pid].isreq == 1 || site[pid].exec == 1)
    {
        printf("SITE %d is already requesting\n", pid);
        return;
    }
    site[pid].isreq = 1;

    //Checking if it already has the token
    if(token_holder == pid)
    {
        site[pid].isreq = 0;
        site[pid].exec = 1;
        printf("SITE %d already has the token and it enters the critical section\n", pid);
        return;
    }

    //Sending the request to all other sites
    for(i = 0; i < PR; i++)
        if(i != pid)
            site[i].req(pid, seqno);

    //Checking if it has got the token
    if(token_holder == pid)
    {
        site[pid].hastoken = 1;
        site[pid].exec = 1;
        site[pid].isreq = 0;
        printf("SITE %d gets the token and it enters the critical section\n", pid);
    }
    else
        printf("SITE %d is currently executing the critical section\nSite %d has placed its request\n", token_holder, pid);
}

//For a site with id pid to release the C.S.
void release(int pid)
{
    if(site[pid].exec != 1)
    {
        printf("SITE %d is not currently executing the critical section\n", pid);
        return;
    }
    int siteid;
    token.ln[pid] = site[pid].rn[pid];
    site[pid].exec = 0;
    printf("SITE %d releases the critical section\n", pid);

    //Checking for deferred requests in the token queue
    //and passing the token if the queue is non-empty
    if(!token.token_q.empty())
    {
        siteid = token.token_q.front();
        token.token_q.pop();
        token.id = siteid;
        site[pid].hastoken = 0;
        token_holder = siteid;
        site[siteid].hastoken = 1;
        site[siteid].exec = 1;
        site[siteid].isreq = 0;
        printf("SITE %d gets the token and it enters the critical section\n", siteid);
        return;
    }
    printf("SITE %d still has the token\n", pid);
}

//Printing the state of the system
void print()
{
    int i, j, k = 0;
    printf("TOKEN STATE :\n");
    printf("TOKEN HOLDER :%d\n", token_holder);
    printf("TOKEN QUEUE: ");
    if(token.token_q.empty())
    {
        printf("EMPTY");
        j = 0;
    }
    else
    {
        j = token.token_q.size();
    }
    while(k < j)
    {
        i = token.token_q.front();
        token.token_q.pop();
        token.token_q.push(i); //rotate so the queue is left unchanged
        printf("%d ", i);
        k++;
    }
    printf("\n");
    printf("TOKEN SEQ NO ARRAY: ");
    for(i = 0; i < PR; i++)
        printf("%d ", token.ln[i]);
    printf("\n");
    printf("SITES SEQ NO ARRAY: \n");
    for(i = 0; i < PR; i++)
    {
        printf(" S%d :", i);
        for(j = 0; j < PR; j++)
            printf(" %d ", site[i].rn[j]);
        printf("\n");
    }
}

int main()
{
    int pid;
    string str;
    initialize();
    token_holder = 0;
    site[0].hastoken = 1;
    cout << "THE NO OF SITES IN THE DISTRIBUTED SYSTEM ARE " << PR << endl;
    cout << "INITIAL STATE\n" << endl;
    print();
    printf("\n");
    //Event loop: REQ <pid> requests the CS, REL <pid> releases it, OVER quits
    while(str != "OVER")
    {
        cin >> str;
        if(str == "REQ")
        {
            cin >> pid;
            cout << "EVENT :" << str << " " << pid << endl << endl;
            request(pid);
            print();
            printf("\n");
        }
        else if(str == "REL")
        {
            cin >> pid;
            cout << "EVENT :" << str << " " << pid << endl << endl;
            release(pid);
            print();
            printf("\n");
        }
    }
    return 0;
}
