Unit 1 Notes Introduction To Computing With Distributed Data Processing UDS23201j
QoS parameters
Distributed systems must offer the following QoS parameters:
Performance
Reliability
Availability
Security
Differences between centralized and distributed systems
Distributed search is a search engine model in which the tasks of Web crawling,
indexing and query processing are distributed among multiple computers and
networks.
Early search engines were supported by a single supercomputer, but in recent
years they have moved to a distributed model.
Google search relies upon thousands of computers crawling the Web from multiple
locations all over the world.
In Google's distributed search system, each computer involved in indexing crawls
and reviews a portion of the Web, taking a URL and following every link available
from it.
The computer gathers the crawled results from the URLs and sends that
information back to a centralized server in compressed format.
The centralized server then coordinates that information in a database, along with
information from other computers involved in indexing.
When a user types a query into the search field, Google's domain name server (
DNS ) software relays the query to the most logical cluster of computers, based on
factors such as its proximity to the user or how busy it is.
At the recipient cluster, the Web server software distributes the query to hundreds
or thousands of computers to search simultaneously.
Hundreds of computers scan the database index to find all relevant records.
The index server compiles the results, the document server pulls together the titles
and summaries and the page builder creates the search result pages.
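As a rough illustration of this scatter-gather style of query processing, the sketch below fans a search term out to several index shards in parallel and merges the partial results. The class and method names (IndexShard, searchAll) are illustrative, not Google's actual design.

```java
import java.util.*;
import java.util.concurrent.*;

public class ScatterGather {
    // Each shard indexes a disjoint portion of the crawled documents.
    record IndexShard(Map<String, List<String>> index) {
        List<String> search(String term) {
            return index.getOrDefault(term, List.of());
        }
    }

    // Fan the query out to every shard in parallel, then merge the results.
    static List<String> searchAll(List<IndexShard> shards, String term)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(shards.size());
        List<Future<List<String>>> futures = new ArrayList<>();
        for (IndexShard s : shards)
            futures.add(pool.submit(() -> s.search(term)));
        List<String> merged = new ArrayList<>();
        for (Future<List<String>> f : futures) merged.addAll(f.get());
        pool.shutdown();
        return merged;
    }

    public static void main(String[] args) throws Exception {
        IndexShard s1 = new IndexShard(Map.of("java", List.of("docA")));
        IndexShard s2 = new IndexShard(Map.of("java", List.of("docB")));
        System.out.println(searchAll(List.of(s1, s2), "java")); // [docA, docB]
    }
}
```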
The products that are connected to the pervasive network are easily available.
The main goal of pervasive computing is to create an environment where the
connectivity of devices is embedded in such a way that the connectivity is
unobtrusive and always available.
Internet can be seen as a large distributed system.
Internet Service Providers (ISPs) are companies that provide broadband links and
other types of connection to individual users and small organizations, enabling
them to access services anywhere in the Internet as well as providing local services
such as email and web hosting.
The intranets are linked together by backbones – network links with a high
transmission capacity, employing satellite connections, fibre optic cables and
other high-bandwidth circuits.
The Internet is now built into numerous small devices, from laptops to watches.
These devices must have a high degree of portability. Mobile computing supports
this.
Ubiquitous computing:
They could connect tens/hundreds of computing devices in every room/person,
becoming "invisible" and part of the environment – WANs, LANs, PANs –
networking in small spaces.
They could connect even the non-mobile devices and offer various forms of
communication.
They could support all forms of devices that are connected to the internet,
from laptops to watches.

Mobile computing:
They could connect a few devices for every person, small enough to carry
around – devices connected to cellular networks or WLANs.
They are actually a subset of ubiquitous computing.
They support only conventional, discrete computers and devices.
Transparency Description
Relocation Hide that a resource may be moved to another location while in use
e) Security
Every system must provide strong security measures. Distributed systems often
deal with sensitive information, so secure mechanisms must be in place. The
following attacks are common in distributed systems:
Denial of service attacks: When the requested service is not available at the time
of request, it is a Denial of Service (DoS) attack. This attack is carried out by
bombarding the service with a large number of useless requests so that legitimate
users are unable to use it.
Security of mobile code: Mobile code needs to be handled with care since they
are transmitted in an open environment.
f) Scalability
Distributed systems must be scalable as the number of users increases.
Masking failures: Some failures that have been detected can be hidden or
made less severe. Examples of hiding failures include retransmission of
messages and maintaining a redundant copy of same data.
Tolerating failures: All the failures cannot be handled. Some failures must be
accepted by the user. Example of this is waiting for a video file to be streamed
in.
Recovery from failures: Recovery involves the design of software so that the
state of permanent data can be recovered or rolled back after a server has
crashed.
Redundancy: Services can be made to tolerate failures by the use of redundant
components. Examples for this includes: maintenance of two different paths
between same source and destination.
Availability is also a major concern in fault tolerance. The availability of a
system is a measure of the proportion of time that it is available for use. It is a
useful performance metric.
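As a simple illustration, availability can be estimated from the mean time between failures (MTBF) and the mean time to repair (MTTR); the figures used below are made up.

```java
public class Availability {
    // Availability as the proportion of time the system is usable.
    static double availability(double mtbfHours, double mttrHours) {
        return mtbfHours / (mtbfHours + mttrHours);
    }

    public static void main(String[] args) {
        // A server that fails every 1000 hours and takes 1 hour to repair:
        double a = availability(1000.0, 1.0);
        System.out.printf("availability = %.4f%n", a); // ~0.9990
    }
}
```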
h) Quality of Service:
The distributed systems must conform to the following non-functional requirements:
Reliability:A reliable distributed system is designed to be as fault tolerant as
possible. Reliability is the quality of a measurement indicating the degree to
which the measure is consistent.
Security: Security is the degree of resistance to, or protection from, harm. It
applies to any vulnerable and valuable asset, such as a person, dwelling,
community, nation, or organization. Distributed systems spread across wide
geographic locations. So security is a major concern.
Adaptability: The frequent changing of configurations and resource
availability demands that the distributed system be highly adaptable.
The Web gradually grew worldwide encompassing sites other than high energy
physics, but popularity really increased when graphical user interfaces became
available, notably Mosaic (Vetter et al., 1994).
A document was fetched from a server, transferred to a client, and presented on the
screen.
Since 1994, Web developments are primarily initiated and controlled by the World
Wide Web Consortium, which is a collaboration between CERN and M.I.T.
HTML
One of its most powerful features is the ability to express parts of a document
in the form of a script.
XML can be used to define arbitrary structures. In other words, it provides the
means to define different document
types.
The pages of a website can usually be accessed from a simple Uniform Resource
Locator (URL) otherwise called as web address.
Web services
Web services allow exchange of information between applications on the web.
Using web services, applications can easily interact with each other.
The web services are offered using the concept of Utility Computing.
REVIEW QUESTIONS
PART-A
1. Define centralized computing.
The process of computation was started from working on a single processor. This uni-
processor computing can be termed as centralized computing.
9. What is MMOG?
These games simulate real-life as much as possible. As such it is necessary to
constantly evolve the game world using a set of laws. These laws are a complex set
of rules that the game engine applies with every clock tick.
PART – B
1. What are the issues in distributed systems?
2. Explain web search as a distributed system.
3. Brief about MMOGs.
4. Describe financial trading.
5. Explain the trends in distributed systems.
6. Elucidate the resource sharing nature of distributed system.
7. Brief about the challenges in distributed system.
8. Describe WWW.
9. Explain HTML.
2. COMMUNICATION IN DISTRIBUTED SYSTEMS
The system models are of the following types:
Physical models: It is the most explicit way in which to describe a system in
terms of hardware composition.
Architectural models: They describe a system in terms of the computational and
communication tasks performed by its computational elements.
Fundamental models: They examine individual aspects of a distributed system.
They are again classified based on some parameters as follows:
Interaction models: This deals with the structure and sequencing of the
communication between the elements of the system
Failure models: This deals with the ways in which a system may fail to
operate correctly
Security models: This deals with the security measures implemented in the
system against attempts to interfere with its correct operation or to steal its
data.
There are many physical models available from the primitive baseline model to the
complex models that could handle cloud environments.
Baseline physical model:
This is a primitive model that describes the hardware or software components
located at networked computers.
The communication and coordination of their activities is done by passing
messages.
It has very limited internet connectivity and can support a small range of
services such as shared local printers and file servers.
The nodes were desktop computers and therefore relatively static, discrete and
independent.
Mobile computing has led to physical models with movable nodes. Service
discovery is of primary concern in these systems.
The cloud computing and cluster architectures have led to pools of nodes that
collectively provide a given service. This has resulted in an enormous number of
systems providing the service.
This describes the components and their interrelationships to ensure the system is
reliable, manageable, adaptable and cost-effective. The different models are evaluated based
on the following criteria:
architectural elements
architectural patterns
middleware platforms
Communication paradigms
There are three major modes of communication in distributed systems:
Interprocess communication:
This is a low-level support for communication between processes in distributed
systems, including message-passing primitives. They have direct access to the API
offered by Internet protocols and support multicast communication.
Remote invocation:
It is used in a distributed system and it is the calling of a remote operation,
procedure or method.
a) Request-reply protocols:
This is a message exchange pattern in which a requestor sends a request message
to a replier system which receives and processes the request, ultimately returning a
message in response. This is a simple, but powerful messaging pattern which
allows two applications to have a two-way conversation with one another over a
channel. This pattern is especially common in client–server architectures.
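A minimal in-process sketch of this pattern, using two queues in place of a network channel; the Replier and doOperation names are illustrative, not a standard API.

```java
import java.util.concurrent.*;

public class RequestReply {
    // The replier receives one request, processes it, and returns a reply.
    static class Replier implements Runnable {
        final BlockingQueue<String> requests;
        final BlockingQueue<String> replies;
        Replier(BlockingQueue<String> req, BlockingQueue<String> rep) {
            requests = req; replies = rep;
        }
        public void run() {
            try {
                String request = requests.take();   // receive the request
                replies.put("echo: " + request);    // process it and reply
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

    // doOperation sends a request and blocks until the reply arrives.
    static String doOperation(BlockingQueue<String> req,
                              BlockingQueue<String> rep, String msg)
            throws InterruptedException {
        req.put(msg);
        return rep.take();
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> req = new LinkedBlockingQueue<>();
        BlockingQueue<String> rep = new LinkedBlockingQueue<>();
        new Thread(new Replier(req, rep)).start();
        System.out.println(doOperation(req, rep, "hello")); // echo: hello
    }
}
```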
b) Remote procedure calls: Remote Procedure Call (RPC) is a protocol that one
program can use to request a service from a program located in another computer
in a network without having to understand network details.
c) Remote method invocation: RMI (Remote Method Invocation) is a way that a
programmer can write object-oriented programming in which objects on different
computers can interact in a distributed network. This has the following two key
features:
Space uncoupling: Senders do not need to know who they are sending to
Time uncoupling: Senders and receivers do not need to exist at the same time
Indirect communication
Key techniques for indirect communication include:
a) Group communication: Group communication is concerned with the delivery
of messages to a set of recipients and hence is a multiparty communication
paradigm supporting one-to-many communication. A group identifier uniquely
identifies the group. The recipients join the group and receive the messages.
Senders send messages to the group based on the group identifier and hence do
not need to know the recipients of the message.
b) Publish-subscribe systems: This is a messaging pattern where senders
of messages, called publishers, do not program the messages to be sent directly
to specific receivers, called subscribers. Instead, published messages are
characterized into classes, without knowledge of what, if any, subscribers there
may be. Similarly, subscribers express interest in one or more classes, and only
receive messages that are of interest, without knowledge of what, if any,
publishers there are. They offer one-to-many style of communication.
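A toy publish-subscribe bus along these lines might look as follows; the PubSub class and its method names are illustrative, not a standard API.

```java
import java.util.*;
import java.util.function.Consumer;

public class PubSub {
    // topic (class of message) -> handlers of the subscribers interested in it
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // One-to-many delivery: every subscriber of the topic gets the message;
    // the publisher never names a specific receiver.
    void publish(String topic, String message) {
        for (Consumer<String> h : subscribers.getOrDefault(topic, List.of()))
            h.accept(message);
    }

    public static void main(String[] args) {
        PubSub bus = new PubSub();
        bus.subscribe("weather", m -> System.out.println("A got: " + m));
        bus.subscribe("weather", m -> System.out.println("B got: " + m));
        bus.publish("weather", "rain expected");    // both A and B receive it
        bus.publish("sports", "no one hears this"); // no subscribers: dropped
    }
}
```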
c) Message queues: They offer a point-to-point service whereby producer
processes can send messages to a specified queue and consumer processes can
receive messages from the queue or be notified of the arrival of new messages
in the queue. Queues therefore offer an indirection between the producer and
consumer processes.
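Java's BlockingQueue can illustrate this point-to-point, store-until-retrieved behaviour:

```java
import java.util.concurrent.*;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // A bounded queue: an explicit limit on outstanding messages.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                queue.put("order-1"); // blocks if the queue is full
                queue.put("order-2");
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                System.out.println(queue.take()); // blocks until a message arrives
                System.out.println(queue.take());
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // The producer and consumer never interact directly: the queue is
        // the indirection between them.
        producer.start(); consumer.start();
        producer.join(); consumer.join();
    }
}
```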
d) Tuple spaces: The processes can place arbitrary items of structured data,
called tuples, in a persistent tuple space and other processes can either read or
remove such tuples from the tuple space by specifying patterns of interest. This
style of programming is known as generative communication.
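A minimal sketch of a tuple space, matching tuples only on their first field for simplicity; a real Linda-style space would block on read/take and support richer patterns.

```java
import java.util.*;

public class TupleSpace {
    private final List<Object[]> tuples = new ArrayList<>();

    // put: place a tuple of structured data in the shared space.
    synchronized void put(Object... tuple) { tuples.add(tuple); }

    // read: return a matching tuple without removing it (null if none).
    synchronized Object[] read(Object firstField) {
        for (Object[] t : tuples)
            if (t.length > 0 && t[0].equals(firstField)) return t;
        return null;
    }

    // take: remove and return a matching tuple (null if none).
    synchronized Object[] take(Object firstField) {
        Iterator<Object[]> it = tuples.iterator();
        while (it.hasNext()) {
            Object[] t = it.next();
            if (t.length > 0 && t[0].equals(firstField)) { it.remove(); return t; }
        }
        return null;
    }

    public static void main(String[] args) {
        TupleSpace space = new TupleSpace();
        space.put("temperature", 21.5);
        System.out.println(Arrays.toString(space.read("temperature"))); // still stored
        System.out.println(Arrays.toString(space.take("temperature"))); // removed
        System.out.println(space.read("temperature")); // null: tuple is gone
    }
}
```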
e) Distributed shared memory: In computer architecture, distributed shared
memory (DSM) is a form of memory architecture where the (physically
separate) memories can be addressed as one (logically shared) address space.
Here, the term shared does not mean that there is a single centralized memory,
but shared essentially means that the address space is shared (the same physical
address on two processors refers to the same location in memory).
Roles and responsibilities
There are two types of architectures based on roles and responsibilities:
1) Client-server:
The system is structured as a set of processes, called servers, that offer services
to the users, called clients.
The client-server model is usually based on a simple request/reply protocol,
implemented with send/receive primitives or using remote procedure calls
(RPC) or remote method invocation (RMI).
The client sends a request (invocation) message to the server asking for some
service; the server does the work and returns a result (e.g. the data requested,
or an error code if the work could not be performed).
A server can itself request services from other servers; thus, in this new
relation, the server itself acts like a client.
Some problems with client-server are centralization of the service resulting in
poor scaling, the limited capacity of the server, and the limited bandwidth of
the network connecting the server.
Placement
Mapping of objects or services to the underlying physical distributed infrastructure is
considered here. The following points must be taken into account while placing the
objects: reliability, communication pattern, load, QoS, etc. The following are the
placement strategies:
Mapping of services to multiple servers
The servers may partition the set of objects on which the service is based and
distribute those objects between themselves, or they may maintain replicated
copies of them on several hosts.
The Web provides a common example of partitioned data in which each web
server manages its own set of resources. A user can employ a browser to access
a resource at any one of the servers.
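One simple way to partition objects between servers is by hashing each object's key; the server names below are illustrative.

```java
public class Partitioner {
    // Map an object key to exactly one server by hashing.
    static String serverFor(String objectKey, String[] servers) {
        // floorMod keeps the index non-negative even for negative hash codes.
        int i = Math.floorMod(objectKey.hashCode(), servers.length);
        return servers[i];
    }

    public static void main(String[] args) {
        String[] servers = {"serverA", "serverB", "serverC"};
        // The same key always maps to the same server, so clients can locate it.
        System.out.println(serverFor("/index.html", servers));
        System.out.println(serverFor("/images/logo.png", servers));
    }
}
```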
Caching
A cache is a local store of recently used data objects that is closer to one client
or a particular set of clients than the objects themselves.
When a new object is received from a server it is added to the local cache store,
replacing some existing objects if necessary.
When an object is needed by a client process, the caching service first checks
the cache and supplies the object from there if an up-to-date copy is available.
If not, an up-to-date copy is fetched.
Caches may be co-located with each client or they may be located in a proxy
server that can be shared by several clients.
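The check-cache-then-fetch behaviour described above can be sketched as follows; freshness checking is omitted for brevity, and the class and field names are illustrative.

```java
import java.util.*;
import java.util.function.Function;

public class Cache {
    private final Map<String, String> store = new HashMap<>();
    int serverFetches = 0; // counts trips to the server, for illustration

    String get(String key, Function<String, String> fetchFromServer) {
        String cached = store.get(key);
        if (cached != null) return cached;          // cache hit: no network trip
        String fresh = fetchFromServer.apply(key);  // cache miss: ask the server
        serverFetches++;
        store.put(key, fresh);
        return fresh;
    }

    public static void main(String[] args) {
        Cache cache = new Cache();
        Function<String, String> server = k -> "contents of " + k;
        cache.get("page.html", server); // miss: fetched from the server
        cache.get("page.html", server); // hit: served locally
        System.out.println("server fetches: " + cache.serverFetches); // 1
    }
}
```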
Mobile code:
Applets are a well-known and widely used example of mobile code.
The user running a browser selects a link to an applet whose code is stored on a
web server; the code is downloaded to the browser and runs.
They provide interactive response.
Mobile agents:
A mobile agent is a running program (including both code and data) that
travels from one computer to another in a network carrying out a task on
someone's behalf, such as collecting information, and eventually returning
with the results.
A mobile agent is a complete program, code + data, that can work (relatively)
independently.
The mobile agent can invoke local resources/data.
A mobile agent may make many invocations to local resources at each site it
visits.
Typical tasks of a mobile agent include: collecting information, installing or
maintaining software on computers, comparing prices from various vendors by
visiting their sites, etc.
Mobile agents (like mobile code) are a potential security threat to the resources
in computers that they visit.
Layering:
In a layered approach, a complex system is partitioned into a number of
layers, with a given layer making use of the services offered by the layer
below.
Tiered architecture
Tiering is a technique to organize functionality of a given layer and place
this functionality into appropriate servers and, as a secondary
consideration, on to physical nodes.
The two tiered architecture refers to client/server architectures in which the
user interface (presentation layer) runs on the client and the database
(data layer) is stored on the server. The actual application logic can run on
either the client or the
server.
Middleware is a general term for software that serves to "glue together" separate,
often complex and already existing, programs.
Categories of middleware
The following are some of the categories of middleware:
Distributed Objects
The term distributed objects usually refers to software modules that are designed to
work together, but reside either in multiple computers connected via a network or
in different processes inside the same computer. One object sends a message to
another object in a remote machine or process to perform some task. The results
are sent back to the calling object.
Distributed Components
A component is a reusable program building block that can be combined with
other components in the same or other computers in a distributed network to form
an application. Examples: a single button in a graphical user interface, a small
interest calculator, an interface to a database manager. Components can be
deployed on different servers in a network and communicate with each other for
needed services. A component runs within a context called a container.
Examples of containers: pages on a Web site, Web browsers, and word processors.
Publish subscriber model
Publish–subscribe is a messaging pattern where senders of messages, called
publishers, do not program the messages to be sent directly to specific receivers,
called subscribers. Instead, published messages are characterized into classes,
without knowledge of what, if any, subscribers there may be. Similarly,
subscribers express interest in one or more classes, and only receive messages that
are of interest, without knowledge of what, if any, publishers there are.
Message Queues
Message queues provide an asynchronous communications protocol, meaning that
the sender and receiver of the message do not need to interact with the message
queue at the same time. Messages placed onto the queue are stored until the
recipient retrieves them. Message queues have implicit or explicit limits on the size
of data that may be transmitted in a single message and the number of messages
that may remain outstanding on the queue.
Web services
A web service is a collection of open protocols and standards used for exchanging
data between applications or systems. Software applications written in various
programming languages and running on various platforms can use web services to
exchange data over computer networks like the Internet in a manner similar to
inter-process communication on a single computer. This interoperability (e.g.,
between Java and Python, or Windows and Linux applications) is due to the use of
open standards.
Peer to peer
Peer-to-peer (P2P) computing or networking is a distributed application
architecture that partitions tasks or work load between peers. Peers are equally
privileged, equipotent participants in the application. They are said to form a peer-
to-peer network of nodes.
Advantages of middleware
Real time information access among systems.
Streamlines business processes and helps raise organizational efficiency.
Maintains information integrity across multiple systems.
It covers a wide range of software systems, including distributed Objects and
components, message-oriented communication, and mobile application support.
Middleware is anything that helps developers create networked applications.
Disadvantages of middleware
Prohibitively high development costs.
There are only a few people in the marketplace with the experience to develop
and use middleware.
Fundamental models address time synchronization, message delays, failures, and
security issues. Their purpose is to make explicit all the relevant assumptions
about the systems being modelled, including the behaviour of their processes.
The two significant factors affecting interacting processes in a distributed system
are:
1) Communication performance
2) Global notion of time.
Communication Performance
The communication channel can be implemented in a variety of ways in
distributed systems: through streams or through simple message passing over a network.
The performance characteristics of a network are:
a) Latency: The delay between the start of a message's transmission from one
process and the beginning of its reception by another.
b) Bandwidth: The total amount of information that can be transmitted over the
channel in a given time. Communication channels using the same network have
to share the available bandwidth.
c) Jitter: The variation in the time taken to deliver a series of messages. It is very
relevant to multimedia data.
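As a back-of-the-envelope illustration, the time to deliver a message is roughly the latency plus the transmission time, i.e. the message size divided by the bandwidth; the figures below are made up.

```java
public class TransferTime {
    // delivery time = latency + (message size / bandwidth)
    static double seconds(double latencySec, double messageBits,
                          double bandwidthBitsPerSec) {
        return latencySec + messageBits / bandwidthBitsPerSec;
    }

    public static void main(String[] args) {
        // 1 MB message over a 100 Mbit/s link with 5 ms latency:
        double t = seconds(0.005, 8.0 * 1024 * 1024, 100e6);
        System.out.printf("%.4f s%n", t); // about 0.089 s
    }
}
```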
Event Modeling
Event ordering is of major concern in DS.
The concept of one event happening before another in a distributed system is
examined, and is shown to define a partial ordering of the events.
The execution of a system can be described in terms of events and their ordering
despite the lack of accurate clocks.
Consider a mailing list with users X, Y, Z, and A.
Due to independent message delivery, messages may be delivered in a different
order.
If messages m1, m2, m3 carry their times t1, t2, t3, then they can be displayed to
users according to their time ordering.
The mail box is:
23 Z Re: Meeting
24 X Meeting
26 Y Re: Meeting
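If each message carries its send time, the mailbox can be displayed in time order rather than arrival order, as this sketch illustrates; the timestamps and senders are illustrative.

```java
import java.util.*;

public class TimestampOrdering {
    record Mail(int timestamp, String from, String subject) {}

    // Sort arrived messages by the timestamp they carry, not arrival order.
    static List<Mail> inTimeOrder(List<Mail> arrived) {
        List<Mail> sorted = new ArrayList<>(arrived);
        sorted.sort(Comparator.comparingInt(Mail::timestamp));
        return sorted;
    }

    public static void main(String[] args) {
        // A reply may arrive before the message it answers.
        List<Mail> arrived = List.of(
            new Mail(25, "Z", "Re: Meeting"),
            new Mail(23, "X", "Meeting"),
            new Mail(24, "Y", "Re: Meeting"));
        for (Mail m : inTimeOrder(arrived))
            System.out.println(m.timestamp() + " " + m.from() + " " + m.subject());
    }
}
```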
[Figure: processes X, Y, Z and A exchanging messages m1, m2 and m3 along a
physical time axis; A receives the messages at times t1, t2 and t3, possibly out
of sender order.]
Fig 2.6: Ordering of events
2.2.3.2 Failure Model
In a DS, both processes and communication channels may fail – i.e., they may depart
from what is considered to be correct or desirable behavior. The following are the common
types of failures:
Omission Failure
Arbitrary Failure
Timing Failure
a) Omission failure
Omission failures occur due to communication link failures. They are detected through
timeouts. They are classified as:
A fixed period of time is set for each method to complete its execution.
If a method takes longer than the allowed time, a timeout has occurred.
In an asynchronous system, a timeout can indicate only that a process is
not responding.
A process crash is called fail-stop if other processes can detect with
certainty that the process has crashed.
Fail-stop behavior can be produced in a synchronous system if the
processes use timeouts to detect when other processes fail to respond and
messages are guaranteed to be delivered.
[Figure: a process sends messages over a communication channel to another
process, with an outgoing message buffer at the sender and an incoming message
buffer at the receiver.]

Class of failure   Affects    Description
Clock              Process    The process's local clock exceeds the bounds on its rate of drift from real time.
Performance        Process    The process exceeds the bounds on the interval between two steps.
Performance        Channel    A message's transmission takes longer than the stated bound.
Masking failures
Each component in a distributed system is generally constructed from a collection
of other components. It is always possible to construct reliable services from
components that exhibit failures.
A knowledge of failure characteristics of a component can enable a new service to
be designed to mask the failure of the components on which it depends.
Reliability of one-to-one communication
The term reliable communication is defined in terms of validity and integrity.
Validity: Any message in the outgoing message buffer is eventually delivered
to the incoming message buffer.
Integrity: The message received is identical to one sent, and no messages are
delivered twice.
The threats to integrity come from two independent sources:
Any protocol that retransmits messages but does not reject a message that
arrives twice.
Malicious users that may inject spurious messages, replay old messages or
tamper with messages.
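The first threat can be countered by tagging messages with sequence numbers and rejecting any number already delivered, as in this sketch:

```java
import java.util.*;

public class DuplicateFilter {
    private final Set<Long> delivered = new HashSet<>();

    // Returns true if the message is new and should go to the application.
    boolean accept(long sequenceNumber) {
        return delivered.add(sequenceNumber); // add() is false for repeats
    }

    public static void main(String[] args) {
        DuplicateFilter filter = new DuplicateFilter();
        System.out.println(filter.accept(1)); // true: first delivery
        System.out.println(filter.accept(2)); // true
        System.out.println(filter.accept(1)); // false: retransmitted duplicate
    }
}
```

A production protocol would also bound the set of remembered sequence numbers; this sketch keeps them all for simplicity.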
2.2.3.3 Security model
The security of a DS can be achieved by securing the processes and the channels
used in their interactions and by protecting the objects that they encapsulate
against unauthorized access.
Protection is described in terms of objects, although the concepts apply equally well to
resources of all types.
Protecting objects
The server should verify the identity of the principal (user) behind each operation
and check that they have sufficient access rights to perform the requested
operation on the particular object, rejecting those who do not.
A principal is an authority that manages the access rights. The principal may be a
user or a process.
Message destinations
In the Internet protocols, messages are sent to (Internet address, local port)
pairs.
A local port is a message destination within a computer, specified as an
integer.
A port has exactly one receiver but can have many senders.
Processes may use multiple ports to receive messages.
Any process that knows the number of a port can send a message to it.
Client programs refer to services by name and use a name server or binder to
translate their names into server locations at runtime.
This allows services to be relocated but not to migrate – that is, to be moved
while the system is running.
Reliability
A point-to-point message service can be described as reliable if messages are
guaranteed to be delivered.
A point-to-point message service can be described as unreliable if messages are
not guaranteed to be delivered in the face of even a single packet dropped or
lost.
Ordering
Some applications require that messages be delivered in the order in which
they were transmitted by the sender.
The delivery of messages out of sender order is regarded as a failure by such
applications.
Sockets
A socket is one endpoint of a two-way communication link between two programs
running on the network.
Users of this class refer to computers by Domain Name System (DNS) hostnames.
InetAddress aComputer = InetAddress.getByName("abc.ac.in");
In the Java and UNIX APIs, the sender specifies the destination using a socket.
o Creating a socket
Receive() returns Internet address and port of sender, along with the message.
Any application requiring messages larger than the maximum must fragment
them.
Blocking:
The send() returns after delivering the message to the UDP and IP protocols.
These protocols are now responsible for transmitting the message to its
destination.
The message is placed in a queue for the socket that is bound to the destination
port.
The receive() blocks until a datagram is received, unless a timeout has been set
on the socket.
Timeouts:
The receive() cannot wait indefinitely. This situation occurs when the sending
process may have crashed or the expected message may have been lost.
The timeouts must be much larger than the time required to transmit a
message.
The receive() does not specify an origin for messages. This can get datagrams
addressed to its socket from any origin.
The receive() returns the Internet address and local port of the sender, allowing
the recipient to check where the message came from.
Syntax:
send(packet);
receive();
2. setSoTimeout: This method allows a timeout to be set. With a
timeout set, the receive method will block for at most the time specified
and then throw an InterruptedIOException.
Syntax:
aSocket.setSoTimeout(timeoutInMilliseconds);
3. connect: This method is used for connecting to a particular remote
port and Internet address, in which case the socket is only able to
send messages to and receive messages from that address.
Syntax:
int connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen);
A datagram packet comprises: an array of bytes containing the message, the
message length, the Internet address, and the port number.
Client program
import java.net.*;
import java.io.*;
public class UDPClient
{
    public static void main(String args[])
    {
        DatagramSocket aSocket = null;
        try {
            aSocket = new DatagramSocket();        // create a socket for sending
            byte[] m = args[0].getBytes();         // message to send
            InetAddress aHost = InetAddress.getByName(args[1]); // server address
            int serverPort = 6789;
            DatagramPacket request =
                new DatagramPacket(m, m.length, aHost, serverPort);
            aSocket.send(request);                 // send the packet
            byte[] buffer = new byte[1000];
            DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
            aSocket.receive(reply);                // block until the reply arrives
            System.out.println("Reply: " + new String(reply.getData()));
        }
        catch (SocketException e) { System.out.println("Socket: " + e.getMessage()); }
        catch (IOException e) { System.out.println("IO: " + e.getMessage()); }
        finally
        {
            if (aSocket != null) aSocket.close();
        }
    }
}
Server program
import java.net.*;
import java.io.*;
public class UDPServer
{
public static void main(String args[])
{
DatagramSocket aSocket = null;
try
{
aSocket = new DatagramSocket(6789);
byte[] buffer = new byte[1000];
while(true)
{
DatagramPacket request = new DatagramPacket(buffer, buffer.length);
aSocket.receive(request);
DatagramPacket reply = new DatagramPacket(request.getData(),
request.getLength(), request.getAddress(), request.getPort());
aSocket.send(reply);
}
}
catch (SocketException e)
{
System.out.println("Socket: " + e.getMessage());
}
catch (IOException e)
{
System.out.println("IO: " + e.getMessage());
}
finally
{
if (aSocket != null) aSocket.close();}
}
}
TCP server
import java.net.*;
import java.io.*;
public class TCPServer
{
public static void main (String args[])
{
try{
int serverPort = 7896;
ServerSocket listenSocket = new ServerSocket(serverPort);
while(true) {
Socket clientSocket = listenSocket.accept();
Connection c = new Connection(clientSocket);
}
} catch(IOException e) {System.out.println("Listen :"+e.getMessage());}
}
}
class Connection extends Thread
{
DataInputStream in;
DataOutputStream out;
Socket clientSocket;
public Connection (Socket aClientSocket)
{
try {
clientSocket = aClientSocket;
in = new DataInputStream( clientSocket.getInputStream());
out =new DataOutputStream( clientSocket.getOutputStream());
this.start();
} catch(IOException e) {System.out.println("Connection:"+e.getMessage());}
}
public void run(){
try { // an echo server
String data = in.readUTF();
out.writeUTF(data);
} catch(EOFException e) {System.out.println("EOF:"+e.getMessage());
} catch(IOException e) {System.out.println("IO:"+e.getMessage());
} finally { try {clientSocket.close();}catch (IOException e){/*close failed*/}}
}
}
The following are the three common approaches to external data representation and
marshalling:
CORBA: The data representation is concerned with an external representation for
the structured and primitive types that can be passed as the arguments and results
of remote method invocations in CORBA. It can be used by a variety of
programming languages. Marshalling and unmarshalling is done by the
middleware in binary form.
Java’s object serialization: This is concerned with the flattening and external data
representation of any single object or tree of objects that may need to be
transmitted in a message or stored on a disk. It is for use only by Java. Marshalling
and unmarshalling is done by the middleware in binary form.
XML: This defines a textual format for representing structured data. It was
originally intended for documents containing textual self-describing structured
data. Marshalling and unmarshalling is done in textual form.
Marshalling in CORBA
In CORBA, the type of a data item is not transmitted along with the data. CORBA assumes
that the sender and recipient have common knowledge of the order and types of the data items.
The types of data structures and the types of basic data items are described in CORBA IDL,
which provides a notation for describing the types of the arguments and results of RMI methods.
Example:
struct student
{
    string name;
    string rollnumber;
};
To serialize an object:
(1) its class information is written out: the name and version number;
(2) the types and names of its instance variables are written out. If an instance
variable belongs to another class, then that class information must also be written
out, recursively. Each class is given a handle;
(3) the values of the instance variables are written out.
Example: Student s = new Student("Sona", "CS50");
The following are the steps to serialize objects in Java:
create an instance of ObjectOutputStream
invoke its writeObject method, passing the Student object as argument
To deserialize:
create an instance of ObjectInputStream
invoke its readObject method to reconstruct the original object
ObjectOutputStream out = new ObjectOutputStream(...);
out.writeObject(originalStudent);
ObjectInputStream in = new ObjectInputStream(...);
Student theStudent = (Student) in.readObject();
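The round trip above can be sketched as a complete runnable program. The Student class and its field values are illustrative; the streams are backed by an in-memory byte array instead of a file or socket.

```java
import java.io.*;

public class SerializeDemo
{
    // A simple class that implements Serializable so the JVM can
    // flatten its state using the default serialization mechanism.
    static class Student implements Serializable
    {
        String name;
        String rollnumber;
        Student(String name, String rollnumber)
        {
            this.name = name;
            this.rollnumber = rollnumber;
        }
    }

    public static void main(String[] args) throws Exception
    {
        Student original = new Student("Sona", "CS50");

        // Serialize: flatten the object into a byte array.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(original);
        out.close();

        // Deserialize: reconstruct an equivalent object from the bytes.
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        Student copy = (Student) in.readObject();
        System.out.println(copy.name + " " + copy.rollnumber);
    }
}
```

The same byte array could equally be written to disk or sent in a message, which is exactly how serialized objects travel in RMI.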
Reflection:
Reflection means inquiring about the properties of a class, such as the names and
types of its methods and instance variables. Reflection allows serialization and
deserialization to be performed in a generic manner, unlike CORBA, which needs IDL
specifications. For serialization, reflection is used to find out:
(1) the class name of the object to be serialized
(2) the names and types of its instance variables
(3) the values of its instance variables
For deserialization,
(1) class name in the serialized form is used to create a class
(2) it is then used to create a constructor with arguments types corresponding to those
specified in the serialized form.
(3) the new constructor is used to create a new object with instance variables whose
values are read from the serialized form
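The discovery steps (1)-(3) can be sketched with Java's reflection API. The Student class and its values are illustrative only.

```java
import java.lang.reflect.Field;

public class ReflectDemo
{
    static class Student
    {
        String name = "Sona";        // illustrative values
        String rollnumber = "CS50";
    }

    public static void main(String[] args) throws Exception
    {
        Student s = new Student();
        // (1) discover the class name of the object
        Class<?> c = s.getClass();
        System.out.println(c.getSimpleName());
        // (2) and (3) discover the names, types and values of its instance variables
        for (Field f : c.getDeclaredFields())
        {
            f.setAccessible(true);
            System.out.println(f.getName() + " : "
                    + f.getType().getSimpleName() + " = " + f.get(s));
        }
    }
}
```

A generic serializer uses exactly this information to write out the object's state without any compile-time knowledge of the Student class.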
Multicast operation is an operation that sends a single message from one process to
each of the members of a group of processes. The simplest way of multicasting, provides no
guarantees about message delivery or ordering.
The following are the characteristics of multicasting:
Fault tolerance based on replicated services:
A replicated service consists of a group of servers.
Client requests are multicast to all the members of the group, each of which
performs an identical operation.
Finding the discovery servers in spontaneous networking
Fig 2.9: Multicasting - a sender transmits to a multicast IP address, and receivers
join the group to receive the multicast message.
2.5.1 IP Multicast
IP multicast is built on top of the Internet protocol.
IP multicast allows the sender to transmit a single IP packet to a multicast group.
A multicast group is specified by a class D IP address, i.e. one whose leading four bits are 1110.
The membership of a multicast group is dynamic.
A computer is said to be in a multicast group if one or more processes have
sockets that belong to the multicast group.
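Java can check whether an address falls in the class D range (224.0.0.0 to 239.255.255.255); the addresses below are just examples.

```java
import java.net.InetAddress;

public class MulticastCheck
{
    public static void main(String[] args) throws Exception
    {
        // 224.2.2.3 begins with the bits 1110, so it is a class D (multicast) address
        System.out.println(InetAddress.getByName("224.2.2.3").isMulticastAddress());
        // An ordinary host address is not a multicast address
        System.out.println(InetAddress.getByName("192.168.1.1").isMulticastAddress());
    }
}
```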
The following details are important when using IPv4 multicasting:
Multicast IP routers
IP packets can be multicast both on local network and on the Internet.
Local multicast uses local network such as Ethernet.
To limit the distance of propagation of a multicast datagram, the sender can
specify the number of routers it is allowed to pass- called the time to live
(TTL).
Multicast address allocation
Temporary multicast groups must be created before use, and they cease to
exist when all members have left the particular multicast group.
Multicast datagrams suffer from omission failures: messages are not guaranteed to be
delivered to every group member when an omission failure occurs.
The Java API provides a datagram interface to IP multicast through the class
MulticastSocket.
The multicast datagram socket class is useful for sending and receiving IP
multicast packets.
MulticastSender.java
import java.io.*;
import java.net.*;

public class MulticastSender
{
    public static void main(String[] args)
    {
        DatagramSocket socket = null;
        DatagramPacket outPacket = null;
        byte[] outBuf;
        final int PORT = 8888;
        try
        {
            socket = new DatagramSocket();
            long counter = 0;
            String msg;
            while (true)
            {
                msg = "multicast! " + counter;
                counter++;
                outBuf = msg.getBytes();
                // Send to multicast IP address and port
                InetAddress address = InetAddress.getByName("224.2.2.3");
                outPacket = new DatagramPacket(outBuf, outBuf.length, address, PORT);
                socket.send(outPacket);
                System.out.println("Server sends : " + msg);
                try
                {
                    Thread.sleep(500);
                }
                catch (InterruptedException ie) { }
            }
        }
        catch (IOException ioe)
        {
            System.out.println(ioe);
        }
    }
}
MulticastReceiver.java
import java.io.*;
import java.net.*;

public class MulticastReceiver
{
    public static void main(String[] args)
    {
        MulticastSocket socket = null;
        DatagramPacket inPacket = null;
        byte[] inBuf = new byte[256];
        try
        {
            // Prepare to join the multicast group
            socket = new MulticastSocket(8888);
            InetAddress address = InetAddress.getByName("224.2.2.3");
            socket.joinGroup(address);
            while (true)
            {
                inPacket = new DatagramPacket(inBuf, inBuf.length);
                socket.receive(inPacket);
                String msg = new String(inBuf, 0, inPacket.getLength());
                System.out.println("From " + inPacket.getAddress() + " Msg : " + msg);
            }
        }
        catch (IOException ioe)
        {
            System.out.println(ioe);
        }
    }
}
Ordering issue:
IP packets do not necessarily arrive in the order in which they were sent. Messages
sent by two different processes will not necessarily arrive in the same order at all the members
of the group. The problem of reliability and ordering is more common in:
Fault tolerance based on replicated services
Discovering services in spontaneous networking
Better performance through replicated data
Propagation of event notifications
An overlay network is a virtual network of nodes and logical links that is built on
top of an existing network with the purpose to implement a network service that is
not available in the existing network.
The overlay network may sometimes change the properties of the underlying network.
The overlay network provides:
a service that is tailored towards the needs of a class of application or a particular
higher-level service
more efficient operation in a given networked environment
additional feature – for example, multicast or secure communication.
Advantages:
They enable new network services to be defined without requiring changes to the
underlying network.
They encourage experimentation with network services and the customization of
services to particular classes of application.
Multiple overlays can be defined and can coexist, with the end result being a more
open and extensible network architecture.
Disadvantages:
They introduce an extra level of indirection.
They add to the complexity of network services.
Skype clients make contact with a selected super node. To achieve this, each client
maintains a cache of super node identities.
At first login this cache is filled with the addresses of around seven super nodes,
and over time the client builds and maintains a much larger set (perhaps several
hundred).
Super nodes search the global index of users, which is distributed across the super
nodes.
The search is orchestrated by the client's chosen super node and involves an
expanding search of other super nodes until the specified user is found.
A user search typically takes between three and four seconds to complete for hosts
that have a global IP address.
Voice connection
Skype establishes a voice connection between the two parties using TCP for
signaling all requests and terminations and either UDP or TCP for the streaming
audio.
History is used to refer to a structure that contains a record of reply messages that
have been transmitted by the server.
Connections in HTTP
There are two types of connections in HTTP: persistent and non-persistent.
Persistent connections: The server parses a request, responds, and waits for new
requests on the same TCP connection. It takes fewer RTTs to fetch the objects, and
transfers suffer less from TCP slow start.
Non-persistent connections: The server parses a request, responds, and closes the
TCP connection. It takes 2 RTTs to fetch each object, and each object transfer
suffers from slow start.
HTTP Request Message
A request message consists of a method, a URL, the HTTP version, headers and a
message body, for example:
GET http://www.abc.ac.in/data.html HTTP/1.1
- The parameters used in a function call have to be converted because the client and
server computers use different address spaces.
- Stubs perform this conversion so that the remote server computer perceives the
RPC as a local function call.
RMI is closely related to RPC but extended into the world of distributed objects. RMI
is a true distributed computing application interface specially designed for Java, written to
provide easy access to objects existing on remote virtual machines
RMI: RMI uses an object-oriented paradigm where the user needs to know the object
and the method of the object he needs to invoke. RMI handles the complexities of
passing along the invocation from the local to the remote computer; but instead of
passing a procedural call, RMI passes a reference to the object and the method that
is being called.
RPC: RPC is not object-oriented and does not deal with objects; rather, it calls
specific subroutines that are already established. With RPC, a remote call looks
like a local call: RPC handles the complexities involved with passing the call from
the local to the remote computer.
Interfaces: Interfaces cannot be instantiated. They do not contain constructors.
They cannot have instance fields, i.e. all the fields in an interface must be
declared both static and final. An interface can contain only abstract methods.
Interfaces are implemented by a class, not extended.
Classes: Classes can be instantiated. They can have constructors. There are no
constraints over the fields. They can contain method implementations. Classes are
extended by other classes.
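A short sketch of these differences; Vehicle and Car are illustrative names.

```java
// An interface declares behaviour; a class provides the implementation.
interface Vehicle
{
    int wheels();          // implicitly public and abstract
    int MAX_SPEED = 120;   // implicitly public, static and final
}

// A class implements (not extends) the interface and can be instantiated.
class Car implements Vehicle
{
    private final String model;  // instance field: allowed in a class, not in an interface
    Car(String model) { this.model = model; }  // constructor: classes only
    public int wheels() { return 4; }
}

public class InterfaceDemo
{
    public static void main(String[] args)
    {
        Vehicle v = new Car("sedan");   // instantiate the class, not the interface
        System.out.println(v.wheels() + " " + Vehicle.MAX_SPEED);
    }
}
```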
Actions:
– Action is initiated by an object invoking a method in another object with or
without arguments.
– The receiver executes the appropriate method and then returns control to the
invoking object.
– The following are the effects of actions:
The state of the receiver may be changed.
A new object may be instantiated.
Further invocations on methods in other objects may take place.
Exceptions:
– Exceptions are run time errors.
– The programmers must write code that could handle all possible unusual or
erroneous cases.
Garbage collection:
– Memory occupied by objects that are no longer referenced should be reclaimed;
in Java this is done automatically by the garbage collector.
b) Distributed objects
Distributed object systems may adopt the client-server architecture where the
objects are managed by servers and their clients invoke their methods using
remote method invocation.
The invocation is carried out by executing a method of the object at the server
and the result is returned to the client in another message.
In this model, each process contains a collection of objects, some of which can
receive both local and remote invocations, whereas the other objects can receive
only local invocations. The core concepts of distributed object model:
Remote object references: The remote object that is to receive a remote method
invocation is specified by the invoker as a remote object reference. Remote object
references may be passed as arguments and results of remote method invocations.
Remote interfaces: The class of a remote object implements the methods of its
remote interface. Objects in other processes can invoke only the methods that
belong to its remote interface.
Dynamic invocation is used in this case which gives the client access to a generic
representation of a remote invocation, which is a part of the infrastructure for RMI.
The client will supply the remote object reference, the name of the method and the
arguments to doOperation and then wait to receive the results.
Dynamic skeletons: these allow a server to accept invocations on a remote object
whose interface was unknown at compile time.
Server programs contain the classes for the dispatchers and skeletons, together
with the implementations of the classes of all of the servants that they support.
Client programs contain the classes of the proxies for all of the remote objects
that they will invoke.
Factory methods
Factory methods are static methods that return an instance of the class.
A factory method is used to create different objects from a factory, often
referred to as items, and it encapsulates the creation code.
The term factory method is sometimes used to refer to a method that creates
servants, and a factory object is an object with factory methods.
Any remote object that needs to be able to create new remote objects on demand
for clients must provide methods in its remote interface for this purpose. Such
methods are called factory methods, although they are really just normal methods.
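The factory-method idea can be sketched as follows: a static method, rather than a constructor, decides which concrete object to create. All names here are illustrative.

```java
interface Account
{
    String kind();
}

class SavingsAccount implements Account
{
    public String kind() { return "savings"; }
}

class CurrentAccount implements Account
{
    public String kind() { return "current"; }
}

public class AccountFactory
{
    // The factory method encapsulates the creation code: callers never
    // name the concrete classes directly.
    public static Account create(String type)
    {
        if (type.equals("savings")) return new SavingsAccount();
        if (type.equals("current")) return new CurrentAccount();
        throw new IllegalArgumentException("unknown type: " + type);
    }

    public static void main(String[] args)
    {
        Account a = AccountFactory.create("savings");
        System.out.println(a.kind());
    }
}
```

In the remote case described above, such a method would live in a remote interface and create new servants on demand for clients.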
Binder
A binder in a distributed system is a separate service that maintains a table
containing mappings from textual names to remote object references.
It is used by servers to register their remote objects by name and by clients to look
them up.
Server threads
Whenever an object executes a remote invocation, that execution may lead to
further invocations of methods in other remote objects, which may take some time
to return.
To avoid the execution of one remote invocation delaying others, servers generally
allocate a separate thread for the execution of each remote invocation.
Activation of remote objects
Activation of servers to support remote objects is done by a service called Inetd.
Certain applications require objects to persist for long periods.
Keeping such objects in running servers the whole time wastes resources.
To avoid this, the servers can be started on demand as and when needed,
through Inetd.
Processes that start server processes to host remote objects are called activators.
Based on this remote objects can be described as active or passive.
An object is said to be active when it is available for invocation.
An object is called passive if it is currently inactive but can be made active.
A passive object consists of two parts:
implementation of its methods
its state in the marshalled form.
When activating a passive object, an active object is created from the
corresponding passive object by creating a new instance of its class and initializing
its instance variables from the stored state.
Passive objects can be activated on demand.
Responsibilities of an activator:
Registering passive objects that are available for activation. This is done by
mapping the names of servers and passive objects.
Starting named server processes and activating remote objects in them.
Tracking the locations of the servers for remote objects that it has already activated.
Java RMI provides the ability to make some remote objects activatable
[java.sun.com IX].
Persistent objects are managed by persistent object stores, which store their state in
a marshalled form on disk.
The objects are activated when their methods are invoked by other objects.
When invoking a method, the invoker will not know whether the object is in main
memory or on disk.
Persistent objects that are not currently needed are kept in the object store in a
passive state.
An active object can be made passive in the following ways:
in response to a request in the program that activated the objects
at the end of a transaction
when the program exits.
There are two approaches to determine whether an object is persistent or not:
The persistent object store maintains some persistent roots, and any object that
is reachable from a persistent root is defined to be persistent.
The persistent object store provides some classes; an object is persistent if it
belongs to one of their subclasses.
Object location
A remote object reference can be constructed from the Internet address and port
number of the process that created the remote object.
Some remote objects will exist in different processes, on different computers,
throughout their lifetime.
In this case, a remote object reference cannot act as an address for the objects.
So clients making invocations require both a remote object reference and an
address to which to send invocations.
A location service helps clients to locate remote objects from their remote object
references.
It acts as a database that maps remote object references to their probable current
locations.
Alternatively a cache/broadcast scheme could also be used.
Here, a member of a location service on each computer holds a small cache of
remote object reference to-location mappings.
If a remote object reference is in the cache, that address is tried for the invocation
and will fail if the object has moved.
To locate an object that has moved or whose location is not in the cache, the
system broadcasts a request.
When a live reference enters a Java virtual machine, its reference count is
incremented.
The first reference to an object sends a referenced message to the server for the
object.
As live references are found to be unreferenced in the local virtual machine, the
count is decremented.
When the last reference has been discarded, an unreferenced message is sent to the
server.
The Java distributed garbage collection algorithm can tolerate the failure of client
processes.
To achieve this, servers lease their objects to clients for a limited period of
time.
The lease period starts when the client makes its first reference invocation to the
server.
It ends either when the time has expired or when the client makes a last-reference
invocation to the server.
The information stored by the server (i.e. the lease) contains the identifier of the
client's virtual machine and the period of the lease.
Clients are responsible for requesting the server to renew their leases before they
expire.
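A lease of this kind can be sketched as a small record; the field names and renewal API here are assumptions for illustration, not the actual Java RMI or Jini classes.

```java
public class Lease
{
    private final String clientVmId;  // identifier of the client's virtual machine
    private long expiryTimeMillis;    // when the lease period ends

    public Lease(String clientVmId, long durationMillis)
    {
        this.clientVmId = clientVmId;
        this.expiryTimeMillis = System.currentTimeMillis() + durationMillis;
    }

    // The client is responsible for renewing before expiry; renewal of an
    // already expired lease fails.
    public synchronized boolean renew(long durationMillis)
    {
        if (isExpired()) return false;
        expiryTimeMillis = System.currentTimeMillis() + durationMillis;
        return true;
    }

    public synchronized boolean isExpired()
    {
        return System.currentTimeMillis() >= expiryTimeMillis;
    }

    public String getClientVmId()
    {
        return clientVmId;
    }
}
```

A server holding such leases can periodically discard the objects whose leases have expired, which is how crashed clients are tolerated.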
Leases in Jini
Jini provides a mechanism for locating services on the network that conform to a
particular (Java) interface, or that have certain attributes associated with them.
Once a service is located, the client can download an implementation of that
interface, which it then uses to communicate with the service.
The Jini distributed system includes a specification for leases that can be used in a
variety of situations when one object offers a resource to another object.
The granting of the use of a resource for a period of time is called a lease.
The object offering the resource will maintain it until the time in the lease expires.
The resource users are responsible for requesting their renewal when they expire.
The period of a lease may be negotiated between the grantor and the recipient in
Jini.
In Jini, an object representing a lease implements the Lease interface.
It contains information about the period of the lease and methods enabling the
lease to be renewed or cancelled.
Downloading of classes
In Java classes can be downloaded from one virtual machine to another.
If the recipient does not already possess the class of an object passed by value, its
code is downloaded automatically.
If the recipient of a remote object reference does not already possess the class for
a proxy, its code is downloaded automatically.
The advantages are:
1. There is no need for every user to keep the same set of classes in their
working environment.
2. Both client and server programs can make transparent use of instances of
new classes whenever they are added.
RMIregistry
The RMIregistry is the binder for Java RMI.
It maintains a table mapping URL-style names to references to remote objects.
The Naming class is used to access the RMIregistry.
The names used by the Naming class are URL-style names of the general form:
//computerName:port/objectName
where computerName and port refer to the location of the RMIregistry.
The LocateRegistry class, which is in java.rmi.registry, is used to obtain a
reference to the registry.
Its getRegistry() method returns an object of type Registry representing the
remote binding service:
public static Registry getRegistry() throws RemoteException
The following table gives a description of the methods in the Naming class:
void rebind(String name, Remote obj) - used by a server to register the identifier
of a remote object by name.
void bind(String name, Remote obj) - also used to register a remote object by
name, but if the name is already bound to a remote object reference, an exception
is thrown.
void unbind(String name, Remote obj) - removes a binding.
Remote lookup(String name) - used by clients to look up a remote object by name.
String[] list() - returns an array of Strings containing the names bound in the
registry.
Server Program
import java.util.Vector;
public class ShapeListServant implements ShapeList
{
private Vector theList; // contains the list of Shapes
private int version;
public ShapeListServant(){...}
public Shape newShape(GraphicalObject g)
{
version++;
Shape s = new ShapeServant( g, version);
theList.addElement(s);
return s;
}
public Vector allShapes(){...}
public int getVersion() { ... }
}
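Before a client can reach a servant like the one above, the server must export it and register it with the binder. The following self-contained sketch uses a simple Echo interface for illustration; a real server would export and bind ShapeListServant in the same way, and the registry would normally run as the separate rmiregistry tool rather than in-process.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface: only these methods are remotely invocable.
interface Echo extends Remote
{
    String echo(String msg) throws RemoteException;
}

// The servant implementing the remote interface.
class EchoServant implements Echo
{
    public String echo(String msg) { return msg; }
}

public class EchoServer
{
    public static void main(String[] args) throws Exception
    {
        // Create a registry in this process (normally rmiregistry runs separately).
        Registry registry = LocateRegistry.createRegistry(1099);
        // Export the servant so it can receive remote invocations,
        // then register its stub with the binder by name.
        Echo stub = (Echo) UnicastRemoteObject.exportObject(new EchoServant(), 0);
        registry.rebind("Echo", stub);
        // A client would now do: Echo e = (Echo) registry.lookup("Echo");
        System.out.println("Echo server bound");
    }
}
```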
Client Program
import java.rmi.*;
import java.rmi.server.*;
import java.util.Vector;
public class ShapeListClient
{
public static void main(String args[])
{
System.setSecurityManager(new RMISecurityManager());
ShapeList aShapeList = null;
try
{
aShapeList = (ShapeList) Naming.lookup("//project.ShapeList");
Vector sList = aShapeList.allShapes();
}
catch(RemoteException e)
{
System.out.println(e.getMessage());
}
catch(Exception e)
{
System.out.println("Client: " + e.getMessage());
}
}
}
Callbacks
Callback refers to a server's action of notifying clients about an event. Callbacks can
be implemented in RMI as follows:
The client creates a remote object that implements an interface that contains a
method for the server to call.
The server provides an operation allowing interested clients to inform it of the
remote object references of their callback objects.
Whenever an event of interest occurs, the server calls the interested clients.
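The register-and-notify pattern in those steps can be sketched locally as below. In real RMI, the CallbackClient interface would extend Remote, the client's callback object would be exported, and the reference passed to the server would be a remote object reference; all names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Interface the client's callback object implements so the server can call it.
interface CallbackClient
{
    void update(String event);
}

// The server records the callback objects that clients register and
// invokes them whenever an event of interest occurs.
class CallbackServer
{
    private final List<CallbackClient> interested = new ArrayList<>();

    public void register(CallbackClient c)
    {
        interested.add(c);
    }

    public void eventOccurred(String event)
    {
        for (CallbackClient c : interested)
        {
            c.update(event);
        }
    }
}

public class CallbackDemo
{
    public static void main(String[] args)
    {
        CallbackServer server = new CallbackServer();
        // The client registers its callback object with the server...
        server.register(event -> System.out.println("client saw: " + event));
        // ...and the server calls back when an event of interest occurs.
        server.eventOccurred("shape added");
    }
}
```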
Callbacks avoid the disadvantages of clients polling the server:
with polling, the performance of the server may be degraded by the constant requests,
and clients cannot notify users of updates in a timely manner.
Programming Model
A group has group members (processes) who can join or leave the group. This is done
through multicast communication. There are three communication modes:
Unicast: Process to process communication
Broadcast: Communication to all the process
Multicast: Communication only to a group of processes
Process Groups and Objects Groups
An object group is a collection of objects that process the same set of invocations
concurrently.
Client objects invoke operations on a single, local object, which acts as a proxy for
the group.
The proxy uses a group communication system to send the invocations to the
members of the object group.
Object parameters and results are marshalled as in RMI and the associated calls are
dispatched automatically to the right destination objects/methods.
Types of group communication
Closed and open groups: A group is said to be closed if only members of the
group may multicast to it. A process in a closed group delivers to itself any
message that it multicasts to the group.
Failure detection: The service monitors the group members for crashes and
unreachability. The detector marks processes as Suspected or Unsuspected; a
process is excluded from membership if it crashes or becomes unreachable.
Channel-based:
The publishers publish events to named channels and subscribers then
subscribe to one of these named channels to receive all events sent
to that channel.
Topic-based (subject-based):
Each notification from the publisher is expressed in terms of a number
of fields, with one field denoting the topic.
Subscriptions are then defined in terms of the topic of interest.
Content-based:
Content-based approaches are a generalization of topic-based
approaches allowing the expression of subscriptions over a range of
fields in an event notification.
Type-based:
This is intrinsically linked with object-based approaches where objects
have a specified type.
In type-based approaches, subscriptions are defined in terms of types of
events and matching is defined in terms of types or subtypes of the
given filter.
Implementation Issues
The publish-subscribe model also demands security, scalability, failure handling,
concurrency and quality of service, which makes implementations complex.
The top layer implements matching, i.e. it ensures that events match a given
subscription.
Flooding: notifications are sent to all nodes in the network, with matching
carried out at the subscriber's end.
Filtering: filtering is applied in the network of brokers, so notifications are
propagated only towards nodes with matching subscribers.
Implementation Issues
The important implementation decision for message queuing systems is the choice
between centralized and distributed implementations.
WebSphere MQ
This is an IBM-developed middleware based on the concept of message queues.
Queues in WebSphere MQ are managed by queue managers, which host and
manage queues and allow applications to access them through the Message
Queue Interface (MQI).
Header
The header contains all the information needed to identify and route the
message: destination, priority, expiration date, a message ID and a
timestamp.
These fields are either created by the underlying system or set by constructor
methods.
The properties can be used to express additional context associated with the
message, including a location field.
Body
The body is opaque and untouched by the system.
The body can be any one of a text message, a byte stream, a serialized
Java object, a stream of primitive Java values or a more structured set of
name/value pairs.
A message producer is an object used to publish messages under a
particular topic or to send messages to a queue.
A message consumer is an object used to subscribe to messages concerned
with a given topic or to receive messages from a queue.
The consumer side is more complicated:
Consumers may apply message filters.
A consumer program can either block using a receive operation or establish a
message listener object that is notified when a message arrives.
Message passing: Processes communicate with other processes. They can be
protected from one another by having private address spaces. This technique can
be used with heterogeneous computers. Synchronization between processes is
through message passing primitives. Processes communicating via message
passing must execute at the same time. Efficiency: all remote data accesses are
explicit, so the programmer is always aware of whether a particular operation is
in-process or involves the expense of communication.
Distributed shared memory (DSM): Here, a process does not have a private
address space, so one process can alter the execution of another. This cannot be
used with heterogeneous computers. Synchronization is through locks and
semaphores. Processes communicating through DSM may execute with
non-overlapping lifetimes. Efficiency: any particular read or update may or may
not involve communication by the underlying runtime support.
2.13.1 Tuple space communication
Processes communicate indirectly by placing tuples in a tuple space, from which other
processes can read or remove them.
The programming model
Processes communicate through a shared collection of tuples.
Any combination of types of tuples may exist in the same tuple space.
Processes share data by accessing the same tuple space.
Tuples are stored in the tuple space using the write operation and are read or
extracted from it using the read or take operation.
No direct access to tuples in tuple space is allowed; processes have to replace
tuples in the tuple space instead of modifying them. Thus, tuples are immutable.
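A toy in-memory sketch of these three operations follows. It is single-process only; tuples are modelled as string arrays, and matching uses a template whose null fields act as wildcards - all of these are simplifying assumptions, and a real implementation would block until a matching tuple appears rather than return null.

```java
import java.util.ArrayList;
import java.util.List;

public class TupleSpace
{
    private final List<String[]> tuples = new ArrayList<>();

    // write adds a tuple; a defensive copy keeps stored tuples immutable.
    public synchronized void write(String... tuple)
    {
        tuples.add(tuple.clone());
    }

    // read returns a copy of a matching tuple without removing it.
    public synchronized String[] read(String... template)
    {
        String[] t = find(template);
        return t == null ? null : t.clone();
    }

    // take removes and returns a matching tuple.
    public synchronized String[] take(String... template)
    {
        String[] t = find(template);
        if (t != null) tuples.remove(t);
        return t;
    }

    private String[] find(String[] template)
    {
        for (String[] t : tuples)
        {
            if (matches(t, template)) return t;
        }
        return null;
    }

    // A tuple matches a template of the same length whose non-null
    // fields are all equal to the corresponding tuple fields.
    private boolean matches(String[] tuple, String[] template)
    {
        if (tuple.length != template.length) return false;
        for (int i = 0; i < tuple.length; i++)
        {
            if (template[i] != null && !template[i].equals(tuple[i])) return false;
        }
        return true;
    }
}
```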
Properties of tuple space
Space uncoupling: A tuple placed in tuple space may originate from any number
of sender processes and may be delivered to any one of a number of potential
recipients.
Time uncoupling: A tuple placed in tuple space will remain in that tuple space
until removed and hence the sender and receiver do not need to overlap in time.
Implementation Issues
Centralized solution where the tuple space resource is managed by a single server is a
common implementation.
Replication:
A tuple space behaves like a state machine, maintaining state and changing this
state in response to events received from other replicas or from the environment.
To ensure consistency, the replicas:
(i) must start in the same state
(ii) must execute events in the same order
(iii) must react deterministically to each event.
Alternatively, a multicast algorithm can be used.
Here, updates are carried out in the context of the agreed set of replicas and
tuples are also partitioned into distinct tuple sets based on their associated logical
names.
The system consists of a set of workers carrying out computations on the tuple
space, and a set of tuple space replicas.
A given physical node can contain any number of workers, replicas or indeed
both; a given worker therefore may or may not have a local replica.
Nodes are connected by a communications network that may lose, duplicate or
delay messages and can deliver messages out of order.
A write operation is implemented by sending a multicast message over the
unreliable communications channel to all members of the view.
The write request is repeated until all acknowledgements are received.
The read operation consists of sending a multicast message to all replicas.
Write
1. The requesting site multicasts the write request to all members of the view;
2. On receiving this request, members insert the tuple into their replica and
acknowledge this action;
3. Step 1 is repeated until all acknowledgements are received.
Read
1. The requesting site multicasts the read request to all members of the view;
2. On receiving this request, a member returns a matching tuple to the requestor;
3. The requestor returns the first matching tuple received as the result of the
operation (ignoring others);
4. Step 1 is repeated until at least one response is received.
Phase 1: Selecting the tuple to be removed
1. The requesting site multicasts the take request to all members of the view;
2. On receiving this request, each replica acquires a lock on the associated tuple set
and, if the lock cannot be acquired, the take request is rejected;
3. All accepting members reply with the set of all matching tuples;
4. Step 1 is repeated until all sites have accepted the request and responded with their
set of tuples and the intersection is non-null;
5. A particular tuple is selected as the result of the operation.
6. If only a minority accept the request, this minority are asked to release their locks
and phase 1 repeats.
Phase 2: Removing the selected tuple
1. The requesting site multicasts a remove request to all members of the view
citing the tuple to be removed;
2. On receiving this request, members remove the tuple from their replica, send
an acknowledgement and release the lock;
3. Step 1 is repeated until all acknowledgements are received
Implicit dependencies:
A distributed object communicates with outside world through interfaces.
The interfaces provide a complete contract for the deploying and use of this
object.
The problem with distributed objects is that the internal behaviour of an
object is hidden.
Implicit dependencies in a distributed configuration are therefore not safe:
they make it hard for third-party developers to implement one particular
element of a distributed configuration.
Requirement: Dependencies of the object on other objects in the distributed
configuration must be specified.
Interaction with the middleware:
The distributed objects must interact even with low-level middleware
architecture.
Requirement: While programming distributed applications, a clean separation
must be made between code related to middleware framework and code
associated with the application.
Lack of separation of distribution concerns:
Distributed programmers must also focus on issues related to security,
transactions, coordination and replication.
To provide the non-functional requirements:
Programmers must have knowledge of the full details of all the
associated distributed system services.
The implementation of an object will contain calls to distributed system
services and to middleware interfaces. This increases the programming
complexity.
Requirement: These complexities should be hidden from
the programmer.
No support for deployment:
Objects must be deployed manually on individual machines and activated
when required.
This is a tiresome and error-prone process.
Requirement: Middleware platforms should support deployment.
Advantages of CBT
Independent extensions
Component Market
Component models lessen unanticipated interactions between components
Reduced time to market
Reduced Costs
Disadvantages of CBT:
Developing software components takes a big effort.
Components can be pricey.
Containers:
The overall architecture consists of:
o a front-end client
o a container holding one or more components that implement the
application or business logic
o system services that manage the associated data in persistent storage.
The containers provide a managed server-side hosting environment
for components.
The components deal with application concerns and the container deals with
distributed systems and middleware issues.
A container includes a number of components that require the same configuration
in terms of distributed system support.
Support for Deployment
Component-based middleware supports the deployment of
component configurations.
They include releases of software architectures (components and
their interconnections) together with deployment descriptors.
These descriptors describe how the configurations should be deployed in a
distributed environment.
Deployment descriptors are typically written in XML and include sufficient
information to ensure that:
o components are correctly connected using appropriate protocols and
associated middleware support
o the underlying middleware and platform are configured to provide the right
level of support to the component configuration
o the associated distributed system services are set up to provide the right
level of security, transaction support and so on.
Attribute Policy
REQUIRED If the client has an associated transaction running, execute
within this transaction; otherwise, start a new transaction.
REQUIRES_NEW Always start a new transaction for this invocation.
SUPPORTS If the client has an associated transaction, execute the method within
the context of this transaction; if not, the call proceeds without any
transaction support.
NOT_SUPPORTED If the client calls the method from within a transaction, then
this transaction is suspended before calling the method and
resumed afterwards – that is, the invoked method is excluded
from the transaction.
MANDATORY The associated method must be called from within a client
transaction; if not, an exception is thrown.
NEVER The associated methods must not be called from within a
client transaction; if this is attempted, an exception is
thrown
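As a rough paraphrase of the table, the container's choice of transaction context can be expressed as a function of the attribute and the client's current transaction. This is a simplified illustration, not real EJB container code (the string "new", the transaction names and the exception type are invented stand-ins):

```python
# Simplified illustration of EJB-style container-managed transaction
# attributes.  Given the attribute and the client's transaction (or
# None), return the transaction the method runs in: "new", the
# client's own, or None, or raise when the combination is illegal.

def transaction_for(attribute, client_tx):
    if attribute == "REQUIRED":
        return client_tx if client_tx else "new"
    if attribute == "REQUIRES_NEW":
        return "new"                 # always start a fresh transaction
    if attribute == "SUPPORTS":
        return client_tx             # may be None: run untransacted
    if attribute == "NOT_SUPPORTED":
        return None                  # client tx suspended for the call
    if attribute == "MANDATORY":
        if client_tx is None:
            raise RuntimeError("MANDATORY method needs a client transaction")
        return client_tx
    if attribute == "NEVER":
        if client_tx is not None:
            raise RuntimeError("NEVER method called inside a transaction")
        return None
    raise ValueError(attribute)

assert transaction_for("REQUIRED", None) == "new"
assert transaction_for("REQUIRED", "tx1") == "tx1"
assert transaction_for("NOT_SUPPORTED", "tx1") is None
```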
Dependency injection:
In dependency injection a third party (container), is responsible for managing and
resolving the relationships between a component and its dependencies.
A component refers to a dependency using an annotation and the container is
responsible for resolving this annotation and ensuring that, at runtime, the
associated attribute refers to the right object.
This is typically implemented by the container using reflection.
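A toy container illustrates the idea: components mark their dependencies with an annotation-like object, and the container resolves the markers by reflection at creation time. All the names here (Inject, Container, Mailer, SignupService) are invented for illustration:

```python
# Toy dependency-injection container: a component declares a dependency
# with a marker ("annotation"), and the container resolves the marker
# at runtime using reflection (getattr/setattr), much as EJB containers
# resolve injection annotations.

class Inject:
    """Marker left on a class attribute, naming the required dependency."""
    def __init__(self, name):
        self.name = name

class Container:
    def __init__(self):
        self.registry = {}

    def register(self, name, obj):
        self.registry[name] = obj

    def create(self, cls):
        instance = cls()
        # Reflect over the class, replacing each Inject marker with
        # the registered object it names.
        for attr in dir(cls):
            marker = getattr(cls, attr)
            if isinstance(marker, Inject):
                setattr(instance, attr, self.registry[marker.name])
        return instance

class Mailer:
    def send(self, to):
        return f"mail sent to {to}"

class SignupService:
    mailer = Inject("mailer")   # resolved by the container, not by us
    def signup(self, user):
        return self.mailer.send(user)

container = Container()
container.register("mailer", Mailer())
service = container.create(SignupService)
assert service.signup("alice") == "mail sent to alice"
```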
Enterprise JavaBean Interception
Enterprise JavaBeans allows interception of two types of operation to alter their
default behavior:
• method calls associated with a business interface
• lifecycle events associated with a bean
Invocation Contexts
Method – Description
public Object getTarget() – Returns the bean instance associated with the incoming
invocation or event
public Method getMethod() – Returns the method being invoked
public Object[] getParameters() – Returns the set of parameters associated with the
intercepted business method
public Object proceed() throws Exception – Execution proceeds to the next interceptor
in the chain (if any) or to the method that has been intercepted
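The proceed() chain can be mimicked in a few lines. This sketch only imitates the shape of the invocation-context API shown above; it is not the real Java interface:

```python
# Sketch of an interceptor chain: each interceptor receives a context
# whose proceed() runs the next interceptor in the chain, or finally
# the intercepted business method itself.

class InvocationContext:
    def __init__(self, target, method, parameters, interceptors):
        self.target = target          # cf. getTarget()
        self.method = method          # cf. getMethod()
        self.parameters = parameters  # cf. getParameters()
        self._chain = list(interceptors)

    def proceed(self):
        if self._chain:
            interceptor = self._chain.pop(0)
            return interceptor(self)
        return self.method(self.target, *self.parameters)

def logging_interceptor(ctx):
    # Runs around the business method: continue the chain, then
    # decorate the result.
    result = ctx.proceed()
    return f"logged:{result}"

class Account:
    def deposit(self, amount):
        return f"deposited {amount}"

ctx = InvocationContext(Account(), Account.deposit, (100,),
                        [logging_interceptor])
assert ctx.proceed() == "logged:deposited 100"
```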
REVIEW QUESTIONS
PART-A
1. What is IPC?
Interprocess communication (IPC) is a set of programming interfaces that allows a
programmer to coordinate activities among different program processes that can run
concurrently in an operating system.
Physical models: It is the most explicit way in which to describe a system in terms
of hardware composition.
Interaction models: This deals with the structure and sequencing of the
communication between the elements of the system
Failure models: This deals with the ways in which a system may fail to operate
correctly
Security models: This deals with the security measures implemented in the
system against attempts to interfere with its correct operation or to steal its
data.
3. Define ULS.
The ultra large scale (ULS) distributed systems is defined as a complex system
consisting of a series of subsystems that are systems in their own right and that come
together to perform a particular task or tasks.
RMI vs. RPC
• RMI uses an object-oriented paradigm where the user needs to know the object
and the method of the object he needs to invoke; RPC is not object oriented and
does not deal with objects – rather, it calls specific subroutines that are already
established.
• With RPC, the call looks like a local call, and RPC handles the complexities
involved in passing the call from the local to the remote computer; RMI also
handles the complexities of passing the invocation from the local to the remote
computer, but instead of passing a procedural call, it passes a reference to the
object and the method that is being called.
Interfaces vs. Classes
• Interfaces cannot be instantiated; classes can be instantiated.
• Interfaces do not contain constructors; classes can have constructors.
• Interfaces cannot have instance fields (i.e., all the fields in an interface must be
declared both static and final); classes have no constraints over the fields.
• An interface can contain only abstract methods; classes can contain method
implementations.
• Interfaces are implemented by a class, not extended; classes are extended by
other classes.
Advantages of CBT:
Independent extensions
Component Market
Component models lessen unanticipated interactions between components
Reduced time to market
Reduced Costs
Disadvantages of CBT:
Developing software components takes a big effort.
PART-B
1. Describe the physical system model.
2. Explain architectural models.
3. Write notes about architectural patterns.
4. Explain the middleware.
5. Describe the fundamental models.
6. Explain about interprocess communication.
7. Describe external data representation and marshaling.
8. Elucidate multicast communication.
9. Explain overlay networks.
10. Write notes on message passing interface.
Unlike the client/server model, in which the client makes a service request and the
server fulfills the request, the P2P network model allows each node to function as both
client and server.
P2P systems can be used to provide anonymized routing of network traffic, massive
parallel computing environments, distributed storage and other functions. Most P2P programs
are focused on media sharing and P2P is therefore often associated with software
piracy and copyright
violation.
Features of Peer to Peer Systems
Large scale sharing of data and resources
No need for centralized management
The design of a P2P system must be such that each user contributes
resources to the entire system.
All the nodes in a peer-to-peer system have the same functional capabilities and
responsibilities.
The operation of a P2P system does not depend on centralized management.
The choice of an algorithm for the placement of data across many hosts and the
access of the data must balance the workload and ensure availability without
much overhead.
4) A lot of movies, music and other copyrighted files are transferred using this type of
file transfer. P2P is the technology used in torrents.
P2P Middleware
Middleware is the software that manages and supports the different components of
a distributed system.
Peer clients need to locate and communicate with any available resource, even
though resources may be widely distributed and configuration may be dynamic,
constantly adding and removing resources and connections.
P2P middleware should address the following requirements:
Global Scalability
Load Balancing
Local Optimization
Security of data
IP vs. overlay network:
IP: The scalability of IPv4 is limited to 2^32 addressable nodes; IPv6 extends this
to 2^128.
Overlay network: Peer-to-peer systems can address more objects by using GUIDs.
[Figure: Napster servers – (1) a file location request is sent to the Napster index;
(2) the index returns a list of peers offering the file; (4) the file is delivered from
a peer; (5) the index is updated.]
Routing Overlays
Any node can access any object by routing each request through a sequence of
nodes, exploiting knowledge at each of them to locate the destination object.
Globally Unique IDs (GUIDs), also known as opaque identifiers, are used as
names, but they do not contain location information.
A client wishing to invoke an operation on an object submits a request including
the object's GUID to the routing overlay, which routes the request to a node at
which a replica of the object resides.
Routing overlays are sub-systems within the peer-to-peer middleware that are
meant for locating nodes and objects.
They implement a routing mechanism in the application layer.
They ensure that any node can access any object by routing each request through
a sequence of nodes.
Features of GUID:
They are pure names or opaque identifiers that do not reveal anything about the
locations of the objects.
They are the building blocks for routing overlays.
They are computed from all or part of the state of the object using a function that
delivers a value that is very likely to be unique. Uniqueness is then checked against
all other GUIDs.
They are not human understandable.
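Such GUIDs are typically derived by hashing the object's state. A minimal illustration (the choice of SHA-1 here is an assumption; real systems pick a hash that matches their identifier space):

```python
# GUIDs as opaque identifiers: hashing the object's state yields a
# value that is unique with overwhelming probability and reveals
# nothing about where the object lives.

import hashlib

def guid(state: bytes) -> str:
    # SHA-1 gives a 160-bit value; Pastry-style systems truncate or
    # map such a value into their identifier space (e.g. 128 bits).
    return hashlib.sha1(state).hexdigest()

g1 = guid(b"object state, version 1")
g2 = guid(b"object state, version 2")
assert g1 != g2           # different state, different GUID
assert len(g1) == 40      # 160 bits rendered as 40 hex digits
```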
3.5 PASTRY
Pastry provides efficient request routing, deterministic object location, and load
balancing in an application-independent manner.
Furthermore, Pastry provides mechanisms that support and facilitate application-
specific object replication, caching, and fault recovery.
Each node in the Pastry network has a unique, uniform random identifier (nodeId) in
a circular 128-bit identifier space.
When presented with a message and a numeric 128-bit key, a Pastry node efficiently
routes the message to the node with a nodeId that is numerically closest to the key,
among all currently live Pastry nodes.
The expected number of forwarding steps in the Pastry overlay network is O(log N),
while the size of the routing table maintained in each Pastry node is only O(log N) in
size (where N is the number of live Pastry nodes in the overlay network).
At each Pastry node along the route that a message takes, the application is notified
and may perform application-specific computations related to the message.
Each Pastry node keeps track of its L immediate neighbors in the nodeId space, called
the leaf set, and notifies applications of new node arrivals, node failures and node
recoveries within the leaf set.
Pastry takes into account locality (proximity) in the underlying Internet.
It seeks to minimize the distance messages travel, according to a scalar proximity
metric like the ping delay.
Pastry is completely decentralized, scalable, and self-organizing; it automatically
adapts to the arrival, departure and failure of nodes.
Capabilities of Pastry
Mapping application objects to Pastry nodes
Application-specific objects are assigned unique, uniform random identifiers
(objIds) and mapped to the k, (k >= 1) nodes with nodeIds numerically closest
to the objId.
The number k reflects the application's desired degree of replication for the
object.
Inserting objects
Application-specific objects can be inserted by routing a Pastry message, using
the objId as the key.
When the message reaches a node with one of the k closest nodeIds to the
objId, that node replicates the object among the other k-1 nodes with closest
nodeIds (which are, by definition, in the same leaf set for k <= L/2).
Accessing objects
Application-specific objects can be looked up, contacted, or retrieved by
routing a Pastry message, using the objId as the key.
By definition, the message is guaranteed to reach a node that maintains a
replica of the requested object unless all k nodes with nodeIds closest to the
objId have failed.
Availability and persistence
Applications interested in availability and persistence of application-specific
objects maintain the following invariant as nodes join, fail and recover: object
replicas are maintained on the k nodes with numerically closest nodeIds to the
objId, for k> 1.
The fact that Pastry maintains leaf sets and notifies applications of changes in
the set's membership simplifies the task of maintaining this invariant.
Diversity
The assignment of nodeIds is uniform random, and cannot be corrupted by an
attacker. Thus, with high probability, nodes with adjacent nodeIds are diverse
in geographic location, ownership, jurisdiction, network attachment, etc.
The probability that such a set of nodes is conspiring or suffers from correlated
failures is low even for modest set sizes.
This minimizes the probability of a simultaneous failure of all k nodes that
maintain an object
replica.
Load balancing
Both nodeIds and objIds are randomly assigned and uniformly distributed in
the 128-bit Pastry identifier space.
Without requiring any global coordination, this results in a good first-order
balance of storage requirements and query load among the Pastry nodes, as
well as network load in the underlying Internet.
Object caching
Applications can cache objects on the Pastry nodes encountered along the
paths taken by insert and lookup messages.
Subsequent lookup requests whose paths intersect are served the cached copy.
Pastry's network locality properties make it likely that messages routed with
the same key from nearby nodes converge early, thus lookups are likely to
intercept nearby cached objects.
This distributed caching offloads the k nodes that hold the primary replicas of
an object, and it minimizes client delays and network traffic by dynamically
caching copies near interested clients.
Efficient, scalable information dissemination
Applications can perform efficient multicast using reverse path forwarding along
the tree formed by the routes from clients to the node with nodeId numerically
closest to a given objId.
Pastry's network locality properties ensure that the resulting multicast trees are
efficient; i.e., they result in efficient data delivery and resource usage in the
underlying Internet.
Second Stage:
This has a full routing algorithm, which routes a request to any node
in O(log N) messages.
Each Pastry node maintains a tree-structured routing table with GUIDs and
IP addresses for a set of nodes spread throughout the entire range of 2^128
possible GUID values, with increased density of coverage for GUIDs
numerically close to its own.
The structure of the routing table is as follows: GUIDs are viewed as hexadecimal
values and the table classifies GUIDs based on their hexadecimal prefixes.
The table has as many rows as there are hexadecimal digits in a GUID, so for
the prototype Pastry system that we are describing, there are 128/4 = 32 rows.
Any row n contains 15 entries – one for each possible value of the
nth hexadecimal digit, excluding the value in the local node's GUID.
Each entry in the table points to one of the potentially many nodes
whose GUIDs have the relevant prefix.
The routing process at any node A uses the information in its routing table R
and leaf set L to handle each request from an application and each incoming
message from another node according to the algorithm.
Pastry’s Routing Algorithm
To handle a message M addressed to a node D (where R[p,i] is the element at column i,
row p of the routing table):
1. If (L-l < D < Ll) { // the destination is within the leaf set or is the current node.
2. Forward M to the element Li of the leaf set with GUID closest to D or
the current node A.
3. } else { // use the routing table to despatch M to a node with a closer GUID
4. Find p, the length of the longest common prefix of D and A, and i, the (p+1)th
hexadecimal digit of D.
5. If (R[p,i] ≠ null) forward M to R[p,i] // route M to a node with a longer common prefix.
6. else { // there is no entry in the routing table.
7. Forward M to any node in L or R with a common prefix of length p but a
GUID that is numerically closer.
}
}
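A runnable paraphrase of this routing step, using 8-hex-digit GUIDs and a dictionary as the routing table, might look like the following (the leaf-set handling is greatly simplified, and all GUID values are invented):

```python
# Simplified Pastry-style prefix routing over 8-hex-digit GUIDs.
# routing_table[(p, i)] holds a known node whose GUID shares a
# p-digit prefix with ours and has hex digit i at position p; each hop
# greedily extends the prefix shared with the destination D.

def common_prefix_len(a, b):
    n = 0
    while n < len(a) and a[n] == b[n]:
        n += 1
    return n

def next_hop(A, D, routing_table, leaf_set):
    if D in leaf_set or D == A:
        # Destination is in our leaf set (or is us): deliver directly
        # (exact match only, for simplicity).
        return D
    p = common_prefix_len(A, D)
    i = D[p]                          # the (p+1)th hex digit of D
    entry = routing_table.get((p, i))
    if entry is not None:
        return entry                  # longer common prefix: forward
    # No table entry: fall back to any known node with a common prefix
    # of at least length p that is numerically closer to D than we are.
    candidates = [n for n in leaf_set
                  if common_prefix_len(n, D) >= p
                  and abs(int(n, 16) - int(D, 16)) <
                      abs(int(A, 16) - int(D, 16))]
    return min(candidates, key=lambda n: abs(int(n, 16) - int(D, 16)))

A = "65a1fc00"
table = {(0, "d"): "d13da300"}   # a node whose GUID starts with 'd'
assert next_hop(A, "d46a1c00", table, []) == "d13da300"
assert next_hop(A, A, table, []) == A
```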
Locality
The Pastry routing structure is redundant.
Each row in a routing table contains 16 entries.
The entries in the ith row give the addresses of 16 nodes with GUIDs whose i–1 initial
hexadecimal digits match the current node's GUID and whose ith digit takes each
of the possible hexadecimal values.
A well-populated Pastry overlay will contain many more nodes than can be contained
in an individual routing table; whenever a new routing table is being constructed, a
choice is made for each position between several candidates based on a proximity
neighbour selection algorithm.
A locality metric is used to compare candidates and the closest available node is
chosen.
Since the information available is not comprehensive, this mechanism cannot
produce globally optimal routings, but simulations have shown that it results in routes
that are on average only about 30–50% longer than the optimum.
Fault tolerance
The Pastry routing algorithm assumes that all entries in routing tables and leaf sets
refer to live, correctly functioning nodes.
All nodes send 'heartbeat' messages (i.e., messages sent at fixed time intervals to
indicate that the sender is alive) to neighbouring nodes in their leaf sets, but
information about failed nodes detected in this manner may not be disseminated
sufficiently rapidly to eliminate routing errors.
Nor does it account for malicious nodes that may attempt to interfere with
correct routing.
To overcome these problems, clients that depend upon reliable message delivery
are expected to employ an at-least-once delivery mechanism and repeat their
requests several times in the absence of a response.
This will allow Pastry a longer time window to detect and repair node failures.
To deal with any remaining failures or malicious nodes, a small degree of
randomness is introduced into the route selection algorithm.
Dependability
Dependability measures include the use of acknowledgements at each hop in the
routing algorithm.
If the sending host does not receive an acknowledgement after a specified timeout,
it selects an alternative route and retransmits the message.
The node that failed to send an acknowledgement is then noted as a suspected
failure.
To detect failed nodes, each Pastry node periodically sends a heartbeat message to
its immediate neighbour to the left (i.e., with a lower GUID) in the leaf set.
Each node also records the time of the last heartbeat message received from its
immediate neighbour on the right (with a higher GUID).
If the interval since the last heartbeat exceeds a timeout threshold, the detecting
node starts a repair procedure that involves contacting the remaining nodes in the
leaf set with a notification about the failed node and a request for suggested
replacements.
Even in the case of multiple simultaneous failures, this procedure terminates with
all nodes on the left side of the failed node having leaf sets that contain the l live
nodes with the closest GUIDs.
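The heartbeat bookkeeping can be sketched as below; the 30-second timeout and the class and node names are illustrative assumptions, not Pastry's actual parameters:

```python
# Sketch of leaf-set failure detection: a node records the time of the
# last heartbeat from its right-hand neighbour and flags the neighbour
# as suspected once a timeout threshold is exceeded, at which point the
# repair procedure described above would start.

TIMEOUT = 30.0   # seconds; an assumed threshold, not Pastry's value

class FailureDetector:
    def __init__(self):
        self.last_heartbeat = {}

    def heartbeat(self, node, now):
        # Record the arrival time of a heartbeat from `node`.
        self.last_heartbeat[node] = now

    def suspected(self, node, now):
        # A node is suspected if we never heard from it, or the
        # interval since its last heartbeat exceeds the threshold.
        last = self.last_heartbeat.get(node)
        return last is None or now - last > TIMEOUT

fd = FailureDetector()
fd.heartbeat("right-neighbour", now=100.0)
assert not fd.suspected("right-neighbour", now=120.0)   # within timeout
assert fd.suspected("right-neighbour", now=140.0)       # repair begins
```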
Suspected failed nodes in routing tables are probed in a similar manner to that used
for the leaf set, and if they fail to respond, their routing table entries are replaced
with a suitable alternative obtained from a nearby node.
A simple gossip protocol is used to periodically exchange routing table
information between nodes in order to repair failed entries and prevent slow
deterioration of the locality properties.
The gossip protocol is run about every 20 minutes.
3.6 TAPESTRY
Tapestry is a decentralized distributed system.
It is an overlay network that implements simple key-based routing.
Each node serves as both an object store and a router that applications can contact
to obtain objects.
In a Tapestry network, objects are "published" at nodes, and once an object has
been successfully published, it is possible for any other node in the network to find
the location at which that object is published.
Identifiers are either NodeIds, which refer to computers that perform routing
operations, or GUIDs, which refer to the objects.
For any resource with GUID G there is a unique root node with GUID RG that is
numerically closest to G.
Hosts H holding replicas of G periodically invoke publish(G) to ensure that newly
arrived hosts become aware of the existence of G.
[Figure: Tapestry routing example – replicas of the object "Phil's Books" are
published at Tapestry nodes and located by routing through nodes identified by
4-hex-digit prefixes (4377, 43FE, 437A, 4228, 4361, 4378, 4664, 4B4F, 4A6D,
E791, 57EC, AA93).]
Gnutella
The Gnutella network is a fully decentralized, peer-to-peer application layer
network that facilitates file sharing; and is built around an open protocol developed
to enable host discovery, distributed search, and file transfer.
Distributed 3.17
System
It consists of the collection of Internet connected hosts on which Gnutella protocol
enabled applications are running.
The Gnutella protocol makes possible all host-to-host communication through the
use of messages.
Types of Message
There are only five types of messages
Ping
Pong
Query
Query-Hit
Push
Ping and Pong messages facilitate host discovery, Query and Query-Hit messages make
possible searching the network, and Push messages ease file transfer from firewalled
hosts. Because there is no central index providing a list of hosts connected to the
network, a disconnected host must have offline knowledge of a connected host in order
to connect to the network.
Once connected to the network, a host is always involved in discovering hosts,
establishing connections, and propagating messages that it receives from its peers,
but it may also initiate any of the following voluntary activities: searching for
content, responding to searches, retrieving content, and distributing content. A host
will typically engage in these activities simultaneously.
A host will search for other hosts and establish connections so as to satisfy a
maximum requirement for active connections as specified by the user, or to replace
active connections dropped by it or its peer.
Consequently, a host tends to always maintain the maximum requirement for
active connections as specified by the user, or the one connection that it needs to
remain connected to the network.
To engage in host discovery, a host must issue a Ping message to the host of which
it has offline knowledge.
That host will then forward the ping message across its open connections, and
optionally respond to it with a Pong message.
Each host that subsequently receives the Ping message will act in a similar manner
until the TTL of the message has been exhausted.
A Pong message may only be routed along the reverse of the path that carried the
Ping message to which it is responding.
After having discovered a number of other hosts, the host that issued the initial
ping message may begin to open further connections to the network.
Doing so allows the host to issue ping messages to a wider range of hosts and
therefore discover a wider range of other hosts, as well as to begin querying the
network.
In searching for content, a host propagates search queries across its active
connections.
Those queries are then processed and forwarded by its peers.
When processing a query, a host will typically apply the query to its local database
of content, and respond with a set of URLs pointing to the matching files.
The propagation of Query and Query-Hit messages is identical to that of Ping and
Pong messages.
A host issues a Query message to the hosts to which it is connected.
The hosts receiving that Query message will then forward it across their open
connections, and optionally respond with a Query-Hit message.
Each host that subsequently receives the Query message will act in a similar
manner until the TTL of the message has been exhausted.
Query-Hit messages may only be routed along the reverse of the path that carried
the Query message to which it is a response.
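The TTL-limited flooding of Ping (and Query) messages can be sketched as a bounded graph traversal; the topology and function names below are invented for illustration:

```python
# Sketch of Gnutella-style flooding: a Ping is forwarded over every
# open connection until its TTL is exhausted; each reached host would
# reply with a Pong routed back along the reverse of the Ping's path.

def flood(graph, origin, ttl):
    """Return the set of hosts discovered by a Ping with the given TTL."""
    discovered = set()
    frontier = [(origin, ttl)]
    seen = {origin}
    while frontier:
        host, remaining = frontier.pop()
        if remaining == 0:
            continue                      # TTL exhausted: stop here
        for peer in graph[host]:
            if peer not in seen:
                seen.add(peer)
                discovered.add(peer)      # peer answers with a Pong
                frontier.append((peer, remaining - 1))
    return discovered

# Illustrative topology: A-B-C-D in a line, plus a link A-E.
graph = {"A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C"], "E": ["A"]}
assert flood(graph, "A", ttl=1) == {"B", "E"}
assert flood(graph, "A", ttl=3) == {"B", "C", "D", "E"}
```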
The sharing of content on the Gnutella network is accomplished through the use of
the HTTP protocol.
Due to the decentralized nature of the architecture, each host participating in the
Gnutella network plays a key role in the organization and operation of the
network.
Both the sharing of host information, as well as the propagation of search requests
and responses are the responsibility of all hosts on the network rather than a central
index.
Each host in the Gnutella network is capable of acting as both a server and client,
allowing a user to distribute and retrieve content simultaneously.
This spreads the provision of content across many hosts, and helps to eliminate the
bottlenecks that typically occur when placing enormous loads on a single host.
In this network, the amount and variety of content available to a user scales with
the number of users participating in the network.
Distributed file systems support the sharing of information in the form of files
and hardware resources.
Examples:
The GOOGLE File System: A scalable distributed file system for large
distributed data-intensive applications. It provides fault tolerance while running on
inexpensive commodity hardware, and it delivers high aggregate performance to a
large number of clients.
The CODA distributed file system: Developed at CMU, incorporates many
distinguished features which are not present in any other distributed file system.
The file service architecture comprises three components: a flat file service, a
directory service and a client module.
[Figure: File service architecture – application programs on the client computer
access files through the client module, which communicates with the directory
service and the flat file service on the server computer.]
Unique File Identifiers (UFIDs) are used to refer to files in all requests for flat file
service operations.
UFIDs are long sequences of bits chosen so that each file has a UFID that is
unique among all of the files in a distributed system.
Directory service:
This provides mapping between text names for the files and their UFIDs.
Clients may obtain the UFID of a file by quoting its text name to directory service.
Directory service supports the functions needed to generate directories and to add
new files to directories.
Client module:
It runs on each computer and provides integrated service (flat file and directory) as
a single API to application programs.
It holds information about the network locations of flat-file and directory server
processes; and achieve better performance through implementation of a cache of
recently used file blocks at the client.
Create() → FileId: Creates a new file of length 0 and delivers a UFID for it.
File groups have identifiers which are unique throughout the system.
To construct a globally unique ID we use some unique attribute of the machine on
which it is created, e.g. IP number, even though the file group may move
subsequently.
– They are stored on the workstation's disk and are available only to local user
processes.
[Figure: Distribution of processes in AFS – each workstation runs user programs
and a Venus process above the UNIX kernel; the servers run Vice processes,
connected to the workstations over the network.]
[Figure: System call interception in AFS – non-local file operations issued by a
user program are passed by the UNIX kernel to Venus, while UNIX file system
calls for local files go to the UNIX file system on the local disk.]
Thus, the client's request for file access is delivered across the network as a
message to the server; the server machine performs the access request, and the
result is sent to the client.
There is a need to minimize the number of messages sent and the overhead per
message.
2. Data-caching model
This model attempts to reduce the network traffic of the previous model by
caching the data obtained from the server node.
This takes advantage of the locality feature found in file accesses.
A replacement policy such as LRU is used to keep the cache size bounded.
Although this model reduces network traffic, it has to deal with the cache
coherency problem during writes, because the local cached copy of the data,
the original file at the server node, and the copies in any other caches all need
to be updated.
The data-caching model offers the possibility of increased performance and
greater system scalability because it reduces network traffic, contention for the
network, and contention for the file servers. Hence almost all distributed file
systems implement some form of caching.
In file systems that use the data-caching model, an important design issue is to
decide the unit of data transfer.
This refers to the fraction of a file that is transferred to and from clients as a
result of a single read or write operation.
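The bounded client cache with LRU replacement described above can be sketched with an OrderedDict; the fetch callback stands in for the RPC to the file server, and all names are illustrative:

```python
# Sketch of a client-side block cache with LRU replacement: a read of
# a cached block is served locally; a miss "fetches" from the server
# and may evict the least recently used block to keep the cache
# bounded.  (Write-through and coherency are not modelled here.)

from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity, fetch):
        self.capacity = capacity
        self.fetch = fetch          # called on a miss, e.g. an RPC
        self.blocks = OrderedDict()

    def read(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # now most recently used
            return self.blocks[block_id]
        data = self.fetch(block_id)             # miss: go to the server
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict the LRU block
        return data

server_reads = []
def fetch_from_server(block_id):
    server_reads.append(block_id)
    return f"data-{block_id}"

cache = BlockCache(capacity=2, fetch=fetch_from_server)
cache.read(1); cache.read(2); cache.read(1)   # block 1 is most recent
cache.read(3)                                 # evicts block 2 (LRU)
assert server_reads == [1, 2, 3]
assert cache.read(2) == "data-2"              # miss again: refetched
assert server_reads == [1, 2, 3, 2]
```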
3. File-level transfer model
In this model when file data is to be transferred, the entire file is moved.
File needs to be transferred only once in response to client request and hence is
more efficient than transferring page by page which requires more network
protocol overhead.
This reduces server load and network traffic since it accesses the server only
once. This has better scalability.
Once the entire file is cached at the client site, it is immune to server and
network failures.
This model requires sufficient storage space on the client machine.
This approach fails for very large files, especially when the client runs on a
diskless workstation.
If only a small fraction of a file is needed, moving the entire file is wasteful.
The naming facility of a distributed operating system enables users and programs to
assign character-string names to objects and subsequently use these names to refer
to those objects.
The locating facility, which is an integral part of the naming facility, maps an
object's name to the object's location in a distributed system.
The naming and locating facilities jointly form a naming system that provides the
users with an abstraction of an object that hides the details of how and where an
object is actually located in the network.
Given an object name, it returns a set of the locations of the object's replicas.
The naming system plays a very important role in achieving the goal of
location transparency.
Example:
3.12.1 Identifiers:
3. Changing the current context first and then using a relative name
Managerial layer
– Rather, the name agent retains control over the resolution process and one by one
calls each of the servers involved in the resolution process.
– To continue the name resolution, the name agent sends a name resolution request
along with the unresolved portion of the name to the next name server.
– The process continues until the name agent receives the authority attribute of the
named object
Caching at the client is an effective way of pushing the processing workload from
the server out to client devices, if a client has the capacity.
If result data is likely to be reused by multiple clients or if the client devices do not
have the capacity then caching at the server is more effective.
Multi-level caching
– Client sends operation requests to the server and the server sends responses in turn.
– With some exceptions the client need not wait for a response before sending the
next request. Server may send the responses in any order.
Directory Structure
Directory is a tree of directory entries.
Each entry consists of a set of attributes.
An attribute has: a name , an attribute type or attribute description and one or
more values
Attributes are defined in a schema.
Each entry has a unique identifier called Distinguished Name (DN).
The DN consists of its Relative Distinguished Name (RDN) constructed from
some attribute(s) in the entry.
It is followed by the parent entry's DN.
Think of the DN as a full filename and the RDN as a relative filename in a folder.
DN may change over the lifetime of the entry.
To reliably and unambiguously identify entries, a UUID might be provided in the
set of the entry's operational attributes.
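The DN/RDN composition rule above can be shown with a small sketch (the helper name and example attribute values are invented, following common LDAP conventions): a DN is the entry's RDN followed by the parent entry's DN.

```python
def distinguished_name(rdn, parent_dn=None):
    """Compose a DN: the entry's RDN followed by the parent entry's DN.
    A root entry has no parent, so its DN is just its RDN."""
    return rdn if not parent_dn else rdn + "," + parent_dn
```

For example, the entry with RDN `cn=John Doe` under the parent `ou=people,dc=example,dc=com` has DN `cn=John Doe,ou=people,dc=example,dc=com`, much like a relative filename joined onto a folder path.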
There are mainly two types of file models in the distributed operating system.
1. Structure Criteria
2. Modifiability Criteria
Structure Criteria
There are two types of file models in structure criteria. These are as follows:
1. Structured Files
2. Unstructured Files
Structured Files
The Structured file model is presently a rarely used file model. In the structured file model, a file is
seen as a collection of records by the file system. Files come in various shapes and sizes and with a
variety of features. It is also possible that records from various files in the same file system have
varying sizes. Despite belonging to the same file system, files have various attributes. A record is the
smallest unit of data from which data may be accessed. The read/write actions are executed on a set
of records. Different "File Attributes" are provided in a hierarchical file system to characterize the
file. Each attribute consists of two parts: a name and a value. The file system used determines the
file attributes. It provides information on files, file sizes, file owners, the date of last modification,
the date of file creation, access permission, and the date of last access. Because of the varied access
rights, the Directory Service function is utilized to manage file attributes.
Records in non-indexed files are retrieved based on their position inside the file: for instance, the
second record from the beginning, or the second record from the end.
Each record contains a single or many key fields in a file containing indexed records, each of which
may be accessed by specifying its value. A file is stored as a B-tree or similar data structure or hash
table to find records quickly.
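The idea of an indexed file can be sketched as follows. This is a simplified illustration (record fields and variable names are invented): a hash-table index on a key field lets a record be accessed by its key value rather than by position.

```python
# A file viewed as a collection of records, each with a key field "id".
records = [
    {"id": 1, "name": "alpha"},
    {"id": 2, "name": "beta"},
]

# Hash-table index on the key field: key value -> record.
index = {r["id"]: r for r in records}
```

A real file system would keep the index on disk (e.g., as a B-tree), but the lookup-by-key idea is the same.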
Unstructured Files
It is the most important and widely used file model. In the unstructured model, a file is a sequence
of unstructured data; no substructure is supported. Each file in the file system appears as an
uninterpreted sequence of bytes, as in UNIX or DOS. Most modern operating systems prefer the
unstructured file model over the structured file model because files are shared by multiple
applications: since a file has no imposed structure, different applications can interpret its contents
in different ways.
Modifiability Criteria
There are two files model in the Modifiability Criteria. These are as follows:
1. Mutable Files
2. Immutable Files
Mutable Files
Most existing operating systems employ the mutable file model. A file is represented as a single
sequence of records that is updated in place: when new material is added or the file is modified,
the new contents replace the existing contents.
Immutable Files
The Immutable file model is used by Cedar File System (CFS). The file may not be modified once
created in the immutable file model. Only after the file has been created can it be deleted. Several
versions of the same file are created to implement file updates. When a file is changed, a new file
version is created. There is consistent sharing because only immutable files are shared in this file
paradigm. Distributed systems allow caching and replication strategies, overcoming the limitation of
many copies and maintaining consistency. The disadvantages of employing the immutable file
model include increased space use and disc allocation activity. CFS uses the "Keep" parameter to
keep track of the file's current version number. When the parameter value is 1, it results in the
production of a new file version. The previous version is erased, and the disk space is reused for a
new one. When the parameter value is greater than 1, it indicates the existence of several versions of
a file. If the version number is not specified, CFS utilizes the lowest version number for actions such
as "delete" and the highest version number for other activities such as "open".
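The immutable file model above can be sketched with a small versioned store. This is an illustrative sketch only (class and method names are invented, and CFS's actual "Keep" bookkeeping is not reproduced): every update creates a new version, and existing versions are never modified.

```python
class ImmutableFileStore:
    """Sketch of an immutable file model: each write creates a new version."""

    def __init__(self):
        self._versions = {}  # file name -> list of version contents

    def write(self, name, data):
        """Create a new version of the file; never modify existing versions."""
        self._versions.setdefault(name, []).append(data)
        return len(self._versions[name])  # the new version number

    def read(self, name, version=None):
        """Read a specific version, or the latest version by default."""
        versions = self._versions[name]
        return versions[-1] if version is None else versions[version - 1]
```

Because any given version never changes, replicas and caches of it can never become inconsistent, which is the sharing advantage claimed above, at the cost of extra space for the old versions.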
REVIEW QUESTIONS
PART - A
IP vs. Overlay Network
The scalability of IP addressing is limited: IPv4 can address at most 2^32 nodes, and IPv6 2^128.
Peer-to-peer overlay networks can address more objects by using GUIDs.
PART – B
1. Explain about peer to peer communication.
2. Explain the working of routing overlays.
3. Describe Napster.
4. Write in detail about peer to peer middleware.
5. Explain about pastry.
6. Describe tapestry.
UNIT 4
SYNCHRONIZATION AND REPLICATION
4.1 INTRODUCTION TO SYNCHRONIZATION
Synchronizing data in a distributed system is an enormous challenge in and of itself.
In single CPU systems, critical regions, mutual exclusion, and other synchronization problems
are solved using methods such as semaphores. These methods will not work in distributed
systems because they implicitly rely on the existence of shared memory.
Examples:
Two processes interacting using a semaphore must both be able to access the
semaphore. In a centralized system, the semaphore is stored in the kernel and accessed
by the processes using system calls.
If two events occur in a distributed system, it is difficult to determine which event
occurred first.
Communication between processes in a distributed system can have unpredictable
delays, processes can fail, and messages may be lost. Synchronization in distributed
systems is therefore harder than in centralized systems and requires distributed
algorithms. The following are the properties of distributed algorithms:
The relevant information is scattered among multiple machines.
Processes make decisions based only on locally available information.
A single point of failure in the system should be avoided.
No common clock or other precise global time source exists.
External synchronization
This method synchronizes the process's clock with an authoritative external
reference clock S(t) by limiting the skew to a bound D > 0: |S(t) - Ci(t)| <
D for all t.
For example, synchronization with a UTC (Coordinated Universal
Time) source.
Internal synchronization
Synchronize the local clocks within a distributed system so that they disagree
by no more than a bound D > 0, without necessarily achieving external
synchronization: |Ci(t) - Cj(t)| < D for all i, j, t.
For a system with external synchronization bound of D, the internal
synchronization is bounded by 2D.
Checking the correctness of a clock
If the drift rate falls within a bound r > 0, then for any t and t′ with t′ > t the following
error bound in measuring t and t′ holds:
(1 - r)(t′ - t) ≤ H(t′) - H(t) ≤ (1 + r)(t′ - t)
Monotonically increasing
Drift rate bounded between synchronization points
Principle:
The UTC-synchronized time server S is used here.
A process P sends requests to S and measures the round-trip time Tround.
In a LAN, Tround should be around 1-10 ms.
During this time, a clock with a 10^-6 sec/sec drift rate varies by at most 10^-8 sec.
Hence the estimate of Tround is reasonably accurate.
Now set clock to t + ½ Tround.
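The clock-setting rule above (Cristian's algorithm) can be written as a one-line sketch; the function name and timestamp parameters are invented for illustration.

```python
def cristian_adjusted_time(server_time, t_request, t_reply):
    """Cristian's algorithm: on receiving server time t, set the local clock
    to t + Tround/2, where Tround is the measured round-trip time."""
    t_round = t_reply - t_request   # measured on the client's own clock
    return server_time + t_round / 2
```

The half-round-trip correction assumes the request and reply take roughly equal time, which is why a small, accurately measured Tround matters.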
The time daemon asks all the other machines for their clock values.
The Network Time Protocol defines architecture for a time service and a protocol to
distribute time information over the Internet.
Features of NTP:
To provide protection against interference with the time service, whether malicious
or accidental.
Synchronization at clients with higher stratum numbers is less accurate
because of the increased latency to the stratum 1 time server.
If a stratum 1 server's UTC source fails, it may become a stratum 2 server that is
synchronized through another stratum 1 server.
Modes of NTP:
NTP works in the following modes:
Multicast:
One computer periodically multicasts time info to all other computers on
network.
Procedure-call:
This is similar to Cristian's algorithm.
Symmetric:
The history includes the local timestamps of send and receive of the previous NTP
message and the local timestamp of send of this message
For each pair i of messages (m, m′) exchanged between two servers the following
values are computed.
Let o be the true offset of B's clock relative to A's clock, and let t and t′ be the
true transmission times of m and m′ (Ti, Ti-1, ... are local timestamps, not true times):
Ti-2 = Ti-3 + t + o ........................(1)
Ti = Ti-1 + t′ - o ........................(2)
Adding and subtracting (1) and (2) gives the delay estimate
di = (Ti - Ti-3) - (Ti-1 - Ti-2) and the offset estimate
oi = ((Ti-2 - Ti-3) + (Ti-1 - Ti)) / 2.
Implementing NTP
Statistical algorithms based on 8 most recent pairs are used in NTP to determine
quality of estimates.
The value of oi that corresponds to the minimum di is chosen as an estimate for o .
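The per-pair estimates can be computed directly from the four timestamps. In the sketch below (function and parameter names are invented), t0..t3 stand for Ti-3, Ti-2, Ti-1, Ti: A sends m at t0, B receives it at t1, B sends m′ at t2, and A receives it at t3.

```python
def ntp_estimates(t0, t1, t2, t3):
    """Offset and delay estimates from one NTP message pair.
    t0: A sends m; t1: B receives m; t2: B sends m'; t3: A receives m'."""
    offset = ((t1 - t0) + (t2 - t3)) / 2   # oi = ((Ti-2 - Ti-3) + (Ti-1 - Ti)) / 2
    delay = (t3 - t0) - (t2 - t1)          # di = (Ti - Ti-3) - (Ti-1 - Ti-2)
    return offset, delay
```

NTP's filtering then keeps the offset oi associated with the minimum delay di over the eight most recent pairs, since a small delay means the symmetric-delay assumption was closest to holding.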
The time server communicates with multiple peers, eliminates peers with unreliable
data, and favors peers with lower stratum numbers (e.g., for primary synchronization
partner selection).
NTP phase lock loop model: modify local clock in accordance with observed drift
rate.
The experiments achieve synchronization accuracies of 10 msecs over Internet,
and 1 msec on LAN using NTP
• L(e) = M * Li(e) + i, where M is the maximum number of processes and i is the
process identifier; this extends Lamport clocks to a total order.
Vector clocks
The main aim of vector clocks is to produce an ordering that matches causality. The
property V(e) < V(e′) if and only if e → e′ holds for vector clocks, where V(e) is the
vector clock for event e. To implement vector clocks, label each event e by a vector V(e) = [c1,
c2, ..., cn], where ci is the number of events in process i that causally precede e.
• Each processor keeps a vector of values, instead of a single value.
• VCi is the clock at process i; it has a component for each process in the
system. VCi[i] corresponds to Pi's local "time".
VCi[j] represents Pi's knowledge of the "time" at Pj (the number of events that Pi
knows have occurred at Pj).
• Each processor knows its own "time" exactly, and updates the values of other
processors' clocks based on timestamps received in messages.
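The update rules above can be sketched as a small class (names are invented; this follows the standard vector-clock rules: increment your own component on each event, and take the component-wise maximum on receipt).

```python
class VectorClock:
    def __init__(self, n, i):
        self.v = [0] * n   # one component per process in the system
        self.i = i         # this process's own index

    def local_event(self):
        self.v[self.i] += 1            # only the local component advances

    def send(self):
        """A send is an event: tick, then attach the timestamp to the message."""
        self.local_event()
        return list(self.v)

    def receive(self, ts):
        """Merge knowledge from the sender, then count the receive event."""
        self.v = [max(a, b) for a, b in zip(self.v, ts)]
        self.v[self.i] += 1
```

With this, V(e) < V(e′) (component-wise, with strict inequality somewhere) holds exactly when e causally precedes e′.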
Debugging
The global state of a distributed system consists of the local state of each process,
together with the messages that are currently in transit, that is, that have been
sent but not delivered.
Distributed Snapshot represents a state in which the distributed system might have
been in. A snapshot of the system is a single configuration of the system.
When Q completes its part of the snapshot, it sends its predecessor a DONE
message.
By recursion, when the initiator of the distributed snapshot has received a DONE
message from all its successors, it knows that the snapshot has been completely
taken.
But there can be process failures (i.e.) the whole process crashes.
The failure can be detected when the object/code in a process that detects failures
of other processes.
There are two types of failure detectors: unreliable failure detector and reliable
failure detector.
• A failure detector reports each process as either unsuspected or suspected (possibly failed).
exit(): Leaves the critical section. Now other processes can enter critical
section. The following are the requirements for Mutual Exclusion (ME):
[ME1] safety: only one process at a time
Performance Evaluation:
The following are the criteria for performance measures:
Bandwidth consumption, which is proportional to the number of messages sent
in each entry and exit operations.
The client delay incurred by a process at each entry and exit operation.
Throughput of the system: Rate at which the collection of processes as a whole
can access the critical section.
The reply constitutes a token signifying permission to enter the critical section.
If no other process holds the token at the time of the request, the server replies
immediately with the token.
If token is currently held by another process, then the server does not reply but
queues the request.
Client on exiting the critical section, a message is sent to server, giving it back the
token.
Each process pi has a communication channel to the next process in the ring,
p(i+1) mod N.
The unique token is in the form of a message passed from process to process in a
single direction clockwise.
If a process does not require to enter the CS when it receives the token, then it
immediately forwards the token to its neighbor.
A process that requires the token waits until it receives it, and then retains it.
To exit the critical section, the process sends the token on to its neighbor.
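The token-forwarding rule can be sketched as follows (function and variable names are invented): the token moves clockwise until it reaches a process that wants the critical section.

```python
def circulate_token(start, wants_cs, n):
    """Forward the token clockwise around the ring of n processes until it
    reaches a process that wants the CS; that process retains the token."""
    i = start
    for _ in range(n):
        if wants_cs[i]:
            return i                 # this process keeps the token and enters the CS
        i = (i + 1) % n              # otherwise forward to neighbour p((i+1) mod N)
    return start                     # no one wants the CS; token keeps circulating
```

The sketch also shows why the ring algorithm consumes bandwidth continuously: the token circulates even when no process wants to enter the critical section.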
Performance Evaluation:
Bandwidth consumption is high.
Client delay is again 1 round trip time
Synchronization delay is one message transmission time.
If three processes concurrently request entry to the CS, then it is possible for p1 to
reply to itself and hold off p2, for p2 to reply to itself and hold off p3, and for p3 to
reply to itself and hold off p1.
Each process has received one out of two replies, and none can proceed.
Performance Evaluation:
Bandwidth utilization is 2sqrt(N) messages per entry to CS and sqrt(N) per exit.
Client delay is the same as in Ricart and Agrawala's algorithm: one round-trip time.
The reactions of the algorithms when messages are lost or when a process crashes
is fault tolerance.
None of the algorithm that we have described would tolerate the loss of messages
if the channels were unreliable.
The ring-based algorithm cannot tolerate any single process crash failure.
The central server algorithm can tolerate the crash failure of a client process that
neither holds nor has requested the token.
4.6 ELECTIONS
6. If the received identifier is that of the receiver itself, then this process's identifier
must be the greatest, and it becomes the coordinator.
7. The coordinator marks itself as a non-participant, sets elected_i and sends an elected
message to its neighbour enclosing its ID.
8. When a process receives an elected message, it marks itself as a non-participant, sets its
variable elected_i and forwards the message.
The election was started by process 17. The highest process identifier encountered so
far is 24.
Requirements:
E1 is met. All identifiers are compared, since a process must receive its own ID
back before sending an elected message.
E2 is also met due to the guaranteed traversals of the ring.
Performance Evaluation
If only a single process starts an election, the worst case is when its
anti-clockwise neighbour has the highest identifier. A total of N-1 messages is
used to reach this neighbour, a further N messages are needed for the election
message to complete its circuit, and the elected message is then sent N times,
making 3N-1 messages in all.
Turnaround time is also 3N-1 sequential message transmission times.
Election process:
A process begins an election by sending an election message to those processes that
have a higher ID and awaits an answer in response.
If none arrives within time T, the process considers itself the coordinator and sends
coordinator message to all processes with lower identifiers.
Otherwise, it waits a further time T‟ for coordinator message to arrive. If none, begins
another election.
If a process receives a coordinator message, it sets its variable elected_i to be the
coordinator ID.
If a process receives an election message, it sends back an answer message and begins
another election unless it has begun one already.
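The election steps above can be sketched as a simplified simulation (names are invented; timeouts and message passing are abstracted away, and liveness is given as a dictionary): the election message climbs to ever-higher live identifiers until one gets no answer and declares itself coordinator.

```python
def bully_election(initiator, alive):
    """Simplified bully election. alive: dict mapping process id -> bool.
    The initiator 'sends election messages' to all higher ids; any live
    higher process answers and takes over the election; the highest live
    id gets no answer within the timeout and becomes coordinator."""
    higher_alive = [p for p, up in alive.items() if p > initiator and up]
    if not higher_alive:
        return initiator                      # no answer arrives: initiator wins
    return bully_election(max(higher_alive), alive)
```

As expected, the result is always the highest-identifier process that is still alive, which is the property the bully algorithm is designed to guarantee.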
Requirements:
E1 may be broken if the timeout values are inaccurate or if a crashed process is
replaced by a new process with the same identifier. For example, suppose p3 crashes
and is replaced: p2 may set p3 as coordinator while p1 sets p2 as coordinator.
E2 is clearly met by the assumption of reliable transmission.
Performance Evaluation
In the best case, the process with the second highest ID notices the coordinator's failure.
Then it can immediately elect itself and send N-2 coordinator messages.
The bully algorithm requires O(N^2) messages in the worst case, that is, when the
process with the lowest ID first detects the coordinator's failure. Then N-1 processes
altogether begin elections, each sending messages to processes with higher IDs.
All concurrency control protocols are based on serial equivalence and are derived from
rules of conflicting operations:
Locks used to order transactions that access the same object according to request
order.
Optimistic concurrency control allows transactions to proceed until they are ready
to commit, whereupon a check is made to see any conflicting operation on objects.
Timestamp ordering uses timestamps to order transactions that access the same
object according to their starting time.
The use of multiple threads is beneficial to the performance. Multiple threads may
access the same objects.
4.8 TRANSACTIONS
Transaction originally from database management systems.
Clients require a sequence of separate requests to a server to be atomic in the sense
that:
• They are free from interference by operations being performed on behalf of
other concurrent clients.
• Either all of the operations must be completed successfully or they must have
no effect at all in the presence of server crashes.
Atomicity
All or nothing: a transaction either completes successfully, and effects of all of its
operations are recorded in the object, or it has no effect at all.
Failure atomicity: effects are atomic even when server crashes
Durability: after a transaction has completed successfully, all its
effects are saved in permanent storage for recover later.
Isolation
Each transaction must be performed without interference from other transactions. The
intermediate effects of a transaction must not be visible to other transactions.
4.8.1 Concurrency
Control Serial Equivalence
If these transactions are done one at a time in some order, then the final result will
be correct.
If we do not want to sacrifice the concurrency, an interleaving of the operations of
transactions may lead to the same effect as if the transactions had been performed
one at a time in some order.
We say it is a serially equivalent interleaving.
The use of serial equivalence is a criterion for correct concurrent execution to
prevent lost updates and inconsistent retrievals.
Conflicting Operations
When we say a pair of operations conflicts we mean that their combined effect
depends on the order in which they are executed. E.g. read and write
There are three ways to ensure serializability:
Locking
Timestamp ordering
Optimistic concurrency control
The sub-transaction T1 starts its own pair of sub-transactions, T11 and T12.
Sub-transactions at the same level can run concurrently, but their access to
common objects is serialized.
Each sub-transaction can fail independently of its parent and of the other sub-
transactions.
If one or more of the sub-transactions fails, the parent transaction could record
the fact and then commit, with the result that all the successful child
transactions commit.
It could then start another transaction to attempt to redeliver the messages that
were not sent the first time.
A transaction may commit or abort only after its child transactions have
completed.
When a sub -transaction aborts, the parent can decide whether to abort or not.
If the top-level transaction commits, then all of the sub-transactions that have
provisionally committed can commit too, provided that none of their ancestors has
aborted.
4.10 LOCKS
A simple example of a serializing mechanism is the use of exclusive locks.
If another client wants to access the same object, it has to wait until the object is
unlocked in the end.
Read locks are shared locks (i.e.) they do not bring about conflict.
Write locks are exclusive locks, since the order in which write is done may give
rise to a conflict.
Lock Read conflict Write conflict
None No No
Read No Yes
Write Yes Yes
So to prevent this problem, a transaction that needs to read or write an object must
be delayed until other transactions that wrote the same object have committed or
aborted.
The rule in Strict Two Phase Locking:
Any locks applied during the progress of a transaction are held until the
transaction commits or aborts.
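The lock compatibility table above can be encoded directly; this is an illustrative sketch (names are invented), showing that a requested lock is granted only if it conflicts with none of the locks already held by other transactions.

```python
# Conflict table: (held lock, requested lock) -> do they conflict?
CONFLICTS = {
    ("read", "read"): False,    # read locks are shared
    ("read", "write"): True,
    ("write", "read"): True,
    ("write", "write"): True,   # write locks are exclusive
}

def can_grant(held_locks, requested):
    """Grant the requested lock only if it conflicts with no held lock."""
    return all(not CONFLICTS[(h, requested)] for h in held_locks)
```

Under strict two-phase locking, once granted, such a lock would then be held until the owning transaction commits or aborts.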
4.10.1 Deadlocks
A wait-for graph can be used to represent the waiting relationships between current
transactions.
In a wait-for graph the nodes represent transactions and the edges represent wait-for
relationships between transactions – there is an edge from node T to node U when
transaction T is waiting for transaction U to release a lock.
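Detecting a deadlock then amounts to finding a cycle in the wait-for graph. The sketch below (names are invented) represents the graph as a dictionary from each transaction to the set of transactions it waits for.

```python
def has_deadlock(wait_for):
    """wait_for: dict mapping a transaction to the set of transactions it is
    waiting for. There is a deadlock iff the wait-for graph contains a cycle."""
    def visit(t, stack):
        if t in stack:                 # reached a transaction already on this path
            return True
        stack.add(t)
        found = any(visit(u, stack) for u in wait_for.get(t, ()))
        stack.discard(t)
        return found
    return any(visit(t, set()) for t in wait_for)
```

Having found a cycle, a lock manager would pick a victim transaction on the cycle to abort.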
Safe State
The system is in a safe state if there exists a sequence <P1, P2, ..., Pn> of ALL the
processes in the system such that for each Pi, the resources that Pi can still
request can be satisfied by the currently available resources plus the resources held
by all the Pj with j < i.
If a system is in safe state, then there are no deadlocks.
If a system is in unsafe state, then there is possibility of deadlock.
Avoidance is to ensure that a system will never enter an unsafe state.
Deadlock Detection
• Deadlock may be detected by finding cycles in the wait-for-graph. Having detected
a deadlock, a transaction must be selected for abortion to break the cycle.
o If lock manager blocks a request, an edge can be added. Cycle should be
checked each time a new edge is added.
o One transaction will be selected to abort in case of cycle. Age of
transaction and number of cycles involved when selecting a victim
• Timeouts are commonly used to resolve deadlocks. Each lock is given a limited
period in which it is invulnerable; after this time, the lock becomes vulnerable.
If no other transaction is competing for the object, the vulnerable lock remains
in place. However, if another transaction is waiting, the lock is broken.
Disadvantages:
A transaction may be aborted simply because its lock timed out while another
transaction was waiting, even if there is no deadlock.
Hard to set the timeout time
Validation of Transactions
Validation uses the read-write conflict rules to ensure that the scheduling of a
particular transaction is serially equivalent with respect to all other overlapping
transactions- that is, any transactions that had not yet committed at the time this
transaction started.
Each transaction is assigned a number when it enters the validation phase (when
the client issues a closeTransaction).
Such number defines its position in time.
A transaction always finishes its working phase after all transactions with lower
numbers.
That is, a transaction with the number Ti always precedes a transaction with
number Tj if i < j.
The validation test on transaction Tv is based on conflicts between operations in
pairs of transaction Ti and Tv, for a transaction Tv to be serializable with respect
to an overlapping transaction Ti, their operations must conform to the below rules.
Rule No.  Tv      Ti      Rule
1         Write   Read    Ti must not read objects written by Tv.
2         Read    Write   Tv must not read objects written by Ti.
3         Write   Write   Ti must not write objects written by Tv, and Tv must not
                          write objects written by Ti.
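As an illustration of rule 2, a backward-validation check might look like the following sketch (function and parameter names are invented): the transaction under validation passes if its read set does not overlap the write set of any overlapping transaction that committed earlier.

```python
def backward_validate(tv_read_set, committed_write_sets):
    """Backward validation of Tv (rule 2): Tv must not have read any object
    written by an overlapping transaction Ti that committed before Tv's
    validation. committed_write_sets: write sets of those earlier Ti."""
    return all(not (tv_read_set & ws) for ws in committed_write_sets)
```

If the check fails, Tv is aborted and restarted, since the earlier transactions have already committed and cannot be undone.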
Types of Validation
Backward Validation: checks the transaction undergoing validation with other
preceding overlapping transactions- those that entered the validation phase before it.
Forward Validation: checks the transaction undergoing validation with other later
transactions, which are still active
Starvation
When a transaction is aborted, it will normally be restarted by the client
program.
There is no guarantee that a particular transaction will ever pass the validation
checks, for it may come into conflict with other transactions for the use of objects
each time it is restarted.
Rule 3 (Read/Write): Tc must not read an object that has been written by
any Ti where Ti > Tc; this requires that Tc > the write
timestamp of the committed object.
Whenever a read operation is carried out, it is directed to the version with the
largest write timestamp less than the transaction timestamp.
If the transaction timestamp is larger than the read timestamp of the version being
used, the read timestamp of the version is set to the transaction timestamp.
When a read arrives late, it can be allowed to read from an old committed version,
so there is no need to abort late read operations.
In multi-version timestamp ordering, read operations are always permitted,
although they may have to wait for earlier transactions to complete, which ensures
that executions are recoverable.
There is no conflict between write operations of different transactions, because
each transaction writes its own committed version of the objects it accesses.
The rule in this ordering is:
Tc must not write objects that have been read by any Ti where Ti > Tc.
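The read rule of multi-version timestamp ordering described above can be sketched as follows (names are invented): a read by transaction Tc is directed to the committed version with the largest write timestamp that is still less than Tc's timestamp.

```python
def mv_read(versions, tc):
    """Multi-version timestamp ordering read rule.
    versions: list of (write_timestamp, value) committed versions.
    Return the value of the version with the largest write timestamp
    less than the transaction timestamp tc."""
    eligible = [(wts, v) for wts, v in versions if wts < tc]
    return max(eligible)[1] if eligible else None
```

Because an old committed version is always available, a late-arriving read can still be served, which is why read operations never need to be aborted in this scheme.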
2. When a participant receives a canCommit? request it replies with its vote (Yes or No) to
the coordinator. Before voting Yes, it prepares to commit by saving objects in permanent
storage. If the vote is No, the participant aborts immediately.
Edge Chasing
When a server notes that a transaction T starts waiting for another transaction U,
which is waiting to access a data item at another server, it sends a probe containing
the edge <T → U> to the server of the data item at which transaction U is blocked.
Detection: receive probes and decide whether deadlock has occurred and whether to
forward the probes.
When a server receives a probe <T → U> and finds that the transaction that U is waiting
for, say V, is itself waiting for another data item elsewhere, a probe <T → U → V> is forwarded.
Resolution: select a transaction in the cycle to abort
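One step of edge chasing can be sketched as follows (names are invented; each waits-for relation is given as a dictionary): a probe grows by one transaction at a time, and a deadlock is detected when the probe returns to its initiator.

```python
def forward_probe(probe, waits_for):
    """One edge-chasing step. probe: list of transactions, e.g. ['T', 'U']
    for the probe <T -> U>. waits_for: transaction -> transaction it waits for.
    Returns the (possibly extended) probe and whether a deadlock was found."""
    last = probe[-1]
    nxt = waits_for.get(last)
    if nxt is None:
        return probe, False              # last transaction is not blocked; discard
    new_probe = probe + [nxt]            # extend <T -> U> to <T -> U -> V>
    return new_probe, nxt == probe[0]    # cycle back to initiator => deadlock
```

On detection, one transaction in the cycle carried by the probe is selected and aborted to break the deadlock.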
Transaction priorities
Every transaction involved in a deadlock cycle can cause deadlock detection to be
initiated.
The effect of several transactions in a cycle initiating deadlock detection is that
detection may happen at several different servers in the cycle, with the result that more
than one transaction in the cycle is aborted.
4.14 REPLICATION
Replication in distributed systems enhances the performance, availability and fault
tolerance. The general requirement includes:
• Replication transparency
• Consistency
• Coordination:
The replica managers coordinate in preparation for executing therequest
consistently.
They agree, if necessary at this stage, on whether the request is to be applied.
They also decide on the ordering of this request relative to others.
The types of ordering include FIFO ordering, causal ordering and total
ordering.
• Execution: The replica managers execute the request – perhaps tentatively: that is, in
such a way that they can undo its effects later.
• Agreement: The replica managers reach consensus on the effect of the request – if
any – that will be committed.
• Response: One or more replica managers respond to the front end.
Coda is a distributed file system. Coda has been developed at Carnegie Mellon
University (CMU) in the 1990s, and is now integrated with a number of popular UNIX-based
operating systems such as Linux.
Processes in Coda
Coda maintains a distinction between client and server processes. Clients are known
as Venus processes and servers as Vice processes. Threads are non-preemptive and operate
entirely in user space; a low-level thread handles I/O operations.
Caching in Coda
Cache consistency in Coda is maintained using callbacks.
The Vice server tracks all clients that have a copy of the file and provides a
callback promise; tokens are obtained from the Vice server.
This guarantees that Venus will be notified if the file is modified.
Upon modification, the Vice server sends an invalidation message to the clients.
Fault tolerance
A run is a total ordering of all the events in a global history that is consistent with
each local history's ordering.
A global state predicate is a function that maps the set of global states of
processes in the system to {True, False}.
PART - B
1. Explain the clocking in detail.
2. Describe Cristian's algorithm.
3. Write about Berkeley algorithm
4. Brief about NTP.
5. Explain about logical time and logical clocks.
6. Describe global states.
[Figures: migration of a process from Machine S (source) to Machine D (destination), showing the processes (P1-P5) and kernels on each machine before and after migration.]
There are several strategies for moving the address space and data including:
Eager (All):
This transfers the entire address space.
No trace of process is left behind in the original system.
If address space is large and if the process does not need most of it,
then this approach is unnecessarily expensive.
Implementations that provide a checkpoint/restart facility are likely to
use this approach, because it is simpler to do the check-pointing and
restarting if all of the address space is localized.
Precopy
In this method, the process continues to execute on the source node
while the address space is copied to the target node.
The pages modified on the source during the precopy operation have to
be copied a second time.
This strategy reduces the time that a process is frozen and cannot
execute during migration.
Eager (dirty)
The transfer involves only the portion of the address space that is in
main memory and has been modified.
Any additional blocks of the virtual address space are transferred on
demand.
The source machine is involved throughout the life of the process.
Copy-on-reference
In this method, the pages are only brought over when referenced.
This has the lowest initial cost of process migration.
Flushing
The pages are cleared from main memory by flushing dirty pages to
disk.
This relieves the source of holding any pages of the migrated process in
main memory.
Negotiation in Migration
• The selection of migration policy is the responsibility of Starter utility.
• Starter utility is also responsible for long-term scheduling and memory allocation.
• The decision to migrate must be reached jointly by two Starter processes (i.e.) the
process on the source and on the destination machines.
5.2 THREADS
A thread is the entity within a process that can be scheduled for execution. All
threads of a process share its virtual address space and system resources.
Each thread maintains exception handlers, a scheduling priority, thread local storage, a
unique thread identifier, and a set of structures the system will use to save the thread context
until it is scheduled. The thread context includes the thread's set of machine registers, the
kernel stack, a thread environment block, and a user stack in the address space of the thread's
process. Threads can also have their own security context, which can be used for
impersonating clients.
Because threads have some of the properties of processes, they are sometimes called
lightweight processes. In a process, threads allow multiple executions of streams. A process
can be single threaded or multi- threaded.
K K K K
K K K
K K K K
Fig 5.9: Two Level Model
5.2.2 Threading Issues
The following are the issues in threads:
Semantics of fork() and exec() system calls
The fork() call may duplicate either the calling thread or all
threads.
Thread cancellation
Terminating a thread before it has finished.
There are two general approaches:
Asynchronous cancellation terminates the target thread immediately
Deferred cancellation allows the target thread to periodically check if it
should be cancelled
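Deferred cancellation can be sketched in Java, where the interrupt flag stands in for a generic cancellation request; all names below are illustrative, not from the notes. The worker checks the flag at a safe point in its loop and exits cleanly instead of being killed mid-operation:

```java
public class DeferredCancel {
    // Starts a worker that honors deferred cancellation, requests
    // cancellation, and returns true once the worker has exited on its own.
    static boolean runAndCancel() {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                Thread.onSpinWait();  // one unit of work, then re-check the flag
            }
            // cancellation point reached: release resources here before exiting
        });
        worker.start();
        worker.interrupt();           // request cancellation (deferred, not forced)
        try {
            worker.join(1000);        // wait for the worker to notice and exit
        } catch (InterruptedException e) {
            return false;
        }
        return !worker.isAlive();
    }

    public static void main(String[] args) {
        System.out.println("cancelled cleanly: " + runAndCancel());
    }
}
```

Because the target only terminates at the point where it checks the flag, it can release resources safely, which is exactly what asynchronous cancellation cannot guarantee.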
Signal handling
Signals are used in UNIX systems to notify a process that a particular event
has occurred
A signal handler is used to process signals:
A signal is generated by a particular event.
The signal is delivered to a process.
The signal is handled.
The signal can be delivered :
to the thread to which the signal applies
to every thread in the process
PThreads
This is a POSIX standard (IEEE 1003.1c).
It is an API for thread creation and synchronization; the standard specifies the
behavior of the thread library.
The PHP pthreads extension is an object-oriented API that allows user-land
multi-threading in PHP.
It includes all the tools needed to create multi-threaded applications targeted at
the Web or the Console.
PHP applications can create, read, write, execute and synchronize with
Threads, Workers and Threaded objects.
A Threaded Object:
A Threaded Object forms the basis of the functionality that allows
pthreads to operate.
It exposes synchronization methods and some useful interfaces for the
programmer.
Thread:
The user can implement a Thread by extending the Thread declaration
provided by pthreads implementing the run method.
Any members of the Thread can be written and read by any context
with a reference to the Thread.
Any context can execute any of its public and protected methods.
The run method of the implementation is executed in a separate thread
when the start method of the implementation is called from the context
that created it.
Only the context that creates a thread can start and join with it.
Worker Object:
A Worker thread has a persistent state and is started by calling start().
A Worker thread lives until the calling object goes out of scope, or
it is explicitly shut down.
Any context with a reference can stack objects onto the Worker, which
will be executed by the Worker in a separate Thread.
The run method of a Worker is executed before any objects on the
stack, such that it can initialize resources that the objects to come may
need.
Pool:
A Pool of Worker threads can be used to distribute Threaded objects
among Workers.
The Pool class included implements this functionality and takes care of
referencing in a sane manner.
The Pool implementation is the easiest and most efficient way of using
multiple threads.
Synchronization:
All of the objects that pthreads creates have built-in synchronization in
the form of ::wait and ::notify (familiar to Java programmers).
Calling ::wait on an object causes the context to wait for another
context to call ::notify on the same object.
This allows for powerful synchronization between Threaded Objects in
PHP.
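The ::wait/::notify pairing described above maps directly onto Java's Object.wait() and Object.notify(). A minimal Java sketch (class and field names are illustrative): one context blocks in wait() until another context sets a flag and notifies it.

```java
public class WaitNotify {
    private final Object lock = new Object();
    private boolean ready = false;
    private int result = 0;

    // Spawns a producer that publishes a value, then waits for the
    // notification before reading it.
    int produceAndConsume() {
        Thread producer = new Thread(() -> {
            synchronized (lock) {
                result = 42;       // hypothetical payload
                ready = true;
                lock.notify();     // wake the waiting consumer
            }
        });
        producer.start();
        synchronized (lock) {
            try {
                while (!ready) {
                    lock.wait();   // releases the lock while waiting;
                }                  // the loop guards against spurious wakeups
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(new WaitNotify().produceAndConsume()); // prints 42
    }
}
```

The while loop around wait() is the standard guard against spurious wakeups; the same discipline applies to the PHP ::wait/::notify pair.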
Method Modifiers:
The protected methods of Threaded Objects are protected by pthreads,
such that only one context may call that method at a time.
The private methods of Threaded Objects can only be called from
within the Threaded Object during execution.
Data Storage:
Any data type that can be serialized can be used as a member of a
Threaded object; it can be read and written from any context with a
reference to the Threaded object.
Not every type of data is stored serially: basic types are stored in their
true form, while complex types (arrays and objects that are not
Threaded) are stored serially. They can be read and written to the
Threaded object from any context with a reference.
With the exception of Threaded objects, any reference used to set a
member of a Threaded object is separated from the reference held in the
Threaded object; the same data can be read directly from the Threaded
object at any time by any context with a reference to it.
Static Members:
When a new context is created, static members are generally copied,
but resources and objects with internal state are nullified.
This allows them to function as a kind of thread-local storage: the new
context can initiate a connection in the same way as the context that
created it, storing the connection in the same place without affecting
the original context.
These threads are common in UNIX operating systems (Solaris, Linux, Mac OS X).
Windows Threads
Job objects are namable, securable, sharable objects that control attributes of
the processes associated with them.
Operations performed on the job object affect all processes associated with the
job object.
An application can use the thread pool to reduce the number of application
threads and provide management of the worker threads.
Applications can queue work items, associate work with waitable handles,
automatically queue based on a timer, and bind with I/O.
Each user-mode scheduling (UMS) thread has its own thread context instead of
sharing the thread context of a single thread.
The ability to switch between threads in user mode makes UMS more efficient
than thread pools for short-duration work items that require few system calls.
A fiber is a unit of execution that must be manually scheduled by the
application.
However, using fibers can make it easier to port applications that were
designed to schedule their own threads.
Java Threads
Java is a multithreaded programming language, which means we can develop
multithreaded programs using Java.
A multithreaded program contains two or more parts that can run concurrently,
each handling a different task at the same time and making optimal use of
the available resources, especially when the computer has multiple CPUs.
Java threads may be created by extending the Thread class or by implementing
the Runnable interface.
Java threads are managed by the JVM.
Fig 5.10: Lifecycle of a Java Thread (states shown include Runnable, Waiting, and Dead)
Thread class provide constructors and methods to create and perform operations
on a thread.
Thread class extends Object class and implements Runnable interface.
The Runnable interface should be implemented by any class whose instances
are intended to be executed by a thread.
The Runnable interface has only one method, named run().
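Both creation styles mentioned above can be shown in a short Java sketch (class and variable names are illustrative); join() is used so the main thread observes each thread finish, i.e. reach the Dead state of the lifecycle:

```java
public class CreateThreads {
    // Runs one thread of each style and returns the order in which they
    // appended to the shared buffer.
    static String runBoth() {
        StringBuffer log = new StringBuffer();          // thread-safe buffer
        Thread t1 = new Thread() {                      // style 1: extend Thread
            @Override public void run() { log.append("A"); }
        };
        Thread t2 = new Thread(() -> log.append("B"));  // style 2: Runnable lambda
        try {
            t1.start(); t1.join();   // join() waits until t1 is Dead
            t2.start(); t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(runBoth()); // prints AB
    }
}
```

Implementing Runnable is usually preferred over extending Thread, since the class stays free to extend something else and the task is decoupled from the thread that runs it.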
This will work fine only when there are few nodes in the system.
This is because the inquirer receives a flood of replies almost
simultaneously, and the time required to process the reply messages for
making a node selection becomes too long as the number of nodes (N)
increases.
Also, the network traffic quickly consumes network bandwidth.
A simple approach is to probe only m of N nodes for selecting a node.
Fault tolerance
A good scheduling algorithm should not be disabled by the crash of one
or more nodes of the system.
Also, if the nodes are partitioned into two or more groups due to link
failures, the algorithm should be capable of functioning properly for the
nodes within a group.
Algorithms that have decentralized decision-making capability and
consider only available nodes in their decision making have better
fault tolerance capability.
Quick decision making capability
Heuristic methods requiring less computational effort (and hence less
time) while providing near-optimal results are preferable to exhaustive
(optimal) solution methods.
Balanced system performance and scheduling overhead
Algorithms that provide near-optimal system performance with a
minimum of global state information (such as CPU load) gathering
overhead are desirable.
This is because the overhead increases as the amount of global state
information collected increases.
Moreover, the usefulness of that information decreases due to both the
aging of the information being gathered and the low scheduling
frequency that results from the cost of gathering and processing the
extra information.
Stability
Fruitless migration of processes, known as processor thrashing, must be
prevented.
E.g., nodes n1 and n2 both observe that node n3 is idle and offload a
portion of their work to n3, each unaware of the offloading decision
made by the other.
If n3 becomes overloaded as a result, it may in turn start transferring
its processes to other nodes.
This thrashing is caused by scheduling decisions being made at each node
independently of the decisions made by other nodes.
5.3.2 Task Assignment Approach
The following assumptions are made in this approach:
A process has already been split up into pieces called tasks.
This split occurs along natural boundaries (such as a method), so that each task
will have integrity in itself and data transfers among the tasks are minimized.
The amount of computation required by each task and the speed of each CPU
are known. The cost of processing each task on every node is known. This is
derived from assumption 2.
The IPC cost between every pair of tasks is already known.
The IPC cost is 0 for tasks assigned to the same node.
This is usually estimated by an analysis of the static program.
If two tasks communicate n times and the average time for each inter-task
communication is t, then the IPC cost for the two tasks is n * t.
Precedence relationships among the tasks are known.
Reassignment of tasks is not possible.
The goal of this method is to assign the tasks of a process to the nodes of a
distributed system in such a manner as to achieve goals such as:
Minimization of IPC costs
Quick turnaround time for the complete process
A high degree of parallelism
Efficient utilization of system resources in general
• These goals often conflict. E.g., while minimizing IPC costs tends to assign all
tasks of a process to a single node, efficient utilization of system resources tries
to distribute the tasks evenly among the nodes.
• Similarly, while quick turnaround time and a high degree of parallelism
encourage parallel execution of the tasks, the precedence relationships among
the tasks limit their parallel execution.
• In the case of m tasks and q nodes, there are q^m possible assignments of
tasks to nodes, since each task can be placed on any of the q nodes.
• In practice, however, the actual number of possible assignments may be less
than q^m due to the restriction that certain tasks cannot be assigned to
certain nodes because of their specific requirements.
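Under the assumptions above, the total cost of one particular assignment is the sum of each task's processing cost on its assigned node, plus the n * t IPC cost for every communicating pair of tasks placed on different nodes. The Java sketch below evaluates one assignment; the cost tables are made-up illustrative values, not figures from the notes.

```java
public class AssignmentCost {
    // exec[i][j] = cost of running task i on node j (known by assumption)
    // ipc[i][k]  = IPC cost (n * t) between tasks i and k
    // assign[i]  = node to which task i is assigned
    static int totalCost(int[][] exec, int[][] ipc, int[] assign) {
        int cost = 0;
        for (int i = 0; i < assign.length; i++) {
            cost += exec[i][assign[i]];                      // processing cost
            for (int k = i + 1; k < assign.length; k++) {
                if (assign[i] != assign[k]) {
                    cost += ipc[i][k];   // IPC is 0 for co-located tasks
                }
            }
        }
        return cost;
    }

    public static void main(String[] args) {
        int[][] exec = {{5, 10}, {4, 4}, {6, 3}};        // 3 tasks, 2 nodes
        int[][] ipc  = {{0, 6, 4}, {6, 0, 0}, {4, 0, 0}};
        // tasks 0 and 1 on node 0, task 2 on node 1:
        // exec 5 + 4 + 3 = 12, cross-node IPC (0,2) = 4, total 16
        System.out.println(totalCost(exec, ipc, new int[]{0, 0, 1})); // prints 16
    }
}
```

Enumerating all q^m assignments and keeping the one with minimum totalCost is the brute-force version of the task assignment approach; practical schedulers use the heuristic methods discussed earlier instead.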
5.3.3 Classification of Load Balancing Algorithms
Shortest method
L distinct nodes are chosen at random, and each is polled to determine its
load. The process is transferred to the node having the minimum load value,
unless that node's workload prohibits it from accepting the process.
Simple improvement is to discontinue probing whenever a node with zero load
is encountered.
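The shortest method, including the zero-load early exit, can be sketched as follows in Java. The load array stands in for the replies from a hypothetical polling step, and the polled indices stand in for the L randomly chosen nodes; both are illustrative.

```java
public class ShortestMethod {
    // Returns the index of the chosen node among the polled candidates:
    // the first zero-load node if one is found, otherwise the minimum-load node.
    static int choose(int[] load, int[] polled) {
        int best = polled[0];
        for (int idx : polled) {
            if (load[idx] == 0) {
                return idx;            // improvement: stop probing at zero load
            }
            if (load[idx] < load[best]) {
                best = idx;            // remember the least-loaded node so far
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] load = {7, 3, 0, 5};
        System.out.println(choose(load, new int[]{0, 1, 3})); // prints 1 (load 3)
        System.out.println(choose(load, new int[]{0, 2, 3})); // prints 2 (zero load)
    }
}
```

A real implementation would also apply the workload check from the notes, rejecting the chosen node if its load value prohibits it from accepting the process.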
Bidding method
The nodes contain managers (to send processes) and contractors (to receive
processes).
The managers broadcast a request for bid, contractors respond with bids (prices
based on capacity of the contractor node) and manager selects the best offer.
Winning contractor is notified and asked whether it accepts the process for
execution or not.
This method gives the nodes full autonomy regarding scheduling.
This causes big communication overhead.
It is difficult to decide a good pricing policy.
Pairing
In contrast to the former methods, the pairing policy reduces the variance of
load only between pairs of nodes.
Each node asks some randomly chosen node to form a pair with it.
If it receives a rejection it randomly selects another node and tries to pair again.
Two nodes that differ greatly in load are temporarily paired with each other and
migration starts.
The pair is broken as soon as the migration is over.
A node only tries to find a partner if it has at least two processes.
Priority assignment policy
Selfish: Local processes are given higher priority than remote processes. Worst
response time performance of the three policies.
Altruistic: Remote processes are given higher priority than local processes. Best
response time performance of the three policies.
Intermediate: When the number of local processes is greater than or equal to
the number of remote processes, local processes are given higher priority than
remote processes. Otherwise, remote processes are given higher priority than
local processes.
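The three priority rules reduce to simple comparisons. A Java sketch of the intermediate policy (the selfish and altruistic policies would unconditionally return LOCAL and REMOTE, respectively); the names are illustrative:

```java
public class PriorityPolicy {
    enum Winner { LOCAL, REMOTE }

    // Intermediate policy: local processes win when they are at least as
    // numerous as remote ones; otherwise remote processes win.
    static Winner intermediate(int localCount, int remoteCount) {
        return localCount >= remoteCount ? Winner.LOCAL : Winner.REMOTE;
    }

    public static void main(String[] args) {
        System.out.println(intermediate(3, 2)); // prints LOCAL
        System.out.println(intermediate(1, 4)); // prints REMOTE
    }
}
```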
Migration limiting policy
This policy determines the total number of times a process can migrate
Uncontrolled: A remote process arriving at a node is treated just as a process
originating at a node, so a process may be migrated any number of times
Controlled:
Avoids the instability of the uncontrolled policy.
Use a migration count parameter to fix a limit on the number of times a
process can migrate.
Irrevocable migration policy: the migration count is fixed at 1.
For long-running processes, the migration count must be greater than 1 to
adapt to dynamically changing states.
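A minimal Java sketch of the controlled policy's migration-count check; a limit of 1 corresponds to the irrevocable policy. The method name and parameters are illustrative.

```java
public class MigrationLimit {
    // A process carrying migrationCount prior migrations may migrate again
    // only while the count is below the configured limit.
    static boolean mayMigrate(int migrationCount, int limit) {
        return migrationCount < limit;
    }

    public static void main(String[] args) {
        System.out.println(mayMigrate(0, 1)); // prints true: first (and only) move
        System.out.println(mayMigrate(1, 1)); // prints false: irrevocable policy
        System.out.println(mayMigrate(2, 5)); // prints true: long-running process
    }
}
```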
5.3.4 Load-sharing approach
The following problems in load balancing approach led to load sharing approach:
Attempting to equalize the workload on all the nodes is not an appropriate
objective, since gathering the exact state information this requires
generates a large overhead.
Exact load balancing is not achievable anyway, because the number of processes
at a node is always fluctuating, so a temporary imbalance among the nodes
exists at every moment.
Basic idea:
It is necessary and sufficient to prevent nodes from being idle while some other
nodes have more than two processes.
Load sharing is much simpler than load balancing, since it only attempts to
ensure that no node is idle while a heavily loaded node exists.
Priority assignment policy and migration limiting policy are the same as that for
the load-balancing algorithms.
Location Policies
The location policy decides whether the sender node or the receiver node of
the process takes the initiative to search for a suitable node in the system,
and this policy can be one of the following:
Sender-initiated location policy: Sender node decides where to send the process.
Heavily loaded nodes search for lightly loaded nodes
Receiver-initiated location policy: Receiver node decides from where to get the
process. Lightly loaded nodes search for heavily loaded nodes
Static algorithms ignore the current state or load of the nodes in the system;
dynamic algorithms take it into account and are able to give significantly
better performance.
Static algorithms are much simpler; dynamic algorithms are more complex.
19. Differentiate deterministic and probabilistic algorithms.