
6CS5 DS Unit-2


ARYA GROUP OF COLLEGES, JAIPUR Distributed System CS / VI Sem

ARYA GROUP OF COLLEGES


Arya College of Engineering & Research Centre, Jaipur

Department of
Computer Science & Engineering

Subject Name with code: Distributed System (DS) (6CS5-11)


UNIT-II

Presented by: Ms. Rashi Jain


Assistant Professor, CSE

UNIT-II

Concurrent Processes
and Programming


SUBJECT DETAILED SYLLABUS


OUTLINE
• Processes and Threads

• Graph models for process representation

• Client/Server Model

• Time Services

• Language mechanism for synchronization

• Object model resource servers

• Characteristics of concurrent programming languages (Language not included)

• Inter-Process communication and coordination:


- Message Passing
- Request/Reply and transaction communication

• Name and Directory services

• RPC and RMI Case Studies



CONCURRENT PROCESSES AND PROGRAMMING


• Concurrent processing is a computing model in which multiple processors execute instructions simultaneously for better performance. Concurrent means occurring together with, or at the same time as, something else.

• Tasks are broken into subtasks, which are then assigned to different processors to be performed simultaneously, instead of sequentially as they would have to be performed by one processor. Concurrent processing is sometimes synonymous with parallel processing.


THE NOTION AND IMPORTANCE OF CONCURRENT PROCESSES


1) Processes:

• The chief task of an operating system is to manage a set of processes and to perform work on their
behalf.

• So far we have considered processes as independent. They interact in some way with the operating
system, but we have not examined ways in which processes can interact with each other.

• Processes that interact with each other by cooperating to achieve a common goal are called concurrent processes.

- Heavyweight processes (processes)

- Lightweight processes (threads)

• Many problems can be naturally viewed in a way that leads to concurrent programming.

2) Some tasks have inherent parallelism that can be exploited by concurrent processes.

THREADS
What is thread?
• A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its
current working variables, and a stack which contains the execution history.

• A thread is a single sequential stream within a process.

• Because threads have some of the properties of processes, they are sometimes
called lightweight processes.

• A thread shares with its peer threads some information, such as the code segment, data segment, and open files.

• When one thread alters a memory item in a shared segment, all other threads see the change.

THREADS CONT...
• Threads provide a way to improve application performance through parallelism.

• The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel.

• Threads represent a software approach to improving operating system performance by reducing the overhead; a thread is equivalent to a classical process.

• Each thread belongs to exactly one process and no thread can exist outside a process.

• Each thread represents a separate flow of control.

• Threads have been successfully used in implementing network servers and web servers.

• They also provide a suitable foundation for parallel execution of applications on shared memory
multiprocessors.


THREADS CONT...
• Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (Running, Blocked, Ready, or Terminated).

• Each thread has its own stack, since a thread will generally call different procedures and thus have a different execution history. This is why a thread needs its own stack.

• In an operating system that has a thread facility, the basic unit of CPU utilization is a thread.

• A thread has or consists of a program counter (PC), a register set, and a stack space.

• Threads are not independent of one another as processes are; as a result, threads share with other threads their code section, data section, and OS resources (collectively known as a task), such as open files and signals.
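The sharing described above can be illustrated with a short Python sketch (Python is used only for illustration; the counter name and thread count are assumptions). Each thread keeps its own stack for its procedure calls, but all threads update the same shared data:

```python
# Threads in one process share the data segment: an update made by one
# thread is visible to all the others. Each thread still gets its own
# stack for its local variables. The names below are illustrative.
import threading

shared = {"count": 0}              # data shared by every thread
lock = threading.Lock()

def worker():
    for _ in range(1000):          # the loop variable lives on this
        with lock:                 # thread's own stack
            shared["count"] += 1   # shared update, made safely

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["count"])             # 4000: all threads updated one variable
```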


SINGLE-THREADED AND A MULTITHREADED PROCESS


PROCESS VS THREAD
Threads operate in much the same way as processes. Some of the similarities and differences are:
Similarities:
• Like processes, threads share the CPU, and only one thread is active (running) at a time.
• Like processes, threads within a process execute sequentially.
• Like processes, a thread can create children.
• And like processes, if one thread is blocked, another thread can run.

Differences:
• Unlike processes, threads are not independent of one another.
• Unlike processes, all threads can access every address in the task.
• Unlike processes, threads are designed to assist one another.
Note that processes might or might not assist one another, because processes may originate from different users.

DIFFERENCE BETWEEN PROCESS AND THREAD


GRAPH MODELS FOR PROCESS REPRESENTATION


• The topology of a distributed system is represented by a graph where the nodes represent processes and the links represent communication channels.

• A network of processes: the nodes are processes, and the edges are communication channels.

• In both parallel and distributed systems, the events are partially ordered.

GRAPH REPRESENTATION CONT....


PARALLEL VS DISTRIBUTED SYSTEM


PARALLEL VS DISTRIBUTED (MULTIPLE PROCESSORS)


EXAMPLES


GRAPH ALGORITHMS VS DISTRIBUTED ALGORITHMS


CLIENT-SERVER MODEL
Basic Definition:
Server: provides services
Client: requests for services
Service: any resource (e.g., data, type definition, file, control, object, CPU time, display
device, etc.)
Typical Properties
• A service request is about "what" is needed, and it is often made abstractly. It is up to the server to determine how to get the job done.

• The location of clients and servers is usually transparent to the user. A client may become a server; a server may become a client.

• The ideal client/server software is independent of hardware or OS platform.
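As a minimal sketch of this model (the "time" request string and the reply are invented for illustration), a client states what it needs over a loopback TCP socket, and the server decides how to satisfy the request:

```python
# A minimal client/server sketch: the client states WHAT it needs (the
# invented request "time") and the server decides HOW to satisfy it.
# Uses a loopback TCP socket; the protocol is an assumption.
import socket
import threading

def server(listener):
    conn, _ = listener.accept()
    request = conn.recv(1024).decode()
    if request == "time":          # the server decides how to do the job
        conn.sendall(b"12:00")
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))    # ephemeral port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"time")            # the request: "what", not "how"
reply = client.recv(1024).decode()
client.close()
print(reply)
```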


TIME SERVICES: DCE DISTRIBUTED TIME SERVICE


• A distributed computing system has many advantages but also brings with it new problems.
• One of them is keeping the clocks on different nodes synchronized.

• In a single system, there is one clock that provides the time of day to all applications.

• Computer hardware clocks are not completely accurate, but there is always one consistent idea of what time it
is for all processes running on the system.

• In a distributed system, however, each node has its own clock.

• Even if it were possible to set all of the clocks in the distributed system to one consistent time at some point, those clocks would drift away from that time at different rates. As a result, the different nodes of a distributed system have different ideas of what time it is.

• This is a problem, for example, for distributed applications that care about the ordering of events. It is difficult to determine whether Event A on Node X occurred before Event B on Node Y because different nodes have different notions of the current time.

DCE CONT...
The DCE Distributed Time Service (DTS) addresses this problem in two ways:
1. DTS provides a way to periodically synchronize the clocks on the different hosts in a distributed system.

2. DTS also provides a way of keeping that synchronized notion of time reasonably close to the correct time. (In
DTS, correct time is considered to be Coordinated Universal Time (UTC), an international standard.)


DTS
There are several different components that comprise the DCE Distributed Time Service:

• Time clerk
• Time servers
- Local time server
- Global time server
- Courier time server
- Backup courier time server
• DTS application programming interface
• Time provider interface (TPI)
• Time format, which includes inaccuracy



TIME CLERK
• The time clerk is the client side of DTS. It runs on a client machine, such as a workstation, and keeps the machine's local time synchronized by asking time servers for the correct time and adjusting the local time accordingly.

• The time clerk is configured to know the limit of the local system's hardware clock.

• When enough time has passed that the system's time is above a certain inaccuracy threshold (that is, the clock may have drifted far enough away from the correct time), the time clerk initiates a synchronization.

• It queries various time servers for their opinion of the correct time of day, calculates the probable correct time and its inaccuracy based on the answers it receives, and updates the local system's time.
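The clerk's calculation can be sketched as interval intersection: each server reply is an estimate plus an inaccuracy bound, and the clerk takes the midpoint of the common interval. This is an illustrative Python sketch, not the full DTS algorithm (real DTS also discards faulty intervals); the sample readings are invented:

```python
# Each server reply is (estimate, inaccuracy): an interval that should
# contain the correct time. The clerk intersects all the intervals and
# takes the midpoint as the probable correct time.

def clerk_estimate(replies):
    """Return (probable correct time, inaccuracy) from server replies."""
    lows = [t - e for t, e in replies]
    highs = [t + e for t, e in replies]
    low, high = max(lows), min(highs)   # intersection of all intervals
    if low > high:
        raise ValueError("servers disagree: no common interval")
    return (low + high) / 2, (high - low) / 2

# Three servers report around 100.0 with differing inaccuracy bounds.
estimate, inaccuracy = clerk_estimate([(100.0, 2.0), (100.5, 1.0), (99.8, 1.5)])
print(estimate, inaccuracy)
```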



TIME CLERK CONT...

The following figure shows a LAN with two time clerks (C) and three time servers (S). Each
of the time clerks queries two of the time servers when synchronizing. The time servers all
query each other.
DTS Time Clerks and Servers


TIME SERVERS
• A time server is a node that is designated to answer queries about the time. The number of time servers in a DCE cell is
configurable; three per LAN is a typical number.

• Time clerks query these time servers for the time, and the time servers query one another, computing the new system time and adjusting their own clocks as appropriate. One or more of the time servers can be attached to an external time provider.

• A distinction is made between local time servers (time servers on a given LAN) and global time servers. This is because
they are located differently by their clients.

• A client may need to contact a global time server if, for example, the client wants to get time from three servers, but only two servers are available on the LAN.

• In addition, it may be desirable to configure a DTS system to have two LAN servers and one global time server
synchronizing with each other, rather than just having time servers within the LAN synchronizing with each other. This is
where couriers are needed.

TIME SERVERS CONT...


• A courier time server is a time server that synchronizes with a global time
server; that is, a time server outside the courier's LAN. It thus imports an
outside time to the LAN by synchronizing with the outside time server.
Other time servers in the LAN can be designated as backup courier time
servers. If the courier is not available, then one of the backup couriers
serves in its place.

• The following figure shows two LANs (LAN A and LAN B) and their time servers (S). In each LAN, one of the time servers acts as a courier time server (Co) by querying a global time server (G) (that is, a time server outside of either LAN) for the current time.


LANGUAGE MECHANISM FOR SYNCHRONIZATION


Concurrent programming allows for:

• Concurrent processing with explicit specification

• Synchronization and communication between processes

• A concurrent language extended from a sequential language adds additional constructs to provide:
  - Specification of concurrent activities
  - Synchronization of processes
  - Inter-process communication
  - Nondeterministic execution of processes



SHARED VARIABLE-BASED SYNCHRONIZATION AND COMMUNICATION

To understand the requirements for communication and synchronisation based on shared variables, we briefly review:

• Semaphores
• Monitors
• Conditional critical regions


PREREQUISITES

An understanding of the issues of busy-waiting and semaphores from an operating systems course is assumed.

However, full details are given here on busy-waiting, semaphores, conditional critical regions, monitors, etc.


SYNCHRONISATION AND COMMUNICATION

• The correct behaviour of a concurrent program depends on synchronisation and communication between its processes.

• Synchronisation: the satisfaction of constraints on the interleaving of the actions of processes (e.g. an action by one process only occurring after an action by another).

• Communication: the passing of information from one process to another.

• The concepts are linked, since communication requires synchronisation, and synchronisation can be considered as content-less communication.

• Data communication is usually based upon either shared variables or message passing.

SHARED VARIABLE COMMUNICATION


• Examples: busy waiting, semaphores and monitors.

• Unrestricted use of shared variables is unreliable and unsafe due to multiple update
problems.

• Consider two processes updating a shared variable, X, with the assignment X := X + 1. Each must:
  - load the value of X into some register,
  - increment the value in the register by 1, and
  - store the value in the register back to X.

• As the three operations are not indivisible, two processes simultaneously updating the variable could follow an interleaving that would produce an incorrect result.
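The bad interleaving can be replayed deterministically in a few lines of Python (the load/store helpers stand in for the machine's register operations):

```python
# A deterministic replay of the bad interleaving: two "processes" both
# execute X := X + 1, but their load / increment / store steps overlap,
# so one update is lost. The helper names are invented.

X = 0

def load():              # load the value of X into a "register"
    return X

def store(register):     # store the register value back to X
    global X
    X = register

r1 = load()              # process 1 loads 0
r2 = load()              # process 2 loads 0 (before process 1 stores!)
store(r1 + 1)            # process 1 stores 1
store(r2 + 1)            # process 2 also stores 1: one update is lost
print(X)                 # 1, not the expected 2
```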


AVOIDING INTERFERENCE
• The parts of a process that access shared variables must be executed indivisibly with
respect to each other.

• These parts are called critical sections.

• The required protection is called mutual exclusion.


MUTUAL EXCLUSION
• A sequence of statements that must appear to be executed indivisibly is called a critical
section.

• The synchronisation required to protect a critical section is known as mutual exclusion.

• Atomicity is assumed to be present at the memory level. If one process is executing X:= 5,
simultaneously with another executing X:= 6, the result will be either 5 or 6 (not some other
value).
• If two processes are updating a structured object, this atomicity will only apply at the single word element level.

Ms.Rashi Jain
ARYA GROUP OF COLLEGES JAIPUR Distributed System CS VI Sem

MUTUAL EXCLUSION

(* mutual exclusion *)
var mutex : semaphore; (* initially 1 *)

process P1;
  statement X;
  wait(mutex);
  statement Y;
  signal(mutex);
  statement Z;
end P1;

process P2;
  statement A;
  wait(mutex);
  statement B;
  signal(mutex);
  statement C;
end P2;
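A runnable sketch of the scheme above using Python threads (wait maps to acquire and signal to release; the statement names are placeholders): the two critical sections can never overlap.

```python
# The semaphore scheme with Python threads: `mutex` starts at 1, so the
# critical sections (statements Y and B) are mutually exclusive.
import threading

mutex = threading.Semaphore(1)     # var mutex : semaphore (* initially 1 *)
trace = []                         # list.append is atomic in CPython

def p1():
    trace.append("X")              # statement X
    mutex.acquire()                # wait(mutex)
    trace.append("Y-start")        # statement Y (critical section)
    trace.append("Y-end")
    mutex.release()                # signal(mutex)
    trace.append("Z")              # statement Z

def p2():
    trace.append("A")              # statement A
    mutex.acquire()                # wait(mutex)
    trace.append("B-start")        # statement B (critical section)
    trace.append("B-end")
    mutex.release()                # signal(mutex)
    trace.append("C")              # statement C

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start(); t2.start(); t1.join(); t2.join()

y0, y1 = trace.index("Y-start"), trace.index("Y-end")
b0, b1 = trace.index("B-start"), trace.index("B-end")
print(y1 < b0 or b1 < y0)          # True: critical sections are disjoint
```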


CONDITION SYNCHRONISATION
• Condition synchronisation is needed when a process wishes to perform an operation
that can only sensibly, or safely, be performed if another process has itself taken some
action or is in some defined state.

• E.g. a bounded buffer requires two condition synchronisations:

  - The producer processes must not attempt to deposit data into the buffer if the buffer is full.
  - The consumer processes cannot be allowed to extract objects from the buffer if the buffer is empty.

Ms.Rashi Jain
ARYA GROUP OF COLLEGES JAIPUR Distributed System CS VI Sem

CONDITION SYNCHRONISATION

var consyn : semaphore; (* initially 0 *)

process P1; (* waiting process *)
  statement X;
  wait(consyn);
  statement Y;
end P1;

process P2; (* signalling process *)
  statement A;
  signal(consyn);
  statement B;
end P2;

In what order will the statements execute?
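A sketch of the answer: since consyn starts at 0, wait(consyn) blocks P1 until P2 signals, so statement A always executes before statement Y, while X/A and B/Y may interleave in either order. A Python illustration (statement names are placeholders):

```python
# `consyn` starts at 0, so wait(consyn) blocks P1 until P2 has executed
# signal(consyn): statement A therefore always runs before statement Y.
import threading

consyn = threading.Semaphore(0)    # var consyn : semaphore (* init 0 *)
trace = []

def p1():                          # waiting process
    trace.append("X")              # statement X
    consyn.acquire()               # wait(consyn)
    trace.append("Y")              # statement Y

def p2():                          # signalling process
    trace.append("A")              # statement A
    consyn.release()               # signal(consyn)
    trace.append("B")              # statement B

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start(); t2.start(); t1.join(); t2.join()
print(trace.index("A") < trace.index("Y"))   # True in every run
```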


BUSY WAITING
• One way to implement synchronisation is to have processes set and check shared variables
that are acting as flags.

• This approach works well for condition synchronisation but no simple method for mutual
exclusion exists.

• Busy wait algorithms are in general inefficient; they involve processes using up processing
cycles when they cannot perform useful work.

• Even on a multiprocessor system, they can give rise to excessive traffic on the memory bus or network (if distributed).


SEMAPHORES
• A semaphore is a non-negative integer variable that apart from initialization can only be
acted upon by two procedures P (or WAIT) and V (or SIGNAL).

• WAIT(S) If the value of S > 0 then decrement its value by one; otherwise delay the
process until S > 0 (and then decrement its value).

• SIGNAL(S) Increment the value of S by one.

• WAIT and SIGNAL are atomic (indivisible). Two processes both executing WAIT operations on the same semaphore cannot interfere with each other and cannot fail during the execution of a semaphore operation.


PROCESS STATES


DEADLOCK
• Two processes are deadlocked if each is holding a resource
while waiting for a resource held by the other

type Sem is ...;

X : Sem := 1;
Y : Sem := 1;

task A;
task body A is
begin
  ...
  Wait(X);
  Wait(Y);
  ...
end A;

task B;
task body B is
begin
  ...
  Wait(Y);
  Wait(X);
  ...
end B;
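Tasks A and B above deadlock because they take X and Y in opposite orders. A common remedy, sketched here in Python (names invented), is to impose a single global acquisition order; with both tasks taking X before Y, the circular wait cannot form:

```python
# Deadlock avoidance by lock ordering: every task acquires the
# semaphores in the SAME global order, so no cycle of waits can arise.
import threading

X = threading.Semaphore(1)
Y = threading.Semaphore(1)
done = []

def task(name):
    X.acquire()            # always X first ...
    Y.acquire()            # ... then Y, in every task
    done.append(name)      # ... use both resources ...
    Y.release()
    X.release()

a = threading.Thread(target=task, args=("A",))
b = threading.Thread(target=task, args=("B",))
a.start(); b.start(); a.join(); b.join()
print(sorted(done))        # both tasks ran to completion: no deadlock
```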

CRITICISMS OF SEMAPHORES
• Semaphores are an elegant low-level synchronisation primitive; however, their use is error-prone.

• If a semaphore is omitted or misplaced, the entire program may collapse. Mutual exclusion may not be assured, and deadlock may appear just when the software is dealing with a rare but critical event.

• A more structured synchronisation primitive is required.

• No high-level concurrent programming language relies entirely on semaphores; they are important historically but are arguably not adequate for the real-time domain.

THE READERS-WRITERS PROBLEM
• The readers-writers problem relates to an object, such as a file, that is shared between multiple processes. Some of these processes are readers, i.e. they only want to read the data from the object, and some of the processes are writers, i.e. they want to write into the object.

• The readers-writers problem is used to manage synchronization so that there are no problems with the object data. For example, if two readers access the object at the same time there is no problem. However, if two writers or a reader and a writer access the object at the same time, there may be problems.

• To solve this situation, a writer should get exclusive access to an object i.e. when a writer is accessing the
object, no reader or writer may access it. However, multiple readers can access the object at the same time.

• This can be implemented using semaphores. The code for the reader and writer processes in the readers-writers problem is given as follows:


READERS PROCESS
wait(mutex);        // reader wants to enter the critical section
rc++;               // read count now incremented by one
if (rc == 1)
    wait(wrt);      // first reader locks out writers
signal(mutex);

// READ THE OBJECT: current reader performs reading here

wait(mutex);        // reader wants to leave
rc--;
if (rc == 0)
    signal(wrt);    // last reader: writer can enter
signal(mutex);



DESCRIPTION
• In the above code, mutex and wrt are semaphores that are initialized to 1. Also, rc is a
variable that is initialized to 0. The mutex semaphore ensures mutual exclusion and wrt
handles the writing mechanism and is common to the reader and writer process code.

• The variable rc denotes the number of readers accessing the object. As soon as rc becomes 1, the wait operation is used on wrt. This means that a writer cannot access the object anymore. After the read operation is done, rc is decremented. When rc becomes 0, the signal operation is used on wrt, so a writer can access the object now.
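The pseudocode above translates directly into runnable Python. The shared object here is an invented pair that the writer always keeps consistent, so a "torn" observation by any reader would expose a synchronization failure:

```python
# Readers-writers with semaphores, following the pseudocode above.
import threading

mutex = threading.Semaphore(1)
wrt = threading.Semaphore(1)
rc = 0                         # read count
data = [0, 0]                  # the shared object (kept as an equal pair)
torn = []                      # inconsistent snapshots seen by readers

def reader():
    global rc
    mutex.acquire()            # reader wants to enter
    rc += 1
    if rc == 1:
        wrt.acquire()          # first reader locks out writers
    mutex.release()
    if data[0] != data[1]:     # READ THE OBJECT
        torn.append(list(data))
    mutex.acquire()            # reader wants to leave
    rc -= 1
    if rc == 0:
        wrt.release()          # last reader: writers may enter
    mutex.release()

def writer(value):
    wrt.acquire()
    data[0] = value            # write into the object
    data[1] = value
    wrt.release()

threads = [threading.Thread(target=writer, args=(i,)) for i in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(torn), data[0] == data[1])   # 0 True: no torn reads
```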


WRITERS PROCESS
wait(wrt);

// write into the object

signal(wrt);

• If a writer wants to access the object, the wait operation is performed on wrt. After that, no other writer can access the object. When a writer is done writing into the object, the signal operation is performed on wrt.


MESSAGE PASSING SYNCHRONISATION


Message passing is the only means of communication in distributed systems.

• Implicit synchronization: messages can be received only after they have been sent.

• Non-blocking send, blocking receive: asynchronous message passing.

• Blocking send, blocking receive: synchronous message passing.

Asynchronous message passing:

• Is an extension of the semaphore concept to distributed systems
• Send operations assume that the channel has an unbounded buffer
• Examples: pipes and sockets

Synchronous message passing:

• No buffering of messages in the communication channel
• Examples: Communicating Sequential Processes (CSP), Remote Procedure Call (RPC) - asymmetrical communication
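Asynchronous message passing can be sketched with a queue as the unbounded channel: send never blocks, receive blocks until a message has been sent. The message contents are illustrative:

```python
# Asynchronous message passing: put() (send) does not block; get()
# (receive) blocks until a message is available in the channel.
import queue
import threading

channel = queue.Queue()            # unbounded buffer, like a pipe

def sender():
    for i in range(3):
        channel.put(f"msg-{i}")    # non-blocking send

received = []

def receiver():
    for _ in range(3):
        received.append(channel.get())   # blocking receive

r = threading.Thread(target=receiver)
s = threading.Thread(target=sender)
r.start(); s.start(); s.join(); r.join()
print(received)                    # messages arrive only after being sent
```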

CHARACTERISTICS OF CONCURRENT PROGRAMMING


• Increased program throughput: parallel execution of a concurrent program allows the number of tasks completed in a given time to increase proportionally to the number of processors.

• High responsiveness for input/output: input/output-intensive programs mostly wait for input or output operations to complete. Concurrent programming allows the time that would be spent waiting to be used for another task.

• More appropriate program structure: some problems and problem domains are well-suited to representation as concurrent tasks or processes.


INTER-PROCESS COMMUNICATION AND COORDINATION


1. Message Passing
2. Request/Reply and transaction communication

• Distributed IPC and process coordination are based on message passing.

• They depend on the ability to locate communication entities: the role of the name service.

• Three fundamental message-passing communication models:
  - message passing
  - request/reply (RPC)
  - transaction communication

• Distributed process coordination examples:
  - distributed mutual exclusion
  - leader election


MESSAGE PASSING COMMUNICATION

• Messages are collections of data objects.

• Their structure and interpretations are defined by the peer applications.

• Communicating processes pass composed messages to the system transport service.


CONCURRENT COMPUTING
Concurrent computing is a form of computing in which several computations are executed during overlapping time periods (concurrently) instead of sequentially (one completing before the next starts). This is a property of a system: it may be an individual program, a computer, or a network.


MEDIUM OF CONCURRENCY
A thread is a sequential program that interacts with other running programs; in other words, it acts as an intermediary between the concurrently running systems.

A thread can be paused or stopped, and then restarted, at the wish of the end-user.


DISTRIBUTED SYSTEMS: SERVER CONNECTIONS


Two distinct forms of concurrent computing are:

1. Parallel programming, where memory is shared between the cooperating computations.

2. Distributed programming, where no memory sharing takes place and computations communicate by message passing.


REQUEST/REPLY COMMUNICATION: RPC


• A remote procedure call is an interprocess communication technique that is used for
client-server based applications. It is also known as a subroutine call or a function call.
• A client has a request message that the RPC translates and sends to the server. This request may be a procedure or a function call to a remote server. When the server receives the request, it sends the required response back to the client. The client is blocked while the server is processing the call and only resumes execution after the server has finished.

The sequence of events in a remote procedure call is given as follows:
 The client stub is called by the client.

 The client stub makes a system call to send the message to the server and puts the parameters in the message.

 The message is sent from the client to the server by the client's operating system.

REQUEST/REPLY COMMUNICATION: RPC CONT...


• The message is passed to the server stub by the server operating system.

• The parameters are removed from the message by the server stub.

• Then, the server procedure is called by the server stub.
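The stub-based request/reply path can be demonstrated with Python's standard xmlrpc modules (the add procedure and loopback address are invented choices): the proxy object plays the role of the client stub, marshalling the parameters and blocking until the server's reply arrives.

```python
# A minimal request/reply (RPC) round trip with the stdlib xmlrpc modules.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")   # remote procedure
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy(f"http://127.0.0.1:{port}")      # the client "stub"
result = client.add(2, 3)      # request sent; client blocks for the reply
server.shutdown()
print(result)                  # 5
```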


ADVANTAGES & DISADVANTAGES OF REMOTE PROCEDURE CALL


Some of the advantages of RPC are as follows:
 Remote procedure calls support process-oriented and thread-oriented models.
 The internal message passing mechanism of RPC is hidden from the user.
 The effort to re-write and re-develop the code is minimal in remote procedure calls.
 Remote procedure calls can be used in a distributed environment as well as the local environment.
 Many of the protocol layers are omitted by RPC to improve performance.

Some of the disadvantages of RPC are as follows:
 The remote procedure call is a concept that can be implemented in different ways. It is not a standard.
 There is no flexibility in RPC for hardware architecture. It is only interaction-based.
 There is an increase in costs because of remote procedure calls.


TRANSACTION COMMUNICATION
Properties of Transactions: Any transaction must maintain the ACID properties, viz.
Atomicity, Consistency, Isolation, and Durability.
• Atomicity − This property states that a transaction is an atomic unit of processing, that is,
either it is performed in its entirety or not performed at all. No partial update should exist.

• Consistency − A transaction should take the database from one consistent state to another
consistent state. It should not adversely affect any data item in the database.

• Isolation − A transaction should be executed as if it were the only one in the system. There should not be any interference from the other concurrent transactions that are simultaneously running.

• Durability − If a committed transaction brings about a change, that change should be durable in the database and not lost in case of any failure.

TRANSACTION COMMUNICATION
The two-phase commit protocol is analogous to a real-life unanimous voting scheme. Voting is initiated by the coordinator of a transaction. All participants in the distributed transaction must come to an agreement about whether to commit or abort. Before a participant can vote to commit a transaction, it must be prepared to perform the commit.
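The voting scheme described above can be sketched as a small simulation (the Participant class and its fields are invented for illustration): the coordinator collects prepare votes and commits only if the vote is unanimous.

```python
# Two-phase commit as unanimous voting: prepare (phase 1), decide,
# then broadcast the decision (phase 2).

class Participant:
    def __init__(self, can_commit):
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self):
        # Phase 1: vote yes only if prepared to perform the commit.
        self.state = "prepared" if self.can_commit else "aborting"
        return self.can_commit

    def finish(self, decision):
        # Phase 2: apply the coordinator's decision.
        self.state = decision

def two_phase_commit(participants):
    """Coordinator: commit only if every participant votes to commit."""
    votes = [p.prepare() for p in participants]
    decision = "commit" if all(votes) else "abort"
    for p in participants:
        p.finish(decision)
    return decision

unanimous = two_phase_commit([Participant(True), Participant(True)])
one_no = two_phase_commit([Participant(True), Participant(False)])
print(unanimous, one_no)   # commit abort
```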


WHAT ARE NAME AND DIRECTORY SERVICES?


• Given the name or some attributes of an object entity, more attribute information is obtained.

• The terms name service and directory service are interchangeable. They both describe how a named object can be addressed and located by using its address.

• A directory service is a software application (or a set of applications) that stores and organizes information about a computer network's users and network resources, and that allows network administrators to manage users' access to the resources.

• Additionally, directory services act as an abstraction layer between users and shared resources.


PURPOSE OF DIRECTORY SERVICES


• Enable users to reference network resources with short names instead of real addresses.

• Locate objects by attributes.

• Provide a layer of abstraction so that the network resources can be managed independently without service interruption.

• Added value, such as security, etc.

WAYS TO NAME AN OBJECT

• <attribute>, <name, attributes, address>, <name, type, attributes, address>

• Flat, hierarchical, or structure-free name-value pairs.

• Physical, organizational, functional.


DIT
A directory information tree (DIT) is data represented in a hierarchical tree-like structure consisting of the
Distinguished Names (DNs) of directory service entries.


ACCESS MODE
DSA: Directory Service Agent

DUA: Directory User Agent

A client/server-based model.


REMOTE METHOD INVOCATION


• RMI stands for Remote Method Invocation. It is a
mechanism that allows an object residing in one system
(JVM) to access/invoke an object running on another JVM.
• RMI is used to build distributed applications; it provides
remote communication between Java programs. It is
provided in the package java.rmi.

Architecture of an RMI Application


In an RMI application, we write two programs: a server program (which resides on the server) and a client program (which resides on the client).
 Inside the server program, a remote object is created, and a reference to that object is made available to the client (using the registry).
 The client program requests the remote objects on the server and tries to invoke their methods.


RMI -COMPONENTS
Let us now discuss the components of this architecture.
 Transport Layer − This layer connects the client and the server. It manages the existing
connection and also sets up new connections.

 Stub − A stub is a representation (proxy) of the remote object at the client. It resides in the client system; it acts as a gateway for the client program.

 Skeleton − This is the object which resides on the server side. The stub communicates with this skeleton to pass requests to the remote object.

 RRL (Remote Reference Layer) − It is the layer which manages the references made by the client to the remote object.

WORKING OF AN RMI APPLICATION


The following points summarize how an RMI application works −
 When the client makes a call to the remote object, it is received by the stub which
eventually passes this request to the RRL.

 When the client-side RRL receives the request, it invokes a method called invoke() of the remoteRef object. It passes the request to the RRL on the server side.

 The RRL on the server side passes the request to the skeleton (the proxy on the server), which finally invokes the required object on the server.

 The result is passed all the way back to the client.
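RMI itself is a Java facility; as a language-neutral illustration, the call path just summarized (client -> stub -> reference layer -> skeleton -> remote object) can be sketched as below. All class and method names here are invented, and marshalling and the network transport are elided.

```python
# A sketch of the RMI call path; each class mimics one layer above.

class RemoteObject:                   # lives on the "server"
    def greet(self, name):
        return f"Hello, {name}"

class Skeleton:                       # server-side proxy
    def __init__(self, target):
        self.target = target
    def dispatch(self, method, args):
        # unpack the request and invoke the required object
        return getattr(self.target, method)(*args)

class ReferenceLayer:                 # stands in for the RRL
    def __init__(self, skeleton):
        self.skeleton = skeleton
    def invoke(self, method, args):
        return self.skeleton.dispatch(method, args)

class Stub:                           # client-side proxy (the gateway)
    def __init__(self, rrl):
        self.rrl = rrl
    def greet(self, name):
        # marshal the call and hand it to the reference layer
        return self.rrl.invoke("greet", (name,))

stub = Stub(ReferenceLayer(Skeleton(RemoteObject())))
greeting = stub.greet("client")       # result travels back to the client
print(greeting)                       # Hello, client
```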



GOALS OF RMI
Following are the goals of RMI −
 To minimize the complexity of the application.

 To preserve type safety.

 Distributed garbage collection.

 Minimize the difference between working with local and remote objects.


THANKS & REGARDS....



Ms.Rashi Jain
