

A) What is RPC? Explain different types of RPC?

Ans: RPC is a powerful technique for constructing distributed, client-server based applications. It
is based on extending the notion of conventional, or local, procedure calling so that the called
procedure need not exist in the same address space as the calling procedure. The two processes
may be on the same system, or they may be on different systems with a network connecting them.
By using RPC, programmers of distributed applications avoid the details of the interface with the
network. The transport independence of RPC isolates the application from the physical and
logical elements of the data-communication mechanism and allows the application to use a
variety of transports.
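As an illustrative sketch (not from the source text), the access transparency of a remote call can be seen with Python's standard xmlrpc module; the loopback address, ephemeral port, and the add procedure below are assumptions chosen purely for demonstration:

```python
# A minimal RPC sketch using Python's standard xmlrpc module.
# Server and client share one process here purely for demonstration;
# in practice they would run on different machines.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    # Executes in the server's address space, not the caller's.
    return a + b

# Bind to an ephemeral port (port 0) so the example is self-contained.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client invokes the remote procedure with ordinary call syntax;
# the RPC runtime hides marshalling and transport details.
port = server.server_address[1]
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)   # looks like a local call, runs remotely
server.shutdown()
```

Note how the caller never touches sockets or message formats: that is the transport independence described above.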
B) Explain causal ordering of messages for ordered delivery of
multicast messages in group communication?
Ans: Causal Ordering
For some applications consistent-ordering semantics is not necessary, and even weaker semantics
is acceptable. Therefore, an application can perform better if the message-passing system it uses
supports a weaker ordering semantics that is acceptable to the application. One such weaker ordering
semantics that is acceptable to many applications is causal ordering. This semantics
ensures that if the event of sending one message is causally related to the event of sending another
message, the two messages are delivered to all receivers in the correct order. However, if two
message-sending events are not causally related, the two messages may be delivered to the receivers in
any order. Two message-sending events are said to be causally related if they are correlated by the
happened-before relation.

Causal ordering of messages

One method for implementing causal-ordering semantics is the CBCAST protocol
of the ISIS system. It assumes broadcasting of all messages to group members and
works as follows:

1. Each member process of a group maintains a vector of n components, where n is
the total number of members in the group. Each member is assigned a sequence
number from 0 to n-1, and the i-th component of the vector corresponds to the
member with sequence number i. In particular, the value of the i-th component of a
member's vector is equal to the number of the last message received in
sequence by this member from member i.
2. To send a message, a process increments the value of its own component in its
own vector and sends the vector as part of the message.
3. When the message arrives at a receiver process's site, it is buffered by the
runtime system. The runtime system tests the two conditions given below to
decide whether the message can be delivered to the user process or its delivery
must be delayed to ensure causal-ordering semantics. Let S be the vector of the
sender process that is attached to the message and R be the vector of the
receiver process. Also let i be the sequence number of the sender process. Then
the two conditions to be tested are:
1. S[i] = R[i] + 1
2. S[j] <= R[j] for all j not equal to i
The first condition ensures that the receiver has not missed any message from the sender. This
test is needed because two messages from the same sender are always causally related. The second
condition ensures that the sender has not received any message that the receiver has not yet received.
This test is needed to make sure that the sender's message is not causally related to a message missed
by the receiver.
If the message passes these two tests, the runtime system delivers it to the user process.
Otherwise, the message is left in the buffer and the test is carried out again for it when a new message
arrives.
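The delivery test of step 3 can be sketched in Python (a hypothetical helper for illustration, not the actual ISIS code; the group size and vector values below are assumed):

```python
def can_deliver(S, R, i):
    """CBCAST delivery test. S is the vector attached to the incoming
    message, R is the receiver's vector, and i is the sender's
    sequence number within the group."""
    if S[i] != R[i] + 1:        # condition 1: no earlier message from sender i missed
        return False
    # condition 2: the sender has received nothing the receiver has not
    return all(S[j] <= R[j] for j in range(len(S)) if j != i)

def deliver(S, R, i):
    """Record delivery of sender i's message in the receiver's vector."""
    R[i] = S[i]

# A hypothetical 3-member group; the receiver has seen one message from member 0.
R = [1, 0, 0]
ok_next = can_deliver([2, 0, 0], R, 0)   # next in sequence from member 0
ok_dep  = can_deliver([2, 1, 0], R, 1)   # causally depends on member 0's 2nd message
```

Here ok_next is True while ok_dep is False: the message from member 1 stays buffered until deliver([2, 0, 0], R, 0) runs, after which the same test succeeds.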

Causal ordering of messages: formal description and algorithms


A distributed system guarantees causal ordering of messages iff the following
implication holds, where -> denotes the happened-before relation:

(send(m1) -> send(m2)) => (rec(m1) -> rec(m2))

Example: (space-time diagram of processes P1, P2, and P3 exchanging messages, with events e1, e2, e3 at each process; figure not reproduced)

C) Describe workstation-server model.


Ans. Workstation Server Model

The workstation model is a network of personal workstations, each with its own disk and a local
file system.
A workstation with its own local disk is usually called a diskful workstation and a workstation
without a local disk is called a diskless workstation.
With the proliferation of high-speed networks, diskless workstations have become more popular
in network environments than diskful workstations, making the workstation-server model more popular
than the workstation model for building distributed computing systems.
A distributed computing system based on the workstation server model consists of a few
minicomputers and several workstations (most of which are diskless, but a few of which may be
diskful) interconnected by a communication network.

D) Describe Transparency. Explain different transparency aspects?


Ans: Transparency
We saw that one of the main goals of a distributed operating system is to make the existence of multiple
computers invisible (transparent) and provide a single system image to its users. That is, a distributed
operating system must be designed in such a way that a collection of distinct machines connected by a
communication subsystem appears to its users as a virtual uniprocessor. Achieving complete transparency
is a difficult task and requires that several different aspects of transparency be supported by the
distributed operating system.
Access Transparency
Access transparency means that users should not need, or be able, to recognize whether a resource
(hardware or software) is remote or local. This implies that the distributed operating system should allow
users to access remote resources in the same way as local resources.
Location Transparency
The two main aspects of location transparency are as follows:
1. Name transparency. This refers to the fact that the name of a resource (hardware or software) should not
reveal any hint as to the physical location of the resource.
2. User mobility. This refers to the fact that no matter which machine a user is logged onto, he or she
should be able to access a resource with the same name.
Replication Transparency
For better performance and reliability, almost all distributed operating systems have the provision to
create replicas (additional copies) of files and other resources on different nodes of the distributed system.
In these systems, both the existence of multiple copies of a replicated resource and the replication activity
should be transparent to the users.
Failure Transparency
Failure transparency deals with masking partial failures in the system from users, such as a
communication link failure, a machine failure, or a storage device crash. A distributed operating system
having the failure transparency property will continue to function, perhaps in a degraded form, in the face of
partial failures.
Migration Transparency
For better performance, reliability, and security reasons, an object that is capable of being moved (such as
a process or a file) is often migrated from one node to another in a distributed system.
Concurrency Transparency
Concurrency transparency means that each user has a feeling that he or she is the sole user of the system
and other users do not exist in the system.

Performance Transparency
The aim of performance transparency is to allow the system to be automatically reconfigured to improve
performance, as loads vary dynamically in the system.
Scaling Transparency
The aim of scaling transparency is to allow the system to expand in scale without disrupting the activities
of the users.
E) What is Buffering? Explain the finite bound buffering strategy.
Ans. In the standard message passing model, messages can be copied many times: from the user buffer to
the kernel buffer (the output buffer of a channel), from the kernel buffer of the sending computer
(process) to the kernel buffer in the receiving computer (the input buffer of a channel), and finally from
the kernel buffer of the receiving computer (process) to a user buffer.
Finite-Bound Buffer
Unbounded capacity of a buffer is practically impossible. Therefore, in practice, systems using
asynchronous mode of communication use finite-bound buffers, also known as multiple-message buffers.
In this case a message is first copied from the sending process's memory into the receiving process's
mailbox and then copied from the mailbox to the receiver's memory when the receiver calls for the
message.
When the buffer has finite bounds, a strategy is also needed for handling the problem of a
possible buffer overflow. The buffer overflow problem can be dealt with in one of the following two
ways:

Unsuccessful communication. In this method, message transfers simply fail whenever there is
no more buffer space, and an error is returned to the sender.

Flow-controlled communication. The second method is to use flow control, which means that
the sender is blocked until the receiver accepts some messages, thus creating space in the buffer
for new messages. This method introduces a synchronization between the sender and the receiver
and may result in unexpected deadlocks. Moreover, due to the synchronization imposed, the
asynchronous send does not operate in the truly asynchronous mode for all send commands.
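The two overflow strategies can be sketched with Python's standard queue module (the buffer capacity and message names below are assumptions for illustration):

```python
import queue

buf = queue.Queue(maxsize=2)   # a finite-bound (multiple-message) buffer

def send_fail(msg):
    """Unsuccessful communication: the transfer fails on overflow."""
    try:
        buf.put_nowait(msg)
        return True
    except queue.Full:
        return False           # an error is returned to the sender

def send_flow_controlled(msg):
    """Flow-controlled communication: the sender blocks until the
    receiver frees space by consuming a message. This synchronizes
    sender and receiver and can deadlock if the receiver never reads."""
    buf.put(msg)               # blocks while the buffer is full

ok1 = send_fail("m1")          # accepted
ok2 = send_fail("m2")          # accepted; buffer is now full
ok3 = send_fail("m3")          # rejected: buffer overflow
```

The third send fails under the first strategy; under the second strategy the same send would instead block inside put() until the receiver called get().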

F) Explain server implementation of RPC mechanism.


Ans:

3. What is a distributed computing system? Explain in brief why it is gaining popularity. (10 Marks)
Ans.
Resource sharing: Since a computer can request a service from another computer by sending an
appropriate request to it over the communication network, hardware and software resources can be shared
among computers. For example, a printer, a compiler, a text processor, or a database at a computer can be
shared with remote computers.
Enhanced Performance: A distributed computing system is capable of providing rapid response time and
higher system throughput. This ability is mainly due to the fact that many tasks can be concurrently
executed at different computers. Moreover, distributed systems can employ a load-distributing technique
to improve response time. In load distributing, tasks at heavily loaded computers are transferred to lightly
loaded computers, thereby reducing the time tasks wait before receiving service.
Improved reliability and availability: A distributed computing system provides improved reliability and
availability because a few components of the system can fail without affecting the availability of the rest
of the system. Also, through the replication of data (e.g., files and directories) and services, distributed
systems can be made fault tolerant. Services are processes that provide functionality (e.g., a file service
provides file system management; a mail service provides an electronic mail facility).
Modular expandability: Distributed computing systems are inherently amenable to modular expansion
because new hardware and software resources can be easily added without replacing the existing
resources.
