CHP 2
Communication in Distributed
Systems
The single most important difference
between a distributed system and a
uniprocessor system is the interprocess
communication.
In a uniprocessor system, interprocess
communication assumes the existence of shared
memory.
A typical example is the producer-consumer
problem.
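In a uniprocessor the producer and consumer can simply share a buffer in memory. A minimal sketch of this shared-memory style of IPC, written here with POSIX threads purely for illustration (the buffer is the shared memory; in a distributed system no such shared buffer exists, which is why message passing is needed):
#include <pthread.h>
#include <stdio.h>

#define N 8                                   /* capacity of the shared buffer */

static int buffer[N];
static int count = 0, in = 0, out = 0;        /* state shared by producer and consumer */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg)
{
    for (int item = 0; item < 32; item++) {
        pthread_mutex_lock(&lock);
        while (count == N)                    /* wait until there is room */
            pthread_cond_wait(&not_full, &lock);
        buffer[in] = item;
        in = (in + 1) % N;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }
    return arg;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 32; i++) {
        pthread_mutex_lock(&lock);
        while (count == 0)                    /* wait until something is there */
            pthread_cond_wait(&not_empty, &lock);
        int item = buffer[out];
        out = (out + 1) % N;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        printf("consumed %d\n", item);
    }
    return arg;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}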
E.g., process A wants to communicate with process B:
1. A first builds a message in its own address space.
2. A executes a system call.
3. The OS fetches the message and sends it through the network to B.
A and B have to agree on the meaning of the bits being sent. Such agreements are spelled out by layered protocols, for example the OSI reference model described next.
OSI (Open System Interconnection Reference
model)
[Figure: Machine 1 (running process A) and Machine 2 (running process B) each contain the seven OSI layers: Application, Presentation, Session, Transport, Network, Data link, and Physical. Each layer on one machine talks to its peer layer on the other machine using its own protocol (application protocol, presentation protocol, and so on); the two physical layers are connected by the network.]
The physical layer
The data link layer
The task of this layer is to detect and correct errors in the bit stream delivered by the physical layer. It groups the bits into frames and sees that each frame is correctly received.
The data link layer does its work by putting a special bit pattern at the start and end of each frame to mark them, as well as computing a checksum by adding up all the bytes in the frame in a certain way.
The receiver recomputes the checksum from the data and compares the result to the checksum following the frame. If they agree, the frame is accepted; if not, the receiver asks for it to be resent.
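A toy sketch of this idea using a simple additive checksum (chosen only for illustration; real data link layers normally use the polynomial/CRC checksum discussed later in this chapter):
#include <stdint.h>
#include <stdio.h>

static uint8_t checksum(const uint8_t *frame, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += frame[i];                  /* wraps around modulo 256 */
    return sum;
}

int main(void)
{
    uint8_t frame[] = { 0x12, 0x34, 0x56, 0x78 };
    uint8_t sent = checksum(frame, sizeof frame);     /* sender computes and appends this */

    frame[1] ^= 0x01;                                 /* simulate a 1-bit error in transit */
    if (checksum(frame, sizeof frame) != sent)        /* receiver recomputes and compares */
        printf("checksum mismatch: ask for retransmission\n");
    return 0;
}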
Error-detecting codes & Error-correcting codes
Two basic strategies have been developed to deal
with errors in the transmission.
Error-detecting strategy: include only enough
redundancy to allow the receiver to deduce that
an error occurred, but not which error.
Error-correcting strategy: include enough
redundant information along with each block of
data sent, to enable the receiver to deduce what
the transmitted data must have been.
A frame consists of m data bits and r redundant
bits. Let the total length be n (n=m+r). An n-bit
unit containing data and check bits is often
referred to as an n-bit codeword.
Given any two codewords, say 100 and 101, it is easy to determine how many corresponding bits differ: just exclusive OR the two codewords and count the 1 bits in the result.
The number of bit positions in which two
codewords differ is called the Hamming
distance.
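A minimal C sketch of that observation: XOR the codewords and count the 1 bits.
#include <stdio.h>

static int hamming_distance(unsigned a, unsigned b)
{
    unsigned diff = a ^ b;     /* 1 exactly in the positions where a and b differ */
    int count = 0;
    while (diff) {
        count += diff & 1;
        diff >>= 1;
    }
    return count;
}

int main(void)
{
    printf("%d\n", hamming_distance(0x4 /* 100 */, 0x5 /* 101 */));   /* prints 1 */
    return 0;
}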
Given the algorithm for computing the
check bits, it is possible to construct a
complete list of the legal codewords, and
from this list find the two codewords
whose Hamming distance is minimum.
This distance is the Hamming distance of
the complete code.
To detect d errors, you need a distance d+1 code
because with such a code there is no way that d
single-bit errors can change a valid codeword into
another valid codeword.
To correct d errors, you need a distance 2d+1
code because that way the legal codewords are so
far apart that even with d changes, the original
codeword is still closer than any other codeword,
so it can be uniquely determined.
An example is to append a single parity bit to the data. A code with a single parity bit has a distance of 2, so it can detect single errors.
Another example is an error-correcting code with four valid codewords: 0000000000, 0000011111, 1111100000, and 1111111111. This code has a distance of 5, so it can correct double errors. If the codeword 0000000111 arrives, the receiver knows that the original must have been 0000011111.
If we want to design a code with m message bits and r check bits that will allow all single errors to be corrected, the requirement is: (m + r + 1) <= 2^r.
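A quick sketch that finds the smallest r satisfying this inequality for a given number of message bits m:
#include <stdio.h>

static int check_bits_needed(int m)
{
    int r = 1;
    while (m + r + 1 > (1 << r))   /* (1 << r) is 2^r */
        r++;
    return r;
}

int main(void)
{
    printf("m = 7  -> r = %d\n", check_bits_needed(7));    /* 4, as in the Hamming example below */
    printf("m = 64 -> r = %d\n", check_bits_needed(64));   /* 7 */
    return 0;
}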
Hamming code
A Hamming code can correct single errors.
Data: 1001000 (ASCII 'H')
Hamming code: 00110010000
Data: 1100001 (ASCII 'a')
Hamming code: 10111001001
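A sketch of an (11,7) Hamming encoder that reproduces the codewords above, placing even-parity check bits at positions 1, 2, 4, and 8 (the standard construction; the helper names are illustrative):
#include <stdio.h>

static unsigned encode_hamming_11_7(unsigned data /* 7 data bits, leftmost first */)
{
    unsigned bits[12] = {0};          /* bits[1..11]; index 0 unused */
    int pos, d = 6;

    /* place the data bits in the positions that are not powers of two */
    for (pos = 1; pos <= 11; pos++)
        if (pos != 1 && pos != 2 && pos != 4 && pos != 8)
            bits[pos] = (data >> d--) & 1;

    /* check bit p covers every position whose number has bit p set */
    for (pos = 1; pos <= 8; pos <<= 1) {
        unsigned parity = 0;
        int i;
        for (i = pos; i <= 11; i++)
            if (i & pos)
                parity ^= bits[i];
        bits[pos] = parity;           /* even parity */
    }

    /* pack the 11 bits into an integer, leftmost bit first */
    unsigned code = 0;
    for (pos = 1; pos <= 11; pos++)
        code = (code << 1) | bits[pos];
    return code;
}

int main(void)
{
    unsigned code = encode_hamming_11_7(0x48);   /* 1001000 = 'H' */
    for (int i = 10; i >= 0; i--)
        putchar('0' + ((code >> i) & 1));
    putchar('\n');                               /* prints 00110010000 */
    return 0;
}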
Polynomial code checksum (CRC)
Frame: 1101011011
Generator: 10011, agreed upon by the sender and the receiver.
Message after 4 zero bits (the degree of the generator) are appended: 11010110110000
Divide 11010110110000 by 10011 using modulo-2 division. The remainder is 1110.
Append 1110 to the frame and send it.
When the receiver gets the message, it divides the whole thing by the generator; if there is a nonzero remainder, there has been a transmission error.
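A small sketch of the modulo-2 division in C, working on strings of '0'/'1' characters so it mirrors the worked example above (function names are illustrative):
#include <stdio.h>
#include <string.h>

/* Appends strlen(gen)-1 zero bits to `frame`, performs modulo-2 long division
   by `gen`, and writes the remainder into `rem`. */
static void crc_remainder(const char *frame, const char *gen, char *rem)
{
    size_t flen = strlen(frame), glen = strlen(gen);
    char work[64];
    size_t i, j;

    memcpy(work, frame, flen);
    memset(work + flen, '0', glen - 1);
    work[flen + glen - 1] = '\0';

    /* wherever the leading bit is 1, XOR the generator in */
    for (i = 0; i < flen; i++)
        if (work[i] == '1')
            for (j = 0; j < glen; j++)
                work[i + j] = (work[i + j] == gen[j]) ? '0' : '1';

    memcpy(rem, work + flen, glen - 1);
    rem[glen - 1] = '\0';
}

int main(void)
{
    char rem[16];
    crc_remainder("1101011011", "10011", rem);
    printf("remainder = %s\n", rem);   /* prints 1110 */
    return 0;
}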
The network layer
The primary task of this layer is routing,
that is, how to choose the best path to send
the message to the destination.
The shortest route is not always the best route. What
really matters is the amount of delay on a given route.
Delay can change over the course of time.
Two network-layer protocols:
1) X.25 (telephone network) connection-oriented
2) IP (Internet protocol) connectionless
The transport layer
The session layer hands a message to this layer with the expectation that it will be delivered without loss.
Upon receiving a message from the session layer, the transport layer:
Breaks it into pieces small enough for each to fit in a single packet
Assigns each one a sequence number
Sends them all
E.g. TCP, UDP
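A minimal sketch of the splitting-and-numbering step, assuming a hypothetical fixed payload size and packet layout; real transport protocols such as TCP do considerably more:
#include <stdio.h>
#include <string.h>

#define PAYLOAD 512                 /* assumed maximum payload per packet */

struct packet {
    int seq;                        /* sequence number of this piece */
    int len;                        /* number of payload bytes used */
    char data[PAYLOAD];
};

/* Splits `msg` (msg_len bytes) into sequenced packets and "sends" each one. */
static void transport_send(const char *msg, int msg_len)
{
    struct packet p;
    int offset = 0;

    for (p.seq = 0; offset < msg_len; p.seq++) {
        p.len = (msg_len - offset > PAYLOAD) ? PAYLOAD : msg_len - offset;
        memcpy(p.data, msg + offset, p.len);
        offset += p.len;
        printf("sending packet %d (%d bytes)\n", p.seq, p.len);  /* stand-in for the real send */
    }
}

int main(void)
{
    char msg[1500] = {0};           /* a message larger than one packet */
    transport_send(msg, sizeof(msg));
    return 0;
}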
The session layer
This layer is essentially an enhanced
version of the transport layer.
Provides dialog control, to keep track of which
party is currently talking
Few applications are interested in this
and it is rarely supported.
Presentation Layer
This layer is concerned with the meaning
of bits.
E.g. people’s names, addresses, amounts of
money, and so on.
The Application Layer
Asynchronous Transfer
Mode (ATM)
Asynchronous Transfer Mode (ATM) is a switching technique that uses time division multiplexing (TDM) for data communications. It is a network technology that supports voice, video, and data communications. ATM encodes data into small fixed-sized cells so that they are suitable for TDM and transmits them over a physical medium.
The functional reference model of ATM is illustrated below.
Features
ATM technology provides dynamic bandwidth that is particularly suited for bursty traffic.
All data are encoded into identical cells. Hence, data transmission is simple, uniform, and predictable. The uniform packet size ensures that mixed traffic is handled efficiently.
The size of an ATM cell is 53 bytes: a 5-byte header and a 48-byte payload.
There are two different cell formats: user-network interface (UNI) and network-network interface (NNI).
'Asynchronous' implies that the cells do not need to be transmitted continuously as on synchronous lines. Cells are sent only when there is data to be sent.
ATM headers are small (5 bytes). This reduces packet overhead, thus ensuring effective bandwidth usage.
ATM flexibly divides the bandwidth of the physical layer. It provides scalability both in size and speed.
HOW DATA IS SENT USING THE ATM MODEL
The sender first establishes a connection to the receiver or receivers.
During connection establishment, a route is determined from the sender to the receiver and routing information is stored in the switches along the way.
Using this connection, packets can be sent, but they are chopped up by the hardware into small, fixed-sized units called cells.
The cells follow the path stored in the switches.
When the connection is no longer needed, it is released and the routing information in the switches is erased.
The advantage of cell switching is that it can handle both point-to-point and multicasting efficiently.
ATM reference model
UPPER LAYER
ADAPTATION LAYER
ATM LAYER
PHYSICAL LAYER
THE ATM PHYSICAL LAYER
An ATM adaptor board plugged into a computer can put out a stream of cells onto a wire or fiber.
The transmission stream is continuous: when there is no data to transmit, empty cells are transmitted, so in this mode ATM is actually synchronous.
Alternatively, the adaptor board can use SONET (Synchronous Optical Network) in the physical layer, putting its cells into the payload portion of SONET frames.
Structure: group of servers offering service
to clients
Based on a request/response paradigm
Techniques:
Socket, remote procedure calls (RPC),
Remote Method Invocation (RMI)
THE ATM LAYER
This layer is comparable to the data link layer of the OSI model. It accepts the 48-byte segments from the upper layer, adds a 5-byte header to each segment, and converts them into 53-byte cells. This layer is responsible for the routing of each cell, traffic management, multiplexing, and switching.
ATM Adaptation Layer (AAL)
This layer corresponds to the network layer of the OSI model. It provides facilities for existing packet-switched networks to connect to an ATM network and use its services. It accepts data and converts it into fixed-sized segments. The transmissions can be at a fixed or variable data rate. This layer has two sublayers: the Convergence Sublayer and the Segmentation and Reassembly Sublayer.
Client-Server Model
[Figure: the client sends a request to the server and receives a reply; the messages pass through the client's and the server's kernels and travel across the network.]
Client-Server Model layering
[Figure: the protocol stack used by the client-server model. Only layer 1 (Physical), layer 2 (Data link), and layer 5 (Request/Reply) are used; the layers in between are empty.]
Advantages
Simplicity: The client sends a request and gets an
answer. No connection has to be established.
Efficiency: just 3 layers. Getting packets from the client to the server and back is handled by layers 1 and 2 in hardware: an Ethernet or token ring. No routing is needed and no connections are established, so layers 3 and 4 are not needed. Layer 5 defines the set of legal requests and the replies to these requests.
Two system calls suffice: send(dest, &mptr) and receive(addr, &mptr).
An example of Client-Server
header.h
/* definitions needed by clients and servers.*/
#define MAX_PATH 255 /* maximum length of a file name */
#define BUF_SIZE 1024 /* how much data to transfer at once */
#define FILE_SERVER 243 /* file server’s network address */
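/* The server and client examples below also dispatch on operation codes
   CREATE, READ, WRITE, and DELETE, so header.h needs to define them too.
   Plausible (hypothetical) definitions follow; the actual values are
   arbitrary as long as client and server agree on them. */
#define CREATE 1 /* create a new file */
#define READ 2 /* read data from a file */
#define WRITE 3 /* write data to a file */
#define DELETE 4 /* delete an existing file */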
/* Error codes. */
#define OK 0 /* operation performed correctly */
#define E_BAD_OPCODE -1 /* unknown operation requested */
#define E_BAD_PARAM -2 /* error in a parameter */
#define E_IO -3 /* disk error or other I/O error */
/* Definition of the message format. */
struct message {
long source; /* sender’s identity */
long dest; /* receiver’s identity */
long opcode; /* which operation: CREATE, READ, etc. */
long count; /* how many bytes to transfer */
long offset; /* where in file to start reading or writing */
long extra1; /* extra field */
long extra2; /* extra field */
long result; /* result of the operation reported here */
char name[MAX_PATH]; /* name of the file being operated on */
char data[BUF_SIZE]; /* data to be read or written */
};
#include <header.h>
void main(void)
{
struct message m1, m2; /* incoming and outgoing messages */
int r; /* result code */
while (1) { /* server runs forever */
receive(FILE_SERVER, &m1); /* block waiting for a message */
switch(m1.opcode) { /* dispatch on type of request */
case CREATE: r = do_create(&m1, &m2); break;
case READ: r = do_read(&m1, &m2); break;
case WRITE: r = do_write(&m1, &m2); break;
case DELETE: r = do_delete(&m1, &m2); break;
default: r = E_BAD_OPCODE;
}
m2.result = r; /* return result to client */
send(m1.source, &m2); /* send reply */
}
}
#include <header.h>
int copy (char *src, char *dst) /* procedure to copy file using the server */
{ struct message m1; /* message buffer */
long position; /* current file position */
long client = 110; /* client’s address */
initialize(); /* prepare for execution */
position = 0;
do { /* get a block of data from the source file. */
m1.opcode = READ; /* operation is a read */
m1.offset = position; /* current position in the file */
strcpy(m1.name, src); /* copy name of file to be read to message */
send(FILE_SERVER, &m1); /* send the message to the file server */
receive(client, &m1); /* block waiting for the reply */
/* write the data just received to the destination file. */
m1.opcode = WRITE; /* operation is a write */
m1.offset = position; /* current position in the file */
m1.count = m1.result; /* how many bytes to write */
strcpy(m1.name, dst); /* copy name of file to be written to buf */
send(FILE_SERVER, &m1); /* send the message to the file server */
receive(client, &m1); /* block waiting for the reply */
position += m1.result; /* m1.result is number of bytes written */
} while (m1.result > 0); /* iterate until done */
return (m1.result >= 0 ? OK : m1.result); /* return OK or error code */
}
Issues in Client-Server
Communication
Addressing
Blocking versus non-blocking
Buffered versus unbuffered
Reliable versus unreliable
Server architecture: concurrent versus
sequential
Scalability
Addressing Issues
Question: how is the server located?
Hard-wired address
Machine address and process address are known a priori
Broadcast-based
Server chooses address from a sparse address space
Client broadcasts request
Can cache response for future use
Locate address via a name server (NS)
Blocking versus Non-blocking
Blocking communication (synchronous)
Send blocks until message is actually sent
Receive blocks until message is actually
received
Non-blocking communication
(asynchronous)
Send returns immediately
Receive does not block either
Examples:
Buffering Issues
Unbuffered communication
Server must call receive before client can call send
Buffered communication
Client sends to a mailbox
Server receives from the mailbox
Reliability
Unreliable channel
Need acknowledgements (ACKs)
Applications handle ACKs
ACKs for both request and reply
Reliable channel
Reply acts as ACK for the request
Explicit ACK for the reply
Reliable communication on unreliable channels
Transport protocol handles lost messages
[Figure: with an unreliable channel, the server ACKs the request and the user ACKs the reply; with a reliable channel, the reply itself acknowledges the request and only the reply gets an explicit ACK.]
Server Architecture
Sequential
Serve one request at a time
Can service multiple requests by employing events and
asynchronous communication
Concurrent
Server spawns a process or thread to service each request
Can also use a pre-spawned pool of threads/processes (apache)
Thus servers could be
Pure-sequential, event-based, thread-based, process-based
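A minimal sketch of the thread-per-request organization using POSIX threads; the receive()/send() primitives of the earlier file-server example are replaced by a stub that fabricates requests so the sketch stands on its own (all names here are illustrative):
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct request { int id; };          /* stand-in for struct message */

static void *handle_request(void *arg)
{
    struct request *req = arg;
    printf("worker thread servicing request %d\n", req->id);
    /* ... do the actual work and send the reply here ... */
    free(req);
    return NULL;
}

int main(void)
{
    for (int i = 0; i < 3; i++) {            /* dispatcher loop: "runs forever" */
        struct request *req = malloc(sizeof *req);
        req->id = i;                         /* stand-in for receive(FILE_SERVER, &m1) */

        pthread_t tid;
        pthread_create(&tid, NULL, handle_request, req);  /* spawn a worker per request */
        pthread_detach(tid);                 /* the dispatcher does not wait for it */
    }
    pthread_exit(NULL);                      /* let outstanding workers finish */
}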
Scalability
Question:How can you scale the server
capacity?
Buy bigger machine!
Replicate
Distribute data and/or algorithms
Ship code instead of data
Cache
Addressing
1. The server's address is simply hardwired into the client as a constant.
2. Machine # + process #, e.g., 243.4 or 199.0
3. Machine # + local-id
Disadvantage: this is not transparent to the user. If the server is moved from machine 243 to machine 170, the program has to be changed.
4. Assign each process a unique address
that does not contain an embedded machine
number.
One way to achieve this is to have a centralized process address allocator that simply maintains a counter. Upon receiving a request for an address, it returns the current value of the counter and increments it by one.
Disadvantage: a centralized component does not scale to large systems.
5. Let each process pick its own id from a
large, sparse address space, such as the
space of 64-bit binary integers.
Problem: how does the sending kernel
know what machine to send the message
to?
Solution:
a. The sender broadcasts a special "locate packet" containing the address of the destination process.
b. All the kernels check to see if the address is theirs.
c. If so, they send back a "here I am" message giving their network address (machine number).
Disadvantage: broadcasting puts extra load on the system.
6. Provide an extra machine to map high-level (ASCII) service names to machine addresses. Servers can then be referred to by ASCII strings in the program.
Disadvantage: a centralized component, the name server.
7. Use special hardware. Let processes pick random addresses. Instead of locating them by broadcasting, locate them with hardware.
Blocking versus Nonblocking Primitives
[Figure: with a blocking send, the client runs, traps to the kernel, is blocked while the message is being sent, and then continues running.]
Nonblocking send primitive
[Figure: with a nonblocking send, the client is blocked only between the trap to the kernel and the return; it then continues running while the message is transmitted.]
Nonblocking primitives
Advantage: can continue execution without
waiting.
Disadvantage: the sender must not modify the message buffer until the message has actually been sent, yet it does not know when the transfer completes, and it can hardly avoid touching the buffer forever.
Solutions to the drawbacks of
nonblocking primitives
1. Have the kernel copy the message to an internal kernel buffer and then allow the process to continue.
Problem: extra copies reduce the system
performance.
2. Interrupt the sender when the message has been
sent
Problem: user-level interrupts make programming
tricky, difficult, and subject to race conditions.
Buffered versus Unbuffered
Primitives
Unbuffered: no buffer is allocated. This works fine if receive() is called before send().
Buffered: buffers are allocated, freed, and managed to store incoming messages. Usually a mailbox is created.
Reliable versus Unreliable
Primitives
Unreliable: the system gives no guarantee that the message is delivered.
Reliable: the receiving machine sends an acknowledgement back; only when this ack is received will the sending kernel unblock the user (client) process.
Alternatively, the reply can be used as the ack.
Implementing the client-server
model
Item         Option 1                 Option 2                         Option 3
Addressing   Machine number           Sparse process address           ASCII names looked up via server
Blocking     Blocking primitives      Nonblocking with copy to kernel  Nonblocking with interrupt
Buffering    Unbuffered, discarding   Unbuffered, temporarily keeping  Mailboxes
             unexpected messages      unexpected messages
Reliability  Unreliable               Request-Ack-Reply-Ack            Request-Reply-Ack
Acknowledgement
Long messages can be split into multiple packets.
For example, one message: 1-1, 1-2, 1-3; another
message: 2-1, 2-2, 2-3, 2-4.
Ack each individual packet
Advantage: if a packet is lost, only that packet has to be retransmitted.
Disadvantage: require more packets on the network.
Ack entire message
Advantage: fewer packets
Disadvantage: more complicated recovery when a packet is lost (because the entire message must be retransmitted).
Code  Packet type      From    To      Description
REQ   Request          Client  Server  The client wants service
REP   Reply            Server  Client  Reply from the server to the client
ACK   Ack              Either  Other   The previous packet arrived
AYA   Are you alive?   Client  Server  Probe to see if the server has crashed
IAA   I am alive       Server  Client  The server has not crashed
TA    Try again        Server  Client  The server has no room
AU    Address unknown  Server  Client  No process is using this address
Some examples of packet exchanges for client-server communication
[Figure: three possible exchanges between client and server.
(a) REQ, REP: the request is answered directly by the reply.
(b) REQ, ACK, REP, ACK: both the request and the reply are individually acknowledged.
(c) REQ, ACK, AYA, IAA, REP, ACK: the server acknowledges the request; while waiting, the client probes with "are you alive?" and receives "I am alive"; finally the reply arrives and is acknowledged.]
Remote Procedure Call
The idea behind RPC is to make a remote
procedure call look as much as possible
like a local one.
A remote procedure call occurs in the
following steps:
Remote procedure call steps:
1. The client procedure calls the client stub in the normal way.
2. The client stub builds a message and traps to the kernel.
3. The kernel sends the message to the remote kernel.
4. The remote kernel gives the message to the server stub.
5. The server stub unpacks the parameters and calls the server.
6. The server does the work and returns the result to the stub.
7. The server stub packs it in a message and traps to the kernel.
8. The remote kernel sends the message to the client's kernel.
9. The client's kernel gives the message to the client stub.
10. The stub unpacks the result and returns to the client.
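As an illustration of steps 1-2 and 9-10 above, here is a hedged sketch of what a client stub for the file server's read operation might look like; struct message is repeated from header.h, and send()/receive() stand for the kernel primitives used earlier, stubbed out here so the sketch compiles on its own:
#include <string.h>

#define READ 2 /* assumed opcode, matching the hypothetical definition added to header.h */
#define FILE_SERVER 243 /* file server's network address, as in header.h */
#define MAX_PATH 255
#define BUF_SIZE 1024

struct message {
    long source, dest, opcode, count, offset, extra1, extra2, result;
    char name[MAX_PATH];
    char data[BUF_SIZE];
};

/* Stand-ins for the kernel message primitives used earlier in this chapter. */
static void send(long dest, struct message *m) { (void)dest; (void)m; /* trap to kernel */ }
static void receive(long addr, struct message *m) { (void)addr; m->result = 0; /* block for reply */ }

/* The client stub: to its caller it looks like an ordinary local procedure. */
long rpc_read(long client, const char *name, long offset, char *buf, long nbytes)
{
    struct message m;

    m.opcode = READ;               /* step 1: pack the parameters into a message */
    m.offset = offset;
    m.count = nbytes;
    strcpy(m.name, name);

    send(FILE_SERVER, &m);         /* step 2: trap to the kernel, which ships the message off */
    receive(client, &m);           /* steps 9-10: block until the reply comes back */

    if (m.result > 0)
        memcpy(buf, m.data, m.result);   /* unpack the result for the caller */
    return m.result;               /* number of bytes read, or an error code */
}

int main(void)
{
    char buf[BUF_SIZE];
    return (int) rpc_read(199 /* hypothetical client address */, "some_file", 0, buf, 100);
}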
Remote Procedure Call
[Figure: on the client machine, the client calls the client stub, which hands the message to the client's kernel; the message is transported over the network to the server's kernel, which passes it to the server stub and then the server on the server machine.]
Parameter Passing
Little endian: bytes are numbered from right to left.
Big endian: bytes are numbered from left to right.
[Figure: the integer 5 and the string "JILL" in memory. On a little-endian machine the integer occupies bytes 3 2 1 0 holding 0 0 0 5 and the string occupies bytes 7 6 5 4 holding L L I J; on a big-endian machine the integer occupies bytes 0 1 2 3 holding 5 0 0 0 and the string occupies bytes 4 5 6 7 holding J I L L.]
How to let two kinds of machines
talk to each other?
A standard should be agreed upon for representing each of the basic data types, so that, given a parameter list and a message, the message can be interpreted unambiguously.
One option is to devise a network standard or canonical form for integers, characters, Booleans, floating-point numbers, and so on, and convert everything to it (either little endian or big endian). This is inefficient when the sender and receiver already use the same format.
Another option is to use the sender's native format and indicate in the first byte of the message which format this is.
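A small illustration of the canonical-form idea using the network byte order (big endian) conversions that the standard sockets API already provides:
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>   /* htonl, ntohl */

int main(void)
{
    uint32_t value = 5;
    uint32_t wire = htonl(value);   /* host format -> canonical (network) byte order */
    uint32_t back = ntohl(wire);    /* what the receiver reconstructs on its own host */

    printf("host: %u  on the wire: 0x%08x  decoded: %u\n",
           (unsigned)value, (unsigned)wire, (unsigned)back);
    return 0;
}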
How are pointers passed?
One option is not to use pointers at all; this is highly undesirable.
Another is to copy the array into the message and send it to the server. When the server finishes, the array can be copied back to the client.
Better still, distinguish input arrays from output arrays: an input array need not be copied back, and an output array need not be sent over to the server in the first place.
These approaches still cannot handle the most general case: a pointer to an arbitrary data structure such as a complex graph.
How can a client locate the
server?
hardwire the server network address into
the client.
Disadvantage: inflexible.
Dynamic Binding
Server: exports the server interface.
The server registers with a binder (a program); that is, it gives the binder its name, its version number, a unique identifier, and a handle.
The server can also deregister when it is no
longer prepared to offer service.
How does the client locate the server?
When the client calls one of the remote procedures, say read, for the first time, the client stub sees that it is not yet bound to a server.
The client stub sends a message to the binder asking to import version 3.1 of the file-server interface.
The binder checks to see whether one or more servers have already exported an interface with this name and version number.
If no server is willing to support this interface, the read call fails; if a suitable server exists, the binder gives its handle and unique identifier to the client stub.
The client stub uses the handle as the address to send the request message to.
Advantages
It can handle multiple servers that support the
same interface
The binder can spread the clients randomly over
the servers to even the load
It can also poll the servers periodically,
automatically deregistering any server that fails to
respond, to achieve a degree of fault tolerance
It can also assist in authentication, because a server could specify that it only wishes to be used by a specific list of users.
Disadvantage
The extra overhead of exporting and importing interfaces costs time.
Server Crashes
The server can crash before the execution or after
the execution
The client cannot distinguish between these two cases.
The client can:
Wait until the server reboots and try the operation again (at-least-once semantics).
Give up immediately and report back failure (at-most-once semantics).
Guarantee nothing.
Client Crashes
If a client sends a request to a server and
crashes before the server replies, then a
computation is active and no parent is
waiting for the result. Such an unwanted
computation is called an orphan.
Problems with orphans
They waste CPU cycles
They can lock files or tie up valuable
resources
If the client reboots and does the RPC
again, but the reply from the orphan comes
back immediately afterward, confusion can
result
What to do with orphans?
Extermination: Before a client stub sends an
RPC message, it makes a log entry telling what it
is about to do. After a reboot, the log is checked
and the orphan is explicitly killed off.
Disadvantage: the expense of writing a disk
record for every RPC; it may not even work,
since orphans themselves may do RPCs, thus
creating grandorphans or further descendants
that are impossible to locate.
Reincarnation: Divide time up into
sequentially numbered epochs. When a
client reboots, it broadcasts a message to
all machines declaring the start of a new
epoch. When such a broadcast comes in, all
remote computations are killed.
Gentle reincarnation: when an epoch
broadcast comes in, each machine checks
to see if it has any remote computations,
and if so, tries to locate their owner. Only if
the owner cannot be found is the
computation killed.
Expiration: Each RPC is given a standard
amount of time, T, to do the job. If it
cannot finish, it must explicitly ask for
another quantum. On the other hand, if
after a crash the server waits a time T
before rebooting, all orphans are sure to be
gone.
None of the above methods are desirable.
Implementation Issues
the choice of the RPC protocol:
connection-oriented or connectionless
protocol?
general-purpose protocol or specifically
designed protocol for RPC?
packet and message length
Acknowledgements
Flow control
overrun error: with some designs, a chip
cannot accept two back-to-back packets
because after receiving the first one, the
chip is temporarily disabled during the
packet-arrived interrupt, so it misses the
start of the second one.
How to deal with overrun error?
If the problem is caused by the chip being
disabled temporarily while it is processing an
interrupt, a smart sender can insert a delay
between packets to give the receiver just enough
time.
If the problem is caused by the finite buffer
capacity of the network chip, say n packets, the
sender can send n packets, followed by a
substantial gap.
Timer Management
[Figure: timer management example. The current time is 14200; the process table records, for processes 0, 2, and 3, pending timeouts at 14205, 14212, and 14216.]
Group Communication
RPC can have one-to-one communication (unicast), one-to-many communication (multicast), and one-to-all communication (broadcast).
Multicasting can be implemented using broadcast. Each machine receives the message; if the message is not for this machine, it is discarded.
Closed groups: only the members of the group can send messages to the group. Outsiders cannot.
Open groups: any process in the system can send
messages to the group.
Peer group: all the group members are equal.
Advantage: symmetric and has no single point of failure.
Disadvantage: decision making is difficult. A vote has to be taken.
Group Membership
Management
Centralized way: group server maintains a
complete data base of all the groups and
their exact membership.
Advantage: straightforward, efficient, and easy to implement.
Disadvantage: single point of failure.
Group Addressing
A process just sends a message to a group address
and it is delivered to all the members. The sender
is not aware of the size of the group or whether
communication is implemented by multicasting,
broadcasting, or unicasting.
Require the sender to provide an explicit list of all
destinations (e.g., IP addresses).
Each message contains a predicate (Boolean
expression) to be evaluated. If it is true, accept; If
false, discard.
Send and Receive Primitives
If we wish to merge RPC and group
communication, to send a message, one of
the parameters of send indicates the
destination. If it is a process address, a
single message is sent to that one process.
If it is a group address, a message is sent to
all members of the group.
Atomicity
How to guarantee atomic broadcast and fault
tolerance?
The sender starts out by sending the message to all members of the group. Timers are set and retransmissions are sent where necessary.
When a process receives a message, if it has not yet seen this particular message, it, too, sends the message to all members of the group (again with timers and retransmissions if necessary). If it has already seen the message, this step is not necessary and the message is discarded.
No matter how many machines crash or how many packets are lost, eventually all the surviving processes will get the message.
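A toy, single-process simulation of this flooding rule (illustrative names, no real network, timers, or retransmissions; it only shows the forward-if-not-yet-seen logic):
#include <stdbool.h>
#include <stdio.h>

#define NPROC 5

static bool seen[NPROC];     /* has process i already seen the message? */
static bool crashed[NPROC];  /* crashed members never deliver */

static void deliver(int to, int msg_id)
{
    if (crashed[to] || seen[to])
        return;                              /* already seen: discard */
    seen[to] = true;
    printf("process %d delivers message %d\n", to, msg_id);
    for (int p = 0; p < NPROC; p++)          /* forward to every other member */
        if (p != to)
            deliver(p, msg_id);
}

int main(void)
{
    crashed[3] = true;                       /* one member has crashed */
    deliver(0, 42);                          /* the sender starts the flood */
    return 0;                                /* every surviving process got the message once */
}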
Message Ordering
Use global time ordering or consistent time ordering.
Overlapping Groups
Overlapping groups can lead to a new kind
of inconsistency.
[Figure: processes B and C belong to both Group 1 (A, B, C) and Group 2 (B, C, D). Messages 1-4 sent to the two groups can arrive at B and C in different relative orders, producing an inconsistency.]
Scalability
Many algorithms work fine as long as all
the groups only have a few members, but
what happens when there are tens,
hundreds, or even thousands of members
per group? If the algorithm still works
properly, the property is called scalability.
Asynchronous Transfer Mode
Networks (ATM)
When the telephone companies decided to build networks
for the 21st century, they faced a dilemma:
Voice traffic is smooth, needing a low, but constant
bandwidth.
Data traffic is bursty, needing no bandwidth (when there
is no traffic), but sometimes needing a great deal for very
short periods of time.
Neither traditional circuit switching (used in the Public
Switched Telephone Network) nor packet switching (used
in the Internet) was suitable for both kinds of traffic.
After much study, a hybrid form using
fixed-size blocks over virtual circuits was
chosen as a compromise that gave
reasonably good performance for both
types of traffic. The scheme is called ATM.
ATM
The idea of ATM is that a sender first establishes a connection (i.e., a virtual circuit) to the receiver. During connection establishment, a route is determined from the sender to the receiver and routing information is stored in the switches along the way. Using this connection, packets can be sent, but they are chopped up into small, fixed-sized units called cells. The cells for a given virtual circuit all follow the path stored in the switches. When the connection is no longer needed, it is released and the routing information is purged from the switches.
A virtual circuit
[Figure: a virtual circuit from the sender to the receiver, passing through a series of routers along the established path.]
Advantages: now a single network can be
used to transport an arbitrary mix of voice,
data, broadcast television, videotapes,
radio, and other information efficiently,
replacing what were previously separate
networks (telephone, X.25, cable TV, etc.).
Video conferencing can use ATM.
ATM reference model
Upper layers
Adaptation layer
ATM layer
Physical layer
The ATM physical layer has the same
functionality as layer 1 in the OSI model.
The ATM layer deals with cells and cell
transport, including routing.
The adaptation layer handles breaking packets
into cells and reassembling them at the other end.
The upper layer makes it possible to have ATM
offer different kinds of services to different
applications.
An ATM cell
[Figure: an ATM cell is 53 bytes long: a 5-byte header followed by 48 bytes of payload.]