DCCN Notes
Subsequently, all cells follow the same path to the destination. It can handle both constant rate
traffic and variable rate traffic. Thus it can carry multiple types of traffic with end-to-end quality
of service. ATM is independent of the transmission medium: cells may be sent on a wire or fiber by
themselves, or packaged inside the payload of other carrier systems. ATM
networks use "packet" (or "cell") switching with virtual circuits. Its design helps in the
implementation of high-performance multimedia networking.
ATM Layers:
1. ATM Adaptation Layer (AAL) – It isolates higher-layer protocols from the
details of ATM processes, prepares user data for conversion into cells, and
segments it into 48-byte cell payloads. The AAL protocol accepts transmissions from upper-
layer services and maps applications, e.g., voice and data, onto ATM cells.
2. Physical Layer –
It manages the medium-dependent transmission and is divided into two parts physical
medium-dependent sublayer and transmission convergence sublayer. The main functions
are as follows:
o It converts cells into a bitstream.
o It controls the transmission and receipt of bits in the physical medium.
o It can track the ATM cell boundaries.
o It looks after the packaging of cells into the appropriate type of frames.
3. ATM Layer –
It handles transmission, switching, congestion control, cell header processing, sequential
delivery, etc., and is responsible for simultaneously sharing the virtual circuits over the
physical link known as cell multiplexing and passing cells through an ATM network
known as cell relay making use of the VPI and VCI information in the cell header.
ATM Applications:
1. ATM WANs – ATM can be used as a WAN to send cells over long distances, with a router
serving as the end-point between the ATM network and other networks; such a router runs
two protocol stacks.
2. Frame relay backbone – Frame relay services are used as a networking infrastructure
for a range of data services, enabling frame-relay-to-ATM internetworking
services.
3. Carrier infrastructure for telephone and private line networks – ATM infrastructure can
be built for carrying telephone and private-line traffic, making more effective use of
SONET/SDH fiber infrastructure.
ATM PROTOCOL ARCHITECTURE
4.1 What is ATM protocol architecture?
The asynchronous transfer mode (ATM) protocol architecture is designed to support the transfer
of data with a range of guarantees for quality of service. The user data is divided into small,
fixed-length packets, called cells, and transported over virtual connections. ATM operates over
high data rate physical circuits, and the simple structure of ATM cells allows switching to be
performed in hardware, which improves the speed and efficiency of ATM switches.
Figure 24 shows the reference model for ATM. The first thing to notice is that, as well as layers,
the model has planes. The functions for transferring user data are located in the user plane; the
functions associated with the control of connections are located in the control plane; and the co-
ordination functions associated with the layers and planes are located in the management plane.
The three-dimensional representation of the ATM protocol architecture is intended to portray the
relationship between the different types of protocol. The horizontal layers indicate the
encapsulation of protocols through levels of abstraction as one layer is built on top of another,
whereas the vertical planes indicate the functions that require co-ordination of the actions taken
by different layers. An advantage of dividing the functions into control and user planes is that it
introduces a degree of independence in the definition of the functions: the protocols for
transferring user data (user plane) are separated from the protocols for controlling connections
(control plane).
The protocols in the ATM layer provide communication between ATM switches while the
protocols in the ATM adaptation layer (AAL) operate end-to-end between users. This is
illustrated in the example ATM network in Figure 25.
Two types of interface are identified in Figure 24: one between the users and the network (user-
network interface), and the other between the nodes (switches) within the network (network-
node interface).
Figure 25 ATM network
Before describing the functions of the three layers in the ATM reference model, I shall briefly
describe the format of ATM cells. Figure 26 shows the two basic types of cell.
2. Increased network performance and reliability: The network deals with fewer, aggregated
entities.
3. Reduced processing and short connection setup time: Much of the work is done when the
virtual path is set up. By reserving capacity on a virtual path connection in anticipation of later
call arrivals, new virtual channel connections can be established by executing simple control
functions at the endpoints of the virtual path connection; no call processing is required at transit
nodes. Thus, the addition of new virtual channels to an existing virtual path involves minimal
processing.
4. Enhanced network services: The virtual path is used internal to the network but is also
visible to the end user. Thus, the user may define closed user groups or closed networks of
virtual channel bundles.
Figure 2 suggests in a general way the call establishment process using virtual channels and
virtual paths. The process of setting up a virtual path connection is decoupled from the process of
setting up an individual virtual channel connection.
The virtual path control mechanisms include calculating routes, allocating capacity, and
storing connection state information.
To set up a virtual channel, there must first be a virtual path connection to the required
destination node with sufficient available capacity to support the virtual channel, with the
appropriate quality of service. A virtual channel is set up by storing the required state
information (virtual channel/virtual path mapping).
ATM CELL
ATM Cell Format – Information is transmitted in ATM in the form of fixed-size units called
cells. As noted already, each cell is 53 bytes long and consists of a 5-byte header and a 48-byte
payload.
The ATM cell header can be of two format types, which are as follows:
1. UNI Header: This is used within private ATM networks for communication between
ATM endpoints and ATM switches. It includes the Generic Flow Control (GFC) field.
2. NNI Header: This is used for communication between ATM switches. It does not include
the Generic Flow Control (GFC) field; instead, the Virtual Path Identifier (VPI) expands
to occupy the first 12 bits.
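The 5-byte header layouts above can be made concrete with a small parser. The sketch below is illustrative rather than code from any real ATM stack; the bit layout follows the standard UNI/NNI formats (GFC 4 bits at UNI only, VPI 8 or 12 bits, VCI 16 bits, PT 3 bits, CLP 1 bit, HEC 8 bits):

```python
def parse_atm_header(header: bytes, nni: bool = False) -> dict:
    """Parse a 5-byte ATM cell header into its fields.

    UNI: GFC(4) VPI(8)  VCI(16) PT(3) CLP(1) HEC(8)
    NNI:        VPI(12) VCI(16) PT(3) CLP(1) HEC(8)
    """
    assert len(header) == 5, "an ATM header is always 5 bytes"
    # Treat the first 4 bytes as one 32-bit big-endian word; byte 5 is the HEC.
    word = int.from_bytes(header[:4], "big")
    fields = {}
    if nni:
        fields["vpi"] = (word >> 20) & 0xFFF  # 12-bit VPI, no GFC at the NNI
    else:
        fields["gfc"] = (word >> 28) & 0xF    # 4-bit Generic Flow Control
        fields["vpi"] = (word >> 20) & 0xFF   # 8-bit VPI
    fields["vci"] = (word >> 4) & 0xFFFF      # 16-bit VCI
    fields["pt"] = (word >> 1) & 0x7          # 3-bit Payload Type
    fields["clp"] = word & 0x1                # Cell Loss Priority bit
    fields["hec"] = header[4]                 # Header Error Control byte
    return fields
```

For example, a UNI header carrying VPI 5 and VCI 33 would decode to `{"gfc": 0, "vpi": 5, "vci": 33, ...}`.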
Working of ATM: The ATM standard uses two types of connections: virtual path connections
(VPCs), which consist of virtual channel connections (VCCs) bundled together. A VCC is the basic
unit, carrying a single stream of cells from user to user. A virtual path can be created end-to-end
across an ATM network, as it does not route the cells to a particular virtual circuit. In case of a
major failure, all cells belonging to a particular virtual path are routed the same way through the
ATM network, thus helping in faster recovery.
Switches connected to subscribers use both VPIs and VCIs to switch cells. Virtual path and
virtual connection switches can have different virtual channel connections
between them, serving the purpose of creating a virtual trunk between the switches that can be
handled as a single entity. The basic switching operation is straightforward: look up the connection
value in the local translation table to determine the outgoing port of the connection and the new
VPI/VCI value of the connection on that link.
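The table lookup described above can be sketched in a few lines. The table contents and function names below are hypothetical, purely to illustrate how a switch rewrites the VPI/VCI on every hop:

```python
# Hypothetical translation table of one switch:
# (in_port, vpi, vci) -> (out_port, new_vpi, new_vci)
translation_table = {
    (1, 10, 100): (3, 20, 200),
    (2, 10, 101): (3, 20, 201),
}

def switch_cell(in_port: int, vpi: int, vci: int):
    """Look up the incoming connection and return the outgoing port
    together with the rewritten VPI/VCI for the next link."""
    try:
        return translation_table[(in_port, vpi, vci)]
    except KeyError:
        return None  # unknown connection: the cell is dropped
```

Because the lookup key is a fixed-size label rather than a full address, this operation is simple enough to implement in switch hardware, which is the point made above about ATM switching speed.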
ATM is "virtual circuit" based: the path is reserved before transmission, whereas the Internet
Protocol (IP) is connectionless, so end-to-end resource reservations are not possible.
RSVP is a newer signaling protocol for resource reservation on the Internet.
ATM cells are of fixed, small size (the tradeoff is between voice and data), while IP packets
are of variable size.
Addressing: ATM uses 20-byte global NSAP addresses for signaling and locally
assigned VPI/VCI labels in cells, while IP uses 32-bit global addresses in all packets.
The size of an ATM cell is 53 bytes: a 5-byte header and a 48-byte payload. There are two different
cell formats - user-network interface (UNI) and network-network interface (NNI). The image below
represents the Functional Reference Model of Asynchronous Transfer Mode.
The diagram above shows the relationship of the different service classes to the total capacity of the network.
3. Broadcast Storms –
A broadcast storm occurs when there is a sudden upsurge in the number of requests to a
network. As a result, a network may be unable to handle all of the requests at the same
time.
4. Multicasting –
Multicasting occurs when a network allows multiple computers to communicate with
each other at the same time. In multicasting, a collision can occur when two packets are
sent at the same time. Such frequent collisions may cause a network to be congested.
7. Outdated Hardware –
When data is transmitted over old switches, routers, servers, and Internet exchanges,
bottlenecks can emerge. Data transmission can get hampered or slowed down due to
outdated hardware. As a result, network congestion occurs.
8. Over-subscription –
A cost-cutting tactic that can result in the network being compelled to accommodate far
more traffic than it was designed to handle (at the same time).
Congestion at the network layer is related to two issues: throughput and delay.
1. Based on delay
When the load is much less than the capacity of the network, the delay is at a minimum.
This minimum delay is composed of propagation delay and processing delay, both of which are
negligible.
However, when the load reaches the network capacity, the delay increases sharply because we
now need to add the queuing delay to the total delay.
The delay becomes infinite when the load is greater than the capacity.
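This behaviour matches the classic normalized-delay curve from queueing theory, where delay grows roughly as 1/(1 − ρ) with ρ = load/capacity. The sketch below only illustrates the shape of that curve; it is not a precise model of any particular network:

```python
def normalized_delay(load: float, capacity: float) -> float:
    """Textbook approximation: delay grows as 1/(1 - rho),
    where rho = load / capacity. It diverges as the load
    approaches capacity, and is unbounded above it."""
    rho = load / capacity
    if rho >= 1.0:
        return float("inf")  # load at or above capacity: delay is unbounded
    return 1.0 / (1.0 - rho)
```

At half load the normalized delay is only 2, but at 90% load it is already 10, which is why the delay curve "knees" sharply near capacity.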
CONGESTION CONTROL
Congestion control algorithms
Congestion Control is a mechanism that controls the entry of data packets into the
network, enabling a better use of a shared network infrastructure and avoiding congestive
collapse.
Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.
There are two congestion control algorithms, which are as follows:
Leaky Bucket Algorithm
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket,
the outflow is at a constant rate. When the bucket is full, any additional water entering spills
over the sides and is lost.
Similarly, each network interface contains a leaky bucket and the following steps are involved in
leaky bucket algorithm:
1. When a host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits packets at a
constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
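The steps above can be sketched as a small simulation. The class and method names are illustrative, assuming time advances in discrete ticks:

```python
from collections import deque

class LeakyBucket:
    """Leaky bucket shaper: a finite FIFO queue drained at a constant rate."""

    def __init__(self, capacity: int, rate: int):
        self.queue = deque()
        self.capacity = capacity  # maximum packets the bucket can hold
        self.rate = rate          # packets transmitted per tick, no matter what

    def arrive(self, packet) -> bool:
        """Step 1: an arriving packet is thrown into the bucket."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False              # bucket full: the packet "spills over" and is lost

    def tick(self) -> list:
        """Step 2: the bucket leaks at a constant rate each tick,
        so bursty arrivals leave as a uniform stream."""
        sent = []
        for _ in range(min(self.rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent
```

A burst of four packets into a bucket of capacity three loses the fourth on arrival, and the survivors drain out one per tick, which is exactly the bursty-to-uniform conversion described in step 3.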
Token bucket Algorithm
The leaky bucket algorithm has a rigid output design: it transmits at an average rate regardless of
how bursty the traffic is.
In some applications, when large bursts arrive, the output is allowed to speed up. This
calls for a more flexible algorithm, preferably one that never loses information.
Therefore, a token bucket algorithm finds its uses in network traffic shaping or rate-
limiting.
It is a control algorithm that indicates when traffic can be sent, based on the presence of
tokens in the bucket.
The bucket contains tokens, each of which permits the transmission of a packet of a
predetermined size. A token is removed from the bucket for each packet transmitted.
If tokens are present, a flow is allowed to transmit its traffic; if there are no tokens, no flow
may send its packets. Hence, a flow can transmit up to its peak burst rate as long as there are
enough tokens in the bucket.
The leaky bucket algorithm enforces a fixed output pattern at the average rate, no matter how bursty the
traffic is. So, in order to deal with bursty traffic, we need a more flexible algorithm so that the data
is not lost. One such algorithm is the token bucket algorithm.
In figure (A) we see a bucket holding three tokens, with five packets waiting to be transmitted.
For a packet to be transmitted, it must capture and destroy one token. In figure (B) we see that
three of the five packets have gotten through, but the other two are stuck waiting for more tokens
to be generated.
Ways in which the token bucket is superior to the leaky bucket: The leaky bucket algorithm controls
the rate at which packets are introduced into the network, but it is very conservative in nature.
Some flexibility is introduced in the token bucket algorithm. In the token bucket algorithm,
tokens are generated at each tick (up to a certain limit), and an incoming packet must capture a
token before it is transmitted. Hence, bursty packets can be transmitted back-to-back as long as
tokens are available, which introduces some amount of flexibility into the system.
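A minimal sketch of the token bucket, again assuming time advances in discrete ticks (class and method names are illustrative):

```python
class TokenBucket:
    """Token bucket: tokens accrue at a fixed rate up to a burst limit;
    each transmitted packet consumes one token."""

    def __init__(self, rate: int, burst: int):
        self.rate = rate     # tokens generated per tick
        self.burst = burst   # maximum tokens the bucket can hold
        self.tokens = burst  # start with a full bucket

    def tick(self):
        """Generate tokens at each tick, up to the burst limit."""
        self.tokens = min(self.burst, self.tokens + self.rate)

    def try_send(self) -> bool:
        """A packet must capture (and destroy) a token to be transmitted."""
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # no token: the packet waits instead of being lost
```

The key contrast with the leaky bucket is visible in `try_send`: a burst can drain several saved-up tokens back-to-back, and when the bucket is empty, packets wait rather than spill, so no data is lost.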
3. Discarding Policy: A good discarding policy allows routers to prevent congestion by
partially discarding corrupted or less sensitive packets while still maintaining the
quality of the message.
In the case of audio file transmission, routers can discard less sensitive packets to prevent
congestion while maintaining the quality of the audio file.
4. Acknowledgment Policy: Since acknowledgements are also part of the load on the
network, the acknowledgment policy imposed by the receiver may also affect congestion.
Several approaches can be used to reduce congestion related to acknowledgments:
the receiver can send an acknowledgement for N packets rather than sending an
acknowledgement for every single packet, or it can send an acknowledgment only
when it has a packet to send or a timer expires.
2. Choke Packet Technique: The choke packet technique is applicable to both virtual circuit networks
and datagram subnets. A choke packet is a packet sent by a node to the source to inform it of
congestion. Each router monitors its resources and the utilization of each of its output lines.
Whenever the resource utilization exceeds a threshold value set by the administrator,
the router sends a choke packet directly to the source, giving it feedback to reduce the traffic.
The intermediate nodes through which the packets have traveled are not warned about the congestion.
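The per-line monitoring step can be sketched as follows. The threshold value and the choke-message format here are hypothetical stand-ins for whatever the administrator configures:

```python
UTILIZATION_THRESHOLD = 0.8  # assumed administrator-set threshold

def check_output_line(cells_sent: int, line_capacity: int, source: str):
    """Monitor one output line; if its utilization exceeds the threshold,
    return a choke packet addressed directly to the source (intermediate
    nodes are not involved)."""
    utilization = cells_sent / line_capacity
    if utilization > UTILIZATION_THRESHOLD:
        return {"type": "choke", "to": source}  # feedback: reduce traffic
    return None  # below threshold: no action needed
```

A router would run a check like this for each output line; only the offending source receives the feedback, which is exactly the property distinguishing this technique from explicit signaling below.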
3. Implicit Signaling: In implicit signaling, there is no communication between the congested
nodes and the source; the source guesses that there is congestion in the network. For example,
when a sender sends several packets and receives no acknowledgment for a while, one assumption
is that the network is congested.
4. Explicit Signaling: In explicit signaling, a node experiencing congestion explicitly
sends a signal to the source or destination to inform it of the congestion. The difference between
the choke packet technique and explicit signaling is that in explicit signaling the signal is included
in the packets that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
Forward Signaling: In forward signaling, a signal is sent in the direction of the
congestion. The destination is warned about the congestion, and the receiver in this case adopts
policies to prevent further congestion.
Backward Signaling: In backward signaling, a signal is sent in the opposite direction to the
congestion. The source is warned about the congestion and needs to slow down.
Types:
1. Permanent Virtual Circuit (PVC) – These are permanent connections between
frame relay nodes that exist for long durations. They are always available for
communication, even when not in use. These connections are static and do not
change with time.
2. Switched Virtual Circuit (SVC) – These are temporary connections between frame
relay nodes that exist only for the duration for which the nodes are communicating with each
other and are closed/discarded after the communication. These connections are established
dynamically as per the requirements.
Advantages:
1. High speed
2. Scalable
3. Reduced network congestion
4. Cost-efficient
5. Secured connection
Disadvantages:
1. Lacks error control mechanism
2. Delay in packet transfer
3. Less reliable
Traffic Contract
An ATM WAN is frequently a public network owned and managed by a service provider who
supports multiple customers. These customers agree upon and pay for a certain level of
bandwidth and performance from the service provider over that WAN. This agreement becomes
the basis of the traffic contract, which defines the traffic parameters and the QoS that is
negotiated for each virtual connection for that user on the network.
References to the traffic contract in an ATM network represent a couple of things. First, the
traffic contract represents an actual service agreement between the user and the service provider
for the expected network-level support. Second, the traffic contract refers to the specific traffic
parameters and QoS values negotiated for an ATM virtual connection at call setup, which are
implemented during data flow to support that service agreement.
The traffic contract also establishes the criteria for policing of ATM virtual connections on the
network to ensure that violations of the agreed-upon service levels do not occur.
ATM GFR
Guaranteed Frame Rate (GFR) has been recently proposed in the ATM Forum as an
enhancement to the UBR service category. Guaranteed Frame Rate will provide a minimum rate
guarantee to VCs at the frame level. The GFR service also allows for the fair usage of any extra
network bandwidth. GFR requires minimum signaling and connection management functions,
and depends on the network's ability to provide a minimum rate to each VC. GFR is likely to be
used by applications that can neither specify the traffic parameters needed for a VBR VC, nor
have the capability for ABR (rate-based feedback control). Current internetworking applications
fall into this category, and are not designed to run over QoS-based networks. These applications
could benefit from a minimum rate guarantee by the network, along with an opportunity to fairly
use any additional bandwidth left over from higher-priority connections. In the case of LANs
connected by ATM backbones, network elements outside the ATM network could also benefit
from GFR guarantees. For example, IP routers separated by an ATM network could use GFR
VCs to exchange control messages. Figure 1 illustrates such a case, where the ATM cloud
connects several LANs and routers. ATM end systems may also establish GFR VCs for
connections that can benefit from a minimum throughput guarantee.
Figure 1: Use of GFR in ATM connected LANs
The original GFR proposals [11, 12] give the basic definition of the GFR service. GFR provides
a minimum rate guarantee to the frames of a VC. The guarantee requires the specification of a
maximum frame size (MFS) of the VC. If the user sends packets (or frames) smaller than the
maximum frame size, at a rate less than the minimum cell rate (MCR), then all the packets are
expected to be delivered by the network with minimum loss. If the user sends packets at a rate
higher than the MCR, it should still receive at least the minimum rate. The minimum rate is
guaranteed to the untagged frames of the connection. In addition, a connection sending in excess
of the minimum rate should receive a fair share of any unused network capacity. The exact
specification of the fair share has been left unspecified by the ATM Forum. Although the GFR
specification is not yet finalized, the above discussion captures the essence of the service.
There are three basic design options that can be used by the network to provide the per-VC
minimum rate guarantees for GFR - tagging, buffer management, and queueing:
1.
Tagging: Network-based tagging (or policing) can be used as a means of marking non-
conforming packets before they enter the network. This form of tagging is usually
performed when the connection enters the network. Figure 2 shows the role of network-
based tagging in providing a minimum rate service in a network. Network-based tagging
on a per-VC level requires some per-VC state information to be maintained by the
network and increases the complexity of the network element. Tagging can isolate
conforming and non-conforming traffic of each VC so that other rate enforcing
mechanisms can use this information to schedule the conforming traffic in preference to
non-conforming traffic. In a more general sense, policing can be used to discard non-
conforming packets, thus allowing only conforming packets to enter the network.
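A per-VC tagger of this kind can be sketched roughly as follows. The single-interval byte budget is a deliberate simplification of a real GCRA-style policer, and the function name is invented for illustration:

```python
def police_frames(frame_sizes: list, mcr: int, mfs: int) -> list:
    """Hypothetical per-VC tagger for one measurement interval.

    Frames no larger than the MFS that fit within the MCR budget stay
    untagged; excess frames are tagged (CLP=1) rather than discarded,
    so downstream schedulers can prefer conforming traffic.
    """
    budget = mcr  # bytes this VC may send untagged in the interval
    tags = []
    for size in frame_sizes:
        if size <= mfs and size <= budget:
            budget -= size
            tags.append(False)  # conforming: leave untagged
        else:
            tags.append(True)   # non-conforming: tag the whole frame
    return tags
```

Because tagging is done per frame rather than per cell, a frame is either entirely conforming or entirely tagged, which matches the frame-level guarantee GFR provides.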