Unit 2
The Data Link layer is located between the Physical and Network layers. It provides
services to the Network layer and receives services from the Physical layer.
The scope of the data link layer is node-to-node.
The following are the design issues in the Data Link Layer −
Types of Services
The services offered to the Network layer are of three types −
1. Unacknowledged connectionless service
2. Acknowledged connectionless service
3. Acknowledged connection-oriented service
Framing
Framing is the function of the data link layer that provides a way for a sender to
transmit a set of bits that are meaningful to the receiver.
The following are the three types of framing methods that are used in Data Link
Layer −
Byte-oriented framing
Bit-oriented framing
Clock-based framing
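The byte-oriented method is commonly implemented with byte stuffing. The sketch below is illustrative, not a real protocol implementation; the FLAG and ESC values are assumptions borrowed from HDLC/PPP-style framing:

```python
# Byte-oriented framing with byte stuffing (illustrative sketch).
# FLAG marks the frame boundaries; ESC escapes any FLAG/ESC byte in the payload.
FLAG, ESC = 0x7E, 0x7D

def frame(payload: bytes) -> bytes:
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)      # escape special bytes
        stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def unframe(data: bytes) -> bytes:
    payload = bytearray()
    escaped = False
    for b in data[1:-1]:             # strip the two FLAG delimiters
        if escaped:
            payload.append(b)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            payload.append(b)
    return bytes(payload)
```

Stuffing guarantees that a FLAG byte inside the payload is never mistaken for a frame boundary by the receiver.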
Error Control
At the sending node, a frame in the data link layer needs to be changed to bits,
transformed to electromagnetic signals, and transmitted through the
transmission media. At the receiving node, the electromagnetic signals are
received, transformed back to bits, and put together to create a frame. Since
signals may be corrupted in transit, error control provides mechanisms to detect
damaged or lost frames and, where required, to retransmit or correct them.
Flow Control
Flow control allows two nodes that work at different speeds to communicate
with each other. The data link layer enforces flow control so that when a fast
sender transmits data, a slow receiver is not overwhelmed and can accept the
data at its own pace.
Error Detection
Error
An error is a condition in which the receiver's information does not match the
sender's information. During transmission, digital signals suffer from noise that
can introduce errors into the binary bits travelling from sender to receiver, so
the data may get corrupted. To detect this, we use error-detecting codes, which
are additional bits added to a given digital message to help us detect whether
any error occurred during transmission. The basic approach used for error
detection is the use of redundancy bits, where additional bits are appended to
the data to check its integrity at the destination. The common error-detection
techniques are:
1. Simple parity check
2. Two-dimensional parity check
3. Checksum
4. Cyclic Redundancy Check (CRC)
Simple Parity Check
Blocks of data from the source are subjected to a parity-bit generator, which
appends a single check bit (parity bit) to each block. This scheme makes the
total number of 1s even, which is why it is called even parity checking.
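A minimal sketch of even parity (helper names are illustrative):

```python
def even_parity_bit(bits):
    """Return the parity bit that makes the total count of 1s even."""
    return sum(bits) % 2

data = [1, 0, 1, 1, 0, 1, 0]                 # four 1s, so the parity bit is 0
codeword = data + [even_parity_bit(data)]
# Receiver check: the count of 1s in the received codeword must be even.
received_ok = sum(codeword) % 2 == 0
```

A single parity bit detects any odd number of flipped bits but misses an even number of flips.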
Two-dimensional Parity check
Parity check bits are calculated for each row, which is equivalent to a simple
parity check. Parity check bits are also calculated for all columns, and then both
are sent along with the data. At the receiving end, these are compared with the
parity bits calculated from the received data; any mismatch indicates an error.
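The row-and-column calculation can be sketched as follows (helper names are illustrative):

```python
def two_d_parity(block):
    """Append an even-parity bit to each row, then a parity row over all columns."""
    rows = [row + [sum(row) % 2] for row in block]
    col_parity = [sum(col) % 2 for col in zip(*rows)]
    return rows + [col_parity]

block = [[1, 0, 1, 1],
         [0, 1, 1, 0]]
encoded = two_d_parity(block)
# Every row and every column of the encoded block now has an even count of 1s.
```

Compared with simple parity, the two-dimensional scheme can also detect many burst errors, since a burst usually disturbs several columns.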
Checksum
In the checksum error-detection scheme, the data is divided into k segments,
each of m bits.
At the sender's end, the segments are added using 1's complement
arithmetic to get the sum. The sum is complemented to get the checksum,
which is sent along with the data.
At the receiver's end, all received segments (including the checksum) are added
using 1's complement arithmetic, and the sum is complemented. If the result is
zero, the received data is accepted; otherwise, it is discarded.
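The sender and receiver steps above can be sketched as follows; the four 8-bit segments are an illustrative example, and the helper names are assumptions:

```python
def ones_complement_sum(segments, m=8):
    """Add m-bit segments with end-around carry (1's complement arithmetic)."""
    total, mask = 0, (1 << m) - 1
    for s in segments:
        total += s
        total = (total & mask) + (total >> m)   # wrap the carry around
    return total

def checksum(segments, m=8):
    """Complement of the 1's complement sum."""
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)

segments = [0b10011001, 0b11100010, 0b00100100, 0b10000100]
ck = checksum(segments)
# Receiver: the 1's complement sum of all segments plus the checksum is all 1s.
verify = ones_complement_sum(segments + [ck])
```

The end-around carry is what distinguishes 1's complement addition from ordinary binary addition.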
Cyclic Redundancy Check (CRC)
Unlike the checksum scheme, which is based on addition, CRC is based on
binary division. In CRC, a sequence of redundant bits, called cyclic redundancy
check bits, are appended to the end of the data unit so that the resulting data
unit becomes exactly divisible by a second, predetermined binary number.
At the destination, the incoming data unit is divided by the same number. If
at this step there is no remainder, the data unit is assumed to be correct and
is therefore accepted.
A remainder indicates that the data unit has been damaged in transit and must
therefore be rejected.
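The division step can be sketched with mod-2 (XOR) arithmetic; the data word 100100 and divisor 1101 are illustrative values:

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Mod-2 (XOR) binary long division; returns the remainder as a bit string."""
    bits = list(dividend)
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == "1":
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-(len(divisor) - 1):])

data, gen = "100100", "1101"
crc = mod2_div(data + "0" * (len(gen) - 1), gen)   # sender appends n zero bits
codeword = data + crc                              # transmitted data unit
# Receiver divides the whole codeword; a zero remainder means no error detected.
```

With divisor 1101, the remainder for 100100 is 001, and dividing the codeword 100100001 leaves remainder 000.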
Error Correction
Error correction codes are used to detect and correct errors when data is
transmitted from the sender to the receiver. Error correction can be handled in
two ways:
Backward error correction: Once an error is discovered, the receiver requests
the sender to retransmit the entire data unit.
Forward error correction: In this case, the receiver uses the error-correcting
code to automatically correct the errors.
A single additional parity bit can detect an error, but it cannot correct it. For
correcting errors, one has to know the exact position of the error. For
example, to correct a single-bit error in a 7-bit codeword, the error correction
code must determine which one of the seven bits is in error. To achieve this, we
have to add some additional redundant bits.
Suppose r is the number of redundant bits and d is the number of data bits.
The number of redundant bits r is the smallest value satisfying the relation:
2^r >= d + r + 1
For example, if the value of d is 4, then the smallest value of r that satisfies the
above relation is 3.
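The relation can be checked with a short loop (the helper name is illustrative):

```python
def redundant_bits(d: int) -> int:
    """Smallest r satisfying 2**r >= d + r + 1."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r
```

For d = 4 data bits this gives r = 3, matching the example above; for d = 7 it gives r = 4.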
The most common error-correcting code, developed by R.W. Hamming, is the
Hamming code, which can be applied to data units of any length and uses the
relationship between data bits and redundant bits described above.
Hamming Code
Parity bit: A bit appended to the original data bits so that the total number of
1s becomes even (for even parity) or odd (for odd parity).
Even parity: If the total number of 1s is even, the value of the parity bit is 0. If
the total number of 1s is odd, the value of the parity bit is 1.
Odd parity: If the total number of 1s is even, the value of the parity bit is 1. If
the total number of 1s is odd, the value of the parity bit is 0.
Algorithm of Hamming code:
1. An information of d bits is combined with r redundant bits to form a
codeword of (d + r) bits.
2. The location of each of the (d + r) digits is assigned a decimal value,
starting from 1.
3. The r redundant bits are placed at the positions that are powers of 2
(positions 1, 2, 4, ...).
4. At the receiving end, the parity bits are recalculated. The decimal value of
the recalculated parity bits (the error syndrome) gives the position of the
erroneous bit.
For d = 4 data bits:
2^r >= d + r + 1
2^r >= 4 + r + 1
Therefore, the value of r is 3, which satisfies the above relation.
The number of redundant bits is 3. The three bits are represented by r1, r2
and r4. The positions of the redundant bits correspond to the powers of 2,
i.e., positions 1 (2^0), 2 (2^1) and 4 (2^2); the data bits occupy the remaining
positions 3, 5, 6 and 7.
Determining the r1 bit: r1 covers the bit positions whose binary representation
includes a 1 in the first (least significant) place, i.e., positions 1, 3, 5 and 7.
We perform the even-parity check over these positions.
Determining the r2 bit: r2 covers the bit positions whose binary representation
includes a 1 in the second place, i.e., positions 2, 3, 6 and 7. We perform the
even-parity check over these positions.
Determining the r4 bit: r4 covers the bit positions whose binary representation
includes a 1 in the third place, i.e., positions 4, 5, 6 and 7. We perform the
even-parity check over these positions.
At the receiver, the three parity checks are repeated on the received codeword.
In the example, the binary representation of the recalculated bits, i.e., r4r2r1,
is 100, whose decimal value is 4. Therefore, the error occurred at the 4th bit
position, and that bit's value must be changed from 1 to 0 to correct the error.
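The scheme above can be sketched as a Hamming(7,4) encoder and corrector; the bit ordering and helper names are assumptions for illustration:

```python
def hamming74_encode(d):
    """d = [d1, d2, d3, d4]; returns positions 1..7 as [r1, r2, d1, r4, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    r1 = (d1 + d2 + d4) % 2          # even parity over positions 1, 3, 5, 7
    r2 = (d1 + d3 + d4) % 2          # even parity over positions 2, 3, 6, 7
    r4 = (d2 + d3 + d4) % 2          # even parity over positions 4, 5, 6, 7
    return [r1, r2, d1, r4, d2, d3, d4]

def hamming74_correct(code):
    """Recompute the parity checks; the syndrome r4r2r1 gives the error position."""
    c = code
    s1 = (c[0] + c[2] + c[4] + c[6]) % 2   # positions 1, 3, 5, 7
    s2 = (c[1] + c[2] + c[5] + c[6]) % 2   # positions 2, 3, 6, 7
    s4 = (c[3] + c[4] + c[5] + c[6]) % 2   # positions 4, 5, 6, 7
    pos = s4 * 4 + s2 * 2 + s1             # decimal value of r4r2r1
    if pos:
        c[pos - 1] ^= 1                    # flip the erroneous bit
    return c
```

A syndrome of 100 (decimal 4) flips position 4, exactly as in the worked example; a zero syndrome leaves the codeword unchanged.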
Sliding window protocols are data link layer protocols for reliable and sequential
delivery of data frames. The sliding window is also used in Transmission Control
Protocol.
Working Principle
In these protocols, the sender has a buffer called the sending window and the
receiver has a buffer called the receiving window.
The size of the sending window determines the sequence numbers of the
outbound frames. If the sequence number field of the frames is n bits wide,
then the range of sequence numbers that can be assigned is 0 to 2^n - 1, and
consequently the maximum size of the sending window is 2^n - 1. Thus, in
order to accommodate a sending window of size 2^n - 1, an n-bit sequence
number is chosen.
The sequence numbers are numbered modulo 2^n. For example, if the sending
window size is 4, then the sequence numbers will be 0, 1, 2, 3, 0, 1, 2, 3, 0, 1,
and so on: the two bits in the sequence number generate the binary sequence
00, 01, 10, 11.
The size of the receiving window is the maximum number of frames that the
receiver can accept at a time. It determines the maximum number of frames
that the sender can send before receiving acknowledgment.
Example
Suppose that we have a sender window and a receiver window, each of size 4. So
the sequence numbering of both windows will be 0, 1, 2, 3, 0, 1, 2, 3, and so on.
The following diagram shows the positions of the windows after sending the
frames and receiving acknowledgments.
Types of Sliding Window Protocols
The Sliding Window ARQ (Automatic Repeat reQuest) protocols are of two
categories −
● Go-Back-N ARQ
Go-Back-N ARQ provides for sending multiple frames before receiving the
acknowledgment for the first frame. It uses the concept of the sliding window, and
so is also called a sliding window protocol. The frames are sequentially numbered
and a finite number of frames are sent. If the acknowledgment of a frame is not
received within the time period, all frames starting from that frame are
retransmitted.
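The go-back behaviour can be shown with a minimal, idealized simulation (no timers or real I/O; all names and the single-loss assumption are illustrative):

```python
# Minimal Go-Back-N sender sketch: on a lost frame, every frame from the
# unacknowledged one onward is retransmitted.
def go_back_n(frames, lost, window=4):
    sent, base = [], 0
    while base < len(frames):
        upper = min(base + window, len(frames))
        for seq in range(base, upper):
            sent.append(seq)                 # (re)transmit the window
        # first loss in the window forces a go-back; otherwise slide the window
        failed = [s for s in range(base, upper) if s in lost]
        if failed:
            lost.discard(failed[0])          # assume the retransmission succeeds
            base = failed[0]
        else:
            base = upper
    return sent
```

With six frames, a window of 3, and frame 2 lost once, the transmission order is 0, 1, 2, then 2, 3, 4 again, then 5: the good frames 3 and 4 still follow the retransmitted frame 2, which is exactly the inefficiency Selective Repeat ARQ removes.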
● Selective Repeat ARQ
This protocol also provides for sending multiple frames before receiving the
acknowledgment for the first frame. However, here only the erroneous or lost frames
are retransmitted, while the good frames are received and buffered.
Transfer Modes
HDLC supports two types of transfer modes: Normal Response Mode and
Asynchronous Balanced Mode.
Normal Response Mode (NRM) − Here, two types of stations are present: a
primary station that sends commands and secondary stations that respond
to the received commands. It is used for both point-to-point and
multipoint communications.
Asynchronous Balanced Mode (ABM) − Here, the configuration is balanced,
i.e., each station can act as both a primary and a secondary station. It is
used only for point-to-point communications.
HDLC Frame Structure
An HDLC frame consists of the following fields:
Flag − It is an 8-bit sequence that marks the beginning and the end of
the frame. The bit pattern of the flag is 01111110.
Address − It contains the address of the receiver. If the frame is sent by
the primary station, it contains the address(es) of the secondary
station(s). If it is sent by the secondary station, it contains the address of
the primary station. The address field may be from 1 byte to several
bytes.
Control − It is 1 or 2 bytes containing flow and error control information.
Payload − This carries the data from the network layer. Its length may
vary from one network to another.
FCS − It is a 2-byte or 4-byte frame check sequence used for error detection.
The standard code used is CRC (cyclic redundancy code).
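The frame fields above can be sketched as a data structure. This is only an illustration: the FCS below is a placeholder sum, not the real CRC, and the field sizes are simplified to one byte each for the address and control fields:

```python
from dataclasses import dataclass

@dataclass
class HDLCFrame:
    address: int        # receiver's address (1 byte in this sketch)
    control: int        # flow and error control information
    payload: bytes      # data from the network layer
    FLAG: int = 0x7E    # 01111110 delimiter at both ends

    def serialize(self) -> bytes:
        body = bytes([self.address, self.control]) + self.payload
        fcs = sum(body) & 0xFFFF    # placeholder check value, NOT a real CRC
        return (bytes([self.FLAG]) + body +
                fcs.to_bytes(2, "big") + bytes([self.FLAG]))
```

Serializing a frame yields FLAG, address, control, payload, FCS, FLAG, mirroring the field order described above.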
Types of HDLC Frames
There are three types of HDLC frames, distinguished by the control field of the
frame −
I-frame (Information frame) − Carries user data from the network layer along
with flow and error control information.
S-frame (Supervisory frame) − Used for flow and error control when
piggybacking is not possible; it carries no user data.
U-frame (Unnumbered frame) − Used for link management functions such as
link setup and disconnection.
Channel Allocation
1. Static Allocation
In static channel allocation, a channel of capacity C bps is divided in advance
among the users (e.g., by FDM). For frames of 1/U bits each arriving at a rate
of L frames/sec, the mean delay on the single shared channel is:
T = 1/(U*C - L)
If the channel is divided into N independent subchannels, each with capacity
C/N bps and arrival rate L/N frames/sec, the mean delay becomes:
T(FDM) = 1/(U*(C/N) - L/N) = N*T
Where,
1/U = bits/frame, C = channel capacity (bps), L = mean frame arrival rate
(frames/sec).
2. Distributed Allocation
For example, suppose that there is a classroom full of students. When a teacher asks a
question, all the students (small channels) in the class start answering the question at the
same time (transferring the data simultaneously). Because all the students respond at the
same time, the data overlaps or is lost. Therefore, it is the responsibility of the teacher
(the multiple access protocol) to manage the students and make them answer one at a
time.
Following are the types of multiple access protocols, which are subdivided into different
categories:
A. Random Access Protocols
Following are the different random-access methods for broadcasting frames on the
channel.
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA
Aloha
Pure Aloha
Pure Aloha is used whenever data is available for sending over a channel at the stations.
In pure Aloha, each station transmits data to the channel without checking whether the
channel is idle or busy; hence collisions may occur, and the data frame can be lost.
After transmitting a data frame to the channel, the station waits for the receiver's
acknowledgment. If the acknowledgment does not arrive within the specified time, the
station assumes the frame has been lost or destroyed; it then waits for a random amount
of time, called the backoff time (Tb), and retransmits the frame. This repeats until all the
data is successfully transmitted to the receiver.
Slotted Aloha
Slotted Aloha is designed to improve on pure Aloha's efficiency, because pure Aloha
has a very high probability of frame collision. In slotted Aloha, the shared channel is
divided into fixed time intervals called slots. If a station wants to send a frame on the
shared channel, it can transmit only at the beginning of a slot, and only one frame may
be sent in each slot. If a station misses the beginning of a slot, it must wait until the
beginning of the next slot. However, a collision is still possible when two or more
stations try to send a frame at the beginning of the same time slot.
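The classical throughput results for the two Aloha variants, S = G*e^(-2G) for pure Aloha and S = G*e^(-G) for slotted Aloha (with G the offered load), are standard formulas not derived in these notes; they can be computed as:

```python
import math

def pure_aloha(G):
    """Throughput S = G * e^(-2G); the vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    """Throughput S = G * e^(-G); slotting halves the vulnerable period."""
    return G * math.exp(-G)
```

Pure Aloha peaks at G = 0.5 with about 18.4% utilization, while slotted Aloha peaks at G = 1 with about 36.8%, which quantifies the efficiency gain of slotting.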
CSMA
Carrier sense multiple access (CSMA) is a media access protocol that senses the traffic
on a channel (idle or busy) before transmitting the data. If the channel is idle, the
station can send data on the channel. Otherwise, it must wait until the channel becomes
idle. Hence, it reduces the chances of a collision on the transmission medium.
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared
channel, and if the channel is idle, it immediately sends the data. Otherwise, it keeps
monitoring the channel and broadcasts the frame unconditionally as soon as the channel
becomes idle.
Non-Persistent: In this access mode of CSMA, each node senses the channel before
transmitting, and if the channel is inactive, it immediately sends the data. Otherwise,
the station waits for a random time (rather than sensing continuously), and when the
channel is next found idle, it transmits the frame.
CSMA/CD
Carrier sense multiple access with collision detection (CSMA/CD) is a network protocol
for transmitting data frames that operates in the medium access control (MAC) layer.
It first senses the shared channel before broadcasting a frame, and if the channel is idle,
it transmits the frame while monitoring whether the transmission succeeds. If the frame
is successfully received, the station sends the next frame. If a collision is detected, the
station sends a jam signal on the shared channel to terminate the data transmission.
After that, it waits for a random time before resending the frame.
CSMA/CA
Carrier sense multiple access with collision avoidance (CSMA/CA) tries to avoid
collisions before they happen rather than detecting them afterwards. Following are the
methods used in CSMA/CA to avoid collisions:
Interframe space: The station waits for the channel to become idle, and if it finds the
channel idle, it does not send the data immediately. Instead, it waits for a period of time
called the interframe space (IFS). The IFS duration is also used to define the priority of
a station.
Contention window: The total time is divided into slots. When the station is ready to
transmit a data frame, it chooses a random number of slots as its wait time. If the
channel becomes busy during the countdown, the station does not restart the entire
process; it merely pauses the timer and resumes it when the channel becomes idle again,
then sends the data packet.
Acknowledgment: The receiver sends an acknowledgment for each correctly received
frame; if the sender's timer expires before the acknowledgment arrives, the sender
retransmits the data frame.
B. Controlled Access Protocols
In controlled access, the stations seek information from one another to find which
station has the right to send. It allows only one node to send at a time, to avoid
collisions of messages on the shared medium. The three controlled-access
methods are:
1. Reservation
2. Polling
3. Token Passing
Reservation
● In the reservation method, a station needs to make a reservation before
sending data.
● The timeline has two kinds of periods:
1. Reservation interval of fixed time length
2. Data transmission period of variable frames.
● If there are M stations, the reservation interval is divided into M slots, and
each station has one slot.
● If station 1 has a frame to send, it transmits a 1 bit during slot 1.
No other station is allowed to transmit during this slot.
● In general, the i-th station may announce that it has a frame to send by
inserting a 1 bit into the i-th slot. After all M slots have been checked, each
station knows which stations wish to transmit.
● The stations which have reserved their slots transfer their frames in that order.
● After the data transmission period, the next reservation interval begins.
● Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five-slot reservation
frame. In the first interval, only stations 1, 3, and 4 have made reservations. In
the second interval, only station 1 has made a reservation.
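The bitmap mechanism of the example can be sketched as follows (stations numbered from 0; the helper names are illustrative):

```python
def reservation_round(n_stations, wants_to_send):
    """One fixed-length reservation interval: station i sets bit i if it has data."""
    bitmap = [1 if i in wants_to_send else 0 for i in range(n_stations)]
    order = [i for i, bit in enumerate(bitmap) if bit]  # collision-free send order
    return bitmap, order

# One interval with 5 stations, where stations 1, 3 and 4 have made reservations.
bitmap, order = reservation_round(5, {1, 3, 4})
```

Because every station sees the same bitmap, all stations agree on the transmission order and no collisions can occur in the data period.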
Advantages of Reservation:
● The main advantage of reservation is that the maximum and minimum data
rates and channel access times are fixed and can be predicted easily.
● Priorities can be set to provide speedier access for selected secondary
stations.
● Predictable network performance: Reservation-based access methods can
provide predictable network performance, which is important in applications
where latency and jitter must be minimized, such as in real-time video or
audio streaming.
● Reduced contention: Reservation-based access methods can reduce
contention for network resources, as access to the network is pre-allocated
based on reservation requests. This can improve network efficiency and
reduce packet loss.
● Quality of Service (QoS) support: Reservation-based access methods can
support QoS requirements, by providing different reservation types for
different types of traffic, such as voice, video, or data. This can ensure that
high-priority traffic is given preferential treatment over lower-priority traffic.
● Efficient use of bandwidth: Reservation-based access methods can enable
more efficient use of available bandwidth, as they allow for time and frequency
multiplexing of different reservation requests on the same channel.
● Support for multimedia applications: Reservation-based access methods
are well-suited to support multimedia applications that require guaranteed
network resources, such as bandwidth and latency, to ensure high-quality
performance.
Disadvantages of Reservation:
● High dependence on the reliability of the controller.
● Decrease in capacity and channel data rate under light loads; increase in
turnaround time.
Polling
● Polling process is similar to the roll-call performed in class. Just like the
teacher, a controller sends a message to each node in turn.
● In this, one acts as a primary station(controller) and the others are secondary
stations. All data exchanges must be made through the controller.
● The message sent by the controller contains the address of the node being
selected for granting access.
● Although all nodes receive the message, only the addressed node responds
and sends data, if any. If there is no data, a "poll reject" (NAK) message is
usually sent back.
● Problems include high overhead of the polling messages and high
dependence on the reliability of the controller.
Advantages of Polling:
● The maximum and minimum access times and data rates on the channel are
fixed and predictable.
● It has maximum efficiency.
● It has maximum bandwidth.
● No slot is wasted in polling.
● Priority can be assigned to ensure faster access for some secondary stations.
Disadvantages of Polling:
● It consumes more time, since polling messages are exchanged even when
stations have no data.
● Link sharing can be biased, since access depends entirely on the controller's
polling order.
● A station that has run out of data to send is still polled, wasting channel time.
● An increase in the turnaround time leads to a drop in the data rates of the
channel under low loads.
Efficiency − Let Tpoll be the time for polling and Tt be the time required for
transmission of data. Then:
Efficiency = Tt/(Tt + Tpoll)
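The formula can be wrapped in a small helper (names and numbers are illustrative):

```python
def polling_efficiency(Tt, Tpoll):
    """Fraction of channel time spent on useful data transmission."""
    return Tt / (Tt + Tpoll)

# Example: 8 ms of data transmission per 2 ms of polling overhead.
eff = polling_efficiency(8, 2)
```

Efficiency approaches 1 only when the polling overhead Tpoll is small relative to the transmission time Tt.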
Token Passing
● In token passing scheme, the stations are connected logically to each other in
form of ring and access to stations is governed by tokens.
● A token is a special bit pattern or a small message that circulates from one
station to the next in some predefined order.
● In a token ring, the token is passed from one station to the adjacent station in
the ring, whereas in case of a token bus, each station uses the bus to send the
token to the next station in some predefined order.
● In both cases, the token represents permission to send. If a station has a frame
queued for transmission when it receives the token, it can send that frame
before it passes the token to the next station. If it has no queued frame, it
simply passes the token along.
● After sending a frame, each station must wait for all N stations (including
itself) to pass the token to their neighbours and for the other N − 1 stations to
send a frame, if they have one.
● There exist problems like duplication or loss of the token, insertion of a new
station, and removal of a station, which need to be tackled for correct and
reliable operation of this scheme.
Performance of token ring can be concluded by 2 parameters:-
1. Delay is a measure of the time between when a packet is ready and when it is
delivered. So, the average time (delay) required to pass the token to the next
station = a/N.
2. Throughput, which is a measure of successful traffic.
Throughput, S = 1/(1 + a/N) for a<1
and
S = 1/{a(1 + 1/N)} for a>1.
where N = number of stations
a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay)
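The two throughput cases can be combined in a small helper (names and inputs are illustrative):

```python
def token_throughput(a, N):
    """Token ring throughput: S = 1/(1 + a/N) for a < 1, else 1/(a*(1 + 1/N)).

    a = Tp/Tt (propagation delay over transmission delay), N = station count.
    """
    return 1 / (1 + a / N) if a < 1 else 1 / (a * (1 + 1 / N))
```

For a small a (transmission time dominates), throughput stays near 1; as a grows past 1, token circulation time dominates and throughput falls roughly as 1/a.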
Advantages of Token passing:
● Modern token-passing networks can run over standard cabling with routers
and include built-in debugging features such as protective relays and
automatic reconfiguration.
● It provides good throughput under conditions of high load.
Disadvantages of Token passing:
● Its cost is high.
● Topology components are more expensive than those of other, more widely
used standards.
● The hardware elements of token ring are complex, which in practice ties a
deployment to a single manufacturer's equipment.
C. Channelization Protocols
Channelization is a family of protocols that allows the total usable bandwidth of a
shared channel to be shared across multiple stations based on time, frequency and
codes, so that multiple stations can access the channel at the same time to send their
data frames.
Following are the various methods of accessing the channel based on time, frequency
and codes:
FDMA
Frequency division multiple access (FDMA) is a method that divides the available
bandwidth into equal bands so that multiple users can send data through different
frequency subchannels. Each station is reserved a particular band to prevent crosstalk
between the channels and interference between stations.
TDMA
Time division multiple access (TDMA) is a channel access method that allows the
same frequency bandwidth to be shared across multiple stations. To avoid collisions on
the shared channel, it divides the channel into time slots and allocates a slot to each
station for transmitting its data frames: all stations use the same frequency bandwidth,
but each transmits only in its own time slot. However, TDMA has a synchronization
overhead, since synchronization bits must be added to each slot to mark each station's
time slot.
CDMA
The code division multiple access (CDMA) is a channel access method. In CDMA, all
stations can simultaneously send the data over the same channel. It means that it allows
each station to transmit the data frames with full frequency on the shared channel at all
times. It does not require the division of bandwidth on a shared channel based on time
slots. If multiple stations send data to a channel simultaneously, their data frames are
separated by a unique code sequence. Each station has a different unique code for
transmitting the data over the shared channel. For example, consider multiple users in a
room who are continuously speaking. The data is received correctly only when two
people interact with each other in the same language. Similarly, in the network, when
different stations communicate simultaneously using different code languages, their
transmissions do not interfere with one another.
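The code-separation idea can be sketched with 4-chip Walsh codes (orthogonal sequences); the station names and code assignments are illustrative:

```python
# Tiny CDMA sketch: each station multiplies its data bit (+1/-1) by its chip
# code; the channel adds all signals; a receiver recovers one station's bit by
# correlating the combined signal with that station's code.
codes = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
}

def transmit(bits):
    """bits: dict station -> +1 or -1; returns the summed channel signal."""
    return [sum(bits[s] * code[i] for s, code in codes.items())
            for i in range(4)]

def decode(signal, station):
    """Correlate with the station's code; the sign recovers the data bit."""
    code = codes[station]
    return 1 if sum(s * c for s, c in zip(signal, code)) > 0 else -1

signal = transmit({"A": +1, "B": -1, "C": +1})
```

Because the codes are mutually orthogonal, correlating with one station's code cancels the other stations' contributions, which is why all stations can transmit at full bandwidth simultaneously.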