
Unit 3

Data Link Layer


o In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from
the bottom.
o The communication channel that connects two adjacent nodes is known as a link, and
in order to move a datagram from source to destination, the datagram must be moved
across each individual link in the path.
o The main responsibility of the Data Link Layer is to transfer the datagram across
an individual link.
o The Data Link Layer protocol defines the format of the packet exchanged between the
nodes, as well as actions such as error detection, retransmission, flow control,
and random access.
o The Data Link Layer protocols are Ethernet, token ring, FDDI and PPP.
o An important characteristic of the Data Link Layer is that a datagram can be handled
by different link-layer protocols on different links in a path. For example, a
datagram may be handled by Ethernet on the first link and by PPP on the second link.

The following services are provided by the Data Link Layer:


o Framing & Link access: Data Link Layer protocols encapsulate each network-layer
datagram within a link-layer frame before transmission across the link. A frame
consists of a data field, in which the network-layer datagram is inserted, and a
number of header fields. The protocol specifies the structure of the frame as well as
a channel access protocol by which the frame is to be transmitted over the link.
o Reliable delivery: The Data Link Layer can provide a reliable delivery service, i.e.,
transmit the network-layer datagram without error. A reliable delivery service is
accomplished with retransmissions and acknowledgements. A data link layer mainly
provides reliable delivery over links that have higher error rates, so that an error
can be corrected locally, on the link at which it occurs, rather than forcing an
end-to-end retransmission of the data.
o Flow control: A receiving node can receive the frames at a faster rate than it can
process the frame. Without flow control, the receiver's buffer can overflow, and
frames can get lost. To overcome this problem, the data link layer uses the flow
control to prevent the sending node on one side of the link from overwhelming
the receiving node on another side of the link.
o Error detection: Errors can be introduced by signal attenuation and noise. Data
Link Layer protocols provide a mechanism to detect one or more errors. This is
achieved by adding error-detection bits to the frame; the receiving node can then
perform an error check.
o Error correction: Error correction is similar to the Error detection, except that
receiving node not only detect the errors but also determine where the errors
have occurred in the frame.
o Half-Duplex & Full-Duplex: In full-duplex mode, both nodes can transmit data at the
same time. In half-duplex mode, only one node can transmit at a time.

Error Detection
When data is transmitted from one device to another device, the system does not
guarantee whether the data received by the device is identical to the data transmitted
by another device. An Error is a situation when the message received at the receiver end
is not identical to the message transmitted.
Types Of Errors

Errors can be classified into two categories:

o Single-Bit Error
o Burst Error

Single-Bit Error:
Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.

In the above figure, the transmitted message is corrupted by a single-bit error, i.e., a
0 bit is changed to a 1.

A single-bit error is not very likely in serial data transmission. For example, if a
sender transmits at 10 Mbps, each bit lasts only 0.1 µs, so for a single-bit error to
occur, the noise must last no longer than 0.1 µs, which is rare.

Single-bit errors mainly occur in parallel data transmission. For example, if eight wires
are used to send the eight bits of a byte and one of the wires is noisy, then one bit is
corrupted in each byte.
Burst Error:
When two or more bits in the data unit are changed from 0 to 1 or from 1 to 0, it is known as a Burst Error.

The length of a Burst Error is measured from the first corrupted bit to the last corrupted bit.

The duration of noise causing a Burst Error is longer than the duration of noise causing a Single-Bit Error.

Burst Errors are most likely to occur in Serial Data Transmission.

The number of affected bits depends on the duration of the noise and data rate.

Error Detecting Techniques:


The most popular Error Detecting Techniques are:

o Single parity check


o Two-dimensional parity check
o Checksum
o Cyclic redundancy check

Single Parity Check

o Single parity checking is a simple and inexpensive mechanism to detect errors.
o In this technique, a redundant bit, known as a parity bit, is appended at the end of
the data unit so that the total number of 1s becomes even. For an 8-bit data unit,
the total number of transmitted bits would therefore be 9.
o If the number of 1s in the data is odd, a parity bit of 1 is appended; if the number
of 1s is even, a parity bit of 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and
compared with the received parity bit.
o This technique makes the total number of 1s even, so it is known as even-parity
checking.
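The steps above can be sketched in a few lines of Python (the 8-bit data word is an illustrative example, and the function names are ours):

```python
def add_even_parity(data_bits):
    """Sender: append a parity bit so the total number of 1s becomes even."""
    parity = sum(data_bits) % 2        # 1 if the count of 1s is odd, else 0
    return data_bits + [parity]

def check_even_parity(received_bits):
    """Receiver: the word is valid if the total number of 1s is even."""
    return sum(received_bits) % 2 == 0

codeword = add_even_parity([1, 0, 1, 1, 0, 0, 1, 0])   # 8 data bits
print(codeword)                        # 9 transmitted bits in total
print(check_even_parity(codeword))     # no error
corrupted = codeword.copy()
corrupted[2] ^= 1                      # flip a single bit in transit
print(check_even_parity(corrupted))    # single-bit error detected
```

Flipping any two bits of the codeword would leave the 1s count even, which is exactly the drawback discussed next.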

Drawbacks Of Single Parity Checking

o It can detect only single-bit errors (more generally, errors affecting an odd number
of bits).

o If two bits are interchanged, or any even number of bits is flipped, it cannot detect
the error.
Two-Dimensional Parity Check

o Performance can be improved by using a Two-Dimensional Parity Check, which
organizes the data in the form of a table.
o Parity check bits are computed for each row, which is equivalent to the single-
parity check.
o In Two-Dimensional Parity check, a block of bits is divided into rows, and the
redundant row of bits is added to the whole block.
o At the receiving end, the parity bits are compared with the parity bits computed
from the received data.

Drawbacks Of 2D Parity Check

o If two bits in one data unit are corrupted and two bits in exactly the same positions
in another data unit are also corrupted, then the 2D parity checker will not be able to
detect the error.
o This technique cannot detect errors of 4 bits or more in some cases.
Checksum
A Checksum is an error detection technique based on the concept of redundancy.

It is divided into two parts:

Checksum Generator

A Checksum is generated at the sending side. Checksum generator subdivides the data
into equal segments of n bits each, and all these segments are added together by using
one's complement arithmetic. The sum is complemented and appended to the original
data, known as checksum field. The extended data is transmitted across the network.

Suppose L is the total sum of the data segments; the checksum is then the one's complement of L.

The Sender follows these steps:

1. The block unit is divided into k sections, each of n bits.
2. All k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented and becomes the checksum field.
4. The original data and the checksum field are sent across the network.
Checksum Checker

A Checksum is verified at the receiving side. The receiver subdivides the incoming data
into equal segments of n bits each, and all these segments are added together, and then
this sum is complemented. If the complement of the sum is zero, then the data is
accepted otherwise data is rejected.

The Receiver follows these steps:

1. The block unit is divided into k sections, each of n bits.
2. All k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented.
4. If the result is zero, the data is accepted; otherwise, the data is discarded.

Cyclic Redundancy Check (CRC)


CRC is an error-detection technique based on redundancy.

Following are the steps used in CRC for error detection:

o In the CRC technique, a string of n 0s is appended to the data unit, where n is one
less than the number of bits in a predetermined binary number, known as the divisor,
which is n+1 bits long.
o Secondly, the newly extended data is divided by the divisor using a process known as
binary (modulo-2) division. The remainder generated from this division is known as the
CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original
data. This newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver will
treat this whole unit as a single unit, and it is divided by the same divisor that was
used to find the CRC remainder.

If the result of this division is zero, the data contains no error and is accepted.
If the result of this division is not zero, the data contains an error and is
therefore discarded.

Let's understand this concept through an example:

Suppose the original data is 11100 and divisor is 1001.

CRC Generator

o A CRC generator uses modulo-2 division. Firstly, three zeroes are appended at the
end of the data, since the length of the divisor is 4 and the number of 0s appended
is always one less than the length of the divisor.
o Now, the string becomes 11100000, and the resultant string is divided by the
divisor 1001.
o The remainder generated from the binary division is known as CRC remainder.
The generated value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data unit,
and the final string would be 11100111 which is sent across the network.
CRC Checker

o The functionality of the CRC checker is similar to the CRC generator.


o When the string 11100111 is received at the receiving end, then CRC checker
performs the modulo-2 division.
o A string is divided by the same divisor, i.e., 1001.
o In this case, CRC checker generates the remainder of zero. Therefore, the data is
accepted.
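The worked example above (data 11100, divisor 1001) can be reproduced with a small modulo-2 division routine (the function name is ours):

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Binary (modulo-2) division using XOR; returns the remainder,
    which is len(divisor) - 1 bits long."""
    n = len(divisor) - 1
    bits = list(dividend)
    for i in range(len(bits) - n):
        if bits[i] == "1":                 # divide only when the lead bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-n:])

data, divisor = "11100", "1001"
remainder = mod2_div(data + "000", divisor)   # sender appends 3 zeros
codeword = data + remainder                   # 11100 + CRC remainder
print(remainder, codeword)                    # 111 11100111
print(mod2_div(codeword, divisor))            # receiver gets 000: no error
```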
Error Correction
Error Correction codes are used to detect and correct the errors when data is
transmitted from the sender to the receiver.

Error Correction can be handled in two ways:

o Backward error correction: Once the error is discovered, the receiver requests
the sender to retransmit the entire data unit.
o Forward error correction: In this case, the receiver uses the error-correcting
code which automatically corrects the errors.

A single additional bit can detect the error, but cannot correct it.

To correct an error, one has to know its exact position. For example, to correct a
single-bit error in a 7-bit unit, the error correction code must determine which one of
the seven bits is in error. To achieve this, we have to add some additional redundant
bits.

Suppose r is the number of redundant bits and d is the total number of the data bits.
The number of redundant bits r can be calculated by using the formula:

2^r >= d + r + 1

The value of r is calculated using the above formula. For example, if the value of d is
4, then the smallest value of r that satisfies the relation is 3 (since 2^3 = 8 >= 4 + 3 + 1).

To determine the position of the bit in error, a technique developed by R. W. Hamming,
known as Hamming code, can be applied to a data unit of any length; it uses the
relationship between the data bits and the redundant bits.
Hamming Code
Parity bits: The bit which is appended to the original data of binary bits so that the total
number of 1s is even or odd.

Even parity: To check for even parity, if the total number of 1s is even, then the value of
the parity bit is 0. If the total number of 1s occurrences is odd, then the value of the
parity bit is 1.

Odd Parity: To check for odd parity, if the total number of 1s is even, then the value of
parity bit is 1. If the total number of 1s is odd, then the value of parity bit is 0.

Algorithm of Hamming code:

o Information of 'd' bits is combined with 'r' redundant bits to form a unit of d+r bits.
o The location of each of the (d+r) digits is assigned a decimal value.
o The 'r' bits are placed in positions 1, 2, 4, ..., 2^(k-1).
o At the receiving end, the parity bits are recalculated. The decimal value formed by
the recalculated parity bits determines the position of the error.

(Figure: relationship between error position and binary number.)

Let's understand the concept of Hamming code through an example:

Suppose the original data is 1010 which is to be sent.

Total number of data bits 'd' = 4


Number of redundant bits r: 2^r >= d + r + 1
2^r >= 4 + r + 1
Therefore, r = 3 satisfies the above relation.
Total number of bits = d + r = 4 + 3 = 7

o Suppose the 4th bit of the transmitted codeword is corrupted. When the receiver
recalculates the parity bits, the binary representation of the redundant bits, i.e.,
r4r2r1, is 100, whose corresponding decimal value is 4. Therefore, the error occurred
in the 4th bit position, and that bit must be changed from 1 to 0 to correct the error.
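For the 7-bit example above (data 1010, even parity, parity bits at positions 1, 2, and 4), encoding and error location can be sketched as follows (function names are ours):

```python
def hamming74_encode(d):
    """Place 4 data bits at positions 3, 5, 6, 7 and compute even-parity
    bits r1, r2, r4 at positions 1, 2, 4 of a 7-bit codeword."""
    c = [0] * 8                  # index 0 unused; positions 1..7
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]    # r1 covers positions whose number has bit 1
    c[2] = c[3] ^ c[6] ^ c[7]    # r2 covers positions whose number has bit 2
    c[4] = c[5] ^ c[6] ^ c[7]    # r4 covers positions whose number has bit 4
    return c[1:]

def error_position(word):
    """Recompute the parity checks; the binary value r4r2r1 gives the
    error position (0 means no error detected)."""
    c = [0] + list(word)
    r1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    r2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    r4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    return 4 * r4 + 2 * r2 + r1

sent = hamming74_encode([1, 0, 1, 0])        # original data 1010
received = sent.copy()
received[3] ^= 1                             # corrupt the 4th bit (index 3)
pos = error_position(received)               # r4r2r1 = 100 -> position 4
received[pos - 1] ^= 1                       # flip it back to correct
print(pos, received == sent)
```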

Basic Frame Structure of HDLC



High-Level Data Link Control (HDLC) uses the term “frame” to represent a unit of data,
or protocol data unit, transmitted from one station to another. Every frame on the link
must begin and end with a Flag Sequence Field (F). Each HDLC frame includes six main
fields: a beginning flag field, an address field, a control field, an information field,
a frame check sequence (FCS) field, and an ending flag field. In multiple-frame
transmissions, the ending flag of one frame can serve as the beginning flag of the next
frame.

The basic frame structure of HDLC protocol is shown below :

Size of Different Fields :


Field Name                    Size (bits)
Flag Field                    8 bits
Address Field                 8 bits
Control Field                 8 or 16 bits
Information Field             Variable (not used in some types of HDLC frames)
FCS (Frame Check Sequence)    16 or 32 bits
Closing Flag Field            8 bits

Let us understand these fields in detail:


1. Flag Field –
The flag field marks the beginning and end of a frame. In the HDLC protocol there are
no start and stop bits, so the flag field uses the delimiter 0x7E to indicate the
beginning and end of each frame.
It is an 8-bit sequence with the bit pattern 01111110 that identifies both the start and
the end of a frame. This bit pattern also serves as a synchronization pattern for the
receiver, and it is not allowed to occur anywhere else inside a complete frame.
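Keeping 01111110 out of the frame body is achieved by bit stuffing: the sender inserts a 0 after every run of five consecutive 1s, and the receiver removes it. A minimal sketch (function names are ours):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s, so the payload
    can never contain the flag pattern 01111110."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")        # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: remove the 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                   # this is the stuffed 0; drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

payload = "0111111001111101"       # contains the flag pattern
stuffed = bit_stuff(payload)
print("01111110" in stuffed)       # False: flag cannot appear in the body
print(bit_unstuff(stuffed) == payload)   # True: round trip is lossless
```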

2. Address Field –
The address field contains the HDLC address of the secondary station: it identifies
which secondary station sent, or is to receive, the frame. The field is commonly 8 bits,
which can address up to 256 stations, but it may be one or several bytes long depending
on the requirements of the network; in the extended format, each byte can identify up to
128 stations.
The address may be a specific station address, a group address, or a broadcast address.
The field always carries the secondary's address, whether the secondary is the source or
the destination of the communication, which eliminates the need to include the address
of the primary.

3. Control Field –
HDLC uses this field to control the communication process. The control field differs for
the different frame types in HDLC: Information frames (I-frames), Supervisory frames
(S-frames), and Unnumbered frames (U-frames).
This field is a 1- to 2-byte segment of the frame required for flow and error control.
It basically consists of 8 bits but can be extended to 16 bits; the interpretation of
its bits depends on the type of frame.

4. Information Field –
This field contains the user data being transmitted from sender to receiver in an
I-frame, or network-layer and management information in a U-frame. It carries the user's
data transparently, and its length may vary from one network to another.
The information field is not always present in an HDLC frame.

5. Frame Check Sequence (FCS) –
The FCS is used for error detection in HDLC. A CRC-16 (16-bit Cyclic Redundancy Check)
or CRC-32 (32-bit Cyclic Redundancy Check) code is used. The CRC calculation is repeated
at the receiver; if the result differs even slightly from the value carried in the
frame, an error is assumed.
This field is either 2 bytes (16 bits) or 4 bytes (32 bits) and covers the address
field, the control field, and the information field. The FCS is calculated by both the
sender and the receiver of a data frame, and it confirms that the frame was not
corrupted by the medium used to transfer it from sender to receiver.
What is Point-to-Point Protocol (PPP)?

PPP stands for Point-to-Point Protocol. PPP is Windows' default Remote Access Service
(RAS) protocol and is a Data Link Layer protocol used to encapsulate higher network-layer
protocols for transmission over synchronous and asynchronous communication lines. It was
initially created as an encapsulation protocol to carry multiple network-layer protocols
over point-to-point connections. In addition, PPP established various standards,
including asynchronous and bit-oriented synchronous encapsulation, multiplexing of
network protocols, session negotiation, and data-compression negotiation. PPP also
supports non-TCP/IP protocols, such as IPX/SPX and DECnet. A prior standard known as
Serial Line Internet Protocol (SLIP) has been largely replaced by it.

What is the PPP Authentication Protocol?

PPP supports two main authentication protocols:
 Password Authentication Protocol (PAP): A basic authentication scheme in which the
user's identity information, such as the user name and password, is transmitted in plain
text. Its disadvantage is that this simplicity makes it less secure.
 Challenge Handshake Authentication Protocol (CHAP): CHAP is more secure, as it
periodically verifies the identity of the remote client using a three-way handshake.
The password is never sent in plain text, making it much more secure than PAP.

Which Protocol is Replaced by PPP?

PPP took the place of the Serial Line Internet Protocol (SLIP). SLIP was an older
protocol used to establish point-to-point communication over serial connections, but it
had some shortcomings: it did not support features like error detection and link
multiplexing. PPP improves on such earlier protocols with respect to error detection,
multiplexing, and authentication, and is therefore more appropriate for modern internet
connections.

Services Provided by PPP

PPP offers several essential services, including:
 Data Encapsulation: It encapsulates network-layer protocols such as IP for
transmission over a serial link.
 Link Control Protocol (LCP): It is employed to establish, configure, and terminate
the data link connection.
 Network Control Protocols (NCP): PPP has provisions for supporting many NCPs for the
configuration of the network-layer protocols such as IP, IPX, and AppleTalk.
 Authentication: Offers both PAP and CHAP for the purpose of authentication.
 Error Detection: It also has methods of maintaining integrity of the data through error
checking techniques.


 Channel Allocation Problem in Computer Network

 Channel allocation is the process in which a single channel is divided and allotted
to multiple users in order to carry out user-specific tasks.
 The number of users may vary each time the process takes place. If there are N
users and the channel is divided into N equal-sized sub-channels, each user is
assigned one portion. If the number of users is small and does not vary over time,
then Frequency Division Multiplexing can be used, as it is a simple and efficient
channel bandwidth allocation technique.
The channel allocation problem can be solved by two schemes: Static Channel Allocation
in LANs and MANs, and Dynamic Channel Allocation.

 These are explained below.


1. Static Channel Allocation in LANs and MANs:
This is the classical or traditional approach of allocating a single channel among
multiple competing users, using Frequency Division Multiplexing (FDM). If there are N
users, the frequency channel is divided into N equal-sized portions (bandwidth), each
user being assigned one portion. Since each user has a private frequency band, there is
no interference between users.

2. Dynamic Channel Allocation:
In the dynamic channel allocation scheme, frequency bands are not permanently assigned
to the users. Instead, channels are allotted to users dynamically as needed, from a
central pool. The allocation is done considering a number of parameters so that
transmission interference is minimized.

Multiple access protocol- ALOHA, CSMA,


CSMA/CA and CSMA/CD
Data Link Layer
The data link layer is used in a computer network to transmit data between two devices
or nodes. It is divided into two sublayers: data link control and multiple access
resolution/protocol. The upper sublayer is responsible for flow control and error
control in the data link layer, and hence it is termed logical link control. The lower
sublayer is used to handle and reduce collisions from multiple access on a channel, and
hence it is termed media access control or multiple access resolution.
Data Link Control
A data link control is a reliable channel for transmitting data over a dedicated link using
various techniques such as framing, error control and flow control of data packets in the
computer network.

What is a multiple access protocol?


When a sender and receiver have a dedicated link to transmit data packets, data link
control is enough to handle the channel. But suppose there is no dedicated path to
communicate or transfer data between two devices. In that case, multiple stations access
the channel and may transmit data over it simultaneously, which can create collisions
and crosstalk. Hence, a multiple access protocol is required to reduce collisions and
avoid crosstalk on the channel.

For example, suppose that there is a classroom full of students. When a teacher asks a
question, all the students (small channels) in the class start answering at the same
time (transferring the data simultaneously). Because all the students respond at once,
their answers overlap or are lost. Therefore, it is the responsibility of the teacher
(the multiple access protocol) to manage the students and have them answer one at a time.

The multiple access protocols are subdivided into the following categories:

A. Random Access Protocol


In this protocol, all stations have equal priority to send data over the channel. In a
random access protocol, no station depends on or is controlled by another station.
Depending on the channel's state (idle or busy), each station transmits its data frame.
However, if more than one station sends data over the channel at the same time, there
may be a collision or data conflict. Due to the collision, data frame packets may be
lost or changed, and hence they are not received correctly at the receiver end.
Following are the different methods of random-access protocols for broadcasting frames on
the channel.

o Aloha
o CSMA
o CSMA/CD
o CSMA/CA

ALOHA Random Access Protocol


It is designed for wireless LAN (Local Area Network) but can also be used in a shared
medium to transmit data. Using this method, any station can transmit data across the
network at any time a data frame is available for transmission.

Aloha Rules

1. Any station can transmit data to a channel at any time.


2. It does not require any carrier sensing.
3. Collision and data frames may be lost during the transmission of data through multiple
stations.
4. Acknowledgments are used in Aloha; there is no collision detection.
5. It requires retransmission of data after a random amount of time.

Pure Aloha

We use Pure Aloha whenever data is available for sending over the channel at a station.
In pure Aloha, each station transmits data onto the channel without checking whether the
channel is idle, so collisions may occur and data frames can be lost. After a station
transmits a data frame, it waits for the receiver's acknowledgment. If the
acknowledgment does not arrive within the specified time, the station assumes the frame
has been lost or destroyed, waits for a random amount of time, called the backoff time
(Tb), and then retransmits the frame until the data is successfully delivered to the
receiver.

1. The total vulnerable time of pure Aloha is 2 * Tfr.


2. Maximum throughput occurs when G = 1/2, and is 18.4%.
3. The probability of successful transmission of a data frame is S = G * e^(-2G).
As we can see in the figure above, there are four stations accessing a shared channel
and transmitting data frames. Some frames collide because most stations send their
frames at the same time. Only two frames, frame 1.1 and frame 2.2, are successfully
transmitted to the receiver; the other frames are lost or destroyed. Whenever two frames
occupy the shared channel at the same time, a collision occurs and both suffer damage:
even if only the first bit of a new frame overlaps with the last bit of a frame that is
almost finished, both frames are destroyed, and both stations must retransmit their data
frames.

Slotted Aloha

Slotted Aloha is designed to overcome pure Aloha's low efficiency, because pure Aloha
has a very high possibility of frame collision. In slotted Aloha, the shared channel is
divided into fixed time intervals called slots. A station that wants to send a frame can
only begin sending at the beginning of a slot, and only one frame may be sent in each
slot. If a station misses the beginning of a slot, it must wait until the beginning of
the next slot. However, the possibility of a collision remains when two or more stations
try to send a frame at the beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1, and is about 37%.
2. The probability of successfully transmitting a data frame in slotted Aloha is
S = G * e^(-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
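The two throughput formulas, S = G * e^(-2G) for pure Aloha and S = G * e^(-G) for slotted Aloha, can be checked numerically (function names are ours):

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G); the vulnerable time is 2 * Tfr."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G); the vulnerable time is only Tfr."""
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))     # peak at G = 1/2: 0.184
print(round(slotted_aloha_throughput(1.0), 3))  # peak at G = 1:   0.368
```

The printed peaks match the figures quoted above: about 18.4% for pure Aloha and about 37% for slotted Aloha.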
CSMA (Carrier Sense Multiple Access)
Carrier Sense Multiple Access is a media access protocol that senses the traffic on a
channel (idle or busy) before transmitting data. If the channel is idle, the station can
send data to the channel; otherwise, it must wait until the channel becomes idle. This
reduces the chances of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared
channel and, if the channel is idle, immediately sends the data. Otherwise, it keeps
sensing the channel continuously and transmits the frame unconditionally as soon as the
channel becomes idle.

Non-Persistent: It is the access mode of CSMA that defines before transmitting the data,
each node must sense the channel, and if the channel is inactive, it immediately sends the
data. Otherwise, the station must wait for a random time (not continuously), and when the
channel is found to be idle, it transmits the frames.

P-Persistent: This is a combination of the 1-persistent and non-persistent modes. In
P-persistent mode, each node senses the channel and, if the channel is idle, sends a
frame with probability p. Otherwise (with probability q = 1 - p), it waits for the next
time slot and repeats the process.

O-Persistent: In the O-persistent method, a supervisory order defines the priority of
each station before transmission on the shared channel. If the channel is found to be
inactive, each station waits for its assigned turn to transmit the data.
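The persistence modes differ mainly in what a station does per sensing step. A single decision function (our own sketch, with an injectable random source so the behavior is testable) captures both 1-persistent and p-persistent behavior:

```python
import random

def p_persistent_decision(channel_idle, p, rng=random.random):
    """One sensing step of p-persistent CSMA: if the channel is busy, keep
    sensing; if it is idle, transmit with probability p, otherwise defer
    one slot (probability q = 1 - p) and try again in the next slot."""
    if not channel_idle:
        return "sense again"
    return "transmit" if rng() < p else "defer"

# 1-persistent is the special case p = 1: always transmit on an idle channel.
print(p_persistent_decision(True, 1.0))                   # transmit
print(p_persistent_decision(False, 1.0))                  # sense again
print(p_persistent_decision(True, 0.5, rng=lambda: 0.9))  # defer
```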

CSMA/ CD
It is a carrier sense multiple access / collision detection network protocol for
transmitting data frames. The CSMA/CD protocol works within the medium access control
layer. A station first senses the shared channel before broadcasting frames; if the
channel is idle, it transmits a frame while monitoring whether the transmission is
successful. If the frame is successfully received, the station can send the next frame.
If a collision is detected, the station sends a jam/stop signal on the shared channel to
terminate the data transmission, and then waits a random time before attempting to send
the frame again.

CSMA/ CA
It is a carrier sense multiple access / collision avoidance network protocol for the
transmission of data frames. It works with the medium access control layer. When a
station sends a data frame on the channel, it waits for an acknowledgment to check
whether the transmission succeeded. If the station receives only a single (its own)
signal, the data frame has been successfully transmitted to the receiver. But if it
receives two signals (its own and another with which it collided), a collision of frames
has occurred on the shared channel. Thus, the sender infers a collision when the
expected acknowledgment signal is not received.
Following are the methods used in the CSMA/ CA to avoid the collision:

Interframe space: In this method, the station waits for the channel to become idle, and
even when it finds the channel idle it does not send data immediately. Instead, it waits
for a period of time called the interframe space (IFS). The IFS duration is often used
to define the priority of a station.

Contention window: In the contention window method, the total time is divided into
slots. When the station/sender is ready to transmit a data frame, it chooses a random
number of slots as its wait time. If the channel turns busy during the wait, the station
does not restart the entire process; it merely pauses the timer and resumes it, sending
the data packets when the channel becomes idle again.

Acknowledgment: In the acknowledgment method, the sender retransmits the data frame on
the shared channel if the acknowledgment is not received before the timer expires.

B. Controlled Access Protocol


It is a method of reducing data-frame collisions on a shared channel. In the controlled
access method, the stations consult one another, and a station can send a data frame
only when it has been authorized by the other stations. In other words, a single station
cannot send data frames unless it has been approved by all the other stations. There are
three types of controlled access: Reservation, Polling, and Token Passing.

1. Reservation
 In the reservation method, a station needs to make a reservation before sending data.
 The timeline has two kinds of periods:
o Reservation interval of fixed time length
o Data transmission period of variable frames.
 If there are N stations, the reservation interval is divided into N slots, and each station owns
one slot.
 Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other station is
allowed to transmit during this slot.
 In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into
the i-th slot. After all N slots have been checked, each station knows which stations wish to
transmit.
 The stations which have reserved their slots transfer their frames in that order.
 After the data transmission period, the next reservation interval begins.
 Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five-slot reservation frame. In
the first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only
station 1 has made a reservation.
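The bit-map reservation round described above can be sketched as follows. This is an illustrative model only: stations are assumed to be numbered from 0, and the set of stations with queued frames is given as input.

```python
def reservation_round(requests, n_stations):
    """One reservation interval with n_stations mini-slots: station i
    sets a 1 bit in slot i if it has a frame queued. Reserved stations
    then transmit in slot order, so collisions cannot occur."""
    bitmap = [1 if s in requests else 0 for s in range(n_stations)]
    transmit_order = [s for s, bit in enumerate(bitmap) if bit]
    return bitmap, transmit_order

# Five stations; stations 1, 3 and 4 have frames queued,
# mirroring the first interval of the five-slot example above.
bitmap, order = reservation_round({1, 3, 4}, 5)
```

After the reservation interval, every station has seen the same bitmap, so all of them agree that stations 1, 3, and 4 transmit next, in that order.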

Advantages of Reservation
 The main advantage of reservation is that the maximum and minimum data-access times on the
channel are fixed, so channel access time can be predicted easily.
 Priorities can be assigned to give some stations faster access.
 Reservation-based access methods can provide predictable network performance, which is
important in applications where latency and jitter must be minimized, such as in real-time
video or audio streaming.
 Reservation-based access methods can reduce contention for network resources, as access
to the network is pre-allocated based on reservation requests. This can improve network
efficiency and reduce packet loss.
 Reservation-based access methods can support QoS requirements, by providing different
reservation types for different types of traffic, such as voice, video, or data. This can ensure
that high-priority traffic is given preferential treatment over lower-priority traffic.
 Reservation-based access methods can enable more efficient use of available bandwidth, as
they allow for time and frequency multiplexing of different reservation requests on the
same channel.
 Reservation-based access methods are well-suited to support multimedia applications that
require guaranteed network resources, such as bandwidth and latency, to ensure high-
quality performance.
Disadvantages of Reservation
 High dependence on the reliability of the controller.
 Decrease in capacity and channel data rate under light loads; increase in turn-around time.

2. Polling
 Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
 In this, one acts as a primary station(controller) and the others are secondary stations. All
data exchanges must be made through the controller.
 The message sent by the controller contains the address of the node being selected for
granting access.
 Although all nodes receive the message, only the addressed one responds to it and sends data, if
any. If there is no data, usually a “poll reject” (NAK) message is sent back.
 Problems include high overhead of the polling messages and high dependence on the
reliability of the controller.
Advantages of Polling
 The maximum and minimum access times and data rates on the channel are fixed and
predictable.
 It has maximum efficiency.
 It has maximum bandwidth.
 No slot is wasted in polling.
 There is assignment of priority to ensure faster access from some secondary.
Disadvantages of Polling
 It consumes more time, since the polling messages add overhead.
 Link sharing depends entirely on the controller's polling order, so it can be biased.
 Stations with no data to send are still polled, which wastes channel time.
 An increase in the turnaround time leads to a drop in the data rates of the channel under
low loads.
Efficiency Let Tpoll be the time for polling and Tt be the time required for transmission of
data. Then,
Efficiency = Tt/(Tt + Tpoll)
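The efficiency formula above can be expressed directly in code. This is a trivial sketch; the time values in the example are illustrative, and the units are arbitrary as long as both arguments use the same unit.

```python
def polling_efficiency(t_transmit, t_poll):
    """Efficiency = Tt / (Tt + Tpoll), per the formula above."""
    return t_transmit / (t_transmit + t_poll)

# e.g. 8 time units spent transmitting data for every 2 units of
# polling overhead gives an efficiency of 8 / (8 + 2) = 0.8
eff = polling_efficiency(8.0, 2.0)
```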
3. Token Passing
 In token passing scheme, the stations are connected logically to each other in form of ring
and access to stations is governed by tokens.
 A token is a special bit pattern or a small message, which circulate from one station to the
next in some predefined order.
 In a token ring, the token is passed from one station to the adjacent station on the ring,
whereas in a token bus, each station uses the bus to send the token to the next station in
some predefined order.
 In both cases, the token represents permission to send. If a station has a frame queued for
transmission when it receives the token, it can send that frame before it passes the token to
the next station. If it has no queued frame, it simply passes the token along.
 After sending a frame, each station must wait for all N stations (including itself) to send the
token to their neighbours and the other N – 1 stations to send a frame, if they have one.
 Problems such as token duplication, token loss, and the insertion or removal of stations
must be handled for correct and reliable operation of this scheme.
Performance of a token ring can be described by two parameters:
 Delay, a measure of the time between when a packet is ready and when it is delivered. The
average time (delay) required to pass the token to the next station = a/N.
 Throughput, a measure of successful traffic.
Throughput, S = 1/(1 + a/N) for a<1
and
S = 1/{a(1 + 1/N)} for a>1.
where N = number of stations
a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay)
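The throughput formulas above can be checked numerically with a small sketch; the delay values in the example are illustrative.

```python
def token_ring_throughput(t_prop, t_trans, n_stations):
    """Throughput S from the formulas above, where a = Tp / Tt:
       S = 1 / (1 + a/N)         for a < 1
       S = 1 / (a * (1 + 1/N))   for a > 1
    """
    a = t_prop / t_trans
    if a < 1:
        return 1 / (1 + a / n_stations)
    return 1 / (a * (1 + 1 / n_stations))

# Short ring (a = 0.5, N = 10): throughput stays close to 1.
s_short = token_ring_throughput(0.5, 1.0, 10)
# Long ring (a = 2, N = 4): propagation dominates and throughput drops.
s_long = token_ring_throughput(2.0, 1.0, 4)
```

Note how throughput degrades once the propagation delay exceeds the transmission delay (a > 1): the token spends more time traveling than the stations spend sending.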
Advantages of Token passing
 It can be used with standard cabling and includes built-in diagnostic and recovery features,
such as automatic ring reconfiguration.
 It provides good throughput under high-load conditions.
Disadvantages of Token passing
 It is expensive to deploy.
 Topology components cost more than those of other, more widely used standards.
 Token ring hardware is complex, which in practice ties a network to a single manufacturer's
equipment.

C. Channelization Protocols
Channelization is a multiple-access method in which the total usable bandwidth of a shared channel
is divided among multiple stations by time, frequency, or code. All stations can access the channel at
the same time to send their data frames.

Following are the various methods of accessing the channel based on time, frequency, and code:

1. FDMA (Frequency Division Multiple Access)


2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)
FDMA

In frequency division multiple access (FDMA), the available bandwidth is divided into equal frequency
bands so that multiple stations can send data simultaneously, each through its own sub-channel. Each
station is assigned a particular band, and guard bands between the bands prevent crosstalk and
interference between stations.

TDMA

Time Division Multiple Access (TDMA) is a channel access method that allows the same frequency
bandwidth to be shared across multiple stations. To avoid collisions on the shared channel, it divides
transmission time into slots and allocates a slot to each station for sending its data frames. Thus all
stations use the same frequency band but transmit in different time slots. However, TDMA has a
synchronization overhead: synchronization bits must be added to each slot so that every station
knows when its time slot begins.
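The fixed time-slot idea can be sketched as a round-robin slot-to-station mapping. This is an illustrative assumption only; real TDMA systems also handle synchronization bits and guard times, which are omitted here.

```python
def tdma_slot_owner(slot_index, n_stations):
    """Fixed round-robin TDMA: slot k of the repeating frame belongs
    to station (k mod N), so no two stations ever transmit at once."""
    return slot_index % n_stations

# With 4 stations, slots 0..7 belong to stations 0,1,2,3,0,1,2,3:
owners = [tdma_slot_owner(k, 4) for k in range(8)]
```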

CDMA

The code division multiple access (CDMA) is a channel access method. In CDMA, all
stations can simultaneously send the data over the same channel. It means that it allows
each station to transmit the data frames with full frequency on the shared channel at all
times. It does not require the division of bandwidth on a shared channel based on time
slots. If multiple stations send data to a channel simultaneously, their data frames are
separated by a unique code sequence. Each station has a different unique code for
transmitting the data over the shared channel. For example, imagine multiple pairs of people
in a room all speaking at the same time. Each pair can still understand one another as long
as the two speakers use the same language, because each listener filters out conversations
held in other languages. Similarly, in the network, different stations can communicate
simultaneously, each pair using its own code sequence.
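The "different languages" analogy corresponds to spreading each station's bits with an orthogonal chip code. The following sketch, using +1/-1 bits and simple two-station Walsh codes (an illustrative assumption, not a full CDMA system), shows how a receiver recovers one station's data from the combined signal:

```python
def cdma_encode(bits, code):
    """Spread each data bit (+1 or -1) by the station's chip code."""
    return [b * c for b in bits for c in code]

def cdma_decode(signal, code, n_bits):
    """Correlate the combined signal with one station's code to recover
    that station's bits; orthogonal codes contribute zero correlation."""
    n = len(code)
    out = []
    for i in range(n_bits):
        chunk = signal[i * n:(i + 1) * n]
        corr = sum(s * c for s, c in zip(chunk, code))
        out.append(1 if corr > 0 else -1)
    return out

# Two stations with orthogonal (Walsh) chip codes share the channel:
code_a, code_b = [1, 1, 1, 1], [1, -1, 1, -1]
# Their encoded signals add up on the shared medium.
tx = [sa + sb for sa, sb in zip(cdma_encode([1, -1], code_a),
                                cdma_encode([-1, -1], code_b))]
```

Decoding `tx` with `code_a` yields station A's bits and decoding with `code_b` yields station B's, even though both transmitted at the same time on the same channel.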
