Unit 2


Data link layer - Data link layer design issues, Error detection and correction, Sliding window protocols, High-Level Data Link Control (HDLC) protocol. Medium Access Control (MAC) sublayer - Channel allocation problem, Multiple access protocols, Ethernet, Wireless LANs - 802.11, Repeaters, Hubs, Bridges, Switches, Routers and Gateways.

1) Data Link Layer

The data link layer is used in a computer network to transmit data between two devices or nodes. It is divided into two sublayers: data link control and multiple access resolution/protocol. The upper sublayer is responsible for flow control and error control, and hence is termed logical link control. The lower sublayer handles and reduces collisions or multiple access on a shared channel, and hence is termed media access control or multiple access resolution.

Data Link Control


Data link control provides a reliable channel for transmitting data over a dedicated link, using techniques such as framing, error control and flow control of data packets in the computer network.

2. Data link layer design issues

The data link layer is located between the physical and network layers. It provides services to the network layer and receives services from the physical layer. The scope of the data link layer is node-to-node.
The following are the design issues in the data link layer −

● The services that are provided to the network layer
● Framing
● Error control
● Flow control

Services to the Network Layer

In OSI, each layer uses the services of the layer below it and provides services to the layer above it. The main function of this layer is to provide a well-defined service interface to the network layer.

Types of Services
The services are of three types −

● Unacknowledged connectionless service − The sender sends messages and the receiver receives them without sending any acknowledgement; both nodes use a connectionless service.
● Acknowledged connectionless service − The sender sends a message to the receiver; when the receiver gets the message, it sends an acknowledgement back to the sender confirming receipt, still over a connectionless service.
● Acknowledged connection-oriented service − Both sender and receiver use a connection-oriented service, and every exchange between the two nodes is acknowledged.

Framing
Framing is the function of a data link layer that provides a way for a sender to
transmit a set of bits that are meaningful to the receiver.

The frame contains the following −

● Frame header
● Payload field for holding the packet
● Frame trailer

The frame is diagrammatically shown below −

The following are the three types of framing methods that are used in the Data Link Layer −

● Byte-oriented framing
● Bit-oriented framing
● Clock-based framing
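As an illustration of the byte-oriented approach, the sketch below shows byte stuffing: a FLAG byte delimits each frame, and an ESC byte escapes any FLAG or ESC that occurs inside the payload. The specific byte values (borrowed from PPP-style framing) and the function names are illustrative assumptions, not part of the text above.

```python
# Byte stuffing for byte-oriented framing. FLAG delimits a frame; ESC
# escapes any FLAG/ESC byte inside the payload. Byte values are illustrative.
FLAG, ESC = 0x7E, 0x7D

def stuff(payload: bytes) -> bytes:
    """Wrap a payload in FLAG delimiters, escaping FLAG/ESC bytes inside it."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)        # insert escape before the special byte
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    """Recover the payload from a stuffed frame."""
    body = frame[1:-1]             # drop the two FLAG delimiters
    out, i = bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                 # the next byte is literal data
        out.append(body[i])
        i += 1
    return bytes(out)
```

Bit-oriented protocols such as HDLC use the same idea at the bit level (bit stuffing after five consecutive 1s) rather than at the byte level.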

Error Control
At the sending node, a frame in a data-link layer needs to be changed to bits,
transformed to electromagnetic signals, and transmitted through the
transmission media. At the receiving node, electromagnetic signals are received,
transformed to bits, and put together to create a frame.

Since electromagnetic signals are susceptible to error, a frame is susceptible to error. The error first needs to be detected; after detection, it is either corrected by the receiving node, or the frame is discarded and retransmitted by the sending node.

Flow Control
Flow control allows two nodes that work at different speeds to communicate with each other. The data link layer provides flow control so that when a fast sender transmits data, a slow receiver is not overwhelmed and can still receive the data correctly.

Methods for Flow Control

Two methods are used for flow control, which are as follows −

● Feedback-based flow control
● Rate-based flow control

3.Error detection and correction

Error Detection

Error

A condition in which the receiver’s information does not match the sender’s information. During transmission, digital signals suffer from noise that can introduce errors in the binary bits travelling from sender to receiver: a 0 bit may change to 1, or a 1 bit may change to 0.

Error Detecting Codes (Implemented either at Data link layer or

Transport Layer of OSI Model)

Whenever a message is transmitted, it may get scrambled by noise or the data may get corrupted. To avoid this, we use error-detecting codes: additional data added to a given digital message that help us detect whether any error occurred during transmission of the message.

The basic approach used for error detection is redundancy, where additional bits are added to facilitate the detection of errors. Some popular techniques for error detection are:

1. Simple Parity check

2. Two-dimensional Parity check

3. Checksum

4. Cyclic redundancy check

Simple Parity check

Blocks of data from the source are passed through a parity-bit generator, which appends a parity bit of 1 if the block contains an odd number of 1s, and 0 if it contains an even number of 1s.

This scheme makes the total number of 1s even, which is why it is called even parity checking.
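The even-parity scheme above can be sketched in a few lines (function names are illustrative):

```python
def add_even_parity(bits: list[int]) -> list[int]:
    """Append a parity bit so that the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_even_parity(bits: list[int]) -> bool:
    """Receiver side: the block is valid iff the 1s count (data + parity) is even."""
    return sum(bits) % 2 == 0
```

Note that simple parity detects any odd number of bit errors but misses an even number of flipped bits.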
Two-dimensional Parity check

Parity check bits are calculated for each row, which is equivalent to a simple

parity check bit. Parity check bits are also calculated for all columns, then both

are sent along with the data. At the receiving end these are compared with the

parity bits calculated on the received data.
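The sender-side computation can be sketched as follows; a minimal version that appends a parity bit to every row and then a parity row over all columns (including the column of row-parity bits):

```python
def two_d_parity(rows: list[list[int]]) -> list[list[int]]:
    """Append an even-parity bit to each row, then an even-parity row
    computed over every column of the extended block."""
    with_row = [r + [sum(r) % 2] for r in rows]          # row parity bits
    col_row = [sum(col) % 2 for col in zip(*with_row)]   # column parity row
    return with_row + [col_row]
```

The receiver recomputes the same block; a single-bit error shows up as one failing row parity and one failing column parity, which together locate the bit.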

Checksum

In the checksum error detection scheme, the data is divided into k segments, each of m bits.

At the sender’s end, the segments are added using 1’s complement arithmetic to get the sum, and the sum is complemented to get the checksum. The checksum segment is sent along with the data segments.

At the receiver’s end, all received segments are added using 1’s complement arithmetic to get the sum, and the sum is complemented.

If the result is zero, the received data is accepted; otherwise it is discarded.
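These steps can be sketched directly; the key detail is the end-around carry of 1's complement addition (helper names are illustrative):

```python
def ones_complement_sum(segments: list[int], m: int) -> int:
    """Add m-bit segments with end-around carry (1's complement arithmetic)."""
    mask = (1 << m) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> m)   # fold the carry back in
    return total

def make_checksum(segments: list[int], m: int) -> int:
    """Sender: complement of the 1's-complement sum of the segments."""
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)

def verify_checksum(segments_with_checksum: list[int], m: int) -> bool:
    """Receiver: the complemented sum of data + checksum must be zero."""
    s = ones_complement_sum(segments_with_checksum, m)
    return (s ^ ((1 << m) - 1)) == 0
```

For example, with 8-bit segments 0x25, 0x62 and 0x3F, the sum is 0xC6 and the checksum is its complement, 0x39.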

Cyclic redundancy check (CRC)


Unlike the checksum scheme, which is based on addition, CRC is based on binary division.

In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is appended to the end of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number.

At the destination, the incoming data unit is divided by the same number. If

at this step there is no remainder, the data unit is assumed to be correct and

is therefore accepted.

A remainder indicates that the data unit has been damaged in transit and

therefore must be rejected.
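The "binary division" here is modulo-2 long division (subtraction is XOR). A minimal sketch, using bit strings for readability:

```python
def crc_remainder(data_bits: str, divisor: str) -> str:
    """Modulo-2 long division: remainder of data_bits followed by
    r appended zero bits, divided by the divisor (r = len(divisor) - 1)."""
    r = len(divisor) - 1
    bits = [int(b) for b in data_bits] + [0] * r     # append r zero bits
    div = [int(b) for b in divisor]
    for i in range(len(data_bits)):
        if bits[i] == 1:                             # XOR the divisor in
            for j in range(len(div)):
                bits[i + j] ^= div[j]
    return "".join(map(str, bits[-r:]))

def crc_encode(data_bits: str, divisor: str) -> str:
    """Sender: data followed by the CRC remainder."""
    return data_bits + crc_remainder(data_bits, divisor)

def crc_check(codeword: str, divisor: str) -> bool:
    """Receiver: divide the whole codeword; remainder must be all zeros."""
    r = len(divisor) - 1
    bits = [int(b) for b in codeword]
    div = [int(b) for b in divisor]
    for i in range(len(codeword) - r):
        if bits[i] == 1:
            for j in range(len(div)):
                bits[i + j] ^= div[j]
    return not any(bits[-r:])
```

With the common textbook example, data 100100 and divisor 1101 give the remainder 001, so the transmitted codeword is 100100001.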

Error Correction

Error Correction codes are used to detect and correct the errors when data is

transmitted from the sender to the receiver.

Error Correction can be handled in two ways:

Backward error correction: Once the error is discovered, the receiver requests

the sender to retransmit the entire data unit.

Forward error correction: In this case, the receiver uses the error-correcting

code which automatically corrects the errors.

A single additional bit can detect the error, but cannot correct it.

For correcting errors, one has to know the exact position of the error. For example, to correct a single-bit error in a 7-bit data unit, the error correction code must determine which one of the seven bits is in error. To achieve this, we have to add some redundant bits.

Suppose r is the number of redundant bits and d is the number of data bits. The number of redundant bits r can be calculated by using the formula:

2^r >= d + r + 1

The value of r is the smallest value that satisfies this relation. For example, if the value of d is 4, then the smallest possible value of r that satisfies the relation is 3.
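The smallest r satisfying the relation can be found by simple search:

```python
def redundant_bits(d: int) -> int:
    """Smallest r with 2**r >= d + r + 1 (number of Hamming parity bits)."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r
```

For d = 4 this gives r = 3, matching the worked example in the text.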

To determine the position of the bit which is in error, a technique developed by R. W. Hamming, known as Hamming code, can be used; it applies to data units of any length and uses the relationship between the data bits and the redundant bits.

Hamming Code

Parity bits: The bit which is appended to the original data of binary bits so that

the total number of 1s is even or odd.

Even parity: To check for even parity, if the total number of 1s is even, then the

value of the parity bit is 0. If the total number of 1s occurrences is odd, then the

value of the parity bit is 1.

Odd Parity: To check for odd parity, if the total number of 1s is even, then the

value of parity bit is 1. If the total number of 1s is odd, then the value of parity

bit is 0.

Algorithm of Hamming code:

The d information bits are combined with r redundant bits to form a codeword of d+r bits. The location of each of the (d+r) digits is assigned a decimal value.

The r bits are placed at positions 1, 2, 4, ..., 2^(r-1), i.e., at the powers of 2.

At the receiving end, the parity bits are recalculated. The decimal value of the recalculated parity bits determines the position of the error, because the binary representation of each position determines which parity checks it participates in.

Let's understand the concept of Hamming code through an example:

Suppose the original data is 1010 which is to be sent.

Total number of data bits 'd' = 4

Number of redundant bits r: 2^r >= d + r + 1, i.e., 2^r >= 4 + r + 1

Therefore, the value of r is 3, which satisfies the above relation.

Total number of bits = d + r = 4 + 3 = 7

Determining the position of the redundant bits

The number of redundant bits is 3. The three bits are represented by r1, r2, r4. The positions of the redundant bits correspond to powers of 2; therefore, their positions are 2^0, 2^1, 2^2.

The position of r1 = 1, the position of r2 = 2, the position of r4 = 4.

Representation of Data on the addition of parity bits:

We observe from the above figure that the bit positions whose binary representation includes a 1 in the first (least significant) place are 1, 3, 5, 7. Now, we perform the even-parity check at these bit positions. The total number of 1s at the bit positions corresponding to r1 is even; therefore, the value of the r1 bit is 0.

Determining r2 bit: The r2 bit is calculated by performing a parity check on the bit positions whose binary representation includes a 1 in the second place, i.e., positions 2, 3, 6, 7. The total number of 1s at these bit positions is odd; therefore, the value of the r2 bit is 1.

Determining r4 bit: The r4 bit is calculated by performing a parity check on the bit positions whose binary representation includes a 1 in the third place. We observe from the above figure that these are positions 4, 5, 6, 7. Now, we perform the even-parity check at these bit positions. The total number of 1s at the bit positions corresponding to r4 is even; therefore, the value of the r4 bit is 0.

Data transferred is given below:


Suppose an error occurs during transmission, and the parity checks are recalculated at the receiving end.

R1 bit: The bit positions checked by r1 are 1, 3, 5, 7. We observe from the above figure that the received bits at these positions are 1100. Performing the even-parity check, the total number of 1s is even; therefore, the value of r1 is 0.

R2 bit: The bit positions checked by r2 are 2, 3, 6, 7. Performing the even-parity check on the received bits at these positions, the total number of 1s is even; therefore, the value of r2 is 0.

R4 bit: The bit positions checked by r4 are 4, 5, 6, 7. We observe from the above figure that the received bits at these positions are 1011. Performing the even-parity check, the total number of 1s is odd; therefore, the value of r4 is 1.

The binary representation of the redundant bits, i.e., r4r2r1, is 100, and its corresponding decimal value is 4. Therefore, the error occurred in the 4th bit position; that bit must be changed from 1 to 0 to correct the error.
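The worked example above is Hamming(7,4). A compact sketch of both sides, assuming the same conventions as the example (even parity; data bits at positions 3, 5, 6, 7; parity bits r1, r2, r4 at positions 1, 2, 4):

```python
def hamming74_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits (given as [d3, d5, d6, d7]) into a 7-bit codeword,
    positions 1..7, with even-parity bits at positions 1, 2, 4."""
    d3, d5, d6, d7 = d
    r1 = (d3 + d5 + d7) % 2            # covers positions whose LSB is 1
    r2 = (d3 + d6 + d7) % 2            # covers positions whose 2nd bit is 1
    r4 = (d5 + d6 + d7) % 2            # covers positions whose 3rd bit is 1
    return [r1, r2, d3, r4, d5, d6, d7]

def hamming74_correct(code: list[int]) -> list[int]:
    """Recompute the checks; their value read as r4 r2 r1 is the 1-based
    position of a single-bit error (0 means no error). Returns fixed code."""
    c = code[:]
    p1 = (c[0] + c[2] + c[4] + c[6]) % 2   # positions 1, 3, 5, 7
    p2 = (c[1] + c[2] + c[5] + c[6]) % 2   # positions 2, 3, 6, 7
    p4 = (c[3] + c[4] + c[5] + c[6]) % 2   # positions 4, 5, 6, 7
    pos = p4 * 4 + p2 * 2 + p1
    if pos:
        c[pos - 1] ^= 1                    # flip the erroneous bit
    return c
```

Encoding the example's data 1010 (d3=0, d5=1, d6=0, d7=1) reproduces r1=0, r2=1, r4=0; flipping position 4 and running the corrector locates and repairs the error, exactly as in the walkthrough.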

4. Sliding Window Protocols

Sliding window protocols are data link layer protocols for reliable and sequential
delivery of data frames. The sliding window is also used in Transmission Control
Protocol.

In this protocol, multiple frames can be sent by the sender at a time before receiving an acknowledgment from the receiver. The term sliding window refers to imaginary boxes that hold the frames. The sliding window method is also known as windowing.

Working Principle
In these protocols, the sender has a buffer called the sending window and the receiver has a buffer called the receiving window.

The size of the sending window determines the sequence numbers of the outbound frames. If the sequence number of the frames is an n-bit field, then the range of sequence numbers that can be assigned is 0 to 2^n - 1, and consequently the size of the sending window is 2^n - 1. Thus, in order to accommodate a sending window of size 2^n - 1, an n-bit sequence number is chosen.

The sequence numbers are numbered modulo 2^n. For example, if the sending window size is 4, then the sequence numbers will be 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, and so on: the number of bits in the sequence number is 2, generating the binary sequence 00, 01, 10, 11.

The size of the receiving window is the maximum number of frames that the
receiver can accept at a time. It determines the maximum number of frames
that the sender can send before receiving acknowledgment.

Example
Suppose that we have sender window and receiver window each of size 4. So
the sequence numbering of both the windows will be 0,1,2,3,0,1,2 and so on.
The following diagram shows the positions of the windows after sending the
frames and receiving acknowledgments.
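The modulo sequence numbering and the way the window edge advances on acknowledgment can be sketched as follows (function names are illustrative, not a full protocol):

```python
def seq_numbers(n_bits: int, count: int) -> list[int]:
    """First `count` frame sequence numbers for an n-bit field (modulo 2**n)."""
    return [i % (2 ** n_bits) for i in range(count)]

def slide(window_start: int, acked: int, n_bits: int) -> int:
    """Advance the window's lower edge past `acked` frames, wrapping modulo 2**n."""
    return (window_start + acked) % (2 ** n_bits)
```

With a 2-bit sequence number this reproduces the 0, 1, 2, 3, 0, 1, ... numbering used in the example.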
Types of Sliding Window Protocols
The Sliding Window ARQ (Automatic Repeat reQuest) protocols are of two
categories −

● Go-Back-N ARQ
Go-Back-N ARQ provides for sending multiple frames before receiving the acknowledgment for the first frame. It uses the concept of a sliding window, and so is also called a sliding window protocol. The frames are sequentially numbered, and a finite number of frames are sent. If the acknowledgment of a frame is not received within the time period, all frames starting from that frame are retransmitted.
● Selective Repeat ARQ
This protocol also provides for sending multiple frames before receiving the
acknowledgment for the first frame. However, here only the erroneous or lost frames
are retransmitted, while the good frames are received and buffered.
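The difference between the two retransmission policies can be made concrete with a small sketch (illustrative helper functions, not full protocol implementations):

```python
def go_back_n_retransmit(outstanding: list[int], lost: set[int]) -> list[int]:
    """Go-Back-N: retransmit everything from the first lost frame onward."""
    first = min(i for i, f in enumerate(outstanding) if f in lost)
    return outstanding[first:]

def selective_repeat_retransmit(outstanding: list[int], lost: set[int]) -> list[int]:
    """Selective Repeat: retransmit only the lost frames themselves."""
    return [f for f in outstanding if f in lost]
```

If frames 0-3 are outstanding and frame 1 is lost, Go-Back-N resends 1, 2 and 3, while Selective Repeat resends only frame 1 and the receiver buffers 2 and 3.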

5. High-Level Data Link Control (HDLC) protocol

High-level Data Link Control (HDLC) is a group of communication protocols of the data link layer for transmitting data between network points or nodes. Since it is a data link protocol, data is organized into frames. A frame is transmitted via the network to the destination, which verifies its successful arrival. It is a bit-oriented protocol that is applicable to both point-to-point and multipoint communications.

Transfer Modes
HDLC supports two types of transfer modes: normal response mode and asynchronous balanced mode.

● Normal Response Mode (NRM) − Here, there are two types of stations: a primary station that sends commands, and secondary stations that respond to received commands. It is used for both point-to-point and multipoint communications.

● Asynchronous Balanced Mode (ABM) − Here, the configuration is balanced, i.e., each station can both send commands and respond to commands. It is used only for point-to-point communications.
HDLC Frame
HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure varies according to the type of frame. The fields of an HDLC frame are −

● Flag − An 8-bit sequence that marks the beginning and the end of the frame. The bit pattern of the flag is 01111110.
● Address − Contains the address of the receiver. If the frame is sent by the primary station, it contains the address(es) of the secondary station(s). If it is sent by a secondary station, it contains the address of the primary station. The address field may be from 1 byte to several bytes.
● Control − 1 or 2 bytes containing flow and error control information.
● Payload − Carries the data from the network layer. Its length may vary from one network to another.
● FCS − A 2- or 4-byte frame check sequence for error detection. The standard code used is CRC (cyclic redundancy code).
Types of HDLC Frames
There are three types of HDLC frames. The type of frame is determined by the control field of the frame −

● I-frame − I-frames or Information frames carry user data from the network layer. They also include flow and error control information piggybacked on the user data. The first bit of the control field of an I-frame is 0.
● S-frame − S-frames or Supervisory frames do not contain an information field. They are used for flow and error control when piggybacking is not required. The first two bits of the control field of an S-frame are 10.
● U-frame − U-frames or Unnumbered frames are used for myriad miscellaneous functions, such as link management. A U-frame may contain an information field if required. The first two bits of the control field of a U-frame are 11.
Channel Allocation Problem


Channel allocation is a process in which a single channel is divided and allotted to multiple users in order to carry user-specific tasks. The number of users may vary every time the process takes place. If there are N users and the channel is divided into N equal-sized subchannels, each user is assigned one portion. If the number of users is small and does not vary over time, then Frequency Division Multiplexing can be used, as it is a simple and efficient channel bandwidth allocation technique.
The channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs and MANs, and Dynamic Channel Allocation.

These are explained as following below.


1. Static Channel Allocation in LANs and MANs:
It is the classical or traditional approach of allocating a single channel among multiple competing users using Frequency Division Multiplexing (FDM). If there are N users, the frequency channel is divided into N equal-sized portions (bandwidth), each user being assigned one portion. Since each user has a private frequency band, there is no interference between users. However, it is not suitable for a large number of users with variable bandwidth requirements, because it is not efficient to divide the channel into a fixed number of chunks.

T = 1/(UC - L)

T(FDM) = 1/(U(C/N) - L/N) = N/(UC - L) = N * T

Where,

T = mean time delay,
C = capacity of the channel,
L = arrival rate of frames,
1/U = bits/frame,
N = number of subchannels,
T(FDM) = mean time delay using Frequency Division Multiplexing
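A quick numerical check of these formulas, assuming the standard queueing-delay model behind them (a channel of capacity C bps carrying frames of 1/U bits arriving at L frames/sec):

```python
def mean_delay(capacity_bps: float, frame_bits: float, arrival_rate: float) -> float:
    """Mean delay on the full shared channel: T = 1/(UC - L),
    where UC = capacity / bits-per-frame is the service rate in frames/sec."""
    return 1.0 / (capacity_bps / frame_bits - arrival_rate)

def mean_delay_fdm(capacity_bps: float, frame_bits: float,
                   arrival_rate: float, n: int) -> float:
    """Each of the N subchannels has capacity C/N and traffic L/N,
    which works out to N times the single-channel delay."""
    return 1.0 / ((capacity_bps / n) / frame_bits - arrival_rate / n)
```

For example, with C = 8000 bps, 100-bit frames and L = 40 frames/sec, the shared channel gives T = 25 ms, while splitting it into 10 FDM subchannels gives 250 ms, i.e., ten times worse, which is the point of the comparison.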

2. Dynamic Channel Allocation:

In the dynamic channel allocation scheme, frequency bands are not permanently assigned to the users. Instead, channels are allotted to users dynamically as needed, from a central pool. The allocation is done considering a number of parameters so that transmission interference is minimized. This allocation scheme optimises bandwidth usage and results in faster transmissions.
Dynamic channel allocation is further divided into:
1. Centralised Allocation

2. Distributed Allocation

Medium Access Control (MAC) sublayer – Multiple access protocols

What is a multiple access protocol?


When a sender and receiver have a dedicated link to transmit data packets, data link control is enough to handle the channel. Suppose there is no dedicated path to communicate or transfer data between two devices. In that case, multiple stations access the channel and may transmit data over it simultaneously, which can create collisions and crosstalk. Hence, a multiple access protocol is required to reduce collisions and avoid crosstalk between the channels.

For example, suppose that there is a classroom full of students. When a teacher asks a question, all the students (small channels) in the class start answering at the same time (transferring the data simultaneously). Because all the students respond at once, the answers overlap or are lost. It is therefore the responsibility of the teacher (the multiple access protocol) to manage the students and make them answer one at a time.
The types of multiple access protocol, subdivided by process, are as follows:

A. Random Access Protocol


In this protocol, all stations have equal priority to send data over the channel. In random access protocols, no station depends on another station, nor does any station control another. Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more than one station sends data over the channel at the same time, there may be a collision or data conflict. Due to the collision, the data frame packets may be lost or changed, and hence are not received correctly by the receiver end.

Following are the different methods of random-access protocols for broadcasting frames
on the channel.

o Aloha
o CSMA
o CSMA/CD
o CSMA/CA

ALOHA Random Access Protocol


It was designed for wireless LANs (Local Area Networks) but can also be used in any shared medium to transmit data. Using this method, any station can transmit data across the network whenever a data frame is available for transmission.

Aloha Rules

1. Any station can transmit data on the channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur and data frames may be lost when multiple stations transmit at the same time.
4. Aloha relies on acknowledgment of the frames; there is no collision detection.
5. It requires retransmission of data after some random amount of time.

Pure Aloha

Whenever a station has data available for sending over the channel, we use Pure Aloha. In pure Aloha, each station transmits data on the channel without checking whether the channel is idle or not, so collisions may occur and the data frame may be lost. When a station transmits a data frame on the channel, it waits for the receiver's acknowledgment. If the acknowledgment does not arrive within the specified time, the station waits for a random amount of time, called the backoff time (Tb), assumes the frame has been lost or destroyed, and retransmits it. This repeats until all the data has been successfully transmitted to the receiver.

1. The total vulnerable time of pure Aloha is 2 * Tfr.
2. Maximum throughput occurs when G = 1/2, and is 18.4%.
3. The probability of successful transmission of a data frame is S = G * e^(-2G).
As we can see in the figure above, there are four stations accessing a shared channel and transmitting data frames. Some frames collide because most stations send their frames at the same time; only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver end, while the other frames are lost or destroyed. Whenever two frames occupy the shared channel simultaneously, a collision occurs and both suffer damage: even if only the first bit of a new frame overlaps with the last bit of a frame that is almost finished, both frames are destroyed completely, and both stations must retransmit their data frames.

Slotted Aloha

Slotted Aloha was designed to improve on pure Aloha's efficiency, because pure Aloha has a very high probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time intervals called slots. If a station wants to send a frame on the shared channel, the frame can only be sent at the beginning of a slot, and only one frame is allowed to be sent in each slot. If a station misses the beginning of a slot, it must wait until the beginning of the next time slot. However, the possibility of a collision remains when two or more stations try to send a frame at the beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1, and is 37%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S = G * e^(-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
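The two throughput formulas, S = G·e^(−2G) for pure Aloha and S = G·e^(−G) for slotted Aloha, can be evaluated directly to recover the 18.4% and 36.8% peaks quoted above:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """Throughput of pure Aloha at offered load G (frames per frame time)."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """Throughput of slotted Aloha at offered load G."""
    return G * math.exp(-G)
```

Pure Aloha peaks at G = 1/2 with S ≈ 0.184, and slotted Aloha at G = 1 with S ≈ 0.368: slotting doubles the best-case channel utilization.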
CSMA (Carrier Sense Multiple Access)

Carrier Sense Multiple Access is a media access protocol in which a station senses the traffic on the channel (idle or busy) before transmitting data. If the channel is idle, the station can send data on the channel; otherwise, it must wait until the channel becomes idle. Hence, it reduces the chances of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel, and if the channel is idle, it immediately sends the data. Otherwise, it keeps monitoring the channel and broadcasts the frame unconditionally as soon as the channel becomes idle.

Non-Persistent: In this access mode, each node must sense the channel before transmitting data, and if the channel is inactive, it immediately sends the data. Otherwise, the station waits for a random time (rather than sensing continuously), and when the channel is then found to be idle, it transmits the frame.

P-Persistent: This mode combines the 1-persistent and non-persistent modes. Each node senses the channel, and if the channel is inactive, it sends a frame with probability p. With probability q = 1 - p, it defers to the next time slot and repeats the process.

O-Persistent: In the O-persistent method, a transmission order (priority) of the stations is defined before transmission on the shared channel. If the channel is found to be inactive, each station waits for its turn in this order to transmit the data.


CSMA/CD

It is a carrier sense multiple access / collision detection network protocol for transmitting data frames. The CSMA/CD protocol works within the medium access control layer. A station first senses the shared channel before broadcasting frames; if the channel is idle, it transmits a frame while checking whether the transmission was successful. If the frame is successfully received, the station sends the next frame. If a collision is detected, the station sends a jam/stop signal on the shared channel to terminate the data transmission. After that, it waits for a random time before sending the frame again.
CSMA/CA

It is a carrier sense multiple access / collision avoidance network protocol for the transmission of data frames, and it also works within the medium access control layer. When a station sends a data frame on the channel, it listens for confirmation that the transmission was clear. If the station receives only a single signal (its own), the data frame has been successfully transmitted to the receiver. But if it receives two signals (its own and another with which it collides), a collision of frames has occurred on the shared channel. The sender therefore detects a collision when it does not receive a clean acknowledgment signal.

Following are the methods used in the CSMA/ CA to avoid the collision:

Interframe space: In this method, the station waits for the channel to become idle, and when it finds the channel idle, it does not send the data immediately. Instead, it waits for some time, and this time period is called the interframe space or IFS. The IFS time is often also used to define the priority of a station.

Contention window: In the contention window method, the total time is divided into slots. When the station/sender is ready to transmit a data frame, it chooses a random number of slots as its wait time. If the channel is still busy, it does not restart the entire process; it only pauses and restarts the timer, and sends the data packets when the channel becomes inactive.

Acknowledgment: In the acknowledgment method, the sender station retransmits the data frame on the shared channel if the acknowledgment is not received within the expected time.

B. Controlled Access Protocol


It is a method of reducing data frame collisions on a shared channel. In the controlled access method, the stations consult one another, and a particular station sends a data frame only when it is approved by all the other stations.

In controlled access, the stations seek information from one another to find which
station has the right to send. It allows only one node to send at a time, to avoid
the collision of messages on a shared medium. The three controlled-access
methods are:
1. Reservation
2. Polling
3. Token Passing
Reservation
● In the reservation method, a station needs to make a reservation before sending data.
● The timeline has two kinds of periods:
1. A reservation interval of fixed time length
2. A data transmission period of variable-length frames.
● If there are N stations, the reservation interval is divided into N slots, and each station has one slot.
● Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other station is allowed to transmit during this slot.
● In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into the i-th slot. After all N slots have been checked, each station knows which stations wish to transmit.
● The stations which have reserved their slots transfer their frames in that order.
● After the data transmission period, the next reservation interval begins.
● Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five-slot reservation frame. In the first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only station 1 has made a reservation.

Advantages of Reservation:
● The maximum and minimum data access times and rates on the channel are fixed, so they can be predicted easily.
● Priorities can be set to provide speedier access for some secondary stations.
● Predictable network performance: Reservation-based access methods can
provide predictable network performance, which is important in applications
where latency and jitter must be minimized, such as in real-time video or
audio streaming.
● Reduced contention: Reservation-based access methods can reduce
contention for network resources, as access to the network is pre-allocated
based on reservation requests. This can improve network efficiency and
reduce packet loss.
● Quality of Service (QoS) support: Reservation-based access methods can
support QoS requirements, by providing different reservation types for
different types of traffic, such as voice, video, or data. This can ensure that
high-priority traffic is given preferential treatment over lower-priority traffic.
● Efficient use of bandwidth: Reservation-based access methods can enable
more efficient use of available bandwidth, as they allow for time and frequency
multiplexing of different reservation requests on the same channel.
● Support for multimedia applications: Reservation-based access methods
are well-suited to support multimedia applications that require guaranteed
network resources, such as bandwidth and latency, to ensure high-quality
performance.
Disadvantages of Reservation:
● High dependence on the reliability of the controller.
● Decreased capacity and channel data rate under light loads, and increased turnaround time.
Polling
● Polling process is similar to the roll-call performed in class. Just like the
teacher, a controller sends a message to each node in turn.
● In this, one acts as a primary station(controller) and the others are secondary
stations. All data exchanges must be made through the controller.
● The message sent by the controller contains the address of the node being
selected for granting access.
● Although all nodes receive the message the addressed one responds to it and
sends data if any. If there is no data, usually a “poll reject”(NAK) message is
sent back.
● Problems include high overhead of the polling messages and high
dependence on the reliability of the controller.
Advantages of Polling:
● The maximum and minimum access times and data rates on the channel are
fixed and predictable.
● It achieves high efficiency.
● It makes good use of the available bandwidth.
● No slot is wasted in polling.
● Priorities can be assigned to ensure faster access for some secondary stations.
Disadvantages of Polling:
● It consumes more time, since stations are polled even when they have no data
to send.
● Link sharing is biased: every exchange depends on the primary station.
● Polling overhead is wasted on stations that have run out of data to send.
● An increase in the turnaround time leads to a drop in the channel’s data rate
under low loads.
Efficiency: Let Tpoll be the time spent polling and Tt be the time required for
transmission of data. Then

Efficiency = Tt / (Tt + Tpoll)
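The efficiency formula can be checked numerically. The short sketch below is illustrative only; the values of Tt and Tpoll are made up:

```python
# Efficiency of polling: useful transmission time over total time per cycle.
def polling_efficiency(t_transmit, t_poll):
    """Efficiency = Tt / (Tt + Tpoll)."""
    return t_transmit / (t_transmit + t_poll)

# Example: 10 ms of data transmission for every 1 ms of polling overhead.
eff = polling_efficiency(10.0, 1.0)
print(round(eff, 4))  # 0.9091
```

As Tpoll grows relative to Tt, the efficiency falls, which is the "high overhead of polling messages" problem noted above.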
Token Passing
● In the token passing scheme, the stations are logically connected to each other in
the form of a ring, and access to the channel is governed by a token.
● A token is a special bit pattern or a small message, which circulates from one
station to the next in some predefined order.
● In a token ring, the token is passed from one station to the adjacent station on
the ring, whereas in a token bus, each station uses the bus to send the
token to the next station in some predefined order.
● In both cases, the token represents permission to send. If a station has a frame
queued for transmission when it receives the token, it can send that frame
before passing the token to the next station. If it has no queued frame, it
simply passes the token on.
● After sending a frame, each station must wait for all N stations (including
itself) to pass the token to their neighbours and for the other N – 1 stations to
send a frame, if they have one.
● Problems such as token duplication, token loss, and the insertion or removal of
a station must be handled for correct and reliable operation of this scheme.
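The token-circulation rule described above (send at most one queued frame, then pass the token on) can be sketched as a toy simulation. This is a hypothetical illustration, not a real MAC implementation; the station ids and queue contents are invented:

```python
def token_ring_round(frames_queued):
    """Simulate one full circulation of the token around a logical ring.

    frames_queued maps station id -> number of frames waiting. A station
    holding the token sends at most one frame, then passes the token to
    the next station in order. Returns the order in which stations sent.
    (Illustrative sketch only; token loss/duplication are not modeled.)
    """
    sent = []
    for station in sorted(frames_queued):
        if frames_queued[station] > 0:
            frames_queued[station] -= 1
            sent.append(station)
    return sent

# Station 1 has 2 frames queued, station 2 has none, station 3 has 1.
print(token_ring_round({1: 2, 2: 0, 3: 1}))  # [1, 3]
```

In this round, station 2 simply passes the token on, while stations 1 and 3 each send one frame before releasing it.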
The performance of a token ring can be characterized by two parameters:
1. Delay, a measure of the time between when a packet is ready and when it is
delivered. The average time (delay) required to pass the token to the next
station is a/N.
2. Throughput, a measure of successful traffic.
Throughput, S = 1/(1 + a/N) for a < 1
and
S = 1/{a(1 + 1/N)} for a > 1
where N = number of stations
a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay)
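The two throughput formulas can be evaluated directly. A minimal sketch, with a and N chosen arbitrarily for illustration:

```python
def token_ring_throughput(a, n):
    """Throughput S of a token ring.

    a = Tp/Tt (propagation delay over transmission delay), n = stations.
    S = 1/(1 + a/n)     for a < 1
    S = 1/(a*(1 + 1/n)) for a > 1
    """
    if a < 1:
        return 1.0 / (1.0 + a / n)
    return 1.0 / (a * (1.0 + 1.0 / n))

print(round(token_ring_throughput(0.5, 10), 3))  # a < 1 case: 0.952
print(round(token_ring_throughput(2.0, 10), 3))  # a > 1 case: 0.455
```

Note that throughput improves as a = Tp/Tt shrinks, i.e. when frames are long relative to the ring's propagation delay.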
Advantages of Token passing:
● It can run over standard cabling and includes built-in troubleshooting features
such as protective relays and auto-reconfiguration.
● It provides good throughput under high-load conditions.
Disadvantages of Token passing:
● It is expensive to deploy.
● Its components cost more than those of other, more widely used standards.
● Token ring hardware from different manufacturers does not always interoperate
well, so equipment from a single manufacturer is often used exclusively.

C. Channelization Protocols
Channelization is a class of multiple access protocols in which the total usable bandwidth
of a shared channel is divided among stations by frequency, time, or code. All stations
can then send their data frames over the channel at the same time without collision.

Following are the three methods of sharing the channel, by frequency, time and code:

1. FDMA (Frequency Division Multiple Access)
2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)
FDMA

In frequency division multiple access (FDMA), the available bandwidth is divided into
equal frequency bands so that multiple users can send data simultaneously, each through
a different subchannel. Each station is allocated a particular band, with guard bands
between them to prevent crosstalk between subchannels and interference between stations.
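A station's frequency band under FDMA is simple arithmetic on the total bandwidth. The sketch below is hypothetical; the bandwidth figures and guard-band handling are invented for illustration:

```python
def fdma_band(station, total_bandwidth_hz, num_stations, guard_hz=0.0):
    """Return the (low, high) frequency edges of a station's subchannel.

    The total bandwidth is split into equal bands, one per station, with
    an optional guard band at the top of each to limit crosstalk.
    (Simplified sketch; carrier frequencies and filtering are omitted.)
    """
    band = total_bandwidth_hz / num_stations
    low = station * band
    high = low + band - guard_hz
    return (low, high)

# 4 stations sharing 400 kHz, with 5 kHz guard bands:
print(fdma_band(1, 400_000, 4, guard_hz=5_000))  # (100000.0, 195000.0)
```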
TDMA

Time Division Multiple Access (TDMA) is a channel access method that allows the same
frequency bandwidth to be shared across multiple stations. To avoid collisions on the
shared channel, the channel is divided into time slots, and each station transmits its
data frames only in the slot allocated to it. TDMA has a synchronization overhead:
synchronization bits are added to each slot so that every station knows where its own
time slot begins.
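The round-robin slot assignment described above can be sketched as follows. The slot length and station count are invented for illustration, and a real TDMA system would also need guard times and synchronization:

```python
def tdma_slot_owner(t, num_stations, slot_len):
    """Return which station owns the channel at time t under round-robin TDMA.

    Time is divided into fixed slots of length slot_len; slot i belongs to
    station i mod num_stations. (Simplified sketch; guard times and
    synchronization bits are omitted.)
    """
    slot_index = int(t // slot_len)
    return slot_index % num_stations

# With 4 stations and 5 ms slots, t = 12 ms falls in slot 2 -> station 2.
print(tdma_slot_owner(12.0, 4, 5.0))  # 2
```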

CDMA

Code division multiple access (CDMA) is a channel access method in which all stations
can send data over the same channel simultaneously. Each station may transmit its data
frames over the full bandwidth of the shared channel at all times; the bandwidth does not
need to be divided into time slots or frequency bands. When multiple stations send data
on the channel simultaneously, their data frames are separated by unique code sequences:
each station spreads its data with its own code before transmitting it over the shared
channel. As an analogy, consider a room full of people speaking at the same time. A pair
of listeners can still understand each other if they are the only ones speaking a given
language. Similarly, stations in the network can communicate simultaneously, because
each conversation uses a different code.
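The separation-by-code idea can be demonstrated with orthogonal chip sequences (Walsh codes). The sketch below assumes ideal, synchronized chips and no noise; the codes and bits are chosen purely for illustration:

```python
# CDMA illustration: two stations share the channel using orthogonal
# chip sequences (Walsh codes). Data bits are represented as +1/-1,
# spread by each station's code, and the chips are summed on the channel.
CODE_A = [+1, +1, +1, +1]
CODE_B = [+1, -1, +1, -1]

def spread(bit, code):
    """Spread one data bit (+1 or -1) over a chip sequence."""
    return [bit * c for c in code]

def despread(channel_chips, code):
    """Correlate the combined signal with a code to recover one bit."""
    dot = sum(x * c for x, c in zip(channel_chips, code))
    return +1 if dot > 0 else -1

# Station A sends +1 and station B sends -1, simultaneously.
channel = [a + b for a, b in zip(spread(+1, CODE_A), spread(-1, CODE_B))]
print(despread(channel, CODE_A))  # 1  (A's bit recovered)
print(despread(channel, CODE_B))  # -1 (B's bit recovered)
```

Because CODE_A and CODE_B are orthogonal (their dot product is zero), each receiver's correlation cancels the other station's signal, which is exactly why the frames stay separated on a shared channel.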
