
Error Detection Codes: Binary information is transferred from one location to another through some
communication medium. External noise can change bits from 1 to 0 or from 0 to 1. Such a change
alters the meaning of the actual message and is called an error. For reliable data transfer, there
should be error detection and correction codes. An error detection code is a binary code
that detects digital errors during transmission. To detect errors in the received message, we add some
extra (redundant) bits to the actual data.

Without the addition of redundant bits, it is not possible to detect errors in the received message.
There are three ways in which we can detect errors in the received message:

1. Parity Bit

2. CheckSum

3. Cyclic Redundancy Check (CRC)

What is a Parity Bit?

A parity bit is an extra bit added to the message bits (data-word bits) on the sender side. The data-word
bits together with the parity bit are called a codeword. The parity bit is added to the message bits on the
sender side to help in error detection at the receiver side.

Parity Bit Method

The parity bit method is explained in depth below:

A parity bit is an extra bit included in the binary message to make the total number of 1’s either odd or
even. The parity of a binary string refers to whether the number of 1’s in it is even or odd. There are two
parity systems – even and odd parity checks.

Even Parity

The total number of 1’s in the codeword should be even. So if the total number of 1’s in the data bits is
odd, then a single 1 is appended as the parity bit to make the total number of 1’s even; otherwise a 0 is
appended (if the total number of 1’s is already even). Hence, if any single-bit error occurs, the parity
check circuit will detect it at the receiver’s end.

Odd Parity

In the odd parity system, if the total number of 1’s in the given binary string (or data bits) is even, then a
1 is appended to make the total count of 1’s odd; otherwise a 0 is appended. The receiver knows in
advance whether the sender is an odd parity generator or an even parity generator.
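
As an illustration, here is a minimal Python sketch of parity generation and checking (the function names are only for this example):

```python
def parity_bit(data_bits: str, even: bool = True) -> str:
    """Return the parity bit for a string of '0'/'1' characters."""
    ones = data_bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"   # even parity
    return "1" if ones % 2 == 0 else "0"       # odd parity

def make_codeword(data_bits: str, even: bool = True) -> str:
    # Codeword = data bits followed by the parity bit.
    return data_bits + parity_bit(data_bits, even)

def check_codeword(codeword: str, even: bool = True) -> bool:
    # Valid even-parity codewords have an even number of 1's,
    # valid odd-parity codewords an odd number.
    ones = codeword.count("1")
    return ones % 2 == 0 if even else ones % 2 == 1

cw = make_codeword("1011001", even=True)     # four 1's -> parity bit '0'
print(cw, check_codeword(cw, even=True))     # 10110010 True
print(check_codeword("00110010", even=True)) # single bit flipped -> False
```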

Advantages of Parity Bit Method

• The parity bit method is a reliable way to detect any error affecting an odd number of bits at the
receiver side.
• Overhead in data transmission is low, as only one parity bit is added to the message bits at the
sender side before transmission.

• It is also a simple method for detecting odd-bit errors.

Disadvantages of Parity Bit Method

• The limitation of this method is that only errors in an odd number of bits are identified. We also
cannot determine the exact location of the error in the data bits, so error correction is
not possible with this method.

• If the number of 1’s changes (i.e., the data is corrupted) but the count remains even, an even
parity check cannot detect the error, since the count is still even; the same applies to an odd
parity check.

Two-Dimensional Parity Check

In a two-dimensional parity check, the data is arranged in a matrix. A parity check bit is calculated for
each row, which is equivalent to a simple parity check bit, and parity check bits are also calculated for
all columns; both are sent along with the data. At the receiving end, these are compared with the parity
bits calculated on the received data.

Advantages of Two-Dimensional Parity Check


• Two-Dimensional Parity Check can detect and correct all single-bit errors.
• Two-Dimensional Parity Check can detect two- or three-bit errors that occur anywhere in the
matrix.
Disadvantages of Two-Dimensional Parity Check
• Two-Dimensional Parity Check cannot correct two- or three-bit errors; it can only detect them.
• If there is an error in the parity bits themselves, this scheme will not work.
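
A minimal Python sketch of the row/column parity idea (even parity is assumed; the names are illustrative only):

```python
def two_d_parity(rows):
    """rows: list of equal-length lists of 0/1 bits.
    Returns the row parity bits and column parity bits (even parity)."""
    row_parity = [sum(r) % 2 for r in rows]
    col_parity = [sum(col) % 2 for col in zip(*rows)]
    return row_parity, col_parity

data = [
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
]
rp, cp = two_d_parity(data)
print(rp, cp)   # [1, 0, 1] [0, 0, 0, 0]

# Receiver side: recompute both sets of parity bits from the received matrix
# and compare with the transmitted ones. A single flipped bit shows up as
# exactly one failing row parity and one failing column parity, and their
# intersection pinpoints (and so corrects) the erroneous bit.
```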

Checksum
Checksum error detection is a method used to identify errors in transmitted data. The process involves
dividing the data into equally sized segments and using 1’s complement arithmetic to calculate the sum
of these segments. The complement of this sum (the checksum) is then sent along with the data to the
receiver. At the receiver’s end, the same process is repeated, and if the complemented result is all
zeros, the data is taken to be correct.

Checksum – Operation at Sender’s Side


• Firstly, the data is divided into k segments each of m bits.
• On the sender’s end, the segments are added using 1’s complement arithmetic to get the sum.
The sum is complemented to get the checksum.
• The checksum segment is sent along with the data segments.
Checksum – Operation at Receiver’s Side
• At the receiver’s end, all received segments are added using 1’s complement arithmetic to get
the sum. The sum is complemented.
• If the result is zero, the received data is accepted; otherwise, it is discarded.
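
A minimal Python sketch of a 1’s complement checksum over 8-bit segments (the segment size and function names are assumptions for this example):

```python
def ones_complement_sum(segments, bits=8):
    """Add segments with end-around carry (1's complement arithmetic)."""
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> bits)   # fold the carry back in
    return total & mask

def make_checksum(segments, bits=8):
    # Checksum = complement of the 1's complement sum of the segments.
    return (~ones_complement_sum(segments, bits)) & ((1 << bits) - 1)

def verify(segments_with_checksum, bits=8):
    # Receiver: add everything (including the checksum) and complement;
    # an all-zero result means the data is accepted.
    s = ones_complement_sum(segments_with_checksum, bits)
    return (~s) & ((1 << bits) - 1) == 0

data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]
chk = make_checksum(data)
print(bin(chk))               # the checksum segment sent with the data
print(verify(data + [chk]))   # True when no error was introduced
```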

Cyclic Redundancy Check (CRC)

• Unlike the checksum scheme, which is based on addition, CRC is based on binary division.

• In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are appended to the
end of the data unit so that the resulting data unit becomes exactly divisible by a second,
predetermined binary number.

• At the destination, the incoming data unit is divided by the same number. If at this step there
is no remainder, the data unit is assumed to be correct and is therefore accepted.

• A remainder indicates that the data unit has been damaged in transit and therefore must be
rejected.
CRC Working

We are given a dataword of length n and a divisor of length k.


Step 1: Append (k-1) zeros to the original message.
Step 2: Perform modulo-2 division of the extended message by the divisor.
Step 3: The remainder of the division is the CRC.
Step 4: Codeword = original data followed by the CRC (the appended zeros are replaced by the CRC).
Note:
• The CRC is k-1 bits long.
• Length of the codeword = n + k - 1 bits.
Example: Let the data to be sent be 1010000 and let the divisor, given as the polynomial x^3 + 1, be
1001. The CRC method is illustrated below.
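
A minimal Python sketch of the modulo-2 division used to compute the CRC, applied to the example above (the function names are illustrative):

```python
def mod2div(dividend: str, divisor: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder bits."""
    k = len(divisor)
    bits = list(dividend)
    for i in range(len(bits) - k + 1):
        if bits[i] == "1":
            for j in range(k):
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
    return "".join(bits[-(k - 1):])

def crc(data: str, divisor: str) -> str:
    # Step 1: append k-1 zeros; Steps 2-3: remainder of modulo-2 division.
    return mod2div(data + "0" * (len(divisor) - 1), divisor)

data, divisor = "1010000", "1001"     # 1001 corresponds to x^3 + 1
r = crc(data, divisor)
codeword = data + r                   # Step 4: codeword = data + CRC
print(r, codeword)                    # 011 1010000011

# Receiver check: dividing the received codeword by the same divisor
# leaves an all-zero remainder when no error has occurred.
print(mod2div(codeword, divisor))     # 000
```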

FRAMING
• Frames are the units of digital transmission, particularly in computer networks.
• The data link layer divides the stream of bits received from the network layer into manageable
data units called frames.

• At the data link layer, the frame carries the message from the sender to the receiver by including
the sender’s and receiver’s addresses. The advantage of using frames is that data
is broken up into recoverable chunks that can easily be checked for corruption.
• Framing is an important aspect of data link layer protocol design because it allows the
transmission of data to be organized and controlled, ensuring that the data is delivered
accurately and efficiently.
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size, so there is no need to provide boundaries to the
frame; the length of the frame itself acts as a delimiter.
• Drawback: It suffers from internal fragmentation if the data size is less than the frame size
• Solution: Padding
2. Variable size: In this case, there is a need to define the end of the frame as well as the beginning
of the next frame, to distinguish between frames. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the length of the frame.
Used in Ethernet (802.3). The problem with this is that sometimes the length field might get
corrupted.
2. End Delimiter (ED) – We can introduce an ED (a special bit pattern) to indicate the end of the frame.
Used in Token Ring. The problem with this is that the ED can also occur in the data. This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If the data contains the ED,
an extra byte is stuffed into the data to differentiate it from the ED.
Disadvantage – It is a costly and largely obsolete method.
2. Bit Stuffing: Let ED = 01111 and suppose the data = 01111.
–> The sender stuffs a bit to break the pattern, i.e., here it inserts a 0 into the data, giving 011101.
–> The receiver receives the frame.
–> If the data contains 011101, the receiver removes the stuffed 0 and reads the original data.

Examples:
• If Data –> 011100011110 and ED –> 0111 then, find data after bit stuffing.
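
A small Python sketch of one bit-stuffing convention consistent with the example above: whenever the output so far ends with the delimiter minus its last bit, a 0 is stuffed to break the pattern (generalizing the rule this way is an assumption; the text only gives single examples):

```python
def bit_stuff(data: str, ed: str) -> str:
    """Insert a '0' whenever the stuffed stream ends with ED minus its
    last bit, so the delimiter can never appear inside the data."""
    prefix = ed[:-1]              # e.g. ED = 01111 -> prefix = 0111
    out = ""
    for bit in data:
        out += bit
        if out.endswith(prefix):
            out += "0"            # stuffed bit
    return out

def bit_unstuff(stuffed: str, ed: str) -> str:
    """Receiver side: drop the 0 that follows every prefix occurrence."""
    prefix = ed[:-1]
    out, i = "", 0
    while i < len(stuffed):
        out += stuffed[i]
        if stuffed[:i + 1].endswith(prefix):
            i += 1                # skip the stuffed 0
        i += 1
    return out

print(bit_stuff("01111", "01111"))            # 011101, as in the example
s = bit_stuff("011100011110", "0111")         # the exercise data above
print(s, bit_unstuff(s, "0111") == "011100011110")   # round-trip check
```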

Flow Control and Error Control protocols:


Flow Control
o It is a set of procedures that tells the sender how much data it can transmit before the data
overwhelms the receiver.
o The receiving device has limited speed and limited memory to store the data. Therefore, the
receiving device must be able to inform the sending device to stop the transmission
temporarily before the limits are reached.
o It requires a buffer, a block of memory for storing the information until it is processed.
Two methods have been developed to control the flow of data:
o Stop-and-wait
o Sliding window

Flow control refers to a set of procedures used to restrict the amount of data that the sender
can send before waiting for acknowledgment.
Stop-and-wait
o In the Stop-and-wait method, the sender waits for an acknowledgement after every frame it
sends.
o Only when an acknowledgement has been received is the next frame sent. This process of
alternately sending a frame and waiting continues until the sender transmits the EOT (End of
Transmission) frame.
Advantage of Stop-and-wait
The Stop-and-wait method is simple as each frame is checked and acknowledged before the
next frame is sent.
Disadvantage of Stop-and-wait
The Stop-and-wait technique is inefficient because each frame must travel all the way to
the receiver, and an acknowledgement must travel all the way back, before the next frame is sent. Each
frame sent and received uses the entire time needed to traverse the link.
Sliding Window
o The Sliding Window is a method of flow control in which a sender can transmit several
frames before getting an acknowledgement.
o In Sliding Window control, multiple frames can be sent one after another, so the
capacity of the communication channel can be utilized efficiently.
o A single ACK can acknowledge multiple frames.
o Sliding Window refers to imaginary boxes at both the sender and receiver end.
o The window can hold the frames at either end, and it provides the upper limit on the number
of frames that can be transmitted before the acknowledgement.
o Frames can be acknowledged even when the window is not completely filled.
o The window has a specific size, and the frames in it are numbered modulo-n, which means they are
numbered from 0 to n-1. For example, if n = 8, the frames are numbered
0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1........
o The size of the window is n-1. Therefore, a maximum of n-1 frames can be sent
before an acknowledgement.
o When the receiver sends the ACK, it includes the number of the next frame that it wants to
receive. For example, to acknowledge the string of frames ending with frame number 4, the
receiver will send the ACK containing the number 5. When the sender sees the ACK with the
number 5, it knows that the frames from 0 through 4 have been received.

Sender Window

o At the beginning of a transmission, the sender window contains n-1 frames; as frames
are sent out, the left boundary moves inward, shrinking the size of the window. For example,
if the size of the window is w and three frames are sent out, then the number of frames left
in the sender window is w-3.

o Once the ACK has arrived, the sender window expands by the number of frames
acknowledged by that ACK.

o For example, suppose the size of the window is 7 and frames 0 through 4 have been sent out with no
acknowledgement yet; the sender window then contains only two frames, i.e., 5 and
6. Now, if an ACK arrives with the number 4, which means that frames 0 through 3 have arrived
undamaged, the sender window is expanded to include the next four frames. Therefore,
the sender window contains six frames (5,6,7,0,1,2).

Receiver Window

o At the beginning of transmission, the receiver window does not contain n frames; it
contains n-1 spaces for frames.
o When a new frame arrives, the size of the window shrinks.

o The receiver window does not represent the number of frames received; it represents the
number of frames that may still be received before an ACK is sent. For example, if the size of the
window is w and three frames are received, then the number of spaces available in the window
is (w-3).

o Once the acknowledgement is sent, the receiver window expands by the number equal to the
number of frames acknowledged.

o Suppose the size of the window is 7, which means that the receiver window contains seven spaces
for seven frames. If one frame is received, then the receiver window shrinks by moving
the boundary from 0 to 1. In this way, the window shrinks one space at a time, so it then contains
six spaces. If frames 0 through 4 have been received, then the window contains two spaces
before an acknowledgement is sent.
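
A minimal sketch, assuming the modulo-n numbering described above, of how a sender window of size n-1 shrinks on sends and expands on ACKs (the class and method names are illustrative):

```python
class SenderWindow:
    """Toy sliding-window bookkeeping: window size n-1, sequence numbers mod n."""
    def __init__(self, n=8):
        self.n = n                 # sequence numbers 0 .. n-1
        self.base = 0              # oldest unacknowledged frame
        self.next_seq = 0          # next frame to send

    def outstanding(self):
        return (self.next_seq - self.base) % self.n

    def can_send(self):
        # At most n-1 frames may be outstanding before an ACK arrives.
        return self.outstanding() < self.n - 1

    def send(self):
        assert self.can_send()
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % self.n
        return seq

    def ack(self, next_expected):
        # The ACK carries the number of the next frame the receiver wants;
        # every frame before it slides out of the window.
        self.base = next_expected % self.n

w = SenderWindow(n=8)
print([w.send() for _ in range(5)])    # frames 0..4 sent, window shrinks
w.ack(5)                               # ACK 5 => frames 0..4 received
print(w.can_send(), w.outstanding())   # True 0: window fully reopened
```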

Error Control:

Error control in the data link layer is based on automatic repeat request, which is the retransmission
of data.
NOISELESS

Simplest Protocol

• There is no need for flow control in this scheme.


• The data link layer at the sender site gets data from its network layer, makes a frame out of
the data, and sends it.
• The data link layer at the receiver site receives a frame from its physical layer, extracts data
from the frame, and delivers the data to its network layer.
• The data link layers of the sender and receiver provide transmission services for their network
layers. The data link layers use the services provided by their physical layers.

Stop and Wait Protocol –


If data frames arrive at the receiver site faster than they can be processed, the frames must
be stored until they are used. Generally, the receiver does not have enough storage space,
especially if it is receiving data from multiple sources.
• Design:
Comparing the Stop-and-wait protocol design model with the Simplest protocol design
model, we can see the traffic on the forward channel (from sender to receiver) and on the
reverse channel. At any time, there is either one data frame on the forward channel or one
ACK frame on the reverse channel. We therefore only require a half-duplex link.
• Flow Diagram:
The flow diagram shows an example of communication using the Stop-and-wait protocol. It is still
straightforward: the sender sends a frame and waits for a response from the receiver. When
the ACK (acknowledgement) arrives from the receiver side, the sender sends the next frame, and so on.
Keep in mind that for two frames the sender is involved in four events and the
receiver is involved in two events.

NOISY CHANNEL
Stop-and-Wait Automatic Repeat Request
Stop-and-Wait ARQ (Automatic Repeat Request) performs both error control and flow
control.

1. Timeout
Timeout refers to the duration for which the sender waits for an acknowledgment (ACK) from
the receiver after transmitting a data packet. If the sender does not receive an ACK within this
timeout period, it assumes that the frame was lost or corrupted and retransmits the frame.

2. Sequence Number (Data)


In Stop-and-Wait ARQ, the sender assigns sequence numbers to each data frame it sends. This
allows the receiver to identify and acknowledge each frame individually, ensuring reliable
delivery of data packets. After sending a frame, the sender waits for an acknowledgment
before sending the next frame.
3. Sequence Number (Acknowledgement)
Similarly, sequence numbers are also used in acknowledgments (ACKs) sent by the receiver to
acknowledge received data frames. When the receiver successfully receives a data frame, it
sends an ACK back to the sender, indicating the sequence number of the next expected frame.
The sender uses this ACK to determine whether the transmission was successful and whether
it can proceed to send the next frame.

Working of Stop and Wait ARQ


• Sender A sends a data frame or packet with sequence number 0.
• Receiver B, after receiving the data frame, sends an acknowledgement with sequence
number 1 (the sequence number of the next expected data frame or packet).
There is only a one-bit sequence number, which implies that both sender and receiver have a
buffer for one frame or packet only.
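
A minimal Python sketch of this alternating-bit behaviour over an unreliable channel (the channel model and names are assumptions for illustration; timeouts are simulated by a retry loop rather than a real timer):

```python
import random

def unreliable_send(frame, loss_prob=0.3):
    """Toy channel: returns the frame, or None if it was 'lost'."""
    return None if random.random() < loss_prob else frame

def stop_and_wait_arq(packets):
    seq = 0                                   # sender's one-bit sequence number
    expected = 0                              # receiver's expected sequence number
    delivered = []
    for data in packets:
        while True:
            frame = unreliable_send((seq, data))
            if frame is None:
                continue                      # frame lost: timeout, retransmit
            r_seq, r_data = frame
            if r_seq == expected:             # new frame: deliver it
                delivered.append(r_data)
                expected ^= 1
            ack = unreliable_send(expected)   # ACK carries next expected number
            if ack is None:
                continue                      # ACK lost: timeout, retransmit
            if ack == seq ^ 1:                # frame acknowledged
                seq ^= 1
                break
    return delivered

print(stop_and_wait_arq(["f0", "f1", "f2"]))  # ['f0', 'f1', 'f2']
```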

Advantages of Stop and Wait ARQ


• Simple Implementation: Stop and Wait ARQ is a simple protocol that is easy to implement in
both hardware and software. It does not require complex algorithms or hardware
components, making it an inexpensive and efficient option.
• Error Detection: Stop and Wait ARQ detects errors in the transmitted data by using checksums
or cyclic redundancy checks (CRC). If an error is detected, the receiver sends a negative
acknowledgment (NAK) to the sender, indicating that the data needs to be retransmitted.
• Reliable: Stop and Wait ARQ ensures that the data is transmitted reliably and in order. The
receiver cannot move on to the next data packet until it receives the current one. This ensures
that the data is received in the correct order and eliminates the possibility of data corruption.
• Flow Control: Stop and Wait ARQ can be used for flow control, where the receiver can control
the rate at which the sender transmits data. This is useful in situations where the receiver has
limited buffer space or processing power.
• Backward Compatibility: Stop and Wait ARQ is compatible with many existing systems and
protocols, making it a popular choice for communication over unreliable channels.
Disadvantages of Stop and Wait ARQ
• Low Efficiency: Stop and Wait ARQ has low efficiency as it requires the sender to wait for an
acknowledgment from the receiver before sending the next data packet. This results in a
low data transmission rate, especially for large data sets.
• High Latency: Stop and Wait ARQ introduces additional latency in the transmission of data, as
the sender must wait for an acknowledgment before sending the next packet. This can be a
problem for real-time applications such as video streaming or online gaming.
• Limited Bandwidth Utilization: Stop and Wait ARQ does not utilize the available bandwidth
efficiently, as the sender can transmit only one data packet at a time. This results in
underutilization of the channel, which can be a problem in situations where the available
bandwidth is limited.
• Limited Error Recovery: Stop and Wait ARQ has limited error recovery capabilities. If a data
packet is lost or corrupted, the sender must retransmit the entire packet, which can be time-
consuming and can result in further delays.
• Vulnerable to Channel Noise: Stop and Wait ARQ is vulnerable to channel noise, which can
cause errors in the transmitted data. This can result in frequent retransmissions and can
impact the overall efficiency of the protocol.

Go-Back-N Automatic Repeat Request

Go-Back-N ARQ is a specific instance of the Automatic Repeat Request (ARQ)
protocol in which the sending process continues to send a number of frames, as specified by the
window size, without waiting for an acknowledgement (ACK) packet from the receiver.

The sender keeps a copy of each frame until the arrival of acknowledgement.
This protocol is a practical approach to the sliding window.
• In Go-Back-N ARQ, the size of the sender window is N and the size of the receiver window is always 1.
• This protocol makes use of cumulative acknowledgements: the receiver
maintains an acknowledgement timer, and whenever the receiver receives a new frame from the
sender it starts a new acknowledgement timer. When the timer expires, the receiver
sends a cumulative acknowledgement for all the frames that are unacknowledged by the
receiver at that moment.
• It is important to note that a new acknowledgement timer only starts after the receipt of
a new frame; it does not start after the expiry of the old acknowledgement timer.
• If the receiver receives a corrupted frame, it silently discards that corrupted frame, and
the correct frame is retransmitted by the sender after the timeout timer expires. Discarding
silently means simply rejecting the frame and not taking any further action for it.
• If, after the expiry of the acknowledgement timer, there is only one frame
left to be acknowledged, the receiver sends an individual acknowledgement
for that frame.
• If the receiver receives an out-of-order frame, it simply discards all such frames.
• If the sender does not receive any acknowledgement, the entire window of
frames is retransmitted.
• Using the Go-Back-N ARQ protocol leads to the retransmission of the lost frames after the
expiry of the timeout timer.
The Need for Go-Back-N ARQ
This protocol is used to send more than one frame at a time. With the help of Go-Back-N ARQ,
there is a reduction in the waiting time of the sender.
With the help of the Go-Back-N ARQ protocol, the efficiency of the transmission increases.
Send (sliding) window for Go-Back-N ARQ
The range of sequence numbers that concerns the sender is known as the send sliding window
for Go-Back-N ARQ. It is an imaginary box that covers the sequence numbers of the data
frames that can be in transit.
The size of this imaginary box is 2^m - 1, and it is described by three variables: Sf (send window,
first outstanding frame), Sn (send window, next frame to be sent), and
Ssize (send window, size).
• The sender can transmit N frames before receiving the ACK frame.
• The size of the send sliding window is N.
• The copy of sent data is maintained in the sent buffer of the sender until all the sent packets
are acknowledged.
• If the timeout timer runs out then the sender will resend all the packets.
• Once the data gets acknowledged by the receiver, that particular data is removed
from the buffer.
Whenever a valid acknowledgement arrives then the send window can slide one or more slots.
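
A minimal sketch of the sender-side behaviour described above, using the Sf/Sn variables (the event handling and names are illustrative assumptions, not a full implementation):

```python
class GoBackNSender:
    """Toy Go-Back-N sender: window size N, sequence numbers mod 2**m."""
    def __init__(self, m=3):
        self.modulus = 2 ** m
        self.N = self.modulus - 1      # send window size, less than 2^m
        self.Sf = 0                    # first outstanding frame
        self.Sn = 0                    # next frame to be sent
        self.buffer = {}               # copies kept until acknowledged

    def can_send(self):
        return (self.Sn - self.Sf) % self.modulus < self.N

    def send(self, data):
        assert self.can_send()
        self.buffer[self.Sn] = data    # keep a copy for possible resend
        frame = (self.Sn, data)
        self.Sn = (self.Sn + 1) % self.modulus
        return frame

    def on_ack(self, ack_no):
        # Cumulative ACK: every frame before ack_no is acknowledged
        # and slides out of the window (and out of the buffer).
        while self.Sf != ack_no:
            self.buffer.pop(self.Sf, None)
            self.Sf = (self.Sf + 1) % self.modulus

    def on_timeout(self):
        # Resend the entire window of outstanding frames.
        seq, frames = self.Sf, []
        while seq != self.Sn:
            frames.append((seq, self.buffer[seq]))
            seq = (seq + 1) % self.modulus
        return frames

s = GoBackNSender(m=3)                 # window size 7
for d in ["f0", "f1", "f2", "f3"]:
    s.send(d)
s.on_ack(2)                            # frames 0 and 1 acknowledged
print(s.Sf, s.Sn, s.on_timeout())      # 2 4 [(2, 'f2'), (3, 'f3')]
```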

Sender Window Size


As noted above, the sender window size is N, and the value of N must be greater than
1.
If the value of N is equal to 1, then this protocol becomes the Stop-and-Wait protocol.
Receive (sliding) window for Go-Back-N ARQ
The range of sequence numbers that concerns the receiver is called the receive sliding window.
• The receive window is an abstract concept: an imaginary box whose size is
1 and which has a single variable Rn.
• The window slides when a correct frame arrives; the sliding occurs one slot at a time.
• The receiver always looks for a specific frame to arrive in a specific order.
• Any frame that arrives out of order at the receiver side is discarded and thus needs to be
resent by the sender.
• If a frame arrives at the receiver safely and in order, then the receiver sends an ACK
back to the sender.
• The silence of the receiver causes the timer of the unacknowledged frame to expire.
Design of Go-Back-N ARQ
With the help of Go-Back-N ARQ, multiple frames can be in transit in the forward direction and
multiple acknowledgements can be in transit in the reverse direction. The idea of this protocol is
similar to Stop-and-Wait ARQ, with the difference that the window of Go-Back-N
ARQ allows us to have multiple frames in transit, as there are many slots in the send
window.

Window size for Go-Back-N ARQ


In Go-Back-N ARQ, the size of the send window must always be less than 2^m, and the size
of the receiver window is always 1.

Flow Diagram
Advantages
Given below are some of the benefits of using the Go-Back-N ARQ protocol:
• The efficiency of this protocol is higher than that of Stop-and-Wait.
• The waiting time of the sender is quite low in this protocol.
• With the help of this protocol, the timer can be set for many frames.
• Also, the sender can send many frames at a time.
• A single ACK frame can acknowledge more than one frame.
Disadvantages
Given below are some drawbacks:
• Timeout timer runs at the receiver side only.
• The transmitter needs to store the last N packets.
• The retransmission of many error-free packets follows an erroneous packet.
Selective-Repeat Automatic Repeat Request (ARQ)

• Selective Repeat Automatic Repeat Request (ARQ) is one of the techniques that a data link
layer may deploy to control errors.
• It is also a sliding window protocol and is used for error detection and control in the
data link layer.
• In Selective Repeat, the sender sends several frames, as specified by the window size, without
waiting for individual acknowledgements from the receiver, as in Go-Back-N
ARQ.
• In the Selective Repeat protocol, a retransmitted frame may be received out of sequence.
• In Selective Repeat ARQ only the lost or erroneous frames are retransmitted, whereas correct
frames are received and buffered.
• The receiver, while keeping track of sequence numbers, buffers the frames in memory and
sends a NACK only for frames that are missing or damaged. The sender will send/retransmit a
packet only if a NACK is received for it.

Explanation
Step 1 − Frame 0 is sent from sender to receiver and a timer is set for it.
Step 2 − Without waiting for an acknowledgement from the receiver, another frame, Frame 1, is
sent by the sender, with its own timer.
Step 3 − In the same way, Frame 2 is also sent to the receiver, with its own timer, without
waiting for the previous acknowledgement.
Step 4 − When the sender receives ACK0 from the receiver within Frame 0’s timer, that timer
is closed and the next frame, Frame 3, is sent.
Step 5 − When the sender receives ACK1 from the receiver within Frame 1’s timer,
that timer is closed and the next frame, Frame 4, is sent.
Step 6 − If the sender does not receive ACK2 from the receiver within the time slot, it
declares a timeout for Frame 2 and resends only Frame 2, because it assumes that Frame 2
may have been lost or damaged.
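
A minimal sketch of the receiver-side buffering behaviour described above: out-of-order frames are kept in memory and only the missing frame is reported (the NACK-style interface and names are illustrative assumptions):

```python
class SelectiveRepeatReceiver:
    """Toy Selective Repeat receiver: buffers out-of-order frames and
    delivers them in sequence once the missing frame arrives."""
    def __init__(self):
        self.expected = 0          # next in-order sequence number
        self.buffer = {}           # out-of-order frames kept in memory
        self.delivered = []

    def on_frame(self, seq, data):
        if seq != self.expected:
            self.buffer[seq] = data
            return ("NACK", self.expected)   # ask only for the missing frame
        # In-order frame: deliver it plus any buffered successors.
        self.delivered.append(data)
        self.expected += 1
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return ("ACK", self.expected)

rx = SelectiveRepeatReceiver()
print(rx.on_frame(0, "f0"))    # ('ACK', 1)
print(rx.on_frame(2, "f2"))    # frame 1 missing -> ('NACK', 1), f2 buffered
print(rx.on_frame(1, "f1"))    # retransmitted f1 -> ('ACK', 3)
print(rx.delivered)            # ['f0', 'f1', 'f2']
```
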
HDLC (High-Level Data Link Control)
High-level Data Link Control (HDLC) is a group of communication protocols of the data link layer for
transmitting data between network points or nodes. Since it is a data link protocol, data is organized
into frames. A frame is transmitted via the network to the destination that verifies its successful arrival.
It is a bit-oriented protocol that is applicable to both point-to-point and multipoint
communications.

Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous balanced
mode.

• Normal Response Mode (NRM) − Here there are two types of stations: a primary station that
sends commands and secondary stations that respond to received commands. It is used for
both point-to-point and multipoint communications.

• Asynchronous Balanced Mode (ABM) − Here the configuration is balanced, i.e. each station
can both send commands and respond to commands. It is used only for point-to-point
communications.
HDLC Frame

HDLC is a bit-oriented protocol in which each frame contains up to six fields. The structure varies
according to the type of frame. The fields of an HDLC frame are −

• Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.
• Address − It contains the address of the receiver. If the frame is sent by the primary station, it
contains the address(es) of the secondary station(s). If it is sent by the secondary station, it
contains the address of the primary station. The address field may be from 1 byte to several
bytes.
• Control − It is 1 or 2 bytes containing flow and error control information.
• Payload − This carries the data from the network layer. Its length may vary from one network
to another.
• FCS − It is a 2-byte or 4-byte frame check sequence used for error detection. The standard code
used is the CRC (cyclic redundancy check).

Types of HDLC Frames

There are three types of HDLC frames. The type of frame is determined by the control field of the
frame.
• I-frame − I-frames or Information frames carry user data from the network layer. They also
include flow and error control information that is piggybacked on user data. The first bit of
the control field of an I-frame is 0.
• S-frame − S-frames or Supervisory frames do not contain an information field. They are used for
flow and error control when piggybacking is not required. The first two bits of the control field of
an S-frame are 10.
• U-frame − U-frames or Unnumbered frames are used for miscellaneous functions,
like link management. A U-frame may contain an information field, if required. The first two bits of
the control field of a U-frame are 11.
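
A small sketch of classifying an HDLC frame from the leading bits of its control field, following the rules above (the control field is treated here as a bit-string written with the "first" bit leftmost, which is an assumption about bit ordering; the helper name is illustrative):

```python
def hdlc_frame_type(control_bits: str) -> str:
    """Classify an HDLC frame from its control field bits."""
    if control_bits[0] == "0":
        return "I-frame"             # first bit 0 -> Information frame
    if control_bits[:2] == "10":
        return "S-frame"             # first two bits 10 -> Supervisory frame
    return "U-frame"                 # first two bits 11 -> Unnumbered frame

for bits in ("00101100", "10010001", "11000011"):
    print(bits, "->", hdlc_frame_type(bits))
# 00101100 -> I-frame, 10010001 -> S-frame, 11000011 -> U-frame
```
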
Multiple Access Protocols

Multiple Access Protocols are methods used in computer networks to control how data is transmitted
when multiple devices are trying to communicate over the same network. These protocols ensure that
data packets are sent and received efficiently, without collisions or interference. They help manage the
network traffic so that all devices can share the communication channel smoothly and effectively.

Random Access Protocol


In this class of protocols, all stations have the same priority, that is, no station has more priority than
another station. Any station can send data depending on the medium’s state (idle or busy). It has two
features:
• There is no fixed time for sending data
• There is no fixed sequence of stations sending data
The Random access protocols are further subdivided as:
ALOHA
It was designed for wireless LANs but is also applicable to any shared medium. In this scheme, multiple
stations can transmit data at the same time, which can lead to collisions and the data being
garbled.

Pure ALOHA
When a station sends data, it waits for an acknowledgement. If the acknowledgement
doesn’t come within the allotted time, the station waits for a random amount of time
called the back-off time (Tb) and re-sends the data. Since different stations wait for different
amounts of time, the probability of further collision decreases.
Vulnerable time = 2 × frame transmission time
Throughput = G exp{-2G}
Maximum throughput = 0.184 for G = 0.5

Slotted ALOHA
It is similar to pure ALOHA, except that we divide time into slots and sending of data is allowed
only at the beginning of these slots. If a station misses the allowed time, it must wait for
the next slot. This reduces the probability of collision.
Vulnerable time = frame transmission time
Throughput = G exp{-G}
Maximum throughput = 0.368 for G = 1
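
A quick numeric check of the two throughput formulas, S = G·exp(-2G) for pure ALOHA and S = G·exp(-G) for slotted ALOHA:

```python
import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)        # S = G * e^(-2G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)            # S = G * e^(-G)

print(round(pure_aloha_throughput(0.5), 3))     # 0.184, maximum at G = 0.5
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368, maximum at G = 1
```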
Carrier Sense Multiple Access (CSMA)
Carrier Sense Multiple Access (CSMA) is a method used in computer networks to manage how
devices share a communication channel to transfer data between two devices. In this
protocol, each device first senses the channel before sending data. If the channel is busy,
the device waits until it is free. This helps reduce collisions, where two devices send data at
the same time, ensuring smoother communication across the network. CSMA is commonly
used in technologies like Ethernet and Wi-Fi.
This method was developed to decrease the chances of collisions when two or more stations
start sending their signals over the data link layer. Carrier Sense multiple access requires that
each station first check the state of the medium before sending.
What is Vulnerable Time in CSMA?
Vulnerable time is the short period when there’s a chance that two devices on a network
might send data at the same time and cause a collision.
In networks like Wi-Fi or Ethernet, devices first listen to see if the network is clear before
sending their data. However, after a device checks and finds the network free, there’s a
small delay before it starts sending data. During this time, another device might also try to
send data, which can lead to both data packets crashing into each other (a collision). This
period, where a collision might happen, is called the vulnerable time.
Vulnerable time = Propagation time (Tp)

The persistence methods can be applied to help the station take action when the channel is
busy/idle.
Types of CSMA Protocol
There are two main types of Carrier Sense Multiple Access (CSMA) protocols, each designed
to handle how devices manage potential data collisions on a shared communication channel.
These types differ based on how they respond to the detection of a busy network:
1. CSMA/CD
2. CSMA/CA
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
In this method, a station monitors the medium after it sends a frame to see if the
transmission was successful. If it was successful, the transmission is finished; if not, the frame is
sent again.

In the diagram, A starts sending the first bit of its frame at t1, and since C sees the channel idle
at t2, C starts sending its frame at t2. C detects A’s frame at t3 and aborts its transmission. A
detects C’s frame at t4 and aborts its transmission. The transmission time for C’s frame is,
therefore, t3-t2 and for A’s frame it is t4-t1.
So, the frame transmission time (Tfr) should be at least twice the maximum propagation
time (Tp). This can be deduced by considering the case when the two stations involved in a collision
are the maximum distance apart.
• Throughput and Efficiency: The throughput of CSMA/CD is much greater than pure or
slotted ALOHA.
• For the 1-persistent method, throughput is 50% when G=1.
• For the non-persistent method, throughput can go up to 90%.
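
A quick numeric illustration of the Tfr ≥ 2·Tp condition: given a link's propagation delay and data rate, the minimum frame size that guarantees collisions are detected (the example values are assumptions):

```python
def min_frame_size(bandwidth_bps, distance_m, propagation_speed=2e8):
    """Smallest frame (in bits) satisfying Tfr >= 2 * Tp for CSMA/CD."""
    tp = distance_m / propagation_speed   # one-way propagation delay (s)
    tfr = 2 * tp                          # required frame transmission time
    return bandwidth_bps * tfr            # bits = rate * time

# Example: a 10 Mbps link that is 2500 m long (values assumed for illustration).
print(min_frame_size(10e6, 2500))         # 250.0 bits
```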

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)


The basic idea behind CSMA/CA is that the station should be able to receive while
transmitting to detect a collision from different stations. In wired networks, if a collision has
occurred then the energy of the received signal almost doubles, and the station can sense
the possibility of a collision. In the case of wireless networks, most of the energy is used for
transmission, and the energy of the received signal increases by only 5-10% if a collision
occurs, so it cannot be used by the station to sense a collision. Therefore CSMA/CA has been
specially designed for wireless networks.
CSMA/CA uses three types of strategies:
1. InterFrame Space (IFS): When a station finds the channel busy, it keeps sensing the channel;
when the station finds the channel to be idle, it waits for a period of time called the IFS. IFS can
also be used to define the priority of a station or a frame: the higher the IFS, the lower the priority.
2. Contention Window: It is the amount of time divided into slots. A station that is ready to send
frames chooses a random number of slots as wait time.
3. Acknowledgments: The positive acknowledgments and time-out timer can help guarantee a
successful transmission of the frame.
Characteristics of CSMA/CA
1. Carrier Sense: The device listens to the channel before transmitting, to ensure that it is not
currently in use by another device.
2. Multiple Access: Multiple devices share the same channel and can transmit simultaneously.
3. Collision Avoidance: If two or more devices attempt to transmit at the same time, a collision
occurs. CSMA/CA uses random backoff time intervals to avoid collisions.
4. Acknowledgment (ACK): After successful transmission, the receiving device sends an ACK to
confirm receipt.
5. Fairness: The protocol ensures that all devices have equal access to the channel and no single
device monopolizes it.
6. Binary Exponential Backoff: If a collision occurs, the device waits for a random period of time
before attempting to retransmit. The backoff time increases exponentially with each
retransmission attempt.
7. Interframe Spacing: The protocol requires a minimum amount of time between transmissions
to allow the channel to be clear and reduce the likelihood of collisions.
8. RTS/CTS Handshake: In some implementations, a Request-To-Send (RTS) and Clear-To-Send
(CTS) handshake is used to reserve the channel before transmission. This reduces the chance
of collisions and increases efficiency.
9. Wireless Network Quality: The performance of CSMA/CA is greatly influenced by the quality
of the wireless network, such as the strength of the signal, interference, and network
congestion.
10. Adaptive Behavior: CSMA/CA can dynamically adjust its behavior in response to changes in
network conditions, ensuring the efficient use of the channel and avoiding congestion.
Overall, CSMA/CA balances the need for efficient use of the shared channel with the need to
avoid collisions, leading to reliable and fair communication in a wireless network.
Types of CSMA Access Modes
There are four access modes available in CSMA. These are also referred to as four different types of
CSMA protocols, which decide when to start sending data across the shared medium.
1. 1-Persistent: It senses the shared channel first and delivers the data right away if the channel
is idle. If not, it must wait and continuously track for the channel to become idle and then
broadcast the frame without condition as soon as it does. It is an aggressive transmission
algorithm.
2. Non-Persistent: It first assesses the channel before transmitting data; if the channel is idle,
the node transmits data right away. If not, the station must wait for an arbitrary amount of
time (not continuously), and when it discovers the channel is empty, it sends the frames.
3. P-Persistent: It combines the 1-Persistent and Non-Persistent modes. Each node
observes the channel as in the 1-Persistent mode, and if the channel is idle, it sends a frame with
probability p. If the frame is not transmitted, the station waits for the next time slot
(with probability q = 1-p) and repeats the process.
4. O-Persistent: A supervisory node gives each node a transmission order. Nodes wait for their
time slot according to their allocated transmission sequence when the transmission medium
is idle.
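
A small sketch of the p-persistent decision rule described above (the channel-state check here is an illustrative stub):

```python
import random

def p_persistent_attempt(channel_idle, p=0.3):
    """One p-persistent CSMA decision: transmit with probability p when the
    channel is idle, otherwise defer to the next slot with probability 1-p."""
    if not channel_idle():
        return "keep sensing"        # channel busy: continue sensing
    if random.random() < p:
        return "transmit"            # send the frame in this slot
    return "wait for next slot"      # defer with probability q = 1 - p

# Illustrative stub: pretend the channel happens to be idle 70% of the time.
print(p_persistent_attempt(lambda: random.random() < 0.7))
```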
Advantages of CSMA
• Increased Efficiency: CSMA ensures that only one device communicates on the network at a
time, reducing collisions and improving network efficiency.
• Simplicity: CSMA is a simple protocol that is easy to implement and does not require complex
hardware or software.
• Flexibility: CSMA is a flexible protocol that can be used in a wide range of network
environments, including wired and wireless networks.
• Low cost: CSMA does not require expensive hardware or software, making it a cost-effective
solution for network communication.
Disadvantages of CSMA
• Limited Scalability: CSMA is not a scalable protocol and can become inefficient as the number
of devices on the network increases.
• Delay: In busy networks, the requirement to sense the medium and wait for an available
channel can result in delays and increased latency.
• Limited Reliability: CSMA can be affected by interference, noise, and other factors, resulting
in unreliable communication.
• Vulnerability to Attacks: CSMA can be vulnerable to certain types of attacks, such as jamming
and denial-of-service attacks, which can disrupt network communication.
Comparison of the random access protocols (behavior, collision handling, efficiency, use cases):

• Pure ALOHA: sends frames immediately; no collision detection; low efficiency; low-traffic
networks.
• Slotted ALOHA: sends frames only at specific time slots; no collision detection; efficiency
better than pure ALOHA; low-traffic networks.
• CSMA/CD: monitors the medium after sending a frame and retransmits if necessary; collision
detection by monitoring transmissions; high efficiency; wired networks with moderate to high
traffic.
• CSMA/CA: monitors the medium while transmitting and adjusts its behavior to avoid
collisions; collision avoidance through random backoff time intervals; high efficiency; wireless
networks with moderate to high traffic and high error rates.

Controlled Access Protocols in Computer Network

Controlled Access Protocols (CAPs) in computer networks control how data packets are sent over a
common communication medium. These protocols ensure that data is transmitted efficiently, without
collisions, and with little interference from other data transmissions. In this article, we will discuss
Controlled Access Protocols.

What is the Controlled Access?

In controlled access, the stations consult one another to find which station has the right to
send. It allows only one node to send at a time, to avoid the collision of messages on a shared medium.
The three controlled-access methods are:

• Reservation
• Polling
• Token Passing
1. Reservation
• In the reservation method, a station needs to make a reservation before sending data.

• The timeline has two kinds of periods:

o Reservation interval of fixed time length

o Data transmission period of variable frames.

• If there are M stations, the reservation interval is divided into M slots, and each station has
one slot.

• Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other station
is allowed to transmit during this slot.
• In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into
the i-th slot. After all M slots have been checked, each station knows which stations wish to
transmit.

• The stations which have reserved their slots transfer their frames in that order.
• After the data transmission period, the next reservation interval begins.
• Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five-slot reservation frame. In the first
interval, only stations 1, 3, and 4 have made reservations. In the second interval, only station 1 has
made a reservation.

Advantages of Reservation
• The main advantage of reservation is that the data rates and channel access times are fixed
and can be predicted easily.

• Priorities can be set to provide speedier access for some secondary stations.

• Reservation-based access methods can provide predictable network performance, which is
important in applications where latency and jitter must be minimized, such as in real-time
video or audio streaming.

• Reservation-based access methods can reduce contention for network resources, as access to
the network is pre-allocated based on reservation requests. This can improve network
efficiency and reduce packet loss.

• Reservation-based access methods can support QoS requirements, by providing different
reservation types for different types of traffic, such as voice, video, or data. This can ensure
that high-priority traffic is given preferential treatment over lower-priority traffic.

• Reservation-based access methods can enable more efficient use of available bandwidth, as
they allow for time and frequency multiplexing of different reservation requests on the same
channel.

• Reservation-based access methods are well-suited to support multimedia applications that
require guaranteed network resources, such as bandwidth and latency, to ensure high-quality
performance.

Disadvantages of Reservation

• It relies heavily on the dependability of the controlling mechanism.


• Decrease in capacity and channel data rate under light loads; increase in turn-around time.
2. Polling

• Polling process is similar to the roll-call performed in class. Just like the teacher, a controller
sends a message to each node in turn.

• In this method, one node acts as a primary station (controller) and the others are secondary
stations. All data exchanges must be made through the controller.

• The message sent by the controller contains the address of the node being selected for
granting access.

• Although all nodes receive the message, only the addressed node responds to it and sends data, if it
has any. If there is no data, usually a “poll reject” (NAK) message is sent back.

• Problems include high overhead of the polling messages and high dependence on the
reliability of the controller.

Advantages of Polling

• The maximum and minimum access times and data rates on the channel are fixed and predictable.
• It has high efficiency.
• It makes good use of the available bandwidth.
• No slot is wasted in polling.
• Priorities can be assigned to ensure faster access for some secondary stations.
Disadvantages of Polling

• It consumes more time.


• Since every station has an equal chance of winning in every round, link sharing is biased.
• Only some station might run out of data to send.
• An increase in the turnaround time leads to a drop in the data rates of the channel under low
loads.
Efficiency: Let Tpoll be the time for polling and Tt be the time required for transmission of data. Then,

Efficiency = Tt/(Tt + Tpoll)


3. Token Passing

• In the token passing scheme, the stations are logically connected to each other in the form of a
ring, and access to the medium is governed by tokens.

• A token is a special bit pattern or a small message, which circulates from one station to the next
in some predefined order.

• In a token ring, the token is passed from one station to the adjacent station in the ring, whereas
in the case of a token bus, each station uses the bus to send the token to the next station in some
predefined order.

• In both cases, the token represents permission to send. If a station has a frame queued for
transmission when it receives the token, it can send that frame before it passes the token to
the next station. If it has no queued frame, it simply passes the token along.

• After sending a frame, each station must wait for all N stations (including itself) to send the
token to their neighbours and the other N – 1 stations to send a frame, if they have one.

• Problems such as duplication of the token, loss of the token, insertion of a new station, and
removal of a station exist, and they need to be tackled for correct and reliable operation of this scheme.

Performance of token ring can be concluded by 2 parameters:-

• Delay is a measure of the time between when a packet is ready and when it is delivered. The
average time (delay) required to send a token to the next station is a/N.

• Throughput, which is a measure of successful traffic.

Throughput, S = 1/(1 + a/N) for a<1


and
S = 1/{a(1 + 1/N)} for a>1.
where N = number of stations
a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay)
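
A quick numeric sketch of the throughput formulas above (the example values for Tp, Tt and N are assumptions):

```python
def token_ring_throughput(a, N):
    """S = 1/(1 + a/N) for a < 1, and S = 1/(a*(1 + 1/N)) for a > 1,
    where a = Tp/Tt."""
    if a < 1:
        return 1 / (1 + a / N)
    return 1 / (a * (1 + 1 / N))

Tp, Tt, N = 0.4, 1.0, 10     # propagation delay, transmission delay, stations
a = Tp / Tt
print(round(a / N, 3))                        # average token-passing delay a/N
print(round(token_ring_throughput(a, N), 3))  # throughput S ~= 0.962
```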
Advantages of Token passing

• It can now be applied with modern cabling and routers, and it includes built-in debugging features
like protective relays and auto-reconfiguration.

• It provides good throughput under high-load conditions.

Disadvantages of Token passing

• Its cost is high.

• Topology components are more expensive than those of other, more widely used standards.

• The hardware elements of token ring are tricky to design. This implies that you should
choose one manufacturer and use its equipment exclusively.

You might also like