Error Detection Codes
Data travels from a sender to a receiver through some communication medium. External noise can change bits from 1 to 0 or 0 to 1. This
change in values changes the meaning of the actual message and is called an error. For efficient data
transfer, there should be error detection and correction codes. An error detection code is a binary code
that detects digital errors during transmission. To detect errors in the received message, we add some
extra bits to the actual data.
Without the addition of redundant bits, it is not possible to detect errors in the received message.
There are three ways in which we can detect errors in the received message:
1. Parity Bit
2. Checksum
3. Cyclic Redundancy Check (CRC)
Parity Bit
A parity bit is an extra bit that is added to the message bits or data-word bits on the sender side. Data-word bits along with the parity bit are called a codeword. The parity bit is added to the message bits on the sender side to help in error detection at the receiver side.
A parity bit is an extra bit included in the binary message to make the total number of 1's either odd or even. Parity denotes whether the number of 1's in a binary string is odd or even. There are two parity systems – even and odd parity checks.
Even Parity
The total number of 1's in the given data bits should be even. So if the total number of 1's in the data bits is odd, then a single 1 is appended to make the total number of 1's even; otherwise a 0 is appended (if the total number of 1's is already even). Hence, if an error affecting an odd number of bits occurs, the parity check circuit will detect it at the receiver's end.
Odd Parity
In an odd parity system, if the total number of 1's in the given binary string (or data bits) is even, then a 1 is appended to make the total count of 1's odd; otherwise a 0 is appended. The receiver knows whether the sender uses an odd or an even parity generator.
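To make the idea concrete, the following is a minimal sketch (the function names and the sample data word are illustrative, not from the text) of how a sender could append a parity bit and a receiver could verify it:

```python
def add_parity(bits, odd=False):
    """Append a parity bit so the total number of 1s is even (or odd if odd=True)."""
    ones = bits.count('1')
    parity = (ones % 2) if not odd else 1 - (ones % 2)
    return bits + str(parity)

def check_parity(codeword, odd=False):
    """Return True if the received codeword has the expected parity."""
    ones = codeword.count('1')
    return (ones % 2 == 0) if not odd else (ones % 2 == 1)

# Example with even parity
data = "1011011"             # five 1s -> odd, so a 1 is appended
codeword = add_parity(data)  # "10110111" now has six 1s
assert check_parity(codeword)
```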
• The parity bit method is a reliable way to detect any error affecting an odd number of bits at the receiver side.
• Overhead in data transmission is low, as only one parity bit is added to the message bits at the sender side before transmission.
• The limitation of this method is that only errors in an odd number of bits are identified, and the exact location of the error in the data bits cannot be determined; therefore, error correction is not possible with this method.
• If an even number of bits change during transmission, the count of 1's keeps its original parity, so neither an even nor an odd parity check can detect the error.
Two-dimensional Parity Check
In a two-dimensional parity check, the data bits are organized in a table of rows and columns. Parity check bits are calculated for each row, which is equivalent to a simple parity check bit. Parity check bits are also calculated for all columns, and then both are sent along with the data. At the receiving end, these are compared with the parity bits calculated on the received data.
Checksum
Checksum error detection is a method used to identify errors in transmitted data. The process involves dividing the data into equally sized segments and adding them using 1's complement arithmetic; the complement of this sum (the checksum) is then sent along with the data to the receiver. At the receiver's end, the same addition is performed over the received segments together with the checksum, and if all zeroes are obtained in the complemented result, it means that the data is correct.
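A rough sketch of the scheme described above, assuming 16-bit segments (the segment values and function names are illustrative):

```python
def ones_complement_sum(words, bits=16):
    """Add words using 1's complement arithmetic (end-around carry)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)  # fold the carry back in
    return total

def make_checksum(words, bits=16):
    """The checksum is the 1's complement of the 1's-complement sum."""
    return ones_complement_sum(words, bits) ^ ((1 << bits) - 1)

# Sender side: hypothetical 16-bit segments
segments = [0x4500, 0x0073, 0x0000, 0x4000]
checksum = make_checksum(segments)

# Receiver side: summing segments plus checksum and complementing gives zero if correct
assert make_checksum(segments + [checksum]) == 0
```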
Cyclic Redundancy Check (CRC)
• Unlike the checksum scheme, which is based on addition, CRC is based on binary division.
• In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are appended to the
end of the data unit so that the resulting data unit becomes exactly divisible by a second,
predetermined binary number.
• At the destination, the incoming data unit is divided by the same number. If at this step there
is no remainder, the data unit is assumed to be correct and is therefore accepted.
• A remainder indicates that the data unit has been damaged in transit and therefore must be
rejected.
CRC Working
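The modulo-2 (XOR) division on which CRC is based can be sketched as follows; the data word, generator, and function names here are illustrative and not taken from any particular standard:

```python
def mod2div(dividend, generator):
    """Modulo-2 (XOR) division; returns the remainder as a bit string."""
    bits = list(dividend)
    for i in range(len(bits) - len(generator) + 1):
        if bits[i] == "1":                       # divide only when the leading bit is 1
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-(len(generator) - 1):])

data = "100100"      # illustrative data word
gen = "1101"         # illustrative generator (divisor)

# Sender: append len(gen)-1 zeros, divide, and append the remainder to the data
crc = mod2div(data + "0" * (len(gen) - 1), gen)
codeword = data + crc

# Receiver: dividing the received codeword by the same generator leaves no remainder
assert "1" not in mod2div(codeword, gen)
```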
FRAMING
• Frames are the units of digital transmission, particularly in computer networks.
• The data link layer divides the stream of bits received from the network layer into manageable
data units called frames.
• At the data link layer, the message from the sender is encapsulated into frames and delivered to the receiver by adding the sender's and receiver's addresses. The advantage of using frames is that data is broken up into recoverable chunks that can easily be checked for corruption.
• Framing is an important aspect of data link layer protocol design because it allows the
transmission of data to be organized and controlled, ensuring that the data is delivered
accurately and efficiently.
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size, and there is no need to provide boundaries to the frame; the length of the frame itself acts as a delimiter.
• Drawback: It suffers from internal fragmentation if the data size is less than the frame size
• Solution: Padding
2. Variable size: In this case, there is a need to define the end of the frame as well as the beginning of the next frame to distinguish between them. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the length of the frame.
Used in Ethernet(802.3). The problem with this is that sometimes the length field might get
corrupted.
2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate the end of the frame. Used
in Token Ring. The problem with this is that ED can occur in the data. This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If the data contains the ED pattern, then an extra byte is stuffed into the data to differentiate it from the ED.
Disadvantage – It is a costly and largely obsolete method.
2. Bit Stuffing: Let ED = 01111 and data = 01111
–> The sender stuffs a bit to break the pattern, i.e., here it inserts a 0 into the data: 011101.
–> The receiver receives the frame.
–> If the data contains 011101, the receiver removes the stuffed 0 and reads the original data.
Examples:
• If Data –> 011100011110 and ED –> 0111 then, find data after bit stuffing.
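For illustration, here is a sketch of bit stuffing using the standard HDLC rule (stuff a 0 after every run of five consecutive 1s, so the 01111110 flag can never appear inside the data); the shorter ED used in the example above works on the same principle:

```python
FLAG = "01111110"   # standard HDLC flag; six 1s in a row never appear in stuffed data

def bit_stuff(data):
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in data:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")    # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(stuffed):
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in stuffed:
        if skip:
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True        # the next bit is a stuffed 0
    return "".join(out)

payload = "011111101111110"
assert bit_unstuff(bit_stuff(payload)) == payload
```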
FLOW CONTROL
Flow control refers to a set of procedures used to restrict the amount of data that the sender can send before waiting for acknowledgement.
Stop-and-wait
o In the Stop-and-wait method, the sender waits for an acknowledgement after every frame it sends.
o When acknowledgement is received, then only next frame is sent. The process of alternately
sending and waiting of a frame continues until the sender transmits the EOT (End of
transmission) frame.
Advantage of Stop-and-wait
The Stop-and-wait method is simple as each frame is checked and acknowledged before the
next frame is sent.
Disadvantage of Stop-and-wait
The Stop-and-wait technique is inefficient because each frame must travel all the way to the receiver, and an acknowledgement must travel all the way back, before the next frame can be sent. Each frame sent and received uses the entire time needed to traverse the link.
Sliding Window
o The Sliding Window is a method of flow control in which the sender can transmit several frames before getting an acknowledgement.
o In Sliding Window control, multiple frames can be sent one after another, due to which the capacity of the communication channel can be utilized efficiently.
o A single ACK can acknowledge multiple frames.
o Sliding Window refers to imaginary boxes at both the sender and receiver end.
o The window can hold the frames at either end, and it provides the upper limit on the number
of frames that can be transmitted before the acknowledgement.
o Frames can be acknowledged even when the window is not completely filled.
o The window has a specific size, and the frames within it are numbered modulo-n, which means they are numbered from 0 to n-1. For example, if n = 8, the frames are numbered 0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1,...
o The size of the window is n-1. Therefore, a maximum of n-1 frames can be sent before an acknowledgement.
o When the receiver sends an ACK, it includes the number of the next frame that it wants to receive. For example, to acknowledge the string of frames ending with frame number 4, the receiver sends an ACK containing the number 5. When the sender sees an ACK with the number 5, it knows that frames 0 through 4 have been received.
Sender Window
o At the beginning of a transmission, the sender window contains n-1 frames, and as they are sent out, the left boundary moves inward, shrinking the size of the window. For example, if the size of the window is w and three frames are sent out, then the number of frames left in the sender window is w-3.
o Once the ACK has arrived, then the sender window expands to the number which will be equal
to the number of frames acknowledged by ACK.
o For example, suppose the size of the window is 7 and frames 0 through 4 have been sent out with no acknowledgement arriving; then the sender window contains only two frames, i.e., 5 and 6. Now, if an ACK arrives with the number 4, it means that frames 0 through 3 have arrived undamaged, and the sender window is expanded to include the next four frames. Therefore, the sender window contains six frames (5,6,7,0,1,2).
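The bookkeeping in the example above can be sketched as follows (window size 7, sequence numbers modulo 8; the variable and function names are illustrative):

```python
SEQ_MODULO = 8           # frames are numbered 0..7
WINDOW = SEQ_MODULO - 1  # at most n-1 = 7 outstanding frames

outstanding = []         # frames sent but not yet acknowledged
next_seq = 0

def send_frames(count):
    """Send up to `count` frames, limited by the remaining window space."""
    global next_seq
    for _ in range(min(count, WINDOW - len(outstanding))):
        outstanding.append(next_seq)
        next_seq = (next_seq + 1) % SEQ_MODULO

def receive_ack(ack):
    """Cumulative ACK: the receiver asks for frame `ack`, so everything before it slides out."""
    while outstanding and outstanding[0] != ack:
        outstanding.pop(0)

send_frames(5)      # frames 0..4 are in flight; two slots remain in the window
receive_ack(4)      # frames 0..3 acknowledged; the window slides by four
print(outstanding)  # [4] is still unacknowledged; six new frames (5,6,7,0,1,2) may now be sent
```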
Receiver Window
o At the beginning of transmission, the receiver window does not contain n frames, but it
contains n-1 spaces for frames.
o When the new frame arrives, the size of the window shrinks.
o The receiver window does not represent the number of frames received; it represents the number of frames that can be received before an ACK is sent. For example, if the size of the window is w and three frames are received, then the number of spaces available in the window is (w-3).
o Once the acknowledgement is sent, the receiver window expands by the number equal to the
number of frames acknowledged.
o Suppose the size of the window is 7, which means the receiver window contains seven spaces for seven frames. If one frame is received, the receiver window shrinks, moving the boundary from 0 to 1. In this way, the window shrinks one space at a time, so it then contains six spaces. If frames 0 through 4 have been received, the window contains two spaces before an acknowledgement is sent.
Error Control:
Error control in the data link layer is based on automatic repeat request, which is the retransmission
of data.
NOISELESS CHANNEL
Simplest Protocol
NOISY CHANNEL
Stop-and-Wait Automatic Repeat Request
Stop-and-Wait ARQ (Automatic Repeat Request) performs both error control and flow control.
Timeout
Timeout refers to the duration for which the sender waits for an acknowledgment (ACK) from
the receiver after transmitting a data packet. If the sender does not receive an ACK within this
timeout period, it assumes that the frame was lost or corrupted and retransmits the frame.
The sender keeps a copy of each frame until the arrival of acknowledgement.
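A minimal simulation sketch of this behaviour (the timeout value, loss probability, and function names are illustrative assumptions, not part of any standard API):

```python
import random
import time

TIMEOUT = 0.5   # seconds the sender waits for an ACK (illustrative value)

def unreliable_send(frame):
    """Pretend to deliver a frame; returns an ACK most of the time, None when 'lost'."""
    return frame["seq"] if random.random() > 0.3 else None

def stop_and_wait_send(data):
    seq = 0
    for payload in data:
        frame = {"seq": seq, "payload": payload}
        while True:                  # keep a copy and retransmit until the ACK arrives
            ack = unreliable_send(frame)
            if ack == seq:           # ACK received before the timeout
                break
            time.sleep(TIMEOUT)      # timeout expired: retransmit the same frame
        seq ^= 1                     # alternate sequence numbers 0 and 1

stop_and_wait_send(["frame A", "frame B", "frame C"])
```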
Go-Back-N ARQ
This protocol is a practical approach to the sliding window.
• In Go-Back-N ARQ, the size of the sender window is N and the size of the receiver window is always 1.
• This protocol makes use of cumulative acknowledgements: the receiver maintains an acknowledgement timer, and whenever the receiver receives a new frame from the sender, it starts a new acknowledgement timer. When the timer expires, the receiver sends a cumulative acknowledgement for all the frames that are unacknowledged at that moment.
• It is important to note that a new acknowledgement timer only starts after a new frame is received; it does not start after the expiry of the old acknowledgement timer.
• If the receiver receives a corrupted frame, it silently discards that frame, and the correct frame is retransmitted by the sender after the timeout timer expires. By discarding silently we mean simply rejecting the frame and not taking any action on it.
• If, after the expiry of the acknowledgement timer, there is only one frame left to be acknowledged, the receiver sends an independent acknowledgement for that frame.
• If the receiver receives an out-of-order frame, it simply discards the out-of-order frames.
• If the sender does not receive any acknowledgement, then the entire window of frames is retransmitted.
• Using the Go-Back-N ARQ protocol leads to the retransmission of the lost frames after the
expiry of the timeout timer.
The Need of Go-Back-N ARQ
This protocol is used to send more than one frame at a time. With the help of Go-Back-N ARQ, the waiting time of the sender is reduced and transmission efficiency increases.
Send (sliding) window for Go-Back-N ARQ
Basically, the range that is of concern to the sender is known as the send sliding window for Go-Back-N ARQ. It is an imaginary box that covers the sequence numbers of the data frames that can be in transit.
The size of this imaginary box is 2^m − 1, and it has three variables: Sf (the send window's first outstanding frame), Sn (the send window's next frame to be sent), and Ssize (the send window size).
• The sender can transmit N frames before receiving the ACK frame.
• The size of the send sliding window is N.
• The copy of sent data is maintained in the sent buffer of the sender until all the sent packets
are acknowledged.
• If the timeout timer runs out then the sender will resend all the packets.
• Once the data get acknowledged by the receiver then that particular data will be removed
from the buffer.
Whenever a valid acknowledgement arrives then the send window can slide one or more slots.
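A sketch of the send-window bookkeeping with the variables Sf, Sn and Ssize described above (m = 3 is an illustrative choice, and the retransmit call is only a placeholder):

```python
SEQ_BITS = 3
MODULO = 2 ** SEQ_BITS
SSIZE = MODULO - 1                 # send window size N = 2^m - 1

Sf, Sn = 0, 0                      # first outstanding frame, next frame to send
buffer = {}                        # copies of unacknowledged frames

def can_send():
    """True while fewer than SSIZE frames are outstanding."""
    return (Sn - Sf) % MODULO < SSIZE

def send(payload):
    global Sn
    if can_send():
        buffer[Sn] = payload       # keep a copy until it is acknowledged
        Sn = (Sn + 1) % MODULO

def on_ack(ack):
    """Cumulative ACK: slide Sf forward and drop acknowledged copies."""
    global Sf
    while Sf != ack:
        buffer.pop(Sf, None)
        Sf = (Sf + 1) % MODULO

def on_timeout():
    """Go back N: retransmit every outstanding frame from Sf up to Sn."""
    seq = Sf
    while seq != Sn:
        print("retransmit", seq, buffer[seq])   # placeholder for the actual send
        seq = (seq + 1) % MODULO

for p in ["f0", "f1", "f2", "f3"]:
    send(p)
on_ack(2)                          # frames 0 and 1 acknowledged; the window slides
```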
Advantages
Given below are some of the benefits of using the Go-Back-N ARQ protocol:
• This protocol is more efficient than Stop-and-Wait.
• The waiting time is fairly low in this protocol.
• With the help of this protocol, one timer can be set for many frames.
• Also, the sender can send many frames at a time.
• A single ACK frame can acknowledge more than one frame.
Disadvantages
Given below are some drawbacks:
• The timeout timer runs at the sender side only.
• The transmitter needs to store the last N packets.
• The retransmission of many error-free packets follows an erroneous packet.
Selective-Repeat Automatic Repeat Request (ARQ)
• Selective Repeat Automatic Repeat Request (ARQ) is one of the techniques that the data link layer may deploy to control errors.
• It is a sliding window protocol and is used for error detection and control in the data link layer.
• In the selective repeat, the sender sends several frames specified by a window size even
without the need to wait for individual acknowledgement from the receiver as in Go-Back-N
ARQ.
• In selective repeat protocol, the retransmitted frame is received out of sequence.
• In Selective Repeat ARQ only the lost or error frames are retransmitted, whereas correct
frames are received and buffered.
• The receiver, while keeping track of sequence numbers, buffers the frames in memory and sends a NACK only for frames that are missing or damaged. The sender will send/retransmit the packet for which a NACK is received.
Explanation
Step 1 − Frame 0 is sent from sender to receiver and a timer is set for it.
Step 2 − Without waiting for an acknowledgement from the receiver, another frame, Frame 1, is sent by the sender, setting a timer for it.
Step 3 − In the same way, Frame 2 is also sent to the receiver, setting a timer, without waiting for the previous acknowledgement.
Step 4 − Whenever the sender receives ACK0 from the receiver within frame 0's timer, that timer is stopped and the next frame, frame 3, is sent.
Step 5 − Whenever the sender receives ACK1 from the receiver within frame 1's timer, that timer is stopped and the next frame, frame 4, is sent.
Step 6 − If the sender does not receive ACK2 from the receiver within the time slot, it declares a timeout for frame 2 and resends frame 2, because it assumes that frame 2 may be lost or damaged.
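A sketch of the receiver-side buffering described above (the window size and frame payloads are illustrative, and sequence-number wraparound is ignored to keep the example short):

```python
WINDOW = 4                      # illustrative receive window size

expected = 0                    # lowest sequence number not yet delivered
buffered = {}                   # out-of-order frames held until the gap is filled

def receive(seq, payload):
    """Buffer out-of-order frames; deliver in order once the missing frame arrives."""
    global expected
    delivered = []
    if expected <= seq < expected + WINDOW:
        buffered[seq] = payload
        while expected in buffered:            # deliver any contiguous run
            delivered.append(buffered.pop(expected))
            expected += 1
    return delivered

print(receive(0, "f0"))   # ['f0'] delivered immediately
print(receive(2, "f2"))   # []    buffered; frame 1 is missing, so the receiver would NACK 1
print(receive(1, "f1"))   # ['f1', 'f2'] delivered once the retransmitted frame 1 arrives
```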
HDLC (High-Level Data Link Control)
High-level Data Link Control (HDLC) is a group of communication protocols of the data link layer for
transmitting data between network points or nodes. Since it is a data link protocol, data is organized
into frames. A frame is transmitted via the network to the destination that verifies its successful arrival.
It is a bit - oriented protocol that is applicable for both point - to - point and multipoint
communications.
Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous balanced
mode.
• Normal Response Mode (NRM) − Here, there are two types of stations: a primary station that sends commands and secondary stations that respond to the received commands. It is used for both point-to-point and multipoint communications.
• Asynchronous Balanced Mode (ABM) − Here, the configuration is balanced, i.e. each station
can both send commands and respond to commands. It is used for only point - to - point
communications.
HDLC Frame
HDLC is a bit - oriented protocol where each frame contains up to six fields. The structure varies
according to the type of frame. The fields of a HDLC frame are −
• Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.
• Address − It contains the address of the receiver. If the frame is sent by the primary station, it
contains the address(es) of the secondary station(s). If it is sent by the secondary station, it
contains the address of the primary station. The address field may be from 1 byte to several
bytes.
• Control − It is 1 or 2 bytes containing flow and error control information.
• Payload − This carries the data from the network layer. Its length may vary from one network
to another.
• FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard code used is CRC (cyclic redundancy code).
There are three types of HDLC frames. The type of frame is determined by the control field of the frame:
• I-frame − I-frames or Information frames carry user data from the network layer. They also include flow and error control information that is piggybacked on user data. The first bit of the control field of an I-frame is 0.
• S-frame − S-frames or Supervisory frames do not contain an information field. They are used for flow and error control when piggybacking is not required. The first two bits of the control field of an S-frame are 10.
• U-frame − U-frames or Unnumbered frames are used for myriad miscellaneous functions, like link management. They may contain an information field, if required. The first two bits of the control field of a U-frame are 11.
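As an illustration of how the control-field bits identify the frame type, here is a small sketch; it assumes that the "first bit" in the description above corresponds to the least significant bit of the control byte:

```python
def hdlc_frame_type(control):
    """Classify an HDLC frame from its control field (first transmitted bit = LSB assumed)."""
    if (control & 0b01) == 0:
        return "I-frame"        # first bit 0 -> Information frame
    if (control & 0b10) == 0:
        return "S-frame"        # first two bits '10' -> Supervisory frame
    return "U-frame"            # first two bits '11' -> Unnumbered frame

print(hdlc_frame_type(0b00000000))  # I-frame
print(hdlc_frame_type(0b00000001))  # S-frame
print(hdlc_frame_type(0b00000011))  # U-frame
```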
Multiple Access Protocols
Multiple Access Protocols are methods used in computer networks to control how data is transmitted
when multiple devices are trying to communicate over the same network. These protocols ensure that
data packets are sent and received efficiently, without collisions or interference. They help manage the
network traffic so that all devices can share the communication channel smoothly and effectively.
Pure ALOHA
In pure ALOHA, a station transmits a frame whenever it has data to send and then waits for an acknowledgement. If the acknowledgement does not arrive within the allotted time, the station waits for a random amount of time, called back-off time (Tb), and re-sends the data. Since different stations wait for different amounts of time, the probability of further collision decreases.
Vulnerable Time = 2* Frame transmission time
Throughput = G exp{-2*G}
Maximum throughput = 0.184 for G=0.5
Slotted ALOHA
It is similar to pure ALOHA, except that time is divided into slots and sending of data is allowed only at the beginning of these slots. If a station misses the allowed time, it must wait for the beginning of the next slot. This reduces the probability of collision.
Vulnerable Time = Frame transmission time
Throughput = G exp{-G}
Maximum throughput = 0.368 for G=1
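The two throughput formulas can be checked with a few lines of code (the G values are chosen to reproduce the maxima quoted above):

```python
import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))     # ~0.184, the maximum for pure ALOHA
print(round(slotted_aloha_throughput(1.0), 3))  # ~0.368, the maximum for slotted ALOHA
```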
Carrier Sense Multiple Access (CSMA)
Carrier Sense Multiple Access (CSMA) is a method used in computer networks to manage how devices share a communication channel to transfer data between two devices. In this protocol, each device first senses the channel before sending data. If the channel is busy, the device waits until it is free. This helps reduce collisions, where two devices send data at the same time, ensuring smoother communication across the network. CSMA is commonly used in technologies like Ethernet and Wi-Fi.
This method was developed to decrease the chances of collisions when two or more stations
start sending their signals over the data link layer. Carrier Sense multiple access requires that
each station first check the state of the medium before sending.
What is Vulnerable Time in CSMA?
Vulnerable time is the short period when there’s a chance that two devices on a network
might send data at the same time and cause a collision.
In networks like Wi-Fi or Ethernet, devices first listen to see if the network is clear before
sending their data. However, after a device checks and finds the network free, there’s a
small delay before it starts sending data. During this time, another device might also try to
send data, which can lead to both data packets crashing into each other (a collision). This
period, where a collision might happen, is called the vulnerable time.
Vulnerable time = Propagation time (Tp)
The persistence methods can be applied to help the station take action when the channel is
busy/idle.
Types of CSMA Protocol
There are two main types of Carrier Sense Multiple Access (CSMA) protocols, each designed
to handle how devices manage potential data collisions on a shared communication channel.
These types differ based on how they respond to the detection of a busy network:
1. CSMA/CD
2. CSMA/CA
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
In this method, a station monitors the medium after it sends a frame to see if the transmission was successful. If it was successful, the transmission is finished; if not, the frame is sent again.
In the diagram, A starts sending the first bit of its frame at t1, and since C sees the channel idle at t2, it starts sending its frame at t2. C detects A's frame at t3 and aborts its transmission. A detects C's frame at t4 and aborts its transmission. The transmission time for C's frame is therefore t3−t2, and for A's frame it is t4−t1.
So, the frame transmission time (Tfr) should be at least twice the maximum propagation time (Tp). This condition is derived by considering the case in which the two stations involved in a collision are the maximum distance apart.
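As a worked example (the bandwidth and propagation time are illustrative values, not taken from the text), the condition Tfr ≥ 2 × Tp translates directly into a minimum frame size:

```python
BANDWIDTH = 10_000_000        # 10 Mbps, illustrative
PROP_TIME = 25.6e-6           # Tp = 25.6 microseconds, illustrative

t_fr_min = 2 * PROP_TIME                  # frame transmission time must be at least 2*Tp
min_frame_bits = BANDWIDTH * t_fr_min     # 512 bits (64 bytes) for these values
print(min_frame_bits)
```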
• Throughput and Efficiency: The throughput of CSMA/CD is much greater than pure or
slotted ALOHA.
• For the 1-persistent method, throughput is 50% when G=1.
• For the non-persistent method, throughput can go up to 90%.
Comparison of multiple access methods:
• Pure ALOHA – Sends frames immediately; no collision detection; low efficiency; suited to low-traffic networks.
• Slotted ALOHA – Sends frames at specific time slots; no collision detection; better efficiency than pure ALOHA; suited to low-traffic networks.
• CSMA/CD – Monitors the medium after sending a frame and retransmits if necessary; collision detection by monitoring transmissions; high efficiency; suited to wired networks with moderate to high traffic.
• CSMA/CA – Monitors the medium while transmitting and adjusts behaviour to avoid collisions; collision avoidance through random backoff time intervals; high efficiency; suited to wireless networks with moderate to high traffic and high error rates.
Controlled Access Protocols
Controlled Access Protocols (CAPs) in computer networks control how data packets are sent over a common communication medium. These protocols ensure that data is transmitted efficiently, without collisions, and with little interference from other data transmissions.
In controlled access, the stations seek data from one another to find which station has the right to
send. It allows only one node to send at a time, to avoid the collision of messages on a shared medium.
The three controlled-access methods are:
• Reservation
• Polling
• Token Passing
1. Reservation
• In the reservation method, a station needs to make a reservation before sending data.
• If there are M stations, the reservation interval is divided into M slots, and each station has
one slot.
• Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other station is allowed to transmit during this slot.
• In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into the i-th slot. After all M slots have been checked, each station knows which stations wish to transmit.
• The stations which have reserved their slots transfer their frames in that order.
• After data transmission period, next reservation interval begins.
• Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five-slot reservation frame. In the first
interval, only stations 1, 3, and 4 have made reservations. In the second interval, only station 1 has
made a reservation.
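The reservation interval from this figure can be sketched in a few lines (the station numbering and function name are illustrative):

```python
STATIONS = 5

def reservation_interval(wants_to_send):
    """Each station i sets bit i if it has a frame; transmission then follows slot order."""
    reservation = [1 if i in wants_to_send else 0 for i in range(STATIONS)]
    order = [i for i, bit in enumerate(reservation) if bit == 1]
    return reservation, order

# First interval from the figure: stations 1, 3 and 4 have made reservations
print(reservation_interval({1, 3, 4}))   # ([0, 1, 0, 1, 1], [1, 3, 4])
# Second interval: only station 1
print(reservation_interval({1}))         # ([0, 1, 0, 0, 0], [1])
```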
Advantages of Reservation
• The main advantage of reservation is that channel access times and data rates are fixed and can be predicted easily.
• Reservation-based access methods can reduce contention for network resources, as access to
the network is pre-allocated based on reservation requests. This can improve network
efficiency and reduce packet loss.
• Reservation-based access methods can enable more efficient use of available bandwidth, as
they allow for time and frequency multiplexing of different reservation requests on the same
channel.
Disadvantages of Reservation
• The reservation interval itself consumes channel time in every cycle, so capacity is wasted when few stations have data to send, and a station that becomes ready just after its slot must wait for the next reservation interval.
2. Polling
• The polling process is similar to the roll-call performed in class. Just like the teacher, a controller sends a message to each node in turn.
• In this, one acts as a primary station(controller) and the others are secondary stations. All data
exchanges must be made through the controller.
• The message sent by the controller contains the address of the node being selected for
granting access.
• Although all nodes receive the message, only the addressed one responds to it and sends data, if any. If there is no data, usually a "poll reject" (NAK) message is sent back.
• Problems include high overhead of the polling messages and high dependence on the
reliability of the controller.
Advantages of Polling
• The maximum and minimum access times and data rates on the channel are fixed and predictable.
• It has maximum efficiency.
• It has maximum bandwidth.
• No slot is wasted in polling.
• Priorities can be assigned to ensure faster access for some secondary stations.
Disadvantages of Polling
• The polling messages add overhead, and the scheme depends heavily on the reliability of the controller.
3. Token Passing
• In the token passing scheme, the stations are logically connected to one another in the form of a ring, and access to the stations is governed by tokens.
• A token is a special bit pattern or a small message which circulates from one station to the next in some predefined order.
• In a token ring, the token is passed from one station to the next adjacent station in the ring, whereas in a token bus, each station uses the bus to send the token to the next station in some predefined order.
• In both cases, the token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before it passes the token to the next station. If it has no queued frame, it simply passes the token along.
• After sending a frame, each station must wait for all N stations (including itself) to send the
token to their neighbours and the other N – 1 stations to send a frame, if they have one.
• There exist problems like duplication of the token, loss of the token, insertion of a new station, or removal of a station, which need to be tackled for correct and reliable operation of this scheme.
• Delay is a measure of the time between when a packet is ready and when it is delivered. The average time (delay) required to send a token to the next station is a/N.
• It may now be applied with routed cabling and includes built-in debugging features like protective relay and auto-reconfiguration.
• Topology components are more expensive than those of other, more widely used standards.
• The hardware elements of token rings are proprietary in design, which implies that you should choose one manufacturer and use its equipment exclusively.