Data Link Layer - Unit 3
Error Detection
When data is transmitted from one device to another, the system cannot guarantee that
the data received by one device is identical to the data transmitted by the other. An
error is a situation in which the message received at the receiver end is not identical
to the message transmitted.
Types Of Errors
o Single-Bit Error
o Burst Error
Single-Bit Error:
Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.
For example, the transmitted message may be corrupted in a single bit, e.g., a 0 bit
changed to a 1.
Single-bit errors are not very likely in serial data transmission. For example, if a
sender transmits at 10 Mbps, each bit lasts only 0.1 µs; for only one bit to be
corrupted, the noise would have to last no longer than 0.1 µs, which is rare.
Single-bit errors mainly occur in parallel data transmission. For example, if eight wires
are used to send the eight bits of a byte and one of the wires is noisy, then one bit is
corrupted in each byte.
Burst Error:
A Burst Error occurs when two or more bits are changed from 0 to 1 or from 1 to 0.
A burst error is measured from the first corrupted bit to the last corrupted bit.
The duration of the noise in a burst error is longer than in a single-bit error.
The number of affected bits depends on the duration of the noise and on the data rate.
Single Parity Check
o Single parity checking is a simple and inexpensive mechanism for detecting errors.
o In this technique, a redundant bit, known as a parity bit, is appended at the end of
the data unit so that the number of 1s becomes even. For an 8-bit data unit, the total
number of transmitted bits is therefore 9.
o If the number of 1s is odd, a parity bit of 1 is appended; if the number of 1s is
even, a parity bit of 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and
compared with the received parity bit.
o Because this technique makes the total number of 1s even, it is known as even-parity
checking.
Two-Dimensional Parity Check
o If two bits in one data unit are corrupted and two bits at exactly the same positions
in another data unit are also corrupted, the 2D parity checker will not be able to
detect the error.
o In some cases, this technique cannot detect errors of 4 bits or more.
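The even-parity scheme described above can be sketched in Python (function names are illustrative):

```python
def add_even_parity(bits):
    """Append a parity bit so that the total number of 1s is even."""
    parity = sum(bits) % 2        # 1 if the count of 1s is odd, else 0
    return bits + [parity]

def check_even_parity(bits):
    """Receiver side: the frame is valid if the total number of 1s is even."""
    return sum(bits) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 1, 0]      # 8 data bits -> 9 transmitted bits
frame = add_even_parity(data)
print(frame)                          # [1, 0, 1, 1, 0, 0, 1, 0, 0]
print(check_even_parity(frame))       # True
corrupted = frame[:]
corrupted[2] ^= 1                     # a single-bit error is detected ...
print(check_even_parity(corrupted))   # False
```

Note that, as described above, flipping two bits in the same frame would leave the parity even and go undetected.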
Checksum
A Checksum is an error detection technique based on the concept of redundancy.
Checksum Generator
A checksum is generated at the sending side. The checksum generator subdivides the data
into equal segments of n bits each, and all these segments are added together using
one's complement arithmetic. The sum is complemented and appended to the original data
as the checksum field. The extended data is then transmitted across the network.
Suppose L is the total sum of the data segments; then the checksum is the one's complement of L.
Checksum Checker
A checksum is verified at the receiving side. The receiver subdivides the incoming data
into equal segments of n bits each, adds all these segments together (including the
checksum), and complements the sum. If the complement of the sum is zero, the data is
accepted; otherwise, it is rejected.
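The generator and checker above can be sketched as follows; 8-bit segments are an illustrative assumption:

```python
def ones_complement_sum(segments, n=8):
    """Add n-bit segments using one's complement (end-around carry) arithmetic."""
    mask = (1 << n) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> n)   # wrap any carry back in
    return total

def make_checksum(segments, n=8):
    """Sender: complement of the one's complement sum of the data segments."""
    return ones_complement_sum(segments, n) ^ ((1 << n) - 1)

def verify_checksum(segments, checksum, n=8):
    """Receiver: add everything, complement; zero means the data is accepted."""
    total = ones_complement_sum(segments + [checksum], n)
    return (total ^ ((1 << n) - 1)) == 0

data = [0b10011001, 0b11100010]
ck = make_checksum(data)
print(bin(ck))                      # 0b10000011 -- appended as the checksum field
print(verify_checksum(data, ck))    # True -> data accepted
```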
Cyclic Redundancy Check (CRC)
o In the CRC technique, a string of n 0s is appended to the data unit, where n is one
less than the number of bits in a predetermined binary number known as the divisor,
which is n+1 bits long.
o Secondly, the newly extended data is divided by the divisor using a process known as
binary (modulo-2) division. The remainder generated from this division is known as the
CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original
data. This newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver will
treat this whole unit as a single unit, and it is divided by the same divisor that was
used to find the CRC remainder.
If the remainder of this division is zero, the data contains no error and is accepted.
If the remainder is not zero, the data contains an error and is therefore discarded.
CRC Generator
o A CRC generator uses modulo-2 division. Firstly, three zeros are appended at the end
of the data (here 11100), because the divisor is 4 bits long and the string of 0s to be
appended is always one bit shorter than the divisor.
o Now, the string becomes 11100000, and the resultant string is divided by the
divisor 1001.
o The remainder generated from the binary division is known as CRC remainder.
The generated value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data unit,
and the final string would be 11100111 which is sent across the network.
Error Correction
o Backward error correction: Once the error is discovered, the receiver requests
the sender to retransmit the entire data unit.
o Forward error correction: In this case, the receiver uses the error-correcting
code which automatically corrects the errors.
A single additional bit can detect an error, but cannot correct it.
To correct errors, one must know the exact position of the error. For example, to
correct a single-bit error in a 7-bit data unit, the error-correction code must
determine which one of the seven bits is in error. To achieve this, some additional
redundant bits are added.
Suppose r is the number of redundant bits and d is the total number of the data bits.
The number of redundant bits r can be calculated by using the formula:
2^r >= d + r + 1
The value of r is calculated by using the above formula. For example, if the value of d is
4, then the possible smallest value that satisfies the above relation would be 3.
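The relation 2^r >= d + r + 1 can be checked with a short loop:

```python
def redundant_bits(d):
    """Smallest number of redundant bits r satisfying 2**r >= d + r + 1."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

print(redundant_bits(4))   # 3 -> a 7-bit (4 data + 3 parity) codeword
print(redundant_bits(7))   # 4 -> an 11-bit codeword
```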
To determine the position of the bit in error, R.W. Hamming developed a technique known
as the Hamming code, which can be applied to data units of any length and uses the
relationship between data bits and redundant bits.
Hamming Code
Parity bit: A bit appended to the original binary data so that the total number of 1s
becomes even or odd.
Even parity: To check for even parity, if the total number of 1s is even, then the value of
the parity bit is 0. If the total number of 1s occurrences is odd, then the value of the
parity bit is 1.
Odd Parity: To check for odd parity, if the total number of 1s is even, then the value of
parity bit is 1. If the total number of 1s is odd, then the value of parity bit is 0.
o An information unit of d bits is combined with r redundant bits to form a (d+r)-bit unit.
o The location of each of the (d+r) digits is assigned a decimal value.
o The r bits are placed at the positions that are powers of 2, i.e., positions 1, 2, 4, ..., 2^(r-1).
o At the receiving end, the parity bits are recalculated. The decimal value of the
parity bits determines the position of an error.
o The binary representation of the recalculated parity bits, i.e., r4r2r1, is 100, and
its corresponding decimal value is 4. Therefore, the error is in the 4th bit position,
and that bit must be changed from 1 to 0 to correct the error.
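The scheme above can be sketched as a minimal Hamming(7,4) code, with parity bits at positions 1, 2 and 4 and even parity assumed:

```python
def hamming_encode(d3, d5, d6, d7):
    """Place 4 data bits at positions 3, 5, 6, 7; parity bits at 1, 2, 4."""
    p1 = (d3 + d5 + d7) % 2     # covers positions 1, 3, 5, 7
    p2 = (d3 + d6 + d7) % 2     # covers positions 2, 3, 6, 7
    p4 = (d5 + d6 + d7) % 2     # covers positions 4, 5, 6, 7
    return [p1, p2, d3, p4, d5, d6, d7]

def hamming_syndrome(cw):
    """Recompute parity; the decimal value of r4r2r1 is the error position (0 = none)."""
    r1 = (cw[0] + cw[2] + cw[4] + cw[6]) % 2
    r2 = (cw[1] + cw[2] + cw[5] + cw[6]) % 2
    r4 = (cw[3] + cw[4] + cw[5] + cw[6]) % 2
    return r4 * 4 + r2 * 2 + r1

cw = hamming_encode(1, 0, 1, 1)
print(hamming_syndrome(cw))      # 0 -> no error
cw[3] ^= 1                        # corrupt the 4th bit position
print(hamming_syndrome(cw))      # 4 -> error at position 4; flip it to correct
```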
2. Address Field –
The address field generally contains the HDLC address of the secondary station; it
identifies which secondary station will send or receive the data frame. This field
usually consists of 8 bits, so it can address up to 256 stations. It can, however, be
one byte or several bytes long, depending on the requirements of the network; in the
extended format, each byte can identify up to 128 stations.
The address may be a specific station address, a group address, or a broadcast address.
The primary station is either the source or the destination of every exchange, which
eliminates the need to include the primary's address.
3. Control Field –
HDLC generally uses this field to determine how to control process of communication. The
control field is different for different types of frames in HDLC protocol. The types of
frames can be Information frame (I-frame), Supervisory frame (S-frame), and Unnumbered
frame (U-frame).
This field is a 1- to 2-byte segment of the frame required for flow and error control.
It basically consists of 8 bits but can be extended to 16 bits. The interpretation of
its bits depends upon the type of frame.
4. Information Field –
This field usually contains the user's data that the sender is transmitting to the
receiver in an I-frame, or network-layer/management information in a U-frame. It is
fully transparent, and its length may vary from one network to another.
The information field is not always present in an HDLC frame.
PPP stands for Point-to-Point Protocol. PPP is Windows' default Remote Access Service
(RAS) protocol and a Data Link Layer (DLL) protocol used to encapsulate higher
network-layer protocols so that they can pass over synchronous and asynchronous
communication lines. It was initially created as an encapsulation protocol to carry
traffic of numerous network-layer protocols over point-to-point connections. In
addition, PPP standardized various mechanisms, including asynchronous and bit-oriented
synchronous encapsulation, multiplexing of network protocols, session negotiation, and
negotiation of data compression. PPP also supports non-TCP/IP protocols, such as
IPX/SPX and DECnet. A prior standard known as Serial Line Internet Protocol (SLIP) has
been largely replaced by it.
PPP supports two main authentication protocols:
Password Authentication Protocol (PAP): A basic authentication scheme in which the
user's identity information, such as the user name and password, is transmitted in
plain text. Its disadvantage is that this simplicity makes it less secure.
Challenge Handshake Authentication Protocol (CHAP): CHAP is more secure, as it
periodically verifies the identity of the remote client using a three-way handshake.
The password is never sent in plain text, making it much more secure than PAP.
Which Protocol is Replaced by PPP?
PPP took the place of the Serial Line Internet Protocol (SLIP). SLIP was one of the
older protocols used to establish point-to-point communication over serial links, but
it had shortcomings: it did not support features like error detection and link
multiplexing. PPP improves on such conventional protocols with respect to error
detection, multiplexing, and authentication, and is therefore more appropriate for
modern internet connections.
PPP offers several essential services, including:
Data Encapsulation: It encapsulates network-layer protocols such as IP for transmission
over a serial link.
Link Control Protocol (LCP): It is used to establish, configure, and terminate the
data-link connection.
Network Control Protocols (NCP): PPP has provisions for supporting many NCPs for the
configuration of the network-layer protocols such as IP, IPX, and AppleTalk.
Authentication: Offers both PAP and CHAP for the purpose of authentication.
Error Detection: It also has methods of maintaining integrity of the data through error
checking techniques.
Channel Allocation Problem in Computer Network
For example, suppose that there is a classroom full of students. When a teacher asks a
question, all the students (small channels) in the class start answering at the same
time (transferring their data simultaneously). Because they all respond at once, the
answers overlap and data is lost. It is therefore the responsibility of the teacher
(the multiple access protocol) to manage the students so that only one answers at a time.
Following are the types of multiple access protocols, subdivided into different
processes:
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA
Aloha Rules
Pure Aloha
Pure Aloha is used whenever a station has data ready to send over the channel. In pure
Aloha, each station transmits its data without checking whether the channel is idle, so
collisions may occur and data frames can be lost.
After transmitting a data frame, the station waits for the receiver's acknowledgment.
If the acknowledgment does not arrive within the specified time, the station assumes
the frame has been lost or destroyed, waits for a random amount of time, called the
backoff time (Tb), and then retransmits the frame. It repeats this until the data is
successfully delivered to the receiver.
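The transmit / wait-for-ACK / back-off loop above can be sketched as follows; the callback names and exponential backoff range are illustrative assumptions:

```python
import random

def pure_aloha_send(send_frame, got_ack, max_tries=10, seed=0):
    """Transmit, then wait for an ACK; on timeout, back off a random Tb and retry."""
    rng = random.Random(seed)
    for attempt in range(max_tries):
        send_frame()
        if got_ack():
            return attempt + 1             # total transmissions used
        tb = rng.uniform(0, 2 ** attempt)  # random backoff time Tb ...
        # ... a real station would pause for tb seconds here before retrying
    return None                            # give up after max_tries attempts

acks = iter([False, False, True])          # ACK arrives only on the third try
print(pure_aloha_send(lambda: None, lambda: next(acks)))   # 3
```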
Slotted Aloha
Slotted Aloha is designed to improve on pure Aloha's efficiency, because pure Aloha has
a very high probability of frame collisions. In slotted Aloha, the shared channel is
divided into fixed time intervals called slots. A station that wants to send a frame
may transmit only at the beginning of a slot, and only one frame may be sent in each
slot. If a station misses the beginning of a slot, it must wait for the beginning of
the next slot.
However, a collision is still possible when two or more stations try to send a frame at
the beginning of the same time slot.
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared
channel; if the channel is idle, it immediately sends the data. Otherwise, it keeps
monitoring the channel and transmits the frame unconditionally as soon as the channel
becomes idle.
Non-Persistent: In this access mode of CSMA, each node must sense the channel before
transmitting; if the channel is idle, it immediately sends the data. Otherwise, the
station waits for a random time (it does not sense continuously), then senses the
channel again and transmits the frame if it is found idle.
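The two persistence strategies can be contrasted with a small sketch, modelling the channel as a function of time (names and tick granularity are illustrative):

```python
import random

def one_persistent(sense_idle, t=0):
    """Sense continuously; transmit the instant the channel goes idle."""
    while not sense_idle(t):
        t += 1                      # keep sensing every tick
    return t                        # time at which the frame is sent

def non_persistent(sense_idle, t=0, seed=1):
    """If busy, wait a random time before sensing again (no continuous sensing)."""
    rng = random.Random(seed)
    while not sense_idle(t):
        t += rng.randint(1, 5)      # random wait, then re-sense
    return t

idle_from_3 = lambda t: t >= 3      # channel busy until tick 3
print(one_persistent(idle_from_3))   # 3 -> transmits the moment the channel frees
print(non_persistent(idle_from_3))   # >= 3, depending on the random waits
```

The difference is visible in the return values: 1-persistent always catches the exact moment the channel frees (risking collisions with other waiting stations), while non-persistent may overshoot it.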
CSMA/CD
It is a carrier sense multiple access / collision detection network protocol for
transmitting data frames. The CSMA/CD protocol works at the medium access control
layer. A station first senses the shared channel before broadcasting a frame; if the
channel is idle, it transmits the frame while monitoring whether the transmission
succeeds. If the frame is received successfully, the station can send the next frame.
If a collision is detected, the station sends a jam (stop) signal onto the shared
channel to terminate the transmission, then waits a random time before sending the
frame again.
CSMA/CA
It is a carrier sense multiple access / collision avoidance network protocol for the
transmission of data frames. It also works at the medium access control layer. When a
station sends a data frame onto the channel, it listens to the channel to check whether
the transmission is clear. If the station hears only a single signal (its own), the
data frame has been successfully transmitted to the receiver. But if it hears two
signals (its own and another station's), the frames have collided on the shared
channel. The sender thus detects a collision from the signal it receives back.
Following are the methods used in CSMA/CA to avoid collisions:
Interframe space: In this method, the station waits for the channel to become idle, and
when it finds the channel idle, it does not send the data immediately. Instead, it
waits for a period of time called the interframe space (IFS). The IFS duration is often
used to define the priority of the station.
Contention window: In the contention window method, the total time is divided into
slots. When the station/sender is ready to transmit a data frame, it chooses a random
number of slots as its wait time. If the channel becomes busy again, it does not
restart the entire process; it merely pauses the timer and resumes it when the channel
becomes idle.
Acknowledgment: In the acknowledgment method, the sender retransmits the data frame on
the shared channel if an acknowledgment is not received within the expected time.
Reservation
In the reservation method, a station needs to make a reservation before sending data.
The timeline has two kinds of periods:
o Reservation interval of fixed time length
o Data transmission period of variable frames.
If there are M stations, the reservation interval is divided into M slots, and each station has
one slot.
Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1, and no other
station is allowed to transmit during this slot.
In general, the i-th station may announce that it has a frame to send by inserting a 1
bit into the i-th slot. After all M slots have been checked, each station knows which
stations wish to transmit.
The stations which have reserved their slots transfer their frames in that order.
After the data transmission period, the next reservation interval begins.
Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five-slot reservation frame. In
the first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only
station 1 has made a reservation.
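The reservation interval can be modelled as a bitmap; the order of the 1 bits fixes the collision-free transmission order, matching the example above:

```python
def reservation_order(bitmap):
    """bitmap[i] == 1 means station i+1 set its bit in slot i+1 of the interval."""
    return [i + 1 for i, b in enumerate(bitmap) if b]

# First interval from the example: stations 1, 3 and 4 made reservations.
print(reservation_order([1, 0, 1, 1, 0]))   # [1, 3, 4]
# Second interval: only station 1 reserved.
print(reservation_order([1, 0, 0, 0, 0]))   # [1]
```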
Advantages of Reservation
The main advantage of reservation is that access times and data rates on the channel
are predictable, since both are fixed.
Priorities can be set to provide faster access for some secondary stations.
Reservation-based access methods can provide predictable network performance, which is
important in applications where latency and jitter must be minimized, such as in real-time
video or audio streaming.
Reservation-based access methods can reduce contention for network resources, as access
to the network is pre-allocated based on reservation requests. This can improve network
efficiency and reduce packet loss.
Reservation-based access methods can support QoS requirements, by providing different
reservation types for different types of traffic, such as voice, video, or data. This can ensure
that high-priority traffic is given preferential treatment over lower-priority traffic.
Reservation-based access methods can enable more efficient use of available bandwidth, as
they allow for time and frequency multiplexing of different reservation requests on the
same channel.
Reservation-based access methods are well-suited to support multimedia applications that
require guaranteed network resources, such as bandwidth and latency, to ensure high-
quality performance.
Disadvantages of Reservation
It relies heavily on the reliability of the controller.
Under light loads, the capacity and channel data rate decrease, and the turnaround time
increases.
2. Polling
Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
In this, one acts as a primary station(controller) and the others are secondary stations. All
data exchanges must be made through the controller.
The message sent by the controller contains the address of the node being selected for
granting access.
Although all nodes receive the message the addressed one responds to it and sends data if
any. If there is no data, usually a “poll reject”(NAK) message is sent back.
Problems include high overhead of the polling messages and high dependence on the
reliability of the controller.
Advantages of Polling
The maximum and minimum access times and data rates on the channel are fixed and
predictable.
It has maximum efficiency.
It has maximum bandwidth.
No slot is wasted in polling.
Priorities can be assigned to ensure faster access for some secondary stations.
Disadvantages of Polling
It consumes more time.
Since every station gets an equal chance in every round, link sharing can be biased.
Some stations might run out of data to send while still being polled.
An increase in turnaround time leads to a drop in the channel's data rates under low
loads.
Efficiency: Let Tpoll be the time for polling and Tt the time required for transmission
of data. Then,
Efficiency = Tt/(Tt + Tpoll)
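The efficiency formula above in code, with illustrative numbers:

```python
def polling_efficiency(t_t, t_poll):
    """Fraction of channel time spent transmitting data rather than polling."""
    return t_t / (t_t + t_poll)

print(polling_efficiency(8.0, 2.0))   # 0.8 -> 80% of the channel time carries data
```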
3. Token Passing
In token passing scheme, the stations are connected logically to each other in form of ring
and access to stations is governed by tokens.
A token is a special bit pattern or a small message, which circulate from one station to the
next in some predefined order.
In a token ring, the token is passed from one station to the adjacent station in the
ring, whereas in a token bus, each station uses the bus to send the token to the next
station in some predefined order.
In both cases, the token represents permission to send. If a station has a frame queued
for transmission when it receives the token, it can send that frame before passing the
token to the next station. If it has no queued frame, it simply passes the token along.
After sending a frame, each station must wait for all N stations (including itself) to send the
token to their neighbours and the other N – 1 stations to send a frame, if they have one.
There exists problems like duplication of token or token is lost or insertion of new station,
removal of a station, which need be tackled for correct and reliable operation of this
scheme.
The performance of a token ring can be described by two parameters:
Delay, a measure of the time between when a packet is ready and when it is delivered;
the average time (delay) required to send a token to the next station is a/N.
Throughput, which is a measure of successful traffic.
Throughput, S = 1/(1 + a/N) for a<1
and
S = 1/{a(1 + 1/N)} for a>1.
where N = number of stations
a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay)
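The two throughput formulas above can be evaluated directly (sample values of a and N chosen for illustration):

```python
def token_ring_throughput(a, n):
    """Throughput S from the formulas above, with a = Tp/Tt and n stations."""
    if a < 1:
        return 1 / (1 + a / n)          # S = 1/(1 + a/N) for a < 1
    return 1 / (a * (1 + 1 / n))        # S = 1/{a(1 + 1/N)} for a > 1

print(token_ring_throughput(0.5, 10))   # a < 1 case
print(token_ring_throughput(2.0, 10))   # a > 1 case
```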
Advantages of Token passing
It can be used with routed cabling and includes built-in troubleshooting features such
as protective relay and auto-reconfiguration.
It provides good throughput under high-load conditions.
Disadvantages of Token passing
It is expensive to implement.
Its topology components cost more than those of other, more widely used standards.
The hardware elements of a token ring are complex, which implies that you should choose
one manufacturer and use its equipment exclusively.
C. Channelization Protocols
Channelization is a multiple-access method in which the total usable bandwidth of a
shared channel is divided among multiple stations by time, frequency, or code. All
stations can access the channel at the same time to send their data frames.
Following are the various methods of accessing the channel based on time, frequency,
and code:
FDMA
Frequency Division Multiple Access (FDMA) divides the available bandwidth into equal
bands so that multiple users can send data simultaneously, each through its own
frequency subchannel. Each station is reserved a particular band to prevent crosstalk
between the channels and interference between stations.
TDMA
Time Division Multiple Access (TDMA) is a channel access method that allows the same
frequency bandwidth to be shared across multiple stations. To avoid collisions on the
shared channel, it divides the channel into time slots and allocates one slot to each
station for transmitting its data frames. The stations thus share the same frequency
bandwidth by dividing the signal into various time slots. However, TDMA has a
synchronization overhead: synchronization bits are added to each slot so that every
station knows its own time slot.
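TDMA's fixed slot allocation can be sketched as follows; equal-length slots and zero-based station numbering are illustrative assumptions:

```python
def tdma_slot(station, num_stations, frame_time):
    """Return the (start, end) of a station's time slot within one TDMA frame."""
    slot_len = frame_time / num_stations
    return station * slot_len, (station + 1) * slot_len

# 4 stations sharing an 8 ms frame: station 2 owns the interval [4 ms, 6 ms)
print(tdma_slot(2, 4, 8.0))   # (4.0, 6.0)
```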
CDMA
The code division multiple access (CDMA) is a channel access method. In CDMA, all
stations can simultaneously send the data over the same channel. It means that it allows
each station to transmit the data frames with full frequency on the shared channel at all
times. It does not require the division of bandwidth on a shared channel based on time
slots. If multiple stations send data to a channel simultaneously, their data frames are
separated by a unique code sequence. Each station has a different unique code for
transmitting the data over a shared channel. For example, imagine multiple users in a
room who are all speaking at the same time. A listener can understand a speaker only
when the two interact using the same language. Similarly, in the network, different
stations can communicate with each other simultaneously, each pair distinguished by its
own code language.
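The code-language analogy can be made concrete with orthogonal chip sequences (4-chip Walsh codes and station names chosen for illustration): each station spreads its bit with its own code, the signals add on the shared channel, and each receiver recovers its bit by correlating with the matching code.

```python
# Illustrative 4-chip Walsh codes; bits are sent as +1 (for 1) or -1 (for 0).
CODES = {"A": [1, 1, 1, 1], "B": [1, -1, 1, -1]}

def cdma_transmit(bits):
    """All stations transmit at once; their chip signals add on the channel."""
    channel = [0, 0, 0, 0]
    for station, bit in bits.items():
        for i, chip in enumerate(CODES[station]):
            channel[i] += bit * chip
    return channel

def cdma_receive(channel, station):
    """Correlate with the station's code; the sign of the result is the bit."""
    inner = sum(c * x for c, x in zip(CODES[station], channel))
    return 1 if inner > 0 else -1

channel = cdma_transmit({"A": 1, "B": -1})   # A sends 1, B sends 0, simultaneously
print(channel)                     # [0, 2, 0, 2] -- the superposed signal
print(cdma_receive(channel, "A"))  # 1  -> A's bit recovered
print(cdma_receive(channel, "B"))  # -1 -> B's bit recovered
```

Because the two codes are orthogonal (their inner product is zero), each receiver's correlation cancels the other station's contribution, which is why no time or frequency division is needed.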