Computer Networking Unit -2

Data Link Layer (Unit-2)

Error Detection and Error Correction in Data Link Layer


Introduction: In the Data Link Layer, data is transferred between nodes in a
network. During transmission, errors can occur due to noise, signal distortion, or
other factors. Error detection and correction techniques ensure that errors are
identified and corrected to maintain data integrity.
The Data Link Layer of the OSI model is responsible for the reliable transmission of
data between adjacent network nodes. It ensures that the data received is free from
transmission errors. To achieve this, it employs two mechanisms:
Error Detection
Error Correction
Note->The data link layer receives information in the form of packets from the Network layer. It divides the packets into frames and sends those frames bit-by-bit to the underlying physical layer.
In the data link layer, data is sent in the form of "data frames", which contain the actual data packet along with additional information such as source and destination addresses, allowing for reliable transmission between devices on the same network.
An Ethernet frame can carry up to 1,500 bytes of payload, and the payload must be at least 46 bytes, which gives a minimum frame size of 64 bytes once the headers and trailer are included. The minimum frame size is necessary to ensure that the frame is long enough for collisions to be detected.

Types Of Error
In the Data Link Layer, errors occur when bits are corrupted during transmission
between two devices. The types of errors in the Data Link Layer can be classified
mainly into three categories:
1. Single-bit Error:
Definition: A single-bit error occurs when only one bit in the data unit is altered
from 1 to 0 or from 0 to 1.
Cause: This can be caused by noise or some external disturbance during
transmission.
Example:
Original Data: 11010101
Received Data: 11011101
Here, the 5th bit from the left (the 4th from the right) has been altered from 0 to 1, resulting in a single-bit error.
Single-bit errors are usually rare in systems that use modern transmission methods
but can happen due to noise or interference in the communication channel. Error
detection techniques, like parity checks, are often used to detect such errors and
ensure data integrity.
2. Burst Error:
Definition: A burst error occurs when two or more consecutive bits in the data unit
are changed.
Characteristics:
Burst errors are more common than single-bit errors.
The length of the burst error is determined by the number of bits that have been
altered from the first changed bit to the last changed bit.
Example:
Original Data: 1011001101
Received Data: 1011110001
In this case, the 5th, 6th, 7th, and 8th bits (counting from the left) have been altered, resulting in a burst error of length 4.

3. Multiple-bit Errors:
Definition: A multiple-bit error affects two or more bits, typically at non-consecutive positions. For example, in an 8-bit sequence, if the original data is 10101010 and it is received as 11100010, the 2nd and 5th bits have been changed.

Error Representation in Frames:


When data is transmitted, it is sent in frames. Errors that affect frames can also be
described using two categories:
Random Errors: These occur independently at random positions in the frame and
usually involve single-bit errors.
Correlated Errors: When a series of bits are affected simultaneously (such as in
burst errors), these errors are considered correlated.
----------------------------------------------------------------------------------------

LRC (Longitudinal Redundancy Check) - Longitudinal Redundancy Check (LRC) is a data communications technique that detects errors during data transmission. It works by adding a check character to the data being transmitted and then comparing the check character computed at the sending end with the one computed at the receiving end. If the characters match, the transmission was successful.

OR
Longitudinal Redundancy Check (LRC) is also known as 2-D parity check. In this method, the data the user wants to send is organized into a table of rows and columns: a block of bits is divided into a matrix of rows and columns. In order to detect errors, a redundant row of parity bits is added to the whole block, and this block is transmitted to the receiver. The receiver uses this redundant row to detect errors. After checking the data for errors, the receiver accepts the data and discards the redundant row of bits.
How LRC Works:
Data Block Formation: The data to be transmitted is divided into rows of equal-
length bits.
Parity Calculation: For each column of the data block (taking one bit from each
row), a parity bit is calculated (usually even or odd parity).
Parity Byte Generation: The calculated parity bits from each column are combined
to form a parity byte or LRC byte.
Transmission: Both the data block and the LRC byte are sent to the receiver.
Error Detection at Receiver: The receiver recalculates the LRC from the received
data block and compares it with the received LRC byte. If they match, no error is
detected. If they differ, an error has occurred during transmission.
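To make the column-parity procedure above concrete, here is a minimal Python sketch (illustrative only, not from the original text; the function names are made up) that computes an even-parity LRC byte for a block of 8-bit rows and verifies it at the receiver:

```python
def lrc_byte(rows, width=8):
    """Compute an even-parity LRC: one parity bit per column of the block."""
    parity = 0
    for row in rows:
        parity ^= row                     # XOR accumulates even parity column by column
    return parity & ((1 << width) - 1)

def lrc_check(rows, received_lrc, width=8):
    """Receiver side: recompute the LRC and compare it with the received one."""
    return lrc_byte(rows, width) == received_lrc

# Example: four 8-bit rows (32 data bits) plus one 8-bit LRC row
block = [0b11010101, 0b00111001, 0b10101010, 0b01010101]
lrc = lrc_byte(block)
print(f"LRC byte: {lrc:08b}")
print("No error detected" if lrc_check(block, lrc) else "Error detected")
```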
Example :
If a block of 32 bits is to be transmitted, it is divided into a matrix of four rows and eight columns, as shown in the following figure:

In this matrix of bits, a parity bit (odd or even) is calculated for each column. This means that the 32 data bits plus 8 redundant bits are transmitted to the receiver. When the data reaches the destination, the receiver uses the LRC to detect errors in the data.

Advantage :
LRC is used to detect burst errors.

Example : Suppose the 32-bit data plus LRC that was being transmitted is hit by a burst error of length 5 and some bits are corrupted, as shown in the following figure. The LRC recomputed by the destination from the corrupted data does not match the LRC it received, so the destination knows that the data is erroneous and discards it.

Disadvantage :
The main problem with LRC is that it is not able to detect an error if two bits in one data unit are damaged and two bits in exactly the same positions in another data unit are also damaged.
Example : If the data 110011 010101 is changed to 010010 110100.

In this example, the 1st and 6th bits in the first data unit have changed, and the 1st and 6th bits in the second data unit have also changed, so every column parity remains the same and the error goes undetected.

VRC (Vertical Redundancy Check)-
Vertical Redundancy Check is also known as Parity Check. In this method, a redundant bit, also called a parity bit, is added to each data unit. This method includes even parity and odd parity: even parity means the total number of 1s in the data (including the parity bit) must be even, and odd parity means it must be odd. Example – Suppose the source wants to transmit the data unit 1100111 to the destination using even parity. The data unit first passes through an even-parity generator.

The parity generator counts the number of 1s in the data unit and appends a parity bit. In the above example, the number of 1s in the data unit is 5, so the parity generator appends a parity bit of 1, making the total number of 1s even, i.e. 6, as is clear from the above figure. The data along with the parity bit is then transmitted across the network; in this case, 11001111 is transmitted. At the destination, the data is passed to a parity checker, which counts the number of 1s. If the count turns out to be odd, e.g. 5 or 7, the destination knows that there is some error in the data, and the receiver rejects the erroneous data unit.
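The even-parity generation and checking just described can be sketched in a few lines of Python (an illustrative example, not part of the source text):

```python
def add_even_parity(bits: str) -> str:
    """Append a parity bit so that the total number of 1s is even (VRC, even parity)."""
    parity = bits.count("1") % 2
    return bits + str(parity)

def check_even_parity(codeword: str) -> bool:
    """Receiver side: the number of 1s must still be even for the data to be accepted."""
    return codeword.count("1") % 2 == 0

sent = add_even_parity("1100111")               # -> "11001111"
print(sent, check_even_parity(sent))            # parity holds: no error detected
corrupted = "01011111"                          # two bits flipped in transit
print(corrupted, check_even_parity(corrupted))  # parity still even: error goes undetected
```

The last line illustrates the even-bit-error weakness discussed under the disadvantages below.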

Advantages :
VRC can detect all single-bit errors.
It can also detect burst errors, but only in those cases where the number of bits changed
is odd, i.e. 1, 3, 5, 7, etc.
VRC is simple to implement and can be easily incorporated into different
communication protocols and systems.
It is efficient in terms of computational complexity and memory requirements.
VRC can help improve the reliability of data transmission and reduce the likelihood
of data corruption or loss due to errors.
VRC can be combined with other error detection and correction techniques to
improve the overall error handling capabilities of a system.
Disadvantages :

The major disadvantage of using this method for error detection is that it is not able
to detect a burst error if the number of bits changed is even, i.e. 2, 4, 6, 8, etc.
Example – Suppose the original data is 1100111. After adding the VRC bit, the data unit that is transmitted is 11001111. Suppose 2 bits are changed on the way and the unit arrives as 01011111. When this data reaches the destination, the parity checker counts the number of 1s, which comes out to be even, i.e. 6. So in this case the parity has not changed; it is still even, and the destination will assume that there is no error in the data even though the data is erroneous.
VRC is not capable of correcting errors, only detecting them. This means that it can
identify errors, but it cannot fix them.
VRC is not suitable for applications that require high levels of error detection and
correction, such as mission-critical systems or safety-critical applications.
VRC is limited in its ability to detect and correct errors in large blocks of data, as the
probability of errors increases with the size of the data block.
VRC requires additional overhead bits to be added to the data stream, which can
increase the bandwidth and storage requirements of the system.
-------------------------------------------------------------------------------------
Error Detection Code – Checksum
Checksum is an error detection method used by upper layer protocols and is considered to be more reliable than LRC and VRC. This method makes use of a Checksum Generator on the sender side and a Checksum Checker on the receiver side.
At the sender side, the data is divided into equal subunits of n bits each by the checksum generator; n is generally 16. These subunits are then added together using one's complement arithmetic, giving a sum of n bits. The sum is then complemented. This complemented sum, called the checksum, is appended to the end of the original data unit and transmitted to the receiver.

The receiver, after receiving data + checksum, passes it to the checksum checker.

The checksum checker divides this data unit into subunits of equal length and adds all of them, with the received checksum included as one of the subunits. The resulting sum is then complemented. If the complemented result is zero, the data is error-free; if it is non-zero, the data contains an error and the receiver rejects it.

Example – If the data unit to be transmitted is 10101001 00111001, the following procedure is used at the sender site and the receiver site.
Sender Site
10101001 subunit 1
00111001 subunit 2
11100010 sum (using 1s complement)
00011101 checksum (complement of sum)

Receiver Site :
10101001 subunit 1
00111001 subunit 2
00011101 checksum
11111111 sum
00000000 sum's complement

Result is zero, it means no error.
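The same steps can be written as a short Python sketch of an 8-bit one's-complement checksum, matching the worked numbers above (illustrative only; Internet protocols such as TCP/IP normally use 16-bit subunits):

```python
def ones_complement_sum(words, bits=8):
    """Add the subunits using one's complement arithmetic (end-around carry)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # wrap any carry back into the sum
    return total

def make_checksum(words, bits=8):
    """The checksum is the complement of the one's complement sum."""
    return ones_complement_sum(words, bits) ^ ((1 << bits) - 1)

subunits = [0b10101001, 0b00111001]
checksum = make_checksum(subunits)                         # 0b00011101
print(f"checksum: {checksum:08b}")
# Receiver: summing data + checksum and complementing gives zero when error-free
print(ones_complement_sum(subunits + [checksum]) ^ 0b11111111)   # 0 -> no error
```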


Advantages of Checksum
The checksum detects all errors involving an odd number of bits as well as most errors involving an even number of bits.

Disadvantages of Checksum
The main problem is that an error goes undetected if one or more bits of a subunit are damaged and the corresponding bit or bits of opposite value in another subunit are also damaged. This is because the sum of those columns remains unchanged.

Cyclic Redundancy Check and Modulo-2 Division


CRC or Cyclic Redundancy Check is a method of detecting accidental changes/errors
in the communication channel.
CRC uses Generator Polynomial which is available on both sender and receiver side.
An example generator polynomial is of the form x^3 + x + 1. This generator polynomial represents the key 1011. Another example is x^2 + 1, which represents the key 101.

n : the number of bits in the data to be sent from the sender side.
k : the number of bits in the key obtained from the generator polynomial.
Sender Side (Generation of Encoded Data from Data and Generator Polynomial
(or Key)):

The binary data is first augmented by appending k-1 zeros to the end of the data.
Use modulo-2 binary division to divide the augmented data by the key and store the remainder of the division.
Append the remainder to the end of the original data to form the encoded data (codeword) and send it.
Receiver Side (Check if there are errors introduced in transmission)
Perform modulo-2 division again and if the remainder is 0, then there are no
errors.
Data word to be sent - 100100
Key - 1101 [or generator polynomial x^3 + x^2 + 1]
Sender Side:
Append k-1 = 3 zeros to the data word, giving 100100000, and divide it by 1101 using modulo-2 division.
Therefore, the remainder is 001 and hence the encoded data sent is 100100001.

Receiver Side:
The codeword received at the receiver side is 100100001. Dividing it by 1101 gives a remainder of all zeros; hence, the data received has no error.
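The modulo-2 division used above can be sketched in Python with simple string arithmetic; this is an illustrative implementation that reproduces the 100100 / 1101 example:

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    remainder = list(dividend[:len(divisor)])
    for i in range(len(divisor), len(dividend) + 1):
        if remainder[0] == "1":                                   # divide only when the leading bit is 1
            remainder = [str(int(a) ^ int(b)) for a, b in zip(remainder, divisor)]
        if i < len(dividend):
            remainder = remainder[1:] + [dividend[i]]             # drop leading bit, bring down the next
        else:
            remainder = remainder[1:]
    return "".join(remainder)

def crc_encode(data: str, key: str) -> str:
    """Append k-1 zeros, divide by the key, and attach the remainder to the data."""
    remainder = mod2_div(data + "0" * (len(key) - 1), key)
    return data + remainder

codeword = crc_encode("100100", "1101")
print(codeword)                          # 100100001
print(mod2_div(codeword, "1101"))        # 000 -> no error detected at the receiver
```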
Advantages
Efficient error detection: CRC is highly efficient at detecting errors, including single-
bit errors, burst errors, and some types of multiple-bit errors.
Simplicity: CRC is easy to implement, especially in binary hardware, because it
involves straightforward bitwise operations. It's also easy to analyze
mathematically.

Disadvantages
Limited error correction: CRC is good at detecting errors, but it can't correct them.
If an error is detected, the data, command, or response needs to be retransmitted.
Possibility of undetected errors: In rare situations, errors may remain undetected
even with CRC.

Hamming Code in Computer Network


Hamming code detects and corrects the errors that can occur when the data is
moved or stored from the sender to the receiver. This simple and effective method
helps improve the reliability of communication systems and digital storage. It adds
extra bits to the original data, allowing the system to detect and correct single-bit
errors. It is a technique developed by Richard Hamming in the 1950s.
What are Redundant Bits?
Redundant bits are extra binary bits that are generated and added to the
information-carrying bits of data transfer to ensure that no bits were lost during the
data transfer. The number of redundant bits can be calculated using the following
formula:

2^r ≥ m + r + 1
where m is the number of bits in input data, and r is the number of redundant
bits.
This formula ensures that the code can detect and correct single-bit errors in the
transmitted message.
Example:
Let’s say you want to transmit 4 bits of data: 1011. Here m = 4, and the smallest r satisfying 2^r ≥ m + r + 1 is r = 3 (since 2^3 = 8 ≥ 4 + 3 + 1 = 8), so three redundant bits are added to form a 7-bit codeword.
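A short Python sketch of the classic Hamming(7,4) arrangement for this 4-bit example, with parity bits at positions 1, 2 and 4 and even parity assumed (the layout and function names are illustrative, not taken from the source):

```python
def hamming74_encode(d: str) -> str:
    """Encode 4 data bits d1..d4 into the 7-bit codeword p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = (int(b) for b in d)
    p1 = d1 ^ d2 ^ d4                    # covers codeword positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4                    # covers codeword positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4                    # covers codeword positions 5, 6, 7
    return f"{p1}{p2}{d1}{p3}{d2}{d3}{d4}"

def hamming74_correct(code: str) -> str:
    """Locate and flip a single erroneous bit using the parity-check syndrome."""
    bits = [int(b) for b in code]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    pos = s1 + 2 * s2 + 4 * s3           # syndrome = 1-based position of the error, 0 if none
    if pos:
        bits[pos - 1] ^= 1
    return "".join(map(str, bits))

code = hamming74_encode("1011")                           # -> 0110011
corrupted = code[:2] + str(1 - int(code[2])) + code[3:]   # flip the 3rd bit
print(code, corrupted, hamming74_correct(corrupted))      # the error is corrected
```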
---------------------------------------------------------------------
Stop And Wait Protocol
Stop-and-wait ARQ, commonly known as the alternating bit protocol, refers to a
communication technique used to transmit data between two linked devices. It
makes sure that packets are received in the right order, and that data is not lost as
a result of dropped packets.
To understand the stop and wait protocol, we must first consider the error control mechanism. The error control technique is used to ensure that the received data is identical to the data sent by the sender. The two types of error control mechanisms are Sliding Window and Stop and Wait ARQ. The sliding window is further divided into the Go-Back-N and Selective Repeat categories.
Stop and Wait Protocol
Stop and wait means that the sender transmits a unit of data that the receiver expects, then pauses and waits for the receiver to acknowledge the transmission before sending anything else. The stop and wait protocol is a flow control protocol and uses the data link layer’s flow control functionality.
It is a DLL (data link layer) protocol that is used to send data over channels with no background noise. It offers unidirectional data transfer, which means that only one of the two operations, sending or receiving data, can take place at a time. The idea behind the protocol is that after sending one frame, the sender waits for an acknowledgement before sending the next one.
Primitives of Stop and Wait Protocol
Sender’s Side
Rule 1: The sender sends one data packet at a time.
Rule 2: The sender only sends the subsequent packet after getting the preceding
packet’s acknowledgement.
Receiver’s Side
Rule 1: Receive the data packet, then consume it.

Rule 2: The receiver provides the sender with an acknowledgement after consuming the data packet.
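The sender and receiver rules above can be illustrated with a small Python simulation (the lossy channel, loss probability and retry behaviour are assumptions made for illustration, not part of the original text):

```python
import random

LOSS_PROBABILITY = 0.3          # assumed chance that a frame or an ACK is lost

def unreliable_send(message):
    """Simulated channel: returns the message, or None if it was 'lost'."""
    return None if random.random() < LOSS_PROBABILITY else message

def stop_and_wait(frames):
    for seq, data in enumerate(frames):
        while True:                                    # keep resending until acknowledged
            delivered = unreliable_send((seq, data))   # Rule 1: send one packet at a time
            if delivered is None:
                print(f"frame {seq} lost, timeout, retransmitting")
                continue
            ack = unreliable_send(seq)                 # receiver consumes and acknowledges
            if ack is None:
                print(f"ACK {seq} lost, timeout, retransmitting")
                continue
            print(f"frame {seq} acknowledged")
            break                                      # Rule 2: only now send the next packet

stop_and_wait(["A", "B", "C"])
```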
Disadvantages of Stop and Wait Protocol
1. Problems arise because of lost data

Let’s say the sender sends the data, but it gets lost in transit. The receiver is
patiently awaiting the data packet. The receiver does not send an
acknowledgement because it does not receive the data. The sender won’t
send the subsequent packet because it has not received any
acknowledgement. The lost data is the root cause of this issue.
In this instance, there are two problems:
The sender waits an endless length of time for a response.
The receiver waits indefinitely for data.
2. Problems arise as a result of the lost acknowledgement

Suppose the receiver gets the data and sends an acknowledgement, but the acknowledgement is lost in transit. The sender’s timer expires and it retransmits the same packet, so the receiver may end up accepting a duplicate.

3. Problem resulting from delayed data or acknowledgement

If the data or the acknowledgement arrives only after the sender’s timer has expired, the sender retransmits, and the late acknowledgement may wrongly be taken as the acknowledgement of the retransmitted (or of a later) packet.

Sliding Window Protocol


The sliding window is a technique for sending multiple frames at a time. It
controls the data packets between the two devices where reliable and
gradual delivery of data frames is needed. It is also used in TCP (Transmission
Control Protocol).

In this technique, each frame is assigned a sequence number before it is sent. The sequence numbers are used to identify missing data at the receiver end. The sliding window technique also uses the sequence numbers to avoid accepting duplicate data.

Types of Sliding Window Protocol


Sliding window protocol has two types:

• Go-Back-N ARQ
• Selective Repeat ARQ
Go-Back-N ARQ

Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat Request. It is a data link layer protocol that uses the sliding window method. In this protocol, if any frame is corrupted or lost, all subsequent frames have to be sent again.

Go Back N (GBN) Protocol

The Go-Back-N (GBN) protocol is a sliding window protocol used in networking for
reliable data transmission. It is part of the Automatic Repeat request (ARQ)
protocols, which ensure that data is correctly received and that any lost or
corrupted packets are retransmitted.

The three main characteristic features of GBN are:

1. Sender Window Size (Ws) - It is N itself. If we say the protocol is GB10, then
Ws = 10. N should always be greater than 1 in order to implement pipelining.

2. Receiver Window Size (WR) - WR is always 1 in GBN.
What exactly happens in GBN is best explained with an example. Consider the diagram given below, with a sender window size of 4. Assume that we have plenty of sequence numbers just for the sake of explanation. The sender has sent packets 0, 1, 2 and 3. After acknowledging packets 0 and 1, the receiver is now expecting packet 2, and the sender window has slid forward so that packets 4 and 5 can also be transmitted. Now suppose packet 2 is lost in the network. The receiver will discard all the packets the sender transmitted after packet 2, because it is still expecting sequence number 2.

On the sender side, every transmitted packet has a timeout timer, and the timer for packet 2 will expire. Starting from the last transmitted packet (5), the sender will go back to packet 2 in the current window and retransmit all the packets up to packet 5. That is why it is called Go-Back-N: the sender has to go back N places from the last transmitted packet in the unacknowledged window, not from the point where the packet was lost. (A short sketch of this retransmission behaviour follows the acknowledgement types below.)

3. Acknowledgements
There are 2 kinds of acknowledgements, namely:
• Cumulative Ack: One acknowledgement is used for many packets. The main advantage is that traffic is lower; the disadvantage is lower reliability, because if that single acknowledgement is lost, the sender receives no confirmation for any of the packets it covered.
• Independent Ack: If every packet is acknowledged independently, reliability is higher, but the traffic is also higher, since every packet generates its own acknowledgement.
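A minimal Python sketch of the Go-Back-N sender behaviour described above, using cumulative acknowledgements (the window size, the set of lost frames and the assumption that a retransmission succeeds are all illustrative):

```python
def go_back_n(frames, window=4, lost=frozenset({2})):
    """Simulate one GBN run: on timeout, go back and resend from the oldest unACKed frame."""
    base = 0                                        # oldest unacknowledged frame
    while base < len(frames):
        end = min(base + window, len(frames))
        for seq in range(base, end):                # transmit everything inside the window
            print(f"send frame {seq}")
        delivered = base                            # receiver accepts frames strictly in order
        while delivered < end and delivered not in lost:
            delivered += 1
        if delivered == base:                       # nothing new accepted: timer expires
            print(f"timeout for frame {base}, going back to frame {base}")
            lost = lost - {base}                    # assume the retransmission gets through
        else:
            print(f"cumulative ACK up to frame {delivered - 1}")
            base = delivered                        # slide the window forward

go_back_n(list(range(6)))
```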
Advantages of GBN Protocol
Simple to implement and effective for reliable communication.
Better performance than stop-and-wait protocols for error-free or low-error
networks.
Disadvantages of GBN Protocol
Inefficient if errors are frequent, as multiple frames might need to be
retransmitted unnecessarily.
Bandwidth can be wasted due to redundant retransmissions.

Sliding Window Protocol | Set 3 (Selective Repeat)


The idea behind the Selective Repeat protocol is to allow the receiver to accept and buffer the frames that follow a damaged or lost one. Selective Repeat attempts to retransmit only those packets that are actually lost (due to errors):

Receiver must be able to accept packets out of order.


Retransmission requests :
Implicit – The receiver acknowledges every good packet; packets that are not ACKed before a timeout are assumed lost or in error. Note that this approach must be used to be sure that every packet is eventually received.
Explicit – An explicit NAK (selective reject) can request retransmission of just one packet. This approach can expedite the retransmission but is not strictly needed.
Sender’s Window (Ws) = Receiver’s Window (Wr).
The window size should be less than or equal to half the sequence-number space in the SR protocol. This is to avoid packets being recognized incorrectly: if the window is larger than half the sequence-number space and an ACK is lost, the sender may send new packets that the receiver believes are retransmissions.
The sender can transmit new packets as long as their sequence numbers are within Ws of all unACKed packets.
Sender retransmit un-ACKed packets after a timeout – Or upon a NAK if NAK is
employed.
Receiver ACKs all correct packets.
Receiver stores correct packets until they can be delivered in order to the higher
layer.
In Selective Repeat ARQ, the size of the sender and receiver window must be at
most one-half of 2^m.
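For example, with m = 3 sequence-number bits there are 2^3 = 8 sequence numbers, so the Selective Repeat window may be at most 4, while Go-Back-N may use a window of up to 7. The small illustrative check below makes the rule concrete:

```python
def max_sr_window(m: int) -> int:
    """Selective Repeat: window size is at most half of the sequence-number space."""
    return 2 ** m // 2

def max_gbn_window(m: int) -> int:
    """Go-Back-N, for comparison: window size is at most 2^m - 1."""
    return 2 ** m - 1

for m in (2, 3, 4):
    print(f"m={m}: SR window <= {max_sr_window(m)}, GBN window <= {max_gbn_window(m)}")
```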

Channel Allocation Protocol


A random-access protocol is a communication protocol that allows
multiple devices to share a medium or communication channel, transmitting
data at random or in an arbitrary order. In a random-access protocol, all
devices have equal priority, and there is no set timing or predetermined
sequence for data transmission.

Some examples of random-access protocols include:

ALOHA: An early random-access protocol that was first used in the 1970s
CSMA/CD: Carrier Sense Multiple Access with Collision Detection
CSMA/CA: Carrier Sense Multiple Access with Collision Avoidance
(TDMA and FDMA, covered later in this unit, divide the channel by time or frequency and are channelization techniques rather than random-access protocols.)
Random access protocols can lead to collisions when more than one station
tries to transmit data at the same time.
Key Characteristics
Decentralized Control: No central authority dictates when a device should
transmit.
Simplicity: Easier to implement compared to controlled access methods.
Flexibility: Suitable for networks where traffic is unpredictable.
Collision Potential: Higher chance of collisions, especially under heavy load.

1. ALOHA Protocol
ALOHA is one of the earliest random-access protocols developed for wireless
communication.

How It Works:

Pure ALOHA:
Devices transmit whenever they have data to send.
After sending, the device listens for an acknowledgment.
If no acknowledgment is received within a certain timeframe, it assumes a
collision occurred and retransmits after a random delay.
Slotted ALOHA:
Time is divided into discrete slots.
Devices can only transmit at the beginning of these time slots.
Reduces the chance of collision compared to Pure ALOHA.
Throughput:

Pure ALOHA: Maximum throughput ≈ 18.4%
Slotted ALOHA: Maximum throughput ≈ 36.8%
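These maxima follow from the standard throughput formulas S = G·e^(−2G) for Pure ALOHA and S = G·e^(−G) for Slotted ALOHA, where G is the average number of transmission attempts per frame time; a quick Python check reproduces the figures:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    return G * math.exp(-2 * G)          # maximised at G = 0.5

def slotted_aloha_throughput(G: float) -> float:
    return G * math.exp(-G)              # maximised at G = 1

print(f"Pure ALOHA maximum:    {pure_aloha_throughput(0.5):.3f}")    # ~0.184 (18.4%)
print(f"Slotted ALOHA maximum: {slotted_aloha_throughput(1.0):.3f}") # ~0.368 (36.8%)
```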
Example:
Imagine a group of friends trying to call each other on a single telephone line
without coordination. If two people try to speak simultaneously, their messages
collide, and both must retry after waiting for a random time.
2. Carrier Sense Multiple Access (CSMA)
CSMA improves upon ALOHA by introducing a listening mechanism where a device
checks if the channel is free before transmitting.

Variants of CSMA:
a. CSMA with Collision Detection (CSMA/CD)
Used In: Ethernet networks.
How It Works:
Carrier Sensing: Before transmitting, a device listens to check if the channel is free.
Transmission: If the channel is free, the device starts transmitting.
Collision Detection: While transmitting, the device continues to listen. If a collision
is detected (e.g., voltage levels indicate multiple signals), it stops transmitting.
Backoff Algorithm: After a collision, devices wait for a random time before
attempting to retransmit.
Example:
In an Ethernet network, if two computers attempt to send data simultaneously, the
CSMA/CD protocol detects the collision, stops both transmissions, and schedules
retransmission after random delays.
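The "wait for a random time" step is typically the binary exponential backoff algorithm; the sketch below is an illustrative Python version (the slot time of classic 10 Mb/s Ethernet and the 16-attempt limit are assumptions about the configuration, not taken from the source text):

```python
import random

SLOT_TIME_US = 51.2        # assumed slot time for classic 10 Mb/s Ethernet
MAX_ATTEMPTS = 16          # after 16 collisions the frame is given up on

def backoff_delay(collisions: int) -> float:
    """Binary exponential backoff: wait k slot times with k chosen from [0, 2^n - 1]."""
    if collisions >= MAX_ATTEMPTS:
        raise RuntimeError("too many collisions, giving up on the frame")
    n = min(collisions, 10)               # the exponent is capped at 10
    k = random.randint(0, 2 ** n - 1)
    return k * SLOT_TIME_US

for c in range(1, 5):
    print(f"after collision {c}: wait {backoff_delay(c):.1f} us")
```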
Advantages:
Reduces the probability of collisions compared to ALOHA.
Efficient for wired networks with relatively low traffic.
Disadvantages:
Not suitable for wireless networks where collision detection is challenging.

b. CSMA with Collision Avoidance (CSMA/CA)


Used In: Wireless networks (e.g., Wi-Fi).
How It Works:
Carrier Sensing: Device listens to the channel to check if it's free.
Inter-Frame Space: If the channel is free, the device waits for a short period (e.g.,
DIFS in Wi-Fi).
Random Backoff: The device selects a random backoff time before transmitting to
minimize collision chances.
Transmission: After the backoff period, the device transmits data.
Acknowledgment: The receiver sends an acknowledgment. If not received, the
device assumes a collision occurred and retries.
Example:

In a Wi-Fi network, multiple devices like laptops and smartphones use CSMA/CA to
coordinate access to the wireless medium, reducing the likelihood of data packet
collisions.

Advantages:
• Better suited for wireless environments.
• Reduces collisions by avoiding simultaneous transmissions.
Disadvantages:
More complex than CSMA/CD.
Potential for increased latency due to backoff periods.
-------------------------------------------------------------------------------------

Introduction to Controlled Access Protocols


In computer networks, access control protocols determine how multiple devices
share the same communication medium efficiently and without conflicts. Unlike
Random Access Protocols, where devices transmit data independently and may
experience collisions, Controlled Access Protocols use a regulated method to
manage channel access, thereby minimizing collisions and ensuring orderly
communication.
Controlled Access Protocols are essential in environments where reliable and
orderly data transmission is critical, such as in industrial networks, metropolitan
area networks (MANs), and certain types of local area networks (LANs).

Token Passing Protocol


Imagine a group of friends sharing a single walkie-talkie to communicate. To ensure
that only one person speaks at a time and no one talks over each other, they decide
to pass a special object (like a small ball) among themselves. Only the person
holding the ball is allowed to speak. Once they're done, they pass the ball to the
next friend in the circle, giving that person the chance to speak.
Token Passing in computer networks works in a similar way. It's a method used to
control who gets to send data over the network at any given time, ensuring that
only one device transmits data at a time. This helps prevent data collisions and
makes the communication orderly.

How Does Token Passing Work?


The Token: Think of the token as the special ball in our walkie-talkie
example. It's a small data packet that circulates around the network.
Passing the Token: The token moves from one device (like a computer or
printer) to the next in a specific order. Only the device that has the token
can send data.
Sending Data:
If a device has data to send: It takes the token, sends its data, and then
passes the token to the next device.
If a device has no data to send: It simply passes the token to the next
device without sending anything.
No Token, No Sending: If a device doesn't have the token, it must wait
until it receives the token before it can send data.
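A toy Python simulation of the rules above (the station names, their queued frames and the number of rounds are invented for illustration):

```python
from collections import deque

stations = {
    "A": deque(["frame-A1"]),
    "B": deque(),                        # B currently has nothing to send
    "C": deque(["frame-C1", "frame-C2"]),
}

def token_ring(rounds: int = 2):
    order = list(stations)               # fixed order in which the token circulates
    for _ in range(rounds):
        for name in order:               # only the current token holder may transmit
            queue = stations[name]
            if queue:
                print(f"{name} holds the token and sends {queue.popleft()}")
            else:
                print(f"{name} has nothing to send and passes the token on")

token_ring()
```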

Benefits of Token Passing


No Collisions: Since only one device can send data at a time, there's no
chance of data collisions.
Fair Access: Every device gets an equal chance to send data by waiting
for its turn to hold the token.
Predictable Performance: The order of access is fixed, making network
behavior more predictable.
Introduction to Multiple Access Techniques
In computer networks, especially in communication systems, multiple users or
devices often need to share the same communication medium. To efficiently
manage this sharing and avoid collisions or interference, multiple access techniques
are employed. Two fundamental multiple access methods are:

• Frequency Division Multiple Access (FDMA)
• Time Division Multiple Access (TDMA)

1. Frequency Division Multiple Access (FDMA)

Definition
FDMA is a multiple access technique where the available bandwidth is divided into
distinct frequency bands. Each user is allocated a specific frequency band for
communication, allowing multiple users to transmit simultaneously without
interference.
Here, bandwidth refers to the range of frequencies available on the communication channel; the same term is also used for the maximum rate at which data can be transmitted over a network connection.
Working Principle
Bandwidth Division: The total available bandwidth is partitioned into non-
overlapping frequency channels.
Assignment: Each user or communication channel is assigned a unique frequency
band.
Simultaneous Transmission: All users can transmit at the same time using their
designated frequencies.
Applications
Analog Radio Broadcasting: FM radio uses FDMA to assign different frequency
bands to various stations.
Example
Consider a radio communication system with a total bandwidth of 300 kHz. This bandwidth is divided into 10 channels, each with a bandwidth of 30 kHz, and each channel is assigned to a different user: User A transmits on the channel centred at 100.00 MHz, User B on the channel centred at 100.03 MHz, and so on. All users can transmit simultaneously on their respective frequencies without interfering with each other.

2. Time Division Multiple Access (TDMA)


Definition
TDMA is a multiple access technique where the available communication time
is divided into distinct time slots. Each user is allocated specific time slots during
which they can transmit, allowing multiple users to share the same frequency
channel by transmitting in turns.
Working Principle
Time Slot Division: The total available time is partitioned into sequential time
slots.
Assignment: Each user is assigned one or more specific time slots within a
repeating frame.
Sequential Transmission: Users transmit their data during their allocated time
slots, cycling through the frame.
Advantages
Reduced Interference: Since users transmit at different times, there is minimal
interference between them.
Scalability: Easier to accommodate more users by adjusting the number of time
slots.
Disadvantages
Synchronization Required: Precise timing mechanisms are necessary to ensure
users transmit in their designated slots.
Latency: Users must wait for their time slots to transmit, which can introduce
delays.
Complex Implementation: Requires more sophisticated technology and control
mechanisms compared to FDMA.
Applications
Satellite Communication: TDMA is used in some satellite systems to manage
multiple user transmissions.
Wi-Fi Networks: Certain Wi-Fi standards implement TDMA-like mechanisms for
medium access control.
Example
In a TDMA system with a frame structure of 10 time slots, User X is assigned slots
1 and 6, User Y is assigned slots 2 and 7, and so on. During slot 1, only User X
transmits; during slot 2, only User Y transmits, etc. This cycle repeats
continuously, allowing multiple users to share the same frequency channel
without interference.
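A small Python sketch of the frame/slot assignment in this example (the slot ownership follows the example above; everything else is illustrative):

```python
FRAME_SLOTS = 10
assignments = {                  # 1-based slot numbers owned by each user
    "User X": {1, 6},
    "User Y": {2, 7},
}

def who_transmits(slot: int) -> str:
    """Return which user owns a given slot of the repeating TDMA frame."""
    for user, slots in assignments.items():
        if slot in slots:
            return user
    return "idle"

for slot in range(1, FRAME_SLOTS + 1):
    print(f"slot {slot}: {who_transmits(slot)}")
```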
