Module-2-notes

The document discusses the Data Link Layer of computer networks, focusing on error detection and correction methods such as block coding, parity-check codes, and cyclic codes. It explains the importance of redundancy, Hamming distance, and the role of coding schemes in ensuring data integrity during transmission. Additionally, it highlights the advantages of cyclic codes in detecting various types of errors and their efficient hardware implementation.


COMPUTER NETWORKS

BCS502

Dr. Soumya J Bhat


Dept. of CSE
SMVITM, Bantakal

Textbook: 1. Behrouz A. Forouzan, Data Communications and Networking, 5th Edition, Tata McGraw Hill, 2013.

MODULE 2
Data Link Layer
The data link layer uses the services of the physical layer to send and receive
bits over communication channels.
It has a number of functions, including:
1. Providing a well-defined service interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that slow receivers are not swamped by fast
senders.
In summary, the data link layer ensures that data is organized into frames, transmitted efficiently, and received error-free on a local network.
Error Detection and Correction
Any time data are transmitted from one node to the next, they can become
corrupted on the way. Many factors can alter one or more bits of a message.
Some applications require a mechanism for detecting and correcting errors.
For example, random errors in audio or video transmissions may be tolerable,
but when we transfer text, we expect a very high level of accuracy.

10.1.1 Types of Errors


Whenever bits flow from one point to another, they may get changed
unexpectedly because of interference.
There are two types of errors: single-bit errors and burst errors.
A single-bit error means that only 1 bit of a given data unit is changed from 1 to 0 or from 0 to 1. A burst error means that 2 or more bits in the data unit have changed from 1 to 0 or from 0 to 1.
10.1.2 Redundancy
To help in these situations, redundant bits are added by the sender and
removed by the receiver. These extra bits help to detect and correct errors.
10.1.3 Detection versus Correction
Error detection – we only need to know whether an error has occurred.
Error correction – we need to know the exact number of bits that are corrupted and, more importantly, their location in the message.
10.1.4 Coding
Coding schemes define the ways to add redundant bits.
We can divide coding schemes into two broad categories: block coding and convolutional coding. In this chapter we will study block coding.

10.2 BLOCK CODING


Here, the message is divided into small pieces or blocks of length k bits. These blocks are called datawords. Then r redundant bits are added to each of these blocks, so the block size becomes n = k + r.
The resulting n-bit blocks are called codewords.
10.2.1 Error Detection
With k bits, we can create a combination of 2^k datawords; with n bits, we can create a combination of 2^n codewords. Since n > k, the number of possible codewords is larger than the number of possible datawords.
In block coding, each dataword is mapped to one codeword.
This means that we have 2^n − 2^k codewords that are not used. The trick in error detection is the existence of these invalid codewords.
If the receiver receives an invalid codeword, this indicates that the data was
corrupted during transmission
However, if the codeword is corrupted during transmission but the received
word still matches a valid codeword, the error remains undetected.
If the following two conditions are met, the receiver can detect a change in the
original codeword.
1. The receiver has (or can find) a list of valid codewords.
2. The original codeword has changed to an invalid one.
Example 10.1 Let us assume that k = 2 and n = 3.
Hamming Distance
The Hamming distance between two words is the number of differences between corresponding bits.
It can easily be found by applying the XOR operation (⊕) on the two words and counting the number of 1s in the result.
Example 10.2 Let us find the Hamming distance between two pairs of words.
1. The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 is 011 (two 1s).
2. The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 is 01011 (three 1s).
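The XOR-and-count procedure above maps directly to a few lines of Python; this is a sketch with a hypothetical helper name:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b)
    # XOR each pair of corresponding bits and count the 1s in the result.
    return sum(x != y for x, y in zip(a, b))
```

For the two pairs of Example 10.2, this returns 2 and 3 respectively.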

Minimum Hamming Distance for Error Detection


In a set of codewords, the minimum Hamming distance (dmin) is the smallest Hamming distance between all possible pairs of codewords.
To guarantee the detection of up to s errors, the minimum Hamming distance in a block code must be dmin = s + 1.
The minimum Hamming distance for our first code scheme is 2. This code guarantees detection of only a single error.
If a code scheme has a minimum Hamming distance dmin = 4, it guarantees the detection of up to three errors (dmin = s + 1, so s = 3).
In summary: in the block coding method, the sender divides the data into pieces of k bits, adds r redundant bits to get n-bit codewords, and designates one codeword as the valid codeword for each dataword; the remaining codewords are invalid. The list of valid codewords and their corresponding datawords is known to the receiver. When the receiver receives a codeword, it can check whether it is valid and extract the dataword.
How we prepare the set of codewords is very important. If we select the codewords such that the minimum Hamming distance between any two codewords is s + 1, then the receiver can detect up to s bit errors. For example, in Table 10.1 the minimum Hamming distance between codewords is 2, so all 1-bit errors can be detected.
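As an illustration, the minimum Hamming distance of the Table 10.1 code (datawords 00, 01, 10, 11 mapped to even-parity codewords 000, 011, 101, 110) can be checked by brute force over all pairs; the helper name is a hypothetical sketch:

```python
from itertools import combinations

def min_hamming_distance(codewords):
    """Smallest pairwise Hamming distance over all pairs of codewords."""
    return min(
        sum(x != y for x, y in zip(a, b))
        for a, b in combinations(codewords, 2)
    )

# The four codewords of the parity-check code of Table 10.1 (k = 2, n = 3).
table_10_1 = ["000", "011", "101", "110"]
```

Here min_hamming_distance(table_10_1) yields 2, confirming that only single-bit errors are guaranteed to be detected.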
There are various ways to create codewords.
1. Linear Block Codes:
A category of block codes.
A linear block code is a code in which the exclusive OR (addition modulo-2) of
two valid codewords creates another valid codeword.
The code in Table 10.1 is a linear block code because the result of XORing
any codeword with any other codeword is a valid codeword. For example, the
XORing of the second and third code words creates the fourth one.

For linear block codes, the minimum Hamming distance can also be found another way: it equals the smallest number of 1s in any nonzero valid codeword.
In Table 10.1, the numbers of 1s in the nonzero codewords are 2, 2, and 2. So the minimum Hamming distance is dmin = 2.
Parity-Check Code
This code is a linear block code. In this code, a k-bit dataword is changed to
an n-bit codeword where n = k + 1. The extra bit, called the parity bit, is
selected to make the total number of 1s in the codeword even.
The minimum Hamming distance for this category is dmin = 2, which means
that the code is a single-bit error-detecting code.
Table 10.1 is a parity-check code (k = 2 and n = 3)
The code in Table 10.2 is also a parity-check code with k = 4 and n = 5.
The encoder uses a generator that takes a copy of a 4-bit dataword (a0, a1,
a2, and a3) and generates a parity bit r0.
If the number of 1s is even, the result is 0; if the number of 1s is odd, the result
is 1. In both cases, the total number of 1s in the codeword is even.
The sender sends the codeword, which may be corrupted during transmission.
The receiver receives a 5-bit word. The checker at the receiver does the same
thing
The result, which is called the syndrome, is just 1 bit. The syndrome is 0 when
the number of 1s in the received codeword is even; otherwise, it is 1. If the
syndrome is 0, there is no detectable error in the received codeword; if the
syndrome is 1, the data portion of the received codeword is discarded.
A parity-check code can detect an odd number of errors.
Example 10.7 Let us look at some transmission scenarios. Assume the sender
sends the dataword 1011. The code word created from this dataword is 10111,
which is sent to the receiver. We examine five cases:
1. No error occurs; the received codeword is 10111. The syndrome is 0. The
dataword 1011 is accepted.
2. The received codeword is 10011. The syndrome is 1. Dataword is not
accepted.
3. The received codeword is 10110. The syndrome is 1. Dataword is not
accepted.
4. The received codeword is 00110 with 2-bit errors. The syndrome is 0. The dataword 0011 is wrongly accepted at the receiver, because the syndrome is 0 even though errors occurred. The simple parity-check decoder cannot detect an even number of errors.
5. Three bits—a3, a2, and a1—are changed by errors. The received codeword
is 01011. The syndrome is 1. Dataword is not accepted.
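A minimal sketch of the even-parity encoder and checker described above (hypothetical function names); the scenarios of Example 10.7 can be replayed with it:

```python
def parity_encode(dataword: str) -> str:
    """Append a parity bit so the total number of 1s is even."""
    return dataword + str(dataword.count("1") % 2)

def parity_check(codeword: str):
    """Return the dataword if the syndrome is 0; None means 'discard'."""
    syndrome = codeword.count("1") % 2
    return codeword[:-1] if syndrome == 0 else None
```

For instance, parity_encode("1011") gives "10111", parity_check("10011") reports an error, while the 2-bit-error word "00110" is wrongly accepted, as in case 4.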

10.3 CYCLIC CODES


These are linear block codes used in error detection and correction.
The special property of a cyclic code is that if a codeword is cyclically shifted (rotated), the result is another codeword. For example, if 1011000 is a codeword and we cyclically left-shift it, then 0110001 is also a codeword.
10.3.1 Cyclic Redundancy Check
We can create cyclic codes to correct errors. Here we study a subset of cyclic codes called the cyclic redundancy check (CRC), which is used in networks such as LANs and WANs.

Encoder
Let us take a closer look at the encoder. The encoder takes a dataword and augments it with n − k 0s. It then divides the augmented dataword by the divisor of size n − k + 1, as shown in Figure 10.6.
The remainder (3 bits in this example) forms the check bits (r2, r1, and r0). They are appended to the dataword to create the codeword.
Decoder
The codeword can change during transmission. The decoder does the same
division process as the encoder. The remainder of the division is the
syndrome. If the syndrome is all 0s, there is no error
The left-hand figure in 10.7 shows the value of the syndrome when no error
has occurred; the syndrome is 000. The right-hand part of the figure shows the
case in which there is a single error. The syndrome is not all 0s (it is 011).
Divisor
The divisor is a generator polynomial agreed upon by both the sender and the receiver.
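The encoder and decoder division can be sketched as modulo-2 (XOR) long division on bit strings. Using the dataword 1001 and divisor 1011 of Figure 10.6 reproduces the codeword and the syndromes discussed above (function names are illustrative):

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Modulo-2 (XOR) long division; returns the last len(divisor)-1 bits."""
    work = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if work[i] == "1":
            # "Subtract" a copy of the divisor aligned at position i (XOR).
            for j, d in enumerate(divisor):
                work[i + j] = "1" if work[i + j] != d else "0"
    return "".join(work[-(len(divisor) - 1):])

def crc_encode(dataword: str, divisor: str) -> str:
    """Augment with n - k 0s, divide, append the remainder as check bits."""
    remainder = mod2_div(dataword + "0" * (len(divisor) - 1), divisor)
    return dataword + remainder

def crc_syndrome(codeword: str, divisor: str) -> str:
    """The receiver divides the received word directly; all 0s = no error."""
    return mod2_div(codeword, divisor)
```

crc_encode("1001", "1011") yields "1001110", whose syndrome is "000"; corrupting one bit, e.g. receiving "1000110", gives the nonzero syndrome "011".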
10.3.3 Cyclic Code Encoder Using Polynomials – another way to create a codeword from a dataword
Polynomials:
A pattern of 0s and 1s can be represented as a polynomial with coefficients of
0 and 1. Figure 10.8 shows a binary pattern and its polynomial representation.

Degree of a Polynomial:
The degree of a polynomial is the highest power in the polynomial.
In a polynomial representation, the dataword 1001 is represented as x^3 + 1, and the divisor 1011 is represented as x^3 + x + 1. In a polynomial representation, the divisor is normally referred to as the generator polynomial or simply the generator.
Adding and Subtracting Polynomials
Adding or subtracting is done by combining terms and deleting pairs of identical terms. For example, adding x^5 + x^4 + x^2 and x^6 + x^4 + x^2 gives just x^6 + x^5. The terms x^4 and x^2 are deleted.
Shifting
Shifting a bit pattern to the left by n bits is equivalent to multiplying its polynomial by x^n; shifting to the right by n bits divides it by x^n.

Cyclic Code Encoder Using Polynomials


Figure 10.9 is the polynomial version of Figure 10.6. We can see that the process is shorter. The dataword 1001 is represented as x^3 + 1. The divisor 1011 is represented as x^3 + x + 1. To find the augmented dataword, we left-shift the dataword 3 bits (multiplying by x^3). The result is x^6 + x^3.
Division is straightforward. We divide the first term of the dividend, x^6, by the first term of the divisor, x^3. The first term of the quotient is then x^6/x^3, or x^3. Then we multiply x^3 by the divisor and subtract (according to our previous definition of subtraction). We continue to divide until the degree of the remainder is less than the degree of the divisor.
In a polynomial representation, the divisor is normally referred to as the generator polynomial g(x).
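The same division can be done on polynomials encoded as Python integers, where left-shifting by 3 is exactly the multiplication by x^3 used above (a sketch; the names are illustrative):

```python
def poly_mod(dividend: int, divisor: int) -> int:
    """Remainder of GF(2) polynomial division, polynomials as bit masks."""
    while dividend.bit_length() >= divisor.bit_length():
        # Subtraction is XOR; align the divisor under the leading term.
        dividend ^= divisor << (dividend.bit_length() - divisor.bit_length())
    return dividend

augmented = 0b1001 << 3                  # (x^3 + 1) * x^3 = x^6 + x^3
remainder = poly_mod(augmented, 0b1011)  # divide by x^3 + x + 1
```

Here remainder is 0b110, i.e. x^2 + x, so the codeword is again 1001110.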
10.3.4 Cyclic Code Analysis
To find the criteria that must be imposed on the generator, let c(x) be the sent codeword, e(x) the error, and g(x) the generator. The received codeword is the sum of the sent codeword and the error:
Received codeword = c(x) + e(x)
The receiver divides the received codeword by g(x) to get the syndrome.

The first term at the right-hand side of the equality has a remainder of zero. So
the syndrome is actually the remainder of the second term on the right-hand
side. If this term does not have a remainder (syndrome = 0), either e(x) is 0 or
e(x) is divisible by g(x). We do not have to worry about the first case (there is
no error); the second case is very important. Those errors that are divisible
by g(x) are not caught.
In a cyclic code, those e(x) errors that are divisible by g(x) are not caught.
Let us show some specific errors and see how they can be caught by a well
designed g(x).
What should the generator polynomial be?
(a) Single-Bit Error
If the generator has more than one term and the coefficient of x^0 is 1, all single-bit errors can be caught.
(b) Two Isolated Single-Bit Errors
If a generator cannot divide x^t + 1 (t between 0 and n − 1), then all isolated double errors can be detected. Here n is the length of the codeword.
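These two criteria can be checked exhaustively for a small generator such as x^3 + x + 1 with n = 7: no single-bit error x^i, and no x^t + 1 factor of an isolated double error x^j(x^t + 1), leaves a zero remainder. This sketch redefines the GF(2) division helper so it is self-contained:

```python
def poly_mod(dividend: int, divisor: int) -> int:
    """Remainder of GF(2) polynomial division, polynomials as bit masks."""
    while dividend.bit_length() >= divisor.bit_length():
        dividend ^= divisor << (dividend.bit_length() - divisor.bit_length())
    return dividend

g = 0b1011  # x^3 + x + 1: more than one term, coefficient of x^0 is 1

# (a) every single-bit error e(x) = x^i leaves a nonzero syndrome
singles_caught = all(poly_mod(1 << i, g) != 0 for i in range(7))
# (b) g(x) divides no x^t + 1 for t = 1..6, so isolated double errors
#     within a 7-bit codeword are also caught
doubles_caught = all(poly_mod((1 << t) | 1, g) != 0 for t in range(1, 7))
```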

Odd Numbers of Errors


A generator that contains a factor of x + 1 can detect all odd-numbered errors.

Burst Errors
A burst error involves two or more corrupted bits. All burst errors with L ≤ r will be detected. Burst errors with L = r + 1 will be detected with probability 1 − (1/2)^(r−1), and burst errors with L > r + 1 with probability 1 − (1/2)^r, where L is the length of the error burst and r is the number of check bits.
Summary: a good generator polynomial should have at least two terms, the coefficient of x^0 should be 1, it should not divide x^t + 1 for t between 2 and n − 1, and it should have the factor x + 1.

Standard Polynomials
Some standard polynomials used by popular protocols for CRC generation
are shown in Table 10.4
(ATM (Asynchronous Transfer Mode) is a high-speed, connection-oriented, packet-switching technology designed for transmitting various types of data, such as voice, video, and computer data, over the same network.
HDLC (High-Level Data Link Control) is a widely used protocol in computer networks for reliable communication over point-to-point and multipoint links.)

10.3.5 Advantages of Cyclic Codes


We have seen that cyclic codes have a very good performance in
detecting single-bit errors, double errors, an odd number of errors,
and burst errors. They can easily be implemented in hardware and
software. They are especially fast when implemented in hardware.
This has made cyclic codes a good candidate for many networks.

10.3.6 Other Cyclic Codes


The cyclic codes we have discussed in this section are very simple. There are, however, more powerful polynomials. One of the most interesting of these codes is the Reed-Solomon code, used today for both detection and correction.

10.3.7 Hardware Implementation


One of the advantages of a cyclic code is that the encoder and decoder can easily and cheaply be implemented in hardware.

1. The divisor is repeatedly XORed with part of the dividend.
2. In our previous example, the divisor bits were either 1011 or 0000. The choice was based on the leftmost bit.
3. A close look shows that only n − k bits of the divisor are needed in the XOR operation. The leftmost bit is not needed because the result of the operation is always 0.

Using these points, we can make a fixed (hardwired) divisor that can be
used for a cyclic code if we know the divisor pattern.
Figure 10.11 shows such a design for our previous example, where the divisor is 1011.
In our previous example, the remainder is 3 bits (n − k bits in general) in length. We can use three registers (single-bit storage devices) to hold these bits. At each clock tick, 1 bit from the augmented dataword arrives.
General Design
A general design for the encoder and decoder is shown in Figure 10.14.

10.4 CHECKSUM

Checksum is an error-detecting technique that can be applied to a message of any length. In the Internet, the checksum technique is mostly used at the network and transport layers.
At the source, the message is first divided into m-bit units. The generator
then creates an extra m-bit unit called the checksum, which is sent with
the message. At the destination, the checker creates a new checksum
from the combination of the message and sent checksum. If the new
checksum is all 0s, the message is accepted; otherwise, the message is
discarded

10.4.1 Concept
We show this using a simple example.
Example 10.11
Suppose the message is a list of five 4-bit numbers (7,11,12,0,6) that we
want to send to a destination. In addition to sending these numbers, we
send the checksum.
Checksum is calculated as below at the sender:
1. Find the sum of the five numbers: 7 + 11 + 12 + 0 + 6 = 36.
2. Convert 36 into 4 bits (because all the other numbers are 4 bits): the extra leftmost bits are added to the 4 rightmost bits (wrapping). This is called one's complement addition. In the example, 36 = (100100)2, so we wrap: (0100)2 + (10)2 = (0110)2 = 6.
3. The sender then complements the result 6 to get the checksum 9, i.e., the complement of 6 = (0110)2 is 9 = (1001)2.
4. The sender sends the five data numbers and the checksum (7, 11, 12, 0,
6, 9).
At the receiver:
1. If there is no corruption in transmission, the receiver receives (7, 11,
12, 0, 6, 9)
2. All the numbers are added, with the extra leftmost bits wrapped around to the rightmost part, to get a 4-bit number:
7 + 11 + 12 + 0 + 6 + 9 = 45 = (101101)2 → (1101)2 + (10)2 = (1111)2 = 15.
3. The receiver complements 15 to get 0. Since the answer is zero, the data have not been corrupted.
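The sender and receiver steps above can be sketched with a small wrapping-sum helper (function names are illustrative):

```python
def ones_complement_sum(nums, bits=4):
    """Add numbers, wrapping any carry out of `bits` back into the sum."""
    mask = (1 << bits) - 1
    total = 0
    for n in nums:
        total += n
        while total > mask:
            # Wrap: add the extra leftmost bits to the rightmost bits.
            total = (total & mask) + (total >> bits)
    return total

def checksum(nums, bits=4):
    """Complement of the wrapped (one's complement) sum."""
    return ones_complement_sum(nums, bits) ^ ((1 << bits) - 1)
```

Here checksum([7, 11, 12, 0, 6]) gives 9, and at the receiver checksum([7, 11, 12, 0, 6, 9]) gives 0, indicating no corruption.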
Internet Checksum
Traditionally, the Internet has used a 16-bit checksum. The sender or the
receiver uses five steps.
Checksum is not as strong as the CRC in error-checking capability. For
example, if the value of one word is incremented and the value of another
word is decremented by the same amount, the two errors cannot be
detected because the sum and checksum remain the same.

10.4.2 Other Approaches to the Checksum

Fletcher Checksum
The Fletcher checksum was devised to weight each data item according to
its position. Fletcher has proposed two algorithms: 8-bit and 16-bit. The
first, 8-bit Fletcher, calculates on 8-bit data items and creates a 16-bit
checksum. The second, 16-bit Fletcher, calculates on 16-bit data items
and creates a 32-bit checksum.

The 8-bit Fletcher is calculated over data octets (bytes) and creates a 16-bit checksum. The calculation is done modulo 256 (2^8).
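A sketch of the 8-bit Fletcher idea, following the modulo-256 description above (the classical Fletcher-16 algorithm uses modulo 255; the function name is illustrative). The second running sum is what weights each data item by its position:

```python
def fletcher8(data: bytes) -> int:
    """16-bit checksum from two running 8-bit sums (modulo-256 variant)."""
    s1 = s2 = 0
    for byte in data:
        s1 = (s1 + byte) % 256   # plain sum of the data items
        s2 = (s2 + s1) % 256     # accumulates s1, weighting by position
    return (s2 << 8) | s1
```

Swapping two bytes changes s2 even though s1 is unchanged, which a plain sum would miss.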
Adler Checksum
The Adler checksum is a 32-bit checksum. Figure 10.19 shows a simple
algorithm in flowchart form.
CHAPTER 11 Data Link Control (DLC)
The data-link layer is divided into two sublayers: the upper sublayer, data link control (DLC), and the lower sublayer, media access control (MAC).
11.1 DLC SERVICES
Data link control functions include framing and flow and error control.

11.1.1 Framing

The data-link layer needs to pack bits into frames (similar to the postal system, where each letter is inserted into an envelope). To each frame, a sender address and a destination address are added. The destination address defines where the packet is to go; the sender address helps the recipient acknowledge receipt.

Although the whole message could be packed in one frame, that is not
normally done. When a message is carried in one very large frame, even a
single-bit error would require the retransmission of the whole frame. When
a message is divided into smaller frames, a single-bit error affects only
that small frame.

Frame Size
Frames can be of fixed or variable size. In fixed-size framing, there is no need for defining the boundaries of the frames. An example of this type of framing is the ATM WAN (Asynchronous Transfer Mode).

In variable-size framing, normally used in LANs, we need a way to define the end of one frame and the beginning of the next.
Two approaches are used for this purpose: a character-oriented approach and a bit-oriented approach.

1.Character-Oriented Framing
In character-oriented (or byte-oriented) framing, data to be carried are 8-bit
characters from a coding system such as ASCII. The header normally
carries the source and destination addresses and other control
information, and the trailer carries error detection redundant bits. To
separate one frame from the next, an 8-bit (1-byte) flag is added at the
beginning and the end of a frame.

If a character in the data has the same pattern as the flag, a special byte is stuffed into the data section of the frame to distinguish it from the delimiter. This is called byte stuffing (or character stuffing). The stuffed byte, usually called the escape character (ESC), has a predefined bit pattern. Whenever the receiver encounters the ESC character, it removes it from the data section and treats the next character as data, not as a flag.
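A sketch of byte stuffing and unstuffing; the flag and ESC values used here (0x7E and 0x7D) are illustrative, since any pattern agreed between sender and receiver works:

```python
FLAG, ESC = 0x7E, 0x7D   # illustrative byte values for flag and escape

def byte_stuff(payload: bytes) -> bytes:
    """Escape any flag or ESC byte in the data, then delimit with flags."""
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)   # stuff an escape character before it
        stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the flags; drop each ESC and keep the byte after it as data."""
    out, escaped = bytearray(), False
    for b in frame[1:-1]:
        if not escaped and b == ESC:
            escaped = True        # next byte is data, not a delimiter
        else:
            out.append(b)
            escaped = False
    return bytes(out)
```

A round trip restores the original data even when it contains the flag or ESC patterns.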
2.Bit-Oriented Framing
In bit-oriented framing, the data section of a frame is a sequence of bits to
be interpreted by the upper layer as text, graphic, audio, video, and so on.
Most protocols use a special 8-bit pattern flag, 01111110, as the delimiter
to define the beginning and the end of the frame.
But, as in the character-oriented approach, if the flag pattern appears in the data, we need to somehow inform the receiver that this is not the end of the frame. The strategy is called bit stuffing. In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is added.
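Bit stuffing can be sketched on strings of '0'/'1' characters: after every run of five consecutive 1s a 0 is inserted, so the flag 01111110 can never appear inside the stuffed data (function names are illustrative):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            out.append("0")   # the stuffed bit
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove each 0 that follows five consecutive 1s."""
    out, ones, drop_next = [], 0, False
    for b in bits:
        if drop_next:
            drop_next = False  # this is the stuffed 0; discard it
            ones = 0
            continue
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            drop_next = True
            ones = 0
    return "".join(out)
```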
11.1.2 Flow and Error Control
Other responsibilities of the data-link control sublayer is flow and error
control

Flow Control
Whenever an entity produces items and another entity consumes them, there should be a balance between the production and consumption rates. If items are produced faster than they can be consumed, the consumer can be overwhelmed and may need to discard some items. Flow control prevents this situation.

Flow control in this case can be feedback from the receiving node to the
sending node to stop or slow down pushing frames.

Flow control normally uses two buffers: one at the sending side and one at the receiving side. A buffer is a set of memory locations that can hold packets at the sender and receiver. When the buffer of the receiving data-link layer is full, it informs the sending data-link layer to stop pushing frames. When locations in the buffer are freed, the receiver asks the sender to resume sending frames.

Error Control
Error control at the data-link layer is normally very simple and
implemented using one of the following two methods. In both methods, a
CRC is added by the sender and checked by the receiver.

In the first method, if the frame is corrupted, it is silently discarded; if it is not corrupted, the packet is delivered to the network layer. This method is used mostly in wired LANs such as Ethernet.
In the second method, if the frame is corrupted, it is silently discarded; if it
is not corrupted, an acknowledgment is sent (for the purpose of both flow
and error control) to the sender.

Combination of Flow and Error Control


Flow and error control can be combined. In a simple situation, the
acknowledgment that is sent for flow control can also be used for error
control to tell the sender the packet has arrived uncorrupted. The lack of
acknowledgment means that there is a problem in the sent frame.

11.1.3 Connectionless and Connection-Oriented


A DLC protocol can be either connectionless or connection-oriented.

Connectionless Protocol
In a connectionless protocol, frames are sent from one node to the next without any relationship between the frames; each frame is independent. (Connectionless here does not mean that there is no physical connection between the nodes; it means that there is no logical relationship between the frames.)

Connection-Oriented Protocol
In a connection-oriented protocol, a logical connection should first be
established between the two nodes (setup phase). After all frames that are
somehow related to each other are transmitted (transfer phase), the logical
connection is terminated (teardown phase).

11.2 DATA-LINK LAYER PROTOCOLS


Traditionally four protocols have been defined for the data-link layer to deal
with flow and error control: Simple, Stop-and-Wait, Go-Back-N, and
Selective-Repeat.

The behavior of a data-link-layer protocol can be better shown as a finite state machine (FSM). In Figure 11.6, we show an example of a machine using an FSM. We have used rounded-corner rectangles to show states, colored text to show events, and regular black text to show actions. A horizontal line is used to separate the event from the actions.
11.2.1 Simple Protocol
This is a simple protocol with neither flow nor error control.
The data-link layer at the sender gets a packet from its network layer,
makes a frame out of it, and sends the frame. The data-link layer at the
receiver receives a frame from the link, extracts the packet from the frame,
and delivers the packet to its network layer.

Each FSM has only one state, the ready state. The sending machine
remains in the ready state until a request comes from the process in the
network layer. When this event occurs, the sending machine encapsulates
the message in a frame and sends it to the receiving machine. The
receiving machine remains in the ready state until a frame arrives from the
sending machine. When this event occurs, the receiving machine
decapsulates the message out of the frame and delivers it to the process
at the network layer.
11.2.2 Stop-and-Wait Protocol
This protocol uses both flow and error control.

In this protocol, the sender adds a CRC to each data frame, starts a timer
and sends one frame at a time and waits for an acknowledgment before
sending the next one. When a frame arrives at the receiver site, it is
checked. If its CRC is incorrect, the frame is corrupted and silently
discarded. The silence of the receiver is a signal for the sender that a
frame was either corrupted or lost. If the packet is received correctly,
receiver sends an acknowledgement to sender.

If an acknowledgment arrives before the timer expires, the timer is stopped and the sender sends the next frame. If the timer expires, the sender resends the previous frame, assuming that the frame was either lost or corrupted.
Sequence and Acknowledgment Numbers
Suppose a frame is sent and acknowledged, but the acknowledgment is lost. The sender's timer expires and the frame is resent; the receiver then receives a duplicate packet. To correct this problem, we need to add sequence numbers to the data frames and acknowledgment numbers to the ACK frames. However, numbering in this case is very simple: sequence numbers are 0, 1, 0, 1, 0, 1, . . . and acknowledgment numbers are 1, 0, 1, 0, 1, 0, . . . In other words, the sequence numbers start with 0 and the acknowledgment numbers start with 1 (an acknowledgment announces the sequence number of the next frame expected).
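The duplicate-suppression role of these numbers can be sketched with a tiny receiver model (a hypothetical class, not the textbook's FSM):

```python
class StopAndWaitReceiver:
    """Alternating-bit receiver: a frame is delivered only when its
    sequence number is the one expected; a retransmitted duplicate is
    re-acknowledged but not delivered twice."""
    def __init__(self):
        self.expected = 0
        self.delivered = []

    def receive(self, seqno: int, packet):
        if seqno == self.expected:
            self.delivered.append(packet)
            self.expected ^= 1          # alternate 0, 1, 0, 1, ...
        # The ACK carries the sequence number of the next frame expected.
        return self.expected
```

If the ACK for frame 0 is lost and the frame is resent, the receiver returns ACK 1 again but delivers the packet only once.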
FSMs with Sequence and Acknowledgment Numbers – assignment
11.2.3 Piggybacking
to make the communication more efficient, the data in one direction is
piggybacked with the acknowledgment in the other direction. In other
words, when node A is sending data to node B, Node A also acknowledges
the data received from node B.

11.3 HDLC

High-level Data Link Control (HDLC) is a bit-oriented protocol for communication over point-to-point and multipoint links. It implements the Stop-and-Wait protocol.

HDLC provides two common transfer modes: normal response mode (NRM) and asynchronous balanced mode (ABM).
In normal response mode (NRM), we have one primary station and
multiple secondary stations. A primary station can send commands; a
secondary station can only respond.

In ABM, each station can function as both a primary and a secondary.

11.3.2 Framing
HDLC defines three types of frames: information frames (I-frames),
supervisory frames (S-frames), and unnumbered frames (U-frames).

Each type of frame serves as an envelope for the transmission of a different type of message.
I-frames are used to carry user data and control information relating to user data (piggybacking). S-frames are used only to transport control information. U-frames are reserved for system management. Each frame in HDLC may contain up to six fields, as shown in Figure 11.16: a beginning flag field, an address field, a control field, an information field, a frame check sequence (FCS) field, and an ending flag field.

❑ Flag field. This field contains the synchronization pattern 01111110, which identifies both the beginning and the end of a frame.
❑ Address field. This field contains the address of the secondary station. If a primary station created the frame, it contains a to address. If a secondary station creates the frame, it contains a from address.
❑ Control field. The control field is one or two bytes used for flow and error
control and determines the type of frame.
❑ Information field. The information field contains the user’s data from the
network layer or management information. Its length can vary from one
network to another.
❑ FCS field. The frame check sequence (FCS) is the HDLC error
detection field. It can contain either a 2- or 4-byte CRC.

Control field in detail:


The control field determines the type of frame and defines its functionality.

Control Field for I-Frames:


I-frames are designed to carry user data from the network layer. In
addition, they can include flow- and error-control information
(piggybacking).
If the first bit of the control field is 0, this means the frame is an I-frame.
The next 3 bits, called N(S), define the sequence number of the frame.
Note that with 3 bits, we can define a sequence number between 0 and 7.
The last 3 bits, called N(R), correspond to the acknowledgment number when piggybacking is used. The single bit between N(S) and N(R) is called the P/F bit and can mean poll or final. It is used to initiate or end communication or to handle flow control in response mode.

Control Field for S-Frames:


S-frames do not have information fields. Supervisory frames are used for
flow and error control whenever piggybacking is either impossible or
inappropriate.
If the first 2 bits of the control field are 10, this means the frame is an S-frame. The last 3 bits, called N(R), correspond to the acknowledgment number. The 2 bits called code are used to define the type of S-frame itself.

Receive ready (RR). If the value of the code subfield is 00, it is an RR S-frame. This kind of frame acknowledges the receipt of frames.

Receive not ready (RNR). If the value of the code subfield is 10, it is an RNR S-frame. It acknowledges the receipt of frames, and it announces that the receiver is busy and cannot receive more frames. It acts as a kind of congestion-control mechanism by asking the sender to slow down.

Reject (REJ). If the value of the code subfield is 01, it is a REJ S-frame. It is a NAK frame that informs the sender, before the sender's timer expires, that the last frame is lost or damaged. It is a NAK that can be used in Go-Back-N.

Selective reject (SREJ). If the value of the code subfield is 11, it is an SREJ S-frame. This is a NAK frame used in Selective-Repeat.
Control Field for U-Frames
If the first 2 bits of the control field are 11, this means the frame is a U-frame. U-frames contain an information field, but it is used for system management information, not user data. U-frame codes are divided into two sections: a 2-bit prefix before the P/F bit and a 3-bit suffix after the P/F bit.
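The three control-field layouts can be decoded from a single byte. This sketch treats the "first" bit of the figures as the most significant bit, which is an assumption about bit ordering, and uses illustrative names:

```python
S_TYPES = {0b00: "RR", 0b10: "RNR", 0b01: "REJ", 0b11: "SREJ"}

def parse_control(byte: int) -> dict:
    """Decode an HDLC control byte (leftmost bit = most significant here)."""
    if byte >> 7 == 0:                       # first bit 0 -> I-frame
        return {"type": "I",
                "ns": (byte >> 4) & 0b111,   # N(S): send sequence number
                "pf": (byte >> 3) & 1,       # poll/final bit
                "nr": byte & 0b111}          # N(R): acknowledgment number
    if byte >> 6 == 0b10:                    # first two bits 10 -> S-frame
        return {"type": "S",
                "code": S_TYPES[(byte >> 4) & 0b11],
                "pf": (byte >> 3) & 1,
                "nr": byte & 0b111}
    return {"type": "U"}                     # first two bits 11 -> U-frame
```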

(Less important:) Figure 11.18 shows how U-frames can be used for connection establishment and connection release. Node A asks for a connection with a set asynchronous balanced mode (SABM) frame; node B gives a positive response with an unnumbered acknowledgment (UA) frame. After these two exchanges, data can be transferred between the two nodes (not shown in the figure). After data transfer, node A sends a DISC (disconnect) frame to release the connection; it is confirmed by node B responding with a UA (unnumbered acknowledgment).
11.4 POINT-TO-POINT PROTOCOL (PPP)
PPP is a common protocol for point-to-point access.
11.4.1 Services Provided by PPP
PPP defines the format of the frame to be exchanged between devices. It
also defines how two devices can negotiate the establishment of the link
and the exchange of data. The new version of PPP, called Multilink PPP,
provides connections over multiple links.
PPP is normally used in Internet connectivity setups and in cellular networks to provide mobile data services.

Services Not Provided by PPP : PPP does not provide flow control. A
sender can send several frames one after another with no concern about
overwhelming the receiver. PPP has a very simple mechanism for error
control. A CRC field is used to detect errors.

11.4.2 Framing
PPP uses a character-oriented (or byte-oriented) frame.
Flag. A PPP frame starts and ends with a 1-byte flag with the bit pattern
01111110.
Address. Typically set to 11111111 (broadcast address) because PPP is
used for point-to-point links without specific addresses.
Control. This field is set to the constant value 00000011 to indicate that
it’s an unnumbered information frame.
Protocol. The protocol field defines what is being carried in the data field:
either user data or other information.
Payload field. This field carries either the user data or other information.
The data field is a sequence of bytes with the default of a maximum of
1500 bytes. The data field is byte-stuffed if the flag byte pattern appears in
this field.
FCS. The frame check sequence (FCS) is simply a 2-byte or 4-byte
standard CRC.
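As a sketch of the byte stuffing mentioned above: RFC 1662 specifies an escape byte 0x7D inserted before any flag (0x7E) or escape byte appearing in the payload, with the escaped byte XORed with 0x20. The async-control-character map and FCS handling are omitted here.

```python
FLAG, ESC = 0x7E, 0x7D   # PPP flag byte and escape byte

def byte_stuff(payload: bytes) -> bytes:
    """Escape any flag or escape byte inside the payload."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
            out.append(b ^ 0x20)         # transform the escaped byte
        else:
            out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Reverse the stuffing at the receiver."""
    out, i = bytearray(), 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            out.append(stuffed[i + 1] ^ 0x20)
            i += 2
        else:
            out.append(stuffed[i])
            i += 1
    return bytes(out)

data = bytes([0x41, 0x7E, 0x42])         # payload containing the flag pattern
print(byte_stuff(data).hex())            # 417d5e42
print(byte_unstuff(byte_stuff(data)) == data)   # True
```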

11.4.3 Transition Phases


A PPP connection goes through phases, which can be shown in a finite state machine (FSM).
It starts with the dead state. When one of the two nodes starts the
communication, the connection goes into the establish state. If the two
parties agree that they need authentication, then the system needs to do
authentication; otherwise, the parties can simply start communication.
Data transfer takes place in the open state. When a connection reaches
this state, the exchange of data packets can be started. The connection
remains in this state until one of the endpoints wants to terminate the
connection. In this case, the system goes to the terminate state.
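The phases above can be sketched as a small transition table. The event names used here are illustrative assumptions, not PPP terminology, and the real FSM has more states and transitions.

```python
# Simplified transition table: dead -> establish -> (authenticate) ->
# open -> terminate -> dead. Event names are hypothetical.
TRANSITIONS = {
    ("dead", "carrier detected"):     "establish",
    ("establish", "auth needed"):     "authenticate",
    ("establish", "auth not needed"): "open",
    ("authenticate", "auth success"): "open",
    ("authenticate", "auth fail"):    "terminate",
    ("open", "done"):                 "terminate",
    ("terminate", "carrier dropped"): "dead",
}

def run(events, state="dead"):
    """Drive the FSM through a sequence of events, return final state."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

print(run(["carrier detected", "auth needed", "auth success",
           "done", "carrier dropped"]))   # dead
```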

11.4.4 Multiplexing
Three sets of protocols are defined to make PPP powerful: the Link
Control Protocol (LCP), two Authentication Protocols (APs), and
several Network Control Protocols (NCPs). PPP uses these protocols
to establish the link, authenticate the parties involved, and carry the
network-layer data. At any moment, a PPP packet can carry data from
one of these protocols in its data field, as shown in Figure 11.22.
Link Control Protocol - The Link Control Protocol (LCP) is responsible
for establishing, maintaining, configuring, and terminating links.
All LCP packets are carried in the payload field of the PPP frame
There are three categories of LCP packets. The first category is used for
link configuration during the establish phase. The second category is used
for link termination during the termination phase. The last are used for link
monitoring and debugging.
The code field defines the type of LCP packet.
The ID field holds a value that matches a request with a reply. The length field defines the length of the entire LCP packet. The information field contains options needed by some LCP packets, such as the maximum receive unit (payload field size) or the authentication protocol.
Authentication Protocols
PPP has created two protocols for authentication: Password
Authentication Protocol and Challenge Handshake Authentication
Protocol.

PAP
The Password Authentication Protocol (PAP) is a simple authentication
procedure with a two-step process: a. The user who wants to access a
system sends an authentication identification (usually the user name) and
a password. b. The system checks the validity of the identification and
password and either accepts or denies connection.
The three PAP packets are authenticate-request, authenticate-ack, and authenticate-nak. The first packet is used by the user to send the user name and password. The second is used by the system to allow access. The third is used by the system to deny access.
Limitations of PAP:
 Security Risks: PAP is vulnerable to eavesdropping and replay attacks
because it sends credentials in plain text, allowing potential interception.
 No Re-authentication: Once authenticated, PAP does not periodically
recheck credentials, which is less secure for long-term sessions.
In modern networks, PAP is rarely used alone due to its security
drawbacks and is often replaced or supplemented with more secure
protocols like CHAP, EAP, or TLS.

CHAP
The Challenge Handshake Authentication Protocol (CHAP) is a three-way handshaking authentication protocol that provides greater security than PAP. In this method, the password is kept secret; it is never sent online.
a. The system sends the user a challenge packet
b. The user applies a predefined function that takes the challenge value
and the user’s own password and creates a result. The user sends the
result in the response packet to the system.
c. The system does the same. If the result created is the same as the
result sent in the response packet, access is granted; otherwise, it is
denied. CHAP is more secure than PAP.
Commonly used in dial-up connections or VPNs to authenticate remote users.
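The three-step exchange can be sketched as follows. The MD5 hash over identifier, secret, and challenge follows RFC 1994; packet formats are omitted, and the variable names are simplified.

```python
import hashlib
import os

def chap_response(ident: bytes, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: the response is MD5 over identifier || secret || challenge.
    return hashlib.md5(ident + secret + challenge).digest()

secret = b"shared-password"          # known to both sides, never sent online
ident = b"\x01"                      # packet identifier
challenge = os.urandom(16)           # random challenge sent by the system

response = chap_response(ident, secret, challenge)   # computed by the user
expected = chap_response(ident, secret, challenge)   # recomputed by the system
print("access granted" if response == expected else "access denied")
# access granted
```

Because only the hash travels over the link and the challenge changes every time, an eavesdropper cannot replay a captured response.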
Network Control Protocols
One NCP protocol is the Internet Protocol Control Protocol (IPCP). This
protocol configures the link used to carry IP packets in the Internet.

After the network-layer configuration is completed by one of the NCP protocols, the users can exchange data packets from the network layer.

Multilink PPP
PPP was originally designed for a single-channel point-to-point physical
link. The availability of multiple channels in a single point-to-point link
motivated the development of Multilink PPP. In this case, a logical PPP
frame is divided into several actual PPP frames as shown in Figure
11.28.

Let us go through the phases followed by a network-layer packet as it is transmitted through a PPP connection. Figure 11.29 shows the steps.
Media Access Control (MAC)
When nodes or stations are connected and use a common link, called a
multipoint or broadcast link, we need a multiple-access protocol to
coordinate access to the link. Many protocols have been devised to
handle access to a shared link. All of these protocols belong to a
sublayer in the data-link layer called media access control (MAC).

12.1 RANDOM ACCESS


In a random-access method, each station has the right to the medium without being controlled by any other station. However, if more than one station tries to send, there is an access conflict, called a collision.
To avoid access conflict or to resolve it when it happens, each station
follows procedures defined below.
There are four random-access methods – ALOHA, CSMA, CSMA/CD, and CSMA/CA.

12.1.1 ALOHA
Pure ALOHA
The original ALOHA protocol is called pure ALOHA. The idea is that
each station sends a frame whenever it has a frame to send. However,
since there is only one channel to share, there is the possibility of
collision between frames from different stations.
The figure shows that each station sends two frames; there are a total
of eight frames on the shared medium. Some of these frames collide
because multiple frames are in contention for the shared channel.
Figure 12.2 shows that only two frames survive: one frame from station
1 and one frame from station 3. We need to mention that even if one bit
of a frame coexists on the channel with one bit from another frame,
there is a collision and both will be destroyed. It is obvious that we need
to resend the frames that have been destroyed during transmission.

The pure ALOHA protocol relies on acknowledgments from the receiver.


When a station sends a frame, it expects the receiver to send an
acknowledgment. If the acknowledgment does not arrive after a time-out
period, the station assumes that the frame (or the acknowledgment) has
been destroyed and resends the frame.
A collision involves two or more stations. If all these stations try to
resend their frames after the time-out, the frames will collide again. Pure
ALOHA dictates that when the time-out period passes, each station
waits a random amount of time before resending its frame. The
randomness will help avoid more collisions. We call this time the backoff
time TB
Pure ALOHA has a second method to prevent congesting the channel
with retransmitted frames. After a maximum number of retransmission
attempts Kmax, a station must give up and try later.
Vulnerable time is the length of time in which there is a possibility of collision.
The vulnerable time during which a collision may occur in pure ALOHA is 2 times the frame transmission time:
Pure ALOHA vulnerable time = 2 × Tfr, where Tfr is the frame transmission time.
If station B has to send a frame without collision at time t, no other station should start sending a frame between t − Tfr and t + Tfr. This interval of length 2 × Tfr is the vulnerable time.

Example 12.2
A pure ALOHA network transmits 200-bit frames on a shared channel of
200 kbps. What is the requirement to make this frame collision-free?
Solution: Average frame transmission time Tfr is 200 bits/200 kbps or 1
ms. The vulnerable time is 2 × 1 ms = 2 ms. This means no station
should send later than 1 ms before this station starts transmission and
no station should start sending during the period (1 ms) that this station
is sending.

Throughput:
Throughput refers to the fraction of successfully transmitted frames over the total transmission attempts in a network over time. Let G be the average number of frames generated by the system during one frame transmission time. For pure ALOHA, the throughput is S = G × e^(−2G); the maximum throughput is Smax = 0.184 when G = 1/2.

Example 12.3: A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the throughput if the system (all stations together) produces 1000 frames per second?
Solution: The frame transmission time is 200 bits/200 kbps or 1 ms. If the system creates 1000 frames per second, or 1 frame per millisecond, then G = 1. In this case S = G × e^(−2G) = 0.135 (13.5 percent). This means that the throughput is 1000 × 0.135 = 135 frames. Only 135 frames out of 1000 will probably survive.
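The calculation in Example 12.3 can be checked with a short helper (the function and parameter names are illustrative):

```python
import math

def pure_aloha(frames_per_second, frame_bits, bandwidth_bps):
    """Return (G, S) for a pure ALOHA channel."""
    tfr = frame_bits / bandwidth_bps      # frame transmission time, seconds
    G = frames_per_second * tfr           # average frames per frame time
    S = G * math.exp(-2 * G)              # pure ALOHA throughput
    return G, S

G, S = pure_aloha(1000, 200, 200_000)
print(f"G = {G}, S = {S:.3f}")            # G = 1.0, S = 0.135
print(f"about {1000 * S:.0f} of 1000 frames survive")   # about 135
```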

Slotted ALOHA:
Slotted ALOHA was invented to improve the efficiency of pure ALOHA.
In slotted ALOHA we divide the time into slots of Tfr seconds and force
the station to send only at the beginning of the time slot.

There is the possibility of collision if two stations try to send at the beginning of the same time slot. The vulnerable time is now reduced to Tfr.
Slotted ALOHA vulnerable time = Tfr

Throughput
Example 12.4 A slotted ALOHA network transmits 200-bit frames using
a shared channel with a 200-kbps bandwidth. Find the throughput if the
system (all stations together) produces 1000 frames per second.
Solution - This situation is similar to the previous exercise except that
the network is using slotted ALOHA instead of pure ALOHA. The frame
transmission time is 200/200 kbps or 1 ms. In this case G is 1. So S = G × e^(−G) = 0.368 (36.8 percent). This means that the throughput is 1000 × 0.368 = 368 frames. Only 368 out of 1000 frames will probably survive.
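As a side note, the theoretical maxima of the two schemes, obtained by setting dS/dG = 0, can be verified numerically:

```python
import math

# dS/dG = 0 gives G = 1/2 for pure ALOHA and G = 1 for slotted ALOHA.
S_pure_max = 0.5 * math.exp(-2 * 0.5)      # G*e^(-2G) at G = 1/2, i.e. 1/(2e)
S_slotted_max = 1.0 * math.exp(-1.0)       # G*e^(-G)  at G = 1,   i.e. 1/e
print(f"pure ALOHA maximum throughput:    {S_pure_max:.3f}")    # 0.184
print(f"slotted ALOHA maximum throughput: {S_slotted_max:.3f}") # 0.368
```

Slotted ALOHA therefore doubles the best-case channel utilization compared with pure ALOHA, because halving the vulnerable time halves the chance of collision.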

12.1.2 CSMA
To minimize the chance of collision and, therefore, increase the
performance, the CSMA method was developed. The chance of collision
can be reduced if a station senses the medium before trying to use it.
Carrier sense multiple access (CSMA) requires that each station first
listen to the medium (or check the state of the medium) before sending.
In other words, CSMA is based on the principle “sense before transmit”
or “listen before talk.”
Vulnerable Time
The vulnerable time for CSMA is the propagation time Tp. This is the
time needed for a signal to propagate from one end of the medium to
the other.

Persistence Methods – there are three persistence methods in CSMA: the 1-persistent method, the nonpersistent method, and the p-persistent method.

1-Persistent - In this method, after the station finds the line idle, it sends
its frame immediately (with probability 1). This method has the highest
chance of collision because two or more stations may find the line idle
and send their frames immediately.
Otherwise, if the channel is busy, the station just waits until it becomes
idle. Then the station transmits a frame.
Nonpersistent –
In the nonpersistent method, a station that has a frame to send senses
the line. If the line is idle, it sends immediately. If the line is not idle, it
waits a random amount of time and then senses the line again. The
nonpersistent approach reduces the chance of collision because it is
unlikely that two or more stations will wait the same amount of time and
retry to send simultaneously.
p-Persistent
In this method, after the station finds the line idle it follows these steps:
1. With probability p, the station sends its frame.
2. With probability q = 1 − p, the station waits for the beginning of the
next time slot and checks the line again.
a. If the line is idle, it goes to step 1.
b. If the line is busy, it acts as though a collision has occurred and uses the backoff procedure (defined in CSMA/CD).
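The steps above can be sketched as a loop; `channel_idle`, `send`, and `backoff` are hypothetical callbacks standing in for carrier sensing, transmission, and the CSMA/CD backoff procedure.

```python
import random

def p_persistent_send(p, channel_idle, send, backoff, max_slots=1000):
    """Sketch of the p-persistent decision loop."""
    for _ in range(max_slots):
        if not channel_idle():
            backoff()                 # busy line: act as a collision
            return False
        if random.random() < p:       # with probability p, transmit now
            send()
            return True
        # with probability 1 - p: wait for the next slot, sense again
    return False

sent = []
p_persistent_send(0.5, lambda: True, lambda: sent.append("frame"),
                  lambda: None)
print(sent)   # ['frame'] once a slot's coin flip succeeds
```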
12.1.3 CSMA/CD
The CSMA method does not specify the procedure following a collision.
Carrier sense multiple access with collision detection (CSMA/CD)
augments the algorithm to handle the collision. In this method, a station
monitors the medium after it sends a frame to see if the transmission was
successful. If there is a collision, the frame is sent again.
Now let us look at the flow diagram for CSMA/CD in Figure 12.13.
In CSMA/CD, transmission and collision detection are continuous
processes. We do not send the entire frame and then look for a collision.
The station transmits and receives continuously and simultaneously.

It sends a short jamming signal to make sure that all other stations
become aware of the collision.
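After a collision, classic Ethernet waits using binary exponential backoff before retransmitting. A minimal sketch follows; the truncation value `k_max` and the 51.2-μs slot time follow 10-Mbps Ethernet conventions, but treat them as assumptions here.

```python
import random

def backoff_time(attempt, slot_time, k_max=10):
    """Wait R slot times, R drawn uniformly from [0, 2^K - 1],
    where K = min(attempt, k_max)."""
    k = min(attempt, k_max)
    r = random.randint(0, 2**k - 1)
    return r * slot_time

slot = 51.2e-6                  # classic Ethernet slot time: 512 bit times
print(backoff_time(3, slot))    # one of 0, 51.2 us, ..., 7 * 51.2 us
```

Doubling the range after each collision quickly spreads the stations' retry times apart, which is why repeated collisions become less and less likely.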

Throughput
The throughput of CSMA/CD is greater than that of pure or slotted ALOHA.
For the 1-persistent method, the maximum throughput is around 50
percent when G = 1.

12.1.4 CSMA/CA
Carrier sense multiple access with collision avoidance (CSMA/CA) was
invented for wireless networks. Collisions are avoided through the use of
CSMA/CA’s three strategies: the interframe space, the contention window,
and acknowledgments, as shown in Figure 12.15.
Interframe Space (IFS). First, collisions are avoided by deferring
transmission even if the channel is found idle. When an idle channel is
found, the station does not send immediately. It waits for a period of time
called the interframe space or IFS.
Contention Window. The contention window is an amount of time divided
into slots. A station that is ready to send chooses a random number of
slots as its wait time.
Acknowledgment. With all these precautions, there still may be a collision
resulting in destroyed data. In addition, the data may be corrupted during
the transmission. The positive acknowledgment and the time-out timer can
help guarantee that the receiver has received the frame.
CSMA/CA was mostly intended for use in wireless networks.
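The contention-window doubling can be sketched as below. The window-doubling pattern mirrors IEEE 802.11's behavior, but the specific CW_MIN and CW_MAX values are assumptions for illustration (real 802.11 parameters vary by PHY).

```python
import random

CW_MIN, CW_MAX = 15, 1023   # assumed minimum and maximum window sizes

def contention_slots(failed_attempts):
    """Pick a random wait (in slots) from a window that doubles after
    each failed attempt, capped at CW_MAX."""
    cw = min((CW_MIN + 1) * 2**failed_attempts - 1, CW_MAX)
    return random.randint(0, cw)

print(contention_slots(0))   # somewhere in 0..15
print(contention_slots(2))   # somewhere in 0..63
```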

12.2 CONTROLLED ACCESS


In controlled access, the stations consult one another to find which station
has the right to send. A station cannot send unless it has been authorized
by other stations.

There are three controlled-access methods: reservation, polling, and token passing.
12.2.1 Reservation
In the reservation method, a station needs to make a reservation before
sending data.
If there are N stations in the system, there are exactly N reservation
minislots in the reservation frame. Each minislot belongs to a station.
When a station needs to send a data frame, it makes a reservation in its
own minislot.
The stations that have made reservations can send their data frames after
the reservation frame.
Figure 12.18 shows a situation with five stations and a five-minislot
reservation frame. In the first interval, only stations 1, 3, and 4 have made
reservations. In the second interval, only station 1 has made a reservation.
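The minislot bookkeeping for the first interval above can be sketched as follows (the function and variable names are illustrative):

```python
def reservation_interval(requests, n_stations=5):
    """requests: set of station numbers (1-based) that set their minislot.
    Returns the minislot bits and the stations allowed to send."""
    minislots = [i in requests for i in range(1, n_stations + 1)]
    senders = [i for i in range(1, n_stations + 1) if i in requests]
    return minislots, senders

# First interval of Figure 12.18: stations 1, 3, and 4 make reservations.
slots, senders = reservation_interval({1, 3, 4})
print(slots)     # [True, False, True, True, False]
print(senders)   # [1, 3, 4]
```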

12.2.2 Polling
Polling works with topologies in which one device is designated as a
primary station and the other devices are secondary stations. The primary
device controls the link; the secondary devices follow its instructions. This
method uses poll and select functions to prevent collisions.
Select - The select function is used whenever the primary device has
something to send. The primary must alert the secondary to the upcoming
transmission and wait for an acknowledgment of the secondary’s ready
status. Before sending data, the primary creates and transmits a select
(SEL) frame.
Poll - When the primary is ready to receive data, it must ask (poll) each
device in turn if it has anything to send. It first asks the first device. If the
first device doesn’t have data, it sends NAK. Then, the primary asks
second device and so on. If a device has data, it sends the data to
primary. Primary will forward the data to other machines.

However, the drawback is if the primary station fails, the system goes
down.

12.2.3 Token Passing


In the token-passing method, the stations in a network are organized in a
logical ring.
In this method, a special packet called a token circulates through the ring.
The possession of the token gives the station the right to access the
channel and send its data. When a station has some data to send, it waits
until it receives the token. It then holds the token and sends its data. When
the station has no more data to send, it releases the token, passing it to
the next station in the ring.
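One rotation of the token around the logical ring can be sketched as follows; the one-frame-per-token policy and the station names are illustrative assumptions.

```python
def token_round(stations):
    """stations: list of (name, queue) pairs in ring order. The token
    visits each station once; a holder with queued data sends at most
    one frame, then passes the token to the next station."""
    transmissions = []
    for name, queue in stations:
        if queue:
            transmissions.append((name, queue.pop(0)))
    return transmissions

ring = [("A", ["a1"]), ("B", []), ("C", ["c1", "c2"])]
print(token_round(ring))   # [('A', 'a1'), ('C', 'c1')]
print(token_round(ring))   # [('C', 'c2')]
```

Because only the token holder may transmit, collisions are impossible; the cost is the delay of waiting for the token even when the channel is otherwise idle.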

Figure 12.20 shows four different physical topologies that can create a
logical ring.
In the physical ring topology, when a station sends the token, only the next
station can see the token.
The dual ring topology uses a second (auxiliary) ring which operates in the
reverse direction compared with the main ring. The second ring is for
emergencies only

In the bus ring topology, also called a token bus, the stations are
connected to a single cable called a bus.

In a star ring topology, the physical topology is a star where all the stations
are connected to a hub.
