
Chapter 2 Module 2


Data Link Layer

Dr. R. Sreekanth
Associate Professor
NHCE
Contents

• Data Link Layer Design issues,
• Services provided to Network Layer,
• Framing,
• Error Detection and Correction Codes,
• Data Link Protocols and Sliding Window Protocols: Elementary Data Link Protocols, Unrestricted Simplex Protocol, Simplex Stop-and-Wait Protocol, Simplex Protocol for a Noisy Channel,
• ARQ, Go-Back-N ARQ Method, Selective-Repeat ARQ.
• Medium Access Sublayer:
• Multiple access protocols and Examples: ALOHA, Pure ALOHA, Slotted
ALOHA Protocol,
• Ethernet: Carrier Sense Multiple Access (CSMA), frame format of CSMA,
• Types of CSMA, CSMA with Collision Detection (CSMA/CD),
• Ethernet LAN (802.3) frame format, Wireless LAN, Bluetooth, spanning tree.
• The data link layer in the OSI (Open System Interconnections)
Model, is in between the physical layer and the network layer.
This layer converts the raw transmission facility provided by the
physical layer to a reliable and error-free link.
• The main functions and the design issues of this layer are
• Providing services to the network layer
• Framing
• Error Control
• Flow Control
Services to the Network Layer

• In the OSI Model, each layer uses the services of the layer
below it and provides services to the layer above it.
• The data link layer uses the services offered by the physical
layer. The primary function of this layer is to provide a well
defined service interface to network layer above it.
• The types of services provided can be of three types −
• Unacknowledged connectionless service
• Acknowledged connectionless service
• Acknowledged connection - oriented service
Unacknowledged connectionless service

• Here, the data link layer of the sending machine sends independent frames to the data link layer of the receiving machine.
• The receiving machine does not acknowledge receiving the frame. No logical connection is set up between the host machines. Errors and data loss are not handled in this service. This is applicable in Ethernet services and voice communications.
Acknowledged connectionless service

• Here, no logical connection is set up between the host machines, but each frame sent by the source machine is acknowledged by the destination machine on receiving.
• If the source does not receive the acknowledgment within a stipulated time, then it resends the frame. This is used in Wi-Fi (IEEE 802.11) services.
Acknowledged connection - oriented service
• This is the best service that the data link layer can offer to the
network layer. A logical connection is set up between the two
machines and the data is transmitted along this logical path.
The frames are numbered, which keeps track of lost frames
and also ensures that frames are received in the correct order.
• The service has three distinct phases −
• Set up of connection – A logical path is set up between the
source and the destination machines. Buffers and counters are
initialised to keep track of frames.
• Sending frames – The frames are transmitted.
• Release connection – The connection is released, buffers and
other resources are released.
• It is appropriate for satellite communications and long-distance
telephone circuits.
Framing

• The data link layer encapsulates (encloses) each data packet from the network layer into frames that are then transmitted.
• A frame has three parts, namely −
• Frame Header
• Payload field that contains the data packet from network layer
• Trailer
Error Control

• The data link layer ensures an error-free link for data transmission.
The issues it caters to with respect to error control are −
• Dealing with transmission errors
• Sending acknowledgement frames in reliable connections
• Retransmitting lost frames
• Identifying duplicate frames and deleting them
• Controlling access to shared channels in case of broadcasting
Flow Control

• The data link layer regulates flow control so that a fast sender
does not drown a slow receiver. When the sender sends frames
at very high speeds, a slow receiver may not be able to handle
it. There will be frame losses even if the transmission is error-
free. The two common approaches for flow control are −
• Feedback based flow control
• Rate based flow control
Data Link Layer Frame
• A frame is a unit of communication in the data link layer.
• To provide service to the network layer, the data link layer must
use the service provided to it by the physical layer.
• What the physical layer does is accept a raw bit stream and attempt to deliver it to the destination. This bit stream is not guaranteed to be error free.
• The number of bits received may be less than, equal to, or more
than the number of bits transmitted, and they may have different
values. It is up to the data link layer to detect and, if necessary,
correct errors.
Fields of a Data Link Layer Frame

• A data link layer frame has the following parts:
• Frame Header: It contains the source and the destination
addresses of the frame and the control bytes.
• Payload field: It contains the message to be delivered.
• Trailer: It contains the error detection and error correction bits.
It is also called a Frame Check Sequence (FCS).
• Flag: Two flags at the two ends mark the beginning and the end of the frame.
Frame Header

• A frame header contains the destination address, the source address and three control fields kind, seq, and ack serving the following purposes:
• kind: This field states whether the frame is a data frame or it is
used for control functions like error and flow control or link
management etc.
• seq: This contains the sequence number of the frame for rearrangement of out-of-sequence frames and sending acknowledgments by the receiver.
• ack: This contains the acknowledgment number of some frame.
Specific Data Link Layer Frames

• The structure of the data link layer frame may be specialized according to the type of protocol used. Let us study the frame structure used in two protocols: Point-to-Point Protocol (PPP) and High-level Data Link Control (HDLC).
Point-to-Point Protocol

• Point-to-Point Protocol (PPP) is a communication protocol of the data link layer that is used to transmit multiprotocol data between two directly connected (point-to-point) computers. The fields of a PPP frame are:
• Flag: It is 1 byte with the bit pattern 01111110.
• Address: 1 byte which is set to 11111111 in case of broadcast.
• Control: 1 byte set to a constant value of 11000000.
• Protocol: 1 or 2 bytes that define the type of data contained in the payload field.
• Payload: This carries the data from the network layer. The maximum length of
the payload field is 1500 bytes.
• FCS (Frame Check sequence): It is a 2 byte or 4 bytes frame check sequence
for error detection. The standard code used is CRC (cyclic redundancy code).
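As a rough sketch of the field layout listed above (not a real PPP implementation: byte stuffing and actual CRC computation are omitted, and the field constants simply mirror the bit patterns given in this slide), the frame can be assembled like this:

```python
FLAG    = 0b01111110   # frame delimiter, one at each end
ADDRESS = 0b11111111   # broadcast address
CONTROL = 0b11000000   # constant control value as given in the slide

def ppp_frame(protocol: bytes, payload: bytes, fcs: bytes) -> bytes:
    """Lay out the PPP fields in order: Flag, Address, Control, Protocol, Payload, FCS, Flag."""
    if len(payload) > 1500:
        raise ValueError("PPP payload is limited to 1500 bytes")
    return bytes([FLAG, ADDRESS, CONTROL]) + protocol + payload + fcs + bytes([FLAG])

frame = ppp_frame(b"\x00\x21", b"hello", b"\x00\x00")
```

The 1- or 2-byte protocol field and the 2- or 4-byte FCS are passed in as raw bytes here; computing a real FCS is covered in the CRC section below.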
High-level Data Link Control

• High-level Data Link Control (HDLC) is a group of communication protocols of the data link layer for transmitting data between network points or nodes. The fields of an HDLC frame are:
• Flag: It is an 8-bit sequence with bit pattern 01111110.
• Address: It contains the address of the receiver. The address field may be
from 1 byte to several bytes.
• Control: It is 1 or 2 bytes containing flow and error control information.
• Payload: This carries the data from the network layer. Its length may vary
from one network to another.
• FCS: It is a 2 byte or 4 bytes frame check sequence for error detection.
The standard code used is CRC (cyclic redundancy code)
Error Detection and Correction in Data link Layer

• The data link layer uses error control techniques to ensure that frames, i.e. bit streams of data, are transmitted from the source to the destination with a certain extent of accuracy.
• Errors
• When bits are transmitted over the computer network, they are subject to corruption due to interference and network problems. The corrupted bits lead to spurious data being received by the destination and are called errors.
• Types of Errors
• Errors can be of three types, namely single bit errors, multiple
bit errors, and burst errors.
• Single bit error − In the received frame, only one bit has been
corrupted, i.e. either changed from 0 to 1 or from 1 to 0.
• Multiple bit error − In the received frame, more than one bit is corrupted.
• Burst error − In the received frame, multiple consecutive bits are corrupted.
Error Control

• Error control can be done in two ways:
• Error detection − Error detection involves checking whether
any error has occurred or not. The number of error bits and the
type of error does not matter.
• Error correction − Error correction involves ascertaining the exact number of bits that have been corrupted and the location of the corrupted bits.
• For both error detection and error correction, the sender needs
to send some additional bits along with the data bits.
• The receiver performs necessary checks based upon the
additional redundant bits.
• If it finds that the data is free from errors, it removes the
redundant bits before passing the message to the upper layers.
Error Detection Techniques

• There are three main techniques for detecting errors in frames: Parity Check, Checksum and Cyclic Redundancy Check (CRC).
• Parity Check
• The parity check is done by adding an extra bit, called the parity bit, to the data to make the number of 1s either even in the case of even parity or odd in the case of odd parity.
• While creating a frame, the sender counts the number of 1s in it and adds the parity bit in the following way:
• In case of even parity: If the number of 1s is even, the parity bit value is 0. If the number of 1s is odd, the parity bit value is 1.
• In case of odd parity: If the number of 1s is odd, the parity bit value is 0. If the number of 1s is even, the parity bit value is 1.
• On receiving a frame, the receiver counts the number of 1s in it.
In case of even parity check, if the count of 1s is even, the
frame is accepted, otherwise, it is rejected. A similar rule is
adopted for odd parity check.
• The parity check is suitable for single bit error detection only.
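The sender-side and receiver-side rules above can be sketched in a few lines of Python (a minimal illustration, not a production error detector):

```python
def parity_bit(data: str, even: bool = True) -> str:
    """Parity bit for a bit string: makes the total number of 1s even (or odd)."""
    ones = data.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

def accept_even_parity(frame: str) -> bool:
    """Receiver side: accept only if the total count of 1s is even."""
    return frame.count("1") % 2 == 0

data = "1011001"                   # four 1s, so the even-parity bit is 0
frame = data + parity_bit(data)
```

A single flipped bit changes the count of 1s by one, so the check fails; two flipped bits cancel out, which is why parity detects single bit errors only.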
Checksum

• In this error detection scheme, the following procedure is applied:
• Data is divided into fixed-sized frames or segments.
• The sender adds the segments using 1’s complement arithmetic
to get the sum. It then complements the sum to get the checksum
and sends it along with the data frames.
• The receiver adds the incoming segments along with the
checksum using 1’s complement arithmetic to get the sum and
then complements it.
• If the result is zero, the received frames are accepted; otherwise,
they are discarded.
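The procedure above can be sketched with 8-bit segments (the segment width is an assumption for illustration; the Internet checksum, for instance, uses 16-bit words):

```python
def ones_complement_sum(segments, width=8):
    """Add segments with end-around carry (1's complement arithmetic)."""
    mask = (1 << width) - 1
    total = 0
    for s in segments:
        total += s
        while total >> width:                     # wrap the carry back in
            total = (total & mask) + (total >> width)
    return total

def make_checksum(segments, width=8):
    """Sender: complement of the 1's complement sum."""
    return (~ones_complement_sum(segments, width)) & ((1 << width) - 1)

def verify(segments, checksum, width=8):
    """Receiver: sum of segments plus checksum, complemented, must be zero."""
    s = ones_complement_sum(list(segments) + [checksum], width)
    return ((~s) & ((1 << width) - 1)) == 0

segments = [0b10011001, 0b11100010]
checksum = make_checksum(segments)
```

Adding the checksum back to the segments yields the all-1s word, whose complement is zero, which is exactly the receiver's acceptance test.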
Cyclic Redundancy Check (CRC)

• Cyclic Redundancy Check (CRC) involves binary division of the data bits being sent by a predetermined divisor agreed upon by the communicating systems. The divisor is generated using polynomials.
• Here, the sender performs binary division of the data segment by
the divisor. It then appends the remainder called CRC bits to the
end of the data segment. This makes the resulting data unit exactly
divisible by the divisor.
• The receiver divides the incoming data unit by the divisor. If there is
no remainder, the data unit is assumed to be correct and is
accepted. Otherwise, it is understood that the data is corrupted and
is therefore rejected.
Example-

• Consider the CRC generator x^7 + x^6 + x^4 + x^3 + x + 1.
The corresponding binary pattern is 11011011.
Problem-01:
 
A bit stream 1101011011 is transmitted using the standard CRC method. The generator polynomial is x^4 + x + 1.
What is the actual bit string transmitted?

• The generator polynomial G(x) = x^4 + x + 1 is encoded as 10011.
• Clearly, the generator polynomial consists of 5 bits.
• So, a string of 4 zeroes is appended to the bit stream to be transmitted.
• The resulting bit stream is 11010110110000.
Binary division is performed
• From here, CRC = 1110.
• Now,
• The code word to be transmitted is obtained by replacing the last 4
zeroes of 11010110110000 with the CRC.
• Thus, the code word transmitted to the receiver = 11010110111110.
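The worked example can be checked with a short modulo-2 division routine (a sketch of the textbook algorithm, operating on bit strings):

```python
def mod2_div(bits: str, generator: str) -> str:
    """XOR-based long division; returns the last len(generator)-1 bits as the remainder."""
    rem = list(bits)
    for i in range(len(bits) - len(generator) + 1):
        if rem[i] == "1":                         # leading 1: subtract (XOR) the generator
            for j, g in enumerate(generator):
                rem[i + j] = str(int(rem[i + j]) ^ int(g))
    return "".join(rem[-(len(generator) - 1):])

data, gen = "1101011011", "10011"     # G(x) = x^4 + x + 1
crc = mod2_div(data + "0000", gen)    # sender appends 4 zero bits before dividing
codeword = data + crc
```

Dividing the received codeword by the same generator leaves a zero remainder when no error occurred, which is the receiver's acceptance test.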
Practice problems
• Problem-02:

• A bit stream 10011101 is transmitted using the standard CRC method. The generator polynomial is x^3 + 1.
1.What is the actual bit string transmitted?
2.Suppose the third bit from the left is inverted during transmission.
How will receiver detect this error?
Error Correction Techniques

• Error correction techniques find out the exact number of bits that have been corrupted as well as their locations. There are two principal ways:

• Backward Error Correction (Retransmission) − If the receiver detects an error in the incoming frame, it requests the sender to retransmit the frame. It is a relatively simple technique. But it can be efficiently used only where retransmitting is not expensive, as in fiber optics, and the time for retransmission is low relative to the requirements of the application.
• Forward Error Correction − If the receiver detects some error in the incoming frame, it executes error-correcting code that regenerates the actual frame. This saves the bandwidth required for retransmission. It is indispensable in real-time systems. However, if there are too many errors, the frames need to be retransmitted.
• The four main error correction codes are
• Hamming Codes
• Binary Convolution Code
• Reed – Solomon Code
• Low-Density Parity-Check Code
Hamming codes
• Hamming code is a set of error-correction codes that can be used to detect and correct the errors that can occur when data is moved or stored from the sender to the receiver. It is a technique developed by R.W. Hamming for error correction.
• While transmitting the message, it is encoded with the
redundant bits. The redundant bits are the extra bits that are
placed at certain locations of the data bits to detect the error. At
the receiver end, the code is decoded to detect errors and the
original message is received.
Redundant bits

The number of redundant bits can be calculated using the following formula:

2^P ≥ n + P + 1, where P = number of redundant bits, n = number of data bits

For example, if 4-bit information is to be transmitted, then n = 4.
The number of redundant bits is determined by the trial and error method.
Let P = 2: then 2^2 = 4 and n + P + 1 = 7.
The equation is not satisfied, since 4 is not greater than or equal to 7.
So let's choose another value, P = 3: then 2^3 = 8 and n + P + 1 = 8.
Now the equation satisfies the condition. So the number of redundant bits is P = 3.
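The trial and error search above can be written directly from the inequality (a small helper for illustration):

```python
def redundant_bits(n: int) -> int:
    """Smallest P with 2**P >= n + P + 1, found by trial as in the text."""
    p = 1
    while 2 ** p < n + p + 1:
        p += 1
    return p
```

For 4 data bits this returns P = 3, matching the derivation above; for the 5-bit word used later it returns P = 4.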
Choosing the location of redundant bits

• For the above example, the number of data bits n=4, and the
number of redundant bits P=3. So the message consists of 7 bits
in total that are to be coded. Let the rightmost bit be designated
as bit 1, the next successive bit as bit 2 and so on.
• The seven bits are bit 7, bit 6, bit 5, bit 4, bit 3, bit 2, bit 1.
• In this, the redundant bits are placed at the positions that are
numbered corresponding to the power of 2, i.e., 1, 2, 4, 8,… Thus
the locations of data bit and redundant bit are D4, D3, D2, P3,
D1, P2, P1.
Assigning the values to redundant bits

• Now it is time to assign bit values to the redundant bits in the formed Hamming code group. The assigned bits are called parity bits.
• Each parity bit will check certain other bits in the total code group. This is done with the bit location table, as shown below.
Bit Location            7    6    5    4    3    2    1
Bit Designation         D4   D3   D2   P3   D1   P2   P1
Binary Representation   111  110  101  100  011  010  001
Information / Data bits D4   D3   D2        D1
Parity bits                            P3        P2   P1
• Parity bit P1 covers all data bits in positions whose binary representation has 1 in the least significant position (001, 011, 101, 111, etc.). Thus P1 checks the bits in locations 1, 3, 5, 7, 9, 11, etc.
• Parity bit P2 covers all data bits in positions whose binary
representation has 1 in the second least significant position(010,
011, 110, 111, etc.). Thus P2 checks the bit in locations 2, 3, 6, 7,
etc.
• Parity bit P3 covers all data bits in positions whose binary
representation has 1 in the third least significant position(100,
101, 110, 111, etc.). Thus P3 checks the bit in locations 4, 5, 6, 7,
etc.
• Each parity bit checks the corresponding bit locations and is assigned the value 1 or 0, so as to make the number of 1s even for even parity and odd for odd parity.
Example problem 1
Encode the binary word 11001 into the even parity Hamming code.
Given, number of data bits, n = 5.
To find the number of redundant bits, let us try P = 4:
2^P ≥ n + P + 1
2^4 = 16 ≥ 5 + 4 + 1 = 10
The equation is satisfied, so 4 redundant bits are selected.
Total code bits = n + P = 9.
The redundant bits are placed at bit positions 1, 2, 4 and 8.
Construct the bit location table.
Bit Location            9     8     7     6     5     4     3     2     1
Bit Designation         D5    P4    D4    D3    D2    P3    D1    P2    P1
Binary Representation   1001  1000  0111  0110  0101  0100  0011  0010  0001
Information / Data bits 1           1     0     0           1
Parity bits                   1                       1           0     1
• To determine the parity bits:
• For P1: Bit locations 3, 5, 7 and 9 have three 1s. To have even parity, P1 must be 1.
• For P2: Bit locations 3, 6, 7 have two 1s. To have even parity, P2 must be 0.
• For P3: Bit locations 5, 6, 7 have one 1. To have even parity, P3 must be 1.
• For P4: Bit locations 8, 9 have one 1. To have even parity, P4 must be 1.
• Thus the encoded 9-bit hamming code is 111001101.
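The whole encoding procedure (place data bits at the non-power-of-two positions, then fill each parity bit by even parity over the positions it covers) can be sketched and checked against the worked example:

```python
def hamming_encode(data: str) -> str:
    """Even-parity Hamming encode; data is given MSB first, e.g. '11001' = D5..D1."""
    n = len(data)
    p = 1
    while 2 ** p < n + p + 1:                 # number of redundant bits, by trial
        p += 1
    total = n + p
    code = [0] * (total + 1)                  # 1-indexed bit positions
    bits = [int(b) for b in reversed(data)]   # bits[0] = D1
    d = 0
    for pos in range(1, total + 1):
        if pos & (pos - 1):                   # not a power of two -> data position
            code[pos] = bits[d]
            d += 1
    for i in range(p):
        mask = 1 << i
        # even parity over every position whose binary representation has bit i set
        code[mask] = sum(code[pos] for pos in range(1, total + 1) if pos & mask) % 2
    return "".join(str(b) for b in reversed(code[1:]))   # highest position first
```

Running it on the word 11001 reproduces the 9-bit code derived step by step above.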
Elementary Data Link Protocols

• Protocols in the data link layer are designed so that this layer
can perform its basic functions: framing, error control and flow
control.
• Framing is the process of dividing bit - streams from physical
layer into data frames whose size ranges from a few hundred to
a few thousand bytes.
• Error control mechanisms deal with transmission errors and retransmission of corrupted and lost frames. Flow control regulates the speed of delivery so that a fast sender does not drown a slow receiver.
Types of Data Link Protocols

• Data link protocols can be broadly divided into two categories, depending on whether the transmission channel is noiseless or noisy.
AUTOMATIC REPEAT REQUEST
Unrestricted Simplex Protocol

• The Simplex protocol is a hypothetical protocol designed for unidirectional data transmission over an ideal channel, i.e. a channel through which transmission can never go wrong. It has distinct procedures for sender and receiver.
• The sender simply sends all its data onto the channel as soon as it is available in its buffer. The receiver is assumed to process all incoming data instantly. It is hypothetical since it does not handle flow control or error control.
• The protocol consists of two distinct procedures, a sender and a
receiver.
• The sender runs in the data link layer of the source machine,
and the receiver runs in the data link layer of the destination
machine.
• No sequence numbers or acknowledgements are used here, so
MAX_SEQ is not needed. The only event type possible is
frame_arrival (i.e., the arrival of an undamaged frame).
• The sender is in an infinite while loop just pumping data out
onto the line as fast as it can.
• The body of the loop consists of three actions: go fetch a packet
from the (always obliging) network layer, construct an outbound
frame using the variable s, and send the frame on its way.
• Only the info field of the frame is used by this protocol, because
the other fields have to do with error and flow control and there
are no errors or flow control restrictions here.
• The receiver is equally simple. Initially, it waits for something to
happen, the only possibility being the arrival of an undamaged
frame.
• Eventually, the frame arrives and the procedure wait_for_event
returns, with event set to frame_arrival (which is ignored
anyway).
• The call to from_physical_layer removes the newly arrived
frame from the hardware buffer and puts it in the variable r,
where the receiver code can get at it. Finally, the data portion is
passed on to the network layer, and the data link layer settles
back to wait for the next frame, effectively suspending itself until
the frame arrives.
Stop-and-Wait Protocol

• The Stop-and-Wait protocol is for a noiseless channel too. It provides unidirectional data transmission without any error control facilities.
• However, it provides for flow control so that a fast sender does not drown a slow receiver. The receiver has a finite buffer size and finite processing speed. The sender can send a frame only when it has received an indication from the receiver that it is available for further data processing.
• The main problem we have to deal with here is how to prevent
the sender from flooding the receiver with data faster than the
latter is able to process them.
• In essence, if the receiver requires a time t to execute
from_physical_layer plus to_network_layer, the sender must
transmit at an average rate less than one frame per time t.
Moreover, if we assume that no automatic buffering and queueing
are done within the receiver's hardware, the sender must never
transmit a new frame until the old one has been fetched by
from_physical_layer, lest the new one overwrite the old one.
• In certain restricted circumstances (e.g., synchronous
transmission and a receiving data link layer fully dedicated to
processing the one input line), it might be possible for the
sender to simply insert a delay into protocol 1 to slow it down
sufficiently to keep from swamping the receiver.
• However, more usually, each data link layer will have several
lines to attend to, and the time interval between a frame arriving
and its being processed may vary considerably.
• If the network designers can calculate the worst-case behavior
of the receiver, they can program the sender to transmit so
slowly that even if every frame suffers the maximum delay,
there will be no overruns. The trouble with this approach is that
it is too conservative. It leads to a bandwidth utilization that is
far below the optimum, unless the best and worst cases are
almost the same (i.e., the variation in the data link layer's
reaction time is small).
• A more general solution to this dilemma is to have the receiver
provide feedback to the sender. After having passed a packet to
its network layer, the receiver sends a little dummy frame back
to the sender which, in effect, gives the sender permission to
transmit the next frame.
• After having sent a frame, the sender is required by the protocol
to bide its time until the little dummy (i.e., acknowledgement)
frame arrives. Using feedback from the receiver to let the
sender know when it may send more data is an example of the
flow control mentioned earlier.
Stop-and-Wait ARQ

• Stop-and-Wait Automatic Repeat Request (Stop-and-Wait ARQ) is a variation of the above protocol with added error control mechanisms, appropriate for noisy channels. The sender keeps a copy of the sent frame.
• It then waits for a finite time to receive a positive acknowledgement from the receiver. If the timer expires or a negative acknowledgement is received, the frame is retransmitted. If a positive acknowledgement is received, then the next frame is sent.
• Stop and Wait ARQ (Automatic Repeat Request) resolves the issues of both error control and flow control by adding three mechanisms:
1. Time Out: if no acknowledgement arrives within a fixed time, the sender retransmits the frame.
2. Sequence Number (Data): lets the receiver recognise and discard duplicate frames.
3. Sequence Number (Acknowledgement): a delayed acknowledgement is resolved by introducing a sequence number for acknowledgements also.
Working of Stop and Wait ARQ:

• 1) Sender A sends a data frame or packet with sequence number 0.
2) Receiver B, after receiving the data frame, sends an acknowledgement with sequence number 1 (the sequence number of the next expected data frame or packet).
3) There is only a one-bit sequence number, which implies that both sender and receiver have a buffer for one frame or packet only.
Characteristics of Stop and Wait ARQ:

• It uses the link between sender and receiver as a half-duplex link
• Throughput = 1 data packet/frame per RTT
• If the bandwidth-delay product is very high, then the stop and wait protocol is not so useful. The sender has to keep waiting for acknowledgements before sending the next packet.
• It is an example of "closed loop OR connection oriented" protocols
• It is a special category of sliding window protocols where the window size is 1
• Irrespective of the number of packets the sender has, the stop and wait protocol requires only 2 sequence numbers, 0 and 1
• So Stop and Wait ARQ may work fine where the propagation delay is very small, for example LAN connections, but performs badly for distant connections like satellite connections.
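The bandwidth-delay observation above has a standard formula: with a = Tp/Tt, Stop-and-Wait utilisation is Tt/(Tt + 2Tp) = 1/(1 + 2a). A small sketch (the link figures are illustrative, not from the text):

```python
def stop_and_wait_utilisation(bandwidth_bps: float, frame_bits: int, tp_s: float) -> float:
    """Fraction of time the link carries data: 1 / (1 + 2a), with a = Tp / Tt."""
    tt = frame_bits / bandwidth_bps      # transmission delay of one frame
    a = tp_s / tt
    return 1 / (1 + 2 * a)

lan = stop_and_wait_utilisation(1e6, 1000, 1e-6)   # microsecond propagation delay
sat = stop_and_wait_utilisation(1e6, 1000, 0.27)   # satellite-hop propagation delay
```

On the short link utilisation stays near 1; on the satellite-like link it collapses to a fraction of a percent, which is exactly why Stop-and-Wait performs badly over long distances.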
Go-Back-N ARQ

• Go-Back-N ARQ provides for sending multiple frames before receiving the acknowledgement for the first frame. It uses the concept of the sliding window, and so is also called a sliding window protocol. The frames are sequentially numbered and a finite number of frames are sent. If the acknowledgement of a frame is not received within the time period, all frames starting from that frame are retransmitted.
Working Principle

• Go-Back-N ARQ uses the concept of protocol pipelining, i.e. sending multiple frames before receiving the acknowledgment for the first frame. The frames are sequentially numbered and a finite number of frames are sent. The maximum number of frames that can be sent depends upon the size of the sending window. If the acknowledgment of a frame is not received within an agreed upon time period, all frames starting from that frame are retransmitted.
• The size of the sending window determines the sequence numbers of the outbound frames. If the sequence number of the frames is an n-bit field, then the range of sequence numbers that can be assigned is 0 to 2^n − 1. Consequently, the maximum size of the sending window is 2^n − 1. Thus, in order to accommodate a sending window of size 2^n − 1, an n-bit sequence number is chosen.
• The sequence numbers are numbered modulo 2^n. For example, with a 2-bit sequence number field the numbers cycle 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, and so on, following the binary sequence 00, 01, 10, 11.
• The size of the receiving window is 1.
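The relation between the sequence number field and the window sizes can be stated in two lines (a small helper for illustration):

```python
def gbn_windows(n_bits: int):
    """For an n-bit sequence field in Go-Back-N:
    (count of sequence numbers, max sender window, receiver window)."""
    count = 2 ** n_bits
    return count, count - 1, 1     # numbers 0..2^n - 1; sender window 2^n - 1; receiver 1

# sequence numbers cycle modulo 2^n, e.g. with n = 2 bits:
cycle = [i % 4 for i in range(10)]
```

With a 2-bit field this gives 4 sequence numbers, a sender window of at most 3, and the receiver window of 1 noted above.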
Selective Repeat ARQ

• This protocol also provides for sending multiple frames before receiving the acknowledgement for the first frame. However, here only the erroneous or lost frames are retransmitted, while the good frames are received and buffered.
• Sliding window protocol is a flow control protocol.
• It allows the sender to send multiple frames before needing the
acknowledgements.
• Go back N and Selective Repeat are the implementations of sliding
window protocol.
PRACTICE PROBLEMS BASED ON SLIDING WINDOW PROTOCOL-

• Problem-01:
A 3000 km long trunk operates at 1.536 Mbps, is used to transmit 64 byte frames and uses a sliding window protocol. If the propagation speed is 6 μsec/km, how many bits should the sequence number field be?
• Solution-

• Given-
• Distance = 3000 km
• Bandwidth = 1.536 Mbps
• Packet size = 64 bytes
• Propagation speed = 6 μsec / km

• Calculating Transmission Delay-
• Transmission delay (Tt)
• = Packet size / Bandwidth
• = 64 bytes / 1.536 Mbps (1 Mb = 10^6 bits)
• = (64 x 8 bits) / (1.536 x 10^6 bits per sec)
• = 333.33 μsec
Calculating Propagation Delay-
• For 1 km, propagation delay = 6 μsec
• For 3000 km, propagation delay = 3000 x 6 μsec = 18000 μsec

• Calculating Value Of ‘a’-

• a = Tp / Tt
• a = 18000 μsec / 333.33 μsec
• a = 54
• Calculating Bits Required in Sequence Number Field-
• Bits required in sequence number field
• = ⌈log2(1+2a)⌉
• = ⌈log2(1 + 2 x 54)⌉
• = ⌈log2(109)⌉
• = ⌈6.77⌉
• = 7 bits
• Thus, minimum number of bits required in sequence number field = 7.
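The steps of Problem-01 can be folded into one function (a sketch of the same arithmetic):

```python
import math

def seq_field_bits(distance_km, bandwidth_bps, frame_bytes, prop_us_per_km):
    """Minimum sequence-number bits: ceil(log2(1 + 2a)), with a = Tp / Tt."""
    tt = frame_bytes * 8 / bandwidth_bps          # transmission delay (s)
    tp = distance_km * prop_us_per_km * 1e-6      # propagation delay (s)
    a = tp / tt
    return math.ceil(math.log2(1 + 2 * a))
```

Calling `seq_field_bits(3000, 1.536e6, 64, 6)` reproduces the 7-bit answer derived above.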


Problem - 2
• Compute the approximate optimal window size when packet size is 53 bytes, RTT is 60 msec and bottleneck bandwidth is 155 Mbps.

• Solution-
• Given-
• Packet size = 53 bytes
• RTT = 60 msec
• Bandwidth = 155 Mbps

• Calculating Transmission Delay-
• Transmission delay (Tt)
• = Packet size / Bandwidth
• = 53 bytes / 155 Mbps
• = (53 x 8 bits) / (155 x 10^6 bits per sec)
• = 2.735 μsec
Calculating Propagation Delay-
• Propagation delay (Tp)
• = Round Trip Time / 2
• = 60 msec / 2
• = 30 msec
• Calculating Value of ‘a’-
• a = Tp / Tt
• a = 30 msec / 2.735 μsec
• a = 10968.92

Calculating Optimal Window Size-
• Optimal window size
• = 1 + 2a
• = 1 + 2 x 10968.92
• = 21938.84
• Thus, approximate optimal window size = 21938 frames.
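The same arithmetic as one function. Note that carrying Tt at full precision gives ≈ 21935 frames; the slightly larger figure above comes from rounding Tt to 2.735 μsec before dividing.

```python
def optimal_window(packet_bytes, bandwidth_bps, rtt_s):
    """Optimal window = 1 + 2a, with Tp = RTT / 2 and a = Tp / Tt."""
    tt = packet_bytes * 8 / bandwidth_bps
    a = (rtt_s / 2) / tt
    return 1 + 2 * a

w = optimal_window(53, 155e6, 60e-3)
```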
• Problem-03:
• A sliding window protocol is designed for a 1 Mbps point to point link to the moon which has a one way latency (delay) of 1.25 sec. Assuming that each frame carries 1 KB of data, what is the minimum number of bits needed for the sequence number?

• Solution-
• Given-
• Bandwidth = 1 Mbps
• Propagation delay (Tp) = 1.25 sec
• Packet size = 1 KB

• Calculating Transmission Delay-
• Transmission delay (Tt)
• = Packet size / Bandwidth
• = 1 KB / 1 Mbps
• = (2^10 x 8 bits) / (10^6 bits per sec)
• = 8.192 msec

• Calculating Value of ‘a’-
• a = Tp / Tt
• a = 1.25 sec / 8.192 msec
• a = 152.59

• Calculating Bits Required in Sequence Number Field-
• Bits required in sequence number field
• = ⌈log2(1+2a)⌉
• = ⌈log2(1 + 2 x 152.59)⌉
• = ⌈log2(306.18)⌉
• = ⌈8.25⌉
• = 9 bits

• Thus, minimum number of bits required in sequence number field = 9.
Medium Access Sublayer
• The medium access control (MAC) layer is a sublayer of the data link layer of the open system interconnections (OSI) reference model for data transmission. It is responsible for flow control and multiplexing for the transmission medium. It controls the transmission of data packets via remotely shared channels. It sends data over the network interface card.
MAC Layer in the OSI Model

• The Open System Interconnections (OSI) model is a layered networking framework that conceptualizes how communications should be done between heterogeneous systems. The data link layer is the second lowest layer. It is divided into two sublayers −
• The logical link control (LLC) sublayer
• The medium access control (MAC) sublayer
• The following diagram depicts the position of the MAC layer −
Functions of MAC Layer

• It provides an abstraction of the physical layer to the LLC and upper layers of
the OSI network.
• It is responsible for encapsulating frames so that they are suitable for
transmission via the physical medium.
• It resolves the addressing of source station as well as the destination station, or
groups of destination stations.
• It performs multiple access resolutions when more than one data frame is to be
transmitted. It determines the channel access methods for transmission.
• It also performs collision resolution and initiates retransmission in case of collisions.
• It generates the frame check sequences and thus contributes to protection
against transmission errors.
MAC Addresses

• MAC address or media access control address is a unique identifier allotted to a network interface controller (NIC) of a device. It is used as a network address for data transmission within a network segment like Ethernet, Wi-Fi, and Bluetooth.
• A MAC address is assigned to a network adapter at the time of manufacturing. It is hardwired or hard-coded in the network interface card (NIC). A MAC address comprises six groups of two hexadecimal digits, separated by hyphens, colons, or no separators. An example of a MAC address is 00:0A:89:5B:F0:11.
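Since the three notations (hyphens, colons, no separators) name the same 48-bit value, converting between them is a small exercise (a helper written for illustration):

```python
import re

def normalize_mac(mac: str) -> str:
    """Accept hyphen-, colon-, or unseparated MAC notation; return lowercase colon form."""
    digits = re.sub(r"[-:]", "", mac).lower()
    if not re.fullmatch(r"[0-9a-f]{12}", digits):
        raise ValueError(f"not a MAC address: {mac}")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))
```

All three spellings of the example address above normalize to the same string, confirming they are one identifier.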
Multiple Access Protocols
• Multiple access protocols are a set of protocols operating in the
Medium Access Control sublayer (MAC sublayer) of the Open
Systems Interconnection (OSI) model. These protocols allow a
number of nodes or users to access a shared network channel.
Several data streams originating from several nodes are
transferred through the multi-point transmission channel.
• The objectives of multiple access protocols are optimization of
transmission time, minimization of collisions and avoidance of
crosstalks.
Categories of Multiple Access Protocols
• Multiple access protocols can be broadly classified into three
categories - random access protocols, controlled access
protocols and channelization protocols.
Random Access Protocols

• Random access protocols assign uniform priority to all connected
nodes. Any node can send data if the transmission channel is idle.
No fixed time or fixed sequence is given for data transmission.
• The four random access protocols are −
• ALOHA
• Carrier sense multiple access (CSMA)
• Carrier sense multiple access with collision detection (CSMA/CD)
• Carrier sense multiple access with collision avoidance (CSMA/CA)
Controlled Access Protocols

• Controlled access protocols allow only one node to send data at
a given time. Before initiating transmission, a node seeks
information from other nodes to determine which station has the
right to send. This avoids collision of messages on the shared
channel.
• The station can be assigned the right to send by the following
three methods −
• Reservation
• Polling
• Token Passing
Channelization

• Channelization is a set of methods by which the available
bandwidth is divided among the different nodes for
simultaneous data transfer.
• The three channelization methods are−
• Frequency division multiple access (FDMA)
• Time division multiple access (TDMA)
• Code division multiple access (CDMA)
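Of the three channelization methods, CDMA is the easiest to demonstrate in a few lines. The sketch below is a toy example with hypothetical 4-chip Walsh codes for three stations (the station names and code assignments are illustrative): because the chip sequences are mutually orthogonal, a receiver recovers one station's bit from the superposed channel by taking the dot product with that station's code.

```python
# Each station spreads its bit (+1 or -1) over its chip code; the channel
# carries the element-wise sum of all simultaneous transmissions.
CODES = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
}

def transmit(bits: dict) -> list:
    """Superpose each station's bit, spread by its chip code."""
    channel = [0] * 4
    for station, bit in bits.items():
        for i, chip in enumerate(CODES[station]):
            channel[i] += bit * chip
    return channel

def receive(channel: list, station: str) -> int:
    """Despread: dot product with the code, normalised by code length."""
    total = sum(c * chip for c, chip in zip(channel, CODES[station]))
    return total // len(CODES[station])

signal = transmit({"A": +1, "B": -1, "C": +1})
print(receive(signal, "A"), receive(signal, "B"), receive(signal, "C"))  # 1 -1 1
```

Each station's bit comes back intact even though all three transmitted at the same time on the same channel.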
ALOHA Protocol in computer network

• ALOHA is a multiple access protocol for transmission of data via a shared
network channel. It operates in the medium access control sublayer (MAC
sublayer) of the open systems interconnection (OSI) model. Using this
protocol, several data streams originating from multiple nodes are
transferred through a multi-point transmission channel.
• In ALOHA, each node or station transmits a frame without trying to detect
whether the transmission channel is idle or busy. If the channel is idle,
then the frames will be successfully transmitted. If two frames attempt to
occupy the channel simultaneously, collision of frames will occur and the
frames will be discarded. These stations may choose to retransmit the
corrupted frames repeatedly until successful transmission occurs.
Versions of ALOHA Protocols
• Pure ALOHA
• In pure ALOHA, the time of transmission is continuous.
Whenever a station has an available frame, it sends the frame.
If there is collision and the frame is destroyed, the sender waits
for a random amount of time before retransmitting it.
• Slotted ALOHA
• Slotted ALOHA reduces the number of collisions and doubles
the capacity of pure ALOHA. The shared channel is divided into
a number of discrete time intervals called slots. A station can
transmit only at the beginning of each slot. However, there can
still be collisions if more than one station tries to transmit at the
beginning of the same time slot.
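The capacity difference between the two versions can be quantified with the classical throughput formulas, where G is the mean number of transmission attempts per frame time: S = G·e^(−2G) for pure ALOHA (the vulnerable period spans two frame times) and S = G·e^(−G) for slotted ALOHA (collisions are confined to one slot). A quick check in Python:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G): vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """S = G * e^(-G): collisions occur only within one slot."""
    return G * math.exp(-G)

# Pure ALOHA peaks at G = 0.5, slotted ALOHA at G = 1 --
# slotted ALOHA doubles the maximum throughput.
print(round(pure_aloha_throughput(0.5), 3))     # 0.184
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368
```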
CSMA
• Carrier Sense Multiple Access (CSMA) is a network protocol for
carrier transmission that operates in the Medium Access Control
(MAC) layer.
• It senses or listens whether the shared channel for transmission
is busy or not, and transmits only when the channel is not busy.
Working Principle

• When a station has frames to transmit, it attempts to detect the presence of
the carrier signal from the other nodes connected to the shared channel.
• If a carrier signal is detected, it implies that a transmission is in progress.
The station waits until the ongoing transmission completes, and
then initiates its own transmission. Generally, transmissions by a node
are received by all other nodes connected to the channel.
• Since the nodes check for an ongoing transmission before sending their own
frames, collision of frames is reduced. However, if two nodes detect an idle
channel at the same time, they may simultaneously initiate transmission.
This would cause the frames to garble, resulting in a collision.
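The sense-then-transmit behaviour described above can be sketched as a small simulation. The `Channel` class below is a hypothetical stand-in for the shared medium, not a real networking API; note that the residual collision case (two stations seeing the channel go idle at the same instant) is exactly what this loop cannot prevent.

```python
class Channel:
    """Toy shared medium: reports busy for a few sense attempts, then idle."""
    def __init__(self, busy_for=3):
        self.busy_for = busy_for
        self.sent = []

    def is_busy(self):
        if self.busy_for > 0:
            self.busy_for -= 1
            return True
        return False

    def send(self, frame):
        self.sent.append(frame)

def one_persistent_send(channel, frame):
    """1-persistent CSMA: sense continuously, transmit the moment the
    channel goes idle. Two stations leaving this loop at the same
    instant would still collide."""
    while channel.is_busy():
        pass  # keep listening until the ongoing transmission ends
    channel.send(frame)

ch = Channel()
one_persistent_send(ch, "frame-1")
print(ch.sent)  # ['frame-1']
```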
CSMA with Collision Detection (CSMA/CD)

• Carrier Sense Multiple Access with Collision Detection
(CSMA/CD) is a network protocol for carrier transmission that
operates in the Medium Access Control (MAC) layer.
• It senses or listens whether the shared channel for transmission
is busy or not, and defers transmissions until the channel is
free. The collision detection technology detects collisions by
sensing transmissions from other stations. On detection of a
collision, the station stops transmitting, sends a jam signal, and
then waits for a random time interval before retransmission.
Algorithms

• The algorithm of CSMA/CD is:
• When a frame is ready, the transmitting station checks whether the
channel is idle or busy.
• If the channel is busy, the station waits until the channel becomes idle.
• If the channel is idle, the station starts transmitting and continually
monitors the channel to detect collision.
• If a collision is detected, the station starts the collision resolution
algorithm.
• Otherwise, the station completes the frame transmission and resets the
retransmission counter.
• The algorithm of Collision Resolution is:
• The station continues transmission of the current frame for a
specified time along with a jam signal, to ensure that all the other
stations detect collision.
• The station increments the retransmission counter.
• If the maximum number of retransmission attempts is reached,
then the station aborts transmission.
• Otherwise, the station waits for a backoff period, which is
generally a function of the number of collisions, and restarts the main
algorithm.
• Though this algorithm detects collisions, it does not reduce the
number of collisions.
• It is not appropriate for large networks, as performance degrades
exponentially when more stations are added.
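The backoff period mentioned above is, in IEEE 802.3, computed by truncated binary exponential backoff: after the n-th collision a station waits a random number of slot times drawn uniformly from 0 to 2^min(n, 10) − 1, and aborts after 16 attempts. A minimal sketch (the slot time shown is the classic 10 Mbps Ethernet value):

```python
import random

SLOT_TIME_US = 51.2  # slot time for classic 10 Mbps Ethernet, in microseconds

def backoff_delay(attempt: int) -> float:
    """Truncated binary exponential backoff (IEEE 802.3).

    After the n-th collision, wait k slot times, where k is drawn
    uniformly from 0 .. 2^min(n, 10) - 1; abort after 16 attempts."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: abort transmission")
    k = random.randrange(2 ** min(attempt, 10))
    return k * SLOT_TIME_US

random.seed(1)
print(backoff_delay(1) in (0.0, SLOT_TIME_US))  # first collision: k is 0 or 1
```

The doubling of the backoff range on each collision is what adapts the load: the more stations contend, the longer, on average, each one waits.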
Ethernet LAN (802.3) frame format
• The basic frame format required by all MAC implementations is
defined in the IEEE 802.3 standard.
• However, several optional formats are also used to extend the
protocol’s basic capability.

• An Ethernet frame starts with the Preamble and SFD (Start Frame Delimiter),
both of which work at the physical layer. The Ethernet header contains both the
Source and Destination MAC addresses, after which the payload of the frame is
present. The last field is the CRC, which is used to detect errors.
• PREAMBLE – An Ethernet frame starts with a 7-byte Preamble.
• This is a pattern of alternating 0s and 1s which indicates the start of
the frame and allows sender and receiver to establish bit
synchronization.
• Initially, the Preamble (PRE) was introduced to allow for the loss of a few
bits due to signal delays, but today’s high-speed Ethernet does not need the
Preamble to protect the frame bits.
• The Preamble indicates to the receiver that a frame is coming and allows
the receiver to lock onto the data stream before the actual frame
begins.
• Start of frame delimiter (SFD) – This is a 1-byte field which is
always set to 10101011. The SFD indicates that the upcoming bits are the start
of the frame, beginning with the destination address. The SFD is sometimes
considered part of the Preamble, which is why the Preamble is described as
8 bytes in many places. The SFD warns the station or stations that this is
the last chance for synchronization.
• Destination Address – This is a 6-byte field which contains the MAC
address of the machine for which the data is destined.
• Source Address – This is a 6-byte field which contains the MAC
address of the source machine. As the Source Address is always an individual
(unicast) address, the least significant bit of its first byte is always 0.
• Length – Length is a 2-byte field which indicates the length of the data
field. Although this 16-bit field could hold values from 0 to 65535, the
length cannot be larger than 1500 because of Ethernet’s limits on frame size
(larger values in this field are instead interpreted as an EtherType).
• Data – This is where the actual data is inserted, also known
as the Payload. Both the IP header and data are inserted here if Internet
Protocol is used over Ethernet. The maximum data present may be as
long as 1500 bytes. If the data length is less than the minimum length of
46 bytes, padding of 0s is added to meet the minimum possible
length.
• Cyclic Redundancy Check (CRC) – CRC is a 4-byte field. This field
contains a 32-bit CRC value computed over the Destination Address,
Source Address, Length, and Data fields. If the checksum computed by the
destination does not match the received checksum value, the received data
is corrupted.
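The field layout above can be assembled with Python's `struct` module. This is a simplified sketch: the preamble and SFD are omitted (they belong to the physical layer), and `zlib.crc32` is used as a stand-in for the 802.3 frame check sequence computation, ignoring bit-ordering details of transmission on the wire.

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
    """Assemble an 802.3 frame (preamble/SFD excluded): destination and
    source addresses, length, payload padded to 46 bytes, and a CRC-32
    computed over all of the preceding fields."""
    if len(payload) > 1500:
        raise ValueError("payload larger than the Ethernet maximum of 1500 B")
    length = struct.pack("!H", len(payload))   # 2-byte big-endian data length
    padded = payload.ljust(46, b"\x00")        # pad up to the 46-byte minimum
    body = dst + src + length + padded
    crc = struct.pack("!I", zlib.crc32(body))  # 4-byte frame check sequence
    return body + crc

frame = build_frame(b"\xaa" * 6, b"\xbb" * 6, b"hello")
print(len(frame))  # 64: the minimum Ethernet frame size (6+6+2+46+4)
```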
Wireless LAN
• Wireless LANs (WLANs) are wireless computer networks that
use high-frequency radio waves instead of cables for
connecting the devices within a limited area forming LAN (Local
Area Network). Users connected by wireless LANs can move
around within this limited area such as home, school, campus,
office building, railway platform, etc.
• Most WLANs are based upon the IEEE 802.11 standard, commonly
known as Wi-Fi.
Components of WLANs
• The components of WLAN architecture as laid down in IEEE 802.11 are −
• Stations (STA) − Stations comprise all devices and equipment that
are connected to the wireless LAN. Each station has a wireless network
interface controller. A station can be of two types −
• Wireless Access Point (WAP or AP)
• Client
• Basic Service Set (BSS) − A basic service set is a group of stations
communicating at the physical layer level. BSS can be of two categories −
• Infrastructure BSS
• Independent BSS
• Extended Service Set (ESS) − It is a set of all connected BSS.
• Distribution System (DS) − It connects access points in ESS.
• Types of WLANs
• WLANs, as standardized by IEEE 802.11, operate in two basic
modes: infrastructure mode and ad hoc mode.
• Infrastructure Mode − Mobile devices or clients connect to an
access point (AP) that in turn connects via a bridge to the LAN
or Internet. The client transmits frames to other clients via the
AP.
• Ad Hoc Mode − Clients transmit frames directly to each other in
a peer-to-peer fashion.
• Advantages of WLANs
• They provide clutter-free homes, offices and other networked
places.
• The LANs are scalable in nature, i.e. devices may be added or
removed from the network at greater ease than wired LANs.
• The system is portable within the network coverage. Access to
the network is not bounded by the length of the cables.
• Installation and setup are much easier than wired counterparts.
• The equipment and setup costs are reduced.
• Disadvantages of WLANs
• Since radio waves are used for communications, the signals are
noisier with more interference from nearby systems.
• Greater care is needed for encrypting information. They are
also more prone to errors, so they require greater bandwidth
than wired LANs.
• WLANs are slower than wired LANs.
Bluetooth
• It is a Wireless Personal Area Network (WPAN) technology
used for exchanging data over short distances. This technology was
invented by Ericsson in 1994. It operates in the unlicensed industrial,
scientific and medical (ISM) band from 2.4 GHz to 2.485 GHz.
A maximum of 7 devices can be connected at the same time.
Bluetooth has a range of up to 10 meters.
• It provides data rates of up to 1 Mbps or 3 Mbps depending upon the
version. The spreading technique it uses is FHSS (frequency
hopping spread spectrum). A Bluetooth network is called a piconet and a
collection of interconnected piconets is called a scatternet.
Bluetooth Architecture: 
• The architecture of bluetooth defines two types of networks:
PICONET
• A piconet is a type of Bluetooth network that contains one primary
node, called the master node, and up to seven active secondary nodes, called
slave nodes.
• Thus, there are a total of 8 active nodes, all within a distance of
10 metres. The communication between the primary and
secondary nodes can be one-to-one or one-to-many.
• Communication is possible only between the master and a slave;
slave-to-slave communication is not possible. A piconet can also have up to
255 parked nodes; these are secondary nodes that cannot participate in
communication unless converted to the active state.
• Scatternet:
• It is formed by combining various piconets. A slave present in one
piconet can act as master (primary) in another piconet. Such a node can
receive messages from the master in the piconet where it acts as a slave and
deliver them to its slaves in the other piconet where it acts as master.
This type of node is referred to as a bridge node. A station cannot
be master in two piconets.
Bluetooth protocol stack:

1. Radio (RF) layer:
It performs modulation/demodulation of the data into RF signals and defines the physical
characteristics of the Bluetooth transceiver. It defines two types of physical links:
connectionless and connection-oriented.

2. Baseband link layer:
It performs connection establishment within a piconet.

3. Link Manager protocol layer:
It performs the management of already established links. It also includes the
authentication and encryption processes.

4. Logical Link Control and Adaptation protocol (L2CAP) layer:
It is also known as the heart of the Bluetooth protocol stack. It allows communication
between the upper and lower layers of the Bluetooth protocol stack. It packages the data
packets received from the upper layers into the form expected by the lower layers. It also
performs segmentation and multiplexing.

5. SDP layer:
It is short for Service Discovery Protocol. It allows a device to discover the services
available on another Bluetooth-enabled device.

6. RFCOMM layer:
It is short for Radio Frequency Communication. It provides a serial (RS-232-like)
interface and is used by protocols such as WAP and OBEX.

7. OBEX:
It is short for Object Exchange. It is a communication protocol to exchange objects
between two devices.

8. WAP:
It is short for Wireless Application Protocol. It is used for Internet access.

9. TCS:
It is short for Telephony Control protocol Specification. It provides telephony services.

10. Application layer:
It enables the user to interact with the application.

• Advantages:
• Low cost.
• Easy to use.
• It can penetrate through walls.
• It creates an ad hoc connection immediately without any wires.
• It is used for voice and data transfer.

• Disadvantages:
• It can be hacked and is hence less secure.
• It has a slow data transfer rate (up to 3 Mbps).
• It has a small range (about 10 meters).
Spanning tree
• Spanning Tree Protocol (STP) is a communication protocol
operating at the data link layer of the OSI model to prevent bridge
loops and the resulting broadcast storms. It creates a loop-free
topology for Ethernet networks.
• A bridge loop is created when there is more than one path between
two nodes in a given network. When a message is sent, particularly
when a broadcast is done, the bridges repeatedly rebroadcast the same
message, flooding the network. Since a data link layer frame does not
have a time-to-live field in the header, the broadcast frame may loop
forever, thus swamping the channels.
• Spanning tree protocol creates a spanning tree by disabling all links that
form a loop or cycle in the network. This leaves exactly one active path
between any two nodes of the network. So when a message is
broadcast, there is no way that the same message can be received from
an alternate path. The bridges that participate in spanning tree protocol
are often called spanning tree bridges.
• To construct a spanning tree, the bridges broadcast their
configuration routes. Then they execute a distributed algorithm
for finding out the minimal spanning tree in the network, i.e. the
spanning tree with minimal cost. The links not included in this
tree are disabled but not removed.
• In case a particular active link fails, the algorithm is executed
again to find the minimal spanning tree without the failed link.
The communication continues through the newly formed
spanning tree. When a failed link is restored, the algorithm is re-
run including the newly restored link.
• Let us consider a physical topology, as shown in the diagram, for
an Ethernet network that comprises six interconnected bridges.
The bridges are named {B1, B2, B3, B4, B5, B6} and several
nodes are connected to each bridge. The links between the
bridges are named {L1, L2, L3, L4, L5, L6, L7, L8, L9}, where L1
connects B1 and B2, L2 connects B1 and B3 and so on. It is
assumed that all links have uniform costs.
• From the diagram we can see that there are multiple paths from a
bridge to any other bridge in the network, forming several bridge
loops that make the topology susceptible to broadcast storms.
• According to the spanning tree protocol, links that form a cycle are
disabled. Thus, we get a logical topology with exactly one route
between any two bridges. One possible logical topology, containing
links {L1, L2, L3, L4, L5}, is shown in the diagram below −
• In the above logical configuration, suppose link L4 fails. Then the
spanning tree is reconstituted without L4. A possible logical
reconfiguration, containing links {L1, L2, L3, L5, L9}, is as follows −
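The tree construction can be sketched in code. Since the slide names only L1 and L2 explicitly, the link map below is a hypothetical completion of the topology, chosen so that a breadth-first search from root bridge B1 yields the {L1 .. L5} logical topology discussed above; with uniform link costs, BFS from the root produces a minimal spanning tree.

```python
from collections import deque

# Hypothetical link map for the six-bridge example (only L1 and L2 are
# defined in the slides; the remaining links are illustrative assumptions).
LINKS = {
    "L1": ("B1", "B2"), "L2": ("B1", "B3"), "L3": ("B2", "B4"),
    "L4": ("B3", "B5"), "L5": ("B4", "B6"), "L6": ("B2", "B3"),
    "L7": ("B3", "B4"), "L8": ("B4", "B5"), "L9": ("B5", "B6"),
}

def spanning_tree(links, root="B1"):
    """BFS from the root bridge: keep the first link that reaches each
    bridge, disable the rest (disabled, not removed, as in STP)."""
    adjacency = {}
    for name, (a, b) in links.items():
        adjacency.setdefault(a, []).append((name, b))
        adjacency.setdefault(b, []).append((name, a))
    active, visited, queue = [], {root}, deque([root])
    while queue:
        bridge = queue.popleft()
        for name, neighbour in sorted(adjacency.get(bridge, [])):
            if neighbour not in visited:
                visited.add(neighbour)
                active.append(name)
                queue.append(neighbour)
    return sorted(active)

print(spanning_tree(LINKS))  # 5 active links for 6 bridges; the rest disabled
```

Rerunning `spanning_tree` on the link map with a failed link deleted models the reconfiguration step described above.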