Computer Networking Unit-2
The Data Link Layer is the second layer of the OSI layered model. It is one of the most complicated layers and has complex functionalities and responsibilities. The data link layer hides the details of the underlying hardware and presents itself to the upper layer as the medium over which to communicate.
The data link layer works between two hosts that are directly connected in some sense. This direct connection could be point-to-point or broadcast. Systems on a broadcast network are said to be on the same link. The work of the data link layer tends to get more complex when it is dealing with multiple hosts on a single collision domain.
The data link layer is responsible for converting the data stream into signals bit by bit and sending them over the underlying hardware. At the receiving end, the data link layer picks up the data from the hardware, which arrive in the form of electrical signals, assembles them into a recognizable frame format, and hands them over to the upper layer.
Data link layer has two sub-layers:
Logical Link Control: It deals with protocols, flow-control, and error
control
Media Access Control: It deals with actual control of media
The data link layer performs many tasks on behalf of the upper layer, including framing, error control, and flow control.
2. Framing
Framing Methods
1. Character count.
2. Flag bytes with byte stuffing.
3. Starting and ending flags, with bit stuffing.
4. Physical layer coding violations.
Character count:
The first framing method uses a field in the header to specify the number of
characters in the frame. When the data link layer at the destination sees the
character count, it knows how many characters follow and hence where the end of
the frame is. The trouble with this algorithm is that the count can be garbled by a
transmission error.
The second framing method gets around the problem of resynchronization after an
error by having each frame start and end with special bytes. In the past, the starting
and ending bytes were different, but in recent years most protocols have used the
same byte, called a flag byte, as both the starting and ending delimiter, as shown in
Figure as FLAG. In this way, if the receiver ever loses synchronization, it can just
search for the flag byte to find the end of the current frame. Two consecutive flag
bytes indicate the end of one frame and start of the next one.
A serious problem occurs with this method when binary data, such as object
programs or floating-point numbers, are being transmitted. It may easily happen that
the flag byte's bit pattern occurs in the data. This situation will usually interfere with
the framing. One way to solve this problem is to have the sender's data link layer
insert a special escape byte (ESC) just before each ''accidental'' flag byte in the data.
The data link layer on the receiving end removes the escape byte before the data are
given to the network layer. This technique is called byte stuffing or character stuffing.
Thus, a framing flag byte can be distinguished from one in the data by the absence or
presence of an escape byte before it.
Of course, the next question is: What happens if an escape byte occurs in the middle
of the data? The answer is that it, too, is stuffed with an escape byte. Thus, any single
escape byte is part of an escape sequence, whereas a doubled one indicates that a
single escape occurred naturally in the data.
Figure. (a) A frame delimited by flag bytes. (b) Four examples of byte sequences before and after byte stuffing.
A major disadvantage of using this framing method is that it is closely tied to the use of 8-bit characters. Not all character codes use 8-bit characters.
The third framing method, bit stuffing, allows frames to contain an arbitrary number of bits. Whenever the sender's data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing bit stream. When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it
automatically destuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely
transparent to the network layer in both computers, so is bit stuffing. If the user data
contain the flag pattern, 01111110, this flag is transmitted as 011111010 but stored in
the receiver's memory as 01111110. Figure gives an example of bit stuffing.
Figure . Bit stuffing. (a) The original data. (b) The data as they appear on the line. (c) The data as
they are stored in the receiver's memory after destuffing.
With bit stuffing, the boundary between two frames can be unambiguously recognized
by the flag pattern. Thus, if the receiver loses track of where it is, all it has to do is
scan the input for flag sequences, since they can only occur at frame boundaries and
never within the data.
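As a rough illustration of this rule, the following Python sketch stuffs a 0 after every run of five 1s and removes it again at the receiver; it reproduces the 01111110 → 011111010 example from above. The bit strings are kept as text purely for readability.

```python
def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, stuff a 0 so the flag 01111110 never appears in data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Delete the 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip = False      # this is the stuffed 0; drop it
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

data = "01111110"                      # user data that matches the flag pattern
assert bit_stuff(data) == "011111010"  # as transmitted on the line
assert bit_destuff(bit_stuff(data)) == data
```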
As a final note on framing, many data link protocols use a combination of a character
count with one of the other methods for extra safety. When a frame arrives, the count
field is used to locate the end of the frame. Only if the appropriate delimiter is present
at that position and the checksum is correct is the frame accepted as valid.
Otherwise, the input stream is scanned for the next delimiter.
3. Error Correction and Detection
Error Detection
When data is transmitted from one device to another, the system does not guarantee that the data received by the destination device is identical to the data transmitted by the source device. An error is a situation in which the message received at the receiver end is not identical to the message transmitted.
Types Of Errors
o Single-Bit Error
o Burst Error
Single-Bit Error:
Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.
In the above figure, the transmitted message is corrupted by a single-bit error, i.e., a 0 bit is changed to 1.
A single-bit error is less likely to occur in serial data transmission. For example, if the sender sends data at 10 Mbps, each bit lasts only 0.1 μs, so for a single-bit error to occur the noise must last longer than 0.1 μs.
Single-bit errors mainly occur in parallel data transmission. For example, if eight wires are used to send the eight bits of a byte and one of the wires is noisy, then one bit is corrupted in each byte.
Burst Error:
A burst error occurs when two or more bits are changed from 0 to 1 or from 1 to 0. The burst length is measured from the first corrupted bit to the last corrupted bit.
The duration of the noise causing a burst error is longer than that causing a single-bit error.
Burst errors are most likely to occur in serial data transmission.
The number of affected bits depends on the duration of the noise and the data rate.
Single Parity Check
o Single parity checking is a simple and inexpensive mechanism for detecting errors.
o In this technique, a redundant bit, known as a parity bit, is appended at the end of the data unit so that the number of 1s becomes even. For an 8-bit data unit, the total number of transmitted bits is therefore 9.
o If the number of 1s in the data is odd, a parity bit of 1 is appended; if the number of 1s is even, a parity bit of 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and
compared with the received parity bit.
o This technique makes the total number of 1s even, so it is known as even-parity checking.
Drawbacks Of Single Parity Checking
o It can detect only single-bit errors (more generally, an odd number of errors in a data unit).
o If two bits are interchanged or flipped together, it cannot detect the error.
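A minimal sketch of even-parity generation and checking in Python; the 8-bit data unit used as the example is arbitrary.

```python
def add_even_parity(data_bits: str) -> str:
    """Append a parity bit so the total number of 1s (data + parity) is even."""
    parity = "1" if data_bits.count("1") % 2 == 1 else "0"
    return data_bits + parity

def check_even_parity(received: str) -> bool:
    """Receiver: an odd count of 1s in the received unit signals an error."""
    return received.count("1") % 2 == 0

codeword = add_even_parity("10110001")   # 8 data bits -> 9 transmitted bits
assert check_even_parity(codeword)

corrupted = ("0" if codeword[0] == "1" else "1") + codeword[1:]
assert not check_even_parity(corrupted)  # a single-bit error is detected
# Note: flipping two bits leaves the parity unchanged and therefore goes undetected.
```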
Checksum
A Checksum is an error detection technique based on the concept of redundancy.
Checksum Generator
A Checksum is generated at the sending side. Checksum generator subdivides the data
into equal segments of n bits each, and all these segments are added together by using
one's complement arithmetic. The sum is complemented and appended to the original
data, known as checksum field. The extended data is transmitted across the network.
Suppose L is the total sum of the data segments; then the checksum is the one's complement of L (i.e., ~L).
The sender follows these steps:
1. The block unit is divided into k sections, each of n bits.
2. All k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented and becomes the checksum field.
4. The original data and the checksum field are sent across the network.
Checksum Checker
A Checksum is verified at the receiving side. The receiver subdivides the incoming data
into equal segments of n bits each, and all these segments are added together, and then
this sum is complemented. If the complement of the sum is zero, then the data is
accepted otherwise data is rejected.
The receiver follows these steps:
1. The block unit is divided into k sections, each of n bits.
2. All k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented.
4. If the result is zero, the data is accepted; otherwise the data is discarded.
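The sender and receiver steps can be sketched in Python as follows. The 16-bit segment width and the example segment values are arbitrary choices for illustration, not part of the description above.

```python
def ones_complement_sum(segments, n=16):
    """Add n-bit segments with end-around carry (one's complement arithmetic)."""
    mask = (1 << n) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> n)   # wrap the carry back in
    return total

def make_checksum(segments, n=16):
    """Sender: complement of the one's complement sum of all data segments."""
    return (~ones_complement_sum(segments, n)) & ((1 << n) - 1)

def verify(segments, checksum, n=16):
    """Receiver: sum of data segments plus checksum, complemented, must be zero."""
    total = ones_complement_sum(segments + [checksum], n)
    return (~total) & ((1 << n) - 1) == 0

data = [0x4500, 0x003C, 0x1C46, 0x4000]   # illustrative 16-bit segments
c = make_checksum(data)
assert verify(data, c)                     # undamaged data is accepted
```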
Cyclic Redundancy Check (CRC)
o In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than the number of bits in a predetermined divisor, which is n+1 bits long.
o Secondly, the newly extended data is divided by the divisor using a process known as binary (modulo-2) division. The remainder generated from this division is known as the CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original data. This
newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver will treat this
whole unit as a single unit, and it is divided by the same divisor that was used to find the
CRC remainder.
If the result of this division is zero, the data contains no error and is accepted.
If the result of this division is not zero, the data contains an error and is therefore discarded.
Let's understand this concept through an example:
CRC Generator
o Suppose the original data is 11100 and the divisor is 1001. A CRC generator uses modulo-2 division. Firstly, three zeros are appended at the end of the data, because the length of the divisor is 4 and the number of 0s to be appended is always one less than the length of the divisor.
o Now the string becomes 11100000, and this string is divided by the divisor 1001.
o The remainder generated from the binary division is known as CRC remainder. The
generated value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data unit, and the
final string would be 11100111 which is sent across the network.
CRC Checker
o The functionality of the CRC checker is similar to the CRC generator.
o When the string 11100111 is received at the receiving end, the CRC checker performs the modulo-2 division.
o The string is divided by the same divisor, i.e., 1001.
o In this case, the CRC checker generates a remainder of zero. Therefore, the data is accepted.
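The modulo-2 division used by both the CRC generator and the CRC checker can be sketched in Python. The example reuses the data 11100 and divisor 1001 from above and reproduces the remainder 111; the function names are invented for illustration.

```python
def crc_remainder(data_bits: str, divisor: str) -> str:
    """Append len(divisor)-1 zeros, perform modulo-2 (XOR) division, return the remainder."""
    n = len(divisor) - 1
    bits = list(data_bits + "0" * n)
    for i in range(len(data_bits)):
        if bits[i] == "1":                       # XOR the divisor in at this position
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-n:])

def crc_check(codeword: str, divisor: str) -> bool:
    """Receiver: recompute the CRC over the data part and compare with the received
    remainder (equivalent to dividing the whole codeword and getting zero)."""
    n = len(divisor) - 1
    return crc_remainder(codeword[:-n], divisor) == codeword[-n:]

data, divisor = "11100", "1001"
crc = crc_remainder(data, divisor)       # -> "111"
codeword = data + crc                    # -> "11100111", the unit sent across the network
assert crc == "111" and crc_check(codeword, divisor)
```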
Error Correction
Error Correction codes are used to detect and correct the errors when data is
transmitted from the sender to the receiver.
o Backward error correction: Once the error is discovered, the receiver requests the
sender to retransmit the entire data unit.
o Forward error correction: In this case, the receiver uses the error-correcting code which
automatically corrects the errors.
A single additional bit can detect an error, but it cannot correct it.
To correct an error, one has to know its exact position. For example, to correct a single-bit error in a 7-bit data unit, the error correction code must determine which of the seven bits is in error. To achieve this, we have to add some additional redundant bits.
Suppose r is the number of redundant bits and d is the total number of the data bits.
The number of redundant bits r can be calculated using the formula: 2^r >= d + r + 1.
For example, if the value of d is 4, the smallest value of r that satisfies this relation is 3 (since 2^3 = 8 >= 4 + 3 + 1 = 8).
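This relation can be checked with a short search, sketched here in Python:

```python
def redundant_bits(d: int) -> int:
    """Smallest r such that 2**r >= d + r + 1."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

assert redundant_bits(4) == 3   # 4 data bits need 3 redundant bits (the 7,4 Hamming code)
assert redundant_bits(7) == 4   # 7 data bits need 4 redundant bits
```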
To determine the position of the bit in error, R. W. Hamming developed a technique known as the Hamming code, which can be applied to data units of any length and uses the relationship between data bits and redundant bits.
Hamming Code
Parity bits: The bit which is appended to the original data of binary bits so that the total
number of 1s is even or odd.
Even parity: To check for even parity, if the total number of 1s is even, then the value of
the parity bit is 0. If the total number of 1s occurrences is odd, then the value of the
parity bit is 1.
Odd Parity: To check for odd parity, if the total number of 1s is even, then the value of
parity bit is 1. If the total number of 1s is odd, then the value of parity bit is 0.
4. Data Link Control
Data Link Control is the service provided by the data link layer to provide reliable data transfer over the physical medium. For example, in half-duplex transmission mode, only one device can transmit data at a time. If both devices at the ends of the link transmit simultaneously, the transmissions will collide, leading to loss of information. The data link layer provides coordination among the devices so that no collision occurs.
Data link control involves the following functions:
o Line discipline
o Flow Control
o Error Control
Line Discipline
o Line discipline is a function of the data link layer that provides coordination among the link systems. It determines which device can send, and when it can send, the data.
Line discipline can be achieved in two ways:
o ENQ/ACK
o Poll/select
ENQ/ACK
The transmitter (sender) transmits a frame called an Enquiry (ENQ), asking whether the receiver is available to receive the data.
The receiver responds either with a positive acknowledgement (ACK) or with a negative acknowledgement (NACK), where a positive acknowledgement means that the receiver is ready to receive the transmission and a negative acknowledgement means that the receiver is unable to accept the transmission.
o If the response to the ENQ is positive, the sender will transmit its data, and once all of its
data has been transmitted, the device finishes its transmission with an EOT (END-of-
Transmission) frame.
o If the response to the ENQ is negative, then the sender disconnects and restarts the
transmission at another time.
o If the response is neither negative nor positive, the sender assumes that the ENQ frame
was lost during the transmission and makes three attempts to establish a link before
giving up.
Poll/Select
The Poll/Select method of line discipline works with those topologies where one device
is designated as a primary station, and other devices are secondary stations. (Centralized
control)
Working of Poll/Select
o In this method, the primary device and multiple secondary devices share a single transmission line, and all exchanges are made through the primary device even when the final destination is a secondary device.
o The primary device has control over the communication link, and the secondary device
follows the instructions of the primary device.
o The primary device determines which device is allowed to use the communication
channel. Therefore, we can say that it is an initiator of the session.
o If the primary device wants to receive data from the secondary devices, it asks each secondary device whether it has anything to send; this process is known as polling.
o If the primary device wants to send some data to a secondary device, it tells the target secondary device to get ready to receive the data; this process is known as selecting.
Select
o The select mode is used when the primary device has something to send.
o When the primary device wants to send some data, then it alerts the secondary device
for the upcoming transmission by transmitting a Select (SEL) frame, one field of the
frame includes the address of the intended secondary device.
o When the secondary device receives the SEL frame, it sends an acknowledgement that
indicates the secondary ready status.
o If the secondary device is ready to accept the data, then the primary device sends two or more data frames to the intended secondary device. Once the data has been transmitted, the secondary sends an acknowledgement indicating that the data has been received.
Poll
o The Poll mode is used when the primary device wants to receive some data from the
secondary device.
o When a primary device wants to receive the data, then it asks each device whether it has
anything to send.
o Firstly, the primary polls the first secondary device; if it responds with a NAK (negative acknowledgement), it has nothing to send. The primary then approaches the second secondary device; if it responds with an ACK, it has data to send. The secondary device can send several frames one after another, or it may be required to wait for an ACK before sending each one, depending on the type of protocol being used.
5. Flow Control (for noisy and noiseless channels)
o It is a set of procedures that tells the sender how much data it can transmit before the
data overwhelms the receiver.
o The receiving device has limited speed and limited memory to store the data. Therefore,
the receiving device must be able to inform the sending device to stop the transmission
temporarily before the limits are reached.
o It requires a buffer, a block of memory for storing the information until it is processed.
Two methods are used to control the flow of data:
o Stop-and-wait
o Sliding window
Stop-and-wait
o In the Stop-and-wait method, the sender waits for an acknowledgement after every
frame it sends.
o Only when the acknowledgement is received is the next frame sent. This process of alternately sending a frame and waiting continues until the sender transmits an EOT (End of Transmission) frame.
Advantage of Stop-and-wait
The Stop-and-wait method is simple: each frame is checked and acknowledged before the next frame is sent.
Disadvantage of Stop-and-wait
The Stop-and-wait technique is inefficient because each frame must travel all the way to the receiver, and the acknowledgement must travel all the way back, before the next frame can be sent. Each frame sent and received uses the entire time needed to traverse the link.
Sliding Window
o The Sliding Window is a method of flow control in which the sender can transmit several frames before getting an acknowledgement.
o In sliding window flow control, multiple frames can be sent one after another, so the capacity of the communication channel is used efficiently.
o A single ACK can acknowledge multiple frames.
o Sliding Window refers to imaginary boxes at both the sender and receiver end.
o The window can hold the frames at either end, and it provides the upper limit on the
number of frames that can be transmitted before the acknowledgement.
o Frames can be acknowledged even when the window is not completely filled.
o The window has a specific size n, and the frames are numbered modulo n, i.e., from 0 to n-1. For example, if n = 8, the frames are numbered 0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1,...
o The size of the window is n-1, so a maximum of n-1 frames can be sent before an acknowledgement.
o When the receiver sends an ACK, it includes the number of the next frame that it wants to receive. For example, to acknowledge a string of frames ending with frame number 4, the receiver sends an ACK containing the number 5. When the sender sees the ACK with the number 5, it knows that frames 0 through 4 have been received.
Sender Window
o At the beginning of a transmission, the sender window contains n-1 frame slots, and as frames are sent out, the left boundary moves inward, shrinking the size of the window. For example, if the size of the window is w and three frames are sent out, then the number of frame slots left in the sender window is w-3.
o Once the ACK has arrived, then the sender window expands to the number which will be
equal to the number of frames acknowledged by ACK.
o For example, the size of the window is 7, and if frames 0 through 4 have been sent out
and no acknowledgement has arrived, then the sender window contains only two frames,
i.e., 5 and 6. Now, if ACK has arrived with a number 4 which means that 0 through 3
frames have arrived undamaged and the sender window is expanded to include the next
four frames. Therefore, the sender window contains six frames (5,6,7,0,1,2).
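The bookkeeping in this example can be sketched as a small Python class; the class and method names are invented for illustration and follow the convention used above that an ACK carries the number of the next expected frame.

```python
class SenderWindow:
    """Toy sliding-window sender with sequence numbers modulo n and window size n-1."""
    def __init__(self, n: int):
        self.n = n            # sequence numbers run 0 .. n-1
        self.base = 0         # oldest unacknowledged frame
        self.next_seq = 0     # next frame to send

    def outstanding(self) -> int:
        return (self.next_seq - self.base) % self.n

    def send(self) -> int:
        assert self.outstanding() < self.n - 1, "window full, must wait for an ACK"
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % self.n
        return seq

    def ack(self, next_expected: int) -> None:
        self.base = next_expected   # every frame before it slides out of the window

w = SenderWindow(n=8)                 # window size 7, frames numbered 0..7
sent = [w.send() for _ in range(5)]   # frames 0..4 go out; only 2 slots remain
w.ack(4)                              # receiver expects frame 4 -> frames 0..3 confirmed
print((w.n - 1) - w.outstanding())    # 6 slots free again: frames 5,6,7,0,1,2 may be sent
```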
Receiver Window
o At the beginning of transmission, the receiver window does not contain n frames, but it
contains n-1 spaces for frames.
o When the new frame arrives, the size of the window shrinks.
o The receiver window does not represent the number of frames received, but it represents
the number of frames that can be received before an ACK is sent. For example, the size
of the window is w, if three frames are received then the number of spaces available in
the window is (w-3).
o Once the acknowledgement is sent, the receiver window expands by the number equal
to the number of frames acknowledged.
o Suppose the size of the window is 7 means that the receiver window contains seven
spaces for seven frames. If the one frame is received, then the receiver window shrinks
and moving the boundary from 0 to 1. In this way, window shrinks one by one, so
window now contains the six spaces. If frames from 0 through 4 have sent, then the
window contains two spaces before an acknowledgement is sent.
6. Error Control (for noisy and noiseless channels)
Stop-and-wait ARQ
This technique works on the principle that the sender will not transmit the next frame
until it receives the acknowledgement of the last transmitted frame.
o The sending device keeps a copy of the last transmitted frame until the
acknowledgement is received. Keeping the copy allows the sender to retransmit the data
if the frame is not received correctly.
o Both data frames and ACK frames are numbered alternately 0 and 1 so that they can be identified individually. An ACK 1 frame acknowledges data frame 0, meaning that data frame 0 arrived correctly and the receiver now expects data frame 1.
o If an error occurs in the last transmitted frame, then the receiver sends the NAK frame
which is not numbered. On receiving the NAK frame, sender retransmits the data.
o It works with the timer. If the acknowledgement is not received within the allotted time,
then the sender assumes that the frame is lost during the transmission, so it will
retransmit the frame.
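A toy sketch of the sender's logic in Python, with a simulated lossy channel standing in for the real link and timer; the loss model, retry limit, and all names are illustrative assumptions, not part of the protocol description above.

```python
import random

def stop_and_wait_send(frames, max_tries=20, loss_rate=0.2):
    """Toy stop-and-wait ARQ sender: keep a copy, retransmit on timeout, alternate 0/1."""
    seq = 0
    for payload in frames:                      # a copy of 'payload' is kept until acked
        for attempt in range(max_tries):
            frame_arrives = random.random() > loss_rate   # simulated lossy channel
            ack_arrives = frame_arrives and random.random() > loss_rate
            if ack_arrives:
                break                           # ACK received: move on to the next frame
            # otherwise the timer expires (or a NAK arrives) and the frame is resent
        else:
            raise RuntimeError(f"gave up on frame with sequence number {seq}")
        seq ^= 1                                # data and ACK frames alternate 0 and 1

stop_and_wait_send(["frame A", "frame B", "frame C"])
```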
Sliding Window ARQ
o In this case, the sender keeps copies of all the transmitted frames until they have been acknowledged. Suppose frames 0 through 4 have been transmitted and the last acknowledgement was for frame 2; the sender keeps copies of frames 3 and 4 until they are received correctly.
o The receiver can send either NAK or ACK depending on the conditions. The NAK frame
tells the sender that the data have been received damaged. Since the sliding window is a
continuous transmission mechanism, both ACK and NAK must be numbered for the
identification of a frame. The ACK frame consists of a number that represents the next
frame which the receiver expects to receive. The NAK frame consists of a number that
represents the damaged frame.
o The sliding window ARQ is equipped with a timer to handle lost acknowledgements. Suppose that n-1 frames have been sent before any acknowledgement is received. The sender then starts the timer and waits before sending any more frames. If the allotted time runs out, the sender retransmits one or all of the frames, depending on the protocol used.
Go-Back-N ARQ
In the Go-Back-N ARQ protocol, if one frame is lost or damaged, the sender retransmits that frame together with all the frames sent after it for which it has not received a positive ACK.
o Damaged Frame: When the frame is damaged, then the receiver sends a NAK frame.
In the above figure, three frames have been transmitted before an error is discovered in the third frame. ACK 2 has been returned, indicating that frames 0 and 1 have been received successfully without error. The receiver discovers the error in data frame 2, so it returns NAK 2. Frame 3 is also discarded, because it was transmitted after the damaged frame. Therefore, the sender retransmits frames 2 and 3.
o Lost Data Frame: In sliding window protocols, data frames are sent sequentially. If one of the frames is lost, the next frame to arrive at the receiver is out of sequence. The receiver checks the sequence number of each frame, discovers that a frame has been skipped, and returns a NAK for the missing frame. The sending device retransmits the frame indicated by the NAK as well as the frames transmitted after the lost frame.
o Lost Acknowledgement: The sender can send as many frames as the window allows before waiting for an acknowledgement. Once the limit of the window is reached, the sender has no more frames to send and must wait for an acknowledgement. If the acknowledgement is lost, the sender could wait forever. To avoid such a situation, the sender is equipped with a timer that starts counting whenever the window capacity is reached. If the acknowledgement has not been received within the time limit, the sender retransmits the frames sent since the last ACK.
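The damaged-frame behaviour can be traced with a toy model in Python. The function below is an illustration only, not the full protocol: the sender fills its window, then learns of the first damaged frame in that batch (the NAK) and goes back to retransmit it and everything after it.

```python
def go_back_n_trace(total_frames: int, damaged: set, window: int = 4):
    """Return the sequence of frame numbers a Go-Back-N sender puts on the line."""
    sent_log, base = [], 0
    while base < total_frames:
        batch = list(range(base, min(base + window, total_frames)))
        sent_log.extend(batch)                  # the whole window goes out
        bad = [seq for seq in batch if seq in damaged]
        if bad:
            base = bad[0]                       # go back to the damaged frame
            damaged.discard(bad[0])             # assume its retransmission succeeds
        else:
            base = batch[-1] + 1                # everything acknowledged, slide on
    return sent_log

# Frame 2 damaged: 0,1,2,3 go out, NAK 2 arrives, frames 2 and 3 are sent again.
print(go_back_n_trace(4, damaged={2}))          # -> [0, 1, 2, 3, 2, 3]
```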
Selective-Reject ARQ
In Selective-Reject (also called Selective Repeat) ARQ, only the specific damaged or lost frame indicated by the NAK is retransmitted; frames received out of order are buffered by the receiver rather than discarded, which makes it more efficient than Go-Back-N but more complex to implement.
Piggybacking
A technique called piggybacking is used to improve the efficiency of the
bidirectional protocols. When a frame is carrying data from A to B, it can also
carry control information about arrived (or lost) frames from B; when a frame
is carrying data from B to A, it can also carry control information about the
arrived (or lost) frames from A.
7. Examples of Data link Protocols:
Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous
balanced mode.
Normal Response Mode (NRM) − Here, there are two types of stations: a primary station that sends commands and secondary stations that respond to received commands. It is used for both point-to-point and multipoint communications.
Asynchronous Balanced Mode (ABM) − Here, the link is point-to-point, and each station can act as both a primary and a secondary (the stations behave as peers).
PPP stands for Point-to-Point Protocol. It is the most commonly used protocol for point-to-point access. For example, when a user wants to access the internet from home, the PPP protocol is used.
It is a data link layer protocol that resides at layer 2 of the OSI model. It is used to encapsulate layer 3 protocols and all the information available in the payload so that they can be transmitted across serial links. The PPP protocol can be used on synchronous links such as ISDN as well as asynchronous links such as dial-up. It is mainly used for communication between two devices.
It can be used over many types of physical networks such as serial cable, phone line, trunk line, cellular telephone, and fiber-optic links such as SONET. Because the data link layer protocol identifies where a transmission starts and ends, ISPs (Internet Service Providers) use the PPP protocol to provide dial-up access to the internet.
o It is widely used in broadband communications involving heavy loads and high speeds.
o It is used to transmit multiprotocol data between two connected (point-to-point) computers. It is mainly used between point-to-point devices.
PPP Frame Format
o Flag: The flag field indicates the start and end of the frame. It is a 1-byte field that appears at the beginning and at the end of the frame. The flag pattern is the same as the bit pattern in HDLC, i.e., 01111110.
o Address: It is a 1-byte field that contains the constant value which is 11111111. These 8
ones represent a broadcast message.
o Control: It is a 1-byte field set to the constant value 11000000. It is not strictly required, because PPP does not support flow control and provides only a very limited error control mechanism; a control field is meaningful only in protocols that support flow and error control.
o Protocol: It is a 1 or 2 bytes field that defines what is to be carried in the data field. The
data can be a user data or other information.
o Payload: The payload field carries either user data or other information. The maximum
length of the payload field is 1500 bytes. If data is less than 1500 bytes then padding
bytes are used.
o Checksum: It is a 16-bit field which is generally used for error detection.
Transition phases of PPP protocol
The following are the transition phases of a PPP protocol:
o Dead: Dead is a transition phase which means that the link is not used or there is no
active carrier at the physical layer.
o Establish: If one of the nodes starts working then the phase goes to the establish phase.
In short, we can say that when the node starts communication or carrier is detected then
it moves from the dead to the establish phase.
o Authenticate: It is an optional phase, which means that the communication can also move to the authenticate phase. The phase moves from establish to authenticate only when both communicating nodes agree to make the communication authenticated.
o Network: Once authentication is successful, the network phase is established. In this phase, the negotiation of network layer protocols takes place.
o Open: After the establishment of the network phase, it moves to the open phase. Here
open phase means that the exchange of data takes place. Or we can say that it reaches
to the open phase after the configuration of the network layer.
o Terminate: When all the work is done then the connection gets terminated, and it
moves to the terminate phase.
On reaching the terminate phase, the link moves to the dead phase which indicates that
the carrier is dropped which was earlier created.
There are two more possibilities that can exist in the transition phases:
o The link moves from the authenticate phase to the terminate phase when authentication fails.
o The link can also move from the establish phase to the dead phase when the carrier fails.
PPP Protocol Stack
In the PPP stack, there are three sets of protocols:
o Link Control Protocol (LCP)
The role of LCP is to establish, maintain, configure, and terminate the links. It also provides a negotiation mechanism.
o Authentication protocols
There are two authentication protocols: PAP (Password Authentication Protocol) and CHAP (Challenge Handshake Authentication Protocol).
Step 1: To initiate authentication, router 1 sends its username (but not its password) to router 2.
Step 2: The router 2 maintains a database that contains a list of allowed hosts with their login credentials. If no entry is found, it means that router 1 is not a valid host to connect with, and the connection gets terminated. If a match is found, then a
random key is passed. This random key along with the password is passed in the MD5
hashing function, and the hashing function generates the hashed value from password
and the random key (password + random key). The hashed value is also known as
Challenge. The challenge along with the random key will be sent to the router 1.
Step 3: The router 1 receives the hashed value and a random key from the router 2.
Then, the router 1 will pass the random key and locally stored password to the MD5
hashing function. The MD5 hashing function generates the hashed value from the
combination of random key and password. If the generated hashed value does not
match with the received hashed value then the connection gets terminated. If it is
matched, then the connection is granted. Based on the above authentication result, the
authentication signal that could be either accepted or rejected is sent to the router 2.
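A rough sketch of the hashing step in Python, following the ordering described above (password combined with the random key); the exact octet layout of real CHAP (RFC 1994) differs slightly, and the shared secret value here is purely illustrative.

```python
import hashlib
import os

PASSWORD = b"shared-secret"        # illustrative shared secret configured on both routers

# Router 2 (authenticator): generate a random key and the hashed value it expects back
random_key = os.urandom(16)
expected = hashlib.md5(PASSWORD + random_key).hexdigest()

# Router 1 (peer): hash its locally stored password with the received random key
response = hashlib.md5(PASSWORD + random_key).hexdigest()

# Router 2 compares the two hashed values; a mismatch terminates the connection
print("accepted" if response == expected else "rejected")
```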
After the establishment of the link and authentication, the next step is to connect to the
network layer. So, PPP uses another protocol known as network control protocol (NCP).
The NCP is a set of protocols that facilitates the encapsulation of data which is coming
from the network layer to the PPP frames.
8. Multiple Access Protocols
The upper sublayer of the data link layer is mainly responsible for flow control and error control and is also referred to as the Logical Link Control (LLC) sublayer, while the lower sublayer is mainly responsible for multiple-access resolution and is therefore known as the Media Access Control (MAC) sublayer.
The main objectives of multiple access protocols are optimization of the transmission time, minimization of collisions, and avoidance of crosstalk.
Multiple Access protocols mainly allow a number of nodes to access the shared network
channel. Several data streams originating from several nodes are transferred via the
multi-point transmission channel.The Multiple access protocols are categorized as
follows. Let us take a look at them:
A. Random Access Protocol
In this protocol, all stations have equal priority to send data over the channel. In a random access protocol, no station depends on another station, nor does any station control another. Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more than one station sends data over the channel at the same time, a collision or data conflict may occur. Due to the collision, the data frames may be lost or altered, and hence are not received correctly by the receiver.
The random access methods are:
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA
Aloha
It was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium to transmit data. Using this method, any station can transmit data over the network at any time whenever a data frame is available for transmission.
Aloha Rules
Pure Aloha
Pure Aloha is used whenever data is available for sending over a channel at the stations. In pure Aloha, each station transmits its data onto the channel without checking whether the channel is idle or busy, so collisions may occur and data frames may be lost. After transmitting a data frame, the station waits for the receiver's acknowledgment. If the acknowledgment does not arrive within the specified time, the station assumes the frame has been lost or destroyed, waits for a random amount of time, called the backoff time (Tb), and retransmits the frame. This continues until all the data has been successfully transmitted to the receiver.
Slotted Aloha
Slotted Aloha was designed to improve the efficiency of pure Aloha, because pure Aloha has a very high probability of frame collisions. In slotted Aloha, the shared channel is divided into fixed time intervals called slots. A station that wants to send a frame can send it only at the beginning of a slot, and only one frame is allowed per slot. If a station misses the beginning of a slot, it must wait until the beginning of the next slot. However, a collision can still occur if two or more stations try to send a frame at the beginning of the same time slot.
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel; if the channel is idle, it immediately sends the data. Otherwise, it keeps monitoring the channel and transmits the frame unconditionally as soon as the channel becomes idle.
CSMA/CA
Following are the methods used in CSMA/CA to avoid collisions:
Interframe space: In this method, the station waits for the channel to become idle, and even when it finds the channel idle, it does not send the data immediately. Instead, it waits for a period of time called the interframe space, or IFS. The IFS duration is also often used to define the priority of a station.
Contention window: In the contention window method, the total time is divided into slots. When the station is ready to transmit the data frame, it chooses a random number of slots as its wait time. If the channel becomes busy during the countdown, the station does not restart the entire process; it only pauses the timer and resumes it when the channel becomes idle again, sending the data frame when the timer expires.
Acknowledgment: In the acknowledgment method, the sender retransmits the data frame on the shared channel if the acknowledgment is not received before its timer expires.
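A rough sketch of the IFS-plus-contention-window waiting rule in Python. The slot counts, window size, and channel model are invented purely for illustration; the key behaviour shown is that the backoff count is kept (not redrawn) when the channel turns busy.

```python
import random

def csma_ca_wait(channel_busy, ifs_slots=2, contention_window=16):
    """Toy CSMA/CA wait: finish the IFS, then count down a frozen-on-busy backoff."""
    backoff = random.randrange(contention_window)  # random slot count from the window
    ifs_done = 0
    while ifs_done < ifs_slots or backoff > 0:
        if channel_busy():
            ifs_done = 0          # channel busy: the IFS must be waited again from scratch,
            continue              # but the remaining backoff slots are kept (not redrawn)
        if ifs_done < ifs_slots:
            ifs_done += 1         # channel idle: first finish the interframe space
        else:
            backoff -= 1          # then decrement the contention-window timer
    return True                   # medium acquired: the data frame may now be sent

samples = iter([True, True, False] + [False] * 50)   # busy for two samples, then idle
print(csma_ca_wait(lambda: next(samples)))
```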
B. Controlled Access Protocols
In controlled access, the stations consult one another to find out which station has the right to send.
Reservation
In the reservation method, a station needs to make a reservation before sending data.
The time line has two kinds of periods:
1. Reservation interval of fixed time length
2. Data transmission period of variable frames.
If there are N stations, the reservation interval is divided into N slots, and each station has one slot.
Suppose if station 1 has a frame to send, it transmits 1 bit during the slot 1. No
other station is allowed to transmit during this slot.
In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into the i-th slot. After all N slots have been checked, each station knows which stations wish to transmit.
The stations which have reserved their slots transfer their frames in that order.
After data transmission period, next reservation interval begins.
Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five slot reservation
frame. In the first interval, only stations 1, 3, and 4 have made reservations. In the
second interval, only station 1 has made a reservation.
Polling
Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
In this, one acts as a primary station(controller) and the others are secondary
stations. All data exchanges must be made through the controller.
The message sent by the controller contains the address of the node being
selected for granting access.
Although all nodes receive the message, only the addressed node responds to it and sends data, if it has any. If there is no data, a "poll reject" (NAK) message is usually sent back.
Problems include high overhead of the polling messages and high dependence on
the reliability of the controller .
Efficiency
Let Tpoll be the time for polling and Tt be the time required for the transmission of data. Then,
Efficiency = Tt / (Tt + Tpoll)
Token Passing
In the token passing scheme, the stations are logically connected to each other in the form of a ring, and access to the medium is governed by a token.
A token is a special bit pattern or a small message that circulates from one station to the next in some predefined order.
In a token ring, the token is passed from one station to the adjacent station on the ring, whereas in a token bus, each station uses the bus to send the token to the next station in some predefined order.
In both cases, token represents permission to send. If a station has a frame
queued for transmission when it receives the token, it can send that frame before
it passes the token to the next station. If it has no queued frame, it simply passes the token along.
After sending a frame, each station must wait for all N stations (including itself) to
send the token to their neighbors and the other N – 1 stations to send a frame, if
they have one.
Problems such as token duplication, token loss, insertion of a new station, and removal of a station need to be handled for correct and reliable operation of this scheme.
Performance
The performance of a token ring can be characterized by two parameters:
1. Delay, which is a measure of the time between when a packet is ready and when it is delivered. The average time (delay) required to send the token to the next station is a/N.
2. Throughput, which is a measure of the successful traffic.
Throughput S = 1/(1 + a/N) for a < 1, and
S = 1/{a(1 + 1/N)} for a > 1,
where N = number of stations and a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay).
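These formulas can be evaluated directly; a small Python sketch with illustrative values of a and N:

```python
def token_ring_throughput(a: float, N: int) -> float:
    """Throughput S of a token ring, where a = Tp/Tt and N is the number of stations."""
    return 1 / (1 + a / N) if a < 1 else 1 / (a * (1 + 1 / N))

print(token_ring_throughput(a=0.2, N=10))   # ~0.98: propagation delay is small
print(token_ring_throughput(a=4.0, N=10))   # ~0.23: propagation delay dominates
```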
C. Channelization Protocols
Channelization is a method in which the total usable bandwidth of a shared channel is shared among multiple stations on the basis of time, frequency, or code. All stations can access the channel at the same time to send their data frames.
Following are the various methods of accessing the channel on the basis of time, frequency, and code:
FDMA
TDMA
CDMA
9. IEEE Standards
In 1985, the Computer Society of the IEEE started a project, called Project 802, to set standards to enable intercommunication among equipment from a variety of manufacturers. Project 802 is a way of specifying functions of the physical layer and the data link layer of major LAN protocols.
The relationship of the 802 Standard to the traditional OSI model is shown in below
Figure. The IEEE has subdivided the data link layer into two sub layers: logical link
control (LLC) and media access control(MAC) .
IEEE has also created several physical layer standards for different LAN protocols.
Figure: IEEE standard for LANs
STANDARD ETHERNET
The original Ethernet was created in 1976 at Xerox’s Palo Alto Research Center
(PARC). Since then, it has gone through four generations.
Standard Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), and Ten-Gigabit Ethernet (10 Gbps). We briefly discuss the Standard (or traditional) Ethernet in this section.
Ethernet evolution through four generations
MAC Sublayer
In Standard Ethernet, the MAC sublayer governs the operation of the access method. It
also frames data received from the upper layer and passes them to the physical layer.
Frame Format
The Ethernet frame contains seven fields: preamble, SFD, DA, SA, length or type of
protocol data unit (PDU), upper-layer data, and the CRC. Ethernet does not provide any
mechanism for acknowledging received frames, making it what is known as an
unreliable medium. Acknowledgments must be implemented at the higher layers. The
format of the MAC frame is shown in
below figure
Preamble
The first field of the 802.3 frame contains 7 bytes (56 bits) of alternating 0s and 1s that
alerts the receiving system to the coming frame and enables it to synchronize its input
timing. The pattern provides only an alert and a timing pulse. The 56-bit pattern allows
the stations to miss some bits at the beginning of the frame. The preamble is actually
added at the physical layer and is not (formally) part of the frame.
Destination address (DA)
The DA field is 6 bytes and contains the physical address of the destination station or stations that are to receive the packet.
Source address (SA)
The SA field is also 6 bytes and contains the physical address of the sender of the packet.
Length or type
This field is defined as a type field or length field. The original Ethernet used this field as
the type field to define the upper-layer protocol using the MAC frame. The IEEE
standard used it as the length field to define the number of bytes in the data field. Both
uses are common today.
Data
This field carries data encapsulated from the upper-layer protocols. It is a minimum of
46 and a maximum of 1500 bytes.
CRC
The last field contains error detection information, in this case a CRC-32.
Frame Length
Ethernet has imposed restrictions on both the minimum and maximum lengths of a
frame, as shown in below Figure
An Ethernet frame needs to have a minimum length of 512 bits or 64 bytes. Part of this
length is the header and the trailer. If we count 18 bytes of header and trailer (6 bytes of
source address, 6 bytes of destination address, 2 bytes of length or type, and 4 bytes of
CRC), then the minimum length of data from the upper layer is 64 - 18 = 46 bytes. If the
upper-layer packet is less than 46 bytes, padding is added to make up the difference
The standard defines the maximum length of a frame (without preamble and SFD
field) as 1518 bytes. If we subtract the 18 bytes of header and trailer,the maximum
length of the payload is 1500 bytes.
The maximum length restriction has two historical reasons.
First, memory was very expensive when Ethernet was designed: a maximum length
restriction helped to reduce the size of the buffer.
Second, the maximum length restriction prevents one station from monopolizing the
shared medium, blocking other stations that have data to send.
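The 46-byte minimum and 1500-byte maximum can be illustrated with a small padding helper in Python; the function name is invented for illustration.

```python
def pad_payload(payload: bytes) -> bytes:
    """Pad upper-layer data to the 46-byte minimum so the whole frame reaches 64 bytes."""
    MIN_DATA, MAX_DATA = 46, 1500
    if len(payload) > MAX_DATA:
        raise ValueError("payload exceeds the 1500-byte maximum")
    if len(payload) < MIN_DATA:
        payload += bytes(MIN_DATA - len(payload))   # zero padding
    return payload

data = pad_payload(b"hello")        # 5 bytes of upper-layer data become 46 bytes
print(len(data) + 18)               # plus 18 bytes of header/trailer = 64-byte minimum frame
```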
Addressing
The Ethernet address is 6 bytes (48 bits), normally written in hexadecimal notation, with
a colon between the bytes.
Unicast, Multicast, and Broadcast Addresses: A source address is always a unicast
address-the frame comes from only one station. The destination address, however, can
be unicast, multicast, or broadcast. Below Figure shows how to distinguish a unicast
address from a multicast address.
If the least significant bit of the first byte in a destination address is 0, the address is
unicast; otherwise, it is multicast.
A unicast destination address defines only one recipient; the relationship between the
sender and the receiver is one-to-one.
A multicast destination address defines a group of addresses; the relationship between
the sender and the receivers is one-to-many.
The broadcast address is a special case of the multicast address; the recipients are all
the stations on the LAN. A broadcast destination address is forty-eight 1s.
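The unicast/multicast/broadcast distinction can be checked programmatically; a short sketch in Python, using example addresses chosen for illustration.

```python
def address_type(mac: str) -> str:
    """Classify a 6-byte Ethernet destination address written as hex with colons."""
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"                            # forty-eight 1s
    return "multicast" if octets[0] & 1 else "unicast"  # LSB of the first byte

print(address_type("4A:30:10:21:10:1A"))   # unicast   (0x4A = 01001010, LSB is 0)
print(address_type("47:20:1B:2E:08:EE"))   # multicast (0x47 = 01000111, LSB is 1)
print(address_type("FF:FF:FF:FF:FF:FF"))   # broadcast
```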
Physical Layer
The Standard Ethernet defines several physical layer implementations; four of
the most common, are shown in Figure
The first implementation is called 10Base5, thick Ethernet, or Thicknet. 10Base5 was the first Ethernet specification to use a bus topology with an external transceiver (transmitter/receiver) connected via a tap to a thick coaxial cable. The figure shows a schematic diagram of a 10Base5 implementation.
10Base5 implementation
The second implementation is called 10Base2, thin Ethernet, or Cheapernet; it also uses a bus topology, but with a thinner coaxial cable. Thin coaxial cable is less expensive than thick coaxial cable, and installation is simpler because the thin coaxial cable is very flexible. However, the length of each segment cannot exceed 185 m (close to 200 m) due to the high level of attenuation in thin coaxial cable.
The third implementation is called 10Base-T, or twisted-pair Ethernet. 10Base-T uses a physical star topology in which the stations are connected to a hub via two pairs of twisted cable.
10Base-T implementation
Although there are several types of optical fiber 10-Mbps Ethernet, the most common is called 10Base-F. 10Base-F uses a star topology to connect stations to a hub. The stations are connected to the hub using two fiber-optic cables, as shown in the figure.
10Base-F implementation
10. Wireless LANS
Wireless LANs (WLANs) are wireless computer networks that use high-
frequency radio waves instead of cables for connecting the devices within a
limited area forming LAN (Local Area Network). Users connected by wireless
LANs can move around within this limited area such as home, school,
campus, office building, railway platform, etc.
Most WLANs are based on the IEEE 802.11 standard, commonly known as WiFi.
Components of WLANs
The components of WLAN architecture as laid down in IEEE 802.11 are −
Stations (STA) − Stations comprise all devices and equipment that are connected to the wireless LAN. Each station has a wireless network interface controller. A station can be of two types −
o Wireless Access Point (WAP or AP)
o Client
Basic Service Set (BSS) − A basic service set is a group of stations
communicating at the physical layer level. BSS can be of two categories
−
o Infrastructure BSS
o Independent BSS
Extended Service Set (ESS) − It is a set of all connected BSS.
Distribution System (DS) − It connects access points in ESS.
Types of WLANS
WLANs, as standardized by IEEE 802.11, operate in two basic modes: infrastructure mode and ad hoc mode.
Infrastructure Mode − Mobile devices or clients connect to an access
point (AP) that in turn connects via a bridge to the LAN or Internet. The
client transmits frames to other clients via the AP.
Ad Hoc Mode − Clients transmit frames directly to each other in a peer-
to-peer fashion.
Advantages of WLANs
They provide clutter-free homes, offices and other networked places.
The LANs are scalable in nature, i.e., devices may be added to or removed from the network with greater ease than in wired LANs.
The system is portable within the network coverage. Access to the
network is not bounded by the length of the cables.
Installation and setup are much easier than wired counterparts.
The equipment and setup costs are reduced.
Disadvantages of WLANs
Since radio waves are used for communications, the signals are noisier
with more interference from nearby systems.
Greater care is needed for encrypting information. Also, they are more
prone to errors. So, they require greater bandwidth than the wired
LANs.
WLANs are slower than wired LANs.
802.11 Architecture
The 802.11 architecture defines two types of services and three different types of stations.
802.11 Services
• The two services are the Basic Service Set (BSS) and the Extended Service Set (ESS). An ESS is made up of two or more BSSs with APs; these extended networks are created by joining the access points of basic service sets through a wired LAN known as a distribution system.
• The distribution system can be any IEEE LAN.
• There are two types of stations in ESS:
(i) Mobile stations: These are normal stations inside a BSS.
(ii) Stationary stations: These are AP stations that are part of a wired LAN.
• Communication between two stations in two different BSS usually occurs via two
APs.
• A mobile station can belong to more than one BSS at the same time.
802.11 Station Types
IEEE 802.11 defines three types of stations on the basis of their mobility in wireless
LAN. These are:
1. No-transition Mobility
2. BSS-transition Mobility
3. ESS-transition Mobility
1. No-transition mobility: These types of stations are either stationary (immovable) or move only inside a BSS.
2. BSS-transition mobility: These types of stations can move from one BSS to
another but the movement is limited inside an ESS.
3. ESS-transition mobility: These types of stations can move from one ESS to another. Communication may or may not be continuous when a station moves from one ESS to another.
Physical layer functions
• As we know that physical layer is responsible for converting data stream into signals,
the bits of 802.11 networks can be converted to radio waves or infrared waves.
• These are six different specifications of IEEE 802.11. These implementations, except
the first one, operate in industrial, scientific and medical (ISM) band. These three
banks are unlicensed and their ranges are
1.902-928 MHz
2.2.400-4.835 GHz
3.5.725-5.850
GHz
1. IEEE 802.11 Infrared
• It uses diffused (not line-of-sight) infrared light in the range of 800 to 950 nm.
• It allows two different speeds: 1 Mbps and 2 Mbps.
• For a 1-Mbps data rate, 4 bits of data are encoded into a 16-bit code. This 16-bit code contains fifteen 0s and a single 1.
• For a 2-Mbps data rate, a 2-bit code is encoded into a 4-bit code. This 4-bit code contains three 0s and a single 1.
• The modulation technique used is pulse position modulation (PPM), i.e., for converting the digital signal to analog.
2. IEEE 802.11 FHSS
• IEEE 802.11 FHSS uses the Frequency Hopping Spread Spectrum method for signal generation.
• This method uses 2.4 GHz ISM band. This band is divided into 79 subbands of 1MHz
with some guard bands.
• In this method, at one moment data is sent by using one carrier frequency and then
by some other carrier frequency at next moment. After this, an idle time is there in
communication. This cycle is repeated after regular intervals.
• A pseudo random number generator selects the hopping sequence.
• The allowed data rates are 1 or 2 Mbps.
• This method uses frequency shift keying (two-level or four-level) for modulation, i.e., for converting the digital signal to analog.
3. IEEE 802.11 DSSS
• This method uses Direct Sequence Spread Spectrum (DSSS) method for signal
generation. Each bit is transmitted as 11 chips using a Barker sequence.
• DSSS uses the 2.4-GHz ISM band.
• It also allows the data rates of 1 or 2 Mbps.
• It uses the phase shift keying (PSK) technique at 1 Mbaud for converting the digital signal to an analog signal.
4. IEEE 802.11a OFDM
• This method uses Orthogonal Frequency Division Multiplexing (OFDM) for signal
generation.
• This method is capable of delivering data at up to 18 or 54 Mbps.
• In OFDM all the subbands are used by one source at a given time.
• It uses 5 GHz ISM band.
• This band is divided into 52 subbands, with 48 subbands for data and 4 subbands for
control information.
• If phase shift keying (PSK) is used for modulation then data rate is 18 Mbps. If
quadrature amplitude modulation (QAM) is used, the data rate can be 54 Mbps.
5. IEEE 802.11b HR-DSSS
• It uses High Rate Direct Sequence Spread Spectrum method for signal generation.
• HR-DSSS is similar to DSSS except for encoding method.
• Here, 4 or 8 bits are encoded into a special symbol called complementary code key
(CCK).
• It uses 2.4 GHz ISM band.
• It supports four data rates: 1,2,5.5 and 11 Mbps.
• The 1 Mbps and 2 Mbps data rates use phase shift modulation.
• The 5.5 Mbps version uses BPSK and transmits at 1.375 Mbaud/s with 4-bit CCK encoding.
• The 11 Mbps version uses QPSK and transmits at 1.375 Mbaud/s with 8-bit CCK encoding.
6. IEEE 802.11g OFDM
• It uses the OFDM modulation technique in the 2.4 GHz ISM band and supports data rates of up to 54 Mbps.
Collision Avoidance
• The 802.11 standard uses a Network Allocation Vector (NAV) for collision avoidance.
• The procedure used in NAV is explained below:
1. Whenever a station sends an RTS frame, it includes the duration of time for which
the station will occupy the channel.
2. All other stations that are affected by the transmission create a timer called the network allocation vector (NAV).
3. This NAV (created by the other stations) specifies for how much time these stations must not check the channel.
4. Each station, before sensing the channel, checks its NAV to see whether it has expired.
5. If its NAV has expired, the station can send data; otherwise it has to wait.
• There can also be a collision during handshaking, i.e., when RTS or CTS control frames are exchanged between the sender and receiver. In this case, the following procedure is used for collision avoidance:
1. When two or more stations send an RTS to a station at the same time, their control frames collide.
2. If the CTS frame is not received by the sender, it assumes that there has been a collision.
3. In such a case, the sender waits for a backoff time and retransmits the RTS.
2. Point Coordination Function
• The PCF method is used in an infrastructure network. In this method, the access point is used to control the network activity.
• It is implemented on top of the DCF and is used for time-sensitive transmissions.
• PCF uses a centralized, contention-free polling access method.
• The AP polls the stations that want to transmit data. The various stations are polled one after the other.
• To give priority to PCF over DCF, another interframe space called PIFS is defined.
PIFS (PCF IFS) is shorter than DIFS.
• If at the same time, a station is using DCF and AP is using PCF, then AP is given
priority over the station.
• Due to this priority of PCF over DCF, stations that only use DCF may not gain access
to the channel.
• To overcome this problem, a repetition interval is defined that is repeated
continuously. This repetition interval starts with a special control frame called beacon
frame.
• When a station hears the beacon frame, it starts its NAV for the duration of the repetition interval.
Frame Format of 802.11
• There are four different addressing cases depending upon the values of the To DS and From DS subfields of the FC field.
• Each flag can be 0 or 1, resulting in four different situations.
1. If To DS = 0 and From DS = 0, it indicates that frame is not going to distribution
system and is not coming from a distribution system. The frame is going from one
station in a BSS to another.
2. If To DS = 0 and From DS = 1, it indicates that the frame is coming from a
distribution system. The frame is coming from an AP and is going to a station. The
address 3 contains original sender of the frame (in another BSS).
3. If To DS = 1 and From DS = 0, it indicates that the frame is going to a distribution
system. The frame is going from a station to an AP. The address 3 field contains the
final destination of the frame.
4. If To DS = 1 and From DS = 1,it indicates that frame is going from one AP to another
AP in a wireless distributed system.
The table below specifies the addresses of all four
cases.
During 1990, Karn developed the MACA (Multiple Access with Collision Avoidance)
protocol for wireless transmission. The protocol is very simple to implement and works
in the following manner. Station X, willing to transmit data to the nearby station Y,
sends a short frame called RTS (Request to Send) first. On hearing this short frame, all
stations other than the receiving station, avoid transmission, thereby allowing the
communication to take place without interference. The receiving station sends a CTS
(Clear to Send) frame to the calling station. After receiving the CTS frame, station X
begins transmission. When simultaneous transmission of RTS frames by two stations W and X
to station Y occurs, both frames collide with each other and are lost. When there is no
CTS from station Y, both stations wait for a random amount of time (binary exponential
back off) and start the whole process again.
MACAW Protocol
Bharghavan et al. (1994) investigated the behavior of the MACA protocol and refined it with several modifications. The first modification was an acknowledgment frame for the successful receipt of each data frame. They also added carrier sensing to the stations. The second modification was to apply the binary exponential backoff algorithm per source-destination pair, which improves the fairness of the protocol. Finally, they gave stations the ability to exchange information regarding congestion.