Data Link Layer
MODULE II
Page 25
COMPUTER NETWORKS CS 306
The function of the data link layer is to provide service to the network layer.
The principal service is transferring data from the network layer on the source machine to the network
layer on the destination machine.
The data link layer can be designed to offer various services. Three possibilities commonly provided are:
Unacknowledged connectionless service
The source machine sends independent frames to the destination machine without having the destination machine acknowledge them. No connection is established beforehand or released afterward. This service is appropriate for good channels with low error rates and for real-time traffic, such as speech.
Example: Ethernet, Voice over IP, etc.
Acknowledged connectionless service
When this service is offered, there are still no connections used, but each frame sent is individually
acknowledged. This way, the sender knows whether or not a frame has arrived safely. Good for unreliable channels,
such as wireless.
Acknowledged connection-oriented service
With this service, the source and destination machines establish a connection before any data are transferred. Each frame sent over the connection is numbered, and the data link layer guarantees that each frame
sent is received. Furthermore, it guarantees that each frame is received exactly once and that all frames are
received in the right order.
1. In the first phase, the connection is established by having both sides initialize the variables and counters needed to keep track of which frames have been received and which ones have not.
2. In the second phase, one or more frames are actually transmitted.
3. In the third phase, the connection is released, freeing up the variables, buffers, and other resources used
to maintain the connection.
2. Framing
In order to provide service to the network layer, the data link layer must use the service provided to it by
the physical layer.
The physical layer accepts a raw bit stream and attempts to deliver it to the destination. This bit stream is not guaranteed to be error free.
It is up to the data link layer to detect, and if necessary, correct errors.
The usual approach is for the data link layer to break the bit stream up into discrete frames and compute a checksum for each frame. When a frame arrives at the destination, the checksum is recomputed.
There are four methods of breaking up the bit stream:
1. Character count.
2. Starting and ending character stuffing.
3. Starting and ending flags, with bit stuffing.
4. Physical layer coding violations.
Character count
Character count uses a field in the header to specify the number of characters in the frame. When the data
link layer at the destination sees the character count, it knows how many characters follow.
Problem: the count can be garbled by a transmission error, after which the receiver cannot locate the start of the next frame. This method is rarely used anymore.
Starting and ending character stuffing /Flag bytes with byte stuffing
It gets around the problem of resynchronization after an error by having each frame start and end with special bytes, called flag bytes.
o The sender inserts a special escape byte (ESC) just before each accidental flag byte in the data.
o The receiver's data link layer removes this escape byte before the data are given to the network layer.
Fig (a) Frame delimited by flag bytes (b) Four examples of byte sequences before and after byte stuffing.
Two consecutive flag bytes indicate the end of one frame and start of the next one.
Problem occurs with this method when binary data, such as object programs or floating-point numbers, are
being transmitted. It may easily happen that the flag byte's bit pattern occurs in the data.
One way to solve this problem is to have the sender's data link layer insert a special escape byte (ESC) just
before each ''accidental'' flag byte in the data. The data link layer on the receiving end removes the escape byte
before the data are given to the network layer. This technique is called byte stuffing or character stuffing. Thus, a
framing flag byte can be distinguished from one in the data by the absence or presence of an escape byte before it.
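The byte-stuffing rule just described is easy to sketch in code. The flag and escape values below (0x7E, 0x7D) are illustrative choices, not mandated by the text:

```python
FLAG, ESC = 0x7E, 0x7D   # illustrative delimiter and escape values

def byte_stuff(payload: bytes) -> bytes:
    """Frame a payload: open and close with FLAG, escape accidental FLAG/ESC bytes."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)          # insert ESC before each accidental flag/escape
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_destuff(frame: bytes) -> bytes:
    """Receiver side: strip the two flags and drop each stuffed ESC."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                   # skip the escape, keep the byte it protects
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert byte_destuff(byte_stuff(data)) == data
```

Stuffing at most doubles the payload length; that is the price paid for making the flag byte unambiguous.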
Starting and ending flags, with bit stuffing
This method allows data frames to contain an arbitrary number of bits and allows character codes with an arbitrary number of bits per character. Each frame begins and ends with a special bit pattern, 01111110, called the flag byte.
Physical layer coding violations
This method is only applicable to networks in which the encoding on the physical medium contains some redundancy. For example, some LANs encode 1 bit of data by using 2 physical bits; an invalid combination can then be used to mark frame boundaries.
Why not send one large frame per message? A large frame makes flow and error control very inefficient: even a single-bit error requires retransmission of the whole message. When a message is divided into smaller frames, a single-bit error affects only that one small frame.
3. Error Control
The next problem to deal with is to make sure all frames are eventually delivered to the network layer at
the destination, and in proper order.
The usual way to ensure reliable delivery is to provide the sender with some feedback about what is
happening at the other end of the line.
Error control in the data link layer is based on automatic repeat request, which is the retransmission of data.
The whole issue of managing the timers and sequence numbers so as to ensure that each frame is ultimately passed to the network layer at the destination exactly once, no more and no less, is an important part of the data link layer's duties.
4. Flow Control
Flow control refers to a set of procedures used to restrict the amount of data that the sender can send
before waiting for acknowledgment. Flow control coordinates the amount of data that can be sent before
receiving acknowledgment and is one of the most important duties of the data link layer. It ensures that a slow receiver is not flooded by a fast sender.
Two methods have been developed to control flow of data across communications links:
1. Stop and Wait 2. Sliding Window
Stop and Wait
This is the simplest form of flow control. The sender transmits one data frame and waits for an acknowledgement (ACK) from the receiver. Only after getting the ACK does the sender send the next frame.
The major drawback of stop-and-wait flow control is that only one frame can be in transmission at a time; this leads to inefficiency if the propagation delay is much longer than the transmission delay.
Transmission time: the time it takes for a station to transmit a frame (normalized to a value of 1).
Propagation delay: the time it takes for a bit to travel from sender to receiver; with the transmission time normalized to 1, the propagation delay is expressed as a.
– a < 1 : The frame is sufficiently long such that the first bits of the frame arrive at the destination before the
source has completed transmission of the frame.
– a > 1: Sender completes transmission of the entire frame before the leading bits of the frame arrive at the
receiver.
To improve the link utilization, we can use the following sliding-window protocol instead of the stop-and-wait protocol.
Sliding Window
All data frames up to the one acknowledged are assumed received on receipt of an acknowledgement (ACKs are cumulative).
– Sender maintains a list of sequence numbers (frames) it is allowed to transmit, called the sending window.
– Receiver maintains a list of sequence numbers it is prepared to receive, called the receiving window.
For an n-bit sequence number, there are 2^n numbers: 0, 1, ..., 2^n − 1, but the maximum window size is N = 2^n − 1.
Sending window
Sending window maintains sequence numbers of frames sent out but not acknowledged and frames
which are next to be transmitted.
At the beginning of a transmission, the sender's window contains N frames (the full window).
As frames are sent out, the left boundary of the window moves inward, shrinking the size of the window.
Once an ACK arrives, the window expands to allow in a number of new frames equal to the number of
frames acknowledged by that ACK.
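The sending-window bookkeeping can be modeled in a few lines. This is a minimal sketch, not a full protocol; m is the number of sequence-number bits, and the window limit 2^m − 1 matches the Go-Back-N rule given later:

```python
class SendingWindow:
    """Minimal model of a sending window with m-bit sequence numbers."""
    def __init__(self, m: int):
        self.modulus = 2 ** m
        self.size = self.modulus - 1   # maximum outstanding (unacknowledged) frames
        self.base = 0                  # oldest unacknowledged sequence number
        self.next_seq = 0              # next sequence number to send

    def outstanding(self) -> int:
        return (self.next_seq - self.base) % self.modulus

    def can_send(self) -> bool:
        return self.outstanding() < self.size

    def send(self) -> int:
        assert self.can_send(), "window full"
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % self.modulus
        return seq

    def ack(self, ack_no: int) -> None:
        """Cumulative ACK: ack_no is the next frame the receiver expects."""
        self.base = ack_no             # left edge slides; window reopens

w = SendingWindow(3)                   # sequence numbers 0..7, window size 7
assert [w.send() for _ in range(7)] == [0, 1, 2, 3, 4, 5, 6]
assert not w.can_send()                # window full
w.ack(3)                               # frames 0..2 acknowledged
assert w.can_send() and w.outstanding() == 4
```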
Receiving window
The receiving window contains the sequence numbers of frames the receiver is prepared to accept; it slides forward as expected frames arrive.
c) Simplex Protocol for Noisy Channel.( One bit Sliding Window Protocol/Stop & Wait ARQ )
i. Stop-and-Wait ARQ
Source transmits a single frame, and waits for an ACK. If the frame is lost, the source retransmits the
frame.
Use sequence numbers to number the frames
The source station is equipped with a timer.
It keeps a copy of the sent frame and retransmits the frame when the timer expires.
If the receiver receives a damaged frame, it discards it, and the source retransmits the frame when the timer expires.
If everything goes right, but the ACK is damaged or lost, the source will not recognize it, the source will
retransmit the frame, and receiver gets two copies of the same frame!
– In case of a lost or delayed ACK, the timeout causes retransmission of the frame.
– The timer introduces a problem: suppose the timer expires and the sender retransmits a frame, but the receiver had actually received the previous transmission. The receiver then holds duplicate copies.
– To avoid accepting two copies of the same frame, frames and ACKs are alternately numbered 0 and 1: ACK1 acknowledges frame 0 and ACK0 acknowledges frame 1 (the ACK carries the number of the next frame expected).
Link Utilization:
U = (1 − p) / (1 + 2a)
where
p is the probability of receiving a frame in error
a is the ratio of propagation time to transmission time
a = (tprop × R) / L, where tprop, R, and L denote the propagation time, the data rate of the link, and the number of bits in the frame, respectively.
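As a quick numeric check of U = (1 − p)/(1 + 2a), the sketch below computes a and U for a hypothetical link; the link parameters are made-up values, not taken from the text:

```python
def stop_and_wait_utilization(p: float, a: float) -> float:
    """U = (1 - p) / (1 + 2a) for stop-and-wait ARQ."""
    return (1 - p) / (1 + 2 * a)

# Hypothetical link: 1 km of cable at 2e8 m/s, 10 Mbps, 1000-bit frames
tprop = 1000 / 2e8            # propagation time in seconds (5 microseconds)
R, L = 10e6, 1000             # data rate (bit/s) and frame length (bits)
a = (tprop * R) / L           # a = 0.05
print(round(stop_and_wait_utilization(0.0, a), 3))   # → 0.909 on an error-free link
```

Even with no errors at all (p = 0), utilization stays below 1 because the line sits idle while the ACK makes the round trip.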
Disadvantage of Stop-and-Wait:
• In stop-and-wait, at any point in time, there is only one frame that is sent and waiting to be
acknowledged.
• Stop-and-wait is slow. Each frame must travel all the way to the receiver and an acknowledgment
must travel all the way back before the next frame can be sent.
• This is not a good use of transmission medium.
To improve efficiency, multiple frames should be in transit while waiting for an ACK.
Piggybacking
Suppose the link between the two stations is full duplex and both stations send data to each other. Then, instead of sending separate acknowledgement packets, a portion (a few bits) of each data frame can carry the acknowledgement. This technique is known as piggybacking.
Advantages:
Improves efficiency
Better use of available channel bandwidth
ii. Go-Back-N ARQ
The receiver does not have to acknowledge each frame received; it can send one cumulative ACK for several frames.
In Go-Back-N ARQ,
Size of the sender window must be less than 2^m (at most 2^m − 1).
Size of the receiver window is always 1.
If m = 3, maximum sender window size = 2^3 − 1 = 7.
Operation
In Go-Back-N ARQ, if one frame is lost or damaged, all frames sent since the last acknowledged frame are retransmitted.
Scenarios:
Lost / damaged frame
Lost/ delayed ACK
In the Go-Back-N protocol, the ACK number is cumulative and defines the sequence number of the next packet expected to arrive.
The sending device keeps a copy of sent frames until the acknowledgements arrive. If a frame is lost or damaged, the timeout mechanism or a NAK causes the sender to retransmit all frames since the last acknowledged one.
If a frame is lost, subsequent frames arrive out of order at the receiver. The receiver sends a NAK and discards all subsequent frames until it receives the one it is expecting.
The sender then resends all frames, beginning with the one whose timer expired.
– If ACK1, ACK2, and ACK3 are lost, ACK4 covers them if it arrives before the timer expires.
– If ACK4 arrives after time-out, the last frame and all the frames after that are resent.
If the next ACK arrives before the expiration of any timer, there is no need for retransmission of frames
because ACKs are cumulative in this protocol.
The receiver never resends an ACK.
A delayed ACK (arriving after the timer expires) also triggers the resending of frames.
Link Utilization:
For a window size N ≥ 2a + 1, U = (1 − p) / (1 + 2ap); for N < 2a + 1, U = N(1 − p) / ((2a + 1)(1 − p + Np)).
where
p is the probability of receiving a frame in error
a is the ratio of propagation time to transmission time; a = (tprop × R) / L, where tprop, R, and L denote the propagation time, the data rate of the link, and the number of bits in the frame.
Advantages:
Better link utilization than stop-and-wait; the receiver needs a buffer for only one frame.
Disadvantages:
Sender buffer requirement (copies of all outstanding frames)
Retransmission of many error-free packets following an erroneous packet
The scheme is inefficient when the round-trip delay is large and the data transmission rate is high
If a NAK is lost, a long time is wasted until retransmission of all packets (until another NAK is sent or the timer expires).
iii. Selective Repeat ARQ
• In Selective Repeat ARQ, only the damaged frame is resent. This is more bandwidth efficient but requires more complex processing at the receiver.
• It defines a negative ACK (NAK) to report the sequence number of a damaged frame before the timer
expires.
• Selective Repeat ARQ overcomes the limitations of Go-Back-N by adding 2 new features,
(1) receiver window > 1 frame, so that out-of-order but error-free frames can be accepted
(2) retransmission mechanism is modified – only individual frames are retransmitted. (resends only
selective packets, those that are actually lost)
Send window for Selective Repeat ARQ:
The receiver must maintain a large enough buffer and must contain logic for reinserting the retransmitted frame in the proper sequence.
In Selective Repeat ARQ, an ACK number defines the sequence number of the error-free packet received (acknowledgements are individual, not cumulative).
Retransmission mechanism
– Timer: When the timer expires, only the corresponding frame is retransmitted.
– NAK: whenever an out-of-sequence frame is observed at the receiver, a NAK frame is sent with the sequence number of the lost frame. When the transmitter receives such a NAK frame, it retransmits the specific frame.
Scenarios:
Lost / damaged frame
Lost/ delayed ACK
Link Utilization:
For a window size N ≥ 2a + 1, U = 1 − p; for N < 2a + 1, U = N(1 − p) / (2a + 1).
where
p is the probability of receiving a frame in error
a is the ratio of propagation time to transmission time; a = (tprop × R) / L, where tprop, R, and L denote the propagation time, the data rate of the link, and the number of bits in the frame.
Advantages: only the damaged or lost frames are retransmitted, giving the best bandwidth efficiency of the three ARQ schemes.
Disadvantages: the receiver needs a large buffer and resequencing logic, and both sender and receiver are more complex.
Qus 4 : Briefly explain stop-and-wait and selective repeat ARQ method of flow control.
Qus 5 : Compare Go back N and Selective Repeat sliding window protocol used in DLL.
Bit-oriented framing
The data received from the upper layers, whether text, images, audio or video, is carried in the form of bits; the data field of each frame comprises a sequence of bits.
To distinguish each frame, a special 8-bit flag pattern 01111110 is used at the beginning and end of each frame.
In case the flag pattern appears in the data field of frame, bit stuffing is used at sender’s end.
After each five consecutive 1s in the data, an extra 0 bit is inserted to prevent the bit pattern from looking like a flag. The extra bit stuffed by the sender is then destuffed at the receiver's end.
An example of a bit-oriented protocol is HDLC (High-Level Data Link Control).
Fig: Original data, and the frame after bit stuffing.
Byte-oriented framing
The data received from the upper layers is carried in the form of bytes, such as ASCII characters. Each frame comprises a sequence of bytes.
To distinguish each frame, a special 1-byte flag is used at the beginning and end of each frame; the flag usually comprises special characters depending on the protocol being used.
In case the flag byte appears in the data field of frame, byte stuffing is used at sender’s end.
An extra byte called the escape character (ESC) is stuffed into the data whenever a character with the same pattern as the flag appears. The extra byte stuffed by the sender is then destuffed at the receiver's end.
Fig (a): Frame delimited by flag bytes (b) Four examples of byte sequences before and after byte stuffing.
HDLC (High-Level Data Link Control)
HDLC is a bit-oriented protocol for communication over point-to-point and multipoint links, developed by the International Organization for Standardization (ISO). It specifies a packetization standard for serial links and is one of the most common data link layer protocols.
Features :
i. Reliable protocol (selective repeat or go-back-N)
ii. provides both connectionless service and connection oriented services
iii. Full-duplex communication (receive and transmit at the same time)
iv. Bit-oriented protocol
v. Flow control (adjust window size based on receiver capability).
vi. Uses physical layer clocking and synchronization to send and receive frames
To satisfy a variety of applications, HDLC defines three types of stations, two link configurations, and three
data-transfer modes of operation.
Stations:
Primary: sends data, controls the link with commands, and handles error recovery.
Secondary: receives data and responds to control messages. The primary station maintains a separate logical link with each secondary station.
Combined: can issue both commands and responses.
Link configurations:
Unbalanced: consists of one primary station and one or more secondary stations.
Balanced: consists of two combined stations.
HDLC defines three types of frames: information frames (I-frames), supervisory frames (S-frames), and unnumbered frames (U-frames). Each type of frame serves as an envelope for the transmission of a different type of message.
Flag Fields:
Flag fields delimit the frame at both ends with the unique pattern 01111110. A single flag may be used
as the closing flag for one frame and the opening flag for the next.
The pattern 01111110 could be found inside a frame, and using it as a delimiter would then disrupt the inner structure of the frame. Thus a method named bit stuffing is used, in which the sender inserts a 0 after every occurrence of five consecutive 1s.
With the use of bit stuffing, arbitrary bit patterns can be inserted into the data field of the frame. This
property is known as data transparency.
Address Field :
The address field identifies the secondary station that transmitted or is to receive the frame. This field is
not needed for point-to-point links, but is always included for the sake of uniformity.
Control Field :
1) Information frames (I-frames) transport user data and control information relating to user data (piggybacking).
The first bit defines the type. If the first bit of the control field is 0, this means the frame is an I-frame.
• The next 3 bits, called N(S), define the sequence number of the frame.
• The last 3 bits, called N(R), correspond to the acknowledgment number when piggybacking is used.
• The single bit between N(S) and N(R) is called the P/F bit.
The P/F field is a single bit with a dual purpose.
o When the bit is set (bit = 1) in a frame sent by a primary station to a secondary (when the address field contains the address of the receiver), it means poll.
o When the bit is set (bit = 1) in a frame sent by a secondary to a primary (when the address field contains the address of the sender), it means final.
2) Supervisory frames (S-frames) provide the ARQ mechanism when piggybacking is not used. S-frames do not
have information fields.
• If the first 2 bits of the control field are 10, the frame is an S-frame.
• The last 3 bits, called N(R), correspond to the acknowledgment number (ACK) or negative acknowledgment number (NAK), depending on the type of S-frame.
• The 2 bits called code are used to define the type of S-frame itself. With 2 bits, there are four types of S-frames.
• The fifth bit in the control field is the P/F bit.
1. Receive ready (RR).- If the value of the code subfield is 00, it is an RR S-frame. This kind of frame
acknowledges the receipt of a frame or group of frames. In this case, the value N(R) field defines the
acknowledgment number.
2. Receive not ready (RNR).- If the value of the code subfield is 10, it is an RNR S-frame. It acknowledges the
receipt of a frame or group of frames, and it announces that the receiver is busy and cannot receive more
frames. It acts as a kind of congestion control mechanism by asking the sender to slow down. The value of
N(R) is the acknowledgment number.
3. Reject (REJ).- If the value of the code subfield is 01, it is a REJ S-frame. It is a NAK that can be used in Go-Back-N ARQ to improve the efficiency of the process by informing the sender, before the sender's timer expires, that the last frame was lost or damaged. The value of N(R) is the negative acknowledgment number.
4. Selective reject (SREJ).- If the value of the code subfield is 11, it is an SREJ S-frame. This is a NAK frame used
in Selective Repeat ARQ. The value of N(R) is the negative acknowledgment number.
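The I- and S-frame layouts above can be decoded mechanically. The sketch below assumes the "first bit" of the control field is the least-significant bit of the byte; that serialization order is an assumption of this example, not something the text specifies:

```python
def parse_control(byte: int):
    """Decode an HDLC control byte, taking the 'first bit' as the LSB (assumption)."""
    if byte & 0b1 == 0:                      # first bit 0 -> I-frame
        ns = (byte >> 1) & 0b111             # N(S): send sequence number
        pf = (byte >> 4) & 0b1               # P/F bit
        nr = (byte >> 5) & 0b111             # N(R): piggybacked ack number
        return ("I", ns, pf, nr)
    if byte & 0b11 == 0b01:                  # first bits 1,0 -> S-frame
        code = (byte >> 2) & 0b11            # S-frame type code (RR/RNR/REJ/SREJ)
        pf = (byte >> 4) & 0b1
        nr = (byte >> 5) & 0b111
        return ("S", code, pf, nr)
    # first bits 1,1 -> U-frame: 2-bit prefix and 3-bit suffix around the P/F bit
    return ("U", (byte >> 2) & 0b11, (byte >> 5) & 0b111)

# I-frame carrying N(S) = 2, P/F = 1, N(R) = 5
assert parse_control((2 << 1) | (1 << 4) | (5 << 5)) == ("I", 2, 1, 5)
```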
3) Unnumbered frames (U-frames) are used to exchange session management and control information between
connected devices.
U-frames contain an information field, but it is used for system management information, not user data. U-frame codes are divided into two sections:
• a 2-bit prefix before the P/F bit
• a 3-bit suffix after the P/F bit.
Together, these two segments (5 bits) can be used to create up to 32 different types of U-frames.
Information Field:
The field can contain any sequence of bits but must consist of an integral number of octets.
FCS Field:
The frame check sequence (FCS) is an error-detecting code calculated from the remaining bits of the frame, exclusive of flags. The normal code is the 16-bit CRC-CCITT. An optional 32-bit FCS, using CRC-32, may also be used.
PPP(Point-to-point protocol)
PPP is a protocol that can be used to establish communication between any two devices that need to exchange information. Information is exchanged in the form of structured data packets. The point-to-point links that utilize this protocol should be able to support full-duplex communication. PPP consists of three components:
1. Encapsulation
2. Link Control Protocol (LCP)
3. Network Control Protocol (NCP)
Encapsulation
Encapsulation is provided by PPP so that different protocols at the network layer can be supported simultaneously. Data is sent in frames, whose fields are transmitted from left to right.
• Flag—A single byte that indicates the beginning or end of a frame. The flag field consists of the binary
sequence 01111110.
• Address—A single byte that contains the binary sequence 11111111, the standard broadcast address.
PPP does not assign individual station addresses.
• Control—A single byte that contains the binary sequence 00000011, which calls for transmission of user data in an unnumbered frame.
• Protocol—2 bytes that identify the protocol encapsulated in the information field of the frame.
• Data—Zero or more bytes that contain the datagram for the protocol specified in the protocol field. The default maximum length of the information field is 1,500 bytes.
• FCS—A 2-byte (optionally 4-byte) frame check sequence for error detection, followed by the closing flag.
PPP phases :
Link Dead
The protocol starts with the line in the DEAD state, which means that no physical layer carrier is
present and no physical layer connection exists. After physical connection is established, the line moves to
ESTABLISH.
Link Establishment
This phase is where Link Control Protocol (LCP) negotiation is attempted. If successful, control goes either to the authentication phase or the network-layer protocol phase, depending on whether authentication is desired.
When one of the end machines starts, the connection goes to the establishing state. LCP negotiates data link protocol options for establishing the link. It also provides a way for the two processes to test the line quality, to see if they consider it good enough to set up a connection.
Authentication Phase
This phase is optional. It allows the sides to authenticate each other with username and password.
If successful, control goes to the network-layer protocol phase. Here, the two parties can check on each
other's identities if desired.
Network-Layer Protocol Phase
This phase is where the Network Control Protocol (NCP) for each desired network-layer protocol is invoked. For example, IPCP is used in establishing IP service over the line.
The connection remains in this state until one of the endpoints wants to terminate the connection.
Closing down of network protocols also occur in this phase. Some examples of NLPs are Internet Protocol
(IP), AppleTalk (AT) etc. The corresponding NCPs are Internet Protocol Control Protocol (IPCP) for IP,
AppleTalk Control Protocol (ATCP) for AT etc.
If the configuration is successful, OPEN is reached and data transports can take place. When data
transport is finished, the line moves into the TERMINATE phase, and from there, back to DEAD when the
carrier is dropped.
Link Termination
Finally, LCP also allows lines to be taken down when they are no longer needed. This phase closes down the connection. This can happen if there is an authentication failure, if there are so
many checksum errors that the two parties decide to tear down the link automatically, if the link suddenly fails, if an idle-period timeout occurs, or if the user decides to hang up the connection.
Bitstream: 1101011011
Divisor: 10011
Message after appending zeros: 11010110110000
At the sender side, the appended message is divided by the divisor using modulo-2 (XOR) division; the remainder becomes the CRC appended to the frame.
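The modulo-2 long division can be carried out programmatically; this sketch reproduces the sender-side computation for the bitstream above:

```python
def crc_remainder(bitstream: str, divisor: str) -> str:
    """Modulo-2 long division: return the CRC remainder as a bit string."""
    n = len(divisor) - 1                  # degree of the generator = remainder length
    bits = list(bitstream + "0" * n)      # append n zeros, as the sender does
    for i in range(len(bitstream)):
        if bits[i] == "1":                # divide only when the leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = "1" if bits[i + j] != d else "0"   # XOR
    return "".join(bits[-n:])

rem = crc_remainder("1101011011", "10011")
print(rem)                                # → 1110
print("1101011011" + rem)                 # transmitted frame: 11010110111110
```

The receiver divides the received frame by the same divisor; a zero remainder means no detected error.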
Qus 13 : If the bit string 0111101111101111110 is bit stuffed. What is the output string?
Qus 14 : Explain any one error correcting codes used by the datalink layer.
Hamming codes are code words formed by adding redundant check bits, or parity bits, to a data word. A
Hamming code is a linear error-correcting code named after its inventor, Richard Hamming. Hamming codes can
detect up to two bit errors, and correct single-bit errors.
Number of redundancy bits needed
• Let the number of data bits = m
• Number of redundancy bits = r
∴ Total message sent = m + r bits
The value of r must satisfy the following relation:
2^r ≥ m + r + 1
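The smallest r satisfying this relation can be found by direct search:

```python
def redundancy_bits(m: int) -> int:
    """Smallest r with 2**r >= m + r + 1 (single-error-correcting Hamming bound)."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in (4, 7, 11):
    print(m, redundancy_bits(m))   # → 4 3, then 7 4, then 11 4
```

So 4 data bits need 3 check bits, which gives the Hamming(7,4) code used in the parity-check example that follows.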
The structure of the encoder and decoder for a Hamming code:
At the sender side, parity check bits are created as follows (using modulo-2 addition). Example, for Hamming(7,4) with data bits a3 a2 a1 a0:
r0 = a2 + a1 + a0
r1 = a3 + a2 + a1
r2 = a1 + a0 + a3
At the receiver side, the checker in the decoder creates a 3-bit syndrome (s2 s1 s0) from the received data bits b3 b2 b1 b0 and check bits q2 q1 q0:
s0 = b2 + b1 + b0 + q0
s1 = b3 + b2 + b1 + q1
s2 = b1 + b0 + b3 + q2
Thus the erroneous bit is identified, and the receiver can correct it by inverting that bit.
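The parity and syndrome equations translate directly into code. This sketch assumes the codeword layout a3 a2 a1 a0 r2 r1 r0 (an assumed ordering; the equations themselves are the ones given above):

```python
def encode(a3, a2, a1, a0):
    """Compute the three parity bits from the data bits (modulo-2 sums)."""
    r0 = a2 ^ a1 ^ a0
    r1 = a3 ^ a2 ^ a1
    r2 = a1 ^ a0 ^ a3
    return [a3, a2, a1, a0, r2, r1, r0]

def correct(word):
    """Compute the syndrome (s2 s1 s0) and flip the single bit it identifies."""
    b3, b2, b1, b0, q2, q1, q0 = word
    s0 = b2 ^ b1 ^ b0 ^ q0
    s1 = b3 ^ b2 ^ b1 ^ q1
    s2 = b1 ^ b0 ^ b3 ^ q2
    syndrome = (s2 << 2) | (s1 << 1) | s0
    # syndrome value -> position of the erroneous bit in the word list
    pos = {0b101: 3, 0b111: 2, 0b011: 1, 0b110: 0, 0b001: 6, 0b010: 5, 0b100: 4}
    word = list(word)
    if syndrome:
        word[pos[syndrome]] ^= 1
    return word[:4]                       # recovered data bits b3 b2 b1 b0

cw = encode(1, 0, 1, 1)
cw[2] ^= 1                                # corrupt one bit in transit
assert correct(cw) == [1, 0, 1, 1]        # the single-bit error is corrected
```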
MAC Sublayer
Network links fall into two categories:
• Point-to-point links – e.g. PPP for dial-up access, or a point-to-point link between an Ethernet switch and a host.
• Broadcast links (shared wire or medium) – e.g. traditional Ethernet, or an 802.11 wireless LAN (WiFi).
The medium access control (MAC) sublayer determines which station may use a broadcast channel next.
In broadcast networks, single channel is shared by several stations. This channel can be allocated to only
one transmitting user at a time.
There are two different methods of channel allocations:
o Static Channel Allocation
o Dynamic Channel Allocation
Static Channel Allocation
In this method, a single channel is divided among various users either on the basis of frequency or on the basis of time, using either FDM (Frequency Division Multiplexing) or TDM (Time Division Multiplexing).
In FDM, a fixed frequency band is assigned to each user, whereas in TDM, a fixed time slot is assigned to each user.
Dynamic Channel Allocation
In this method, no user is assigned a fixed frequency or fixed time slot. All users are dynamically assigned a frequency or time slot, depending on their requirements.
ALOHA
• ALOHA was developed at University of Hawaii in early 1970s by Norman Abramson.
• It was used for ground-based radio broadcasting.
• In this method, stations share a common channel.
• When two stations transmit simultaneously, collision occurs and frames are lost.
Pure ALOHA
• In pure ALOHA, stations transmit frames whenever they have data to send.
• When two stations transmit simultaneously, there is collision and frames are lost.
• In pure ALOHA, whenever any station transmits a frame, it expects an acknowledgement from the receiver.
• If acknowledgement is not received within specified time, the station assumes that the frame has been lost.
• If the frame is lost, station waits for a random amount of time and sends it again.
• This waiting time must be random; otherwise, same frames will collide again and again.
• Whenever two frames try to occupy the channel at the same time, there will be collision and both the
frames will be lost.
• If first bit of a new frame overlaps with the last bit of a frame almost finished, both frames will be lost and
both will have to be retransmitted.
A collision involves two or more stations. If all these stations try to resend their frames after the time-out, the frames will collide again. Pure ALOHA dictates that when the time-out period passes, each station waits a random amount of time before resending its frame. The randomness will help avoid more collisions. We call this time the back-off time TB.
Pure ALOHA has a second method to prevent congesting the channel with retransmitted frames. After a maximum number of retransmission attempts Kmax, a station must give up and try later.
In this method, for the Kth retransmission, a multiplier in the range 0 to 2^K − 1 is randomly chosen and multiplied by Tp (the maximum propagation time) or Tfr (the average time required to send out a frame) to find TB. With this procedure, known as binary exponential back-off, the range of the random numbers increases after each collision. The value of Kmax is usually chosen as 15.
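The back-off computation can be sketched as follows (using Tfr as the multiplicand, an arbitrary choice between the two options mentioned in the text):

```python
import random

def backoff_time(k: int, t_fr: float) -> float:
    """Binary exponential back-off before retransmission attempt k."""
    r = random.randint(0, 2 ** k - 1)   # range of multipliers doubles per collision
    return r * t_fr

# After the 3rd collision the wait is between 0 and 7 frame times
assert all(0 <= backoff_time(3, 1.0) <= 7 for _ in range(100))
```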
Vulnerable time:
The vulnerable time is the interval in which there is a possibility of collision. Assume that the stations send fixed-length frames, each taking Tfr s to send. For pure ALOHA the vulnerable time is 2 × Tfr: a frame collides with any frame whose transmission began up to Tfr before it started or begins up to Tfr after it starts.
Throughput:
Let us call G the average number of frames generated by the system during one frame transmission time.
The throughput for pure ALOHA is
S = G × e^(−2G)
The maximum throughput is
Smax = 0.184, when G = 1/2.
In other words, if one-half a frame is generated during one frame transmission time (in other words, one
frame during two frame transmission times), then 18.4 percent of these frames reach their destination successfully.
Slotted ALOHA
• Slotted ALOHA was invented to improve the efficiency of pure ALOHA.
• In slotted ALOHA, time of the channel is divided into intervals called slots.
• The station can send a frame only at the beginning of the slot and only one frame is sent in each slot.
• If any station is not able to place the frame onto the channel at the beginning of the slot, it has to wait until
the next time slot.
• There is still a possibility of collision if two stations try to send at the beginning of the same time slot.
In slotted ALOHA, the vulnerable time is reduced to one frame transmission time, equal to Tfr.
Throughput:
It can be proved that the average number of successful transmissions for slotted ALOHA is
S = G × e^(−G)
The maximum throughput Smax is 0.368, when G = 1. In other words, if a frame is generated during one frame transmission time, then 36.8 percent of these frames reach their destination successfully.
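The two throughput formulas can be checked numerically:

```python
import math

def pure_aloha(G: float) -> float:
    return G * math.exp(-2 * G)         # S = G e^(-2G)

def slotted_aloha(G: float) -> float:
    return G * math.exp(-G)             # S = G e^(-G)

print(round(pure_aloha(0.5), 3))        # → 0.184  (maximum, at G = 1/2)
print(round(slotted_aloha(1.0), 3))     # → 0.368  (maximum, at G = 1)
```

Slotted ALOHA doubles the peak throughput because restricting transmissions to slot boundaries halves the vulnerable time.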
CSMA (Carrier Sense Multiple Access)
In CSMA, a station senses the channel before transmitting: if the channel is idle, it may transmit; if the channel is busy, it must wait. There are three persistence methods:
1-Persistent CSMA
Non-Persistent CSMA
P-Persistent CSMA
1-Persistent CSMA
• In this method, a station that wants to transmit data continuously senses the channel to check whether the channel is idle or busy.
• As soon as the channel becomes idle, the station transmits its frame immediately (with probability 1); hence the name 1-persistent.
Non-Persistent CSMA
• In this method, a station that has a frame senses the channel; if the channel is idle, it sends immediately.
• If the channel is busy, the station does not keep sensing; it waits a random amount of time and then senses the channel again. This reduces collisions but can leave the channel idle.
P-Persistent CSMA
• In this method, the channel has time slots such that the slot duration is equal to or greater than the maximum propagation delay time.
• When a station is ready to send, it senses the channel.
• If the channel is busy, the station waits until the next slot.
• If the channel is idle, it transmits the frame with probability p; with probability q = 1 − p it defers to the next slot and repeats the process.
• This reduces the chance of collision and improves the efficiency of the network.
CSMA/CD (Carrier Sense Multiple Access with Collision Detection)
• In CSMA/CD, a station that sends its data on the channel continues to sense the channel even after data transmission begins.
• If collision is detected, the station aborts its transmission and waits for a random amount of time & sends
its data again.
• As soon as a collision is detected, the transmitting station releases a jam signal.
• Jam signal alerts other stations. Stations are not supposed to transmit immediately after the collision has
occurred.
CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance)
CSMA/CA avoids collisions using three strategies: the interframe space, the contention window, and acknowledgments.
Interframe Space
• Whenever the channel is found idle, the station does not transmit immediately.
• It waits for a period of time called Interframe Space (IFS).
• When the channel is sensed idle, it is possible that some distant station has already started transmitting and its signal has not yet reached this station.
• Therefore, the purpose of the IFS time is to allow that transmitted signal to reach this station.
• If, after the IFS time, the channel is still idle, the station can send its frames.
Contention Window
• The contention window is an amount of time divided into slots. A station that is ready to send chooses a random number of slots as its wait time; the number of slots in the window doubles each time the station fails to detect an idle channel after the IFS.
Acknowledgment
• Despite all the precautions, collisions may occur and destroy the data.
• Positive acknowledgement and the time-out timer help guarantee that the receiver has received the frame.
Reservation
• In the reservation method, a station must make a reservation before sending data.
• Time is divided into intervals; in each interval, a reservation frame precedes the data frames sent in that interval.
Polling
• The polling method works in networks where primary and secondary stations exist.
• All data exchanges are made through the primary device, even when the final destination is a secondary device.
• The primary device controls the link; the secondary devices follow its instructions.
Token Passing
• Token passing method is used in those networks where the stations are organized in a logical ring.
• In such networks, a special packet called token is circulated through the ring.
• Station that possesses the token has the right to access the channel.
• Whenever any station has some data to send, it waits for the token. It transmits data only after it gets the
possession of token.
• After transmitting the data, the station releases the token and passes it to the next station in the ring.
• If any station that receives the token has no data to send, it simply passes the token to the next station in
the ring.
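One circulation of the token around the logical ring, as described above, can be sketched as follows (the data structures are illustrative assumptions):

```python
def token_round(stations, start=0):
    """One full circulation of the token around a logical ring.

    `stations` is a list of per-station frame queues; a station holding the
    token sends one frame (if it has any) and then passes the token on.
    Returns the frames transmitted, in order.
    """
    transmitted = []
    n = len(stations)
    for i in range(n):
        holder = (start + i) % n
        if stations[holder]:
            transmitted.append(stations[holder].pop(0))
    return transmitted

# Stations 0 and 2 have data; station 1 simply passes the token on.
print(token_round([["A1"], [], ["C1", "C2"]]))  # → ['A1', 'C1']
```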
Channelization Protocol
• Channelization is a multiple access method in which the available bandwidth of a link is shared in time,
frequency or code between different stations.
• There are three basic channelization protocols:
Frequency Division Multiple Access (FDMA)
Time Division Multiple Access (TDMA)
Code Division Multiple Access (CDMA)
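As a small illustration of CDMA, two stations can transmit at the same time using orthogonal chip sequences (Walsh codes), and the receiver separates them with an inner product. This is a minimal sketch, not a full CDMA implementation:

```python
def cdma_encode(bit, code):
    """Spread one data bit (+1 or -1) over a station's chip sequence."""
    return [bit * c for c in code]

def cdma_decode(channel, code):
    """Recover a station's bit from the summed channel by inner product."""
    return 1 if sum(x * c for x, c in zip(channel, code)) > 0 else -1

# Two orthogonal Walsh codes; both stations transmit simultaneously.
c1, c2 = [1, 1], [1, -1]
channel = [a + b for a, b in zip(cdma_encode(1, c1), cdma_encode(-1, c2))]
print(cdma_decode(channel, c1), cdma_decode(channel, c2))  # → 1 -1
```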
• The IEEE 802 standards are further divided into many parts. Some are,
IEEE 802.1 Bridging (networking) and Network Management
IEEE 802.2 Logical link control (upper part of data link layer)
IEEE 802.3 Ethernet (CSMA/CD)
IEEE 802.4 Token bus
IEEE 802.5 Defines the MAC layer for a Token Ring
IEEE 802.11 Wireless LAN (CSMA/CA)
IEEE 802.15 Wireless PAN
IEEE 802.15.1 (Bluetooth certification)
IEEE 802.16 Broadband Wireless Access (WiMAX certification)
The 802.1 sublayer gives an introduction to the set of standards and gives the details of the interface primitives. It describes the relationship between the OSI model and the 802 standards.
The 802.2 sublayer describes the LLC (logical link control). It is used with the 802.3, 802.4, and 802.5 standards (lower DL sublayers).
The IEEE has subdivided the data link layer into two sublayers: logical link control (LLC) and media access
control (MAC).
LLC provides one single data link control protocol for all IEEE LANs and is implemented in software.
The original Ethernet was created in 1976 at Xerox’s Palo Alto Research Center (PARC). Ethernet is a LAN
protocol that is used in Bus and Star topologies and implements CSMA/CD as the medium access method.
In Standard Ethernet, the MAC sublayer governs the operation of the access method. It also frames data
received from the upper layer and passes them to the physical layer.
Standard Ethernet uses 1-persistent CSMA/CD. The Ethernet architecture can be divided into two layers:
• Station interface
• Data Encapsulation /Decapsulation
• Link management
• Collision Management
Physical layer:
This layer takes care of functions such as encoding/decoding and transmitting/receiving the bit stream on the medium.
The IEEE 802.3 standard defines a basic data frame format that is required for all MAC implementations.
Preamble:
It consists of 7 bytes of alternating 0s and 1s that alert the receiving system to the coming frame and enable it to synchronize its timing. (It is followed by a 1-byte start frame delimiter, 10101011.)
Destination Address (DA):
It consists of 6 bytes.
The DA field identifies which station(s) should receive the frame.
Source Address (SA):
It consists of 6 bytes.
The SA field identifies the sending station.
The SA is always an individual address and the left-most bit in the SA field is always 0.
Length/Type:
It consists of 2 bytes.
This field either defines the upper-layer protocol using the MAC frame (type) or defines the number of bytes in the data field (length).
Data:
This field carries the data encapsulated from the upper-layer protocols; it is a minimum of 46 and a maximum of 1500 bytes.
The minimum frame length restriction (64 bytes) is required for the correct operation of CSMA/CD.
CRC:
The last field contains error detection information; it is a 4-byte CRC-32 computed over the address, length/type, and data fields.
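Ethernet's frame check sequence uses the CRC-32 generator polynomial, the same polynomial Python's `zlib.crc32` implements, so the error-detection idea is easy to demonstrate (the real 802.3 FCS additionally specifies bit ordering and complementing on the wire):

```python
import zlib

payload = b"hello, ethernet"
fcs = zlib.crc32(payload)            # 32-bit checksum appended by the sender

# The receiver recomputes the CRC; a single changed byte alters the result.
assert zlib.crc32(payload) == fcs
corrupted = b"hellp, ethernet"
print(zlib.crc32(corrupted) != fcs)  # → True
```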
Addressing
Each station on an Ethernet network (such as a PC, workstation, or printer) has its own network interface
card (NIC). The NIC fits inside the station and provides the station with a 6- byte physical address (MAC/Ethernet
address).
The Ethernet address is 6 bytes (48 bits), normally written in hexadecimal notation, with a colon between
the bytes.
A source address is always a unicast address--the frame comes from only one station.
The destination address can be unicast, multicast, or broadcast.
o If the least significant bit of the first byte in a destination address is 0, the address is unicast;
otherwise, it is multicast.
A unicast destination address defines only one recipient. Relationship: one to one.
A multicast destination address defines a group of addresses. Relationship: one to many.
A broadcast destination address is a special case of the multicast address: the recipients are all the stations on the LAN, and the address is forty-eight 1s.
The transmission is left to right, byte by byte; however, for each byte, the least significant bit is sent first and
the most significant bit is sent last.
This means that the bit that defines an address as unicast or multicast arrives first at the receiver. This
helps the receiver to immediately know if the packet is unicast or multicast.
Example
Qus : Define the type of the following destination addresses:
a. 4A:30:10:21:10:1A
b. 47:20:1B:2E:08:EE
c. FF:FF:FF:FF:FF:FF
Solution
To find the type of the address, we need to look at the second hexadecimal digit from the left. If it is even, the
address is unicast. If it is odd, the address is multicast. If all digits are Fs, the address is broadcast. Therefore, we
have the following:
a. This is a unicast address because A (1010 in binary) is even.
b. This is a multicast address because 7 (0111 in binary) is odd.
c. This is a broadcast address because all digits are Fs.
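The rule used in the solution, checking the least significant bit of the first byte, is straightforward to verify in code; this helper is a sketch, not a standard API:

```python
def mac_address_type(addr):
    """Classify a MAC address written as colon-separated hex bytes."""
    octets = [int(b, 16) for b in addr.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    return "multicast" if octets[0] & 1 else "unicast"  # LSB of first byte

print(mac_address_type("4A:30:10:21:10:1A"))  # → unicast
print(mac_address_type("47:20:1B:2E:08:EE"))  # → multicast
print(mac_address_type("FF:FF:FF:FF:FF:FF"))  # → broadcast
```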
Example
Qus : Show how the address 47:20:1B:2E:08:EE is sent out on line.
Solution
The address is sent left-to-right, byte by byte; for each byte, it is sent right-to-left( LSB first), bit by bit, as shown
below:
Byte order (left to right): 47 → 20 → 1B → 2E → 08 → EE
Within each byte, bits go right to left (LSB first); for example, 47 is 0100 0111 and is sent as 1110 0010.
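The LSB-first wire order can be reproduced by reversing the bits of each byte while keeping the bytes in left-to-right order; a small sketch:

```python
def wire_order(addr_bytes):
    """Bytes kept left to right; bits of each byte reversed (LSB sent first)."""
    def rev8(b):
        return int(f"{b:08b}"[::-1], 2)  # reverse the 8-bit pattern
    return [rev8(b) for b in addr_bytes]

addr = [0x47, 0x20, 0x1B, 0x2E, 0x08, 0xEE]
print(f"{wire_order(addr)[0]:08b}")  # → 11100010
```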
Access Method:
Slot Time: In an Ethernet network, the round-trip time required for a frame to travel from one end of a maximum-
length network to the other plus the time needed to send the jam sequence is called the slot time.
Slot time = round-trip time + time required to send the jam sequence
The slot time in Ethernet is defined in bits. It is the time required for a station to send 512 bits. This means
that the actual slot time depends on the data rate; for traditional 10-Mbps Ethernet it is 51.2µs.
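The 51.2 µs value follows directly from 512 bit times at 10 Mbps; a quick check:

```python
def slot_time_seconds(bits=512, data_rate_bps=10_000_000):
    """Ethernet slot time: time to transmit `bits` at the given data rate."""
    return bits / data_rate_bps

print(slot_time_seconds() * 1e6)  # ≈ 51.2 microseconds
```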
The Standard Ethernet defines several physical layer implementations; four of the most common are:
As far as collisions are concerned, the hub replaces the coaxial cable used in the other implementations.
The maximum cable length is 100 m, to minimize attenuation.
The 10-Mbps Standard Ethernet has gone through several changes before moving to the higher data rates.
These changes actually opened the road to the evolution of the Ethernet to become compatible with other high-
data-rate LANs.
Bridged Ethernet
Switched Ethernet
Full-Duplex Ethernet
Bridged Ethernet
In an unbridged Ethernet network, the total capacity (10 Mbps) is shared among all stations with a frame to
send; the stations share the bandwidth of the network.
A bridge divides the network into two or more networks. Bandwidth-wise, each network is independent.
Switched Ethernet
The idea of a bridged LAN can be extended further: a layer-2 switch is, in effect, an N-port bridge that gives each station its own dedicated collision domain.
Full-Duplex Ethernet
The full-duplex mode increases the capacity of each domain from 10 to 20 Mbps. In full-duplex switched
Ethernet, there is no need for the CSMA/CD method. In a full duplex switched Ethernet, each station is connected to
the switch via two separate links.
The figure below shows a switched Ethernet in full-duplex mode. Note that instead of using one link between the station and the switch, the configuration uses two links: one to transmit and one to receive. Each link is a point-to-point dedicated path between the station and the switch. There is no longer a need for carrier sensing; there is no longer a need for collision detection.
Four data rates are currently defined for operation over optical fiber and twisted-pair cables: 10 Mbps (Standard Ethernet), 100 Mbps (Fast Ethernet), 1 Gbps (Gigabit Ethernet), and 10 Gbps (Ten-Gigabit Ethernet).
Fast Ethernet
Fast Ethernet was designed to compete with LAN protocols such as FDDI and Fibre Channel. IEEE created Fast Ethernet under the name 802.3u. Fast Ethernet is backward-compatible with Standard Ethernet, but it can transmit data 10 times faster, at a rate of 100 Mbps.
Fast Ethernet is designed to connect two or more stations together. If there are only two stations, they can
be connected point-to-point. Three or more stations need to be connected in a star topology with a hub or a switch
at the center.
A star topology offers two choices:
Half-duplex approach:
- The stations are connected via a hub.
- The access method is CSMA/CD.
Full-duplex approach:
- The connection is made via a switch with buffers at each port.
- There is no need for CSMA/CD.
Encoding:
Manchester encoding needs a 200-Mbaud bandwidth for a data rate of 100 Mbps, which makes it unsuitable for a medium such as twisted-pair cable.
Gigabit Ethernet
The need for an even higher data rate resulted in the design of the Gigabit Ethernet protocol (1000 Mbps).
The IEEE committee calls it Standard 802.3z.
Gigabit Ethernet is designed to connect two or more stations. If there are only two stations, they can be
connected point-to-point. Three or more stations need to be connected in a star topology with a hub or a switch at
the center.
Gigabit Ethernet can be used in half-duplex mode or full duplex mode.
In the full-duplex mode, there is a central switch connected to all computers or other switches. Each
switch has buffers for each input port in which data are stored until they are transmitted. There is
no collision; the maximum length of the cable is determined by the signal attenuation in the cable.
Ten-Gigabit Ethernet
The IEEE committee created Ten-Gigabit Ethernet and called it Standard 802.3ae.
Ten-Gigabit Ethernet operates only in full duplex mode which means there is no need for contention;
CSMA/CD is not used in Ten-Gigabit Ethernet.
The physical layer in Ten-Gigabit Ethernet is designed for using fiber-optic cable over long distances. Three
implementations are the most common:
10GBase-S
10GBase-L
10GBase-E
Qus: State at which layer(s) of the OSI model each of the following interconnecting devices operates:
a) Repeaters
b) Bridges
c) Routers
d) Gateways.
Answers:
a) Repeaters
A repeater connects different segments of a LAN.
A repeater forwards every frame it receives.
Repeaters operate at the physical layer of the OSI model.
It simply repeats, retimes and amplifies the bits it receives.
b) Bridges
Bridges can be used to connect networks with different types of cabling or physical topologies, but with the same communication protocol.
A bridge can divide a large network into two or more smaller and efficient networks.
Bridges learn which workstations are on what network segment by looking at the hardware address
in the frames it receives and entering this information into a table.
If the recipient's MAC address is not in the table, the bridge floods: it sends the frame to all the ports except the one on which it was received.
It inspects incoming traffic and decides whether to forward or discard it.
A bridge operates at both the physical and data-link layers.
Types of bridges:
Transparent Bridges
Source routing bridges
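The learn-and-forward behavior of a transparent bridge described above can be sketched as a small table-driven routine (the frame fields and port numbers here are simplified assumptions):

```python
class LearningBridge:
    """Minimal transparent-bridge sketch: learn source port, forward or flood."""

    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.table = {}  # MAC address -> port it was last seen on

    def handle(self, src, dst, in_port):
        self.table[src] = in_port        # learn which port `src` lives on
        out = self.table.get(dst)
        if out is None:                  # unknown destination: flood
            return [p for p in self.ports if p != in_port]
        if out == in_port:               # same segment as sender: discard
            return []
        return [out]                     # known destination: forward

bridge = LearningBridge(num_ports=2)
print(bridge.handle("A", "B", 0))  # → [1]  (B unknown, so flood)
print(bridge.handle("B", "A", 1))  # → [0]  (A was learned on port 0)
```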
c) Routers
A router operates at the physical, data-link, and network layers of the OSI model.
It relays packets among multiple interconnected networks, routing them by their logical (network-layer) addresses rather than by MAC addresses.
d) Gateways
Gateways are multi-purpose connection devices that can operate at all layers of the OSI model; they connect networks that use different communication protocols and convert between them.
Transparent bridge
- A transparent bridge keeps a table of addresses in memory to determine where to send data.
- A source routing bridge requires the entire route to be included in the transmission and does not route frames intelligently.
[Figure: stations A and C want to send frames to B]
IEEE has defined the specifications for a wireless LAN, called IEEE 802.11, which covers the physical
and data link layers. A wireless LAN uses a wireless transmission medium.
802.11 transmissions are complicated by wireless conditions that vary with even small changes in
the environment. At the frequencies used for 802.11, radio signals can be reflected off solid objects so
that multiple echoes of a transmission may reach a receiver along different paths. The echoes can cancel
or reinforce each other, causing the received signal to fluctuate greatly. This phenomenon is called
multipath fading.
The key idea for overcoming variable wireless conditions is path diversity, or the sending of
information along multiple, independent paths.