Department of Computer Science and Engineering: PREPARED BY: Mr. D. Ramkumar - A.P-CSE
2 Marks
2) What is the difference between TCP & UDP? (NOV 2014 & 2016)
5) Define jitter
Jitter is the variation in delay for packets belonging to the same flow.
Example: 2 ms delay for the first packet, 60 ms delay for the second packet.
Jitter is the VARIATION in delay over time from point-to-point. If the delay of transmissions
varies too widely in a VoIP call, the call quality is greatly degraded. The amount of jitter
tolerable on the network is affected by the depth of the jitter buffer on the network equipment in
the voice path. The more jitter buffer available, the more the network can reduce the effects of
jitter.
26. What is the difference between service point address, logical address and physical address?
Service point addressing: The transport layer header includes a type of address called a service point address or port address, which makes data delivery possible from a specific process on one computer to a specific process on another computer.
Logical addressing: If a packet passes the network boundary, we need another addressing scheme to differentiate the source and destination systems. The network layer adds a header, which indicates the logical address of the sender and receiver.
Physical addressing: If the frames are to be distributed to different systems on the network, the data link layer adds a header, which defines the source machine's address and the destination machine's address.
Scheduling
Traffic shaping
Admission control
Resource reservation
32. What is meant by quality of service or QoS? (NOV 2014 & 2015)
The quality of service defines a set of attributes related to the performance of the connection. For each connection, the user can request a particular attribute. Each service class is associated with a set of attributes.
35. Why is UDP pseudo header included in UDP checksum calculation? What is the effect of
an invalid checksum at the receiving UDP?
The UDP checksum is computed over the payload, the fields of the UDP header, and some fields from the IP header. A pseudo-header is constructed from the IP header in order to perform the calculation (which is done over this pseudo-header, the UDP header, and the payload). The pseudo-header is included to catch packets that have been routed to the wrong IP address.
If checksum validation is enabled and an invalid checksum is detected, the receiving UDP silently discards the user datagram; higher-level features such as packet reassembly are not performed on it.
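As a rough sketch of the calculation described above (the function names are illustrative, not from any particular network stack), the pseudo-header carries the source and destination IP addresses, a zero byte, the protocol number (17 for UDP), and the UDP length, and the standard one's-complement Internet checksum is taken over it together with the UDP header and payload:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """Checksum over pseudo-header + UDP header + payload.
    The pseudo-header is src IP, dst IP, zero, protocol 17, UDP length."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return internet_checksum(pseudo + udp_segment)
```

A useful property for verification: recomputing the checksum over a segment whose checksum field already holds the correct value yields 0, which is how the receiver detects corruption or misrouting.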
36. How can the effect of jitter be compensated? What type of applications require this compensation?
Jitter is an undesirable effect caused by the inherent tendencies of TCP/IP networks and
components.
Jitter is defined as a variation in the delay of received packets. The sending side transmits
packets in a continuous stream and spaces them evenly apart. Because of network congestion,
improper queuing, or configuration errors, the delay between packets can vary instead of
remaining constant. This variation causes problems for audio playback at the receiving end.
Playback may experience gaps while waiting for the arrival of variable delayed packets.
When a router receives an audio stream for VoIP, it must compensate for any jitter that it detects.
The playout delay buffer mechanism handles this function. Playout delay is the amount of time that elapses between when a packet is received and when it is played out. The playout delay buffer must buffer these packets and then play them out in a steady stream to the DSPs, which convert the packets back into an analog audio stream. The playout delay buffer is also referred to as the dejitter buffer.
Processes can directly identify each other with an OS-assigned process ID (pid). More commonly, processes indirectly identify each other using a port or mailbox: the source sends a message to a port, and the destination receives the message from that port. A UDP port is 16 bits, so there are 64K possible ports, which is not enough for all Internet hosts; a process is therefore identified as a port on a particular host, i.e. a (port, host) pair.
To send a message, the process learns the port in the following way: a client initiates a message exchange with a server process. The server knows the client's port (contained in the message header) and can reply to it. The server accepts messages at a well-known port.
Examples: DNS at port 53, mail at port 25
41. What is the difference between congestion control and flow control? (Nov 2015)
Congestion control
It involves preventing too much data from being injected into the network, thereby causing switches or links to become overloaded. Congestion control is concerned with how hosts and the network interact.
Flow control
It involves preventing the sender from overrunning the capacity of the receiver, so the amount of data flowing from source to destination is restricted. Thus flow control is an end-to-end issue, while congestion control concerns the network as a whole.
42. List the mechanisms used in TCP congestion control mechanism
o Additive Increase/Multiplicative Decrease
o Slow Start
o Fast Retransmit and Fast Recovery
48. List some of the Quality of service parameters of transport layer (May 2015)
Reliability
Delay
Jitter
Bandwidth
49. How does transport layer perform duplication control? (May 2015)
Duplication can be controlled by the use of sequence number & acknowledgment number
50. What do you mean by slow start in TCP congestion? (May 2016)
The sender starts with a very slow rate of transmission but increases the rate rapidly to
reach a threshold
16 MARKS
1. Explain in detail about the User Datagram Protocol (UDP)
UDP is called a connectionless, unreliable transport protocol. The purpose of UDP is to break up a stream of data into datagrams and to add source and destination port information, a length, and a checksum. It is the receiving application's responsibility to detect and recover from lost or damaged packets, as UDP does not take care of this.
Advantages:
If a process wants to send a small message and doesn’t care much about reliability, it can
use UDP.
Sending a small message by using UDP takes much less interaction between the sender
and receiver than using TCP.
– UDP has a smaller overhead than TCP, especially when the total
size of the messages is small.
– UDP messages can be lost or duplicated, or they may arrive out of order; and they
can arrive faster than the receiver can process them.
– The application programmers using UDP have to consider and tackle these issues
themselves.
– Not buffered -- UDP accepts data and transmits immediately (no buffering before
transmission)
User Datagram:
UDP packets are called user datagrams; each has a fixed-size header of 8 bytes.
Length
This is a 16-bit field that defines the total length of the user datagram, header plus data.
The 16 bits can define a total length of 0 to 65,535 bytes.
Checksum
This field is used to detect errors over the entire user datagram. The calculation of
checksum and its inclusion in the user datagram are optional.
There is no flow-control mechanism that tells the sender to slow down. When an
application process wants to receive a message, one is removed from the front of the queue. If
the queue is empty, the process blocks until a message becomes available.
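The fixed 8-byte header described above can be packed and unpacked directly. The helper names below are illustrative only, a minimal sketch of the four 16-bit fields (source port, destination port, total length, checksum):

```python
import struct

def pack_udp_header(src_port: int, dst_port: int, data: bytes,
                    checksum: int = 0) -> bytes:
    """Build a user datagram: 8-byte header followed by data.
    The length field counts header (8 bytes) plus data."""
    return struct.pack("!HHHH", src_port, dst_port,
                       8 + len(data), checksum) + data

def parse_udp_header(datagram: bytes) -> dict:
    """Unpack the four 16-bit header fields of a user datagram."""
    src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src_port": src, "dst_port": dst, "length": length,
            "checksum": checksum, "data": datagram[8:]}
```

Because the length field covers the whole datagram, a 5-byte payload yields a length of 13.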
2. Describe in detail about TCP segment (Header) format (NOV 2013, 2014)(May & Nov 2015)
A Packet in TCP is called a segment. The below diagram shows the format of the segment.
The segment consists of a 20 to 60 byte header, followed by data from the application program.
The header is 20 bytes if there are no options and up to 60 bytes if it contains options.
Header
– The header is composed of a 20-byte fixed part and an optional part with a variable
length. The total size of the header (in 32-bit words) is specified in HLEN.
Data - The data can have a variable size, up to 65,535 − 20 (IP header) − 20 (TCP header) = 65,495 bytes.
Source port number (16 bits)
– The SOURCE PORT field identifies the TCP process which sent the datagram.
Header Length
– This 4-bit field indicates the number of 4-byte words in the TCP header. The length of
the header can be between 20 and 60 bytes.
Reserved
– This is a 6-bit field reserved for future use.
Code bits
– The CODE BITS (or FLAGS) field contains one or more 1-bit flags
– Control bits to indicate end of stream, acknowledgement field being valid, connection
reset, urgent pointer field being valid, etc.
[CONTROL]: URG (1) - Urgent bit; validates the Urgent Pointer field.
[CONTROL]: ACK (1) - Acknowledgment bit; set if the Acknowledgment Number field is being used.
[CONTROL]: PSH (1) - Push bit; tells the sender that a higher throughput is required.
[CONTROL]: RST (1) - Reset bit; resets the connection when there are conflicting sequence numbers.
[CONTROL]: SYN (1) - Sequence number synchronization; used in connection request and connection confirmation (with ACK) segments.
[CONTROL]: FIN (1) - Used in connection termination: termination request, termination confirmation (with ACK), and acknowledgment of termination confirmation (with ACK).
Note: The process of sending data along with the acknowledgment is called piggybacking
Checksum(16 bit)
– The CHECKSUM field contains a simple checksum over the TCP segment header and
data.
Urgent Pointer (16 bit)
– This 16-bit field, which is valid only if the urgent flag is set, is used when the segment
contains urgent data. It defines the number that must be added to the sequence number to
obtain the number of the last urgent byte in the data section of the segment.
Options
– There can be up to 40 bytes of optional information in the TCP header.
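The header layout above can be decoded with fixed-offset unpacking. The sketch below uses illustrative names of my own; it reads the 20-byte fixed part, derives the header length from HLEN (in 4-byte words), and decodes the six flag bits:

```python
import struct

# One name per flag bit, least-significant first (FIN is bit 0)
FLAG_NAMES = ["FIN", "SYN", "RST", "PSH", "ACK", "URG"]

def parse_tcp_header(segment: bytes) -> dict:
    """Decode the fixed 20-byte TCP header plus options/data."""
    (src, dst, seq, ack, off_flags, window,
     checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    hlen = (off_flags >> 12) * 4          # HLEN counts 4-byte words
    flags = off_flags & 0x3F
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "header_len": hlen,
        "flags": [n for i, n in enumerate(FLAG_NAMES) if flags & (1 << i)],
        "window": window, "checksum": checksum, "urgent": urgent,
        "options": segment[20:hlen], "data": segment[hlen:],
    }
```

For example, a segment with HLEN = 5 and only the SYN bit set decodes to a 20-byte header carrying the single flag "SYN".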
3. Explain in detail about TCP connection establishment & termination (TCP Connection
Management) (NOV 2013) (May & Nov 2015)
Connection establishment:
TCP transmits data in full-duplex mode. When two TCPs in two machines are connected, they are able to send segments to each other simultaneously.
Three-way handshaking.
The connection establishment in TCP is called three way handshaking. The process starts with
the server. The server program tells its TCP that it is ready to accept a connection. This is called
a request for a passive open.
The client program issues a request for an active open. A client that wishes to connect to an open server tells its TCP that it needs to be connected to that particular server. TCP can now start the three-way handshaking process. Each segment carries a sequence number, an acknowledgment number, the control flags, and the window size.
1. The client sends the first segment, a SYN segment, in which only the SYN flag is set. It consumes one sequence number.
2. The server sends the second segment, a SYN + ACK segment, with 2 flag bits set: SYN and ACK. This segment has a dual purpose. It is a SYN segment for communication in the other direction and serves as the acknowledgment for the client's SYN segment. It consumes one sequence number.
3. The client sends the third segment. This is just an ACK segment. It acknowledges the receipt of the second segment with the ACK flag and acknowledgment number field.
Data Transfer
After connection is established, bidirectional data transfer can take place. The client and server
can both send data and acknowledgements.
The below figure shows an example. In this example, after connection is established, the client
sends 2000 bytes of data in two segments. The server then sends 2000 bytes in one segment.
The client sends one more segment. The first three segments carry both data and
acknowledgment, but the last segment carries only an acknowledgement because there are no
more data to be sent.
The data segments sent by the client have the PSH (push) flag set so that the server TCP knows
to deliver data to the server process as soon as they are received.
Pushing Data: The sending TCP uses a buffer to store the stream of data coming from
the sending application program. The sending TCP can select the segment size. The
receiving TCP also buffers the data when they arrive and delivers them to the application
program when the application program is ready or when it is convenient for the receiving
TCP. This type of flexibility increases the efficiency of TCP.
The application program at the sending site can request a push operation. This means
that the sending TCP must not wait for the window to be filled. It must create a segment
and send it immediately. The sending TCP must also set the push bit (PSH) to let the
receiving TCP know that the segment includes data that must be delivered to the
receiving application program as soon as possible and not to wait for more data to come.
Connection termination
Either of the two parties involved in exchanging data (client or server) can close the connection, although it is usually initiated by the client. Most implementations today allow two options for connection termination: three-way handshaking and four-way handshaking with a half-close option.
Three-way handshaking
1. In a normal situation, the client TCP, after receiving a close command from the client process, sends the first segment, a FIN segment in which the FIN flag is set. The FIN segment consumes one sequence number if it does not carry data.
2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends the second segment, a FIN + ACK segment, to confirm the receipt of the FIN segment from the client and at the same time to announce the closing of the connection in the other direction. This segment can also contain the last chunk of data from the server. The FIN + ACK segment consumes one sequence number if it does not carry data.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN segment from the TCP server. This segment contains the acknowledgment number, which is 1 plus the sequence number received in the FIN segment from the server. This segment cannot carry data and consumes no sequence number.
Client Diagram:
CLOSED: While in this state, the client TCP can receive an active open request from the client application program. It sends a SYN segment to the server TCP and goes to the SYN-SENT state.
SYN-SENT: While in this state, the client TCP can receive a SYN + ACK segment from the other TCP. It sends an ACK segment to the other TCP and goes to the ESTABLISHED state. This is the data transfer state. The client remains in this state as long as it is sending and receiving data.
ESTABLISHED: While in this state, the client TCP can receive a close request from the client application program. It sends a FIN segment to the other TCP and goes to the FIN-WAIT-1 state.
FIN-WAIT-1: While in this state, the client TCP waits to receive an ACK from the server TCP. When the ACK is received, it goes to the FIN-WAIT-2 state. Now the connection is closed in one direction.
FIN-WAIT-2: The client remains in this state, waiting for the server to close the connection. If it receives a FIN segment, it sends an ACK segment and goes to the TIME-WAIT state.
TIME-WAIT: When the client is in this state, it starts a timer and waits until this timer goes off. After the time-out, the client goes to the CLOSED state, where it began.
Server Diagram:
CLOSED: While in this state, the server TCP can receive an open request from the server application program. It goes to the LISTEN state.
LISTEN: While in this state, the server TCP can receive a SYN segment. It sends a SYN + ACK segment to the client and goes to the SYN-RCVD state.
SYN-RCVD: While in this state, the server TCP receives an ACK and goes to the ESTABLISHED state. This is the data transfer state. The server remains in this state as long as it is receiving and sending data.
ESTABLISHED: While in this state, the server TCP can receive a FIN segment from the client. It sends an ACK and goes to the CLOSE-WAIT state.
CLOSE-WAIT: While in this state, the server waits until it receives a close request from the server program. It then sends a FIN segment and goes to the LAST-ACK state.
LAST-ACK: While in this state, the server waits for the last ACK segment and then goes to the CLOSED state.
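The client and server transitions above can be captured as a small table-driven state machine. This is only a sketch; the event names (`active_open`, `recv SYN`, etc.) are my own shorthand, not protocol terminology:

```python
# (state, event) -> (action, next_state); event names are illustrative
TCP_FSM = {
    ("CLOSED", "active_open"):    ("send SYN", "SYN-SENT"),
    ("CLOSED", "passive_open"):   ("-", "LISTEN"),
    ("SYN-SENT", "recv SYN+ACK"): ("send ACK", "ESTABLISHED"),
    ("LISTEN", "recv SYN"):       ("send SYN+ACK", "SYN-RCVD"),
    ("SYN-RCVD", "recv ACK"):     ("-", "ESTABLISHED"),
    ("ESTABLISHED", "close"):     ("send FIN", "FIN-WAIT-1"),
    ("ESTABLISHED", "recv FIN"):  ("send ACK", "CLOSE-WAIT"),
    ("FIN-WAIT-1", "recv ACK"):   ("-", "FIN-WAIT-2"),
    ("FIN-WAIT-2", "recv FIN"):   ("send ACK", "TIME-WAIT"),
    ("TIME-WAIT", "timeout"):     ("-", "CLOSED"),
    ("CLOSE-WAIT", "close"):      ("send FIN", "LAST-ACK"),
    ("LAST-ACK", "recv ACK"):     ("-", "CLOSED"),
}

def run(start: str, events: list):
    """Drive the machine through a sequence of events,
    returning the final state and the actions taken."""
    state, actions = start, []
    for ev in events:
        action, state = TCP_FSM[(state, ev)]
        actions.append(action)
    return state, actions
```

Running the full client path (active open, data transfer, close) or the full server path (passive open through LAST-ACK) brings each side back to CLOSED.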
4. Explain in detail about TCP flow control OR TCP Adaptive flow control (NOV/DEC 2013,
2014)
TCP uses a sliding window mechanism to control the flow of data. When a connection is
established, each end of the connection allocates a buffer to hold incoming data, and sends the
size of the buffer to the other end. As data arrives, the receiver sends acknowledgements together
with the amount of buffer space available called a window advertisement.
To make the following discussion simpler to follow, we initially ignore the fact that both
the buffers and the sequence numbers are of some finite size and hence will eventually wrap
around. Also, we do not distinguish between a pointer into a buffer where a particular byte of
data is stored and the sequence number for that byte
Looking first at the sending side, three pointers are maintained into the send buffer, each
with an obvious meaning: LastByteAcked, LastByteSent, and LastByteWritten. Clearly
LastByteAcked ≤ LastByteSent
since the receiver cannot have acknowledged a byte that has not yet been sent, and
LastByteSent ≤ LastByteWritten
since TCP cannot send a byte that the application process has not yet written. Also note that none
of the bytes to the left of LastByteAcked need to be saved in the buffer because they have
already been acknowledged, and none of the bytes to the right of LastByteWritten need to be
buffered because they have not yet been generated.
On the receiving side, three pointers are maintained into the receive buffer: LastByteRead, NextByteExpected, and LastByteRcvd. First,
LastByteRead < NextByteExpected
is true because a byte cannot be read by the application until it is received and all preceding bytes have also been received. NextByteExpected points to the byte immediately after the latest byte to meet this criterion. Second,
NextByteExpected ≤ LastByteRcvd+1
since, if data has arrived in order, NextByteExpected points to the byte after LastByteRcvd,
whereas if data has arrived out of order, then NextByteExpected points to the start of the first gap
in the data, as in Figure 5.8. Note that bytes to the left of LastByteRead need not be buffered
because they have already been read by the local application process, and bytes to the right of
LastByteRcvd need not be buffered because they have not yet arrived.
Flow Control
Recall that in a sliding window protocol, the size of the window sets the amount of data that can
be sent without waiting for acknowledgment from the receiver. Thus, the receiver throttles the
sender by advertising a window that is no larger than the amount of data that it can buffer.
Observe that TCP on the receive side must keep
LastByteRcvd−LastByteRead ≤ MaxRcvBuffer
which represents the amount of free space remaining in its buffer. As data arrives, the receiver
acknowledges it as long as all the preceding bytes have also arrived. In addition, LastByteRcvd
moves to the right (is incremented), meaning that the advertised window potentially shrinks.
Whether or not it shrinks depends on how fast the local application process is consuming data. If
the local process is reading data just as fast as it arrives (causing LastByteRead to be
incremented at the same rate as LastByteRcvd), then the advertised window stays open (i.e.,
AdvertisedWindow = MaxRcvBuffer). If, however, the receiving process falls behind, perhaps
because it performs a very expensive operation on each byte of data that it reads, then the
advertised window grows smaller with every segment that arrives, until it eventually goes to 0.
TCP on the send side must then adhere to the advertised window it gets from the receiver.
This means that at any given time, it must ensure that
LastByteSent−LastByteAcked ≤ AdvertisedWindow
Said another way, the sender computes an effective window that limits how much data it can
send:
EffectiveWindow = AdvertisedWindow−(LastByteSent−LastByteAcked)
All the while, the send side must also make sure that the local application process does not overflow the send buffer; that is, unless
LastByteWritten−LastByteAcked ≤ MaxSendBuffer
holds, TCP blocks the sending process and does not allow it to generate more data.
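The send-side bookkeeping can be sketched directly from the pointer inequalities. The class name and layout below are illustrative only:

```python
class SendWindow:
    """Track the three send-side pointers and the two limits derived
    from the advertised window and the send buffer size."""

    def __init__(self, advertised_window: int, max_send_buffer: int):
        self.last_byte_acked = 0
        self.last_byte_sent = 0
        self.last_byte_written = 0
        self.advertised_window = advertised_window
        self.max_send_buffer = max_send_buffer

    def effective_window(self) -> int:
        # EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
        return self.advertised_window - (self.last_byte_sent
                                         - self.last_byte_acked)

    def can_write(self, nbytes: int) -> bool:
        # Block the application if the write would overflow the send buffer:
        # LastByteWritten - LastByteAcked must stay <= MaxSendBuffer
        return (self.last_byte_written + nbytes
                - self.last_byte_acked) <= self.max_send_buffer
```

For instance, with a 4096-byte advertised window, 2000 bytes sent and 1000 acknowledged, the effective window is 3096 bytes.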
5. Explain in detail about TCP adaptive retransmission
TCP copes with the loss of packets using a technique called retransmission. When TCP data arrives, an acknowledgment is sent back to the sender. Whenever a TCP segment is transmitted, a copy of it is also placed on the retransmission queue, and a timer is started that counts down from a particular value to zero. If the timer expires before an acknowledgment arrives, TCP retransmits the data.
Original Algorithm
We begin with a simple algorithm for computing a timeout value between a pair of hosts.
This is the algorithm that was originally described in the TCP specification—and the following
description presents it in those terms—but it could be used by any end-to-end protocol.
The idea is to keep a running average of the RTT and then to compute the timeout as a
function of this RTT. Specifically, every time TCP sends a data segment, it records the time.
When an ACK for that segment arrives, TCP reads the time again, and then takes the difference
between these two times as a SampleRTT. TCP then computes an EstimatedRTT as a weighted
average between the previous estimate and this new sample. That is,
EstimatedRTT = α × EstimatedRTT + (1 − α) × SampleRTT
The parameter α is selected to smooth the EstimatedRTT. A small α tracks changes in the RTT but is perhaps too heavily influenced by temporary fluctuations. On the other hand, a large α is more stable but perhaps not quick enough to adapt to real changes. The original TCP specification recommended a setting of α between 0.8 and 0.9. TCP then uses EstimatedRTT to compute the timeout in a rather conservative way:
TimeOut = 2×EstimatedRTT
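The original algorithm is a simple exponentially weighted moving average. A minimal sketch (function name and initial value are my own choices):

```python
def estimate_rtt(samples, alpha=0.85, initial=1.0):
    """Fold each SampleRTT into the running estimate:
    EstimatedRTT = alpha * EstimatedRTT + (1 - alpha) * SampleRTT,
    then TimeOut = 2 * EstimatedRTT. Returns (EstimatedRTT, TimeOut)."""
    est = initial
    for sample in samples:
        est = alpha * est + (1 - alpha) * sample
    return est, 2 * est
```

With alpha = 0.8 and an initial estimate of 1.0, a single 2.0-second sample moves the estimate to 0.8 × 1.0 + 0.2 × 2.0 = 1.2.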
Karn/Partridge Algorithm
The solution, which was proposed in 1987, is surprisingly simple. Whenever TCP retransmits a segment, it stops taking samples of the RTT; it only measures SampleRTT for segments that have been sent only once. This solution is known as the Karn/Partridge algorithm, after its inventors.
Their proposed fix also includes a second small change to TCP’s timeout mechanism.
Each time TCP retransmits, it sets the next timeout to be twice the last timeout, rather than
basing it on the last EstimatedRTT. That is, Karn and Partridge proposed that TCP use
exponential backoff, similar to what the Ethernet does. The motivation for using exponential
backoff is simple: Congestion is the most likely cause of lost segments, meaning that the TCP
source should not react too aggressively to a timeout. In fact, the more times the connection
times out, the more cautious the source should become
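The two Karn/Partridge rules, exponential backoff of the timer and discarding RTT samples for retransmitted segments, can be sketched as follows (class and cap value are illustrative, not from any specification):

```python
class KarnPartridgeTimer:
    """On each retransmission, double the timeout instead of deriving
    it from EstimatedRTT; only segments sent exactly once yield
    valid RTT samples."""

    def __init__(self, timeout=1.0, max_timeout=64.0):
        self.timeout = timeout
        self.max_timeout = max_timeout   # assumed cap, implementation choice

    def on_retransmit(self):
        # exponential backoff: next timeout is twice the last one
        self.timeout = min(2 * self.timeout, self.max_timeout)

    def sample_is_valid(self, times_sent: int) -> bool:
        # take a SampleRTT only for segments transmitted once
        return times_sent == 1
```

Two successive retransmissions take a 1-second timeout to 4 seconds, reflecting the "more cautious with each timeout" motivation above.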
Jacobson/Karels Algorithm
The Karn/Partridge algorithm was introduced at a time when the Internet was suffering
from high levels of network congestion. Their approach was designed to fix some of the causes
of that congestion, but, although it was an improvement, the congestion was not eliminated. The
following year (1988), two other researchers—Jacobson and Karels—proposed a more drastic
change to TCP to battle congestion. The bulk of that proposed change is described in Chapter 6.
Here, we focus on the aspect of that proposal that is related to deciding when to time out and
retransmit a segment.
The main problem with the original computation is that it does not take the variance of
the sample RTTs into account. Intuitively, if the variation among samples is small, then the
EstimatedRTT can be better trusted and there is no reason for multiplying this estimate by 2 to
compute the timeout. On the other hand, a large variance in the samples suggests that the timeout
value should not be too tightly coupled to the EstimatedRTT.
In the new approach, the sender measures a new SampleRTT as before. It then folds this new sample into the timeout calculation as follows:
Difference = SampleRTT − EstimatedRTT
EstimatedRTT = EstimatedRTT + (δ × Difference)
Deviation = Deviation + δ × (|Difference| − Deviation)
where δ is a fraction between 0 and 1. That is, we calculate both the mean RTT and the variation in that mean.
TCP then computes the timeout value as a function of both EstimatedRTT and Deviation as follows:
TimeOut = μ × EstimatedRTT + φ × Deviation
where, based on experience, μ is typically set to 1 and φ is set to 4. Thus, when the variance is small, TimeOut is close to EstimatedRTT; a large variance causes the Deviation term to dominate the calculation.
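The mean-and-deviation update can be sketched directly from the equations above (the seeding of the estimate from the first sample is my own simplification):

```python
def jacobson_karels(samples, delta=0.125, mu=1, phi=4):
    """Track both the mean RTT and the mean deviation, then set
    TimeOut = mu * EstimatedRTT + phi * Deviation."""
    est, dev = samples[0], 0.0           # seed from the first sample
    for sample in samples[1:]:
        diff = sample - est
        est += delta * diff              # EstimatedRTT update
        dev += delta * (abs(diff) - dev) # Deviation update
    return est, mu * est + phi * dev
```

With steady 100 ms samples the deviation stays 0 and TimeOut equals EstimatedRTT; a jump to 200 ms inflates the deviation, so TimeOut grows well beyond the mean.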
6. Explain in detail about TCP congestion control mechanisms (NOV 2013, 2014, 2015, 2016)
Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle).
Congestion control refers to the mechanisms and techniques to control the congestion and
keep the load below the capacity that can either prevent congestion before it happens or
remove congestion, after it has happened.
There are two categories of congestion control
– Open-loop congestion control (prevention): are applied to prevent congestion
before it happens. In this, congestion control is handled by either the source or
the destination.
– Closed-loop congestion control (removal): try to remove congestion after it
happens.
Factors of congestion:
Congestion Window
The sender’s window size is determined by the receiver and also by congestion in the network.
The sender has two pieces of information:
i) The receiver-advertised window size (rwnd)
ii) The congestion window size (cwnd)
The actual size of the sender's window is the minimum of these two values.
Congestion Policy:
TCP handles congestion is based on three phases
i) Slow start (Exponential Increase )
ii) Additive Increase / Multiplicative Decrease
iii) Fast Retransmit and Fast Recovery
i) Slow Start
In this, the sender starts with a very slow rate of transmission but increases the rate rapidly to
reach a threshold.
Slow start adds another window to the sender's TCP: the congestion window, called "cwnd".
When a new connection is established with a host on another network, the congestion window is
initialized to one segment. Each time an ACK is received, the congestion window is increased
by one segment. The sender can transmit up to the minimum of the congestion window and the
advertised window. The congestion window is flow control imposed by the sender, while the
advertised window is flow control imposed by the receiver. The former is based on the sender's
assessment of perceived network congestion; the latter is related to the amount of available
buffer space at the receiver for this connection.
The sender starts by transmitting one segment and waiting for its ACK. When that ACK is
received, the congestion window is incremented from one to two, and two segments can be sent.
When each of those two segments is acknowledged, the congestion window is increased to four.
This provides an exponential growth, although it is not exactly exponential because the receiver
may delay its ACKs, typically sending one ACK for every two segments that it receives.
Early implementations performed slow start only if the other end was on a different network.
Current implementations always perform slow start.
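The exponential growth described above can be simulated per round trip: every ACK adds one segment to cwnd, and a full window of ACKs arrives each RTT, so cwnd doubles. This sketch assumes no delayed ACKs and no losses:

```python
def slow_start(mss: int, rtts: int) -> list:
    """Return cwnd after each RTT, starting from one segment.
    Each ACK adds one MSS, and one ACK arrives per outstanding
    segment per RTT, so cwnd doubles every round trip."""
    cwnd = mss
    history = [cwnd]
    for _ in range(rtts):
        cwnd += (cwnd // mss) * mss   # one MSS added per ACK received
        history.append(cwnd)
    return history
```

Starting from one segment, four round trips yield the sequence 1, 2, 4, 8, 16 segments.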
– When the connection goes dead while waiting for a timeout to occur (i.e., the advertised
window goes to zero!)
However, in the second case the source has more information. The current value of cwnd
can be saved as a congestion threshold.
This is also known as the “slow start threshold” ssthresh.
When the size of window in bytes reaches this threshold, slow start stops and the next phase
starts.
To avoid congestion before it happens, one must slow down the exponential growth. When the size of the congestion window reaches the slow start threshold, the slow start phase stops and the additive increase phase begins. In this phase, each time the whole window of segments is acknowledged, the size of the congestion window is increased by 1.
After the sender has received acknowledgments for a complete window's worth of segments, the size of the congestion window increases additively until congestion is detected.
The congestion window is incremented as follows each time an ACK arrives:
Increment = MSS × (MSS / CongestionWindow)
CongestionWindow = CongestionWindow + Increment
Multiplicative Decrease
If congestion occurs, the congestion window size must be decreased. Retransmission can occur in one of two cases: when a timer times out, or when three duplicate ACKs are received. In both cases, the threshold is dropped to one-half of the current window size, a multiplicative decrease.
Every time a data packet arrives at the receiving side, the receiver responds with an acknowledgment. When a packet arrives out of order, TCP resends the same acknowledgment it sent the last time. This second transmission of the same acknowledgment is called a duplicate ACK.
When the sending side sees a duplicate ACK, it knows that the other side must have received a packet out of order. The sender waits until it sees some number of duplicate ACKs and then retransmits the missing packet. TCP waits until it has seen three duplicate ACKs before retransmitting the packet.
In this diagram, the destination receives packets 1 and 2, but packet 3 is lost in the network. Thus the destination sends a duplicate ACK for packet 2 when packet 4 arrives, again when packet 5 arrives, and so on. When the sender sees the third duplicate ACK for packet 2 (sent because the receiver had gotten packet 6), it retransmits packet 3. When the retransmitted copy of packet 3 arrives at the destination, the receiver then sends a cumulative ACK for everything up to and including packet 6 back to the sender.
Fast Recovery
After fast retransmit sends what appears to be the missing segment, congestion avoidance, but
not slow start is performed. This is the fast recovery algorithm. It is an improvement that allows
high throughput under moderate congestion, especially for large windows.
The fast retransmit and fast recovery algorithms are usually implemented together as follows.
1. When the third duplicate ACK in a row is received, set ssthresh to one-half the current
congestion window, cwnd, but no less than two segments. Retransmit the missing
segment. Set cwnd to ssthresh plus 3 times the segment size. This inflates the congestion
window by the number of segments that have left the network and which the other end
has cached.
2. Each time another duplicate ACK arrives, increment cwnd by the segment size. This
inflates the congestion window for the additional segment that has left the network.
Transmit a packet, if allowed by the new value of cwnd.
3. When the next ACK arrives that acknowledges new data, set cwnd to ssthresh (the value
set in step 1). This ACK should be the acknowledgment of the retransmission from step
1, one round-trip time after the retransmission. Additionally, this ACK should
acknowledge all the intermediate segments sent between the lost packet and the receipt of
the first duplicate ACK. This step is congestion avoidance, since TCP is down to one-half
the rate it was at when the packet was lost.
When fast retransmit detects three duplicate ACKs, start the recovery process from
congestion avoidance region and use ACKs in the pipe to pace the sending of packets.
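The three steps above can be sketched as small helpers operating on cwnd and ssthresh in bytes (the function names are my own, and segment size stands in for MSS):

```python
def on_triple_dup_ack(cwnd: int, mss: int):
    """Step 1: set ssthresh to half the current cwnd (but at least two
    segments), then inflate cwnd by the three segments known to have
    left the network. Returns (ssthresh, new cwnd); the missing
    segment is retransmitted at this point."""
    ssthresh = max(cwnd // 2, 2 * mss)
    return ssthresh, ssthresh + 3 * mss

def on_extra_dup_ack(cwnd: int, mss: int) -> int:
    """Step 2: each further duplicate ACK means one more segment has
    left the network, so inflate cwnd by one segment."""
    return cwnd + mss

def on_new_ack(ssthresh: int) -> int:
    """Step 3: an ACK for new data deflates cwnd back to ssthresh,
    resuming congestion avoidance rather than slow start."""
    return ssthresh
```

For example, from cwnd = 8000 with a 1000-byte segment size, the third duplicate ACK sets ssthresh to 4000 and cwnd to 7000; one more duplicate ACK inflates cwnd to 8000; the ACK for new data deflates cwnd to 4000.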
DECbit
It was one of the first such congestion-avoidance mechanisms.
The idea here is to more evenly split the responsibility for congestion control between the
routers and the end nodes.
Each router monitors the load it is experiencing and explicitly notifies the end nodes
when congestion is about to occur.
This notification is implemented by setting a binary congestion bit in the packets that
flow through the router: hence the name DECbit.
The destination host then copies this congestion bit into the ACK it sends back to the source.
This average queue length is measured over a time interval that spans the last busy + idle cycle, plus the current busy cycle. (The router is busy when it is transmitting and idle when it is not.)
The above figure shows the queue length at a router as a function of time. Essentially,
the router calculates the area under the curve and divides this value by the time interval to
compute the average queue length
If less than 50% of the packets had the bit set, then the source increases its congestion window
by one packet. If 50% or more of the last window’s worth of packets had the congestion bit set,
the source decreases its congestion window to 0.875 times the previous value.
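The source policy just described (additive increase of one packet, multiplicative decrease to 0.875 of the previous value) can be sketched as follows; the function name and inputs are illustrative, not from any real implementation:

```python
def decbit_adjust(window, congestion_bits):
    """Adjust the congestion window from the congestion bits echoed
    in the last window's worth of ACKs (1 = bit set, 0 = clear)."""
    fraction_set = sum(congestion_bits) / len(congestion_bits)
    if fraction_set < 0.5:
        return window + 1            # additive increase: one packet
    return max(1, window * 0.875)    # multiplicative decrease
```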
A second mechanism, called random early detection (RED), is similar to the DECbit scheme in
that each router is programmed to monitor its own queue length, and when it detects that
congestion is imminent (forthcoming), to notify the source to adjust its congestion window.
The first difference is that rather than explicitly sending a congestion notification message to the source, RED is most commonly implemented such that it implicitly notifies the source of congestion by dropping one of its packets.
The source is effectively notified by the subsequent timeout or duplicate ACKs. In case you haven't already guessed, RED is designed to be used in conjunction with TCP, which currently detects congestion by means of timeouts.
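A heavily simplified sketch of a RED-style queue follows. The thresholds, drop probability, and averaging weight are illustrative defaults not taken from the text, and real RED implementations differ in detail:

```python
import random

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.avg = 0.0
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight

    def enqueue(self, current_len):
        # Exponentially weighted moving average of the queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * current_len
        if self.avg < self.min_th:
            return "enqueue"     # no sign of congestion yet
        if self.avg >= self.max_th:
            return "drop"        # persistent congestion: always drop
        # Between the thresholds, drop with a probability that grows
        # with the average queue length; the drop implicitly notifies
        # the TCP source via a timeout or duplicate ACKs.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return "drop" if random.random() < p else "enqueue"
```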
Source-Based Congestion Avoidance
The general strategy is to detect the initial stages of congestion – before losses occur – from the end hosts.
The general idea of these techniques is to watch for some sign from the network that
some router’s queue is building up and that congestion will happen soon if nothing is
done about it.
In a first scheme, the congestion window normally increases as in TCP, but every two round-trip delays the algorithm checks to see whether the current RTT is greater than the average of the minimum and maximum RTTs seen so far. If it is, the algorithm decreases the congestion window by one-eighth.
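This first check can be sketched as a single function; the name and units are illustrative:

```python
def check_window(cwnd, current_rtt, min_rtt, max_rtt):
    """Run once every two round trips: shrink the window by 1/8 if
    the current RTT exceeds the average of the minimum and maximum
    RTTs seen so far; otherwise leave it unchanged."""
    if current_rtt > (min_rtt + max_rtt) / 2:
        return cwnd - cwnd / 8
    return cwnd
```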
In a second algorithm, the decision as to whether or not to change the current window size is based on changes to both the RTT and the window size. The window is adjusted once every two round-trip delays based on the product (CurrentWindow − OldWindow) × (CurrentRTT − OldRTT). If the result is positive, the source decreases the window size by one-eighth; if the result is negative or 0, the source increases the window by one maximum packet size.
In a third scheme, every RTT the algorithm increases the window size by one packet and compares the throughput achieved to the throughput when the window was one packet smaller. If the difference is less than one-half the throughput achieved when only one packet was in transit – as was the case at the beginning of the connection – the algorithm decreases the window by one packet. This scheme calculates the throughput by dividing the number of bytes outstanding in the network by the RTT.
A fourth scheme compares the measured throughput rate with an expected throughput rate; this algorithm is called TCP Vegas. First, TCP Vegas records BaseRTT, the minimum of all measured round-trip times, and computes ExpectedRate = CongestionWindow / BaseRTT. Second, it calculates ActualRate, the current sending rate, once per round-trip time. Third, TCP Vegas compares ActualRate to ExpectedRate and adjusts the window accordingly. We let Diff = ExpectedRate − ActualRate. Note that Diff is positive or 0 by definition, since ActualRate > ExpectedRate would imply that we need to change BaseRTT to the latest sampled RTT.
We also define two thresholds, α < β, roughly corresponding to having too little and too
much extra data in the network, respectively. When Diff < α, TCP Vegas increases the
congestion window linearly during the next RTT, and when Diff > β, TCP Vegas
decreases the congestion window linearly during the next RTT. TCP Vegas leaves the
congestion window unchanged when α < Diff < β.
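The Vegas adjustment above can be sketched as follows. The variable names follow the text, but the window units (packets) and the default α and β values are assumptions for the sketch:

```python
def vegas_adjust(cwnd, base_rtt, sampled_rtt, alpha=1, beta=3):
    """One TCP Vegas window update, with cwnd in packets."""
    expected_rate = cwnd / base_rtt      # rate with no queuing
    actual_rate = cwnd / sampled_rtt     # measured sending rate
    diff = expected_rate - actual_rate   # extra data in the network
    if diff < alpha:
        return cwnd + 1    # too little extra data: increase linearly
    if diff > beta:
        return cwnd - 1    # too much extra data: decrease linearly
    return cwnd            # alpha <= diff <= beta: leave unchanged
```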
8. Discuss QoS
Quality of Service (QoS) is an internetworking issue that has been discussed more than defined.
i) Flow Characteristics
Traditionally, four types of characteristics are attributed to a flow: reliability, delay, jitter, and bandwidth.
Reliability:
Reliability is a characteristic that a flow needs. Lack of reliability means losing a packet or acknowledgment, which entails retransmission. However, the sensitivity of application programs to reliability is not the same.
For example, reliable transmission is more important for electronic mail, file transfer, and Internet access than for telephony or audio conferencing.
Delay:
Again, applications can tolerate delay in different degrees. In this case, telephony, audio conferencing, video conferencing, and remote login need minimum delay, while delay in file transfer or email is less important.
Jitter:
Jitter is the variation in delay for packets belonging to the same flow.
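A small worked example of jitter as the variation in delay between consecutive packets of the same flow (the delay values are made up for illustration, echoing the 2 ms / 60 ms example in the 2-marks answer):

```python
delays = [2, 60, 5, 41]   # per-packet one-way delays in ms
# Jitter between consecutive packets is the change in delay:
jitter = [abs(b - a) for a, b in zip(delays, delays[1:])]
# A flow with constant delay (e.g. all 20 ms) would have zero jitter.
```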
Bandwidth:
Different applications need different bandwidths. In video conferencing we need to send millions of bits per second, while the total number of bits in an email may not even reach a million.
Flow Classes:
Based on flow characteristics, we can classify flows into groups, with each group having similar levels of characteristics. This categorization is not formal or universal; some protocols, such as ATM, have defined classes.
Techniques to Improve QoS:
Scheduling
Traffic shaping
Admission control
Resource reservation
Application Requirements
We can divide applications into two types: real-time and non-real-time. The latter are
sometimes called traditional data applications, since they have traditionally been the major
applications found on data networks. They include most popular applications like telnet, FTP,
email, web browsing, and so on. All of these applications can work without guarantees of timely
delivery of data. Another term for this non-real-time class of applications is elastic, since they
are able to stretch gracefully in the face of increased delay. Note that these applications can
benefit from shorter-length delays, but they do not become unusable as delays increase. Also
note that their delay requirements vary from the interactive applications like telnet to more
asynchronous ones like email, with interactive bulk transfers like FTP in the middle.
The operation of a playback buffer is illustrated in Figure 6.21. The left hand diagonal line shows
packets being generated at a steady rate. The wavy line shows when the packets arrive, some
variable amount of time after they were sent, depending on what they encountered in the
network. The right-hand diagonal line shows the packets being played back at a steady rate, after
sitting in the playback buffer for some period of time. As long as the playback line is far enough
to the right in time, the variation in network delay is never noticed by the application. However,
if we move the playback line a little to the left, then some packets will begin to arrive too late to
be useful.
For our audio application, there are limits to how far we can delay playing back data. It is
hard to carry on a conversation if the time between when you speak and when your listener hears
you is more than 300 ms. Thus, what we want from the network in this case is a guarantee that
all our data will arrive within 300 ms. If data arrives early, we buffer it until its correct playback
time. If it arrives late, we have no use for it and must discard it.
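The 300 ms rule above can be sketched as a small classification function; the function name and the example timestamps are illustrative:

```python
PLAYBACK_DELAY = 300  # ms after generation at which a packet is played

def classify(gen_time_ms, arrival_time_ms):
    """Buffer a packet until its playback time, or discard it if it
    arrives after that time and is no longer useful."""
    playback_time = gen_time_ms + PLAYBACK_DELAY
    if arrival_time_ms > playback_time:
        return "discard"  # arrived too late to be played back
    return f"buffer for {playback_time - arrival_time_ms} ms"
```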
The first characteristic by which we can categorize applications is their tolerance of loss of data,
where “loss” might occur because a packet arrived too late to be played back as well as arising
from the usual causes in the network. On the one hand, one lost audio sample can be interpolated
from the surrounding samples with relatively little effect on the perceived audio quality. It is
only as more and more samples are lost that quality declines to the point that the speech becomes
incomprehensible. On the other hand, a robot control program is likely to be an example of a
real-time application that cannot tolerate loss—losing the packet that contains the command
instructing the robot armto stop is unacceptable. Thus, we can categorize real-time applications
as tolerant or intolerant depending on whether they can tolerate occasional loss.
Considering this rich space of application requirements, what we need is a richer service
model that meets the needs of any application. This leads us to a service model with not just one
class (best effort), but with several classes, each available to meet the needs of some set of
applications. Towards this end, we are now ready to look at some of the approaches that have
been developed to provide a range of qualities of service. These can be divided into two broad
categories:
Fine-grained approaches, which provide QoS to individual applications or flows
Coarse-grained approaches, which provide QoS to large classes of data or aggregated
traffic
10. What is the need for Nagle’s algorithm? How does it determine when to transmit data?
Returning to the TCP sender, if there is data to send but the window is open less than
MSS, then we may want to wait some amount of time before sending the available data, but the
question is how long? If we wait too long, then we hurt interactive applications like Telnet. If we
don’t wait long enough, then we risk sending a bunch of tiny packets and falling into the silly
window syndrome. The answer is to introduce a timer and to transmit when the timer expires.
While we could use a clock-based timer—for example, one that fires every 100 ms—
Nagle introduced an elegant self-clocking solution. The idea is that as long as TCP has any data
in flight, the sender will eventually receive an ACK. This ACK can be treated like a timer firing,
triggering the transmission of more data. Nagle’s algorithm provides a simple, unified rule for
deciding when to transmit:
In other words, it’s always OK to send a full segment if the window allows. It’s also all right to
immediately send a small amount of data if there are currently no segments in transit, but if there
is anything in flight the sender must wait for an ACK before transmitting the next segment. Thus,
an interactive application like Telnet that continually writes one byte at a time will send data at a
rate of one segment per RTT. Some segments will contain a single byte, while others will contain
as many bytes as the user was able to type in one round-trip time. Because some applications cannot afford such a delay for each write to a TCP connection, the socket interface allows the application to turn off Nagle's algorithm by setting the TCP_NODELAY option. Setting this option means that data is transmitted as soon as possible.
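The transmit rule can be sketched as a single decision function; all names and units here are illustrative, not from any real socket implementation:

```python
def nagle_ok_to_send(available_bytes, window_bytes, mss, unacked_in_flight):
    """Return True if data may be sent now under Nagle's rule."""
    if available_bytes >= mss and window_bytes >= mss:
        return True   # always OK to send a full segment
    if not unacked_in_flight:
        return True   # nothing in flight: send the small data now
    return False      # otherwise wait for an ACK (self-clocking)
```

A one-byte Telnet write goes out immediately only when nothing is in flight; further one-byte writes are coalesced until the next ACK arrives, giving the one-segment-per-RTT behavior described above.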
UNIVERSITY QUESTIONS
2 MARKS
1. Differentiate TCP and UDP. (Q.NO. 2)
2. What is QOS? (Q.NO.32)
16 MARKS
1. Explain the following
(i) TCP header (8) (Q.NO. 2)
(ii) Adaptive flow control (8) (Q.NO. 4)
2. How is congestion controlled? Explain in detail the TCP congestion control (16) (Q.NO. 6)
1. List some of the Quality of service parameters of transport layer (Q.NO. 48)
2. How does transport layer perform duplication control? (Q.NO. 49)
16 MARKS
1. Explain the various fields of TCP header and the working of TCP protocol (16) (Q.NO. 2 & 3)
2. (i) Explain the three-way handshake protocol to establish the transport level connection (8)
(Q.NO. 3)
(ii) List the various congestion control mechanisms. Explain any one in detail (8) (Q.NO. 6)
2 MARKS
1. What is the difference between congestion control and flow control? ( Q.NO 41)
2. What do you mean by QoS? (Q.NO 32)
16 MARKS
1. With a neat architecture, explain TCP in detail (Q.NO 2 & 3)
2. Explain TCP congestion control methods (Q.NO 6)
16 MARKS
1. Define UDP. Discuss the operations of UDP. Explain UDP checksum with one example. (Q.NO 1)
2. Explain in detail the various TCP congestion control mechanisms (Q.NO 6)
2 MARKS
1. Differentiate between TCP and UDP. (Q.NO 2)
16 MARKS
1. Explain various fields of TCP header and the working of the TCP protocol (Q.NO 2)
2. How is congestion controlled? Explain in detail about congestion control techniques in the transport layer (Q.NO 6)
PART- B