
Department of Computer Science and Engineering: PREPARED BY: Mr. D. Ramkumar - A.P-CSE


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

2 Marks

1) List the duties of Transport Layer (TL)


 Packetizing
 Connection Control
 Addressing
 Providing reliability

2) What is the difference between TCP & UDP? (NOV 2014 & 2016)

TCP                                             UDP

Connection-oriented service                     Connectionless service

Reliable                                        Less reliable

Not suitable for multimedia and                 Used for multimedia and multicasting
real-time applications                          applications

3) What is socket? Define socket address.


Socket is the end point of a bi-directional communication flow across an IP-based network
(the Internet).
Socket address is the combination of an IP address (identifying the computer) and a port
(identifying the application process) into a single entity.
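The (IP address, port) pair can be seen directly through the sockets API. A minimal sketch in Python (the loopback address is chosen purely for illustration; port 0 lets the operating system assign any free port):

```python
import socket

# A socket address combines an IP address (the computer) and a port
# (the application process) into a single endpoint identifier.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))      # port 0 asks the OS to pick a free port

ip, port = sock.getsockname()    # the complete socket address
print(f"socket address = {ip}:{port}")
sock.close()
```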

4) What is congestion? How to control congestion?


Congestion in a network is the situation in which an increase in data
transmission results in a reduction in throughput.
Throughput is the amount of data passing through the network. Congestion can be controlled using
two techniques:
 Open-loop congestion control (prevention)
 Closed-loop congestion control (removal)

5) Define jitter
Jitter is the variation in delay for packets belonging to the same flow.
Example: 2ms delay for 1st packet
60ms delay for second packet.
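The jitter in the example above can be computed directly (a minimal sketch using the 2 ms and 60 ms delays from the example):

```python
# Variation in delay for packets of the same flow (values from the example).
delays_ms = [2, 60]                      # delay of 1st and 2nd packet
jitter_ms = max(delays_ms) - min(delays_ms)
print(jitter_ms)                         # 58 ms of jitter
```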

6) What is the use of integrated services?


Integrated Services (IntServ) is a flow-based QoS model in which the user creates a
flow from source to destination and informs all the routers along the path of the resource requirement.

7). Differentiate between delay and jitter.


Voice over IP (VoIP) is susceptible to network behaviors, referred to as delay and jitter, which
can degrade the voice application to the point of being unacceptable to the average user. Delay is
the time taken from point-to-point in a network. Delay can be measured in either one-way or
round-trip delay.

Jitter is the VARIATION in delay over time from point-to-point. If the delay of transmissions
varies too widely in a VoIP call, the call quality is greatly degraded. The amount of jitter
tolerable on the network is affected by the depth of the jitter buffer on the network equipment in
the voice path. The more jitter buffer available, the more the network can reduce the effects of
jitter.

8) Draw UDP header format

9) What is traffic shaping?


Traffic shaping is a mechanism to control the amount and rate of traffic sent to the
network.
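A token bucket is one widely used traffic-shaping mechanism (not named in this answer, but traffic shaping reappears among the QoS techniques in Q27). The sketch below is illustrative; the rate and burst-capacity values are assumptions, not from the text:

```python
class TokenBucket:
    """Token-bucket shaper sketch (rate/capacity values are illustrative)."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens (bytes) added per second
        self.capacity = capacity  # maximum burst size in bytes
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now, size):
        # Refill for the elapsed time, never exceeding the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size   # conforming packet: spend its tokens
            return True
        return False              # non-conforming: delay or drop the packet

bucket = TokenBucket(rate=100, capacity=200)   # 100 B/s sustained, 200 B burst
burst_ok = bucket.allow(0.0, 150)    # fits the initial burst allowance
too_fast = bucket.allow(0.1, 150)    # only ~60 tokens left: shaped out
later_ok = bucket.allow(2.0, 150)    # bucket has refilled by now
print(burst_ok, too_fast, later_ok)
```

Packets that conform to the configured rate pass immediately; bursts beyond the bucket depth are held back, which is exactly the "amount and rate" control the definition describes.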

10) What is the unit of data transfer in UDP and TCP?


In UDP, the unit of data transfer is called a datagram.
In TCP, the unit of data transfer is called a segment.

11) List the timers used by TCP.


1) Retransmission timer
2) Persistence timer
3) Keep-alive timer
4) Time-wait timer

12) Define Silly window syndrome.


Sending a very small amount of data (e.g., 1 byte), far smaller than the header size (20 bytes of
TCP header + 20 bytes of IP header), is called silly window syndrome. Here the capacity of the
network is used inefficiently.
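The inefficiency is easy to quantify: with 1 byte of data and 40 bytes of headers, under 3% of each packet is useful data. A minimal calculation:

```python
# One byte of data carried by 20 bytes of TCP header + 20 bytes of IP header.
tcp_header = 20
ip_header = 20
payload = 1
efficiency = payload / (payload + tcp_header + ip_header)
print(f"useful fraction = {efficiency:.1%}")   # about 2.4% of each packet
```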

13. Explain the main idea of UDP? Or Simple Demultiplexer


The basic idea is for a source process to send a message to a port and for the destination
process to receive the message from a port.

14. What are the different fields in pseudo header?


 Protocol number
 Source IP address
 Destination IP address

15. Define TCP? Or Reliable byte stream


TCP guarantees the reliable, in order delivery of a stream of bytes. It is a full-duplex
protocol, meaning that each TCP connection supports a pair of byte streams, one flowing in each
direction.

16. Define Congestion Control?

It involves preventing too much data from being injected into the network, thereby
causing switches or links to become overloaded. Flow control is an end-to-end issue,
while congestion control is concerned with how hosts and networks interact.

17. State the two kinds of events trigger a state transition?


 A segment arrives from the peer.
 The local application process invokes an operation on TCP.

18. What is meant by segment?


At the sending and receiving ends of the transmission, TCP divides long transmissions
into smaller data units and packages each into a unit called a segment.

19. What is meant by segmentation?


When the size of the data unit received from the upper layer is too long for the network
layer datagram or data link layer frame to handle, the transport protocol divides it into smaller
usable blocks. The dividing process is called segmentation.

20. What is meant by Concatenation?


When the data units belonging to a single session are so small that several can fit
together into a single datagram or frame, the transport protocol combines them into a single data
unit. The combining process is called concatenation.

21. What is rate based design?


In a rate-based design, the receiver tells the sender the rate, expressed in either bytes or
packets per second, at which it is willing to accept incoming data.

22. Define Gateway.


A device used to connect two separate networks that use different communication protocols.

23. What are the two categories of QoS attributes?


The two main categories are,
 User Oriented
 Network Oriented

24. What is RED?


In Random Early Detection, each router is programmed to monitor its own queue length,
and when it detects that congestion is imminent, it notifies the source to adjust its congestion
window.

25. What are the three events involved in the connection?


For security, the transport layer may create a connection between the two end ports. A
connection is a single logical path between the source and destination that is associated with all
packets in a message. Creating a connection involves three steps:
 Connection establishment
 Data transfer
 Connection release

26. What is the difference between service point address, logical address and physical
address?

Service point addressing: The transport layer header includes a type of address called a
service point address or port address, which makes data delivery possible from a specific
process on one computer to a specific process on another computer.

Logical addressing: If a packet passes the network boundary, we need another form of
addressing to differentiate the source and destination systems. The network layer adds a
header which indicates the logical address of the sender and receiver.

Physical addressing: If the frames are to be distributed to different systems on the
network, the data link layer adds a header which defines the source machine's address
and the destination machine's address.

27. Give the approaches to improve the QoS


Four common techniques are:

 Scheduling

 Traffic shaping

 Admission control

 Resource reservation

28. Draw TCP header format

29. How will the congestion be avoided?


Congestion may be avoided using two bits:
BECN - Backward Explicit Congestion Notification
FECN - Forward Explicit Congestion Notification
30. What is the function of BECN BIT?
The BECN bit warns the sender of congestion in network. The sender can respond to this
warning by simply reducing the data rate.

31. What is the function of FECN?


The FECN bit is used to warn the receiver of congestion in the network. The sender and receiver
are communicating with each other and are using some types of flow control at a higher level.

32. What is meant by quality of service or QoS? (NOV 2014 & 2015)
Quality of service defines a set of attributes related to the performance of the connection. For
each connection, the user can request particular attributes. Each service class is associated with a
set of attributes.

33. List out the user related attributes?


User related attributes are
SCR – Sustainable Cell Rate
PCR – Peak Cell Rate
MCR – Minimum Cell Rate
CDVT – Cell Delay Variation Tolerance

34. What are the networks related attributes?


The network related attributes are,
Cell loss ratio (CLR)
Cell transfer delay (CTD)
Cell delay variation (CDV)
Cell error ratio (CER)

35. Why is UDP pseudo header included in UDP checksum calculation? What is the effect of
an invalid checksum at the receiving UDP?
The UDP checksum is performed over the entire payload, and the other fields in the header, and
some fields from the IP header. A pseudo-header is constructed from the IP header in order to
perform the calculation (which is done over this pseudo-header, the UDP header and the
payload). The reason the pseudo-header is included is to catch packets that have been routed to
the wrong IP address.
If checksum validation is enabled and an invalid checksum is detected, the receiving UDP
silently discards the user datagram.
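The calculation over the pseudo-header can be sketched as follows. The IP addresses and port numbers below are illustrative; the core is the standard 16-bit one's-complement Internet checksum, taken over the pseudo-header (source IP, destination IP, zero byte, protocol 17, UDP length), the UDP header, and the payload:

```python
import struct
import socket

def ones_complement_sum(data: bytes) -> int:
    """16-bit one's-complement sum used by the UDP checksum."""
    if len(data) % 2:
        data += b"\x00"                            # pad to a 16-bit boundary
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return total

def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
    # Pseudo-header: source IP, destination IP, zero, protocol 17, UDP length.
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, 17, len(udp_segment))
    return ~ones_complement_sum(pseudo + udp_segment) & 0xFFFF

# UDP header with the checksum field zeroed: src port 1024, dst port 53, length 12.
header = struct.pack("!HHHH", 1024, 53, 12, 0)
csum = udp_checksum("192.0.2.1", "192.0.2.2", header + b"ping")
print(hex(csum))
```

Because the pseudo-header includes both IP addresses, a datagram delivered to the wrong IP address fails this check at the receiver, which is exactly the motivation given above.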

36. How can the effect of jitter be compensated? What type of application require for this
compensation?
Jitter is an undesirable effect caused by the inherent tendencies of TCP/IP networks and
components.
Jitter is defined as a variation in the delay of received packets. The sending side transmits
packets in a continuous stream and spaces them evenly apart. Because of network congestion,
improper queuing, or configuration errors, the delay between packets can vary instead of
remaining constant. This variation causes problems for audio playback at the receiving end.
Playback may experience gaps while waiting for the arrival of variable delayed packets.

When a router receives an audio stream for VoIP, it must compensate for any jitter that it detects.
The playout delay buffer mechanism handles this function. Playout delay is the amount of time

that elapses between the time a voice packet is received at the jitter buffer on the DSP and the
time a voice packet is played out to the codec.

The playout delay buffer must buffer these packets and then play them out in a steady stream to
the DSPs. The DSPs then convert the packets back into an analog audio stream. The playout
delay buffer is also referred to as the dejitter buffer.

37. What is meant by PORT or MAILBOX related with UDP?


A port (or mailbox) is a form of address used to identify the target process.

Processes could directly identify each other with an OS-assigned process ID (pid). More
commonly, processes indirectly identify each other using a port or mailbox: the source sends a
message to a port and the destination receives the message from that port. A UDP port is 16 bits,
so there are 64K possible ports, which is not enough for all Internet hosts; a process is therefore
identified as a (port, host) pair.

To send a message, the process learns the port in the following way: a client initiates a message
exchange with a server process. The server knows the client's port (contained in the message
header) and can reply to it. The server accepts messages at a well-known port.
Examples: DNS at port 53, mail at port 25

38. List out the various features of sliding window protocol.


The key feature of the sliding window protocol is that it permits pipelined communication. In
contrast, with a simple stop-and-wait protocol, the sender waits for an acknowledgment after
transmitting every frame. As a result, there is at most a single outstanding frame on the channel
at any given time, which may be far less than the channel's capacity. For maximum throughput,
the amount of data in transit at any given time should be equal to (channel bandwidth) X
(channel delay).
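The bandwidth x delay rule can be worked as a quick example (the 10 Mbps and 50 ms figures are assumed for illustration):

```python
# For full utilisation, data in transit = (channel bandwidth) x (channel delay).
bandwidth_bps = 10_000_000        # 10 Mbps link (assumed)
delay_s = 0.05                    # 50 ms channel delay (assumed)
bdp_bytes = bandwidth_bps * delay_s / 8
print(bdp_bytes)                  # 62500.0 bytes should be outstanding
```

A window smaller than this bandwidth-delay product leaves the channel idle between acknowledgements, which is the stop-and-wait problem described above.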

39. What is the function of a router?


 Connects network segments together
 Forwards each packet along the correct path

40. What is the advantage of using UDP over TCP?


 UDP can send data faster than TCP
 UDP is suitable for multicasting and multimedia applications

41. What is the difference between congestion control and flow control? (Nov 2015)
Congestion control
It involves preventing too much data from being injected into the network, thereby causing
switches or links to become overloaded. Congestion control is concerned with how hosts and
networks interact.

Flow control
The amount of data flowing from source to destination is restricted. It is an end-to-end issue:
the source could send one byte at a time, but it would then take a long time to transmit n bytes.
42. List the mechanisms used in TCP congestion control mechanism
o Additive Increase/Multiplicative Decrease
o Slow Start
o Fast Retransmit and Fast Recovery
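The additive-increase/multiplicative-decrease rule can be sketched as a toy trace (illustrative, in units of MSS; real TCP also interacts with slow start and retransmission timeouts):

```python
# AIMD: grow cwnd by 1 MSS per RTT; halve it when a loss is detected.
def aimd(cwnd, loss_events):
    trace = [cwnd]
    for loss in loss_events:
        cwnd = max(1, cwnd // 2) if loss else cwnd + 1
        trace.append(cwnd)
    return trace

trace = aimd(1, [False, False, False, True, False])
print(trace)   # [1, 2, 3, 4, 2, 3]: additive increase, then multiplicative decrease
```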

43. List the mechanisms used in TCP congestion avoidance


o DEC bit
o RED(Random Early Detection)
o Source-based Congestion Avoidance

44. Define DEC bit


Each router monitors the load it is experiencing and explicitly notifies the end nodes
when congestion is about to occur. This notification is implemented by setting a binary
congestion bit in the packets that flow through the router, hence the name DEC bit.

45. What is meant by Source-Based Congestion Avoidance?


The general idea of these techniques is to watch for some sign from the network that
some router's queue is building up and that congestion will happen soon if nothing is done about
it.

46. List the approaches to QoS support


Fine-grained approaches, which provide QoS to individual applications or flows
Coarse-grained approaches, which provide QoS to large classes of data or aggregated traffic

47. List the types of application requirements in QoS


o Real-time
o Non-real-time

48. List some of the Quality of service parameters of transport layer (May 2015)
 Reliability
 Delay
 Jitter
 Bandwidth

49. How does transport layer perform duplication control? (May 2015)
Duplication can be controlled by the use of sequence number & acknowledgment number

50. What do you mean by slow start in TCP congestion? (May 2016)
The sender starts with a very slow rate of transmission but increases the rate rapidly to
reach a threshold
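The "rapid increase" is exponential: the congestion window doubles every round-trip time until it reaches the threshold. A toy model in units of MSS (values illustrative):

```python
# Slow start: the congestion window doubles every RTT until it reaches ssthresh.
def slow_start(mss, ssthresh, rtts):
    cwnd = mss
    history = [cwnd]
    for _ in range(rtts):
        cwnd = min(cwnd * 2, ssthresh)   # exponential growth, capped at threshold
        history.append(cwnd)
    return history

growth = slow_start(mss=1, ssthresh=16, rtts=5)
print(growth)   # [1, 2, 4, 8, 16, 16]
```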

51. List the different phases used in TCP connection


 Connection establishment and Data transfer

 Connection termination

16 MARKS

1. Explain the working of USER DATAGRAM PROTOCOL (UDP) or Simple Demultiplexer
(May 2016)

UDP is called a connectionless, unreliable transport protocol. The purpose of UDP
is to break up a stream of data into datagrams and add source and destination port information, a
length, and a checksum. It is the receiving application's responsibility to detect and recover lost or
damaged packets, as UDP does not take care of this.

Advantages:

 It is a very simple protocol using a minimum of overhead.

 If a process wants to send a small message and doesn’t care much about reliability, it can
use UDP.

 It is a convenient protocol for multimedia and multicasting applications.

 Sending a small message by using UDP takes much less interaction between the sender
and receiver than using TCP.

Basic Properties of UDP:

 UDP is a connectionless transport protocol.

– A UDP application sends messages without establishing and then
closing a connection.

– UDP has a smaller overhead than TCP, especially when the total
size of the messages is small.

 UDP does not guarantee reliable data delivery.

– UDP messages can be lost or duplicated, or they may arrive out of order; and they
can arrive faster than the receiver can process them.

– The application programmers using UDP have to consider and tackle these issues
themselves.

– Not buffered -- UDP accepts data and transmits immediately (no buffering before
transmission)

– Full duplex -- concurrent transfers can take place in both directions

 UDP has no mechanism for flow control.

 Multiplexing and Demultiplexing

– Multiplexing is a many-to-one relationship used on the sender side.

– The protocol accepts messages from different processes, differentiated by their
assigned port numbers.

– Demultiplexing is a one-to-many relationship used on the receiver side.

 Encapsulation and Decapsulation


– To send a message from one process to another, the UDP
protocol encapsulates and decapsulates messages in an IP datagram.
– Each UDP message is encapsulated in an IP datagram, and IP
delivers this datagram across the internet.

UDP Message Format

User Datagram:
 UDP packets are called user datagrams; each has a fixed-size header of 8 bytes.

Format of User Datagram:

User datagram has the following fields.

 Source Port Number


This is the port number used by the process running on the source host (local
computer). It is 16 bits long, which means that the port number can range from 0 to
65535. If the source host is the client, the port number is requested by the process and
chosen by the UDP software running on the source host.

 Destination Port Number


This is the port number used by the processes running on the destination host. It is also
16 bits long. The Destination port is usually a 'well known port number' such as 69 for
trivial file transfer protocol, or 53 for DNS. These port numbers allow the remote
machine to recognize a request for a particular type of service. If the destination host is a
client, the server copies the port number it has received in the request packet.

 Length
This is a 16-bit field that defines the total length of the user datagram, header plus data.
The 16 bits can define a total length of 0 to 65535 bytes.

 Checksum
This field is used to detect errors over the entire user datagram. The calculation of
checksum and its inclusion in the user datagram are optional.
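The four 16-bit fields of the 8-byte header can be packed and decoded with a short sketch (the port numbers and payload are illustrative, and the optional checksum is left at zero):

```python
import struct

# Pack the four 16-bit fields: source port, destination port, length, checksum.
datagram = struct.pack("!HHHH", 5000, 53, 8 + 5, 0) + b"hello"

src_port, dst_port, length, checksum = struct.unpack("!HHHH", datagram[:8])
payload = datagram[8:length]            # length covers header plus data
print(src_port, dst_port, length, payload)
```

Note that the length field counts the whole datagram (8-byte header plus 5 bytes of data here), matching the definition above.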

Process communication in UDP


The next issue is how a process learns the port for the process to which it wants to send
the message.
Typically a client process initiates a message exchange with a server process. Once a
client has contacted a server, the server knows the client’s port (it was contained in the message
header) and can reply to it.
The real problem, therefore, is how the client learns the server’s port in the first place. A
common approach is for the server to accept messages at a well known port.
That is, each server receives its messages at some fixed port that is widely published,
much like the emergency telephone service available at the well-known number 911.
In the Internet, for example, the domain name server (DNS) receives messages at well-
known port 53 on each host, the mail service listens for messages at port 25, and the Unix talk
program accepts messages at well-known port 517, and so on.
This mapping is published periodically in an RFC and is available on most Unix
systems in the file /etc/services. Sometimes the client and server use the well-known port only to
agree on some other port for subsequent communication, leaving the well-known port free for
other clients.
A port is purely an abstraction. Exactly how it is implemented differs from system to
system, or more precisely, from OS to OS.
For example, the socket API is an example implementation of ports. Typically, a port is
implemented by a message queue.
When a message arrives, the protocol (e.g., UDP) appends the message to the end of the
queue. Should the queue be full, the message is discarded.

UDP Message queue

There is no flow-control mechanism that tells the sender to slow down. When an
application process wants to receive a message, one is removed from the front of the queue. If
the queue is empty, the process blocks until a message becomes available.
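The port-as-mailbox model above maps directly onto datagram sockets. A minimal loopback sketch (addresses and message contents are illustrative; a real server would use a fixed well-known port rather than an OS-assigned one):

```python
import socket

# Server binds a port (its "mailbox"); the client sends to that address.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # OS assigns a free port
server_addr = server.getsockname()       # the "well-known" address for this demo

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"query", server_addr)     # message goes to the server's port

msg, client_addr = server.recvfrom(1024)     # server learns the client's port here
server.sendto(b"reply:" + msg, client_addr)  # ...and can reply to it

reply, _ = client.recvfrom(1024)
print(reply)
server.close()
client.close()
```

The server discovers the client's port from the received message, exactly as described in the text; `recvfrom` blocks when the queue is empty, mirroring the blocking behaviour above.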

Uses or applications of UDP


 UDP is used for processes with simple request-response communication and little concern
for flow and error control.
 UDP is suitable for a process with its own internal flow- and error-control mechanisms.
 UDP is suitable for multicasting. Multicasting capabilities are embedded in UDP
software but not in TCP software.
 UDP is used for some route updating protocols, such as routing information protocol
(RIP).
 UDP is used for management processes such as SNMP.

2. Describe in detail about TCP segment (Header) format (NOV 2013, 2014)(May & Nov 2015)
A Packet in TCP is called a segment. The below diagram shows the format of the segment.
The segment consists of a 20 to 60 byte header, followed by data from the application program.
The header is 20 bytes if there are no options and up to 60 bytes if it contains options.

 Header
– The header is composed of a 20-byte fixed part and an optional part with a variable
length. The total size of the header (in 32-bit words) is specified in HLEN.

 Data - The data can have a variable size, up to 65,535 − 20 (IP header) − 20 (TCP header) = 65,495 bytes.
 Source port number (16 bits)
– The SOURCE PORT field identifies the TCP process which sent the segment.

 Destination port number (16 bits)


– The DESTINATION PORT field identifies the TCP process which is receiving the
segment.

 Sequence number (32 bits)


– The SEQUENCE NUMBER field identifies the first byte of the outgoing data. The
receiver uses this to re-order segments arriving out of order and to compute an
acknowledgement number.

 Acknowledgement number (32 bits)


– Contains the next sequence number that the sender of the acknowledgement expects to
receive, i.e., the last received sequence number plus the number of bytes received. This
number is used only if the ACK flag is on. The ACKNOWLEDGEMENT NUMBER field
identifies the sequence number of the incoming data that is expected next.

 Header Length
– This 4-bit field indicates the number of 4-byte words in the TCP header. The length of
the header can be between 20 and 60 bytes.

 Reserved
– This is a 6-bit field reserved for future use.

 Code bits
– The CODE BITS (or FLAGS) field contains one or more 1-bit flags
– Control bits to indicate end of stream, acknowledgement field being valid, connection
reset, urgent pointer field being valid, etc.

[CONTROL]: URG (1) – Urgent bit; validates the urgent pointer field.
[CONTROL]: ACK (1) – Acknowledgement bit; set if the acknowledgement number field is
being used.
[CONTROL]: PSH (1) – Push bit; tells the sender that a higher throughput is required.
[CONTROL]: RST (1) – Reset bit; resets the connection when there are conflicting sequence
numbers.
[CONTROL]: SYN (1) – Synchronizes sequence numbers; used in three types of segments:
connection request, connection confirmation (with ACK), and acknowledgement of
connection confirmation.
[CONTROL]: FIN (1) – Used to terminate a connection; appears in termination request,
termination confirmation (with ACK), and acknowledgement of termination confirmation
(with ACK).

 Window Size(16 bit)


– The WINDOW field identifies how much buffer space is available for incoming data.
– During piggybacking, how much data a receiver is willing to accept.

Note: The process of sending data along with the acknowledgment is called piggybacking
 Checksum(16 bit)
– The CHECKSUM field contains a simple checksum over the TCP segment header and
data.
 Urgent Pointer (16 bit)
– This 16-bit field, which is valid only if the urgent flag is set, is used when the segment
contains urgent data. It defines the number that must be added to the sequence number to
obtain the number of the last urgent byte in the data section of the segment.
 Options
– There can be up to 40 bytes of optional information in the TCP header.
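The fixed 20-byte header described above can be built and decoded with a short sketch (all field values are illustrative; only the SYN and ACK flags are set, as in the second handshake segment):

```python
import struct

# Fixed 20-byte header: ports, sequence, acknowledgement, HLEN/flags,
# window, checksum, urgent pointer (all field values are illustrative).
segment = struct.pack("!HHIIBBHHH",
                      1024, 80,        # source and destination ports
                      1000, 2000,      # sequence and acknowledgement numbers
                      5 << 4,          # HLEN = 5 four-byte words = 20 bytes
                      0b00010010,      # flags: ACK (0x10) + SYN (0x02)
                      65535, 0, 0)     # window size, checksum, urgent pointer

src, dst, seq, ack, hlen_res, flags, window, csum, urg = struct.unpack(
    "!HHIIBBHHH", segment)
header_len = (hlen_res >> 4) * 4       # HLEN counts 4-byte words
syn_set = bool(flags & 0x02)
ack_set = bool(flags & 0x10)
print(header_len, syn_set, ack_set)    # 20 True True
```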

3. Explain in detail about TCP connection establishment & termination (TCP Connection
Management) (NOV 2013) (May & Nov 2015)

TCP is connection-oriented. A connection-oriented transport protocol establishes a virtual path


between the source and destination. All the segments belonging to a message are then sent over
this virtual path. Using a single virtual pathway for the entire message facilitates the
acknowledgement process as well as retransmission of damaged or lost frames. TCP, which uses
the services of IP, a connection-less protocol, can be connection-oriented. TCP uses the services
of IP to deliver individual segments to the receiver, but it controls the connection itself. If a
segment is lost or corrupted, it is retransmitted.

In TCP connection-oriented transmission requires two phases:

 Connection establishment and Data transfer

 Connection termination

Connection establishment:

TCP transmits data in full-duplex mode. When two TCPs in two machines are connected, they
are able to send segments to each other simultaneously.

Three-way handshaking.

The connection establishment in TCP is called three way handshaking. The process starts with
the server. The server program tells its TCP that it is ready to accept a connection. This is called
a request for a passive open.

The client program issues a request for an active open. A client that wishes to connect to an
open server tells its TCP that it needs to be connected to that particular server. TCP can now
start the three-way handshaking process. Each segment has the sequence number the
acknowledgement number, the control flags, and the window size, if not empty.

The three steps in this phase are as follows.

1. The client sends the first segment, a SYN segment, in which only the SYN flag is set.
This segment is for synchronization of sequence numbers. It consumes one sequence
number. When the data transfer starts, the sequence number is incremented by 1. A
SYN segment cannot carry data, but it consumes one sequence number.

2. The server sends the second segment, a SYN + ACK segment, with 2 flag bits set: SYN
and ACK. This segment has a dual purpose. It is a SYN segment for communication in
the other direction and serves as the acknowledgement for the SYN segment. It
consumes one sequence number.

3. The client sends the third segment. This is just an ACK segment. It acknowledges the
receipt of the second segment with the ACK flag and acknowledgment number field.
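The three steps above happen inside the sockets API: connect() triggers the SYN, and accept() returns only once SYN, SYN + ACK, and ACK have all been exchanged. A minimal loopback sketch (addresses illustrative):

```python
import socket
import threading

# connect() sends the SYN; accept() completes when the handshake finishes.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)                       # passive open: wait for a SYN
addr = listener.getsockname()

result = {}

def serve():
    conn, peer = listener.accept()       # returns after SYN, SYN+ACK, ACK
    result["conn"] = conn

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)                     # active open: SYN -> SYN+ACK -> ACK
t.join()

established = client.getpeername() == addr
print(established)                       # both ends are now in ESTABLISHED
result["conn"].close()
client.close()
listener.close()
```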

Data Transfer

After connection is established, bidirectional data transfer can take place. The client and server
can both send data and acknowledgements.

The below figure shows an example. In this example, after connection is established, the client
sends 2000 bytes of data in two segments. The server then sends 2000 bytes in one segment.
The client sends one more segment. The first three segments carry both data and
acknowledgment, but the last segment carries only an acknowledgement because there are no
more data to be sent.

The data segments sent by the client have the PSH (push) flag set so that the server TCP knows
to deliver data to the server process as soon as they are received.

Pushing Data: The sending TCP uses a buffer to store the stream of data coming from
the sending application program. The sending TCP can select the segment size. The
receiving TCP also buffers the data when they arrive and delivers them to the application
program when the application program is ready or when it is convenient for the receiving
TCP. This type of flexibility increases the efficiency of TCP.
The application program at the sending site can request a push operation. This means
that the sending TCP must not wait for the window to be filled. It must create a segment
and send it immediately. The sending TCP must also set the push bit (PSH) to let the
receiving TCP know that the segment includes data that must be delivered to the
receiving application program as soon as possible and not to wait for more data to come.

TCP Connection Release (or) Connection Termination

Any of the two parties involved in exchanging data (client or server) can close the connection,
although it is usually initiated by the client. Most implementations today allow two options for
connection termination: three-way handshaking and four-way handshaking with a half-close
option.

Three-way handshaking

In a normal situation, the client TCP, after receiving a close command from the client process,
sends the first segment, a FIN segment in which the FIN flag is set. The FIN segment consumes
one sequence number if it does not carry data.

2. The server TCP, after receiving the FIN segment, informs its process of the situation and
sends the second segment, a FIN + ACK segment, to confirm the receipt of the FIN
segment from the client and at the same time to announce the closing of the connection in
the other direction. This segment can also contain the last chunk of data from the server.
The FIN + ACK segment consumes one sequence number if it does not carry data.

3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the
FIN segment from the TCP server. This segment contains the acknowledgement number,
which is 1 plus the sequence number received in the FIN segment from the server. This
segment cannot carry data and consumes no sequence number.

TCP Connection Management (State transition diagram) (NOV/DEC 2013)


The steps to manage a TCP connection can be represented in a finite state machine with the
eleven states listed in Figure


Client Diagram:

 The client TCP starts in the CLOSED state.

 While in this state, the client TCP can receive an active open request from the client
application program. It sends a SYN segment to the server TCP and goes to the SYN-
SENT state.
 While in this state, the client TCP can receive a SYN + ACK segment from other TCP. It
sends an ACK segment to the other TCP and goes to the ESTABLISHED state. This is
the data transfer state. The client remains in this state as long as it is sending and
receiving data.

 While in this state, the client TCP can receive a close request from the client application
program. It sends a FIN segment to the other TCP and goes to the FIN-WAIT1 state.

 While in this state, the client TCP waits to receive an ACK from the server TCP. When
ACK is received, it goes to the FIN-WAIT2 state. Now the connection is closed in one
direction.

 The client remains in this state, waiting for the server to close the connection. If it
receives a FIN segment, it sends an ACK segment and goes to the TIME-WAIT state.

 When the client is in this state, it starts a timer and waits until this timer goes off. After
the time-out, the client goes to the CLOSED state, where it began.

Server Diagram:

 The server TCP starts in the CLOSED state.

 While in this state, the server TCP can receive an open request from the server
application program, it goes to the LISTEN state.

 While in this state, the server TCP can receive a SYN segment. It sends a SYN + ACK
segment to the client and goes to the SYN-RCVD state.

 While in this state, the server TCP receives an ACK and goes to ESTABLISHED state.
This is the data transfer state. The server remains in this state as long as it is receiving
and sending data.

 While in this state, the server TCP can receive a FIN segment from the client. It can send
an ACK and goes to the CLOSE-WAIT state.

 While in this state, the server waits until it receives a close request from the server
program. It then sends a FIN segment and goes to LAST-ACK state.

 While in this state, the server waits for the last ACK segment and goes to the CLOSED
state.
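The client and server transitions above can be summarized as a lookup table. This is an illustrative sketch only, not a real TCP implementation; the state and event names follow the text, but the table encoding and the `step` helper are our own.

```python
# (state, event) -> next state, per the client and server diagrams above.
TRANSITIONS = {
    # Client side
    ("CLOSED", "active open / send SYN"): "SYN-SENT",
    ("SYN-SENT", "recv SYN+ACK / send ACK"): "ESTABLISHED",
    ("ESTABLISHED", "close / send FIN"): "FIN-WAIT-1",
    ("FIN-WAIT-1", "recv ACK"): "FIN-WAIT-2",
    ("FIN-WAIT-2", "recv FIN / send ACK"): "TIME-WAIT",
    ("TIME-WAIT", "timeout"): "CLOSED",
    # Server side
    ("CLOSED", "passive open"): "LISTEN",
    ("LISTEN", "recv SYN / send SYN+ACK"): "SYN-RCVD",
    ("SYN-RCVD", "recv ACK"): "ESTABLISHED",
    ("ESTABLISHED", "recv FIN / send ACK"): "CLOSE-WAIT",
    ("CLOSE-WAIT", "close / send FIN"): "LAST-ACK",
    ("LAST-ACK", "recv ACK"): "CLOSED",
}

def step(state, event):
    """Return the next state for a given event, per the table above."""
    return TRANSITIONS[(state, event)]
```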

4. Explain in detail about TCP flow control OR TCP Adaptive flow control (NOV/DEC 2013,
2014)

TCP uses a sliding window mechanism to control the flow of data. When a connection is
established, each end of the connection allocates a buffer to hold incoming data, and sends the
size of the buffer to the other end. As data arrives, the receiver sends acknowledgements together
with the amount of buffer space available called a window advertisement.

The sliding window algorithm serves several purposes:
1. It guarantees the reliable delivery of data.
2. It ensures that data is delivered in order.
3. It enforces flow control between the sender and the receiver.

Reliable and Ordered Delivery


To see how the sending and receiving sides of TCP interact with each other to implement
reliable and ordered delivery, consider the situation illustrated in Figure 5.8. TCP on the sending
side maintains a send buffer. This buffer is used to store data that has been sent but not yet
acknowledged, as well as data that has been written by the sending application but not
transmitted. On the receiving side, TCP maintains a receive buffer. This buffer holds data that
arrives out of order, as well as data that is in the correct order (i.e., there are no missing bytes
earlier in the stream) but that the application process has not yet had the chance to read.

To make the following discussion simpler to follow, we initially ignore the fact that both
the buffers and the sequence numbers are of some finite size and hence will eventually wrap
around. Also, we do not distinguish between a pointer into a buffer where a particular byte of
data is stored and the sequence number for that byte

Looking first at the sending side, three pointers are maintained into the send buffer, each
with an obvious meaning: LastByteAcked, LastByteSent, and LastByteWritten. Clearly

LastByteAcked ≤ LastByteSent

since the receiver cannot have acknowledged a byte that has not yet been sent, and
LastByteSent ≤ LastByteWritten

since TCP cannot send a byte that the application process has not yet written. Also note that none
of the bytes to the left of LastByteAcked need to be saved in the buffer because they have
already been acknowledged, and none of the bytes to the right of LastByteWritten need to be
buffered because they have not yet been generated.

A similar set of pointers (sequence numbers) are maintained on the receiving side:
LastByteRead, NextByteExpected, and LastByteRcvd. The inequalities are a little less intuitive,
however, because of the problem of out-of-order delivery. The first relationship

LastByteRead < NextByteExpected

is true because a byte cannot be read by the application until it is received and all preceding
bytes have also been received. NextByteExpected points to the byte immediately after the latest
byte to meet this criterion. Second,

NextByteExpected ≤ LastByteRcvd+1

since, if data has arrived in order, NextByteExpected points to the byte after LastByteRcvd,
whereas if data has arrived out of order, then NextByteExpected points to the start of the first gap
in the data, as in Figure 5.8.Note that bytes to the left of LastByteRead need not be buffered
because they have already been read by the local application process, and bytes to the right of
LastByteRcvd need not be buffered because they have not yet arrived.
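The pointer inequalities on both sides can be expressed as small checks. A minimal sketch, treating byte numbers as plain integers and ignoring sequence-number wraparound, as the text does; the function names are ours:

```python
def sender_ok(last_byte_acked, last_byte_sent, last_byte_written):
    # LastByteAcked <= LastByteSent <= LastByteWritten
    return last_byte_acked <= last_byte_sent <= last_byte_written

def receiver_ok(last_byte_read, next_byte_expected, last_byte_rcvd):
    # LastByteRead < NextByteExpected <= LastByteRcvd + 1
    return last_byte_read < next_byte_expected <= last_byte_rcvd + 1
```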

Flow Control

Recall that in a sliding window protocol, the size of the window sets the amount of data that can
be sent without waiting for acknowledgment from the receiver. Thus, the receiver throttles the
sender by advertising a window that is no larger than the amount of data that it can buffer.
Observe that TCP on the receive side must keep

LastByteRcvd−LastByteRead ≤ MaxRcvBuffer

to avoid overflowing its buffer. It therefore advertises a window size of

AdvertisedWindow = MaxRcvBuffer−((NextByteExpected−1) −LastByteRead)

which represents the amount of free space remaining in its buffer. As data arrives, the receiver
acknowledges it as long as all the preceding bytes have also arrived. In addition, LastByteRcvd
moves to the right (is incremented), meaning that the advertised window potentially shrinks.
Whether or not it shrinks depends on how fast the local application process is consuming data. If
the local process is reading data just as fast as it arrives (causing LastByteRead to be
incremented at the same rate as LastByteRcvd), then the advertised window stays open (i.e.,
AdvertisedWindow = MaxRcvBuffer). If, however, the receiving process falls behind, perhaps
because it performs a very expensive operation on each byte of data that it reads, then the
advertised window grows smaller with every segment that arrives, until it eventually goes to 0.

TCP on the send side must then adhere to the advertised window it gets from the receiver.
This means that at any given time, it must ensure that

LastByteSent−LastByteAcked ≤ AdvertisedWindow

Said another way, the sender computes an effective window that limits how much data it can
send:
EffectiveWindow = AdvertisedWindow−(LastByteSent−LastByteAcked)
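The two window formulas above translate directly into code. A minimal sketch (byte counts as plain integers, wraparound ignored):

```python
def advertised_window(max_rcv_buffer, next_byte_expected, last_byte_read):
    # AdvertisedWindow = MaxRcvBuffer - ((NextByteExpected - 1) - LastByteRead)
    return max_rcv_buffer - ((next_byte_expected - 1) - last_byte_read)

def effective_window(advertised, last_byte_sent, last_byte_acked):
    # EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
    return advertised - (last_byte_sent - last_byte_acked)
```

For example, with a 100-byte receive buffer, NextByteExpected = 21, and LastByteRead = 10, the receiver advertises 90 bytes; a sender with 20 unacknowledged bytes in flight may then send 70 more.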

All the while this is going on, the send side must also make sure that the local application
process does not overflow the send buffer—that is, that

LastByteWritten−LastByteAcked ≤ MaxSendBuffer

If the sending process tries to write y bytes to TCP, but

(LastByteWritten−LastByteAcked)+y > MaxSendBuffer

then TCP blocks the sending process and does not allow it to generate more data.
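This blocking rule can be sketched as a simple predicate (the function name is ours, for illustration):

```python
def write_allowed(last_byte_written, last_byte_acked, y, max_send_buffer):
    # The application may write y more bytes only if
    # (LastByteWritten - LastByteAcked) + y <= MaxSendBuffer;
    # otherwise TCP blocks the sending process.
    return (last_byte_written - last_byte_acked) + y <= max_send_buffer
```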

5. Discuss TCP adaptive retransmission

TCP ADAPTIVE RETRANSMISSION

TCP copes with the loss of packets using a technique called retransmission. When TCP
data arrives, an acknowledgement is sent back to the sender. Whenever a TCP segment is
transmitted, a copy of it is also placed on the retransmission queue and a timer is started;
the timer starts from a particular value and counts down to zero. If the timer expires
before an acknowledgement arrives, TCP retransmits the data.

Three algorithms of adaptive retransmission:

 Simple algorithm (Original algorithm)
 Karn/Partridge algorithm
 Jacobson/Karels algorithm

Original Algorithm
We begin with a simple algorithm for computing a timeout value between a pair of hosts.
This is the algorithm that was originally described in the TCP specification—and the following
description presents it in those terms—but it could be used by any end-to-end protocol.

The idea is to keep a running average of the RTT and then to compute the timeout as a
function of this RTT. Specifically, every time TCP sends a data segment, it records the time.
When an ACK for that segment arrives, TCP reads the time again, and then takes the difference
between these two times as a SampleRTT. TCP then computes an EstimatedRTT as a weighted
average between the previous estimate and this new sample. That is,

EstimatedRTT = α × EstimatedRTT + (1 − α) × SampleRTT

The parameter α is selected to smooth the EstimatedRTT. A small α tracks changes in the
RTT but is perhaps too heavily influenced by temporary fluctuations. On the other hand, a large α
is more stable but perhaps not quick enough to adapt to real changes. The original TCP
specification recommended a setting of α between 0.8 and 0.9. TCP then uses EstimatedRTT to
compute the timeout in a rather conservative way:

TimeOut = 2 × EstimatedRTT
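One update step of this original algorithm can be sketched as follows; alpha = 0.875 is just a convenient choice within the recommended 0.8–0.9 range:

```python
def update_estimate(estimated_rtt, sample_rtt, alpha=0.875):
    """One step of the original algorithm: exponentially weighted
    moving average of the RTT, then a conservative 2x timeout."""
    estimated_rtt = alpha * estimated_rtt + (1 - alpha) * sample_rtt
    timeout = 2 * estimated_rtt
    return estimated_rtt, timeout
```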

Karn/Partridge Algorithm

After several years of use on the Internet, a rather obvious flaw was discovered in this
simple algorithm. The problem was that an ACK does not really acknowledge a transmission; it
actually acknowledges the receipt of data. In other words, whenever a segment is retransmitted
and then an ACK arrives at the sender, it is impossible to determine if this ACK should be
associated with the first or the second transmission of the segment for the purpose of measuring
the sample RTT.
It is necessary to know which transmission to associate it with so as to compute an
accurate SampleRTT. As illustrated in Figure 5.10, if you assume that the ACK is for the
original transmission but it was really for the second, then the SampleRTT is too large (a); if you
assume that the ACK is for the second transmission but it was actually for the first, then the
SampleRTT is too small (b).

The solution, which was proposed in 1987, is surprisingly simple. Whenever TCP retransmits a
segment, it stops taking samples of the RTT; it only measures SampleRTT for segments that
have been sent only once. This solution is known as the Karn/Partridge algorithm, after its
inventors.

Their proposed fix also includes a second small change to TCP’s timeout mechanism.
Each time TCP retransmits, it sets the next timeout to be twice the last timeout, rather than
basing it on the last EstimatedRTT. That is, Karn and Partridge proposed that TCP use
exponential backoff, similar to what the Ethernet does. The motivation for using exponential
backoff is simple: Congestion is the most likely cause of lost segments, meaning that the TCP
source should not react too aggressively to a timeout. In fact, the more times the connection
times out, the more cautious the source should become
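The exponential backoff part is a one-liner; sketched here without any upper bound, though real implementations typically cap the timeout:

```python
def backoff(timeout):
    # Karn/Partridge: on each retransmission, double the timeout rather
    # than recomputing it from EstimatedRTT (exponential backoff).
    return 2 * timeout
```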

Jacobson/Karels Algorithm

The Karn/Partridge algorithm was introduced at a time when the Internet was suffering
from high levels of network congestion. Their approach was designed to fix some of the causes
of that congestion, but, although it was an improvement, the congestion was not eliminated. The
following year (1988), two other researchers—Jacobson and Karels—proposed a more drastic
change to TCP to battle congestion. The bulk of that proposed change is described in Chapter 6.
Here, we focus on the aspect of that proposal that is related to deciding when to time out and
retransmit a segment.

As an aside, it should be clear how the timeout mechanism is related to congestion—if
you time out too soon, you may unnecessarily retransmit a segment, which only adds to the load
on the network. As we will see in Chapter 6, the other reason for needing an accurate timeout
value is that a timeout is taken to imply congestion, which triggers a congestion-control
mechanism. Finally, note that there is nothing about the Jacobson/Karels timeout computation
that is specific to TCP. It could be used by any end-to-end protocol.

The main problem with the original computation is that it does not take the variance of
the sample RTTs into account. Intuitively, if the variation among samples is small, then the
EstimatedRTT can be better trusted and there is no reason for multiplying this estimate by 2 to
compute the timeout. On the other hand, a large variance in the samples suggests that the timeout
value should not be too tightly coupled to the EstimatedRTT.

In the new approach, the sender measures a new SampleRTT as before. It then folds this new
sample into the timeout calculation as follows:

Difference = SampleRTT − EstimatedRTT
EstimatedRTT = EstimatedRTT + (δ × Difference)
Deviation = Deviation + δ(|Difference| − Deviation)

where δ is a fraction between 0 and 1. That is, we calculate both the mean RTT and the variation
in that mean.

TCP then computes the timeout value as a function of both EstimatedRTT and
Deviation as follows:

TimeOut = μ × EstimatedRTT + φ × Deviation

where, based on experience, μ is typically set to 1 and φ is set to 4. Thus, when the variance is
small, TimeOut is close to EstimatedRTT; a large variance causes the Deviation term to dominate
the calculation.
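One update step of the Jacobson/Karels computation, as a sketch; delta = 1/8 is a common choice, and mu = 1 and phi = 4 as stated above:

```python
def jacobson_karels(estimated_rtt, deviation, sample_rtt,
                    delta=0.125, mu=1, phi=4):
    """Fold one new RTT sample into the mean and mean deviation,
    then compute the timeout from both."""
    difference = sample_rtt - estimated_rtt
    estimated_rtt = estimated_rtt + delta * difference
    deviation = deviation + delta * (abs(difference) - deviation)
    timeout = mu * estimated_rtt + phi * deviation
    return estimated_rtt, deviation, timeout
```

Note how a stream of identical samples drives Deviation to 0, so TimeOut collapses toward EstimatedRTT, while a noisy stream inflates the φ × Deviation term.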

6. Explain in detail about TCP congestion control mechanisms (NOV 2013, 2014, 2015, 2016)

Congestion in a network may occur if the load on the network (the number of packets sent to
the network) is greater than the capacity of the network (the number of packets a network can
handle).

 Congestion control refers to the mechanisms and techniques to control the congestion and
keep the load below the capacity that can either prevent congestion before it happens or
remove congestion, after it has happened.
 There are two categories of congestion control
– Open-loop congestion control (prevention): are applied to prevent congestion
before it happens. In this, congestion control is handled by either the source or
the destination.
– Closed-loop congestion control (removal): try to remove congestion after it
happens.

TCP CONGESTION CONTROL

Congestion occurs when too many sources send too much data too fast for the network to
handle. TCP uses congestion control to avoid or remove congestion in the network.

Factors of congestion:

 Two senders, two receivers


 One router, infinite buffers
 No retransmission
 One router, finite buffers, reliable data transfer
 Sender retransmission of lost packet

Congestion Window
The sender’s window size is determined by the receiver and also by congestion in the network.
The sender has two pieces of information:
i) The receiver-advertised window size (rwnd)
ii) The congestion window size (cwnd)

The actual size of the window is the minimum of these two:


Max window = min (Congestion window, advertised window)
Effective window = Max window – (LastByteSend – LastByteAcked)
LastByteSend – LastByteAcked <= CongWin

Congestion Policy:
TCP congestion handling is based on three phases:
i) Slow start (Exponential Increase )
ii) Additive Increase / Multiplicative Decrease
iii) Fast Retransmit and Fast Recovery

i) Slow Start

In this, the sender starts with a very slow rate of transmission but increases the rate rapidly to
reach a threshold.

Slow start adds another window to the sender's TCP: the congestion window, called "cwnd".
When a new connection is established with a host on another network, the congestion window is
initialized to one segment. Each time an ACK is received, the congestion window is increased
by one segment. The sender can transmit up to the minimum of the congestion window and the
advertised window. The congestion window is flow control imposed by the sender, while the
advertised window is flow control imposed by the receiver. The former is based on the sender's
assessment of perceived network congestion; the latter is related to the amount of available
buffer space at the receiver for this connection.

The sender starts by transmitting one segment and waiting for its ACK. When that ACK is
received, the congestion window is incremented from one to two, and two segments can be sent.
When each of those two segments is acknowledged, the congestion window is increased to four.
This provides an exponential growth, although it is not exactly exponential because the receiver
may delay its ACKs, typically sending one ACK for every two segments that it receives.

At some point the capacity of the internet can be reached, and an intermediate router will start
discarding packets. This tells the sender that its congestion window has gotten too large.

Early implementations performed slow start only if the other end was on a different network.
Current implementations always perform slow start.

 The source starts with cwnd = 1.

 Every time an ACK arrives, cwnd is incremented.


 Two slow start situations:
– At the very beginning of a connection {cold start}.

– When the connection goes dead while waiting for a timeout to occur (i.e., the advertised
window goes to zero).

 However, in the second case the source has more information. The current value of cwnd
can be saved as a congestion threshold.
 This is also known as the “slow start threshold” ssthresh.

When the size of the window in bytes reaches this threshold, slow start stops and the next phase
starts.
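The per-RTT doubling up to the threshold can be sketched as follows (cwnd counted in segments, delayed ACKs ignored):

```python
def slow_start(cwnd, ssthresh, rtts):
    """cwnd doubles each RTT (one increment per ACK received)
    until it reaches ssthresh, where slow start ends."""
    for _ in range(rtts):
        if cwnd >= ssthresh:
            break
        cwnd = min(2 * cwnd, ssthresh)
    return cwnd
```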


ii) Additive Increase (Congestion avoidance) / Multiplicative Decrease

To avoid congestion before it happens, one must slow down the exponential growth. When the
size of the congestion window reaches the slow start threshold, the slow start phase stops and the
additive phase begins. In this phase, each time the whole window of segments is acknowledged, the
size of the congestion window is increased by 1.

After the sender has received acknowledgements for a complete window size of segments, the
size of the congestion window increases additively until congestion is detected.
The congestion window is incremented as follows each time an ACK arrives:

Increment = MSS X (MSS / congestion window)


Congestion Window += Increment

where MSS is the Maximum Segment Size.
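The per-ACK increment can be sketched directly from the formula (cwnd and MSS in bytes):

```python
def additive_increase(cwnd, mss):
    # Per-ACK increment: MSS * (MSS / cwnd).  Over a full window of ACKs
    # this grows cwnd by roughly one MSS per RTT (linear growth).
    return cwnd + mss * (mss / cwnd)
```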

Multiplicative Decrease

If congestion occurs, the congestion window size must be decreased. Retransmission can occur
in one of two cases, when a timer times out (or) when three ACKS are received. In both cases,
the size of the threshold is dropped to one-half, a multiplicative decrease.

TCP implementations have two reactions:


1. If a time-out occurs, there is a stronger possibility of congestion; a segment has probably
been dropped in the network, and there is no news about the sent segments. In this case,
TCP reacts as follows:
a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the size of one segment.
c. It starts the slow-start phase again.
2. If three ACKs are received, there is a weaker possibility of congestion; a segment may
have been dropped, but some segments after it must have arrived safely since three
ACKs are received. This is called fast retransmission and fast recovery.

In this case, TCP reacts as follows:


a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the value of the threshold.
c. It starts the congestion avoidance phase
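The two reactions can be sketched side by side (function names are ours; sizes in bytes):

```python
def on_timeout(cwnd, mss):
    # Stronger sign of congestion: halve the threshold and restart
    # slow start from one segment.
    ssthresh = cwnd / 2
    return ssthresh, mss          # new (ssthresh, cwnd)

def on_three_dup_acks(cwnd):
    # Weaker sign: halve the threshold and resume from it
    # (fast retransmission and fast recovery).
    ssthresh = cwnd / 2
    return ssthresh, ssthresh     # new (ssthresh, cwnd)
```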


iii)Fast Retransmit & Fast Recovery

Every time a data packet arrives at the receiving side, the receiver responds with an
acknowledgement. When a packet arrives out of order, TCP resends the same acknowledgement
it sent the last time. This second transmission of the same acknowledgement is called a
duplicate ACK.

When the sending side sees a duplicate ACK, it knows that the other side must have received a
packet out of order. The sender waits until it sees some no. of duplicate ACK’s and then
retransmit the missing packet. TCP waits until it has seen three duplicate ACK’s before
retransmitting the packet.

In this diagram, the destination receives packets 1 & 2, but packet 3 is lost in the network. Thus
the destination will send a duplicate ACK for packet 2 when packet 4 arrives, again when packet
5 arrives, and so on. When the sender sees the third duplicate ACK for packet 2 (sent because the
receiver had gotten packet 6), it retransmits packet 3. When the retransmitted copy of packet 3
arrives at the destination, the receiver then sends a cumulative ACK for everything up to and
including packet 6 back to the sender.
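Counting duplicate ACKs to find the retransmission point can be sketched as follows (a simplification: real TCP keeps a single duplicate counter per connection rather than scanning a history):

```python
def fast_retransmit_point(acks, dupthresh=3):
    """Return the index in the ACK stream at which the sender would
    fast-retransmit (third duplicate of the same ACK), or None."""
    counts = {}
    for i, ack in enumerate(acks):
        counts[ack] = counts.get(ack, 0) + 1
        if counts[ack] == dupthresh + 1:   # original ACK + 3 duplicates
            return i
    return None
```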

Fast Recovery

After fast retransmit sends what appears to be the missing segment, congestion avoidance, but
not slow start is performed. This is the fast recovery algorithm. It is an improvement that allows
high throughput under moderate congestion, especially for large windows.

The reason for not performing slow start in this case is that the receipt of the duplicate ACKs
tells TCP more than just a packet has been lost. Since the receiver can only generate the
duplicate ACK when another segment is received, that segment has left the network and is in the
receiver's buffer. That is, there is still data flowing between the two ends, and TCP does not want
to reduce the flow abruptly by going into slow start.

The fast retransmit and fast recovery algorithms are usually implemented together as follows.

1. When the third duplicate ACK in a row is received, set ssthresh to one-half the current
congestion window, cwnd, but no less than two segments. Retransmit the missing
segment. Set cwnd to ssthresh plus 3 times the segment size. This inflates the congestion
window by the number of segments that have left the network and which the other end
has cached.

2. Each time another duplicate ACK arrives, increment cwnd by the segment size. This
inflates the congestion window for the additional segment that has left the network.
Transmit a packet, if allowed by the new value of cwnd.

3. When the next ACK arrives that acknowledges new data, set cwnd to ssthresh (the value
set in step 1). This ACK should be the acknowledgment of the retransmission from step
1, one round-trip time after the retransmission. Additionally, this ACK should
acknowledge all the intermediate segments sent between the lost packet and the receipt of
the first duplicate ACK. This step is congestion avoidance, since TCP is down to one-half
the rate it was at when the packet was lost.

When fast retransmit detects three duplicate ACKs, start the recovery process from
congestion avoidance region and use ACKs in the pipe to pace the sending of packets.
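Steps 1 and 3 of the combined algorithm can be sketched as follows (sizes in bytes; step 2's per-duplicate inflation is omitted for brevity):

```python
def enter_fast_recovery(cwnd, seg_size):
    # Step 1: ssthresh = max(cwnd / 2, 2 segments); retransmit the missing
    # segment; inflate cwnd by the 3 segments known to have left the network.
    ssthresh = max(cwnd // 2, 2 * seg_size)
    cwnd = ssthresh + 3 * seg_size
    return ssthresh, cwnd

def on_new_ack(ssthresh):
    # Step 3: when new data is ACKed, deflate cwnd back to ssthresh
    # and continue in congestion avoidance.
    return ssthresh
```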

7. Explain in detail about congestion avoidance

1. DEC Bit, 2. RED & 3. Source based Congestion Avoidance

DECbit
It is the first mechanism.
 The idea here is to more evenly split the responsibility for congestion control between the
routers and the end nodes.

 Each router monitors the load it is experiencing and explicitly notifies the end nodes
when congestion is about to occur.

 This notification is implemented by setting a binary congestion bit in the packets that
flow through the router: hence the name DECbit.

 The destination host then copies this congestion bit into the ACK it sends back to the
source.

 Finally, the source adjusts its sending rate so as to avoid congestion.

How it is functioning:
 A single congestion bit is added to the packet header. A router sets this bit in a packet if
its average queue length is greater than or equal to 1 at the time the packet arrives.

 This average queue length is measured over a time interval that spans the last busy +
idle cycle, plus the current busy cycle. (The router is busy when it is transmitting and
idle when it is not.)

 The above figure shows the queue length at a router as a function of time. Essentially,
the router calculates the area under the curve and divides this value by the time interval to
compute the average queue length

If less than 50% of the packets had the bit set, then the source increases its congestion window
by one packet. If 50% or more of the last window’s worth of packets had the congestion bit set,
the source decreases its congestion window to 0.875 times the previous value.
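The source's DECbit policy can be sketched as follows (cwnd in packets; the function name is ours):

```python
def decbit_adjust(cwnd, fraction_set):
    # Policy from the text: additive increase if fewer than 50% of the
    # last window's packets had the congestion bit set, otherwise
    # multiplicative decrease to 0.875 of the previous value.
    if fraction_set < 0.5:
        return cwnd + 1
    return 0.875 * cwnd
```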

Random Early Detection (RED)

A second mechanism, called random early detection (RED), is similar to the DECbit scheme in
that each router is programmed to monitor its own queue length, and when it detects that
congestion is imminent (forthcoming), to notify the source to adjust its congestion window.
 RED differs from DECbit in two ways. The first is that rather than explicitly sending a
congestion notification message to the source, RED is most commonly implemented such
that it implicitly notifies the source of congestion by dropping one of its packets.
 The source is effectively notified by the subsequent timeout or duplicate ACKs. In case
you haven’t already guessed, RED is designed to be used in conjunction with TCP, which
currently detects congestion by means of timeouts.

 As the “early” part of the RED acronym suggests, the gateway drops the packet earlier
than it would have to, so as to notify the source that it should decrease its congestion
window sooner than it would normally have.
 In other words, the router drops a few packets before it has exhausted its buffer space
completely, so as to cause the source to slow down, with the hope that this will mean it
does not have to drop lots of packets later on.
 Note that RED could easily be adapted to work with an explicit feedback scheme simply
by marking a packet instead of dropping it, as discussed in the sidebar on Explicit
Congestion Notification.

Source-Based Congestion Avoidance

 A strategy for detecting the initial stages of congestion – before losses occur – from the
end hosts.

 The general idea of these techniques is to watch for some sign from the network that
some router’s queue is building up and that congestion will happen soon if nothing is
done about it.

 In a first scheme, the congestion window normally increases as in TCP, but every two
round-trip delays the algorithm checks to see if the current RTT is greater than the
average of the minimum and maximum RTTs seen so far. If it is, then the algorithm
decreases the congestion window by one-eighth.

 In a second algorithm, the decision as to whether or not to change the current window
size is based on changes to both the RTT and the window size. The window is adjusted
once every two round-trip delays based on the product

(CurrentWindow – OldWindow) X (CurrentRTT – OldRTT)

If the result is positive, the source decreases the window size by one-eighth; if the result
is negative or 0, the source increases the window by one maximum packet size.
 A third scheme: every RTT, the sender increases the window size by one packet and
compares the throughput achieved to the throughput when the window was one packet
smaller. If the difference is less than one-half the throughput achieved when only one
packet was in transit, the algorithm decreases the window by one packet. This scheme
calculates the throughput by dividing the number of bytes outstanding in the network by
the RTT.

 A fourth mechanism looks at changes in the throughput rate or, more specifically,
changes in the sending rate. It compares the measured throughput rate with an expected
throughput rate. This algorithm is called TCP Vegas.

TCP Vegas uses this idea to measure and control the amount of extra data this connection
has in transit, where by “extra data” we mean data that the source would not have transmitted
had it been trying to match exactly the available bandwidth of the network. The goal of
TCP Vegas is to maintain the “right” amount of extra data in the network.
Obviously, if a source is sending too much extra data, it will cause long delays and
possibly lead to congestion. Less obviously, if a connection is sending too little extra
data, it cannot respond rapidly enough to transient increases in the available network
bandwidth.
 TCP Vegas sets BaseRTT to the minimum of all measured round-trip times; it is
commonly the RTT of the first packet sent by the connection, before the router
queues increase due to traffic generated by this flow. If we assume that we are
not overflowing the connection, then the expected throughput is given by

ExpectedRate = CongestionWindow / BaseRTT

where CongestionWindow is the TCP congestion window, which we assume (for the purpose of
this discussion) to be equal to the number of bytes in transit.
 Second TCP Vegas calculates the current sending rate, ActualRate. This is done
by recording the sending time for a distinguished packet, recording how many
bytes are transmitted between the time that packet is sent and when its
acknowledgment is received, computing the sample RTT for the distinguished
packet when its acknowledgment arrives, and dividing the number of bytes
transmitted by the sample RTT. This calculation is done once per round-trip time.

 Third, TCP Vegas compares ActualRate to ExpectedRate and adjusts the window
accordingly. We let Diff = ExpectedRate – ActualRate. Note that Diff is positive
or 0 by definition, since ActualRate > ExpectedRate implies that we need to
change BaseRTT to the latest sampled RTT.

We also define two thresholds, α < β, roughly corresponding to having too little and too
much extra data in the network, respectively. When Diff < α, TCP Vegas increases the
congestion window linearly during the next RTT, and when Diff > β, TCP Vegas
decreases the congestion window linearly during the next RTT. TCP Vegas leaves the
congestion window unchanged when α < Diff < β.
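The per-RTT Vegas decision can be sketched as follows; the default alpha and beta values here are illustrative, and rates are in packets per second with cwnd in packets:

```python
def vegas_adjust(cwnd, base_rtt, actual_rate, alpha=1, beta=3):
    """One Vegas window decision per RTT."""
    expected_rate = cwnd / base_rtt
    diff = expected_rate - actual_rate   # >= 0 by definition
    if diff < alpha:
        return cwnd + 1                  # too little extra data: increase
    if diff > beta:
        return cwnd - 1                  # too much extra data: decrease
    return cwnd                          # in between: leave the window alone
```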

8. Discuss QoS

Quality of Service (QoS) is an internetworking issue that has been discussed more than defined.

i) Flow Characteristics

Traditionally, four types of characteristics are attributed to a flow: reliability, delay, jitter,
and bandwidth.


Reliability:

 Reliability is a characteristic that a flow needs. Lack of reliability means losing a packet or
acknowledgment, which entails retransmission. However, the sensitivity of application
programs to reliability is not the same.

 For example, it is more important that electronic mail, file transfer and internet access
have reliable transmission than telephony (or) audio conferencing.

Delay:

 Source-to-destination delay is another flow characteristic.

 Again, applications can tolerate delay in different degrees. In this case, telephony, audio
conferencing, video conferencing, and remote login need minimum delay, while delay in
file transfer or email is less important.

Jitter:

 Jitter is the variation in delay for packets belonging to the same flow.

Bandwidth:

 Different applications need different bandwidths. In video conferencing we need to send
millions of bits per second to refresh a color screen, while the total number of bits in an
email may not reach even a million.

ii) Flow Classes

Based on flow characteristics, we can classify flows into groups, with each group having similar
levels of characteristics. This categorization is not formal or universal; some protocols, such as
ATM, have defined classes.

iii) Techniques to Improve QoS

Four common techniques are:

 Scheduling

 Traffic shaping
 Admission control

 Resource reservation

9. Discuss the application requirements for QoS

Application Requirements
We can divide applications into two types: real-time and non-real-time. The latter are
sometimes called traditional data applications, since they have traditionally been the major
applications found on data networks. They include most popular applications like telnet, FTP,
email, web browsing, and so on. All of these applications can work without guarantees of timely
delivery of data. Another term for this non-real-time class of applications is elastic, since they
are able to stretch gracefully in the face of increased delay. Note that these applications can
benefit from shorter-length delays, but they do not become unusable as delays increase. Also
note that their delay requirements vary from the interactive applications like telnet to more
asynchronous ones like email, with interactive bulk transfers like FTP in the middle.

Real-Time Audio Example


As a concrete example of a real-time application, consider an audio application similar to
the one illustrated in Figure 6.20. Data is generated by collecting samples from a microphone
and digitizing them using an analog-to-digital (A/D) converter. The digital samples are placed in
packets, which are transmitted across the network and received at the other end. At the receiving
host, the data must be played back at some appropriate rate.

The operation of a playback buffer is illustrated in Figure 6.21. The left-hand diagonal line shows
packets being generated at a steady rate. The wavy line shows when the packets arrive, some
variable amount of time after they were sent, depending on what they encountered in the
network. The right-hand diagonal line shows the packets being played back at a steady rate, after
sitting in the playback buffer for some period of time. As long as the playback line is far enough
to the right in time, the variation in network delay is never noticed by the application. However,
if we move the playback line a little to the left, then some packets will begin to arrive too late to
be useful.
For our audio application, there are limits to how far we can delay playing back data. It is
hard to carry on a conversation if the time between when you speak and when your listener hears
you is more than 300 ms. Thus, what we want from the network in this case is a guarantee that
all our data will arrive within 300 ms. If data arrives early, we buffer it until its correct playback
time. If it arrives late, we have no use for it and must discard it.
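The playback-point rule above reduces to a simple check: a packet is usable only if it arrives before its scheduled playback time, i.e. its generation time plus the playback delay. The 300 ms bound comes from the text; the packet timestamps below are made-up values for illustration.

```python
PLAYBACK_DELAY = 0.300  # seconds; the 300 ms conversational bound

def classify(packets):
    """Each packet is (generation_time, arrival_time) in seconds.
    A packet can be played if it arrives no later than
    generation_time + PLAYBACK_DELAY; otherwise it is discarded."""
    played, discarded = [], []
    for gen, arr in packets:
        if arr <= gen + PLAYBACK_DELAY:
            played.append((gen, arr))     # buffer until playback time
        else:
            discarded.append((gen, arr))  # arrived past its deadline
    return played, discarded

# Hypothetical arrival pattern with variable network delay (jitter).
packets = [(0.00, 0.10), (0.02, 0.25), (0.04, 0.40)]
played, discarded = classify(packets)
print(len(played), len(discarded))  # 2 packets in time, 1 too late
```

Moving the playback line left or right in Figure 6.21 corresponds to shrinking or growing PLAYBACK_DELAY: a larger delay tolerates more jitter but hurts interactivity.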


Taxonomy of Real-Time Applications

The first characteristic by which we can categorize applications is their tolerance of loss of data,
where “loss” might occur because a packet arrived too late to be played back as well as arising
from the usual causes in the network. On the one hand, one lost audio sample can be interpolated
from the surrounding samples with relatively little effect on the perceived audio quality. It is
only as more and more samples are lost that quality declines to the point that the speech becomes
incomprehensible. On the other hand, a robot control program is likely to be an example of a
real-time application that cannot tolerate loss—losing the packet that contains the command
instructing the robot arm to stop is unacceptable. Thus, we can categorize real-time applications
as tolerant or intolerant depending on whether they can tolerate occasional loss.
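The "one lost audio sample can be interpolated" point can be shown concretely. The sketch below reconstructs a missing sample as the average of its neighbours; linear interpolation is just one simple concealment scheme among several, chosen here for illustration.

```python
def conceal(samples):
    """Replace lost samples (marked None) with the average of their
    neighbours. Tolerable for isolated audio losses; useless for a
    loss-intolerant application such as robot control."""
    out = list(samples)
    for i, s in enumerate(out):
        if s is None and 0 < i < len(out) - 1:
            out[i] = (out[i - 1] + out[i + 1]) / 2
    return out

# One lost sample in a smooth waveform is barely noticeable.
print(conceal([10, 20, None, 40, 50]))  # [10, 20, 30.0, 40, 50]
```

As more consecutive samples are lost, neighbouring values carry less information and the reconstruction degrades, which matches the text's observation that quality declines only as losses accumulate.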

A second way to characterize real-time applications is by their adaptability.

Approaches to QoS Support

Considering this rich space of application requirements, what we need is a richer service
model that meets the needs of any application. This leads us to a service model with not just one
class (best effort), but with several classes, each available to meet the needs of some set of
applications. Towards this end, we are now ready to look at some of the approaches that have
been developed to provide a range of qualities of service. These can be divided into two broad
categories:
 Fine-grained approaches, which provide QoS to individual applications or flows
 Coarse-grained approaches, which provide QoS to large classes of data or aggregated
traffic

10. What is the need for Nagle’s algorithm? How does it determine when to transmit data?

Returning to the TCP sender, if there is data to send but the window is open less than
MSS, then we may want to wait some amount of time before sending the available data, but the
question is how long? If we wait too long, then we hurt interactive applications like Telnet. If we
don’t wait long enough, then we risk sending a bunch of tiny packets and falling into the silly
window syndrome. The answer is to introduce a timer and to transmit when the timer expires.
While we could use a clock-based timer—for example, one that fires every 100 ms—
Nagle introduced an elegant self-clocking solution. The idea is that as long as TCP has any data
in flight, the sender will eventually receive an ACK. This ACK can be treated like a timer firing,
triggering the transmission of more data. Nagle’s algorithm provides a simple, unified rule for
deciding when to transmit:

When the application produces data to send


if both the available data and the window ≥ MSS
send a full segment
else
if there is unACKed data in flight
buffer the new data until an ACK arrives
else
send all the new data now

In other words, it’s always OK to send a full segment if the window allows. It’s also all right to
immediately send a small amount of data if there are currently no segments in transit, but if there
is anything in flight the sender must wait for an ACK before transmitting the next segment. Thus,
an interactive application like Telnet that continually writes one byte at a time will send data at a
rate of one segment per RTT. Some segments will contain a single byte, while others will contain
as many bytes as the user was able to type in one round-trip time. Because some applications
cannot afford such a delay for each write they make to a TCP connection, the socket interface allows
the application to turn off Nagle's algorithm by setting the TCP_NODELAY option. Setting this
option means that data is transmitted as soon as possible.

UNIVERSITY QUESTIONS

B.E/B.TECH NOVEMBER/DECEMBER 2014(2008 Regulation)

2 MARKS
1. Differentiate TCP and UDP. (Q.NO. 2)
2. What is QOS? (Q.NO.32)

16 MARKS
1. Explain the following
(i) TCP header (8) (Q.NO. 2)
(ii) Adaptive flow control (8) (Q.NO. 4)
2. How is congestion controlled? Explain in detail the TCP congestion control (16) (Q.NO.
6)

B.E/B.Tech April May 2015


2 MARKS

1. List some of the Quality of service parameters of transport layer (Q.NO. 48)
2. How does transport layer perform duplication control? (Q.NO. 49)

16 MARKS

1. Explain the various fields of TCP header and the working of TCP protocol (16) (Q.NO. 2 & 3)
2. (i) Explain the three way handshake protocol to establish the transport level connection (8)
(Q.NO. 3)
(ii) List the various congestion control mechanisms. Explain any one in detail (8) (Q.NO. 6)

B.E/B.Tech Nov-Dec 2015

2 MARKS
1. What is the difference between congestion control and flow control? ( Q.NO 41)
2. What do you mean by QoS? (Q.NO 32)

16 MARKS
1. With a neat architecture, explain TCP in detail (Q.NO 2 & 3)
2. Explain TCP congestion control methods (Q.NO 6)

B.E/B.Tech April-May 2016


2 MARKS
1. What do you mean by slow start in TCP congestion? ( Q.NO 50)
2. List the different phases used in TCP connection ( Q.NO 51)

16 MARKS
1. Define UDP. Discuss the operations of UDP. Explain UDP checksum with one example. (Q.NO 1)
2. Explain in detail the various TCP congestion control mechanisms (Q.NO 6)

B.E/B.Tech Nov-Dec 2016

2 MARKS
1. Differentiate between TCP and UDP. (Q.NO 2)

16 MARKS
1. Explain various fields of TCP header and the working of the TCP protocol (Q.NO 2)
2. How is congestion controlled? Explain in detail about congestion control techniques in the
transport layer (Q.NO 6)

UNIT 4 - IMPORTANT QUESTIONS

1. List the duties of Transport Layer (TL)


2. What is the difference between TCP & UDP?
3. What is socket? Define socket address.
4. What is congestion? How to control congestion?
5. Draw UDP header format
6. Explain the main idea of UDP? Or Simple Demultiplexer
7. Define TCP? Or Reliable byte stream
8. Define Gateway.
9. What is RED?
10. What are the three events involved in the connection?
11. Give the approaches to improve the QoS
12. Draw TCP header format
13. What is the function of BECN BIT?
14. What is the function of FECN?
15. What is meant by quality of service or QoS?
16. List the types of application requirements in QoS
17. List the approaches to QoS support
18. Define DEC bit
19. List the mechanisms used in TCP congestion avoidance
20. List the mechanisms used in TCP congestion control mechanism
21. What is the difference between congestion control and flow control?
22. What is the advantage of using UDP over TCP?
23. What is meant by PORT or MAILBOX related with UDP?
24. What are the networks related attributes?
25. List some of the Quality of service parameters of transport layer
26. How does transport layer perform duplication control?


PART- B

1. Explain in detail about TCP congestion control mechanisms


2. a) Explain in detail about congestion avoidance
b) What is the need for Nagle's algorithm? How does it determine when to transmit data?
4. Discuss the application requirements for QoS
5. a)Explain the working of USER DATAGRAM PROTOCOL (UDP) or Simple
Demultiplexer
b) Describe in detail about TCP segment (Header) format
6. a) Explain in detail about TCP connection establishment & termination
b) Discuss TCP Connection Management (State transition diagram)
7. a) Explain in detail about TCP flow control OR TCP Adaptive flow control
b) Discuss TCP adaptive retransmission

PREPARED BY: Mr. D. Ramkumar - A.P-CSE 38
