Computer Networks Unit - IV: Transport Layer
Unit – IV
Transport Layer
Introduction (Transport Layer)
A transport layer protocol provides for logical communication between
application processes running on different hosts.
Logical communication means that, from the application's viewpoint, the
communicating hosts appear to be directly connected, even though they
are not physically connected to each other.
Application processes use the logical communication provided by the
transport layer to send messages to each other.
Transport layer protocols are implemented in the end systems but not
in network routers. Network routers only act on the network-layer
fields. All transport layer protocols provide an application
multiplexing/de-multiplexing service.
The transport service is said to perform “peer to peer” communication
with the remote transport entity. The data communicated by the
transport layer is encapsulated in a transport layer PDU and sent in a
network layer SDU. The network layer nodes transfer the transport PDU
intact, without decoding or modifying the content of the PDU.
The transport layer is the fourth layer in the OSI layered architecture.
The transport layer is responsible for reliable data delivery.
Transport Layer Functions:
This layer breaks messages into packets.
It performs error recovery if the lower layers are not adequately error-free.
It performs flow control if it is not done adequately at the network layer.
It performs multiplexing and de-multiplexing of sessions.
This layer can be responsible for setting up and releasing connections
across the network.
Following parameters are used for communication:
1. Local host
2. Local process
3. Remote host
4. Remote process
Transport Services
The following categories of service are useful for describing the transport
service.
1. Type of service
2. Quality of service
3. Data transfer
4. User interface
5. Connection management
6. Expedited delivery
7. Status reporting
8. Security
User Datagram Protocol (UDP)
UDP is a simple, datagram-oriented transport layer protocol, used in
place of TCP when reliability is not required. UDP is a connectionless
protocol that provides no reliability or flow control mechanisms. It also
has no error recovery procedures.
Several application layer protocols, such as TFTP (Trivial File Transfer
Protocol) and RPC, use UDP. UDP makes use of the port concept to
direct datagrams to the proper upper-layer applications. UDP serves
as a simple application interface to IP.
The UDP datagram contains a source port number and destination port
number. Source port number identifies the port of the sending application
process. The destination port number identifies the receiving process on
the destination host machine.
UDP encapsulation
UDP header
The UDP length field is the length of the UDP header and the UDP data
in bytes.
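The four 16-bit header fields can be illustrated with a short sketch using Python's struct module; the port numbers and payload below are made-up example values, not from a real capture:

```python
import struct

def parse_udp_header(segment: bytes) -> dict:
    # The 8-byte UDP header holds four 16-bit big-endian fields:
    # source port, destination port, length (header + data), checksum.
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum}

# Illustrative datagram: 8-byte header plus 4 data bytes, so length = 12.
segment = struct.pack("!HHHH", 53, 49152, 12, 0) + b"data"
header = parse_udp_header(segment)
```

The receiving host uses the destination port field to de-multiplex the datagram to the right application process.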
The UDP checksum covers the UDP header and the UDP data. Both
UDP and TCP include a 12-byte pseudo-header with the datagram
just for the checksum computation.
The UDP checksum is an end-to-end checksum: it is calculated by the
sender and then verified by the receiver. It is designed to catch any
modification of the UDP header or data anywhere between sender and
receiver.
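The checksum computation can be sketched as follows, assuming IPv4 (the 12-byte pseudo-header is the source and destination IP addresses, a zero byte, the protocol number 17, and the UDP length); this is an illustrative sketch, not a production implementation:

```python
import socket
import struct

def ones_complement_sum(data: bytes) -> int:
    # Sum the data as 16-bit words, folding carries back in (one's complement).
    if len(data) % 2:
        data += b"\x00"              # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
    # 12-byte IPv4 pseudo-header: src IP, dst IP, zero, protocol 17, UDP length.
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 17, len(udp_segment)))
    # Checksum is the one's complement of the sum over pseudo-header plus
    # segment (with the checksum field itself set to zero while computing).
    return (~ones_complement_sum(pseudo + udp_segment)) & 0xFFFF
```

The receiver repeats the sum over the received segment, checksum field included; an undamaged segment sums to 0xFFFF.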
Port Numbers and Applications:
Remote operations with stub:
Real Time Transport Protocol:
Payload type: The payload type field is 7 bits. It indicates which
encoding algorithm has been used for the payload, which determines
how the application interprets it.
Sequence number: This 16-bit field is incremented by one each time
an RTP packet is sent. The receiver can use it to detect packet loss and
to restore packet order. The initial value is selected at random.
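Loss detection from the 16-bit sequence number can be sketched as below; the wraparound at 2^16 must be handled, and this simple version assumes packets are not reordered:

```python
def count_lost(prev_seq: int, new_seq: int) -> int:
    # Packets missing between two consecutively received RTP packets,
    # accounting for 16-bit sequence-number wraparound.
    return (new_seq - prev_seq - 1) & 0xFFFF
```

For example, receiving sequence number 5 right after 2 means packets 3 and 4 were lost; receiving 0 right after 65535 is a normal wraparound with no loss.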
RTP Control Protocol (RTCP):
Transmission Control Protocol (TCP)
TCP Services:
Although TCP and UDP use the same network layer (IP), TCP provides
a totally different service. TCP provides a connection-oriented, reliable,
byte-stream service. There are exactly two end points communicating
with each other on a TCP connection.
TCP does not support multicasting or broadcasting. The
application data is broken into what TCP considers the best-sized chunks
to send. The unit of information passed by TCP to IP is called a segment.
When TCP sends a segment it maintains a timer, waiting for the other
end to acknowledge reception of the segment. If an acknowledgement
isn't received in time, the segment is retransmitted.
When TCP receives data from the other end of the connection, it
sends an acknowledgement. TCP maintains a checksum on its header
and data.
TCP segments are transmitted as IP datagrams, and since IP
datagrams can arrive out of order, TCP segments can arrive out of order.
Since IP datagrams can get duplicated, a receiving TCP must discard
duplicate data.
TCP also provides flow control. Each end of a TCP connection has
a finite amount of buffer space. A receiving TCP only allows the other
end to send as much data as the receiver has buffers for. This prevents a
fast host from taking all the buffers on a slower host.
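The effect of the advertised window can be sketched as follows; the function is a deliberate simplification of TCP's sliding-window bookkeeping:

```python
def sendable(rwnd: int, bytes_in_flight: int) -> int:
    # A sender may have at most rwnd unacknowledged bytes outstanding,
    # where rwnd is the buffer space the receiver has advertised.
    return max(rwnd - bytes_in_flight, 0)
```

With a 4096-byte advertised window and 1024 bytes already in flight, the sender may transmit at most 3072 more bytes; as the receiver's buffer fills, rwnd shrinks and the sender is throttled.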
TCP Segment Format:
Encapsulation of TCP data
TCP header format
TCP Protocol:
TCP Connection Establishment:
TCP Connection Release:
Either of the two parties involved in exchanging data can close the
connection. When the connection in one direction is terminated, the
other party can continue sending data in the other direction.
Connection termination takes four steps, as follows:
The client TCP sends the first segment, a FIN segment.
The server TCP sends the second segment, an ACK segment, to
confirm the receipt of the FIN segment from the client.
The server TCP can continue sending data in the server-to-client direction.
When it does not have any more data to send, it sends the third
segment, a FIN segment.
The client TCP sends the fourth segment, an ACK segment, to confirm
the receipt of the FIN segment from the TCP server.
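The half-close behaviour in these steps can be observed with a local socket pair; socket.socketpair() below stands in for the two TCP endpoints, so no real network is involved:

```python
import socket

client, server = socket.socketpair()     # two connected local endpoints

client.sendall(b"request")
client.shutdown(socket.SHUT_WR)          # client's FIN: no more data this way

data = server.recv(1024)                 # server still receives the request
eof = server.recv(1024)                  # then sees end-of-stream (the FIN)

server.sendall(b"response")              # server-to-client side still open
reply = client.recv(1024)

server.close()
client.close()
```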
Four-step connection termination:
TCP Connection Management Modeling:
When the ACK arrives, a transition is made to state FIN WAIT 2 and one
direction of the connection is now closed. When the other side closes,
too, a FIN comes in, which is acknowledged. Now both sides are closed,
but TCP waits a time equal to the maximum packet lifetime to guarantee
that all packets from the connection have died off, just in case the
acknowledgement was lost. When the timer goes off, TCP deletes the
connection record.
Connection management from the server's viewpoint: the server does a
LISTEN and settles down to see who turns up. When a SYN comes in, it is
acknowledged and the server goes to the SYN RCVD state. When the
server's SYN is itself acknowledged, the three-way handshake is
complete and the server goes to the ESTABLISHED state.
Finite State Machine for TCP Connection:
SNO  TCP                                            UDP
3.   It provides error control and flow control.    It does not provide flow control and error control.
5.   TCP supports full duplex transmission.         It does not support full duplex transmission.
Congestion Control
TCP uses a form of end to end flow control. Both the sender and the
receiver agree on a common window size for packet flow. The window
size represents the number of bytes that the source can send at a time.
The window size varies according to the condition of traffic in the network
to avoid congestion.
A file of size f transferred in total time A over a TCP connection gives a
TCP transfer throughput R = f / A. The resulting bandwidth utilization is
ρ = R / B, where B = link bandwidth.
TCP has three congestion control methods:
1. Additive increase/multiplicative decrease
2. Slow start
3. Fast retransmit
Additive Increase, Multiplicative Decrease Control (AIMD):
TCP maintains a new state variable for each connection, called the
congestion window, which is used by the source to limit how much data it
is allowed to have in transit at a given time. The congestion window
represents the maximum amount of data, in bytes, that may be in transit.
AIMD performs a slow increase in the congestion window size when the
congestion in the network decreases and a fast drop in the window size
when congestion increases.
Let W be the maximum window size, in bytes, representing the maximum
amount of unacknowledged data that a sender is allowed to send.
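One round of AIMD can be sketched as below; the 1460-byte MSS and the one-adjustment-per-RTT granularity are assumptions for illustration:

```python
MSS = 1460   # assumed maximum segment size in bytes

def aimd_update(cwnd: int, loss_detected: bool) -> int:
    if loss_detected:
        return max(cwnd // 2, MSS)   # multiplicative decrease: halve the window
    return cwnd + MSS                # additive increase: one segment per RTT
```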
MaxWindow replaces the advertised window in the calculation of the
effective window.
Two important factors in setting timeouts follow.
1. The average round-trip time (RTT) and the RTT standard deviation
are used to set timeouts.
2. The RTT is sampled once per completed round trip.
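These two factors are combined in the standard retransmission-timeout estimator; the smoothing gains below are the commonly used values from RFC 6298:

```python
ALPHA, BETA = 0.125, 0.25   # smoothing gains for SRTT and RTT deviation

def update_rto(srtt: float, rttvar: float, sample: float):
    # Update the running RTT mean and deviation with a new sample,
    # then derive the retransmission timeout.
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = srtt + 4 * rttvar
    return srtt, rttvar, rto
```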
Slow Start Method:
Slow start method increases the congestion window size nonlinearly and
in most cases exponentially, as compared to the linear increase in
additive increase. In this method, the congestion window is again
interpreted in packets instead of bytes.
The slow start method is normally used:
1. Just after a TCP connection is set up.
2. When a source is blocked, waiting for a timeout.
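The exponential growth can be sketched as follows; doubling once per RTT up to a threshold (ssthresh) is an idealization of the one-increment-per-ACK behaviour:

```python
def slow_start_trace(mss: int, ssthresh: int) -> list:
    # Congestion window per RTT: starts at one MSS and doubles each
    # round trip until it reaches ssthresh.
    cwnd, trace = mss, []
    while cwnd < ssthresh:
        trace.append(cwnd)
        cwnd *= 2
    return trace
```

With mss = 1 and ssthresh = 16 the window grows 1, 2, 4, 8 before additive increase takes over.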
Congestion:
When too many packets rush to a node or a part of the network, network
performance degrades; this situation is called congestion. As the number
of packets dumped into the subnet grows, the network is no longer able to
cope and begins losing packets; at very high traffic, performance collapses
completely and almost no packets are delivered.
Congestion Control:
Congestion control is the process of keeping the number of packets in a
network below the level at which performance falls off. Congestion
control makes sure that the subnet is able to carry the offered traffic,
so it is a different process from flow control.
Resource Allocation
Techniques for Resource Allocation
Static vs. Dynamic Allocation:
Static: Fixed allocation, simple but inflexible
Dynamic: Adaptive allocation based on demand, more
efficient
Centralized vs. Distributed Allocation:
Centralized: Single control point, easier management but
single point of failure
Distributed: Multiple control points, more robust but complex
TCP Congestion Control
Primary Goals:
Avoid congestion collapse
Efficiently utilize available bandwidth
Ensure fairness among multiple TCP flows
Additional Objectives:
Minimize delay and packet loss
Maintain stability in network traffic
TCP Congestion Control Mechanisms
Key Mechanisms:
Slow Start
Congestion Avoidance
Fast Retransmit
Fast Recovery
Slow Start
Initial phase of TCP congestion control
Exponential increase in congestion window (cwnd)
Congestion Avoidance
Phase following Slow Start
Linear increase in cwnd to avoid congestion
Fast Retransmit
Quick retransmission of lost packets
Fast Recovery
Recover from packet loss without returning to Slow Start
Congestion Avoidance
Congestion Avoidance:
A congestion avoidance scheme allows a network to operate in the region of
low delay and high throughput. It is a prevention mechanism while
congestion control is a recovery mechanism.
DECbit Scheme:
RED:
RED stands for Random Early Detection. The main idea is to provide
congestion control at the router for TCP flows. RED is based on DECbit,
and was designed to work well with TCP.
RED implicitly notifies sender by dropping packets. Packet dropping
probability is increased as the average queue length increases.
The moving average of the queue length is used so as to detect long-term
congestion, yet allow short-term bursts to arrive.
Properties of RED:
Drops packets before queue is full, in the hope of reducing the rates of
some flows.
Drops packets from each flow roughly in proportion to the flow's rate.
Drops are spaced out in time.
Because it uses average queue length, RED is tolerant of bursts.
Random drops hopefully desynchronize TCP sources.
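The drop-probability rule can be sketched as below; min_th, max_th and max_p are the usual RED parameters (minimum threshold, maximum threshold, maximum drop probability), applied to the moving average of the queue length:

```python
def red_drop_prob(avg_qlen: float, min_th: float, max_th: float,
                  max_p: float) -> float:
    # Below min_th never drop; above max_th always drop; in between the
    # drop probability rises linearly with the average queue length.
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)
```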
Quality of Service (QoS)
Policing:
Policing is the regulation of the rate at which packet flow is injected
into the network.
Three important policing criteria are identified:
1. Average rate
2. Peak rate
3. Burst size
Average rate: Average rate is defined as packets per time interval. The
average rate of packets in a network can be limited as a policy. This
limits the traffic in the network for a long period of time.
Peak rate: Peak rate is defined as maximum number of packets that can
be sent over a short period of time over a network.
Burst size: Burst size is the maximum number of packets that can be
sent into the network over an extremely short interval of time.
Integrated Services:
Integrated services is a framework to provide QoS guarantees to individual
application sessions.
A call setup process involves the following steps:
1. Traffic characterization and specification of the desired QoS.
2. Signalling for call setup.
3. Per-element call admission.
The integrated services architecture defines two major classes of service:
a. Guaranteed service.
b. Controlled load service.
Traffic Shaping:
SNO  Leaky Bucket (LB)                                Token Bucket (TB)
2.   With LB, a packet can be transmitted if the      With TB, a packet can only be transmitted if there
     bucket is not full.                              are enough tokens to cover its length in bytes.
3.   LB sends the packets at an average rate.         TB allows for large bursts to be sent faster by
                                                      speeding up the output.
4.   LB does not allow saving; a constant rate is     TB allows saving up tokens (permissions) to send
     maintained.                                      large bursts.
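The token-bucket side of the comparison can be sketched as follows; the rate, capacity and byte-based accounting are illustrative parameters:

```python
def token_bucket_conforms(tokens: float, rate: float, capacity: float,
                          elapsed: float, pkt_len: int):
    # Tokens accrue at `rate` bytes/second up to `capacity` (the burst size);
    # a packet may be sent only if enough tokens cover its length in bytes.
    tokens = min(tokens + rate * elapsed, capacity)
    if pkt_len <= tokens:
        return True, tokens - pkt_len    # conforming: spend the tokens
    return False, tokens                 # non-conforming: packet must wait
```

Saved-up tokens (up to capacity) are what let the token bucket release a large burst at full speed, unlike the leaky bucket's constant rate.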
Admission Control:
Differentiated Services/QoS: