
CHAPTER 5

TRANSPORT LAYER

The transport layer is located between the application layer and the network layer. It
provides process-to-process communication between two application-layer processes, one
at the local host and the other at a remote host. Communication is provided using a
logical connection: the two processes, which may be located in different parts of the
globe, assume an imaginary direct connection through which they can send and receive
messages.

The transport layer provides logical communication between application processes running
on different hosts. The sender breaks application messages into segments and passes them
to the network layer; the receiver reassembles the segments into messages and passes
them to the application layer. Multiple transport protocols are available to
applications: TCP and UDP.

Some of the main services are:

o Process to process communication


o Addressing
o Flow control and buffering
o Multiplexing and Demultiplexing
o Congestion control

Services provided to the upper layers

 The transport layer is responsible for providing services to the application layer;
 It receives services from the network layer

Process to Process communication

 A process is an application layer entity that uses the services of the transport
layer.
 The network layer is responsible for communication at the computer level: it can
deliver a message only to the destination computer. The transport layer protocol is
then responsible for delivering the message to the appropriate process.

Figure: Process to Process Communication

Addressing (port and socket address)

 For communication, the local host, local process, remote host and remote process
need to be defined. The local host and remote host are defined using IP addresses.
 To define processes, another identifier is needed, called a port number.
Port and socket
Port numbers are 16-bit integers between 0 and 65535.
0-1023  well-known port numbers
1024-65535  ephemeral ports

 The client program defines itself with a port number chosen randomly. This
number is called an ephemeral port number (temporary port number).
 The server process must also define itself with a port number, but this port
number cannot be chosen randomly. The Internet uses universal port numbers for
servers, and these numbers are called well-known port numbers.

 Some examples are: Telnet: 23, HTTP: 80, SSL: 443, POP3: 110, SMTP: 25, etc.

A transport layer protocol in the TCP/IP suite needs both an IP address and a port
number at each end to make a connection. The combination of an IP address and a port
number is called a socket address. Hence, a socket address is formed by concatenating
the IP address and the port number. For instance, a host with IPv4 address 10.13.3.27
and Telnet port number 23 has socket address 10.13.3.27:23.
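The socket-address idea can be sketched in a few lines of Python; the helper names below (`socket_address`, `is_well_known`) are illustrative, not part of any standard API:

```python
def socket_address(ip, port):
    """Return the textual socket address, e.g. '10.13.3.27:23'."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit integers (0-65535)")
    return f"{ip}:{port}"

def is_well_known(port):
    """Well-known ports are 0-1023; higher numbers are ephemeral."""
    return 0 <= port <= 1023

print(socket_address("10.13.3.27", 23))        # the Telnet example above
print(is_well_known(23), is_well_known(50000))
```

Python's own socket module uses the same pairing: it represents an IPv4 socket address as an `(ip, port)` tuple.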

Flow control and buffering

 For flow control, a sliding window is required on each connection to keep a
fast transmitter from overwhelming a slow receiver, as in the data link layer.
 Since a host may have numerous connections, it is impractical to use the same
data link layer buffering strategy (dedicated buffers for each line).
 A suitable buffer size is negotiated during connection establishment. The value
can be dynamically adjusted later by the receiver.
 Chained fixed-size buffers
o If most of the TPDUs are nearly the same size, it is natural to organize
the buffers as a pool of identically sized buffers, with one TPDU per
buffer.
 Chained variable-sized buffers
o If there is wide variation in TPDU size, from a few characters typed to
thousands of characters, variable-sized buffers can be used, but at a
higher price.

 One large circular buffer per connection
o The optimum trade-off between source buffering and destination
buffering depends on the type of traffic carried by the connection.

Figure: (a) Chained fixed-size buffers (b) Chained variable-sized buffers
(c) One large circular buffer per connection
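The chained fixed-size strategy can be sketched as a toy `FixedBufferPool` class of our own (real transport implementations manage kernel buffers, not Python objects):

```python
from collections import deque

class FixedBufferPool:
    """Pool of identically sized buffers, one TPDU per buffer (sketch)."""

    def __init__(self, buf_size, count):
        self.buf_size = buf_size
        self.free = deque(bytearray(buf_size) for _ in range(count))

    def store(self, tpdu):
        """Copy a TPDU into a free buffer, or return None if exhausted."""
        if len(tpdu) > self.buf_size:
            raise ValueError("TPDU larger than the fixed buffer size")
        if not self.free:
            return None            # pool exhausted: caller must wait or drop
        buf = self.free.popleft()
        buf[:len(tpdu)] = tpdu
        return buf

    def release(self, buf):
        self.free.append(buf)      # return the buffer to the pool

pool = FixedBufferPool(buf_size=128, count=4)
b = pool.store(b"hello")
print(b is not None)
pool.release(b)
```

The weakness the text mentions is visible here: a 5-byte TPDU still occupies a full 128-byte buffer, which is why variable-sized chaining exists.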

Multiplexing and demultiplexing


 At the sending end, there are several processes that want to send packets, but
there is only one transport layer protocol (UDP or TCP). This is a many-to-one
relationship and hence requires multiplexing.
 The protocol accepts messages from different processes, separated from each
other by their port numbers. The transport layer then adds a header and passes
the packet to the network layer.
 At the receiving end, the relationship is one-to-many, so demultiplexing is
needed.
 The transport layer receives datagrams from the network layer, checks for
errors, drops the header to obtain the messages, and delivers them to the
appropriate process based on the port number.

Figure: Multiplexer and Demultiplexer
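The demultiplexing step can be modeled as a lookup from destination port to the bound process; a toy sketch with hypothetical names, not a real stack's API:

```python
handlers = {}                 # destination port -> receiving process callback

def bind(port, handler):
    """Register a process on a port, like a server binding a socket."""
    handlers[port] = handler

def demultiplex(dst_port, payload):
    """Deliver a segment's payload to the process bound on dst_port."""
    if dst_port in handlers:
        handlers[dst_port](payload)
        return True
    return False              # no process bound: a real stack would drop it

inbox = []
bind(80, inbox.append)        # a "web server" process listening on port 80
demultiplex(80, b"GET /")     # delivered to the port-80 handler
print(inbox)
```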

Connection Establishment

A connection is established when one transport entity sends a Connection Request TPDU
to the destination and waits for a Connection Accepted reply.

(a) Establishing a connection using a 3-way handshake (normal operation).
(b) An old duplicate Connection Request appearing out of nowhere.

Connection Release

Either of the two parties involved in the data exchange can close the connection.
There are two ways of terminating a connection: asymmetric release and symmetric
release.

Asymmetric release: In asymmetric release, when one party hangs up, the connection
is broken. It is an abrupt release and may result in loss of data.

Figure: Abrupt disconnection with loss of data

After the connection is established, host 1 sends a TPDU that arrives properly at host
2. Then host 2 sends another TPDU. Unfortunately, host 1 issues a Disconnect before
the second TPDU arrives. The result is that the connection is released and data are
lost.

Symmetric Release

Symmetric release treats the connection as two separate unidirectional connections
and requires each one to be released separately.

Figure: Connection Release

Transport Protocols

There are two distinct transport-layer protocols available to the application layer.
One of these protocols is UDP (User Datagram Protocol), which provides an unreliable,
connectionless service. The second is TCP (Transmission Control Protocol), which
provides a reliable, connection-oriented service to the invoking application.

User Datagram Protocol (UDP )

 UDP is a connectionless, unreliable transport-level service protocol. It is
primarily used for protocols that require a broadcast capability, e.g. RIP.
 It provides no packet sequencing, may lose packets, and does not check for
duplicates.
 It is used by applications that do not need a reliable transport service.
 Application data is encapsulated in a UDP header, which in turn is encapsulated
in an IP header.
 There is no need to establish a connection with a host before exchanging data
with it using UDP.
 There is no mechanism for ensuring that data sent is received.
 UDP takes messages from the application process, attaches source and destination
port number fields for the multiplexing/demultiplexing service, adds two other
small fields, and passes the resulting segment to the network layer.
 UDP adds four 16-bit header fields to whatever data is sent: a length field, a
checksum field, and source and destination port numbers.

Figure: UDP Header
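The four 16-bit header fields can be packed with Python's `struct` module. A minimal sketch of our own (the checksum is left as zero, which IPv4 permits to mean "not computed"):

```python
import struct

def build_udp_segment(src_port, dst_port, payload):
    """Pack the 8-byte UDP header (four 16-bit fields) plus payload."""
    length = 8 + len(payload)     # UDP length covers header + data
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + payload

segment = build_udp_segment(50000, 53, b"dns-query")
print(struct.unpack("!HHHH", segment[:8]))   # (50000, 53, 17, 0)
```

The `!` prefix selects network (big-endian) byte order, which all the Internet header formats in this chapter use.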

 Some of the applications are: Internet telephony, RIP, streaming multimedia,
DNS, real-time applications, etc.

Transmission Control Protocol


 TCP provides reliable transmission of data in an IP environment.
 Among the services TCP provides are stream data transfer, reliability, efficient
flow control, full-duplex operation and multiplexing.
 TCP provides stream data transfer, identified by sequence numbers.
 TCP offers reliability by providing connection-oriented, end-to-end reliable
packet delivery through an internetwork.
 TCP offers efficient flow control, which means that when sending acknowledgements
back to the source, the receiving TCP process reports the state of its internal
buffer to avoid overflow.
 Full-duplex operation means that TCP processes can both send and receive at the
same time.
 TCP’s multiplexing means that numerous simultaneous upper-layer conversations
can be multiplexed over a single connection.

TCP Header

Figure: TCP Header Format

 Source Port and Destination Port: These are 16-bit fields that define the port
number of the application program in the host that is sending the segment and the
host that is receiving the segment respectively.
 Sequence Number: This 32-bit field defines the number assigned to the first
byte of data contained in this segment.
 Acknowledgement Number: This field defines the byte number that the receiver of
the segment is expecting to receive from the other party.
 Data Offset: This 4-bit field specifies the size of the TCP header.
 Reserved: This 3-bit field is reserved for future use and should be set to zero.
 Flags or Control bits: This field defines 6 different control bits or flags.
 URG: indicates that the Urgent Pointer field is valid
 ACK: indicates that the Acknowledgment field is significant.
 PSH: Request for push
 RST: Reset the connection
 SYN: Synchronize sequence numbers.
 FIN: Terminate the connection.
 Window Size: This field defines the window size of the sending TCP in bytes.
Note that the length of this field is 16 bits, which means that the maximum size
of the window is 65535 bytes.
 Checksum: This 16-bit field contains the checksum.
 Urgent Pointer: This 16-bit field, which is valid only if the urgent flag is set, is
used when the segment contains urgent data.
 Options: There can be up to 40 bytes of optional information in the TCP header.
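The fixed 20-byte part of the header can be unpacked with Python's `struct` module; a sketch of our own, covering only the fields listed above:

```python
import struct

def parse_tcp_header(segment):
    """Unpack the 20-byte fixed part of a TCP header (sketch)."""
    (src, dst, seq, ack, off_flags, window, checksum, urgent) = \
        struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (off_flags >> 12) * 4       # offset is in 32-bit words
    flags = {name: bool(off_flags & bit) for name, bit in
             [("URG", 0x20), ("ACK", 0x10), ("PSH", 0x08),
              ("RST", 0x04), ("SYN", 0x02), ("FIN", 0x01)]}
    return {"src": src, "dst": dst, "seq": seq, "ack": ack,
            "data_offset": data_offset, "flags": flags, "window": window}

# A hand-built SYN segment: ports 50000 -> 80, seq 100, 5-word header, SYN set.
raw = struct.pack("!HHIIHHHH", 50000, 80, 100, 0, (5 << 12) | 0x02, 65535, 0, 0)
h = parse_tcp_header(raw)
print(h["flags"]["SYN"], h["data_offset"])   # True 20
```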

TCP Synchronization or 3-Way Handshaking

The connection establishment in TCP is called three-way handshaking. TCP is a
connection-oriented protocol: communicating hosts go through a synchronization process
to establish a virtual connection. This synchronization ensures that both sides are
ready for data transmission and allows the devices to determine the initial sequence
numbers.

Figure: Connection establishment using three-way handshaking
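The sequence-number exchange in the handshake can be sketched as a toy simulation (not a real TCP implementation; the initial sequence numbers are arbitrary):

```python
def three_way_handshake(client_isn, server_isn):
    """Return the three segments as (flags, seq, ack) tuples."""
    log = []
    # 1. Client -> Server: SYN carrying the client's initial sequence number.
    log.append(("SYN", client_isn, None))
    # 2. Server -> Client: SYN+ACK; acknowledges client_isn + 1.
    log.append(("SYN+ACK", server_isn, client_isn + 1))
    # 3. Client -> Server: ACK; acknowledges server_isn + 1.
    log.append(("ACK", client_isn + 1, server_isn + 1))
    return log

for segment in three_way_handshake(client_isn=100, server_isn=300):
    print(segment)
```

Note that each SYN consumes one sequence number, which is why the acknowledgements are ISN + 1 rather than ISN.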

TCP and UDP Comparison

 UDP is a simple, high-speed, low-functionality wrapper; TCP is a full-featured
protocol that allows applications to send data reliably without worrying about
network layer issues.
 UDP is connectionless: data is sent without setup. TCP is connection-oriented.
 UDP is message-based: data is sent in discrete packages by the application.
TCP is stream-based.
 UDP offers unreliable, best-effort delivery without acknowledgement. TCP offers
reliable delivery, with all data being acknowledged.
 With UDP, loss detection and retransmission are left to the application. With
TCP, lost data is retransmitted automatically.
 UDP's transmission speed is very high due to low overhead. TCP is slower than
UDP.
 UDP suits applications where timely delivery matters more than completeness.
TCP suits applications where reliability matters more than speed.
 UDP is used by multimedia applications, DNS, DHCP, TFTP, SNMP, RIP, etc. TCP is
used by FTP, TELNET, SMTP, HTTP, POP, IMAP, BGP, etc.

Congestion Control Algorithm


An important issue in a packet switching network is congestion. When too many packets
are present in a part of the subnet, the performance degrades; this situation is known
as congestion.
Congestion occurs when the number of packets sent to the network is greater than the
capacity of the network.
Congestion leads to large queue lengths, which result in buffer overflow and loss of
packets. Therefore congestion control is necessary in the network.

Congestion control techniques fall into two groups:

Open-loop solutions:
 Retransmission policy
 Window policy
 Acknowledgement policy
 Discard policy
 Admission policy

Closed-loop solutions:
 Backpressure
 Choke point
 Implicit signaling
 Explicit signaling

Open Loop Control

 It is a prevention approach
 Open loop solutions try to solve the problems by excellent design to prevent the
congestion from happening.
 It uses techniques like deciding when to accept new packets, when to discard
packets, which packets to discard, and making scheduling decisions at various
points.

Closed Loop Control

 Closed-loop congestion control uses some kind of feedback. A closed-loop
control is based on the following three steps:
o After congestion has occurred, detect and locate it by monitoring the
system.
o Transfer the congestion information to places where action can be
taken.
o Adjust the system operations to correct the congestion.

Congestion control in virtual circuits subnets

One technique that is widely used to keep congestion that has already started from getting
worse is admission control.

Admission control: This technique is used to keep congestion that has already begun
to a manageable level. Its principle is that once congestion has been detected, no
more virtual circuits are set up until the congestion is cleared.

Alternative approach: An alternative to admission control is to allow new virtual
circuits to be set up even when congestion has taken place, but to carefully route
all the new virtual circuits around the problem area.

Congestion control in Datagram subnets

Some congestion control approaches which can be used in the datagram subnets are:

Choke Packets: Send a control packet from a congested node to some or all source
nodes. This choke packet has the effect of stopping or slowing the rate of
transmission from the sources and hence limits the total number of packets in the
network.

Load shedding: For heavy congestion, the load shedding technique can be used. The
principle of load shedding states that when routers are being flooded by packets that
they cannot handle, they should simply throw packets away. There are better ways of
dropping packets, depending on the type of packet: for file transfer, an old packet
is more important than a new one, while for multimedia, a new packet is more
important than an old one.

Jitter Control: Jitter may be defined as the variation in delay for packets belonging
to the same flow. Real-time video and audio cannot tolerate jitter, so there must be
some technique to control it.

When a packet arrives at a router, the router checks whether the packet is behind or
ahead of schedule, and by how much. This information is stored in the packet and
updated at every hop. If the packet is ahead of schedule, the router holds it for a
slight time; if the packet is behind schedule, the router tries to send it out as
quickly as possible.

Traffic Shaping

Traffic shaping is the process of altering a traffic flow to avoid bursts. Traffic
shaping manages congestion by forcing the packet transmission rate to be more
predictable: it regulates the average rate of data transmission. Monitoring a traffic
flow is called traffic policing.

Two traffic shaping techniques are:

 Leaky bucket algorithm
 Token bucket algorithm

Leaky Bucket Algorithm

It is an algorithm used to control congestion in network traffic, using a technique
similar to a leaky bucket. Every host in the network has a buffer with a finite queue
length. Packets that arrive when the buffer is full are thrown away.

Implementation of leaky bucket

The implementation of leaky bucket can be discussed in two conditions.

For fixed-size packets

If the arriving packets are of fixed size, then the process shown in the figure
removes a fixed number of packets from the queue at each tick of the clock.

For variable size packet

If the arriving packets are of different sizes, then the following algorithm is used:

1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the packet size, send the packet and decrement the counter
by the packet size.
3. Repeat step 2 until n becomes smaller than the packet size.
4. Reset the counter and go back to step 1.
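The variable-size algorithm above can be sketched directly in Python (a toy model of one clock tick; real shapers run on timers inside the protocol stack):

```python
from collections import deque

def leaky_bucket_tick(queue, n):
    """One clock tick of the variable-size leaky bucket.

    Sends packets from the front of the queue while the counter is
    greater than the next packet's size, as in the steps above."""
    sent = []
    counter = n                        # step 1: initialize the counter
    while queue and counter > queue[0]:
        size = queue.popleft()         # step 2: send the packet ...
        sent.append(size)
        counter -= size                # ... and decrement the counter
    return sent                        # steps 3-4: stop, reset next tick

q = deque([200, 400, 500, 300])        # packet sizes waiting in the buffer
print(leaky_bucket_tick(q, 1000))      # [200, 400]: 500 exceeds what's left
print(list(q))                         # [500, 300] wait for the next tick
```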

Token Bucket Algorithm

A variant on the leaky bucket is the token bucket. The bucket is filled with tokens
at a certain rate, and a packet must grab and destroy a token to leave the bucket.
Packets are not lost but have to wait for an available token.

Figure: Token bucket algorithm


Implementation of token bucket

Example: a sender is trying to inject five data entities into the network with three
available credit points. After transmitting three of the five data entities in this
time tick, no more credits are available, so no more data entities are injected into
the network until new credits are accumulated with the next time tick.

The token bucket technique allows small data bursts to be sent immediately, which
typically do not congest networks. On the other hand, unlike the leaky bucket, this
algorithm does not drop any packets on the sender's side: if no further tokens are
available in the bucket, any sending attempt is blocked until a new token becomes
available.
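The credit-point example above can be sketched as a small class (our own illustrative model, not a standard API):

```python
class TokenBucket:
    """Token bucket sketch: tokens accumulate each tick; each packet
    consumes one token to be sent, otherwise it waits (never dropped)."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per clock tick
        self.capacity = capacity      # bucket size (burst limit)
        self.tokens = capacity        # start full, allowing a small burst

    def tick(self):
        """One clock tick: add tokens, capped at the bucket capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, pending):
        """Send as many pending packets as tokens allow; return the count."""
        sent = min(pending, self.tokens)
        self.tokens -= sent
        return sent

bucket = TokenBucket(rate=1, capacity=3)
print(bucket.try_send(5))   # 3 sent, 2 must wait (the example above)
bucket.tick()
print(bucket.try_send(2))   # 1 more sent after the next tick's token
```

The `capacity` cap is what bounds the burst: however long the sender stays idle, it can never accumulate more than `capacity` credits.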
