Chapter 5
TRANSPORT LAYER
The transport layer is located between the application layer and the network layer. It
provides process-to-process communication between two application-layer processes, one
on the local host and the other on the remote host. Communication takes place over a
logical connection: the two processes, which may be located in different parts of the
globe, assume an imaginary direct connection through which they can send and receive
messages.
The transport layer is responsible for providing services to the application layer;
it receives services from the network layer.
A process is an application-layer entity that uses the services of the transport
layer.
The network layer is responsible for communication at the computer level: a
network layer can deliver a message only to the destination computer. The
transport-layer protocol is responsible for delivery of the message to the
appropriate process.
For communication, the local host, local process, remote host and remote process
need to be defined. The local host and remote host are defined using IP addresses.
To define the processes, another identifier is needed, called a port number.
Port and socket
Port numbers are 16-bit integers in the range 0 to 65535.
0-1023: well-known port numbers
1024-65535: ephemeral ports
A client program identifies itself with a port number that is chosen randomly.
This number is called an ephemeral port number (temporary port number).
A server process must also identify itself with a port number, but this port
number cannot be chosen randomly. The Internet uses universal port numbers for
servers, and these numbers are called well-known port numbers.
Some examples are: Telnet: 23, HTTP: 80, SSL: 443, POP3: 110, SMTP: 25, etc.
A transport-layer protocol in the TCP/IP suite needs both an IP address and a port
number at each end to make a connection. The combination of an IP address and a port
number is called a socket address. Hence, a socket address is formed by concatenating
the IP address and the port number. For instance, a host with IPv4 address 10.13.3.27
and Telnet port number 23 has socket address 10.13.3.27:23.
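As a small sketch, Python's standard socket module represents an IPv4 socket address as exactly this (IP address, port) pair. The address below reuses the illustrative values from the text; no network connection is actually made.

```python
import socket

# A socket address is the pair (IP address, port number).
# 10.13.3.27:23 is the illustrative host/port from the text, not a real server.
addr = ("10.13.3.27", 23)

# Creating a TCP socket; calling s.connect(addr) would reach the server
# process identified by port 23 on that host.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(addr)   # ('10.13.3.27', 23)
s.close()
```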
One large circular buffer is maintained per connection.
o The optimum trade-off between source buffering and destination
buffering depends on the type of traffic carried by the connection.
Figure: Multiplexer and Demultiplexer
Connection Establishment
A connection is established when one transport entity sends a Connection Request TPDU
to the destination and waits for a Connection Accepted reply.
Connection Release
Either of the two parties involved in the data exchange can close the connection. There
are two ways of terminating a connection: asymmetric release and symmetric
release.
Asymmetric release: In asymmetric release, when one party hangs up, the connection
is broken. It is an abrupt release and may result in loss of data.
For example, after the connection is established, host 1 sends a TPDU that arrives
properly at host 2. Then host 2 sends another TPDU. Unfortunately, host 1 issues a
Disconnect before the second TPDU arrives. The result is that the connection is
released and data are lost.
Symmetric Release
In symmetric release, the connection is treated as two separate unidirectional
connections, and each direction is released independently: each party sends a
Disconnect when it has no more data to transmit, and data can continue to flow in the
other direction until that side also releases.
Transport Protocols
There are two distinct transport-layer protocols available to the application layer. One
of these protocols is UDP (User Datagram Protocol), which provides an unreliable,
connectionless service. The second protocol is TCP (Transmission Control Protocol),
which provides a reliable, connection-oriented service to the invoking application.
Some of the applications that use UDP are: Internet telephony, RIP, streaming
multimedia, DNS, real-time applications, etc.
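UDP's connectionless, unacknowledged service can be sketched with Python's socket API. The loopback address is used so the example is self-contained; binding to port 0 lets the OS pick a free port.

```python
import socket

# "Server" side: bind a UDP socket; no connection is ever established.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS choose a free port
port = server.getsockname()[1]

# "Client" side: send a datagram. UDP gives no acknowledgement, so
# sendto() returns as soon as the datagram is handed to the network.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", port))

data, peer = server.recvfrom(1024)     # receives the datagram, if it arrived
print(data)                            # b'hello'
client.close()
server.close()
```

Note that delivery is best-effort: over a real network the datagram could be lost and neither side would be told.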
TCP Header
Source Port and Destination Port: These are 16-bit fields that define the port
number of the application program in the host that is sending the segment and in the
host that is receiving the segment, respectively.
Sequence Number: This 32-bit field defines the number assigned to the first
byte of data contained in this segment.
Acknowledgement Number: This field defines the byte number that the receiver of the
segment is expecting to receive from the other party.
Data Offset: This 4-bit field specifies the size of the TCP header in 32-bit words.
Reserved: This 3-bit field is reserved for future use and should be set to zero.
Flags or Control bits: This field defines 6 different control bits or flags:
URG: indicates that the Urgent Pointer field is valid.
ACK: indicates that the Acknowledgement field is significant.
PSH: request for push.
RST: reset the connection.
SYN: synchronize sequence numbers.
FIN: terminate the connection.
Window Size: This field defines the window size of the sending TCP in bytes.
Note that the length of this field is 16 bits, which means that the maximum size of
the window is 65535 bytes.
Checksum: This 16-bit field contains the checksum.
Urgent Pointer: This 16-bit field, which is valid only if the urgent flag is set, is
used when the segment contains urgent data.
Options: There can be up to 40 bytes of optional information in the TCP header.
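The fixed 20-byte part of this header layout can be unpacked with Python's struct module. This is only a sketch of the field positions described above; the sample header below is hand-built for illustration, not captured traffic.

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte part of a TCP header (fields as in the text)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHLLHHHH", segment[:20])
    data_offset = offset_flags >> 12     # header length in 32-bit words
    flags = offset_flags & 0x3F          # low 6 bits: URG, ACK, PSH, RST, SYN, FIN
    return {"src_port": src_port, "dst_port": dst_port, "seq": seq, "ack": ack,
            "data_offset": data_offset, "flags": flags, "window": window,
            "checksum": checksum, "urgent": urgent}

# Hand-built example: source port 1025, destination port 80, SYN flag set
# (0x02), data offset 5 (20-byte header, no options), window 65535.
hdr = struct.pack("!HHLLHHHH", 1025, 80, 0, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(hdr))
```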
Figure: Connection establishment using three-way handshaking
UDP                                        TCP
Data is sent without acknowledgement.      Data sent is acknowledged by the receiver.
Congestion Control
Admission Policy
It is a prevention approach.
Open-loop solutions try to solve the problem through good design, to prevent
congestion from happening in the first place.
Open-loop control uses techniques such as deciding when to accept new packets, when
to discard packets, which packets are to be discarded, and making scheduling decisions
at various points.
Closed-loop congestion control uses some kind of feedback. A closed-loop
control is based on the following three steps:
o After congestion has occurred, detect the congestion and locate it by
monitoring the system.
o Transfer the congestion information to places where action can be
taken.
o Adjust the system operation to correct the congestion.
One technique that is widely used to keep congestion that has already started from
getting worse is admission control: once congestion has been detected, do not set up
any more virtual circuits until the congestion has cleared.
Some congestion control approaches that can be used in datagram subnets are:
Choke Packets: A control packet is sent from a congested node to some or all source
nodes. This choke packet has the effect of stopping or slowing the rate of
transmission from the sources and hence limits the total number of packets in the network.
Load Shedding: For heavy congestion, the load shedding technique can be used. The
principle of load shedding states that when routers are being flooded by packets
that they cannot handle, they should simply throw packets away. There are better and
worse ways of dropping packets, depending on the type of traffic: for file transfer,
an old packet is more important than a new one, while for multimedia, a new packet is
more important than an old one.
Jitter Control: Jitter may be defined as the variation in delay for packets belonging to
the same flow. Real-time video and audio cannot tolerate jitter, so there must be
some technique to control it.
When a packet arrives at a router, the router checks to see whether the packet is
behind or ahead of schedule, and by how much. This information is stored in the packet
and updated at every hop. If the packet is ahead of schedule, the router holds it for a
slight time; if the packet is behind schedule, the router tries to send it out as
quickly as possible.
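The hold-or-forward decision described above can be sketched as a small function. The schedule representation (one expected arrival time per packet) is an assumption made for illustration.

```python
def hold_time(expected_arrival: float, actual_arrival: float) -> float:
    """How long a router should hold a packet to smooth out jitter.

    If the packet is ahead of schedule (arrived early), hold it for the
    difference; if it is behind schedule, forward it immediately (hold 0).
    """
    return max(0.0, expected_arrival - actual_arrival)

# A packet expected at t=10.0 that arrives at t=9.5 is held for 0.5 time
# units; one that arrives late, at t=10.5, is forwarded at once.
print(hold_time(10.0, 9.5))   # 0.5
print(hold_time(10.0, 10.5))  # 0.0
```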
Traffic Shaping
Traffic shaping is the process of altering a traffic flow to avoid bursts. Traffic
shaping manages congestion by forcing the packet transmission rate to be more
predictable: it regulates the average rate of data transmission. Monitoring a traffic
flow is called traffic policing.
Leaky Bucket Algorithm
If the arriving packets are of fixed size, then the process removes a fixed
number of packets from the queue at each tick of the clock.
If the arriving packets are of different sizes, then a byte-counting variant is used:
at each tick, a counter n is initialized to a fixed value; if n is greater than the
size of the packet at the head of the queue, the packet is sent and the counter is
decremented by the packet size, and this repeats until the counter falls below the
size of the next packet.
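The byte-counting tick described above can be sketched as follows; the packet sizes and the per-tick byte budget are arbitrary illustration values.

```python
from collections import deque

def leaky_bucket_tick(queue: deque, n: int) -> list:
    """Run one clock tick of the byte-counting leaky bucket.

    `queue` holds packet sizes in bytes; at most `n` bytes leave per tick.
    Returns the list of packet sizes sent during this tick.
    """
    sent = []
    counter = n                    # counter is reset to n at every tick
    while queue and queue[0] <= counter:
        size = queue.popleft()     # send the packet at the head of the queue
        counter -= size            # and decrement the counter by its size
        sent.append(size)
    return sent

q = deque([200, 400, 500, 300])
print(leaky_bucket_tick(q, 1000))  # [200, 400] -- 500 exceeds the remaining 400
print(leaky_bucket_tick(q, 1000))  # [500, 300]
```

Note that a packet larger than the remaining counter waits for the next tick rather than being fragmented.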
A variant on the leaky bucket is the token bucket. The bucket is filled with tokens at a
certain rate, and a packet must grab and destroy a token before it can leave the bucket.
Packets are not lost; they simply wait for an available token.
Example: a sender is trying to inject five data entities into the network with three
available credit points. After transmitting three of the five data entities in this
time tick, no more credits are available, so no more data entities are injected into
the network until new credits are accumulated at the next time tick.
The token bucket technique allows small data bursts, which typically do not congest
networks, to be sent immediately. On the other hand, this algorithm does not drop any
packets on the sender's side, as the leaky bucket does: if no further tokens are
available in the bucket, any sending attempt is blocked until a new token becomes
available.
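The token bucket behaviour, including the five-entities/three-credits example above, can be sketched as a small class. The rate and capacity values are the illustration values from the text.

```python
class TokenBucket:
    """Token-bucket sketch: tokens accumulate each tick; a packet must
    consume one token per data entity before it may enter the network."""

    def __init__(self, rate: int, capacity: int):
        self.rate = rate           # tokens added per tick
        self.capacity = capacity   # maximum tokens the bucket can hold
        self.tokens = capacity     # start with a full bucket

    def tick(self) -> None:
        """Accumulate new tokens, never exceeding the bucket capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, units: int) -> bool:
        """Send `units` data entities if enough tokens exist; else block."""
        if units <= self.tokens:
            self.tokens -= units
            return True
        return False               # nothing is dropped; the sender must wait

# The example from the text: three credits available, five entities waiting.
bucket = TokenBucket(rate=3, capacity=3)
print(bucket.try_send(3))  # True  -- three entities go out this tick
print(bucket.try_send(2))  # False -- the remaining two are blocked, not dropped
bucket.tick()              # new credits arrive with the next time tick
print(bucket.try_send(2))  # True
```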