
NWT Unit 4


Unit 4

Introduction to Transport Layer & Application Layer

Relationship between Transport Layer & Network Layer

TCP (Transmission Control Protocol)


• TCP stands for Transmission Control Protocol.
• It provides full transport layer services to applications.
• It is a connection-oriented protocol, meaning a connection is established between both
ends before data transmission begins.
• To create the connection, TCP sets up a virtual circuit between sender and receiver for
the duration of the transmission.

Features of TCP protocol


• Stream data transfer: TCP transfers data as a contiguous stream of bytes. TCP itself
groups the bytes into TCP segments and passes them to the IP layer for transmission
to the destination.
• Reliability: TCP assigns a sequence number to each byte transmitted and expects a
positive acknowledgement from the receiving TCP.
• If ACK is not received within a timeout interval, then the data is retransmitted to the
destination.
The receiving TCP uses the sequence number to reassemble the segments if they
arrive out of order or to eliminate the duplicate segments.
• Flow Control: The receiving TCP sends an acknowledgement back to the sender
indicating the number of bytes it can receive without overflowing its internal buffer.
The number of bytes is reported in the ACK as the highest sequence number that
it can receive without any problem. This mechanism is also referred to as the window
mechanism.
• Multiplexing: Multiplexing is the process of accepting data from different
applications and forwarding it over the network to the corresponding applications on
different computers.
• At the receiving end, the data is forwarded to the correct application. This process is
known as demultiplexing. TCP delivers data to the correct application by
using logical channels known as ports.
• Logical Connections: The combination of sockets, sequence numbers, and window
sizes, is called a logical connection. Each connection is identified by the pair of
sockets used by sending and receiving processes.
• Full Duplex: TCP provides full-duplex service, i.e., data flows in both directions at
the same time. To achieve this, each TCP endpoint has sending and receiving buffers
so that segments can flow in both directions.
TCP is a connection-oriented protocol. Suppose process A wants to send data to and
receive data from process B.
• The following steps occur:
• A connection is established between the two TCPs.
• Data is exchanged in both directions.
• The connection is terminated.
These three steps are sketched in the example below.
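As a rough illustration of this lifecycle, the following Python sketch walks through the three steps from a client's point of view. The host name server.example.com and port 9000 are hypothetical placeholders; a peer must actually be listening there for the connection to succeed.

```python
# Minimal sketch of the TCP connection lifecycle from the client side.
# Host name and port are hypothetical; a server must already be listening there.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("server.example.com", 9000))   # 1. establish the connection (handshake)
    s.sendall(b"hello from process A")        # 2. data flows in one direction...
    reply = s.recv(4096)                      #    ...and in the other (full duplex)
    print(reply)
# 3. leaving the 'with' block closes the socket, terminating the connection
```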
TCP Header Fields

• Source port address: It is used to define the address of the application program in the
source computer. It is a 16-bit field.
• Destination port address: It is used to define the address of the application program
in the destination computer. It is a 16-bit field.
• Sequence number: A stream of data is divided into two or more TCP segments. The
32-bit sequence number field gives the position of the segment's data in the original
data stream.
• Acknowledgement number: The 32-bit acknowledgement number field acknowledges
data received from the other communicating device. If the ACK flag is set to 1, this
field specifies the sequence number that the receiver expects to receive next.
• Header Length (HLEN): It specifies the size of the TCP header in 32-bit words.
• The minimum size of the header is 5 words, and the maximum size of the header is
15 words. Therefore, the maximum size of the TCP header is 60 bytes, and the
minimum size of the TCP header is 20 bytes.
• Reserved: It is a six-bit field which is reserved for future use.
• Control bits: Each bit of a control field functions individually and independently. A
control bit defines the use of a segment or serves as a validity check for other fields.
• There are a total of six flags in the control field:
• URG: The URG field indicates that the data in a segment is urgent.
• ACK: When ACK field is set, then it validates the acknowledgement number.
• PSH: The push bit asks TCP to deliver the data to the receiving application as soon as
possible rather than waiting to fill its buffers.
• RST: The reset bit is used to reset the TCP connection when confusion occurs in the
sequence numbers.
• SYN: The SYN field is used to synchronize the sequence numbers in three types of
segments: connection request, connection confirmation ( with the ACK bit set ), and
confirmation acknowledgement.
• FIN: The FIN field is used to inform the receiving TCP module that the sender has
finished sending data. It is used in connection termination in three types of segments:
termination request, termination confirmation, and acknowledgement of termination
confirmation.
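To make the header layout above concrete, here is a small Python sketch (an illustration, not a production parser) that unpacks the fixed 20-byte part of a TCP header and extracts the fields and control bits just described. The sample bytes in the comment are made up.

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte part of a TCP header (fields described above)."""
    src_port, dst_port, seq, ack, offset_flags, window, checksum, urg_ptr = \
        struct.unpack("!HHIIHHHH", segment[:20])
    hlen_words = offset_flags >> 12          # HLEN: header length in 32-bit words
    flags = offset_flags & 0x3F              # URG, ACK, PSH, RST, SYN, FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_bytes": hlen_words * 4,      # between 20 and 60 bytes
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window,
    }

# Example with made-up bytes for a SYN segment (src 12345, dst 80, HLEN 5):
# parse_tcp_header(bytes.fromhex("303900500000000000000000" "5002ffff00000000"))
```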
Differences between TCP & UDP

• TCP is connection-oriented; UDP is connectionless.
• TCP provides reliable delivery using acknowledgements and retransmission; UDP
provides unreliable, best-effort delivery.
• TCP delivers an ordered byte stream; UDP delivers independent datagrams that may
arrive out of order or be lost.
• TCP performs flow control and congestion control; UDP performs neither.
• The TCP header is 20–60 bytes; the UDP header is a fixed 8 bytes, so UDP has lower
overhead.
• TCP is used by applications such as email, file transfer, and web browsing; UDP is
used by delay-sensitive applications such as video streaming, VoIP, and DNS queries.

Introduction to the Transport Layer


The transport layer is part of the OSI model and provides crucial services. Furthermore, it
provides end-to-end communication services for applications running on different hosts.
Additionally, its principal function is to guarantee the integrity of information sent
between programs hosted on various machines, with features such as error
detection, flow control, and congestion control.
Moreover, one of the crucial responsibilities of this layer is multiplexing and
demultiplexing. Multiplexing involves combining multiple data streams into a single
transmission channel. On the other hand, demultiplexing involves separating a single
transmission channel into multiple data streams at the receiving end.
Furthermore, Transmission Control Protocol (TCP) and User Datagram Protocol
(UDP) are two of the most popular examples of transport layer protocols. TCP provides
reliable, connection-oriented communication. In contrast to TCP, UDP
is connectionless and provides unreliable, best-effort delivery of data. Generally,
applications such as email, file transfer, and web browsing typically use TCP for reliable
data transfer. Additionally, applications prioritizing speed and low latency, such as video
streaming, utilize UDP.
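The contrast between the two protocols shows up directly in the socket API. The sketch below is purely illustrative; the host names and ports are hypothetical placeholders.

```python
import socket

# UDP: connectionless, best-effort; each sendto() is an independent datagram.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"frame-0042", ("media.example.com", 5004))   # no handshake, no delivery guarantee
udp.close()

# TCP: connection-oriented; data is a reliable, ordered byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("mail.example.com", 25))                     # handshake establishes the connection
tcp.sendall(b"HELO client.example.com\r\n")               # bytes arrive in order, or are retransmitted
tcp.close()
```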
Multiplexing
Introduction
We use the multiplexing technique in telecommunications and networking. Additionally,
it combines multiple data streams into a single channel. Furthermore, it allows multiple
signals to share a single transmission medium.
Specifically, the primary purpose of multiplexing is to increase the capacity of the
transmission medium, which could be a physical wire, a fiber optic cable, or even a wireless
channel. Instead of dedicating separate channels to each signal, multiplexing allows several
signals to share a single channel. In this way, it makes more efficient use of the available
bandwidth.
Furthermore, we can use multiplexing in various communication systems, including
telephone networks, cable TV, and satellite communications. Moreover, it’s an essential
technique for improving the efficiency of communication channels.

Types of Multiplexing

There are several types of multiplexing used in communication systems. Let's discuss some
of them.
Time-division multiplexing (TDM) is a popular variant of multiplexing. In TDM, we
first divide the available bandwidth of the communication channel into time slots.
Furthermore, we assign each input signal to a specific time slot. The input signals
are then transmitted sequentially, one after the other, in their assigned time slots.
Another widely used variant is frequency-division multiplexing (FDM). In FDM, we divide
the available bandwidth of the communication channel into multiple frequency bands.
Additionally, we assign each input signal to a specific frequency band. The input
signals are then transmitted simultaneously, each in their assigned frequency band.
Wavelength-division multiplexing (WDM) is another technique that is similar to FDM
but used in optical communication systems. In WDM, in order to allocate each input
signal to a wavelength band, we split the possible bandwidth of an optical fiber into a
number of wavelength bands.

Overall, we use these techniques to improve the effectiveness and scalability of
communication networks by enabling the transmission of numerous signals over a single
channel.
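As a toy illustration of the TDM idea (not a real link-layer implementation), the sketch below interleaves fixed-size slots from several byte streams into frames and then splits them back apart at the receiving end. The slot size and the input streams are made-up examples.

```python
# Toy time-division multiplexing: each input stream gets one fixed-size slot per frame.
SLOT = 4  # bytes per time slot (illustrative)

def tdm_multiplex(streams):
    """Interleave equal-length byte streams into a sequence of frames."""
    frames = []
    for i in range(0, len(streams[0]), SLOT):
        frame = b"".join(s[i:i + SLOT] for s in streams)   # one slot per stream, in order
        frames.append(frame)
    return frames

def tdm_demultiplex(frames, n_streams):
    """Reverse the interleaving at the receiving end."""
    outputs = [b""] * n_streams
    for frame in frames:
        for k in range(n_streams):
            outputs[k] += frame[k * SLOT:(k + 1) * SLOT]
    return outputs

streams = [b"AAAAAAAAAAAA", b"BBBBBBBBBBBB", b"CCCCCCCCCCCC"]
frames = tdm_multiplex(streams)
assert tdm_demultiplex(frames, 3) == streams   # round trip recovers the original streams
```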
Advantages and Disadvantages
Let’s see some advantages and disadvantages of multiplexing:

Advantages:
• Allows multiple data streams to be transmitted over a single channel
• Reduces the cost of transmitting data
• Reduces the amount of time required to transmit data
• Allows more data to be transmitted over a given bandwidth

Disadvantages:
• Can be complex; requires specialized equipment and expertise to implement
• Can limit the flexibility of a system, as all data streams must be compatible with the
same channel
• If the multiplexing system fails, all data streams transmitted over the channel will be
affected
• Can increase latency, as data streams may have to wait to be transmitted over the
channel

Demultiplexing
Introduction

Demultiplexing is the process of separating and directing individual data streams
combined for transmission over a shared communication channel or medium. In other
words, demultiplexing is the reverse process of multiplexing.
In communication, we use multiplexing to increase data transmission efficiency over a
shared medium by combining multiple signals or data streams into a single stream.
Conversely, we use demultiplexing to separate the combined signals into their individual
data streams at the receiving end.
Additionally, several methods exist for carrying out demultiplexing, each suited to a certain
set of circumstances involving the data and its transport medium. Specifically, we can use
specialized hardware, such as a demultiplexer IC, to separate the input stream into multiple
output streams. Additionally, we can utilize software devices to analyze the input data and
determine which output channel it should be routed to.
In summary, demultiplexing is an essential process in communication that allows for the
efficient transmission of multiple data streams over a shared medium by separating and
directing them to their intended destinations.
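A minimal software sketch of this idea, in the transport-layer sense, is shown below: incoming payloads are routed to per-application queues keyed by destination port. The port numbers and the applications behind them are hypothetical.

```python
from queue import Queue

# Software demultiplexing sketch: deliver each (dst_port, payload) pair
# to the queue of the application bound to that port. Ports are illustrative.
app_queues = {
    80:  Queue(),   # web server
    53:  Queue(),   # DNS resolver
    110: Queue(),   # mail client
}

def demultiplex(dst_port: int, payload: bytes) -> None:
    q = app_queues.get(dst_port)
    if q is None:
        return            # no application bound to this port: drop the data
    q.put(payload)        # deliver to the correct application

demultiplex(53, b"dns query bytes")
demultiplex(80, b"GET / HTTP/1.1\r\n\r\n")
print(app_queues[53].get_nowait())
```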

Types: There are several types of demultiplexing, including 1-to-2 demultiplexing, 1-to-4
demultiplexing, 1-to-8 demultiplexing and 1-to-16 demultiplexing.
Generally, we use 1-to-2 demultiplexing and 1-to-4 demultiplexing when the number of
destinations is limited and the signal routing is straightforward. However, for applications
that require complex signal routing, we can utilize 1-to-8 demultiplexing and 1-to-16
demultiplexing techniques.
Furthermore, we can classify demultiplexers based on their construction, such
as transistor-transistor logic (TTL) and complementary metal-oxide-semiconductor
(CMOS) demultiplexers. Additionally, the choice of type depends on the application
requirements, such as speed, power consumption, and the number of inputs and outputs
needed.
Advantages and Disadvantages
Let’s see some advantages and disadvantages of demultiplexing:

Advantages:
• Allows data streams to be separated and sent to their respective destinations, ensuring
data isolation
• Reduces the likelihood of errors occurring during data transmission
• Allows a system to be easily scaled up or down, as new data streams can be added or
removed without affecting the existing data streams
• Allows different security protocols to be applied to different data streams, enhancing
the overall security of the system

Disadvantages:
• Can limit the scalability of a system; adding or removing data streams may require
significant changes to the system
• Equipment can be costly, particularly for systems with large numbers of data streams
• May not provide sufficient security for all data streams, as different data streams may
require different security protocols
• Provides limited scalability and can be complex to implement

Stream Control Transmission Protocol (SCTP)


• Stream Control Transmission Protocol (SCTP) is a connection-oriented network
protocol for transmitting multiple streams of data simultaneously between two
endpoints that have established a connection in a computer network.
• It is sometimes referred to as next-generation TCP; SCTP makes it easier to support
telephone conversations over the Internet.
• A telephone conversation requires transmitting voice along with other data at the
same time on both ends, and SCTP makes it easier to establish a reliable connection
for this.
• SCTP is also intended to make it easier to establish connections over wireless networks
and to manage the transmission of multimedia data.
• SCTP is a standard protocol (RFC 2960) and was developed by the Internet Engineering
Task Force (IETF).

Characteristics of SCTP

• Unicast with multiple properties – It is a point-to-point protocol which can use
different paths to reach the end host.
• Reliable transmission – It uses SACK and checksums to detect damaged,
corrupted, discarded, duplicate and reordered data. It is similar to TCP, but SCTP is
more efficient when it comes to reordering of data.
• Message oriented – Each message can be framed, so we can keep track of the order
of the data stream and its structure. In TCP, we need a separate layer of abstraction
for this.
• Multi-homing – It can establish multiple connection paths between two end points
and does not need to rely on the IP layer for resilience.
• Security – In SCTP, resource allocation for association establishment only takes place
after a cookie exchange verifies the identity of the client (INIT ACK). Man-in-the-middle
and denial-of-service attacks are less likely as a result. Furthermore, SCTP doesn't allow
half-open connections, making it more resistant to network floods and masquerade
attacks.

Advantages of SCTP
• It is a full-duplex connection, i.e., users can send and receive data simultaneously.
• It allows half-closed connections.
• Message boundaries are maintained, so the application doesn't have to split messages.
• It has properties of both the TCP and UDP protocols.
• It doesn't rely on the IP layer for resilience of paths.

Disadvantages of SCTP
• One of the key challenges is that it requires changes to the transport stack on the node.
• Applications need to be modified to use SCTP instead of TCP/UDP (see the sketch
below).
• Applications need to be modified to handle multiple simultaneous streams.
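A minimal sketch of that change is shown below. It assumes a Linux-like system whose kernel supports SCTP (socket.IPPROTO_SCTP is only defined where the platform provides it), and the peer host and port are hypothetical.

```python
import socket

# One-to-one SCTP association: the application switches from TCP to SCTP
# mostly by changing the protocol passed to socket(). Requires OS SCTP support.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect(("peer.example.com", 5060))   # hypothetical endpoint
sock.sendall(b"one application message")
sock.close()
```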

Principles of TCP congestion control


• There are a huge number of issues related to TCP congestion control.
• Congestion control and flow control are different aspects of TCP data transfer.
• Flow control refers to limiting the data sent so as not to overwhelm the capacity of the
receiving host.
• Congestion control, on the other hand, is related to changes in the intervening network
parameters, such as ambient data traffic, router capacities and physical link properties.
• Accordingly, TCP implementations handle congestion in different ways.
• What is congestion?
• When the transmission rate out of a router is lower than the incoming data rate, bits
get queued up in the router buffers.
• When the buffer gets full, all further incoming packets are dropped by the router. In
this case, no error messages are sent to the originator of the datagram.
• The sender has to recognize this drop event and retransmit the dropped packet.
• This retransmission should be at a rate that does not further overwhelm the congested
routers.
• There are many complicating factors in congestion control.
• For example, if a packet is dropped due to a checksum error, how can it not be
confused with a drop due to congestion?
• So the principal issues are:
• How to detect congestion
• How to respond to congestion
TCP congestion control
• Congestion in Network-
• Congestion is an important issue that can arise in Packet Switched Network.
• Congestion leads to the loss of packets in transit.
• So, it is necessary to control the congestion in network.
• It is not possible to completely avoid the congestion.
• Congestion Control-
• Congestion control refers to techniques and mechanisms that can-
• Either prevent congestion before it happens
• Or remove congestion after it has happened
TCP reacts to congestion by reducing the sender window size.
• The size of the sender window is determined by the following two factors-
• Receiver window size
• Congestion window size
1. Receiver Window Size-
• Sender should not send data greater than receiver window size.
• Otherwise, it leads to dropping the TCP segments which causes TCP Retransmission.
• So, sender should always send data less than or equal to receiver window size.
• Receiver dictates its window size to the sender through TCP Header.
2. Congestion Window-
• Sender should not send data greater than congestion window size.
• Otherwise, it leads to dropping the TCP segments which causes TCP Retransmission.
• So, sender should always send data less than or equal to congestion window size.
• Different variants of TCP use different approaches to calculate the size of congestion
window.
• Congestion window is known only to the sender and is not sent over the links.
• Sender window size = Minimum (Receiver window size, Congestion window
size)
• TCP Congestion Policy-
• TCP's general policy for handling congestion consists of the following three phases-
• 1. Slow Start Phase: Starts slowly; the congestion window grows exponentially until it
reaches the threshold.
• 2. Congestion Avoidance Phase: After reaching the threshold, the congestion window
grows linearly (by 1 MSS per round trip).
• 3. Congestion Detection Phase: On detecting a loss, the sender goes back to the slow
start phase or the congestion avoidance phase.
1. Slow Start Phase-
• Initially, sender sets congestion window size = Maximum Segment Size (1 MSS).
• After receiving each acknowledgment, sender increases the congestion window size
by 1 MSS.
• In this phase, the size of congestion window increases exponentially.
• The formula followed on each acknowledgement is-
• Congestion window size = Congestion window size + Maximum segment size
• After 1 round trip time, congestion window size = 2^1 = 2 MSS
• After 2 round trip times, congestion window size = 2^2 = 4 MSS
• After 3 round trip times, congestion window size = 2^3 = 8 MSS, and so on.
• This phase continues until the congestion window size reaches the slow start threshold,
where
Threshold = Maximum number of TCP segments that the receiver window can accommodate / 2
          = (Receiver window size / Maximum Segment Size) / 2
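The small sketch below just reproduces this arithmetic: it computes the threshold from an illustrative receiver window and doubles the congestion window once per round trip until the threshold is reached. The MSS and window values are made-up examples.

```python
MSS = 1000            # bytes, illustrative
rwnd = 16 * MSS       # receiver window holds 16 segments (illustrative)

threshold = (rwnd // MSS) // 2            # = 8 MSS, per the formula above
cwnd = 1                                  # slow start begins at 1 MSS
rtt = 0
while cwnd < threshold:
    cwnd *= 2                             # doubles every round trip time
    rtt += 1
print(rtt, cwnd)                          # -> 3 8 : three RTTs to reach 8 MSS
```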
2. Congestion Avoidance Phase-
• After reaching the threshold, the sender increases the congestion window size linearly
to avoid congestion.
• After each round trip time (i.e., once the whole window of segments has been
acknowledged), the sender increments the congestion window size by 1 MSS.
• The formula followed is-
• Congestion window size = Congestion window size + 1 MSS
• This phase continues until the congestion window size becomes equal to the receiver
window size.

3. Congestion Detection Phase-


• When sender detects the loss of segments, it reacts in different ways depending on
how the loss is detected-
• Case-01: Detection On Time Out-
• Time Out Timer expires before receiving the acknowledgement for a segment.
• This case suggests the stronger possibility of congestion in the network.
• There are chances that a segment has been dropped in the network.
• Reaction-
• In this case, sender reacts by-
• Setting the slow start threshold to half of the current congestion window size.
• Decreasing the congestion window size to 1 MSS.
• Resuming the slow start phase.
• Case-02: Detection On Receiving 3 Duplicate Acknowledgements-
• Sender receives 3 duplicate acknowledgements for a segment.
• This case suggests the weaker possibility of congestion in the network.
• There are chances that a segment has been dropped but few segments sent later may
have reached.
• Reaction-
• In this case, sender reacts by-
• Setting the slow start threshold to half of the current congestion window size.
• Decreasing the congestion window size to slow start threshold.
• Resuming the congestion avoidance phase.
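Putting the three phases together, the following simplified simulation (window sizes in MSS; a sketch, not a faithful TCP implementation) shows how the congestion window grows and how it reacts to the two detection cases.

```python
# Simplified simulation of the three-phase congestion policy (window sizes in MSS).
cwnd, ssthresh = 1, 8

def on_rtt():
    """One round trip in which all segments are acknowledged."""
    global cwnd
    if cwnd < ssthresh:
        cwnd *= 2            # slow start: exponential growth
    else:
        cwnd += 1            # congestion avoidance: linear growth

def on_timeout():
    """Case 1: retransmission timer expired (strong sign of congestion)."""
    global cwnd, ssthresh
    ssthresh = cwnd // 2     # threshold = half of current window
    cwnd = 1                 # back to 1 MSS, resume slow start

def on_three_dup_acks():
    """Case 2: three duplicate ACKs (weaker sign of congestion)."""
    global cwnd, ssthresh
    ssthresh = cwnd // 2
    cwnd = ssthresh          # resume congestion avoidance from the threshold

for _ in range(5):
    on_rtt()
print(cwnd)                  # 1 -> 2 -> 4 -> 8 -> 9 -> 10
on_three_dup_acks()
print(cwnd, ssthresh)        # 5 5
on_timeout()
print(cwnd, ssthresh)        # 1 2
```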

Application Layer
• The Application Layer is the topmost layer in the Open System Interconnection (OSI)
model.
• This layer provides several ways for manipulating the data which enables any type of
user to access the network with ease.
• The Application Layer interface directly interacts with the application and provides
common web application services.
• The application layer performs several kinds of functions that are required in any kind
of application or communication process.
• In this section, we will discuss various application layer protocols.
• Application layer protocols are those protocols utilized at the application layer of
the OSI (Open Systems Interconnection) and TCP/IP models.
• They facilitate communication and data sharing between software applications on
various network devices.
• These protocols define the rules and standards that allow applications to interact and
communicate quickly and effectively over a network.
1. TELNET
• Telnet stands for TELetype NETwork.
• It helps in terminal emulation.
• It allows Telnet clients to access the resources of the Telnet server.
• It is used for managing files on the Internet.
• It is used for the initial setup of devices like switches.
• The telnet command is a command that uses the Telnet protocol to communicate with
a remote device or system.
• The port number of Telnet is 23.

2. FTP
• FTP stands for File Transfer Protocol.
• It is the protocol that actually lets us transfer files.
• It can facilitate this between any two machines using it.
• But FTP is not just a protocol; it is also a program.
• FTP promotes sharing of files via remote computers with reliable and efficient data
transfer.
• The Port number for FTP is 20 for data and 21 for control.
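As a quick illustration, the sketch below uses Python's standard ftplib client, which talks to the server's control port 21 and opens data connections per transfer. The server name and file are hypothetical.

```python
from ftplib import FTP

# Control connection on port 21; data connections are opened per transfer.
ftp = FTP("ftp.example.com")          # hypothetical server
ftp.login()                           # anonymous login
ftp.retrlines("LIST")                 # list the remote directory
with open("readme.txt", "wb") as f:
    ftp.retrbinary("RETR readme.txt", f.write)   # download a file
ftp.quit()
```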

3. TFTP
• The Trivial File Transfer Protocol (TFTP) is the stripped-down, stock version of FTP,
but it’s the protocol of choice if you know exactly what you want and where to find
it.
• It’s a technology for transferring files between network devices and is a simplified
version of FTP.
• The Port number for TFTP is 69.

4. NFS
• It stands for a Network File System.
• It allows remote hosts to mount file systems over a network and interact with those
file systems as though they are mounted locally.
• This enables system administrators to consolidate resources onto centralized servers
on the network.
• The Port number for NFS is 2049.

5. SMTP
• It stands for Simple Mail Transfer Protocol.
• It is a part of the TCP/IP protocol. Using a process called “store and forward,” SMTP
moves your email on and across networks.
• It works closely with something called the Mail Transfer Agent (MTA) to send your
communication to the right computer and email inbox.
• The Port number for SMTP is 25.
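A minimal sending sketch using Python's standard smtplib is shown below; the addresses and the mail server are hypothetical, and the message is handed to the server on port 25 for store-and-forward delivery.

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"     # hypothetical addresses
msg["To"] = "bob@example.org"
msg["Subject"] = "Hello over SMTP"
msg.set_content("Delivered store-and-forward across mail servers.")

with smtplib.SMTP("mail.example.com", 25) as server:   # hypothetical MTA, port 25
    server.send_message(msg)
```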

6. SNMP
• It stands for Simple Network Management Protocol.
• It gathers data by polling the devices on the network from a management station at
fixed or random intervals, requiring them to disclose certain information.
• It is a way that servers can share information about their current state, and also a
channel through which an administrator can modify pre-defined values.
• The port numbers for SNMP are 161 (requests to agents) and 162 (traps), both typically
carried over UDP.

7. DNS
• It stands for Domain Name System.
• Every time you use a domain name, a DNS service must translate the name into the
corresponding IP address.
• For example, the domain name www.abc.com might translate to 198.105.232.4.
• The Port number for DNS is 53.
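The resolution step can be seen from any program via the operating system's resolver. The sketch below asks DNS (through the standard socket module) for the addresses behind a name; the domain is just an example.

```python
import socket

# Resolve a domain name to its IP address(es) via the system's DNS service (port 53).
print(socket.gethostbyname("www.example.com"))        # prints the resolved IPv4 address
for info in socket.getaddrinfo("www.example.com", 80, proto=socket.IPPROTO_TCP):
    print(info[4])                                    # (address, port) tuples
```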

8. DHCP
• It stands for Dynamic Host Configuration Protocol (DHCP).
• It gives IP addresses to hosts.
• There is a lot of information a DHCP server can provide to a host when the host is
registering for an IP address with the DHCP server.
• The port numbers for DHCP are 67 (server) and 68 (client).


10. POP
• POP stands for Post Office Protocol and the latest version is known as POP3 (Post
Office Protocol version 3). This is a simple protocol used by User agents for message
retrieval from mail servers.
• The POP protocol works with port number 110.
• It uses TCP for establishing connections.
• POP works in two modes: Delete mode and Keep mode.
• In Delete mode, it deletes the message from the mail server once they are downloaded
to the local system.
• In Keep mode, it doesn’t delete the message from the mail server and also facilitates
the users to access the mails later from the mail server.
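The two modes map directly onto the POP3 commands RETR and DELE. The sketch below uses Python's standard poplib client; the server, user name and password are hypothetical.

```python
import poplib

# POP3 on port 110; server and credentials are hypothetical.
mailbox = poplib.POP3("mail.example.com", 110)
mailbox.user("bob")
mailbox.pass_("secret")

count, _ = mailbox.stat()                      # number of messages, total size
for i in range(1, count + 1):
    response, lines, octets = mailbox.retr(i)  # download message i
    mailbox.dele(i)                            # Delete mode: remove it from the server
    # Keep mode: simply skip dele() so the message stays on the server.
mailbox.quit()
```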

11. MIME
• MIME stands for Multipurpose Internet Mail Extension.
• This protocol is designed to extend the capabilities of the existing Internet email
protocol like SMTP.
• MIME allows non-ASCII data to be sent via SMTP.
• It allows users to send/receive various kinds of files over the Internet like audio,
video, programs, etc.
• MIME is not a standalone protocol; it works in collaboration with other protocols to
extend their capabilities.
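A small sketch with Python's standard email.mime classes illustrates this: a plain-text part and a binary attachment are combined into one multipart message, with the binary content base64-encoded so it can travel over ASCII-only SMTP. The addresses are hypothetical.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

# Build a MIME message: plain text plus a binary attachment.
msg = MIMEMultipart()
msg["From"] = "alice@example.com"             # hypothetical addresses
msg["To"] = "bob@example.org"
msg["Subject"] = "MIME demo"

msg.attach(MIMEText("Text body, may contain non-ASCII: café", "plain", "utf-8"))
msg.attach(MIMEApplication(b"\x00\x01binary bytes", Name="data.bin"))  # base64-encoded by MIME

print(msg.as_string()[:200])   # headers show multipart/mixed and the transfer encodings
```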
