
Module IV

The document summarizes key aspects of the transport layer. It discusses: 1) The transport layer builds on the network layer by providing reliable data transfer between processes. Protocols like TCP and UDP provide this functionality. 2) Transport services can be connection-oriented or connectionless. Connection establishment is complex due to network phenomena like packet loss and delay. Protocols use techniques like sequence numbers and three-way handshakes. 3) Transport protocols address issues like explicit addressing, connection establishment complexity, packet delay/duplication, and flow control through elements like port numbers, connection release approaches, and buffer management.

Uploaded by

vinutha k
Copyright
© All Rights Reserved

MODULE -4

THE TRANSPORT
LAYER
Contents

The Transport Services
Elements of Transport Protocols
Congestion Control
The Internet Transport Protocols
1. THE TRANSPORT LAYER
Network Layer Functions:
– End-to-end packet delivery through the use of datagrams or virtual circuits.
– It ensures that data can traverse the network efficiently.
Transport Layer Building on the Network Layer:
– The transport layer (TL) builds upon the network layer's capabilities.
– It provides reliable data transfer between processes on the source and destination machines.
Protocols TCP and UDP:
– TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
– Understand their roles, their differences, and how they contribute to reliable data transport.
2. THE TRANSPORT SERVICE
1. Services Provided to the Upper Layers
2. Transport Service Primitives
3. Berkeley Sockets/Socket Programming
Services Provided to the Upper Layers


 Aims: Provide efficient, reliable and cost-effective data
transmission service to users, typically processes in the
application layer.
 Relies on the services provided by the network layer (NL) to achieve its objectives.
2. THE TRANSPORT SERVICE

 TL: The transport layer provides an efficient, reliable, and cost-effective service to users at the application layer (AL).

 Transport entity (hardware or software): may live in the OS kernel, in library packages, in a separate user process, or even on the network interface card (NIC).
2. THE TRANSPORT SERVICE

 Two types of transport service: connection-oriented and connectionless.

 A key difference is where the code runs: the transport code operates on the users' machines, while the network layer largely runs on routers.

 The network service models real networks, which can be unreliable, while the connection-oriented transport service aims to provide reliability on top of an unreliable network.

 The transport layer isolates the upper layers from the technology, design, and imperfections of the network.
2. THE TRANSPORT SERVICE

Primitives Usage Example:


Consider an application with a server and remote clients
– Server initiates by executing a LISTEN primitive, blocking until
a client arrives.
– When a client wants to communicate, it executes a CONNECT
primitive, and the transport entity sends a packet to the server
with a transport layer message in the payload.
Transport Service Primitives(TPDU)
• Segments (transport layer) are contained in packets
(network layer), which, in turn, are contained in
frames (data link layer).
2. THE TRANSPORT SERVICE
Client-Server - Connection Establishment:
• The client's CONNECT call sends a
CONNECTION REQUEST segment to the
server.
• Upon arrival, the server checks if it's blocked
on a LISTEN and, if so, unblocks itself and
sends a CONNECTION ACCEPTED segment
back to the client.
• This establishes the connection, and data
exchange can begin using SEND and
RECEIVE primitives.

Connection Release Variants:

• Disconnection has two variants: asymmetric and symmetric.
• Asymmetric: either transport user can issue a DISCONNECT primitive, releasing the whole connection at once.
• Symmetric: each direction is closed independently. DISCONNECT is initiated when one side has no more data to send but is still willing to accept data from its partner.
3. Berkeley Sockets/Socket Programming
 Introduced as part of the Berkeley UNIX 4.2BSD software distribution in
1983.
 They have gained widespread popularity and are extensively used for
Internet programming across various operating systems, particularly
UNIX-based systems.
 A socket-style API named 'winsock' exists for Windows systems.
Socket Primitives:
• Includes:
1. SOCKET: Creates a new communication endpoint.
2. BIND: Associates a local address (IP address, port) with a socket.
3. LISTEN: Announces willingness to accept connections and specifies the queue
size.
4. ACCEPT: Passively establishes an incoming connection.
5. CONNECT: Actively attempts to establish a connection.
6. SEND: Sends data over the connection.
7. RECEIVE: Receives data from the connection.
8. CLOSE: Releases the connection.
3. Berkeley Sockets/Socket Programming
Server-Side Execution:
1. Servers execute the first four primitives in
sequence.
2. SOCKET creates a new endpoint and allocates table
space for it within the transport entity.
3. BIND assigns network addresses to sockets,
allowing remote clients to connect.
4. LISTEN allocates space to queue incoming calls,
while ACCEPT blocks until an incoming
connection request arrives.
Client-Side Execution:
5. On the client side, a socket is created using the
SOCKET primitive.
6. BIND is not needed on the client side, since the client's address does not matter to the server.
7. CONNECT actively initiates the connection process
and blocks the caller until it completes.
8. Once established, both sides can use SEND and
RECEIVE to transmit and receive data over the
connection.
Connection Release:
9. Connection release with sockets is symmetric.
10. Once both sides execute a CLOSE primitive, the
connection is released.
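The primitive sequence above can be sketched with Python's socket module. This is a minimal echo round trip on localhost; the port number 54321 is an arbitrary choice for illustration.

```python
import socket
import threading

# SOCKET + BIND + LISTEN in the main thread, so the endpoint exists
# before the client calls CONNECT (port 54321 is arbitrary).
lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
lsock.bind(("127.0.0.1", 54321))
lsock.listen(1)

def serve_one():
    conn, _ = lsock.accept()            # ACCEPT blocks until a client arrives
    with conn:
        conn.sendall(conn.recv(1024))   # RECEIVE, then SEND (echo back)

t = threading.Thread(target=serve_one)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect(("127.0.0.1", 54321))     # CONNECT (no BIND needed on the client)
    c.sendall(b"hello")                 # SEND
    reply = c.recv(1024)                # RECEIVE

t.join()
lsock.close()                           # CLOSE releases the listening endpoint
print(reply)  # b'hello'
```

Note that the release here is symmetric: each side's `close()` (implicit in the `with` blocks) shuts down its own end of the connection.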
4. ELEMENTS OF TRANSPORT
PROTOCOLS
The transport service is realized by a transport protocol running between two transport entities, which manage the communication.
Transport-layer protocols must address error control (EC), flow control (FC), and sequencing.
Data link protocols run between routers connected directly by a physical channel, whereas transport protocols operate over the entire network, which has several implications:

1. Need for Explicit Addressing
2. Complexity of Connection Establishment
3. Potential for Packet Delay and Duplication
4. Buffering and Flow Control Complexity
4. ELEMENTS OF TRANSPORT PROTOCOLS

1. Need for Explicit Addressing:

 Strategies such as stable TSAP addresses, port mappers, and proxy servers manage address discovery and the complexities of connection establishment.
 Transport Service Access Points (TSAPs): represent specific endpoints in the transport layer, analogous to ports on the Internet.
 Portmapper solution: permits clients to look up the port number of any remote program supported by the server.
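The portmapper idea can be sketched as a simple name-to-TSAP registry; the service names and ports below are illustrative only (1522 matches the mail-server example in the next slide).

```python
# A toy portmapper: servers register a service name against a TSAP
# (port number); clients look up the TSAP before issuing CONNECT.
registry = {}

def register(service, port):
    registry[service] = port

def lookup(service):
    # Returns the TSAP a client should connect to, or None if unknown.
    return registry.get(service)

register("mail", 1522)          # mail server attaches to TSAP 1522
register("time-of-day", 1208)   # another illustrative service

print(lookup("mail"))  # 1522
```

A real portmapper (such as the one used with RPC) works the same way, except that the registry itself is reachable at a well-known TSAP.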
4. ELEMENTS OF TRANSPORT PROTOCOLS

 Example: A mail server process on host 2 attaches to TSAP 1522. An application process on host 1 attaches to TSAP 1208 and issues a CONNECT request to establish a connection with TSAP 1522 on host 2.
 NSAPs (Network Service Access Points) represent endpoints in the network layer; IP addresses are an example.

Figure: TCP addressing
4. ELEMENTS OF TRANSPORT PROTOCOLS

2. Complexity of Connection Establishment:


 In the transport layer, connection establishment is complex due to network phenomena such as packet loss, delay, corruption, and duplication.
 Techniques such as packet lifetime restriction and sequence number management are used to mitigate the impact of delayed duplicates.
 Addressing delayed duplicates: throwaway transport addresses, where each connection generates a new address.
 Another method assigns a unique identifier to each connection and rejects duplicates.
4. ELEMENTS OF TRANSPORT PROTOCOLS

2. Complexity of Connection Establishment:


 Managing Packet Lifetime: Techniques such as restricted network
design, hop counters, or time stamping restrict packet lifetime.
 Rejecting Delayed Duplicates: Segments carry sequence numbers that are not reused within a defined time period (T), which effectively rejects delayed duplicates. This prevents confusion at the destination by discarding delayed duplicates of old packets.
 Clock-Based Solution: Hosts equipped with time-of-day clocks as
binary counters proposed by Tomlinson for reliable sequence number
management. Clock continues running even after host crashes,
ensuring reliability.
 Buffering and Flow Control Complexity: Transport layer manages
larger and varying number of connections with fluctuating bandwidth.
Requires different approach to buffer allocation compared to fixed
allocation in data link protocols.
TCP 3-Way Handshake Process

1. Step 1 (SYN): The client wants to establish a connection with a server, so it sends a segment with the SYN (Synchronize Sequence Number) flag set, which informs the server that the client intends to start communication and with what sequence number its segments will start.
2. Step 2 (SYN + ACK): The server responds to the client's request with the SYN and ACK bits set. ACK acknowledges the segment it received; SYN announces with what sequence number the server's own segments will start.
3. Step 3 (ACK): Finally, the client acknowledges the server's response, and both sides establish a reliable connection over which the actual data transfer can begin.
4. ELEMENTS OF TRANSPORT PROTOCOLS
2. Complexity of Connection Establishment:
 Three-way handshake: proposed by Tomlinson for reliable connection establishment in the presence of delayed duplicate control segments.
1. The initiating host sends a CONNECTION REQUEST segment containing a sequence number (x) to the receiving host.
2. The receiving host responds with an ACK segment acknowledging x and announcing its own initial sequence number (y).
3. Finally, the initiating host acknowledges the receiving host's choice of initial sequence number in the first data segment it sends.
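The three steps above can be sketched as a small event trace; the sequence numbers x = 100 and y = 250 are arbitrary example values.

```python
# A minimal simulation of the three-way handshake with explicit
# initial sequence numbers x and y (values are arbitrary examples).
def three_way_handshake(x, y):
    events = []
    # 1. Host 1 sends CONNECTION REQUEST carrying its initial seq number x.
    events.append(("CR", {"seq": x}))
    # 2. Host 2 acknowledges x and announces its own initial seq number y.
    events.append(("ACK", {"seq": y, "ack": x}))
    # 3. Host 1 acknowledges y in its first data segment.
    events.append(("DATA", {"seq": x, "ack": y}))
    return events

for kind, fields in three_way_handshake(x=100, y=250):
    print(kind, fields)
```

A delayed duplicate CR would carry a stale x, so host 1 would not recognize the ACK that echoes it, and the bogus connection attempt dies.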
4. ELEMENTS OF TRANSPORT PROTOCOLS

Connection Release:
Two styles:
• Asymmetric release can lead to abrupt termination and data loss.
• Symmetric release enables independent release of each direction, avoiding data loss.

Figure: Abrupt disconnection with loss of data.
Figure: (a) Normal case of three-way handshake. (b) Final ACK lost. (c) Response lost. (d) Response lost and subsequent DRs lost.
5. Error Control and Flow Control
Error Control Mechanisms:
– Error-Detecting Code: Frame carries CRC or checksum to
check for correct reception.
– Automatic Repeat reQuest (ARQ): Frame carries sequence
number; sender retransmits until successful receipt
acknowledged.
Flow Control Mechanisms:
– Maximum Number of Outstanding Frames: Sender limits
number of unacknowledged frames; pauses if receiver doesn't
acknowledge quickly.
– Stop-and-Wait Protocol: Maximum one packet outstanding;
sender waits for acknowledgment before sending next packet.
– Sliding Window Protocol: Combines features of ARQ and
flow control; supports bidirectional data transfer.
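The stop-and-wait mechanism described above can be sketched as a toy simulation; the loss rate and random seed are arbitrary choices, and the alternating 1-bit sequence number is what lets the receiver discard duplicates caused by retransmission.

```python
import random

# Stop-and-wait ARQ over a lossy channel: the sender retransmits each
# packet until its acknowledgement comes back; packets carry an
# alternating 1-bit sequence number so duplicates are rejected.
def stop_and_wait(data, loss_rate=0.3, rng=random.Random(42)):
    received, seq = [], 0
    for item in data:
        while True:                          # retransmit until acknowledged
            if rng.random() < loss_rate:     # packet lost in transit
                continue
            # Receiver: accept only a not-yet-seen sequence number.
            if not received or received[-1][0] != seq:
                received.append((seq, item))
            if rng.random() < loss_rate:     # ACK lost: sender times out, retries
                continue
            break                            # ACK arrived: move to next packet
        seq ^= 1                             # flip the 1-bit sequence number
    return [item for _, item in received]

print(stop_and_wait(["a", "b", "c"]))  # ['a', 'b', 'c']
```

However many packets or ACKs are lost, the receiver delivers each item exactly once and in order, which is the point of ARQ.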
5. Error Control and Flow Control
Dynamic Buffer Allocation:
• Adjusting buffer allocations dynamically is crucial as traffic patterns
change and connections open and close.
• The transport protocol should allow a sending host to request buffer
space at the receiver, either per connection or collectively for all
connections between hosts.
• Alternatively, the receiver can inform the sender about reserved buffer
space.
• TCP implements dynamic buffer management, decoupling buffering
from acknowledgements and using a variable-sized window.
Example Scenario:
• Illustrates dynamic window management in a datagram network with 4-bit sequence numbers.
• The sender requests eight buffers but receives only four. It sends three segments, one of which is lost, causing a deadlock due to improper buffer allocation handling.
5. Error Control and Flow Control
Preventing Deadlock:
• To prevent deadlock, hosts should periodically send control segments with
acknowledgement and buffer status on each connection.
• This ensures that deadlock situations, caused by lost control segments, are
resolved eventually.
Changing Bottlenecks:
• While buffer space was once a bottleneck, falling memory prices have
mitigated this issue.
• Another bottleneck arises from the network's carrying capacity, limiting the
maximum flow between hosts.
• A mechanism is needed to limit transmissions based on the network's capacity
rather than the receiver's buffering capacity.
Dynamic Sliding Window:
• Belsnes (1975) proposed a sliding window flow-control scheme where the
sender adjusts the window size dynamically to match the network's carrying
capacity.
• The sender's window size should be proportional to the network's capacity and
adjusted frequently to track changes in carrying capacity, as done in TCP.
6. Multiplexing in Transport Layer

• Multiplexing is required when only one network


address is available on a host, and all transport
connections on that host must use it.
• It helps distinguish which process should receive
incoming segments when multiple transport
connections share the same network connection.
• The multiplexing can distribute traffic among multiple
network paths on a round-robin basis, increasing
effective bandwidth.
• Demultiplexing is the process of directing incoming
data packets to the appropriate application or process
based on their destination addresses or ports.
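Demultiplexing boils down to a lookup from destination port to the process bound to it. A minimal sketch, where the handler functions and segment layout are illustrative only:

```python
# Demultiplexing sketch: incoming segments are handed to the process
# registered on their destination port (handlers are illustrative).
handlers = {
    80: lambda payload: f"web server got {payload!r}",
    25: lambda payload: f"mail server got {payload!r}",
}

def demultiplex(segment):
    port, payload = segment["dst_port"], segment["payload"]
    handler = handlers.get(port)
    if handler is None:
        # A real stack would answer with an ICMP error or a TCP RST.
        return "no process bound to port %d" % port
    return handler(payload)

print(demultiplex({"dst_port": 80, "payload": "GET /"}))
```

This is exactly the web-server/email-server example in the next slide: port 80 traffic reaches the HTTP handler, port 25 traffic reaches the SMTP handler.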
6. Multiplexing/Demultiplexing in Transport Layer

• Protocols such as SCTP (Stream Control Transmission Protocol) can utilize multiple network interfaces for a single connection.
• For example, if a host is
running a web server
(listening on port 80) and an
email server (listening on
port 25), the demultiplexer
ensures that HTTP requests
destined for port 80 are
routed to the web server
application, while SMTP
emails destined for port 25
are routed to the email server
application.
7. Crash Recovery

• Recovery from crashes is a critical issue in networked


systems, especially for long-lived connections such as
large software or media downloads.
• Recovery from network or router crashes is relatively
straightforward within hosts as the transport entities expect
lost segments and handle them using retransmissions.
• Recovering from host crashes poses more challenges,
particularly when clients need to continue working even if
servers crash and reboot.
• An example scenario illustrates the difficulty: a client
sending a long file to a server using a stop-and-wait
protocol. If the server crashes midway, it loses track of its
previous state upon reboot.
7. Crash Recovery

• The server may attempt to recover by broadcasting a crash


announcement and requesting clients to inform it of their
connection status. Clients, based on their state (S1 or S0),
decide whether to retransmit the most recent segment.
• Various programming combinations for the server and client
(acknowledge first or write first, retransmit strategies) fail
to ensure proper recovery in all situations, resulting in
protocol failures despite different configurations.
• The complexity of crash recovery underscores the need for
robust protocols and careful consideration of asynchronous
events to ensure reliable communication in networked
systems.
CONGESTION CONTROL

 Congestion: an excessive flood of packets degrades performance, causing packet delays and losses.
 Congestion control requires transport protocols to regulate packet transmission rates to avoid overloading the network.

1. Desirable Bandwidth Allocation
2. Regulating the Sending Rate: AIMD (Additive Increase Multiplicative Decrease)
3. Wireless Issues in Congestion Control
1. Desirable Bandwidth Allocation
The goal of congestion control is achieving an optimal allocation of bandwidth among transport entities, which ensures:

1. Utilization of the available bandwidth without causing congestion.
2. Fairness among competing transport entities.
3. Adaptation to changes in traffic demands.

 Max-Min Fairness: dynamically allocate bandwidth to competing connections based on demand; increasing the bandwidth of one flow would decrease the bandwidth of others, ensuring a max-min fair distribution.
 In the figure, flows A, B, C, and D are allocated bandwidth such that no flow can increase its bandwidth without reducing another's.
 Three flows compete for the bottom-left link between routers R4 and R5, so each of those flows receives 1/3 of that link's capacity.
 Flow A competes with B on the link from R2 to R3. Since B has an allocation of 1/3, A receives the remaining 2/3 of the link's capacity.
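For a single bottleneck link, the max-min allocation can be computed by progressive filling: satisfy the smallest demands first and split the leftover capacity equally among the flows that are still bottlenecked. A sketch with illustrative capacities and demands:

```python
# Max-min fair allocation of one link's capacity among flows with
# given demands (progressive filling on a single bottleneck link).
def max_min_fair(capacity, demands):
    alloc = [0.0] * len(demands)
    remaining = capacity
    active = sorted(range(len(demands)), key=lambda i: demands[i])
    while active:
        share = remaining / len(active)    # equal share of what is left
        i = active[0]                      # smallest unsatisfied demand
        if demands[i] <= share:
            alloc[i] = float(demands[i])   # satisfied flow returns its excess
            remaining -= demands[i]
            active.pop(0)
        else:
            for j in active:               # everyone left is bottlenecked here
                alloc[j] = share
            break
    return alloc

print(max_min_fair(10, [2, 8, 10]))  # [2.0, 4.0, 4.0]
```

The flow demanding 2 is fully satisfied; the other two split the remaining 8 units equally, and neither could get more without taking from the other.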
Convergence in Congestion Control: Adapting over time
• Bandwidth needs fluctuate over time due to factors like
browsing web pages or downloading large videos, requiring the
network to adapt continuously.
2. Regulating the Sending Rate
• Flow control occurs when the receiver lacks sufficient buffering
capacity, while congestion arises from insufficient network
capacity as shown in figure.
Regulating the Sending Rate

1. Dual Solutions: Transport protocols need to address both flow control and congestion control, employing variable-sized window solutions for flow control and congestion control algorithms for network capacity issues.
2. Feedback Mechanisms: The method of regulating sending rates depends
on the feedback provided by the network, which can be explicit or
implicit, precise or imprecise.
a. XCP (eXplicit Congestion Protocol): Explicit and precise, where
routers inform sources of the rate at which they may send.
b. TCP with ECN (Explicit Congestion Notification): Explicit but
imprecise, using packet markings to signal congestion without
specifying how much to slow down.
c. FAST TCP: Implicit and precise, utilizing round-trip delay as a signal
to avoid congestion.
d. Compound TCP and CUBIC TCP: Implicit and imprecise, relying on
packet loss as an indication of congestion.
AIMD (Additive Increase Multiplicative Decrease):
• Hosts additively increase their rate while the network is not congested, and multiplicatively decrease it when congestion occurs.
• The congestion window (cwnd) is increased by one segment per RTT: the additive increase (AI) phase.
• The cwnd is halved on detecting a packet loss: the multiplicative decrease (MD) phase.
Regulating the Sending Rate
• Chiu and Jain constructed a graphical representation for the case of two connections competing for a single link's bandwidth, as depicted in Fig. 6-24.
Wireless Network
Wireless Network Challenges:
1. These networks often experience packet loss due to transmission errors such as attenuation.
2. Wireless LANs have frame loss rates of at least 10%.
3. High throughput requires very low levels of packet loss.
4. The sender might be unaware that the path includes a wireless
link, complicating congestion control efforts.
5. When a loss occurs, only one mechanism should take action,
either addressing a transmission error or responding to a
congestion signal, to prevent unnecessarily slow performance
over wireless links.
6. Internet paths often feature a mix of wired and wireless
segments, presenting a challenge as there's no standardized
method for the sender to detect the types of links in the path.
Wireless Network
Wireless Network Solutions:
1. The masking strategy, primarily reliant on link layer
retransmissions, generally allows most transport
protocols to function effectively over various wireless
links.
2. However, for wireless links with long round-trip
times, Forward Error Correction (FEC) may be
necessary to mitigate losses, or non-loss signals may
need to be utilized for congestion control.
Wireless Network
Wireless Network Solutions: Forward
Error Correction (FEC).
1. Original Stream: Actual information
2. Redundancy: FEC introduces redundant
information into the original data stream
before transmission.
3. Packet Loss: During transmission,
packets may be lost or corrupted due to
various factors such as noise,
interference, or network congestion. In
such cases, the redundant information
embedded by FEC allows the receiver to
recover from these errors.
4. Reconstructed Stream: By utilizing the
redundant information received along
with the original data packets, the
receiver can reconstruct the original data
stream.
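The redundancy idea in the steps above can be shown with the simplest FEC scheme: one XOR parity packet per group, which lets the receiver reconstruct any single lost packet. The packet contents are arbitrary examples.

```python
# FEC sketch: one XOR parity packet per group of equal-length packets
# lets the receiver reconstruct any single lost packet in the group.
def xor_parity(packets):
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    # XORing the surviving packets with the parity packet cancels them
    # out, leaving exactly the single missing packet.
    missing = bytearray(parity)
    for pkt in received:
        for i, b in enumerate(pkt):
            missing[i] ^= b
    return bytes(missing)

group = [b"abcd", b"efgh", b"ijkl"]       # original stream, as 3 packets
p = xor_parity(group)                     # redundancy added before sending
lost = recover([group[0], group[2]], p)   # packet 1 was lost in transit
print(lost)  # b'efgh'
```

Real FEC codes (e.g. Reed-Solomon) generalize this to recover multiple losses per group, at the cost of more redundant data.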
The Internet Transport Protocols: UDP & TCP

• Two Main Protocols: a connectionless protocol (UDP) and a


connection-oriented (TCP).

1. Connectionless Protocol (UDP): UDP (User Datagram Protocol) is a connectionless transport-layer protocol. It sends packets without establishing a connection and provides minimal services beyond basic delivery. UDP packets are called user datagrams.
2. Connection-Oriented Protocol (TCP): TCP (Transmission
Control Protocol) is a connection-oriented counterpart to UDP,
managing aspects like connection establishment, reliability
through retransmissions, and flow and congestion control for
applications.
The User Datagram Protocol (UDP)
1. Header Structure: UDP segments consist of an 8-byte header + payload.
The header contains source port, destination port, length, and optional
checksum.
2. Source port: 16 bits, used to identify the source port of the packet.
3. Dest port: 16 bits, used to identify the application-level service on the destination machine.
4. Length (16 bits): the entire length of the UDP packet, including the header. The minimum length is 8 bytes, i.e. the size of the UDP header itself.
5. Checksum: stores the checksum value generated by the sender before sending. In IPv4 it is optional.
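The four fields above can be packed with Python's struct module in network byte order; the port numbers and payload length here are arbitrary examples, and checksum 0 is the IPv4 convention for "no checksum computed".

```python
import struct

# Packing the 8-byte UDP header described above (network byte order:
# source port, destination port, length, checksum, each 16 bits).
def udp_header(src_port, dst_port, payload_len, checksum=0):
    length = 8 + payload_len   # header plus payload, in bytes
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(src_port=53124, dst_port=53, payload_len=12)
src, dst, length, csum = struct.unpack("!HHHH", hdr)
print(src, dst, length, csum)  # 53124 53 20 0
```

Parsing is the mirror image: `struct.unpack` on the first 8 bytes of the datagram recovers all four fields.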
Remote Procedure Call (RPC):
• Allows programs to call procedures located on remote hosts.
• Client-Server Model: in RPC, the calling process is the client and the called process is the server.
• Stub Procedures: the client stub represents the server procedure in the client's address space, and vice versa.

The RPC process involves several steps:


1. The client calls the client stub, which
internally organizes the parameters into a
message.
2. The message is sent from the client machine
to the server machine by the operating
system.
3. The server stub unpacks the parameters and
calls the server procedure.
4. The server procedure executes and returns
results back to the client in a similar
fashion.
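The four steps above can be sketched as a toy RPC round trip, with JSON as the marshalling format and a function call standing in for the network; all names (`client_stub`, `transport`, `add`, etc.) are illustrative, not a real RPC library's API.

```python
import json

def add(a, b):                  # the "remote" procedure on the server
    return a + b

PROCEDURES = {"add": add}       # what the server stub is willing to call

def client_stub(name, *args):
    # Step 1: marshal the procedure name and parameters into a message.
    request = json.dumps({"proc": name, "args": args}).encode()
    reply = transport(request)                 # steps 2-3: cross the "network"
    return json.loads(reply)["result"]         # step 4: unmarshal the result

def transport(message):
    # Stands in for the OS sending the message between machines.
    return server_stub(message)

def server_stub(message):
    call = json.loads(message)                        # unpack the parameters
    result = PROCEDURES[call["proc"]](*call["args"])  # invoke the procedure
    return json.dumps({"result": result}).encode()    # marshal the reply

print(client_stub("add", 2, 3))  # 5
```

The caller sees an ordinary function call; everything between the two stubs (marshalling, the message exchange, unmarshalling) is hidden, which is the whole point of RPC.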
1. RTP (Real-time Transport Protocol)
• RTP primarily functions as a transport protocol, providing facilities for the transmission of real-time multimedia applications like Internet radio, telephony, and video streaming.
• But RTP is typically implemented in the application layer. This
is because RTP operates in user space, alongside the multimedia
application, handling tasks such as multiplexing and
packetization of audio and video streams.
• RTP facilitates the transmission of multimedia data packets and
ensures timely playback at the receiver end.
• Each RTP packet is numbered sequentially to aid receivers in
detecting missing packets. Upon packet loss, receivers can
choose appropriate actions
1. RTP (Real-time Transport Protocol)
• Packet nesting: RTP packets are encapsulated within UDP (User Datagram Protocol) packets, which are in turn encapsulated within IP (Internet Protocol) packets for transmission over the network.
1. RTP (Real-time Transport Protocol)
• Payload and Encoding: RTP payloads can contain multiple
samples and may be encoded in various formats defined by
application-specific profiles
• Timestamping and Synchronization: Timestamps aid in
reducing network delay variation (jitter) and synchronizing
multiple streams, facilitating scenarios like synchronized
video and audio playback from different physical devices.
1. RTP (Real-time Transport Protocol)
RTP Header Structure:
8. The Internet Transport Protocols: UDP

1. RTP (Real-time Transport Protocol)


RTP Header Structure:
1. Version (V): the version of the RTP protocol being used. Currently, the version is set to 2.
2. Padding (P): the padding bit (P) is set to indicate that the RTP packet has been padded to a multiple of 4 bytes. The last byte of the padding specifies the number of padding bytes added, allowing the receiver to properly interpret the packet.
3. Extension (X): indicates whether an extension header is present in the RTP packet. However, the format and meaning of the extension header are not standardized, providing flexibility for future requirements.
8. The Internet Transport Protocols: UDP

1. RTP (Real-time Transport Protocol)


RTP Header Structure:
4. Contributing Sources Count (CC): This field specifies the
number of contributing sources present in the RTP packet,
ranging from 0 to 15.
5. Marker Bit (M): an application-specific flag that can be used
to mark significant events within the multimedia data stream.
Like, video frame, audio segment, or other relevant events.
6. Payload Type: It indicates the format of the multimedia data,
such as uncompressed audio, MP3, H.264 video, etc.
7. Sequence Number: a monotonically increasing counter that
increments with each RTP packet sent. It aids the receiver in
detecting lost or out-of-order packets.
8. The Internet Transport Protocols: UDP

1. RTP (Real-time Transport Protocol)


RTP Header Structure:
8. Timestamp: the time at which the first sample in the packet was
captured. It allows the receiver to synchronize the playback of
multimedia data by compensating for network delays and jitter.
9. Synchronization Source Identifier (SSRC): Each stream is
assigned a unique SSRC identifier for identification and
synchronization purposes.
10.Contributing Source Identifiers (CSRC): If multiple
contributing sources are present in the RTP packet (indicated by
the CC field), the CSRC field lists the identifiers of these
contributing sources. This information is particularly useful in
scenarios involving mixers, where multiple audio or video
streams are combined.
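The fixed 12-byte part of the header described above (everything through the SSRC, with no CSRC list) can be packed with struct; the payload type, sequence number, timestamp, and SSRC values are arbitrary examples.

```python
import struct

# Packing the 12-byte fixed RTP header: V|P|X|CC in byte 0, M|PT in
# byte 1, then 16-bit sequence number, 32-bit timestamp, 32-bit SSRC.
def rtp_header(pt, seq, timestamp, ssrc, marker=0,
               version=2, padding=0, extension=0, cc=0):
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | cc
    byte1 = (marker << 7) | pt
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(pt=96, seq=7, timestamp=160, ssrc=0xDEADBEEF)
print(len(hdr), hdr[0] >> 6)  # 12 2  (header length, version field)
```

Each CSRC entry, when present, appends another 32-bit word after the SSRC, which is why the CC field counts from 0 to 15.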
2. The Real-time Transport Control Protocol (RTCP)
RTCP, complements RTP by managing feedback, synchronization, and
user interface aspects without transporting any media samples:
• Feedback Mechanism: RTCP provides feedback on network
properties like delay, jitter, bandwidth, and congestion to sources.
• Bandwidth Regulation: In multicast scenarios, RTCP reports are
distributed to all participants, potentially consuming substantial
bandwidth.
• Inter-stream Synchronization: RTCP addresses synchronization
challenges arising from different streams using disparate clocks,
granularities, and drift rates. It helps maintain synchronization among
multiple streams.
• Source Naming: RTCP facilitates source identification by assigning
names, typically in ASCII text format. This enables receivers to display
information about active participants, enhancing user experience.
8. The Internet Transport Protocols: UDP

Playout with Buffering and Jitter Control:


Streaming multimedia
• Media information must be played out at the right time upon
reaching the receiver.
• Packets experience different transit times, causing jitter,
which can lead to media artifacts if played out immediately.
• Buffers are used at the receiver to mitigate jitter by storing
incoming packets.
• Example scenario: Packets arrive with varying delays,
buffered on the client until playback. Playback begins after a
delay to allow for buffering, ensuring smooth play. Delayed
packets may cause gaps in playback, which can be addressed
by skipping or pausing playback.
8. The Internet Transport Protocols: UDP

Playout with Buffering and Jitter Control:


Streaming multimedia
• The playback point determines how long to wait before
playing out buffered media.
• The selection of the playback point depends on the level of
jitter in the connection.
• Playback points may need to be adapted dynamically based on
changing network conditions.
• Glitches in playback can occur if the adaptation is not handled smoothly.
• Alternatively, improving network quality, e.g., using expedited
forwarding differentiated service, can reduce jitter and delay.
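Choosing the playback point requires an estimate of the jitter. RTP receivers keep a running jitter estimate updated as J += (|D| - J)/16, where D is the difference between successive packets' transit times; a sketch, in which the transit times and the "4 jitter units past the mean delay" playout rule are arbitrary illustrative choices:

```python
# Running jitter estimate in the RTP style: J += (|D| - J) / 16,
# where D is the change in transit time between successive packets.
def estimate_jitter(transit_times):
    jitter, prev = 0.0, None
    for t in transit_times:
        if prev is not None:
            d = abs(t - prev)
            jitter += (d - jitter) / 16   # smoothed update toward |D|
        prev = t
    return jitter

transits = [20, 24, 19, 30, 21, 22]       # per-packet transit times, in ms
j = estimate_jitter(transits)
# Playback point: a few jitter units past the mean delay, so that
# almost all packets arrive before they are due to be played.
playout_delay = sum(transits) / len(transits) + 4 * j
print(round(j, 2), round(playout_delay, 1))
```

A higher jitter estimate pushes the playback point later (fewer late packets, more delay); a lower one pulls it earlier, which is the trade-off the slides describe.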
8. The Internet Transport Protocols: UDP

Playout with Buffering


and Jitter Control:
Streaming multimedia
• High jitter can lead to a
degraded quality of service,
especially in real-time
applications
• High jitter can exacerbate packet loss, particularly if the network is already congested, and can make it challenging to accurately monitor network performance and diagnose issues, as shown in the figure.
The Internet Transport Protocols: TCP

Transmission Control Protocol (TCP) is a core


communication protocol in computer networks.
Purpose: It provides reliable end-to-end byte stream
communication over an unreliable internetwork.
Design Goals of TCP:
1. Reliable Data Delivery: TCP ensures the reliable and
sequenced delivery of data over internetworks.
2. Management of Data Transmission: It manages the
transmission of data between devices.
3. Handling Congestion: TCP handles network congestion
by adjusting the transmission rate.
4. Retransmission of Lost Data: It retransmits lost data
packets to ensure complete delivery.
9. The Internet Transport Protocols: TCP
Functionality of TCP:
– Reliable Data Delivery: TCP guarantees the delivery of data packets in the
correct sequence.
– Congestion Control: It regulates the flow of data to prevent network
congestion.
– Error Handling: TCP detects and retransmits lost or corrupted data packets.
– Out-of-Order Delivery: It addresses issues related to packets arriving out
of order.
– Data Reassembly: TCP reassembles received data packets into the correct
sequence at the receiver's end.
TCP vs. IP:
– IP (Internet Protocol) provides the basic mechanism for packet delivery.
– However, IP alone does not ensure reliable delivery or indicate the optimal
transmission rate.
– TCP complements IP by providing reliable and orderly data delivery.
– It ensures the performance and reliability required by most applications
running on the Internet.
9. The Internet Transport Protocols: TCP
 Sockets and Port Numbers:
– TCP communication utilizes endpoints called sockets, identified by a
unique combination of an IP address and a 16-bit port number.
– Port numbers below 1024 are reserved for standard services, while those
from 1024 through 49151 can be registered for use by unprivileged users.
Well-Known Ports:
– Examples of well-known ports include FTP (File Transfer
Protocol) on ports 20 and 21, SSH on port 22, SMTP on port
25, HTTP on port 80, and HTTPS on port 443.
Port Assignment and Management:
– Applications can choose their own ports, but often a single
daemon listens on multiple ports and delegates connections
based on the port used, optimizing system resources.
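A minimal sketch of these endpoints in Python: the server binds a socket to an (IP address, port) pair and the client connects to it. Port 0 here asks the OS to pick any free unprivileged port; the addresses are illustrative loopback values.

```python
# Sketch: each TCP endpoint is an (IP address, port) pair (a socket).
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()   # the server's (IP, port) endpoint

def serve():
    conn, peer = server.accept()    # peer is the client's (IP, port)
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))        # triggers the three-way handshake

data = b""
while True:                         # read the byte stream until FIN
    chunk = client.recv(16)
    if not chunk:
        break
    data += chunk

client.close()
t.join()
server.close()
print(data)                         # b'hello'
```

A real daemon for a well-known service would bind to a fixed port (for example 80 for HTTP) rather than letting the OS choose.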
9. The Internet Transport Protocols: TCP
 TCP Connection Characteristics:
– TCP connections are full duplex and point-to-point, involving exactly
two endpoints. TCP does not support multicasting or broadcasting.
TCP Connection as Byte Stream:
– TCP treats connections as byte streams, not message streams,
meaning message boundaries are not preserved end-to-end.
Sequence Numbers:
– Each byte on a TCP connection has a unique 32-bit sequence
number, crucial for maintaining data order and integrity.
TCP Segments:
– Data exchange occurs in segments, each with a fixed 20-byte
header plus optional data. Segment size is determined
considering factors like IP payload size and MTU.
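Because TCP delivers a byte stream and does not preserve message boundaries, applications that need discrete messages must add their own framing. A common scheme, sketched here over an in-memory byte string, is a length prefix before each message (the two-byte prefix and the sample messages are illustrative choices):

```python
# Sketch: length-prefix framing on top of a TCP-style byte stream.
import struct

def frame(msg: bytes) -> bytes:
    """Prepend a 2-byte big-endian length to a message."""
    return struct.pack("!H", len(msg)) + msg

def deframe(stream: bytes):
    """Split a run of framed bytes back into the original messages."""
    msgs, i = [], 0
    while i < len(stream):
        (n,) = struct.unpack_from("!H", stream, i)
        msgs.append(stream[i + 2 : i + 2 + n])
        i += 2 + n
    return msgs

# Two writes sent back-to-back arrive as one undifferentiated run of
# bytes; the length prefixes let the receiver recover the messages.
wire = frame(b"GET /a") + frame(b"GET /b")
print(deframe(wire))   # [b'GET /a', b'GET /b']
```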
9. The Internet Transport Protocols: TCP
 MTU and Path MTU Discovery:
– TCP segments must fit within the MTU to avoid fragmentation. Path
MTU Discovery dynamically adjusts the segment size to prevent
fragmentation, improving performance.
Sliding Window Protocol:
– TCP uses a sliding window protocol for flow control, with a
dynamic window size. Senders transmit segments and
receivers send acknowledgements indicating the next expected
sequence number and remaining window size.
Handling Network Issues:
– TCP deals with challenges like out-of-order arrival and
retransmissions due to timeouts. TCP implementations
optimize performance by managing retransmissions and
tracking received bytes using sequence numbers.
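The sender-side bookkeeping for the sliding window can be sketched in a few lines. This is an illustrative model only: real TCP also handles sequence-number wraparound, retransmission timers, and congestion control.

```python
# Sketch: sender-side sliding-window flow control. The sender may have
# at most `window` unacknowledged bytes in flight at any time.
class SlidingWindowSender:
    def __init__(self, window):
        self.base = 0          # oldest unacknowledged byte
        self.next_seq = 0      # next byte to be sent
        self.window = window   # receiver-advertised window size

    def can_send(self, nbytes):
        # allowed only if bytes in flight would stay within the window
        return (self.next_seq + nbytes) - self.base <= self.window

    def send(self, nbytes):
        assert self.can_send(nbytes)
        self.next_seq += nbytes

    def on_ack(self, ack):
        # ack = next sequence number the receiver expects
        if ack > self.base:
            self.base = ack    # slide the window forward

s = SlidingWindowSender(window=1000)
s.send(600)
print(s.can_send(600))   # False: would exceed the 1000-byte window
s.on_ack(600)
print(s.can_send(600))   # True: acknowledgment slid the window forward
```

An acknowledgment thus both confirms delivery and reopens sending capacity, which is why the receiver advertises its remaining window in every ACK.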
The TCP Segment Header
1. Source Port: a 16-bit field giving the sender's port on the local host. Together
with the sender's IP address, it forms a unique endpoint for the connection.
2. Destination Port: a 16-bit field giving the receiver's port on the remote host.
Together with the receiver's IP address, it forms the destination endpoint for
the connection.
3. Sequence Number: this 32-bit field gives the sequence number of the first
data byte in the segment. It is used to maintain the correct order of data
transmission and reception.
4. Acknowledgment Number: this 32-bit field gives the next sequence number that
the sender of the segment expects to receive. It is used for acknowledging
received data and facilitating flow control.
5. Data Offset: this 4-bit field specifies the size of the TCP header in 32-bit
words. Because the Options field makes the header variable in length, this
field is needed to determine where the data begins in the segment.
6. Reserved: this 4-bit field is reserved for future use and must be set to zero.
8. Window Size: this 16-bit field specifies the size of the receive window, that is,
the amount of data the sender can transmit before receiving further
acknowledgment from the receiver.
9. Checksum: this 16-bit field is used for error checking of the TCP header
and data.
10. Urgent Pointer: if the URG flag is set, this 16-bit field gives the offset from
the sequence number at which urgent data is found in the segment.
11. Options: additional header options, such as timestamps, maximum segment
size (MSS), and window scale factor.
12. Data: TCP segments can carry up to 65,495 data bytes, calculated by
subtracting the sizes of the IP header (20 bytes) and the TCP header (20
bytes) from the maximum payload size allowed by IP (65,535 bytes).
7. Control Bits: These 8 flags, each occupying 1 bit, control various aspects of the
TCP connection:
1. ECE: used for Explicit Congestion Notification; it signals network
congestion back to the TCP sender.
2. CWR: set by the sender to tell the receiver that it has reduced its
congestion window in response to ECE.
3. URG: indicates urgent data in the segment.
4. ACK: indicates that the Acknowledgment Number field is valid.
5. PSH: indicates that the data should be pushed to the receiving
application without buffering.
6. RST: requests an abrupt reset of the connection, for example after a
host crash or on receipt of an invalid segment.
7. SYN: synchronizes sequence numbers to initiate a connection. SYN = 1
with ACK = 0 indicates a connection request; SYN = 1 with ACK = 1
indicates a connection acceptance, distinguishing requests from
accepted connections.
8. FIN: indicates the end of data transmission.
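The fixed 20-byte header described above can be laid out with Python's struct module. This is a sketch with illustrative field values: the checksum is left at zero here, whereas a real stack computes it over a pseudo-header plus the segment.

```python
# Sketch: packing the fixed 20-byte TCP header.
import struct

SYN, ACK = 0x02, 0x10   # two of the 8 control-bit flags

def tcp_header(src_port, dst_port, seq, ack, flags, window,
               checksum=0, urgent=0):
    data_offset = 5                      # 5 x 32-bit words = 20 bytes
    offset_reserved = data_offset << 4   # high 4 bits; reserved bits zero
    return struct.pack("!HHIIBBHHH",
                       src_port, dst_port,    # 16-bit ports
                       seq, ack,              # 32-bit seq/ack numbers
                       offset_reserved,       # data offset + reserved
                       flags,                 # control bits
                       window,                # 16-bit window size
                       checksum, urgent)      # checksum, urgent pointer

# A SYN segment for a connection request from an ephemeral port to port 80:
hdr = tcp_header(12345, 80, seq=0, ack=0, flags=SYN, window=65535)
print(len(hdr))   # 20
```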
TCP Connection Establishment
•The connections in TCP are established using the three-way
handshake.
Server Side:
 The server passively waits for an incoming connection by
executing the LISTEN and ACCEPT primitives in that order.
 It specifies either a specific source or waits for connections from
any source.
Client Side:
 The client executes a CONNECT primitive, specifying the
destination IP address and port, the maximum TCP segment size it
accepts, and optionally user data.
 The CONNECT primitive sends a TCP segment with the SYN bit
on and ACK bit off, waiting for a response.
TCP Connection Establishment
Connection Establishment Process:
1. Upon receiving the SYN segment at the destination, the
TCP entity checks if there's a process listening on the
specified port.
2. If no process is listening, the destination sends a reply
with the RST bit on to reject the connection.
3. If a process is listening, it receives the incoming TCP
segment and can accept or reject the connection.
4. If accepted, the server sends back a SYN+ACK segment, and the
client's final ACK completes the three-way handshake.
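The segment exchange above can be written out as a trace. In this sketch, x and y stand for the initial sequence numbers each side picks (the values 100 and 300 used below are purely illustrative).

```python
# Sketch: the three segments of the TCP three-way handshake.
def three_way_handshake(x, y):
    """Return the SYN / SYN+ACK / ACK exchange as a list of segments."""
    trace = []
    # 1. client requests a connection: SYN with its initial seq number x
    trace.append(("client->server", {"SYN": 1, "seq": x}))
    # 2. server accepts: SYN+ACK with its own seq y, acknowledging x
    trace.append(("server->client",
                  {"SYN": 1, "ACK": 1, "seq": y, "ack": x + 1}))
    # 3. client acknowledges the server's sequence number
    trace.append(("client->server",
                  {"ACK": 1, "seq": x + 1, "ack": y + 1}))
    return trace

for segment in three_way_handshake(x=100, y=300):
    print(segment)
```

Note that each SYN consumes one sequence number, which is why the acknowledgments carry x + 1 and y + 1 rather than x and y.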
TCP Connection Release:
1. TCP connections, although full duplex, are best
understood as a pair of simplex connections.
2. To release a connection, either party sends a TCP
segment with the FIN bit set, indicating no more data to
transmit.
3. Upon acknowledgment of the FIN, that direction is shut
down for new data, but data may continue flowing in the
other direction.
4. Connection release requires four TCP segments in total
(one FIN and one ACK for each direction), but it's
possible for the first ACK and second FIN to be in the
same segment, reducing the count to three.
9. The Internet Transport Protocols: TCP
 TCP Connection Management Modeling:
 TCP connection management can be represented using a finite state
machine with 11 states.
 Each connection starts in the CLOSED state and transitions to either
LISTEN or SYN SENT based on passive or active open requests.
 Connection release can be initiated by either side, and upon
completion, the state returns to CLOSED.
 The finite state machine includes states such as LISTEN, SYN
RCVD, SYN SENT, ESTABLISHED, FIN WAIT 1, FIN WAIT 2,
TIME WAIT, CLOSING, CLOSE WAIT, and LAST ACK.
 Transitions between states are triggered by various events/actions,
including user-initiated system calls, segment arrivals (SYN, FIN,
ACK, RST), or timeouts.
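A fragment of that finite state machine can be expressed directly as a transition table. This sketch covers only a subset of the 11 states and a simplified set of event labels; the full machine also handles simultaneous opens, resets, and the CLOSING and LAST ACK paths.

```python
# Sketch: part of the TCP connection-management state machine as a
# (state, event) -> next-state table. Event names are illustrative.
TRANSITIONS = {
    ("CLOSED", "passive_open"): "LISTEN",
    ("CLOSED", "active_open/SYN"): "SYN_SENT",
    ("LISTEN", "recv_SYN/SYN+ACK"): "SYN_RCVD",
    ("SYN_SENT", "recv_SYN+ACK/ACK"): "ESTABLISHED",
    ("SYN_RCVD", "recv_ACK"): "ESTABLISHED",
    ("ESTABLISHED", "close/FIN"): "FIN_WAIT_1",
    ("ESTABLISHED", "recv_FIN/ACK"): "CLOSE_WAIT",
    ("FIN_WAIT_1", "recv_ACK"): "FIN_WAIT_2",
    ("FIN_WAIT_2", "recv_FIN/ACK"): "TIME_WAIT",
    ("TIME_WAIT", "timeout"): "CLOSED",
}

def run(state, events):
    """Drive the machine through a sequence of events."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# Active open, data transfer, then active close: back to CLOSED.
print(run("CLOSED", ["active_open/SYN", "recv_SYN+ACK/ACK",
                     "close/FIN", "recv_ACK",
                     "recv_FIN/ACK", "timeout"]))   # CLOSED
```

Walking an event sequence through the table mirrors tracing a connection's lifetime through the state diagram.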