Chapter 2: roadmap
▪ Principles of network applications
▪ Web and HTTP
▪ E-mail, SMTP, IMAP
▪ The Domain Name System DNS
▪ P2P applications
▪ video streaming and content distribution networks
▪ socket programming with UDP and TCP
Recursive query:
▪ puts burden of name resolution on contacted name server
▪ heavy load at upper levels of hierarchy?
[Figure: requesting host at engineering.nyu.edu queries local DNS server dns.nyu.edu; the query is passed recursively through the root DNS server, a TLD DNS server, and the authoritative DNS server dns.cs.umass.edu to resolve gaia.cs.umass.edu, with replies returning along the same path (steps 1–8)]
DNS message header:
▪ identification: 16-bit # for query; reply to query uses same #
▪ flags; # questions; # answer RRs
Transport services and protocols:
▪ provide logical end-end transport between application processes running on different hosts
▪ transport protocols actions in end systems:
  • sender: breaks application messages into segments, passes to network layer
  • receiver: reassembles segments into messages, passes to application layer
[Figure: logical end-end transport between hosts across home network, local or regional ISP, and content provider network/datacenter; transport sits above the network layer in end systems]
Sender:
▪ is passed an application-layer message
▪ determines segment header fields values
▪ creates segment
▪ passes segment to network layer (IP)

Receiver:
▪ receives segment from IP
▪ checks header values
▪ extracts application-layer message
▪ demultiplexes message up to application via socket

[Figure: the application message (app. msg) gains a transport header (Th) as it moves down the sender's stack — application, transport, network (IP), physical — and sheds it on the way up the receiver's stack]
▪ TCP: Transmission Control Protocol
  • congestion control
  • flow control
  • connection setup
▪ UDP: User Datagram Protocol
  • unreliable, unordered delivery
  • no-frills extension of “best-effort” IP

[Figure: logical end-end transport across home network, local or regional ISP, and content provider network/datacenter]
[Figure: multiplexing/demultiplexing — the sender's transport layer prepends a transport header (Ht) to the HTTP msg, the network layer prepends Hn; intermediate nodes forward at the network/link/physical layers; the receiver's transport layer strips Ht and delivers the message]
[Figure: two hosts running application processes over transport (UDP), link, and physical layers]
UDP segment format:
[Figure: UDP header — source port #, dest port #, length, checksum — followed by application data (payload), passed to/from the application layer]

Internet checksum example:
  Transmitted: 5, 6, 11
  Received: 4, 6, 11
  receiver-computed checksum ?= sender-computed checksum (as received)

  sum       1011101110111100
  checksum  0100010001000011

Note: when adding numbers, a carryout from the most significant bit needs to be added to the result.
* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/
Transport Layer: 3-33
Internet checksum: weak protection!
example: add two 16-bit integers:

              1110011001100110
              1101010101010101
  wraparound  1 1011101110111011
  sum           1011101110111100
  checksum      0100010001000011

Even though numbers have changed (bit flips), no change in checksum!
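The wraparound-and-complement arithmetic above can be sketched in a few lines (a simplified illustration of the one's-complement sum, not a production implementation):

```python
def internet_checksum(words):
    """One's-complement sum of 16-bit words, then complement."""
    total = 0
    for w in words:
        total += w
        # a carryout from the most significant bit is added back to the result
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# the two 16-bit integers from the example above
checksum = internet_checksum([0b1110011001100110, 0b1101010101010101])
print(format(checksum, '016b'))  # → 0100010001000011
```

Any pair of bit flips that leaves the column sums unchanged (e.g., a 0→1 in one addend compensated by a 1→0 in the other) produces the same checksum, which is why the protection is weak.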
Principles of reliable data transfer

[Figure: reliable service abstraction — the sending process hands data to a (conceptually) reliable channel that delivers it to the receiving process]
[Figure: reliable service implementation — sender-side and receiver-side of the reliable data transfer protocol sit in the transport layer of the two hosts, communicating over an unreliable channel]

▪ Complexity of reliable data transfer protocol will depend (strongly) on characteristics of unreliable channel (lose, corrupt, reorder data?)
▪ Sender, receiver do not know the “state” of each other, e.g., was a message received?
▪ Bi-directional communication over unreliable channel

Interfaces:
• udt_send(): called by rdt to transfer packet over unreliable channel to receiver
• rdt_rcv(): called when packet arrives on receiver side of channel
Reliable data transfer: getting started
We will:
▪ incrementally develop sender, receiver sides of reliable data transfer
protocol (rdt)
▪ consider only unidirectional data transfer
• but control info will flow in both directions!
▪ use finite state machines (FSM) to specify sender, receiver
FSM notation:
  state: when in this “state”, next state uniquely determined by next event
  transitions labeled: event causing state transition / actions taken on state transition

[Figure: two states with a transition from state 1 to state 2, labeled event / actions]
Example FSM actions — receiver, on receiving an uncorrupted packet:
  extract(rcvpkt,data)
  deliver_data(data)
  sndpkt = make_pkt(ACK, chksum)
  udt_send(sndpkt)

sender, on rdt_send(data):
  sndpkt = make_pkt(1, data, checksum)
  udt_send(sndpkt)
  start_timer
Usender = (L/R) / (RTT + L/R) = .008 / 30.008 = 0.00027
Go-Back-N in action (sender window N=4):

sender: send pkt0, pkt1, pkt2, pkt3 (pkt2 lost: X); then wait
receiver: receive pkt0, send ack0; receive pkt1, send ack1; receive pkt3 (out of order), discard, (re)send ack1
sender: rcv ack0, send pkt4; rcv ack1, send pkt5; ignore duplicate ACK
receiver: receive pkt4, discard, (re)send ack1; receive pkt5, discard, (re)send ack1
pkt2 timeout — sender: send pkt2, send pkt3, send pkt4, send pkt5
receiver: rcv pkt2, deliver, send ack2; rcv pkt3, deliver, send ack3; rcv pkt4, deliver, send ack4; rcv pkt5, deliver, send ack5
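The sender-side window logic in this trace can be sketched as a toy model (illustrative class name `GBNSender`; a real implementation also needs a timer and a channel):

```python
class GBNSender:
    """Toy Go-Back-N sender: cumulative ACKs, resend whole window on timeout."""
    def __init__(self, window=4):
        self.window = window
        self.base = 0       # oldest unACKed seq #
        self.nextseq = 0    # next seq # to send
        self.sent = []      # log of every transmission

    def send(self):
        # keep sending while the window is not full
        while self.nextseq < self.base + self.window:
            self.sent.append(self.nextseq)
            self.nextseq += 1

    def on_ack(self, n):
        # cumulative ACK: all packets up to and including n are ACKed
        self.base = max(self.base, n + 1)
        self.send()         # the window may have slid open

    def on_timeout(self):
        # Go-Back-N: retransmit every unACKed packet in the window
        for seq in range(self.base, self.nextseq):
            self.sent.append(seq)

s = GBNSender(window=4)
s.send()        # pkt0..pkt3 go out
s.on_ack(1)     # ack0/ack1 slide the window: pkt4, pkt5 go out
s.on_timeout()  # pkt2 timeout: resend pkt2..pkt5
print(s.sent)   # → [0, 1, 2, 3, 4, 5, 2, 3, 4, 5]
```

The final log reproduces the trace above: six first transmissions, then the four retransmissions after the pkt2 timeout.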
Selective repeat: a dilemma!

example:
▪ seq #s: 0, 1, 2, 3 (base 4 counting)
▪ window size = 3

(a) no problem: sender sends pkt0, pkt1, pkt2 and all are ACKed; the window slides and the sender sends pkt3 (lost: X) and a new pkt0; the receiver will accept the packet with seq number 0 — correctly, as new data.

(b) oops!: sender sends pkt0, pkt1, pkt2, but the ACKs are lost (X); on timeout the sender retransmits the old pkt0; the receiver's window has already advanced, so it will accept the packet with seq number 0 — wrongly treating a duplicate as new data.

▪ receiver can't see sender side
▪ receiver behavior identical in both cases — something's (very) wrong!

Q: what relationship is needed between seq # size and window size N to avoid the problem in (b)?
TCP sequence numbers, ACKs (simple telnet scenario):
  User types ‘C’ → Seq=42, ACK=79, data = ‘C’
  host ACKs receipt of ‘C’, echoes back ‘C’ → Seq=79, ACK=43, data = ‘C’
  host ACKs receipt of echoed ‘C’ → Seq=43, ACK=80

[Figure: RTT estimation — SampleRTT (milliseconds) and the smoother EstimatedRTT plotted against time (seconds)]
TCP round trip time, timeout
▪ timeout interval: EstimatedRTT plus “safety margin”
• large variation in EstimatedRTT: want a larger safety margin
TimeoutInterval = EstimatedRTT + 4*DevRTT
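The updates behind this rule are exponential weighted moving averages — EstimatedRTT = (1−α)·EstimatedRTT + α·SampleRTT with α = 0.125, and DevRTT = (1−β)·DevRTT + β·|SampleRTT − EstimatedRTT| with β = 0.25. A sketch (DevRTT updated against the old EstimatedRTT, as RFC 6298 specifies):

```python
def update_timeout(est_rtt, dev_rtt, sample_rtt, alpha=0.125, beta=0.25):
    """Apply one RTT measurement; return (EstimatedRTT, DevRTT, TimeoutInterval)."""
    # deviation is measured against the previous estimate
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - est_rtt)
    est_rtt = (1 - alpha) * est_rtt + alpha * sample_rtt
    return est_rtt, dev_rtt, est_rtt + 4 * dev_rtt  # timeout = estimate + safety margin

# e.g. previous EstimatedRTT 100 ms, DevRTT 10 ms, new SampleRTT 120 ms
est, dev, timeout = update_timeout(est_rtt=100.0, dev_rtt=10.0, sample_rtt=120.0)
print(est, dev, timeout)  # → 102.5 12.5 152.5
```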
TCP Sender (simplified)

event: data received from application
▪ create segment with seq #
▪ seq # is byte-stream number of first data byte in segment
▪ start timer if not already running
  • think of timer as for oldest unACKed segment
  • expiration interval: TimeOutInterval

event: timeout
▪ retransmit segment that caused timeout
▪ restart timer

event: ACK received
▪ if ACK acknowledges previously unACKed segments
  • update what is known to be ACKed
  • start timer if there are still unACKed segments
TCP Receiver: ACK generation [RFC 5681]
Event at receiver: arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed.
TCP receiver action: delayed ACK — wait up to 500ms for next segment; if no next segment, send ACK.
[Figure: TCP retransmission scenarios —
  lost ACK: SendBase=92; Seq=92, 8 bytes of data sent; ACK=100 lost (X); timeout; Seq=92, 8 bytes of data retransmitted
  cumulative ACK: ACK=100 delayed, but ACK=120 arrives; SendBase=120, so no retransmission needed
  fast retransmit: receipt of three duplicate ACKs (ACK=100) triggers retransmission of the missing segment]
TCP flow control

[Figure: network layer delivering IP datagram payload into TCP socket buffers — TCP code above IP code, segments arriving from sender; application processes sit above transport on both hosts]

▪ receive window — flow control: # bytes receiver willing to accept
2-way handshake scenarios:

(no problem) client chooses x, sends req_conn(x); server replies acc_conn(x); both reach ESTAB; client sends data(x+1), server accepts data(x+1); connection x completes.

(half open connection) client chooses x, sends req_conn(x); acc_conn(x) is delayed, so the client retransmits req_conn(x); both reach ESTAB; connection x completes, the client terminates, and the server forgets x; the old req_conn(x) then arrives, and the server enters ESTAB again and sends acc_conn(x) — problem: half open connection! (no client)

(dup data) client retransmits req_conn(x) and then data(x+1); connection x completes, the client terminates, the server forgets x; the old req_conn(x) and data(x+1) arrive later, the server re-establishes and accepts data(x+1) — problem: dup data accepted!
TCP 3-way handshake

Server:
  serverSocket = socket(AF_INET, SOCK_STREAM)
  serverSocket.bind(('', serverPort))
  serverSocket.listen(1)
  connectionSocket, addr = serverSocket.accept()

Client:
  clientSocket = socket(AF_INET, SOCK_STREAM)
  clientSocket.connect((serverName, serverPort))

Client state / Server state:
  both sides start in LISTEN
  client: choose init seq num x; send TCP SYN msg (SYNbit=1, Seq=x) → SYNSENT
  server: choose init seq num y; send TCP SYNACK msg, acking SYN (SYNbit=1, Seq=y, ACKbit=1, ACKnum=x+1) → SYN RCVD
  client: received SYNACK(x) indicates server is live; send ACK for SYNACK (ACKbit=1, ACKnum=y+1); this segment may contain client-to-server data → ESTAB
  server: received ACK(y) indicates client is live → ESTAB
A climbing analogy for the 3-way handshake:
1. On belay?
2. Belay on.
3. Climbing.
Causes/costs of congestion: scenario 1
▪ two flows share a router with output link capacity R
▪ no retransmissions needed

Q: What happens as arrival rate λin approaches R/2?

[Figure: throughput λout vs λin — maximum per-connection throughput R/2; delay vs λin — large delays as arrival rate λin approaches capacity]
Causes/costs of congestion: scenario 2
▪ one router, finite buffers
▪ sender retransmits lost, timed-out packet
  • application-layer input = application-layer output: λin = λout
  • transport-layer input includes retransmissions: λ'in ≥ λin

[Figure: Host A sends original data at λin plus retransmitted copies (λ'in) through a router with finite buffers; throughput λout vs λin, each axis up to R/2]

Idealization: the sender knows when a packet has been dropped (to full buffers) and only resends a packet known to be lost. Even so, when sending at R/2, some packets are needed retransmissions, so goodput falls below R/2.

Realistically, the sender can also time out prematurely, sending two copies, both of which are delivered. When sending at R/2, some packets are retransmissions, including needed and un-needed duplicates that are delivered — capacity is wasted carrying copies the receiver already has.
“costs” of congestion:
▪ more work (retransmission) for given receiver throughput
▪ unneeded retransmissions: link carries multiple copies of a packet
• decreasing maximum achievable throughput
[Figure: scenario with multiple senders (Hosts C, D) sharing routers — λout vs λin' (up to R/2)]

network-assisted congestion control:
▪ router may indicate congestion level or explicitly set sending rate
▪ TCP ECN, ATM, DECbit protocols
Chapter 3: roadmap
▪ Transport-layer services
▪ Multiplexing and demultiplexing
▪ Connectionless transport: UDP
▪ Principles of reliable data transfer
▪ Connection-oriented transport: TCP
▪ Principles of congestion control
▪ TCP congestion control
▪ Evolution of transport-layer
functionality
TCP congestion control: AIMD
▪ approach: senders can increase sending rate until packet loss
(congestion) occurs, then decrease sending rate on loss event
Additive Increase: increase sending rate by 1 maximum segment size every RTT until loss detected
Multiplicative Decrease: cut sending rate in half at each loss event

[Figure: TCP sender sending rate over time — AIMD sawtooth behavior: probing for bandwidth]
Why AIMD?
▪ AIMD – a distributed, asynchronous algorithm – has been
shown to:
• optimize congested flow rates network wide!
• have desirable stability properties
TCP slow start:
• initially cwnd = 1 MSS
• double cwnd every RTT
• done by incrementing cwnd for every ACK received

[Figure: one segment, then two segments, then four segments sent per RTT]

Implementation:
▪ variable ssthresh
▪ on loss event, ssthresh is set to 1/2 of cwnd just before loss event
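The slow-start and AIMD rules above can be sketched as per-ACK and per-loss updates (a simplified model with cwnd counted in segments; fast recovery omitted):

```python
MSS = 1  # count cwnd in segments for simplicity

def on_ack(cwnd, ssthresh):
    """cwnd growth per ACK: exponential below ssthresh, additive above."""
    if cwnd < ssthresh:
        cwnd += MSS                  # slow start: +1 MSS per ACK doubles cwnd each RTT
    else:
        cwnd += MSS * (MSS / cwnd)   # congestion avoidance: ~1 MSS per RTT
    return cwnd

def on_loss(cwnd):
    """Multiplicative decrease: ssthresh becomes half of cwnd at the loss."""
    return cwnd / 2

cwnd, ssthresh = 1.0, 8.0
trace = []
for _ in range(10):                  # ten ACKs arrive
    cwnd = on_ack(cwnd, ssthresh)
    trace.append(cwnd)
ssthresh = on_loss(cwnd)             # a loss event halves the threshold
print(trace[:7])                     # → [2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
```

Once cwnd reaches ssthresh (8 MSS here), the growth visibly flattens: each further ACK adds only MSS/cwnd.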
Summary: TCP congestion control (FSM)

initialization: cwnd = 1 MSS; ssthresh = 64 KB; dupACKcount = 0 → slow start

slow start:
• new ACK: cwnd = cwnd + MSS; dupACKcount = 0; transmit new segment(s), as allowed
• cwnd > ssthresh: Λ → congestion avoidance
• duplicate ACK: dupACKcount++
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3; retransmit missing segment → fast recovery

congestion avoidance:
• new ACK: cwnd = cwnd + MSS·(MSS/cwnd); dupACKcount = 0; transmit new segment(s), as allowed
• duplicate ACK: dupACKcount++
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment → slow start
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3; retransmit missing segment → fast recovery

fast recovery:
• duplicate ACK: cwnd = cwnd + MSS; transmit new segment(s), as allowed
• new ACK: cwnd = ssthresh; dupACKcount = 0 → congestion avoidance
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment → slow start
TCP and the congested “bottleneck link”
▪ TCP (classic, CUBIC) increases TCP's sending rate until packet loss occurs at some router's output: the bottleneck link

[Figure: source → router → destination; the bottleneck router's packet queue is almost never empty, and sometimes overflows (packet loss)]

▪ explicit congestion notification in the IP datagram: ECN=10 marks an ECN-capable transport; a congested router sets ECN=11
TCP fairness
Fairness goal: if K TCP sessions share same bottleneck link of
bandwidth R, each should have average rate of R/K
[Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R; plot of Connection 2 throughput vs Connection 1 throughput, each up to R]
Fairness: must all network apps be “fair”?

Fairness and UDP
▪ multimedia apps often do not use TCP
  • do not want rate throttled by congestion control
▪ instead use UDP:
  • send audio/video at constant rate, tolerate packet loss
▪ there is no “Internet police” policing use of congestion control

Fairness, parallel TCP connections
▪ application can open multiple parallel connections between two hosts
▪ web browsers do this, e.g., link of rate R with 9 existing connections:
  • new app asks for 1 TCP, gets rate R/10
  • new app asks for 11 TCPs, gets R/2
[Figure: connection establishment — a TCP handshake (transport layer) followed by a TLS handshake (security) before data, vs. a single QUIC handshake that does both, then data]
[Figure: HTTP GETs over TCP — one TLS encryption and reliable data transfer (RDT) pipeline shared by all GETs — vs. HTTP GETs over QUIC, where each stream has its own encryption and RDT, so an error on one QUIC stream does not stall the others]
[Figure: TCP 3-way handshake message sequence — client in SYN sent, server in SYN rcvd; SYNACK(seq=y, ACKnum=x+1); ACK(ACKnum=y+1); both ESTAB]
[Figure: TCP connection teardown — one side sends FINbit=1, seq=y and can no longer send data (LAST_ACK); the other replies ACKbit=1, ACKnum=y+1 and enters TIMED_WAIT, waiting 2*max segment lifetime before CLOSED; the FIN sender moves to CLOSED on receiving the ACK]
TCP over “long, fat pipes”
▪ example: 1500 byte segments, 100ms RTT, want 10 Gbps throughput
▪ requires W = 83,333 in-flight segments
▪ throughput in terms of segment loss probability, L [Mathis 1997]:

  TCP throughput = 1.22 · MSS / (RTT · √L)
➜ to achieve 10 Gbps throughput, need a loss rate of L = 2·10⁻¹⁰ – a very small loss rate!
▪ versions of TCP for long, high-speed scenarios
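That loss rate follows directly from inverting the Mathis formula: L = (1.22 · MSS / (RTT · throughput))². A quick check with the numbers from the example:

```python
def required_loss_rate(mss_bits, rtt_s, target_bps):
    """Invert TCP throughput = 1.22*MSS/(RTT*sqrt(L)) for loss probability L."""
    return (1.22 * mss_bits / (rtt_s * target_bps)) ** 2

# 1500-byte segments, 100 ms RTT, 10 Gbps target
L = required_loss_rate(mss_bits=1500 * 8, rtt_s=0.1, target_bps=10e9)
print(L)  # ≈ 2.1e-10
```

With W = 83,333 in-flight segments, even one loss every few billion packets is enough to keep classic TCP below the target rate, which motivates the high-speed TCP variants mentioned above.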
[Figure: network layer protocols run in every host and router — hosts run application/transport/network/data link/physical, routers run network/data link/physical]
[Figure: forwarding data plane (hardware) — the value in an arriving packet's header (e.g., 0111) is looked up in the forwarding table to choose among output ports 1, 2, 3; a high-speed switching fabric connects input and output ports]
physical layer: bit-level reception
data link layer: e.g., Ethernet (see chapter 5)
decentralized switching:
▪ given datagram dest., lookup output port using forwarding table in input port memory (“match plus action”)
▪ goal: complete input port processing at ‘line speed’
▪ queuing: if datagrams arrive faster than forwarding rate into switch fabric
Network Layer 4-9
Switching fabrics
▪ transfer packet from input buffer to appropriate output buffer
▪ switching rate: rate at which packets can be transferred from inputs to outputs
  • often measured as multiple of input/output line rate
  • N inputs: switching rate N times line rate desirable
▪ three types of switching fabrics

[Figure: switching via memory — input port (e.g., Ethernet) and output port (e.g., Ethernet) exchange datagrams through memory over the system bus]
[Figure: output port — the switch fabric feeds a datagram buffer (queueing), then the link layer protocol (send) and line termination at the physical layer]
IP fragmentation/reassembly
▪ different link types, different MTUs
▪ large IP datagram divided (“fragmented”) within net
  • one datagram becomes several datagrams
  • “reassembled” only at final destination
  • IP header bits used to identify, order related fragments

[Figure: in: one large datagram; out: 3 smaller datagrams; reassembly at the final destination]
IP fragmentation, reassembly — example:
▪ 4000 byte datagram, MTU = 1500 bytes
▪ original datagram: length=4000, ID=x, fragflag=0, offset=0
▪ one large datagram becomes several smaller datagrams
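The example can be worked through in code (assuming 20-byte headers and offsets counted in 8-byte units, as IPv4 specifies):

```python
def fragment(total_len, mtu, header_len=20):
    """Split a datagram into fragments, returning (length, fragflag, offset) tuples."""
    data_len = total_len - header_len
    # data per fragment must be a multiple of 8 bytes (offset field is in 8-byte units)
    max_data = (mtu - header_len) // 8 * 8
    frags, offset = [], 0
    while data_len > 0:
        chunk = min(max_data, data_len)
        data_len -= chunk
        more = 1 if data_len > 0 else 0          # fragflag: more fragments follow?
        frags.append((chunk + header_len, more, offset // 8))
        offset += chunk
    return frags

print(fragment(4000, 1500))
# → [(1500, 1, 0), (1500, 1, 185), (1040, 0, 370)]
```

The 3980 data bytes split into 1480 + 1480 + 1020; the offsets 185 and 370 are 1480/8 and 2960/8.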
IP addressing: introduction
▪ interface: connection between host/router and physical link
  • routers typically have multiple interfaces
  • hosts typically have one or two interfaces (e.g., wired Ethernet — more in chapters 5, 6)

[Figure: three subnets joined by a router, with interface addresses such as 223.1.1.1, 223.1.1.2, 223.1.1.3, 223.1.1.4, 223.1.2.2, 223.1.2.9, 223.1.3.1, 223.1.3.2, 223.1.3.27]

▪ a network of directly interconnected interfaces — e.g., 223.1.3.0/24 — is called a subnet

[Figure: a larger topology with additional router-to-router subnets 223.1.7.0, 223.1.8.0, 223.1.9.0]
  subnet part | host part
  11001000 00010111 0001000 | 0 00000000
  200.23.16.0/23
With CIDR, a router can summarize these routes using a single network address with a 13-bit prefix: 172.24.0.0/13

Steps:
1. Count the number of left-most matching bits: /13 (255.248.0.0)
2. Add all zeros after the last matching bit:
   172.24.0.0 = 10101100 00011000 00000000 00000000
Supernetting Example
• Company XYZ needs to address 400 hosts.
• Its ISP gives them two contiguous Class C addresses:
• 207.21.54.0/24
• 207.21.55.0/24
• Company XYZ can use a prefix of 207.21.54.0 /23 to supernet these two
contiguous networks. (Yielding 510 hosts)
• 207.21.54.0 /23
• 207.21.54.0/24
• 207.21.55.0/24
23 bits in common
Supernetting Example
• With the ISP acting as the addressing authority for a CIDR block of addresses, the
ISP’s customer networks, which include XYZ, can be advertised among Internet
routers as a single supernet.
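Python's standard `ipaddress` module can confirm both aggregations above — the /23 supernet of the two Class C networks and its host count:

```python
import ipaddress

# the two contiguous Class C networks from the example
nets = [ipaddress.ip_network('207.21.54.0/24'),
        ipaddress.ip_network('207.21.55.0/24')]

# collapse_addresses merges contiguous networks into the smallest covering set
supernet = list(ipaddress.collapse_addresses(nets))
print(supernet)  # → [IPv4Network('207.21.54.0/23')]

# usable host addresses in a /23: 2^9 - 2 = 510
print(ipaddress.ip_network('207.21.54.0/23').num_addresses - 2)  # → 510
```

The same call applied to the eight /16s behind 172.24.0.0/13 would yield that single /13 route.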
Revision 2
Services provided by DNS
• Name Resolution
• To translate user-supplied hostnames to IP addresses.
• Host aliasing
• A host with a complicated hostname can have one or more alias names.
• DNS can be invoked by an application to obtain the canonical hostname for a
supplied alias hostname.
• Mail server aliasing.
• The hostname of the Hotmail mail server is more complicated and much less
mnemonic than simply hotmail.com (for example, relay1.west-coast.hotmail.com).
DNS helps to obtain the canonical hostname for a supplied alias hostname as well as
the IP address of the host.
• Load distribution.
• DNS is also used to perform load distribution among replicated servers, such as
replicated Web servers.
Problems with centralized DNS design
• A single point of failure: If the DNS server crashes, so does the entire
Internet!
• Traffic volume: A single DNS server would have to handle all DNS queries
• Distant centralized database: A single DNS server cannot be “close to” all
the querying clients. If we put the single DNS server in New York City, then
all queries from Australia must travel to the other side of the globe,
perhaps over slow and congested links. This can lead to significant delays.
• Maintenance: The single DNS server would have to keep records for all
Internet hosts. This centralized database will be huge.
• A centralized database in a single DNS server simply doesn’t scale.
Methods of interaction / resolution of various DNS servers
• Iterative DNS queries
1. host cis.poly.edu first sends a DNS query message to its local
DNS server, dns.poly.edu. The query message contains the
hostname to be translated, namely, gaia.cs.umass.edu.
2. The local DNS server forwards the query message to a root DNS
server.
3. The root DNS server takes note of the edu suffix and returns to the
local DNS server a list of IP addresses for TLD servers responsible
for edu.
4. The local DNS server then resends the query message to one of these
TLD servers.
5. The TLD server takes note of the umass.edu suffix and responds
with the IP address of the authoritative DNS server for the University
of Massachusetts, namely, dns.umass.edu.
8. Finally, the local DNS server resends the query message directly to
dns.umass.edu, which responds with the IP address of
gaia.cs.umass.edu.
• Recursive DNS queries
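The iterative walk in steps 1–8 can be modeled with a toy in-memory hierarchy (the record tables and the returned address are illustrative placeholders; a real resolver sends UDP queries to each server in turn):

```python
# toy record store: root knows TLD servers, TLD knows authoritative servers,
# authoritative servers hold the A records
ROOT = {'edu': 'tld-edu-server'}
TLD = {'tld-edu-server': {'umass.edu': 'dns.umass.edu'}}
AUTH = {'dns.umass.edu': {'gaia.cs.umass.edu': '128.119.245.12'}}

def resolve_iteratively(hostname):
    """Local DNS server queries root, then TLD, then authoritative server."""
    trace = []
    tld = hostname.rsplit('.', 1)[-1]              # 'edu'
    tld_server = ROOT[tld]                         # root returns the TLD server
    trace.append(('root', tld_server))
    domain = '.'.join(hostname.split('.')[-2:])    # 'umass.edu'
    auth_server = TLD[tld_server][domain]          # TLD returns the authoritative server
    trace.append(('tld', auth_server))
    ip = AUTH[auth_server][hostname]               # authoritative server returns the A record
    trace.append(('authoritative', ip))
    return ip, trace

ip, trace = resolve_iteratively('gaia.cs.umass.edu')
print(ip)  # → 128.119.245.12
```

Each step mirrors one query/reply pair in the list above; in a recursive query the local server would instead hand the whole job to the root server and wait for the final answer.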
DNS records and messages
• A resource record is a four-tuple that contains the following fields: (Name, Value,
Type, TTL)
• TTL is the time to live of the resource record.
• If Type=A, then Name is a hostname and Value is the IP address for the hostname.
Thus, a Type A record provides the standard hostname-to-IP address mapping.
• If Type=NS, then Name is a domain (such as foo.com) and Value is the hostname of an
authoritative DNS server that knows how to obtain the IP addresses for hosts in the
domain.
• If Type=CNAME, then Value is a canonical hostname for the alias hostname Name.
This record can provide querying hosts the canonical name for a hostname.
• If Type=MX, then Value is the canonical name of a mail server that has an alias
hostname Name. MX records allow the hostnames of mail servers to have simple
aliases.
• DNS Messages
• The first 12 bytes is the
header section
• The question section
contains information
about the query that is
being made.
• In a reply from a DNS
server, the answer section
contains the resource
records for the name that
was originally queried
• The authority section contains records of other authoritative servers.
UDP segment structure
• The UDP header has only four fields,
each consisting of two bytes.
• The application data occupies the data
field of the UDP segment.
• The port numbers allow the destination
host to pass the application data to the
correct process running on the
destination end system.
• The length field specifies the number of
bytes in the UDP segment (header plus
data).
• The checksum is used by the receiving
host to check whether errors have been
introduced into the segment.
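The four two-byte fields map directly onto an 8-byte header; a sketch with Python's `struct` (the checksum is left 0 here, which IPv4 permits to mean "no checksum computed" — a real checksum also covers a pseudo-header of IP addresses):

```python
import struct

def build_udp_segment(src_port, dst_port, payload):
    """Pack the 8-byte UDP header (4 fields x 2 bytes) in network byte order."""
    length = 8 + len(payload)  # length field counts header plus data, in bytes
    header = struct.pack('!HHHH', src_port, dst_port, length, 0)
    return header + payload

seg = build_udp_segment(5432, 53, b'query')
print(struct.unpack('!HHHH', seg[:8]))  # → (5432, 53, 13, 0)
```

The `!` prefix selects big-endian (network) byte order, and each `H` is one unsigned 16-bit field.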
Checksum for the given words:
  0110011001100000
  0101010101010101
  1000111100001100

  sum after wrapping around: 0100101011000010
  checksum (one's complement of the sum): 1011010100111101
Connection Establishment:
Three way handshake protocol
Step 1.
• The client-side TCP sends a TCP segment to the server-side TCP in which the SYN bit is set to 1.
• This is referred to as a SYN segment.
• The client chooses an initial sequence number (client_isn) and places it in the sequence number field of
the initial TCP SYN.
Step 2.
• Once the TCP SYN segment arrives at the server, the server extracts it, allocates the TCP buffers and
variables for the connection, and sends a connection-granted segment to the client TCP.
• This connection-granted segment contains three important pieces of information in the segment header.
• First, the SYN bit is set to 1.
• Second, the acknowledgment field of the TCP segment header is set to client_isn+1.
• Finally, the server chooses its own initial sequence number (server_isn) and puts it in the sequence number field.
• The connection-granted segment is referred to as a SYNACK segment.
Step 3.
• Upon receiving the SYNACK segment, the client also allocates buffers and variables to the connection.
• The client then sends the server yet another segment.
• This last segment acknowledges the server’s connection-granted segment.
• The SYN bit is set to zero.
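The three segments above are produced by the operating system's TCP implementation: a connect()/accept() pair at the socket level triggers the whole exchange. A minimal loopback sketch (single process, one helper thread; port chosen by the OS):

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))      # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def accept_one():
    conn, addr = srv.accept()   # returns only after the 3-way handshake completes
    conn.sendall(b'hello')
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(('127.0.0.1', port))  # SYN / SYNACK / ACK are exchanged here
data = cli.recv(1024)
t.join()
cli.close()
srv.close()
print(data)  # → b'hello'
```

Neither side ever sets a SYN bit explicitly; the handshake (and the FIN exchange on close) is entirely the kernel's job.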
• The client application process issues a
close command.
• This causes the client TCP to send a
special TCP segment to the server
process.
• This special segment has a flag bit in
the segment’s header, the FIN bit set
to 1.
• When the server receives this
segment, it sends the client an
acknowledgment segment in return.
• The server then sends its own
shutdown segment, which has the
FIN bit set to 1.
• Finally, the client acknowledges the
server’s shutdown segment.
Estimation of Round Trip Time
[Figures: Reliable Data Transfer — operation with no loss; operation with packet loss; operation with ACK loss; premature time out]
TCP flow control mechanism
• TCP provides a flow-control service to its applications to eliminate the
possibility of the sender overflowing the receiver’s buffer.
• TCP provides flow control by having the sender maintain a variable called the
receive window (rwnd).
• Receive window is used to give the sender an idea of how much free buffer
space is available at the receiver.
• Because TCP is full-duplex, the sender at each side of the connection maintains
a distinct receive window.
• Suppose that Host A is sending a large file to Host B over a TCP connection.
• Host B allocates a receive buffer to this connection; denote its size by RcvBuffer.
• From time to time, the application process in Host B reads from the buffer.
• Because the spare room changes with time, rwnd is dynamic.
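The bookkeeping described above amounts to rwnd = RcvBuffer − (LastByteRcvd − LastByteRead); a minimal sketch with those variable names:

```python
def receive_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    """Free buffer space the receiver advertises to the sender (rwnd)."""
    rwnd = rcv_buffer - (last_byte_rcvd - last_byte_read)
    # TCP guarantees the unread data never exceeds the buffer, so rwnd >= 0
    assert rwnd >= 0
    return rwnd

# 64 KB buffer; 50,000 bytes received so far, application has read 20,000
print(receive_window(65536, 50000, 20000))  # → 35536
```

The sender in turn keeps LastByteSent − LastByteAcked ≤ rwnd, so it can never overflow the 35,536 bytes of spare room advertised here.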
Congestion Control
Functionalities of network layer
Two functions
• Forwarding. When a packet arrives
at a router’s input link, the router
must move the packet to the
appropriate output link.
• Routing. The network layer must
determine the route or path taken
by packets as they flow from a
sender to a receiver. The algorithms
that calculate these paths are
referred to as routing algorithms.
Router Architecture
• Input port. It terminates an incoming physical link at a router and lookup function is also
performed at the input port
• Switching fabric. The switching fabric connects the router’s input ports to its output ports.
• Output ports. An output port stores packets received from the switching fabric and transmits
these packets on the outgoing link by performing the necessary link-layer and physical-layer
functions.
• Routing processor. The routing processor executes the routing protocols, maintains routing
tables and attached link state information, and computes the forwarding table for the router.
It also performs the network management functions.
IPv4 Datagram Format
• Version number. These 4 bits specify the IP protocol version of the
datagram.
• Header length. These 4 bits are needed to determine where in the IP
datagram the data actually begins. Most IP datagrams do not contain
options, so the typical IP datagram has a 20-byte header.
• Type of service. The type of service (TOS) bits were included in the
IPv4 header to allow different types of IP datagrams (for example,
datagrams particularly requiring low delay, high throughput, or
reliability) to be distinguished from each other.
• Datagram length. This is the total length of the IP datagram (header
plus data), measured in bytes.
• Identifier, flags, fragmentation offset. These three fields have to do with so-called
IP fragmentation.
• Time-to-live. The time-to-live (TTL) field is included to ensure that datagrams do
not circulate forever in the network.
• Protocol. This field is used only when an IP datagram reaches its final destination.
• Header checksum. The header checksum aids a router in detecting bit errors in a
received IP datagram.
• Source and destination IP addresses. When a source creates a datagram, it inserts
its IP address into the source IP address field and inserts the address of the
ultimate destination into the destination IP address field.
• Options. The options fields allow an IP header to be extended.
• Data (payload). The data field of the IP datagram contains the transport-layer
segment (TCP or UDP) to be delivered to the destination.
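The fixed 20-byte layout of these fields can be parsed with `struct` (a sketch; options and checksum verification are omitted):

```python
import struct
import socket

def parse_ipv4_header(raw):
    """Extract the main IPv4 header fields from the first 20 bytes."""
    (ver_ihl, tos, total_len, ident, flags_frag, ttl,
     proto, checksum, src, dst) = struct.unpack('!BBHHHBBH4s4s', raw[:20])
    return {
        'version': ver_ihl >> 4,     # high 4 bits of the first byte
        'ihl': ver_ihl & 0x0F,       # header length in 32-bit words (5 = 20 bytes)
        'total_length': total_len,
        'ttl': ttl,
        'protocol': proto,           # 6 = TCP, 17 = UDP
        'src': socket.inet_ntoa(src),
        'dst': socket.inet_ntoa(dst),
    }

# a hand-built header: version 4, IHL 5, length 40, TTL 64, TCP, 10.0.0.1 → 10.0.0.2
hdr = struct.pack('!BBHHHBBH4s4s', 0x45, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton('10.0.0.1'), socket.inet_aton('10.0.0.2'))
fields = parse_ipv4_header(hdr)
print(fields['version'], fields['ihl'], fields['src'])  # → 4 5 10.0.0.1
```

The identifier, flags, and fragmentation offset share the `ident` and `flags_frag` words above, which is exactly where the fragmentation example earlier does its bookkeeping.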
23-4 SCTP
Table 23.4 Some SCTP applications
Figure 23.27 Multiple-stream concept
Figure 23.28 Multihoming concept
■ Full-Duplex Communication
■ Like TCP, SCTP offers full-duplex service, in which data can flow in both
directions at the same time. Each SCTP then has a sending and receiving
buffer, and packets are sent in both directions.
■ Connection-Oriented Service
■ Like TCP, SCTP is a connection-oriented protocol. However, in SCTP, a
connection is called an association. When a process at site A wants to send
and receive data from another process at site B, the following occurs:
■ 1. The two SCTPs establish an association between each other.
■ 2. Data are exchanged in both directions.
■ 3. The association is terminated.
■ Reliable Service
■ SCTP, like TCP, is a reliable transport protocol. It uses an acknowledgment
mechanism to check the safe and sound arrival of data.
SCTP Features
■ Transmission Sequence Number
■ SCTP uses a transmission sequence number (TSN) to number
the data chunks
■ Stream Identifier
■ Each stream in SCTP needs to be identified by using a stream
identifier
Figure 23.29 Comparison between a TCP segment and an SCTP packet
The verification tag in SCTP is an association identifier, which does not exist in TCP. In TCP, the
combination of IP and port addresses defines a connection; SCTP, however, may use multihoming with
different IP addresses, so a unique verification tag is needed to define each association.
Figure 23.30 Packet, data chunks, and streams
Figure 23.31 SCTP packet format
Figure 23.32 General header
Table 23.5 Chunks
Figure 23.33 Four-way handshaking
Figure 23.34 Simple data transfer
Figure 23.35 Association termination
Figure 23.36 Flow control, receiver site
Figure 23.37 Flow control, sender site
Figure 23.38 Flow control scenario
Figure 23.39 Error control, receiver site
Figure 23.40 Error control, sender site