CSE2011 DCCN Module 1 Notes
Introduction
Computer Networks
The Internet is a computer network that interconnects billions of computing devices throughout
the world. Not long ago, these computing devices were primarily traditional desktop computers,
Linux workstations, and so-called servers that store and transmit information such as Web pages
and e-mail messages. In Internet jargon, all of these devices are called hosts or end systems.
End systems are connected together by a network of communication links and packet switches.
End systems are also referred to as hosts because they host (that is, run) application programs such
as a Web browser program, a Web server program, or an e-mail client program. The terms hosts and end systems
are used interchangeably; that is, host = end system. Hosts are sometimes further divided into two
categories: clients and servers. Informally, clients tend to be desktops, laptops, smartphones, and
so on, whereas servers tend to be more powerful machines that store and distribute Web pages,
stream video, relay e-mail, and so on. Today, most of the servers from which we receive search
results, e-mail, Web pages, videos and mobile app content reside in large data centers.
There are many types of communication links, which are made up of different types of physical
media, including coaxial cable, copper wire, optical fiber, and radio spectrum. Different links can
transmit data at different rates, with the transmission rate of a link measured in bits/second. In the
jargon of computer networks, a packet is simply a package of data or information.
A packet switch takes a packet arriving on one of its incoming communication links and forwards
that packet on one of its outgoing communication links. Packet switches come in many shapes and
flavors, but the two most prominent types in today’s Internet are routers and link-layer switches.
Both types of switches forward packets toward their ultimate destinations. Link-layer switches are
typically used in access networks, while routers are typically used in the network core. The
sequence of communication links and packet switches traversed by a packet from the sending end
system to the receiving end system is known as a route or path through the network. End systems
access the Internet through Internet Service Providers (ISPs). Each ISP is in itself a network of
packet switches and communication links. ISPs provide a variety of types of network access to the
end systems, including residential broadband access such as cable modem or DSL, high-speed
local area network access, and mobile wireless access.
What Is a Protocol?
A protocol defines the format and the order of messages exchanged between two or more
communicating entities, as well as the actions taken on the transmission and/or receipt of a
message or other event.
Network protocols are a set of rules governing the exchange of information in an easy, reliable,
and secure way.
Physical Media
Examples of physical media include twisted-pair copper wire, coaxial cable, multimode fiber-optic
cable, terrestrial radio spectrum, and satellite radio spectrum. Physical media fall into two
categories: guided media and unguided media. With guided media, the waves are guided along
a solid medium, such as a fiber-optic cable, a twisted-pair copper wire, or a coaxial cable. With
unguided media, the waves propagate in the atmosphere and in outer space, such as in a wireless
LAN or a digital satellite channel.
Bus Topology
The bus topology is designed in such a way that all the stations are connected
through a single cable known as a backbone cable.
Each node is either connected to the backbone cable by drop cable or directly
connected to the backbone cable.
When a node wants to send a message over the network, it puts the message on
the network. All the stations available in the network will receive the message,
whether or not it is addressed to them.
The bus topology is mainly used in 802.3 (Ethernet) and 802.4 standard
networks.
The backbone cable is considered a "single lane" through which the
message is broadcast to all the stations.
The most common access method in bus topologies is CSMA (Carrier Sense
Multiple Access).
Ring Topology
Star Topology
Tree Topology
A tree topology combines several star topologies by connecting their central components to a
centre node.
Experts also define tree topology as a combination of bus and star topologies, in
which all nodes are attached with the help of a single central node.
Every node in this architecture is connected in a hierarchy, linked one-to-one with each
neighbouring node on the level below it. Each secondary node has a point-to-point link to the
parent node, and all secondary nodes have point-to-point connections to the tertiary
nodes under their jurisdiction. When examined visually, these systems resemble a tree
structure.
Mesh topology
Hybrid Topology
Layered Architecture
The main aim of the layered architecture is to divide the design into small pieces.
Each lower layer adds its services to the higher layer to provide a full set of services to
manage communications and run the applications.
It provides modularity and clear interfaces, i.e., provides interaction between subsystems.
OSI Model
OSI stands for Open Systems Interconnection. It is a reference model that describes
how information from a software application in one computer moves through a
physical medium to the software application in another computer.
OSI consists of seven layers, and each layer performs a particular network function.
The OSI model was developed by the International Organization for Standardization (ISO)
in 1984, and it is now considered an architectural model for inter-computer
communications.
Physical Layer (Layer 1):
The lowest layer of the OSI reference model is the physical layer. It is responsible for the
actual physical connection between the devices. The physical layer contains information in
the form of bits. It is responsible for transmitting individual bits from one node to the next.
When receiving data, this layer will get the signal received, convert it into 0s and 1s, and
send them to the Data Link layer, which will put the frame back together.
Data Link Layer (Layer 2):
The Data Link layer is responsible for node-to-node delivery of frames. Its main functions are:
Physical addressing: After creating frames, the Data Link layer adds the physical addresses
(MAC addresses) of the sender and/or receiver in the header of each frame.
Error control: Data link layer provides the mechanism of error control in which it detects
and retransmits damaged or lost frames.
Flow Control: The data rate must be constant on both sides else the data may get corrupted
thus, flow control coordinates the amount of data that can be sent before receiving
acknowledgement.
Access control: When a single communication channel is shared by multiple devices, the
MAC sub-layer of the data link layer helps to determine which device has control over the
channel at a given time
Note: Packet in Data Link layer is referred to as Frame.
Switch & Bridge are Data Link Layer devices
Network Layer (Layer 3):
Routing: The network layer protocols determine which route is suitable from source to
destination. This function of the network layer is known as routing.
Logical Addressing: In order to identify each device on internetwork uniquely, the network
layer defines an addressing scheme. The sender & receiver’s IP addresses are placed in the
header by the network layer. Such an address distinguishes each device uniquely and
universally.
The data in the Network layer is referred to as Datagram.
Transport Layer
The transport layer provides services to the application layer and takes services from the
network layer. The data in the transport layer is referred to as Segments. It is responsible for
the End to End Delivery of the complete message.
The services provided by the transport layer :
Connection-Oriented Service
Connectionless service
Session Layer
This layer is responsible for the establishment of connection, maintenance of sessions,
authentication, and also ensures security.
Presentation Layer
The presentation layer is also called the Translation layer. The data from the application
layer is extracted here and manipulated as per the required format to transmit over the
network.
The functions of the presentation layer are :
Translation: For example, ASCII to EBCDIC.
Encryption/ Decryption: Data encryption translates the data into another form or code.
The encrypted data is known as the ciphertext and the decrypted data is known as plain
text. A key value is used for encrypting as well as decrypting data.
Compression: Reduces the number of bits that need to be transmitted on the network.
Application Layer
At the very top of the OSI Reference Model stack of layers, we find the Application layer
which is implemented by the network applications. These applications produce the data,
which has to be transferred over the network. This layer also serves as a window for the
application services to access the network and for displaying the received information to the
user.
TCP/IP model
Internet uses TCP/IP protocol suite, also known as Internet suite..The internet is independent
of its underlying network architecture . This model has the following layers:
The OSI Model we just looked at is just a reference/logical model. It was designed to
describe the functions of the communication system by dividing the communication
procedure into smaller and simpler components. But when we talk about the TCP/IP model,
it was designed and developed by Department of Defense (DoD) in 1960s and is based on
standard protocols. It stands for Transmission Control Protocol/Internet Protocol
Encapsulation
Figure 1.24 shows the physical path that data takes down a sending end system’s protocol
stack, up and down the protocol stacks of an intervening link-layer switch and router, and
then up the protocol stack at the receiving end system.
At the sending host, an application-layer message (M in Figure 1.24) is passed to the transport
layer. In the simplest case, the transport layer takes the message and appends additional
information (so-called transport-layer header information, Ht in Figure 1.24) that will be used
by the receiver-side transport layer. The application-layer message and the transport-layer
header information together constitute the transport-layer segment. The transport-layer
segment thus encapsulates the application-layer message. The added information might
include information allowing the receiver-side transport layer to deliver the message up to
the appropriate application, and error-detection bits that allow the receiver to determine
whether bits in the message have been changed en route. The transport layer then passes the
segment to the network layer, which adds network-layer header information (Hn in Figure
1.24) such as source and destination end system addresses, creating a network-layer
datagram. The datagram is then passed to the link layer, which (of course!) will add its own
link-layer header information and create a link-layer frame. Thus, we see that at each layer,
a packet has two types of fields: header fields and a payload field. The payload is typically a
packet from the layer above.
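To make the idea concrete, here is a minimal Python sketch of encapsulation, assuming toy
header formats invented for illustration (real protocol headers are far richer):

# Minimal sketch of encapsulation: each layer prepends a header to the
# payload handed down from the layer above. Header contents are toy
# placeholders, not real protocol formats.

def transport_encapsulate(message: bytes, src_port: int, dst_port: int) -> bytes:
    ht = src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big")  # toy Ht
    return ht + message                 # segment = Ht + M

def network_encapsulate(segment: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
    hn = src_ip + dst_ip                # toy Hn: source and destination addresses
    return hn + segment                 # datagram = Hn + segment

def link_encapsulate(datagram: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    hl = dst_mac + src_mac              # toy Hl
    return hl + datagram                # frame = Hl + datagram

message = b"GET /index.html"            # application-layer message M
segment = transport_encapsulate(message, 51000, 80)
datagram = network_encapsulate(segment, b"\xc0\xa8\x00\x01", b"\x5d\xb8\xd8\x22")
frame = link_encapsulate(datagram, b"\xaa" * 6, b"\xbb" * 6)
# At each layer, the packet is its header fields plus the payload from the layer above.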
Any message sent from one process to another must go through the underlying network. A
process sends messages into, and receives messages from, the network through a software
interface called a socket. Figure 2.3 illustrates socket communication between two processes.
A socket is the interface between the application layer and the transport layer within a host.
It is also referred to as the Application Programming Interface (API) between the application
and the network, since the socket is the programming interface with which network
applications are built
Addressing Processes
In order to send postal mail to a particular destination, the destination needs to have an
address. Similarly, in order for a process running on one host to send packets to a process
running on another host, the receiving process needs to have an address. To identify the
receiving process, two pieces of information need to be specified: (1) the address of the host
and (2) an identifier that specifies the receiving process in the destination host. In the Internet,
the host is identified by its IP address. In addition to knowing the address of the host to
which a message is destined, the sending process must also identify the receiving process
(more specifically, the receiving socket) running in the host. This information is needed
because in general a host could be running many network applications. A destination port
number serves this purpose.
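The (host address, port number) pair can be seen in a few lines of Python; the hostname
example.com and port 80 below are illustrative choices, not anything mandated by the notes:

import socket

# Connect to a process identified by (host address, port number).
# "example.com" and port 80 are illustrative; any reachable server works.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := sock.recv(4096):     # read until the server closes the connection
        reply += chunk
print(reply.split(b"\r\n")[0])          # first line: the HTTP status line

The socket hides everything below the transport layer: the application simply writes bytes in
and reads bytes out.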
The Web was the first Internet application that caught the general public’s eye. It
dramatically changed how people interact inside and outside their work environments.
The World Wide Web is abbreviated as WWW and is commonly known as the web.
The WWW was initiated by CERN (the European Organization for Nuclear Research) in 1989.
WWW can be defined as the collection of different websites around the world,
containing different information shared via local servers (or computers).
Overview of HTTP
HTTP stands for HyperText Transfer Protocol.
It is an application layer protocol used to access the data on the World Wide Web
(www).
The HTTP protocol can be used to transfer the data in the form of plain text, hypertext,
audio, video, and so on.
HTTP is a connectionless protocol: the HTTP client initiates a request and waits for a
response from the server. When the server receives the request, it processes the
request and sends back the response to the HTTP client, after which the client
disconnects the connection. The connection between client and server exists only
for the duration of the current request and response.
An HTTP transaction between client and server works as follows: the client
initiates a transaction by sending a request message to the server, and the server replies
to the request message by sending a response message.
The first line of an HTTP request message is called the request line; the subsequent
lines are called the header lines.
The request line has three fields: the method field, the URL field, and the HTTP version
field.
The method field can take on several different values, including GET, POST, HEAD,
PUT, and DELETE.
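As an illustration, a typical GET request message (the host and header values follow a common
textbook example) looks like this:

GET /somedir/page.html HTTP/1.1
Host: www.someschool.edu
Connection: close
User-agent: Mozilla/5.0
Accept-language: en

The first line is the request line (method, URL, and HTTP version); the remaining lines are
header lines.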
An HTTP response message has three sections: an initial status line, six header lines, and then the entity body.
The entity body is the meat of the message—it contains the requested object itself
(represented by data data data data data ...).
The status line has three fields: the protocol version field, a status code, and a
corresponding status message.
In this example, the status line indicates that the server is using HTTP/1.1 and that
everything is OK
Some common status codes and associated phrases include:
• 200 OK: Request succeeded and the information is returned in the response.
• 400 Bad Request: This is a generic error code indicating that the request could not be
understood by the server.
• 404 Not Found: The requested document does not exist on this server.
• 505 HTTP Version Not Supported: The requested HTTP protocol version is not supported
by the server.
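A matching example response message, with illustrative header values, looks like this:

HTTP/1.1 200 OK
Connection: close
Date: Tue, 18 Aug 2015 15:44:04 GMT
Server: Apache/2.2.3 (CentOS)
Last-Modified: Tue, 18 Aug 2015 15:11:03 GMT
Content-Length: 6821
Content-Type: text/html

(data data data data data ...)

The status line reports the version (HTTP/1.1) and status (200 OK), the six header lines follow,
and the entity body carries the requested object.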
Non-Persistent
In non-persistent HTTP, at most one object can be sent over a
single TCP connection. This means that for each object to be sent from source to
destination, a new connection must be created.
Persistent
In persistent HTTP, multiple objects can be sent over a single TCP connection.
This means that multiple objects can be transmitted from source to destination over a single
HTTP connection.
We human beings can be identified in many ways. For example, we can be identified by the
names that appear on our birth certificate Just as humans can be identified in many ways, so
too can Internet hosts. One identifier for a host is its hostname. Hostnames—such as
www.facebook.com, www.google.com, gaia.cs.umass.edu. Furthermore, because hostnames
can consist of variable-length alphanumeric characters, it would be difficult to process by
routers. For these reasons, hosts are also identified by so-called IP addresses.
DNS is a service that translates domain names into IP addresses. This allows users
of networks to use friendly names when looking for other hosts instead of
remembering IP addresses.
The client first contacts one of the root servers, which returns IP addresses for TLD servers
for the top-level domain com. The client then contacts one of these TLD servers, which
returns the IP address of an authoritative server for amazon.com. Finally, the client contacts
one of the authoritative servers for amazon.com, which returns the IP address for the
hostname www.amazon.com.
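Applications rarely perform these steps themselves; the local resolver library does. A short
Python sketch, with www.amazon.com reused from the example above:

import socket

# Ask the local resolver to translate a hostname into an IP address.
# Behind this one call, the resolver may walk root -> TLD -> authoritative servers.
ip = socket.gethostbyname("www.amazon.com")
print(ip)

# getaddrinfo returns all address records (IPv4 and IPv6) with more detail.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.amazon.com", 80):
    print(family, sockaddr)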
Root DNS servers. There are more than 1000 root server instances scattered all over the
world, as shown in Figure 2.18. These root servers are copies of 13 different root servers,
managed by 12 different organizations, and coordinated through the Internet Assigned
Numbers Authority [IANA 2020]. The full list of root name servers, along with the
organizations that manage them and their IP addresses can be found at [Root Servers 2020].
Root name servers provide the IP addresses of the TLD servers.
Top-level domain (TLD) servers. For each of the top-level domains—top-level domains
such as com, org, net, edu, and gov, and all of the country top-level domains such as uk, fr,
ca, and jp—there is a TLD server (or server cluster). The company Verisign Global Registry
Services maintains the TLD servers for the com top-level domain, and the company Educause
maintains the TLD servers for the edu top-level domain. The network infrastructure
supporting a TLD can be large and complex.
Authoritative DNS servers. Every organization with publicly accessible hosts (such as Web
servers and mail servers) on the Internet must provide publicly accessible DNS records that
map the names of those hosts to IP addresses. An organization’s authoritative DNS server
houses these DNS records. An organization can choose to implement its own authoritative
DNS server to hold these records; alternatively, the organization can pay to have these
records stored in an authoritative DNS server of some service provider. Most universities and
large companies implement and maintain their own primary and secondary (backup)
authoritative DNS server.
Transport Layer
The main role of the transport layer is to provide the communication services
directly to the application processes running on different hosts.
The transport layer provides a logical communication between application
processes running on different hosts. Although the application processes on
different hosts are not physically connected, application processes use the logical
communication provided by the transport layer to send the messages to each other.
A computer network provides more than one protocol to the network applications.
For example, TCP and UDP are two transport layer protocols, each of which provides a
different set of services to the application layer.
The services provided by the transport layer protocols can be divided into five categories:
End-to-end delivery
Addressing
Reliable delivery
Flow control
Multiplexing
UDP Header
Source port address: It defines the address of the application process that has
delivered the message. The source port address is 16 bits.
Destination port address: It defines the address of the application process that will
receive the message. The destination port address is 16 bits.
Total length: It defines the total length of the user datagram in bytes. It is a 16-bit
field.
Checksum: The checksum is a 16-bit field which is used in error detection.
Checksum Calculation:
Steps followed in checksum calculation
The first thing we do is divide the data into 16-bit pieces. Let's assume we have
three 16-bit words, as below.
1001101001010110
0000101110001110
0000110111001100
We add these three 16-bit words using binary addition, wrapping any carry out of the
16th bit back into the sum. The result is:
1011001110110000 (sum)
The sender places the one's complement of this sum (every bit inverted) in the checksum field:
1001101001010110 (data)
0000101110001110 (data)
0000110111001100 (data)
0100110001001111 (checksum)
The receiver simply adds all four of the above 16-bit words: the data as well as the checksum.
If the result is 1111111111111111, that is, all bits are 1s, the data is accepted as error-free.
Even if there is one 0, that means errors were introduced in the data during transit.
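The same calculation can be expressed as a short Python sketch of the 16-bit one's-complement
checksum, using the three example words from above:

def ones_complement_sum(words):
    """Add 16-bit words, wrapping any carry out of bit 16 back into the sum."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return total

data = [0b1001101001010110, 0b0000101110001110, 0b0000110111001100]
checksum = ~ones_complement_sum(data) & 0xFFFF      # one's complement of the sum
print(f"{checksum:016b}")                           # prints 0100110001001111

# Receiver side: data plus checksum must sum to all 1s if no errors occurred.
assert ones_complement_sum(data + [checksum]) == 0xFFFF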
The finite-state machine (FSM) definitions for the rdt1.0 sender and receiver are shown in
Figure 3.9. The FSM in Figure 3.9(a) defines the operation of the sender, while the FSM in
Figure 3.9(b) defines the operation of the receiver. The arrows in the FSM description indicate
the transition of the protocol from one state to another.
The sending side of rdt simply accepts data from the upper layer via the rdt_send(data)
event, creates a packet containing the data (via the action make_pkt(data)) and sends
the packet into the channel. In practice, the rdt_send(data) event would result from a
procedure call (for example, to rdt_send()) by the upper-layer application.
On the receiving side, rdt receives a packet from the underlying channel via the
rdt_rcv(packet) event, removes the data from the packet (via the action extract (packet,
data)) and passes the data up to the upper layer (via the action deliver_data(data)). In
practice, the rdt_rcv(packet) event would result from a procedure call (for example, to
rdt_rcv()) from the lower-layer protocol.
The sender will be waiting for an ACK or a NAK packet from the receiver. If an ACK
packet is received the sender knows that the most recently transmitted packet has been
received correctly and thus the protocol returns to the state of waiting for data from the
upper layer. If a NAK is received, the protocol retransmits the last packet and waits for
an ACK or NAK to be returned by the receiver in response to the retransmitted data
packet. It is important to note that when the sender is in the wait-for-ACK-or-NAK
state, it cannot get more data from the upper layer. The sender will not send a new piece
of data until it is sure that the receiver has correctly received the current packet. Because
of this behavior, protocols such as rdt2.0 are known as stop-and-wait protocols.
RDT 3.0 is the last and best version of the Reliable Data Transfer protocol. Before RDT
3.0, RDT 2.2 was introduced to account for a channel with bit errors, in which bit errors
can also occur in acknowledgments. RDT 2.2 is a stop-and-wait protocol, so if there is a
network issue and a packet or its acknowledgment is lost, the sender waits for it forever.
How does RDT 3.0 solve this problem?
RDT 3.0 introduces a timer at the sender side: if the acknowledgment is not received within
a particular time, the sender resends the packet. This method solves the issue of packet loss.
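The timer idea can be sketched over a plain UDP socket, letting the socket timeout stand in
for the rdt3.0 countdown timer. The receiver address and the one-byte sequence-number
framing below are assumptions for illustration, not the actual rdt3.0 packet format:

import socket

SERVER = ("127.0.0.1", 9999)    # hypothetical receiver address
TIMEOUT = 2.0                   # seconds to wait for an ACK before resending

def send_reliably(sock: socket.socket, seq: int, payload: bytes) -> None:
    """Stop-and-wait sender: transmit, start timer, retransmit on timeout."""
    packet = bytes([seq]) + payload          # toy framing: 1-byte sequence number
    sock.settimeout(TIMEOUT)
    while True:
        sock.sendto(packet, SERVER)
        try:
            ack, _ = sock.recvfrom(1024)
            if ack and ack[0] == seq:        # ACK carries the matching sequence number
                return                       # delivered; hand control back to caller
        except socket.timeout:
            pass                             # timer expired: loop and resend

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_reliably(sock, 0, b"hello")             # blocks until a receiver ACKs

As in rdt3.0, a lost packet and a lost ACK look identical to the sender; both simply cause the
timer to expire and the packet to be resent.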
Consider a sender and receiver connected by a 1 Gbps link with a round-trip time (RTT) of
30 milliseconds, sending 1,000-byte (8,000-bit) packets with a stop-and-wait protocol. The
time to transmit a packet onto the link is L/R = 8,000 bits / 10^9 bits/sec = 0.008 milliseconds,
so the sender's utilization is (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027.
That is, the sender was busy only 2.7 hundredths of one percent of the time! Viewed another
way, the sender was able to send only 1,000 bytes in 30.008 milliseconds, an effective
throughput of only 267 kbps—even though a 1 Gbps link was available! Imagine the unhappy
network manager who just paid a fortune for a gigabit capacity link but manages to get a
throughput of only 267 kilobits per second! This is a graphic example of how network
protocols can limit the capabilities provided by the underlying network hardware. Also, we
have neglected lower-layer protocol-processing times at the sender and receiver, as well as
the processing and queuing delays that would occur at any intermediate routers between the
sender and receiver. Including these effects would serve only to further increase the delay
and further accentuate the poor performance. The solution to this particular performance
problem is simple: Rather than operate in a stop-and-wait manner, the sender is allowed to
send multiple packets without waiting for acknowledgments, as illustrated in Figure 3.17(b).
Figure 3.18(b) shows that if the sender is allowed to transmit three packets before having to
wait for acknowledgments, the utilization of the sender is essentially tripled. Since the many
in-transit sender-to-receiver packets can be visualized as filling a pipeline, this technique is
known as pipelining. Two basic approaches toward pipelined error recovery can be
identified: Go-Back-N and selective repeat.
Go-Back-N (GBN)
In a Go-Back-N (GBN) protocol, the sender is allowed to transmit multiple packets (when
available) without waiting for an acknowledgment, but is constrained to have no more than
some maximum allowable number, N, of unacknowledged packets in the pipeline. For this
reason, N is often referred to as the window size and the GBN protocol itself as a sliding-
window protocol.
Figure 3.22 shows the operation of the GBN protocol for the case of a window size of four
packets. Because of this window size limitation, the sender sends packets 0 through 3 but
then must wait for one or more of these packets to be acknowledged before proceeding. As
each successive ACK (for example, ACK0 and ACK1) is received, the window slides
forward and the sender can transmit one new packet (pkt4 and pkt5, respectively). On the
receiver side, packet 2 is lost and thus packets 3, 4, and 5 are found to be out of order and are
discarded.
Step 1: Firstly, the sender will send the first four frames to the receiver, i.e., 0, 1, 2, 3, and
now the sender is expecting to receive the acknowledgment of the 0th frame.
Let's assume that the receiver has sent the acknowledgment for the 0th frame, and the
sender has successfully received it.
The sender will then send the next frame, i.e., 4, and the window slides to hold four
frames (1, 2, 3, 4).
The receiver will then send the acknowledgment for frame no 1. After receiving the
acknowledgment, the sender will send the next frame, i.e., frame no 5, and the window
will slide to hold four frames (2, 3, 4, 5).
Now, let's assume that the receiver does not acknowledge frame no 2: either the
frame is lost, or the acknowledgment is lost. Instead of sending frame no 6, the
sender goes back to 2, which is the first frame of the current window, and retransmits all
the frames in the current window, i.e., 2, 3, 4, 5.
Example 1: In GB4 (Go-Back-N with window size N = 4), if every 6th packet being
transmitted is lost and we have to send 10 packets, then how many transmissions are required?
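A worked trace (one common convention: the sender keeps filling its window until the timeout
for the lost packet fires, and transmissions 6, 12, 18, ... are the ones lost):
Transmissions 1-5 carry packets 1, 2, 3, 4, 5; their ACKs arrive and the window slides.
Transmission 6 carries packet 6 and is lost; transmissions 7, 8, 9 carry packets 7, 8, 9, which
the receiver discards because it is still expecting packet 6.
On timeout, the sender goes back to packet 6: transmissions 10, 11, 12, 13 carry packets
6, 7, 8, 9, but transmission 12 (packet 8) is lost, so packet 9 is discarded. The ACKs for
packets 6 and 7 slide the window, and transmission 14 carries packet 10, which is also discarded.
On the next timeout, the sender goes back to packet 8: transmissions 15, 16, 17 carry packets
8, 9, 10, all received successfully.
Total = 17 transmissions. (The exact count depends on the timing convention assumed, so
treat this trace as one reasonable reading of the problem.)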
TCP Header Format
Source port: It defines the port of the application that is sending the data. This
field contains the source port address, which is 16 bits.
Destination port: It defines the port of the application on the receiving side. So, this
field contains the destination port address, which is 16 bits.
Sequence number: This field contains the sequence number of data bytes in a
particular session.
Acknowledgment number: When the ACK flag is set, then this contains the next
sequence number of the data byte and works as an acknowledgment for the previous
data received. For example, if the receiver receives the segment number 'x', then it
responds 'x+1' as an acknowledgment number.
The 16-bit receive window field is used for flow control.
Header length: This 4-bit field specifies the length of the TCP header in 32-bit words.
Unused: It is a 4-bit field reserved for future use; by default, all bits are set to zero.
Flags
There are six control bits or flags:
o URG: It represents an urgent pointer. If it is set, then the data is processed
urgently.
o ACK: If the ACK is set to 0, then it means that the data packet does not contain
an acknowledgment.
o PSH: If this field is set, then it requests the receiving device to push the data
to the receiving application without buffering it.
o RST: If it is set, then it requests to restart a connection.
o SYN: It is used to establish a connection between the hosts.
o FIN: It is used to release a connection, and no further data exchange will
happen.
Window size
It is a 16-bit field. It contains the size of data that the receiver can accept. This field
is used for the flow control between the sender and receiver and also determines the
amount of buffer allocated by the receiver for a segment. The value of this field is
determined by the receiver.
Checksum
It is a 16-bit field. The checksum field is optional in UDP, but in the case of TCP, it
is mandatory.
Urgent pointer
It is a pointer that points to the urgent data byte if the URG flag is set to 1. It defines
a value that will be added to the sequence number to get the sequence number of the
last urgent byte.
Options
It provides additional options. The optional field is represented in 32-bit units. If this field
contains less than 32 bits of data, padding is used to fill the remaining
bits.
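To make the field layout concrete, here is a Python sketch that unpacks the fixed 20-byte TCP
header described above; the sample segment at the bottom is fabricated for illustration:

import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header into its fields."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src_port,                    # 16-bit source port
        "dst_port": dst_port,                    # 16-bit destination port
        "seq": seq,                              # 32-bit sequence number
        "ack": ack,                              # 32-bit acknowledgment number
        "header_len": (offset_flags >> 12) * 4,  # 4-bit length in 32-bit words -> bytes
        "flags": offset_flags & 0x3F,            # URG, ACK, PSH, RST, SYN, FIN bits
        "window": window,                        # 16-bit receive window
        "checksum": checksum,                    # 16-bit checksum
        "urgent": urgent,                        # 16-bit urgent pointer
    }

# A fabricated SYN segment: ports 51000 -> 80, seq 42, header length 5 words (20 bytes).
sample = struct.pack("!HHIIHHHH", 51000, 80, 42, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(sample))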
Suppose Host A initiates a Telnet session with Host B. Because Host A initiates the
session, it is labeled the client, and Host B is labeled the server.
The starting sequence numbers are 42 and 79 for the client and server, respectively.
Recall that the sequence number of a segment is the sequence number of the first byte
in the data field. Thus, the first segment sent from the client will have sequence number
42; the first segment sent from the server will have sequence number 79. Recall that the
acknowledgment number is the sequence number of the next byte of data that the host
is waiting for.
After the TCP connection is established but before any data is sent, the client is waiting
for byte 79 and the server is waiting for byte 42. As shown in Figure 3.31, three
segments are sent. The first segment is sent from the client to the server, containing the
1-byte ASCII representation of the letter ‘C’ in its data field.
This first segment also has 42 in its sequence number field, as we just described. Also,
because the client has not yet received any data from the server, this first segment will
have 79 in its acknowledgment number field.
The second segment is sent from the server to the client. It serves a dual purpose. First
it provides an acknowledgment of the data the server has received. By putting 43 in the
acknowledgment field, the server is telling the client that it has successfully received
everything up through byte 42 and is now waiting for bytes 43 onward.
The third segment is sent from the client to the server. Its sole purpose is to
acknowledge the data it has received from the server
When a browser requests a Web page, the client sends a small TCP segment to the server,
the server acknowledges and responds with a small TCP segment, and, finally, the client
acknowledges back to the server. The first two parts of the three-way handshake take one RTT. After completing
the first two parts of the handshake, the client sends the HTTP request message
combined with the third part of the three-way handshake (the acknowledgment) into
the TCP connection. Once the request message arrives at the server, the server sends
the HTML file into the TCP connection. This HTTP request/response eats up another
RTT. Thus, roughly, the total response time is two RTTs plus the transmission time at
the server of the HTML file.
The sample RTT, denoted SampleRTT, for a segment is the amount of time between
when the segment is sent and when an acknowledgment for the segment is received.
Instead of measuring a SampleRTT for every transmitted segment, most TCP
implementations take only one SampleRTT measurement at a time. That is, at any point
in time, the SampleRTT is being estimated for only one of the transmitted but currently
unacknowledged segments, leading to a new value of SampleRTT approximately once
every RTT.
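Although the notes stop at SampleRTT, the standard next step (as in the textbook and RFC 6298)
smooths these samples into a timeout value; the recommended constants are α = 0.125 and
β = 0.25:
EstimatedRTT = (1 – α) · EstimatedRTT + α · SampleRTT
DevRTT = (1 – β) · DevRTT + β · |SampleRTT – EstimatedRTT|
TimeoutInterval = EstimatedRTT + 4 · DevRTT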
TCP Retransmission
TCP retransmission means resending packets over the network that have been
either lost or damaged. Retransmission is a mechanism used by protocols such
as TCP to provide reliable communication: the protocol guarantees a packet's
delivery even if the data packet has been lost or
damaged.
Consider a scenario in which the packet is received on the other side, but the acknowledgment
is lost, i.e., the ACK is not received by the sender. Once the timeout period expires,
the packet is resent, so two copies of the packet arrive on the other side. Although the
packet was received correctly, the acknowledgment was not, so the sender
retransmits the packet. In this case, the retransmission could have been avoided, but
due to the loss of the ACK, the packet is retransmitted.
Fast Retransmit
One of the problems with timeout-triggered retransmissions is that the timeout period
can be relatively long. When a segment is lost, this long timeout period forces the sender
to delay resending the lost packet, thereby increasing the end-to-end delay. Fortunately,
the sender can often detect packet loss well before the timeout event occurs by noting
so-called duplicate ACKs. A duplicate ACK is an ACK that reacknowledges a segment
for which the sender has already received an earlier acknowledgment.
If the TCP sender receives three duplicate ACKs for the same data, it takes this as an
indication that the segment following the segment that has been ACKed three times has
been lost. (In the homework problems, we consider the question of why the sender waits
for three duplicate ACKs, rather than just a single duplicate ACK.) In the case that three
duplicate ACKs are received, the TCP sender performs a fast retransmit [RFC 5681],
retransmitting the missing segment before that segment’s timer expires. This is shown
in Figure 3.37, where the second segment is lost, then retransmitted before its timer
expires.
Flow Control
TCP provides a flow-control service to its applications to eliminate the possibility of
the sender overflowing the receiver’s buffer. Flow control is thus a speed matching
service—matching the rate at which the sender is sending against the rate at which the
receiving application is reading. TCP provides flow control by having the sender
maintain a variable called the receive window.
Informally, the receive window is used to give the sender an idea of how much free
buffer space is available at the receiver. Because TCP is full-duplex, the sender at each
side of the connection maintains a distinct receive window. Let’s investigate the receive
window in the context of a file transfer.
Suppose that Host A is sending a large file to Host B over a TCP connection. Host B
allocates a receive buffer to this connection; denote its size by RcvBuffer. From time
to time, the application process in Host B reads from the buffer.
• LastByteRead: the number of the last byte in the data stream read from the buffer by
the application process in B.
• LastByteRcvd: the number of the last byte in the data stream that has arrived from the
network and has been placed in the receive buffer at B.
Because TCP is not permitted to overflow the allocated buffer, we must have:
LastByteRcvd – LastByteRead ≤ RcvBuffer
The receive window, denoted rwnd, is set to the amount of spare room in the buffer:
rwnd = RcvBuffer – [LastByteRcvd – LastByteRead]
Because the spare room changes with time, rwnd is dynamic. The variable rwnd is
illustrated in Figure 3.38.
How does the connection use the variable rwnd to provide the flow-control service?
Host B tells Host A how much spare room it has in the connection buffer by placing its
current value of rwnd in the receive window field of every segment it sends to A.
Initially, Host B sets rwnd = RcvBuffer. Note that to pull this off, Host B must keep
track of several connection-specific variables.
Host A in turn keeps track of two variables, LastByteSent and LastByteAcked, which
have obvious meanings. Note that the difference between these two variables,
LastByteSent – LastByteAcked, is the amount of unacknowledged data that A has sent
into the connection. By keeping the amount of unacknowledged data less than the value
of rwnd, Host A is assured that it is not overflowing the receive buffer at Host B. Thus,
Host A makes sure throughout the connection’s life that
LastByteSent – LastByteAcked ≤ rwnd
Congestion refers to a network state where the message traffic becomes so heavy that
it slows down the network response time.
Congestion leads to the loss of packets in transit.
So, it is necessary to control the congestion in network.
It is not possible to completely avoid the congestion.
Congestion Control
Congestion control refers to techniques and mechanisms that can either prevent
congestion before it happens or remove congestion after it has happened.
The size of the sender window is determined by the following two factors-
1. Receiver window size
2. Congestion window size
Receiver Window Size
Sender should not send data greater than receiver window size.
Congestion Window
Sender should not send data greater than congestion window size.
Otherwise, it leads to dropping the TCP segments which causes TCP Retransmission.
So, sender should always send data less than or equal to congestion window size.
Different variants of TCP use different approaches to calculate the size of congestion
window.
Congestion window is known only to the sender and is not sent over the links.
So, always-
Sender window size = Minimum (Receiver window size, Congestion window size)
TCP’s general policy for handling congestion consists of three phases: slow start,
congestion avoidance, and congestion detection.
Slow Start Phase
Initially, the sender sets congestion window size = Maximum Segment Size (1 MSS).
After receiving each acknowledgment, the sender increases the congestion window size
by 1 MSS.
In this phase, the size of the congestion window increases exponentially (it doubles
every RTT).
This phase continues until the congestion window size reaches the slow start threshold:
Threshold
= Maximum number of TCP segments that the receiver window can accommodate / 2
= (Receiver window size / Maximum Segment Size) / 2
Congestion Avoidance Phase
After the threshold is reached, the congestion window grows linearly: it increases by
1 MSS each time a whole window of segments is acknowledged, i.e., by 1 MSS per RTT.
This phase continues until the congestion window size becomes equal to the
receiver window size.
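This growth policy can be sketched in a few lines of Python; the cap at the receiver window
and the per-transmission granularity are simplifying assumptions that match how the
numericals below are solved (the parameters mirror the first problem):

def window_growth(rwnd_mss: int, start: int = 1):
    """Yield the congestion window (in MSS) at the start of each transmission."""
    threshold = rwnd_mss // 2                 # slow start threshold, as defined above
    cwnd = start
    while True:
        yield cwnd
        if cwnd < threshold:
            cwnd = min(cwnd * 2, threshold)   # slow start: double, capped at threshold
        else:
            cwnd = min(cwnd + 1, rwnd_mss)    # congestion avoidance: +1 MSS per RTT

gen = window_growth(rwnd_mss=12)              # receiver window of 12 MSS (first problem)
for rtt, cwnd in zip(range(1, 11), gen):
    print(f"transmission {rtt}: {cwnd} MSS")
# Prints 1, 2, 4, 6, 7, 8, 9, 10, 11, 12 -> full window at the 10th transmission.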
Numericals:
Consider the effect of using slow start on a line with a 10 msec RTT and no
congestion. The receiver window is 24 KB and the maximum segment size is 2 KB.
How long does it take before the first full window can be sent?
Solution-
Given-
Receiver window size = 24 KB
Maximum segment size (MSS) = 2 KB
RTT = 10 msec
Receiver window size in MSS
= 24 KB / 2 KB
= 12 MSS
Slow start threshold
= 12 MSS / 2
= 6 MSS
The congestion window at the start of each transmission (each RTT) grows as follows:
1st transmission: 1 MSS
2nd transmission: 2 MSS
3rd transmission: 4 MSS
4th transmission: 6 MSS (doubling would exceed the threshold, so the window is capped at
the threshold)
Since the threshold is reached, this marks the end of the slow start phase.
From here, the window grows linearly by 1 MSS per transmission:
5th: 7 MSS, 6th: 8 MSS, 7th: 9 MSS, 8th: 10 MSS, 9th: 11 MSS, 10th: 12 MSS
Window size at the end of the 9th transmission, or at the start of the 10th transmission, is 12
MSS, the full receiver window.
Thus, 9 RTT’s will be taken before the first full window can be sent.
So, time taken
= 9 RTT’s
= 9 x 10 msec
= 90 msec
Problem-02:
Consider an instance of TCP’s Additive Increase Multiplicative Decrease (AIMD) algorithm,
where the window size at the start of the slow start phase is 2 MSS and the threshold at the
start of the first transmission is 8 MSS. Assume that a timeout occurs during the fifth
transmission. Find the congestion window size at the end of the tenth transmission.
1. 8 MSS
2. 14 MSS
3. 7 MSS
4. 12 MSS
Solution-
Given-
Window size at the start of slow start = 2 MSS
Threshold at the start of the first transmission = 8 MSS
Timeout occurs during the 5th transmission
The congestion window at the start of each transmission grows as follows:
1st transmission: 2 MSS
2nd transmission: 4 MSS
3rd transmission: 8 MSS
Since the threshold is reached, this marks the end of the slow start phase. The window now
grows linearly:
4th transmission: 9 MSS
5th transmission: 10 MSS (timeout occurs during this transmission)
On timeout, TCP reacts by:
Setting the slow start threshold to half of the current congestion window size:
threshold = 10 / 2 = 5 MSS.
Decreasing the congestion window size to 2 MSS (Given value is used).
Resuming the slow start phase.
So now,
6th transmission: 2 MSS
7th transmission: 4 MSS
8th transmission: 5 MSS (doubling would exceed the threshold, so the window is capped at
the threshold)
Since the threshold is reached, this marks the end of the slow start phase.
From here, the window grows linearly:
9th transmission: 6 MSS
10th transmission: 7 MSS
At the end of the 10th transmission, the window increases once more:
= 7 + 1
= 8 MSS
Thus, the congestion window size at the end of the tenth transmission is 8 MSS (Option 1).