
Module - 4

Transport Layer
o The transport layer is the fourth layer from the top of the OSI model.
o The main role of the transport layer is to provide the communication
services directly to the application processes running on different hosts.
o The transport layer provides a logical communication between application
processes running on different hosts. Although the application processes
on different hosts are not physically connected, application processes use
the logical communication provided by the transport layer to send the
messages to each other.
o The transport layer protocols are implemented in the end systems but not
in the network routers.
o A computer network provides more than one protocol to the network
  applications. For example, TCP and UDP are two transport layer protocols
  that provide different sets of services to the application layer.
o All transport layer protocols provide a multiplexing/demultiplexing service.
  Some also provide other services such as reliable data transfer, bandwidth
  guarantees, and delay guarantees.
o Each application in the application layer can send a message using either
  TCP or UDP. Both protocols then communicate with the Internet Protocol in
  the internet layer. Because applications can both read from and write to the
  transport layer, communication is a two-way process.

Services provided by the Transport Layer

The services provided by the transport layer are similar to those of the data link
layer. The data link layer provides the services within a single network while
the transport layer provides the services across an internetwork made up of
many networks. The data link layer controls the physical layer while the
transport layer controls all the lower layers.

The services provided by the transport layer protocols can be divided into
five categories:

o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing
End-to-end delivery:

The transport layer transmits the entire message to the destination. Therefore, it
ensures the end-to-end delivery of an entire message from a source to the
destination.

Reliable delivery:

The transport layer provides reliability services by retransmitting the lost and
damaged packets.

The reliable delivery has four aspects:

o Error control
o Sequence control
o Loss control
o Duplication control

Error Control

o The primary role of reliability is error control. In reality, no transmission is
  ever 100 percent error-free. Therefore, transport layer protocols are designed
  to provide error-free transmission.
o The data link layer also provides an error handling mechanism, but it ensures
  only node-to-node error-free delivery. Node-to-node reliability does not
  ensure end-to-end reliability.
o The data link layer checks for errors on each individual link. If an error is
  introduced inside one of the routers, it will not be caught by the data link
  layer, which only detects errors introduced between the beginning and end of
  a link. Therefore, the transport layer checks for errors end-to-end to ensure
  that the packet has arrived correctly.
Sequence Control

o The second aspect of reliability is sequence control, which is implemented at
  the transport layer.
o On the sending end, the transport layer is responsible for ensuring that the
  data received from the upper layers is divided into segments that the lower
  layers can use. On the receiving end, it ensures that the various segments of
  a transmission can be correctly reassembled.

Loss Control

Loss control is the third aspect of reliability. The transport layer ensures that all
the fragments of a transmission arrive at the destination, not just some of them.
On the sending end, all the fragments of a transmission are given sequence
numbers by the transport layer. These sequence numbers allow the receiver's
transport layer to identify any missing segment.

Duplication Control

Duplication control is the fourth aspect of reliability. The transport layer
guarantees that no duplicate data arrive at the destination. Just as sequence
numbers allow the receiver to identify lost packets, they also allow it to identify
and discard duplicate segments.

Flow Control

Flow control is used to prevent the sender from overwhelming the receiver. If the
receiver is overloaded with too much data, it discards packets and asks for their
retransmission, which increases network congestion and reduces system
performance. The transport layer is responsible for flow control. It uses the
sliding window protocol, which makes data transmission more efficient and
controls the flow of data so that the receiver does not become overwhelmed; a
minimal sketch is given below. The sliding window protocol used at the transport
layer is byte-oriented rather than frame-oriented.
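
The idea of a receiver-limited sending window can be illustrated with a small
simulation. The following is a rough Python sketch of the concept, not TCP's
actual implementation; the window size, segment size, and the instant
cumulative ACK are simplifying assumptions made only for illustration.

# Minimal sliding-window flow control sketch (illustrative, not real TCP).
def sliding_window_send(data: bytes, window: int, segment_size: int):
    base = 0            # first unacknowledged byte
    next_byte = 0       # next byte to send
    while base < len(data):
        # Send while the amount of unacknowledged data fits in the window.
        while next_byte < len(data) and next_byte - base < window:
            segment = data[next_byte:next_byte + segment_size]
            print(f"send bytes {next_byte}-{next_byte + len(segment) - 1}")
            next_byte += len(segment)
        # Simulate a cumulative ACK from the receiver; the window slides forward.
        base = next_byte
        print(f"ACK received up to byte {base}, window slides")

sliding_window_send(b"x" * 10000, window=4000, segment_size=1460)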

Multiplexing

The transport layer uses multiplexing to improve transmission efficiency.

Multiplexing can occur in two ways:

o Upward multiplexing: Upward multiplexing means that multiple transport
  layer connections use the same network connection. To make transmission
  more cost-effective, the transport layer sends several transmissions bound
  for the same destination along the same path; this is achieved through
  upward multiplexing.
o Downward multiplexing: Downward multiplexing means that one transport
  layer connection uses multiple network connections. Downward multiplexing
  allows the transport layer to split a connection among several paths to
  improve throughput. This type of multiplexing is used when networks have a
  low or slow capacity.

Transport Layer protocols

o The transport layer is represented mainly by two protocols: TCP and UDP.
o The IP protocol in the network layer delivers a datagram from a source host
  to the destination host.
o Modern operating systems support multiuser and multiprocessing
  environments; an executing program is called a process. When a host sends
  a message to another host, a source process is actually sending a message
  to a destination process. The transport layer protocols define connections to
  individual ports known as protocol ports.
o IP is a host-to-host protocol used to deliver a packet from the source host to
  the destination host, while transport layer protocols are port-to-port
  protocols that work on top of IP to deliver the packet from the originating
  port to the destination port.
o Each port is identified by a positive integer address of 16 bits.
UDP

o UDP stands for User Datagram Protocol.
o UDP is a simple protocol that provides nonsequenced transport functionality.
o UDP is a connectionless protocol; no connection is set up before data is sent
  (a minimal socket sketch is given below).
o This type of protocol is used when reliability and security are less important
  than speed and size.
o UDP is an end-to-end transport level protocol that adds transport-level
  addresses, checksum error control, and length information to the data from
  the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.
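
As a concrete illustration of UDP's connectionless operation, the short Python
sketch below sends a datagram and waits for one reply without any handshake.
The loopback address 127.0.0.1 and port 9999 are arbitrary choices for the
example; the receiver must be started in a separate process before the sender.

import socket

# Receiver: bind to a port and wait for one datagram (no connection setup).
def udp_receiver(port: int = 9999):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("127.0.0.1", port))
        data, addr = sock.recvfrom(2048)     # each recvfrom returns one datagram
        print(f"received {data!r} from {addr}")

# Sender: no connect/handshake; each sendto is an independent user datagram.
def udp_sender(port: int = 9999):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(b"hello over UDP", ("127.0.0.1", port))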

User Datagram Format

The user datagram has an 8-byte header made up of four 16-bit fields:

o Source port address: It defines the address of the application process that
  has delivered the message. It is a 16-bit field.
o Destination port address: It defines the address of the application process
  that will receive the message. It is a 16-bit field.
o Total length: It defines the total length of the user datagram (header plus
  data) in bytes. It is a 16-bit field.
o Checksum: The checksum is a 16-bit field used in error detection.
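
Since all four header fields are 16-bit quantities in network byte order, the
header can be built and parsed with Python's struct module. The sketch below is
illustrative only; it leaves the checksum at zero rather than computing the real
pseudo-header checksum, and the port numbers are arbitrary examples.

import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    # Total length = 8-byte header + data; checksum left as 0 for brevity.
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, 0) + payload

def parse_udp_header(datagram: bytes) -> dict:
    src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src_port": src, "dst_port": dst, "length": length,
            "checksum": checksum, "data": datagram[8:length]}

dgram = build_udp_header(5000, 53, b"hello")
print(parse_udp_header(dgram))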

Disadvantages of UDP protocol


o UDP provides only the basic functions needed for end-to-end delivery of a
  transmission.
o It does not provide any sequencing or reordering functions and cannot
  specify the damaged packet when reporting an error.
o UDP can discover that an error has occurred, but it does not specify which
  packet has been lost, as it does not contain an ID or sequence number for a
  particular data segment.

TCP

o TCP stands for Transmission Control Protocol.
o It provides full transport layer services to applications.
o It is a connection-oriented protocol, which means a connection is established
  between both ends of the transmission before data is exchanged. For
  creating the connection, TCP generates a virtual circuit between sender and
  receiver for the duration of a transmission.

Features Of TCP protocol

o Stream data transfer: The TCP protocol transfers data as a contiguous
  stream of bytes. TCP groups the bytes into TCP segments and then passes
  them to the IP layer for transmission to the destination. TCP itself segments
  the data and forwards it to IP.
o Reliability: TCP assigns a sequence number to each byte transmitted and
  expects a positive acknowledgement from the receiving TCP. If an ACK is not
  received within a timeout interval, the data is retransmitted to the
  destination. The receiving TCP uses the sequence numbers to reassemble
  segments that arrive out of order and to eliminate duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the
  sender indicating the number of bytes it can receive without overflowing its
  internal buffer. This number is sent in the ACK as the highest sequence
  number that it can receive without any problem. This mechanism is also
  referred to as a window mechanism.
o Multiplexing: Multiplexing is the process of accepting data from different
  applications and forwarding it to applications on different computers. At the
  receiving end, the data is forwarded to the correct application; this process
  is known as demultiplexing. TCP delivers each packet to the correct
  application by using logical channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and
  window sizes is called a logical connection. Each connection is identified by
  the pair of sockets used by the sending and receiving processes.
o Full Duplex: TCP provides a full duplex service, i.e., data flows in both
  directions at the same time. To achieve this, each TCP endpoint has sending
  and receiving buffers so that segments can flow in both directions. TCP is a
  connection-oriented protocol. Suppose process A wants to send and receive
  data from process B. The following steps occur (a minimal socket-level
  sketch follows this list):
o Establish a connection between the two TCPs.
o Data is exchanged in both directions.
o The connection is terminated.
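
The three phases above map directly onto the socket API. Below is a minimal
Python sketch of a TCP client; the host address and port are placeholders chosen
only for the example, and a server must already be listening there.

import socket

# Minimal TCP client sketch: establish, exchange, terminate.
def tcp_client(host: str = "127.0.0.1", port: int = 8000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((host, port))          # connection establishment (handshake)
        sock.sendall(b"request data")       # data flows client -> server ...
        reply = sock.recv(4096)             # ... and server -> client (full duplex)
        print(f"received {len(reply)} bytes")
    # leaving the with-block closes the socket (connection termination)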

TCP Segment Format

The TCP segment header contains the following fields:

o Source port address: It defines the address of the application program in
  the source computer. It is a 16-bit field.
o Destination port address: It defines the address of the application program
  in the destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP
  segments. This 32-bit field gives the position of the segment's data in the
  original data stream.
o Acknowledgement number: This 32-bit field acknowledges data received
  from the other communicating device. If the ACK control bit is set to 1, it
  specifies the sequence number that the receiver is expecting to receive next.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit
  words. The minimum size of the header is 5 words and the maximum is 15
  words. Therefore, the minimum size of the TCP header is 20 bytes and the
  maximum size is 60 bytes.
o Reserved: It is a six-bit field reserved for future use.
o Control bits: Each bit of the control field functions individually and
  independently. A control bit defines the use of a segment or serves as a
  validity check for other fields.
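
The fixed 20-byte part of this header can be unpacked with Python's struct
module, as sketched below. The sketch ignores TCP options and the checksum
and urgent-pointer handling; it is only meant to show where the fields described
above sit in the byte layout.

import struct

def parse_tcp_header(segment: bytes) -> dict:
    # Fixed 20-byte header: ports (16 bits each), seq and ack (32 bits each),
    # HLEN/reserved/control bits (16 bits), window, checksum, urgent pointer.
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4      # HLEN is counted in 32-bit words
    flags = offset_flags & 0x3F                # the six control bits
    return {"src_port": src_port, "dst_port": dst_port,
            "seq": seq, "ack": ack,
            "header_len_bytes": header_len,
            "control_bits": f"{flags:06b}", "window": window}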

Stream Control Transmission Protocol (SCTP)


 Stream Control Transmission Protocol (SCTP) is a reliable, message-
oriented transport layer protocol.
 SCTP has mixed features of TCP and UDP.
 SCTP maintains message boundaries and detects lost data, duplicate data,
and out-of-order data.
 SCTP provides congestion control as well as flow control.
 SCTP is especially designed for internet applications.

SCTP Services

Some important services provided by SCTP are as stated below:

1. Process-to-Process Communication

SCTP provides process-to-process communication and uses the well-known ports
of TCP.

2. Multi-Stream Facility

SCTP provides a multi-stream service within each connection, which is called an
association. If one stream gets blocked, the other streams can still deliver their
data; a small model of this idea is sketched below.
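
A rough way to picture multi-streaming: within one association, each stream has
its own in-order delivery queue, so a gap (a lost message) in one stream does not
block delivery on the others. The Python sketch below only models that idea; it is
not an SCTP implementation (kernel SCTP support is platform-dependent), and
the stream and message numbers are made-up examples.

from collections import defaultdict

# Conceptual model of per-stream in-order delivery inside one association.
class Association:
    def __init__(self):
        self.pending = defaultdict(dict)        # stream id -> {sequence: message}
        self.next_to_deliver = defaultdict(int)

    def receive(self, stream_id: int, seq: int, message: str):
        self.pending[stream_id][seq] = message
        delivered = []
        # Deliver in order per stream; a gap blocks only that one stream.
        while self.next_to_deliver[stream_id] in self.pending[stream_id]:
            delivered.append(self.pending[stream_id].pop(self.next_to_deliver[stream_id]))
            self.next_to_deliver[stream_id] += 1
        return delivered

assoc = Association()
print(assoc.receive(0, 1, "s0-m1"))   # [] : stream 0 still waits for message 0
print(assoc.receive(1, 0, "s1-m0"))   # ['s1-m0'] : stream 1 delivers immediately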

3. Full-Duplex Communication

SCTP provides a full-duplex service: data can flow in both directions at the same
time.

4. Connection-Oriented Service

SCTP is a connection-oriented protocol, just like TCP, with the only difference
that the connection is called an association in SCTP. If User1 wants to send and
receive messages from User2, the steps are:

Step 1: The two SCTPs establish an association with each other.
Step 2: Once the association is established, data is exchanged in both directions.
Step 3: Finally, the association is terminated.

5. Reliability
SCTP uses an acknowledgement mechanism to check the arrival of data.

CONGESTION CONTROL

Congestion control refers to techniques and mechanisms that can either prevent
congestion before it happens or remove congestion after it has happened. In
general, we can divide congestion control mechanisms into two broad categories:
open-loop congestion control (prevention) and closed-loop congestion control
(removal).

Open-Loop Congestion Control

In open-loop congestion control, policies are applied to prevent congestion
before it happens. In these mechanisms, congestion control is handled by either
the source or the destination. We give a brief list of policies that can prevent
congestion.

1. Retransmission Policy

Retransmission is sometimes unavoidable. If the sender feels that a sent packet
is lost or corrupted, the packet needs to be retransmitted. Retransmission in
general may increase congestion in the network. However, a good retransmission
policy can prevent congestion. The retransmission policy and the retransmission
timers must be designed to optimize efficiency and at the same time prevent
congestion. For example, the retransmission policy used by TCP is designed to
prevent or alleviate congestion.

2. Window Policy

The type of window at the sender may also affect congestion. The Selective
Repeat window is better than the Go-Back-N window for congestion
control. In the Go-Back-N window, when the timer for a packet times out,
several packets may be resent, although some may have arrived safe and
sound at the receiver. This duplication may make the congestion worse. The
Selective Repeat window, on the other hand, tries to send the specific
packets that have been lost or corrupted.
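
The difference in retransmission load can be sketched as follows. This is a
simplified model for illustration only, not a full protocol implementation: on a
timeout, Go-Back-N resends everything from the oldest unacknowledged packet,
while Selective Repeat resends only the packets that were actually lost.

# Simplified comparison of retransmission load on a timeout.
def go_back_n_resend(base: int, next_seq: int):
    # Resend every outstanding packet from base up to next_seq - 1.
    return list(range(base, next_seq))

def selective_repeat_resend(outstanding: list, acked: set):
    # Resend only packets that were never acknowledged.
    return [p for p in outstanding if p not in acked]

outstanding = [4, 5, 6, 7]
acked = {5, 6, 7}                                     # only packet 4 was lost
print(go_back_n_resend(4, 8))                         # [4, 5, 6, 7] -> duplicates
print(selective_repeat_resend(outstanding, acked))    # [4]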

3. Acknowledgment Policy

The acknowledgment policy imposed by the receiver may also affect congestion.
If the receiver does not acknowledge every packet it receives, it may slow down
the sender and help prevent congestion. Several approaches are used in this
case. A receiver may send an acknowledgment only if it has a packet to be sent
or a special timer expires. A receiver may decide to acknowledge only N packets
at a time. We need to know that the acknowledgments are also part of the load
in a network. Sending fewer acknowledgments means imposing less load on the
network.
4. Discarding Policy

A good discarding policy by the routers may prevent congestion and at the
same time may not harm the integrity of the transmission. For example, in
audio transmission, if the policy is to discard less sensitive packets when
congestion is likely to happen, the quality of sound is still preserved and
congestion is prevented or alleviated.

5. Admission Policy

An admission policy, which is a quality-of-service mechanism, can also prevent
congestion in virtual-circuit networks. Switches in a flow first check the resource
requirement of a flow before admitting it to the network. A router can deny
establishing a virtual-circuit connection if there is congestion in the network or if
there is a possibility of future congestion.

Closed-Loop Congestion Control

Closed-loop congestion control mechanisms try to alleviate congestion after it
happens. Several mechanisms have been used by different protocols. We
describe a few of them here.

1. Backpressure

The technique of backpressure refers to a congestion control mechanism in
which a congested node stops receiving data from the immediate upstream node
or nodes. This may cause the upstream node or nodes to become congested, and
they, in turn, reject data from their own upstream nodes, and so on.
Backpressure is a node-to-node congestion control that starts with a node and
propagates, in the opposite direction of data flow, to the source. The
backpressure technique can be applied only to virtual-circuit networks, in which
each node knows the upstream node from which a flow of data is coming.

Backpressure method for alleviating congestion:


Suppose node III has more input data than it can handle. It drops some packets
in its input buffer and informs node II to slow down. Node II, in turn, may become
congested because it is slowing down the output flow of data. If node II is
congested, it informs node I to slow down, which in turn may create congestion.
If so, node I informs the source of data to slow down. This, in time, alleviates the
congestion. Note that the pressure on node III is moved backward to the source
to remove the congestion.

None of the virtual-circuit networks discussed here use backpressure. It was,
however, implemented in the first virtual-circuit network, X.25. The technique
cannot be implemented in a datagram network because in this type of network a
node (router) has no knowledge of the upstream router.

2. Choke Packet

A choke packet is a packet sent by a node to the source to inform it of
congestion.

Note the difference between the backpressure and choke packet methods. In
backpressure, the warning is from one node to its immediate upstream node,
although the warning may eventually reach the source station. In the choke
packet method, the warning is sent from the router that has encountered
congestion directly to the source station; the intermediate nodes through which
the packet has travelled are not warned. We have seen an example of this type
of control in ICMP. When a router in the Internet is overwhelmed with IP
datagrams, it may discard some of them, but it informs the source host using a
source quench ICMP message. The warning message goes directly to the source
station; the intermediate routers do not take any action.

Choke packet:

3. Implicit Signalling

In implicit signalling, there is no communication between the congested node or
nodes and the source. The source guesses that there is congestion somewhere in
the network from other symptoms. For example, when a source sends several
packets and there is no acknowledgment for a while, one assumption is that the
network is congested. The delay in receiving an acknowledgment is interpreted
as congestion in the network, and the source should slow down. TCP congestion
control uses this type of signalling; a simple sketch of the idea follows.
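
The "missing ACK means congestion" inference can be sketched as below. This is
a loose model of the idea only, not TCP's actual congestion algorithm; the timeout
value, rate halving, and the wait_for_ack callback are illustrative assumptions.

import time

# Illustrative sketch of implicit congestion signalling via timeouts.
def send_with_implicit_signalling(send_rate: float, wait_for_ack, timeout: float = 1.0):
    start = time.monotonic()
    acked = wait_for_ack(timeout)              # True if an ACK arrived in time
    if not acked or time.monotonic() - start > timeout:
        send_rate /= 2                         # silence is read as congestion: slow down
        print(f"timeout: assuming congestion, new rate {send_rate}")
    return send_rate

# Example: an ACK callback that never answers in time halves the rate.
print(send_with_implicit_signalling(1000.0, wait_for_ack=lambda t: False))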

4. Explicit Signalling

The node that experiences congestion can explicitly send a signal to the source
or destination. The explicit signalling method, however, is different from the
choke packet method. In the choke packet method, a separate packet is used for
this purpose; in the explicit signalling method, the signal is included in the
packets that carry data. Explicit signalling, as we will see in Frame Relay
congestion control, can occur in either the forward or the backward direction.

5. Backward Signalling

A bit can be set in a packet moving in the direction opposite to the congestion.
This bit can warn the source that there is congestion and that it needs to slow
down to avoid the discarding of packets.

6. Forward Signalling

A bit can be set in a packet moving in the direction of the congestion. This bit
can warn the destination that there is congestion. The receiver in this case can
use policies, such as slowing down the acknowledgments, to alleviate the
congestion.

Quality of Service (QoS)

In one word, Quality of Service (QoS) can be described as efficiency. We define
Quality of Service as "how well or efficiently data transmissions are taking
place".
Enterprise or commercial networks (networks which connect all the users and
systems on a local area network to the applications in the data center) provide
services in the form of applications, for example WhatsApp, through which we
can send and receive various kinds of data, such as audio, video, and text.
Such organizations use Quality of Service to meet their traffic requirements,
which prevents the degradation of quality caused by packet loss, delay, and
jitter.

Quality of Service Parameters:

QoS can be measured quantitatively by using several parameters:

 Packet loss: happens when network links become congested and routers and
switches start dropping packets. When packets are dropped during real-time
communication, such as audio or video, the session can experience jitter and
gaps in speech.
 Jitter: occurs as a result of network congestion, timing drift, and route
changes. Too much jitter can degrade the quality of audio communication.
 Latency: is the time delay taken by a packet to travel from its source to its
destination. For a good system, latency should be as low as possible, ideally
close to zero.
 Bandwidth: is the capacity of a network channel to transmit the maximum
possible data through the channel in a certain amount of time. QoS optimizes
a network by managing its bandwidth and setting priorities for those
applications which require more resources than other applications.
 Mean opinion score: is a metric for rating audio quality which uses a
five-point scale, with five indicating the highest or best quality.

A small sketch of how the first three parameters can be measured follows.
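
Given per-packet send and arrival timestamps, the basic metrics can be derived
as sketched below. The timestamps in the example are made-up sample data,
and jitter is computed here simply as the mean variation between consecutive
latencies, which is one common approximation.

# Sketch: deriving basic QoS metrics from per-packet timestamps (sample data).
def qos_metrics(sent: dict, received: dict) -> dict:
    lost = [seq for seq in sent if seq not in received]
    latencies = [received[s] - sent[s] for s in sent if s in received]
    jitter = (sum(abs(a - b) for a, b in zip(latencies[1:], latencies))
              / max(len(latencies) - 1, 1))
    return {"packet_loss_pct": 100 * len(lost) / len(sent),
            "avg_latency_ms": 1000 * sum(latencies) / len(latencies),
            "jitter_ms": 1000 * jitter}

sent = {1: 0.000, 2: 0.020, 3: 0.040, 4: 0.060}       # send times in seconds
received = {1: 0.030, 2: 0.055, 4: 0.095}             # packet 3 was lost
print(qos_metrics(sent, received))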

Implementing Quality of Service:

We can implement Quality of Service through the following three existing
models:
1. Best Effort: applying this model means that all data packets are prioritized
   equally. Since every packet has the same priority, there is no guarantee that
   all the data packets will be delivered, but the network puts in its best effort
   to deliver all of them. The best-effort model is applied when networks have
   not been configured with QoS policies or when the network infrastructure
   does not support QoS.
2. Integrated Services: or IntServ; this QoS model reserves bandwidth along
   a specific path on the network. Applications ask the network to reserve
   resources for themselves, and the network devices monitor the flow of
   packets to make sure the reserved resources can accept the packets.
   Implementing the Integrated Services model requires IntServ-capable
   routers and the Resource Reservation Protocol (RSVP). This model has
   limited scalability and high consumption of network resources.
3. Differentiated Services: in this QoS model, network elements such as
   routers and switches are configured to serve multiple categories of traffic
   with different priorities. A company can categorize its network traffic based
   on its requirements, e.g., assigning higher priority to audio traffic.
Application Layer
The application layer in the OSI model is the layer closest to the end user, which
means that the application layer and the end user can interact directly with the
software application. Application layer programs are based on clients and
servers.

The Application layer includes the following functions:


o Identifying communication partners: The application layer identifies the
availability of communication partners for an application with data to
transmit.

o Determining resource availability: The application layer determines
whether sufficient network resources are available for the requested
communication.

o Synchronizing communication: All communication between applications
requires cooperation, which is managed by the application layer.

Services of Application Layers


o Network Virtual Terminal: The application layer allows a user to log on to a
remote host. To do so, the application creates a software emulation of a
terminal at the remote host. The user's computer talks to the software
terminal, which in turn talks to the host. The remote host thinks that it is
communicating with one of its own terminals, so it allows the user to log on.

o File Transfer, Access, and Management (FTAM): The application layer allows
a user to access files on a remote computer, to retrieve files from a remote
computer, and to manage files on a remote computer. FTAM defines a
hierarchical virtual file in terms of file structure, file attributes and the kinds
of operations performed on the files and their attributes.

o Addressing: To obtain communication between client and server, there is a
need for addressing. When a client makes a request to the server, the
request contains the server's address and its own address. The server's
response to the client request contains the destination address, i.e., the
client's address. To achieve this kind of addressing, DNS is used.

o Mail Services: The application layer provides e-mail forwarding and storage.

o Directory Services: The application layer contains a distributed database
that provides access to global information about various objects and
services.

o Authentication: It authenticates the sender or receiver's message, or both.

Network Application Architecture

Application architecture is different from network architecture. The network
architecture is fixed and provides a set of services to applications. The
application architecture, on the other hand, is designed by the application
developer and defines how the application should be structured over the various
end systems.

Application architecture is of two types:

o Client-server architecture: An application program running on the local
machine that sends a request to another application program is known as a
client, and the program that serves the request is known as a server. For
example, when a web server receives a request from a client host, it sends a
response back to the client host.

Characteristics Of Client-server architecture:

o In client-server architecture, clients do not directly communicate with each
other. For example, in a web application, two browsers do not directly
communicate with each other.

o A server has a fixed, well-known address (its IP address) and is always on,
so a client can always contact the server by sending a packet to the server's
IP address.

Disadvantage Of Client-server architecture:

It is a single-server architecture, which is incapable of handling all the requests
from the clients. For example, a social networking site can become overwhelmed
when only one server exists.

o P2P (peer-to-peer) architecture: It has no dedicated server in a data
center. The peers are computers which are not owned by the service
provider; most of the peers reside in homes, offices, schools, and
universities. The peers communicate with each other without passing the
information through a dedicated server; this architecture is known as
peer-to-peer architecture. Applications based on the P2P architecture
include file sharing and internet telephony.

Features of P2P architecture


o Self-scalability: In a file-sharing system, although each peer generates a
workload by requesting files, each peer also adds service capacity by
distributing files to other peers.

o Cost-effectiveness: It is cost-effective as it does not require significant
server infrastructure and server bandwidth.

Client and Server processes


o A network application consists of a pair of processes that send messages to
each other over a network.

o In a P2P file-sharing system, a file is transferred from a process in one peer
to a process in another peer. We label one of the two processes as the client
and the other process as the server.

o With P2P file sharing, the peer which is downloading the file is known as the
client, and the peer which is uploading the file is known as the server.
However, in some applications such as P2P file sharing, a process can act as
both a client and a server, since it can both download and upload files.

DNS
An application layer protocol defines how the application processes running on
different systems pass messages to each other.

o DNS stands for Domain Name System.

o DNS is a directory service that provides a mapping between the name of a
host on the network and its numerical address.

o DNS is required for the functioning of the internet.

o Each node in the DNS tree has a domain name; a full domain name is a
sequence of labels separated by dots.
o DNS is a service that translates domain names into IP addresses. This allows
network users to use friendly names when looking for other hosts instead of
remembering IP addresses.

o For example, suppose the FTP site at EduSoft had the IP address
132.147.165.50; most people would reach this site by specifying
ftp.EduSoft.com. The domain name is therefore much easier to remember
than the IP address.

DNS is a TCP/IP protocol used on different platforms. The domain name space is
divided into three different sections: generic domains, country domains, and inverse
domain.

Generic Domains
o It defines the registered hosts according to their generic behavior.

o Each node in a tree defines the domain name, which is an index to the DNS
database.

o It uses generic labels, such as those listed below, that describe the
organization type.

Label Description

aero Airlines and aerospace companies

biz Businesses or firms

com Commercial Organizations


coop Cooperative business Organizations

edu Educational institutions

gov Government institutions

info Information service providers

int International Organizations

mil Military groups

museum Museum & other nonprofit organizations

name Personal names

net Network Support centers

org Nonprofit Organizations

pro Professional individual Organizations


Country Domain

The format of a country domain is the same as that of a generic domain, but it
uses two-character country abbreviations (e.g., us for the United States) in place
of the organizational labels.

Working of DNS
o DNS is a client/server network communication protocol. DNS clients send
requests to the server while DNS servers send responses to the clients.

o A request that contains a name to be converted into an IP address is known
as a forward DNS lookup, while a request containing an IP address to be
converted into a name is known as a reverse DNS lookup.
o DNS implements a distributed database to store the names of all the hosts
available on the internet.

o If a client such as a web browser sends a request containing a hostname, a
piece of software called a DNS resolver sends a request to the DNS server to
obtain the IP address for that hostname. If the DNS server does not contain
the IP address associated with the hostname, it forwards the request to
another DNS server. Once the IP address arrives at the resolver, the resolver
returns it to the client, which completes the request over the Internet
Protocol. Both directions of lookup are sketched below.
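
Both lookup directions are available through Python's standard socket library, as
sketched below. The hostname is only an example, and the resolver used is
whatever the local system is configured with; the reverse lookup may fail if no
reverse mapping exists for the address.

import socket

# Forward DNS lookup: hostname -> IP address (example hostname).
ip = socket.gethostbyname("www.example.com")
print("forward lookup:", ip)

# Reverse DNS lookup: IP address -> hostname (raises if no mapping exists).
try:
    name, aliases, addresses = socket.gethostbyaddr(ip)
    print("reverse lookup:", name)
except socket.herror:
    print("no reverse mapping for", ip)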

FTP
o FTP stands for File Transfer Protocol.

o FTP is a standard internet protocol provided by TCP/IP and used for
transmitting files from one host to another.

o It is mainly used for transferring web page files from their creator to the
computer that acts as a server for other computers on the internet.

o It is also used for downloading files to a computer from other servers.

Objectives of FTP
o It provides the sharing of files.

o It is used to encourage the use of remote computers.

o It transfers the data more reliably and efficiently.

Why FTP?

Although transferring files from one system to another seems simple and
straightforward, it can sometimes cause problems. For example, two systems
may have different file conventions, different ways to represent text and data, or
different directory structures. The FTP protocol overcomes these problems by
establishing two connections between the hosts: one connection is used for data
transfer, and the other is used as the control connection. A minimal sketch using
Python's standard ftplib module follows.
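
The two-connection model is hidden behind the standard ftplib module. The
sketch below downloads one file; the server name, credentials, and filename are
placeholders chosen only for illustration.

from ftplib import FTP

# Minimal FTP download sketch (server, credentials, filename are placeholders).
with FTP("ftp.example.com") as ftp:        # control connection on port 21
    ftp.login("anonymous", "guest@example.com")
    print(ftp.nlst())                      # list files in the current directory
    with open("notes.pdf", "wb") as out:
        # RETR causes a separate data connection to carry the file contents.
        ftp.retrbinary("RETR notes.pdf", out.write)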

Telnet
o The main task of the internet is to provide services to users. For example, a
user may want to run different application programs at a remote site and
transfer the results to the local site. This requires a client-server program
such as FTP or SMTP. However, it is not practical to write a specific
client-server program for each such demand.

o The better solution is to provide a general client-server program that lets
the user access any application program on a remote computer; in other
words, a program that allows a user to log on to a remote computer. A
popular client-server program, Telnet, is used to meet such demands. Telnet
is an abbreviation for Terminal Network.

o Telnet provides a connection to the remote computer in such a way that a
local terminal appears to be at the remote side.

There are two types of login:


Local Login

o When a user logs into a local computer, it is known as local login.

o When the workstation runs a terminal emulator, the keystrokes entered by
the user are accepted by the terminal driver. The terminal driver then passes
these characters to the operating system, which in turn invokes the desired
application program.

o However, the operating system assigns special meanings to certain
characters. For example, in UNIX some character combinations have special
meanings, such as Ctrl+Z for suspend. Such situations do not create any
problem because the terminal driver knows the meaning of these characters.
But they can cause problems in remote login.

Remote login

o When the user wants to access an application program on a remote
computer, the user must perform a remote login.

SMTP
o SMTP stands for Simple Mail Transfer Protocol.

o SMTP is a set of communication guidelines that allow software to transmit
electronic mail over the internet.

o It is a program used for sending messages to other computer users based
on e-mail addresses.

o It provides mail exchange between users on the same or different
computers, and it also supports the following:
o It can send a single message to one or more recipients.
o The message being sent can include text, voice, video, or graphics.
o It can also send messages to networks outside the internet.

o The main purpose of SMTP is to set up communication rules between
servers. The servers have a way of identifying themselves and announcing
what kind of communication they are trying to perform. They also have a
way of handling errors such as an incorrect email address. For example, if
the recipient address is wrong, the receiving server replies with an error
message of some kind.

Working of SMTP

1. Composition of Mail: A user sends an e-mail by composing an electronic
mail message using a Mail User Agent (MUA). A Mail User Agent is a program
used to send and receive mail. The message contains two parts: body and
header. The body is the main part of the message, while the header includes
information such as the sender and recipient addresses. The header also
includes descriptive information such as the subject of the message. In this
case, the message body is like a letter and the header is like an envelope
that contains the recipient's address.

2. Submission of Mail: After composing an email, the mail client submits the
completed e-mail to the SMTP server by using SMTP on TCP port 25 (a
minimal sketch of composition and submission is given after this list).

3. Delivery of Mail: E-mail addresses contain two parts: the username of the
recipient and the domain name. For example, in vivek@gmail.com, "vivek" is
the username of the recipient and "gmail.com" is the domain name.
If the domain name of the recipient's email address is different from the
sender's domain name, the Mail Submission Agent (MSA) will send the mail
to a Mail Transfer Agent (MTA). To relay the email, the MTA finds the target
domain by checking the MX record in the Domain Name System. The MX
record contains the domain name and IP address of the recipient's domain.
Once the record is located, the MTA connects to the exchange server to relay
the message.
4. Receipt and Processing of Mail: Once the incoming message is received,
the exchange server delivers it to the incoming server (Mail Delivery Agent)
which stores the e-mail where it waits for the user to retrieve it.

5. Access and Retrieval of Mail: The email stored in the MDA can be retrieved
by using a Mail User Agent (MUA), which is accessed with a login and
password.
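
Steps 1 and 2 (composition and submission) can be sketched with Python's
standard email and smtplib modules. The addresses and the server below are
placeholders; real deployments typically submit on port 587 with authentication
rather than plain port 25.

import smtplib
from email.message import EmailMessage

# Compose the message: header (envelope information) plus body (the letter).
msg = EmailMessage()
msg["From"] = "alice@example.com"          # placeholder addresses
msg["To"] = "vivek@gmail.com"
msg["Subject"] = "Module 4 notes"
msg.set_content("Hello, please find the notes in the next mail.")

# Submit the message to an SMTP server (placeholder host and port).
with smtplib.SMTP("localhost", 25) as server:
    server.send_message(msg)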

SNMP
o SNMP stands for Simple Network Management Protocol.

o SNMP is a framework used for managing devices on the internet.

o It provides a set of operations for monitoring and managing the internet.

SNMP Concept

o SNMP has two components: a manager and agents.

o The manager is a host that controls and monitors a set of agents such as
routers.

o It is an application layer protocol in which a few manager stations can
handle a set of agents.

o The protocol, designed at the application level, can monitor devices made by
different manufacturers and installed on different physical networks.
o It is used in heterogeneous networks made of different LANs and WANs
connected by routers or gateways.

Managers & Agents

o A manager is a host that runs the SNMP client program, while an agent is a
router that runs the SNMP server program.

o Management of the internet is achieved through simple interaction between
a manager and an agent.

o The agent keeps the information in a database, while the manager accesses
the values in that database. For example, a router can store the appropriate
variables, such as the number of packets received and forwarded, and the
manager can compare these variables to determine whether the router is
congested or not.

o Agents can also contribute to the management process. A server program
on the agent checks its environment; if something goes wrong, the agent
sends a warning message to the manager.

HTTP
o HTTP stands for HyperText Transfer Protocol.

o It is a protocol used to access the data on the World Wide Web (www).

o The HTTP protocol can be used to transfer the data in the form of plain text,
hypertext, audio, video, and so on.

o This protocol is known as HyperText Transfer Protocol because it is efficient
enough to be used in a hypertext environment, where there are rapid jumps
from one document to another.

o HTTP is similar to FTP in that it also transfers files from one host to another.
However, HTTP is simpler than FTP because it uses only one connection,
i.e., there is no separate control connection for transferring the files.

o HTTP carries data in a MIME-like format.

o HTTP is similar to SMTP in that data is transferred between a client and a
server. HTTP differs from SMTP in the way messages are sent from the
client to the server and from the server to the client: SMTP messages are
stored and forwarded, while HTTP messages are delivered immediately.

Features of HTTP:
o Connectionless protocol: HTTP is a connectionless protocol. The HTTP client
initiates a request and waits for a response from the server. When the server
receives the request, it processes the request and sends back a response to
the HTTP client, after which the client disconnects the connection. The
connection between client and server exists only for the duration of the
current request and response.

o Media independent: HTTP is media independent: any type of data can be
sent as long as both the client and the server know how to handle the data
content. Both the client and the server are required to specify the content
type in the MIME-type header.

o Stateless: HTTP is a stateless protocol, as the client and server know each
other only during the current request. Due to this nature of the protocol,
neither the client nor the server retains information between various
requests of the web pages.

HTTP Transactions
An HTTP transaction between client and server works as follows: the client
initiates a transaction by sending a request message to the server, and the
server replies to the request message by sending a response message. A minimal
sketch of such a transaction is given after the message formats below.

Messages
HTTP messages are of two types: request and response. Both the message types
follow the same message format.

Request Message: The request message is sent by the client that consists of a
request line, headers, and sometimes a body

Response Message: The response message is sent by the server to the client that
consists of a status line, headers, and sometimes a body.
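
A single request/response transaction can be reproduced with Python's standard
http.client module, as sketched below against the example.com placeholder
host.

import http.client

# One HTTP transaction: request message out, response message back.
conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/", headers={"Accept": "text/html"})   # request line + headers
resp = conn.getresponse()                                   # status line + headers + body
print(resp.status, resp.reason)
print(resp.getheaders()[:3])
body = resp.read()
print(len(body), "bytes of body")
conn.close()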
Uniform Resource Locator (URL)
o A client that wants to access a document on the internet needs an address.
To facilitate access to documents, HTTP uses the concept of the Uniform
Resource Locator (URL).

o The Uniform Resource Locator (URL) is a standard way of specifying any kind
of information on the internet.

o The URL defines four parts: method, host computer, port, and path.

o Method: The method is the protocol used to retrieve the document from a
server. For example, HTTP.

o Host: The host is the computer where the information is stored, and the
computer is given an alias name. Web pages are mainly stored in the
computers and the computers are given an alias name that begins with the
characters "www". This field is not mandatory.

o Port: The URL can also contain the port number of the server, but it's an
optional field. If the port number is included, then it must come between the
host and path and it should be separated from the host by a colon.

o Path: The path is the pathname of the file where the information is stored.
The path itself contains slashes that separate the directories from the
subdirectories and files. An example of splitting a URL into these parts is
sketched below.
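
The four parts can be pulled out of a URL with the standard urllib.parse module;
the URL below is just an illustrative example.

from urllib.parse import urlparse

# Split an example URL into method (scheme), host, port, and path.
url = urlparse("http://www.example.com:8080/docs/module4/notes.html")
print("method:", url.scheme)     # http
print("host:", url.hostname)     # www.example.com
print("port:", url.port)         # 8080
print("path:", url.path)         # /docs/module4/notes.html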
