TCP/IP Network Guide

Network

A network is simply a collection of computers or other hardware devices that are connected together,
either physically or logically, using special hardware and software, to allow them to exchange
information and cooperate.

Network Security Meaning

Network security is an organization’s strategy for guaranteeing the security of its assets, including all network traffic. It includes both software and hardware technologies. Effective network security manages access to the network; it targets a wide range of threats and stops them from entering or spreading within the network.

Network Security Definition

Network security integrates multiple layers of defenses at the edge and within the network. Each network security layer implements policies and controls. Authorized users gain access to network resources, while malicious actors are blocked from carrying out exploits and threats.

Digitization has transformed our world, changing almost all of our daily activities. Every organization that aims to deliver the services demanded by employees and customers must protect its network. Doing so also protects the organization's reputation. With hackers growing in number and becoming smarter day by day, the need to utilize network security tools becomes more and more important.

Types of Network Security

1. Antivirus and Antimalware Software
2. Application Security
3. Behavioral Analytics
4. Data Loss Prevention (DLP)
5. Email Security
6. Firewalls
7. Intrusion Prevention System (IPS)
8. Mobile Device Security
9. Network Segmentation
10. Security Information and Event Management (SIEM)
11. Virtual Private Network (VPN)
12. Web Security
13. Wireless Security
14. Endpoint Security
15. Network Access Control (NAC)

Antivirus and Antimalware Software:

This software protects against malware, which includes spyware, ransomware, Trojans, worms, and viruses. Malware can be especially dangerous because it can infect a network and then remain dormant for days or even weeks. This software handles the threat by scanning for malware on entry and then regularly tracking files afterward in order to detect anomalies, remove malware, and fix damage.

Application Security

Application security is important because no app is created perfectly. Any application may contain vulnerabilities, or holes, that attackers can use to enter your network. Application security thus encompasses the software, hardware, and processes you select for closing those holes.
Behavioral Analytics

In order to detect abnormal network behavior, you will have to know what normal behavior
looks like. Behavioral analytics tools are capable of automatically discerning activities that deviate
from the norm. Your security team will thus be able to efficiently detect indicators of compromise that
pose a potential problem and rapidly remediate threats.

Data Loss Prevention (DLP)

Organizations should guarantee that their staff does not send sensitive information outside the network. They should thus use DLP technologies: network security measures that prevent people from uploading, forwarding, or even printing vital information in an unsafe manner.

Email Security

Email gateways are considered to be the number one threat vector for a security breach.
Attackers use social engineering tactics and personal information in order to build refined phishing
campaigns to deceive recipients and then send them to sites serving up malware. An email security
application is capable of blocking incoming attacks and controlling outbound messages in order to
prevent the loss of sensitive data.

Firewalls

Firewalls place a barrier between your trusted internal network and untrusted outside networks, like the Internet. A set of defined rules is employed to block or allow traffic. A firewall can be software, hardware, or both. A software firewall manages traffic on your PC, monitors inbound and outbound connections, and secures your connections when you are online.

Intrusion Prevention System (IPS)

An IPS is a network security technology capable of scanning network traffic in order to actively block attacks. An IPS settings interface permits the administrator to configure ruleset updates (for Snort, for example). Ruleset updates can be scheduled to run automatically at particular intervals, or they can be run manually on demand.

Mobile Device Security

Mobile devices and apps are increasingly being targeted by cybercriminals. 90% of IT
organizations could very soon support corporate applications on personal mobile devices. There is
indeed the necessity for you to control which devices can access your network. It is also necessary to
configure their connections in order to keep network traffic private.

Network Segmentation

Software-defined segmentation places network traffic into varied classifications and makes enforcing security policies a lot easier. The classifications are ideally based on endpoint identity, not just IP addresses. Access rights can be assigned based on location, role, and more, so that the right people get the correct level of access, and suspicious devices are contained and remediated.

Security Information and Event Management (SIEM):

SIEM products bring together all the information needed by your security staff in order to
identify and respond to threats. These products are available in different forms, including virtual and
physical appliances and server software.

Virtual Private Network (VPN):

A VPN is another type of network security capable of encrypting the connection from an
endpoint to a network, mostly over the Internet. A remote-access VPN typically uses IPsec or Secure
Sockets Layer in order to authenticate the communication between network and device.
Web Security:

A good web security solution will help in controlling your staff’s web use, denying access to malicious websites, and blocking other web-based threats.

Wireless Security:

The mobile office movement is presently gaining momentum, along with wireless networks and access points. However, wireless networks are not as secure as wired ones, and this gives hackers a way in. It is thus essential for wireless security to be strong. It should be noted that without stringent security measures, installing a wireless LAN can be like placing Ethernet ports everywhere. Products specifically designed for protecting a wireless network have to be used in order to prevent an exploit from taking place.

Endpoint Security:

Endpoint security, also known as endpoint protection, is a methodology used for protecting corporate networks when they are accessed through remote devices such as laptops, other wireless devices, and mobile devices. For instance, Comodo Advanced Endpoint Protection software presents seven layers of defense: viruscope, file reputation, auto-sandbox, host intrusion prevention, web URL filtering, firewall, and antivirus software. All of this is offered as a single package in order to protect endpoints from both known and unknown threats.

Network Access Control (NAC):

This network security process helps you to control who can access your network. It is essential
to recognize each device and user in order to keep out potential attackers. This indeed will help you
to enforce your security policies. Noncompliant endpoint devices can be given only limited access or
just blocked.

A networking protocol defines a set of rules, algorithms, messages and other mechanisms that
enable software and hardware in networked devices to communicate effectively. A protocol usually
describes a means for communication between corresponding entities at the same OSI Reference
Model layer in two or more devices.

A connection-oriented protocol is one where a logical connection is first established between devices prior to data being sent. In a connectionless protocol, data is just sent without a connection being created.

The three terms used most often to refer to the overall performance of a network are speed,
bandwidth, and throughput. These are related and often used interchangeably, but are not identical.
The term speed is the most generic and often refers to the rated or nominal speed of a networking
technology. Bandwidth can refer either to the width of a frequency band used by a technology, or more
generally to data capacity, where it is more of a theoretical measure. Throughput is a specific measure
of how much data flows over a channel in a given period of time. It is usually a practical measurement.

In most cases in discussions of networking performance, the lower-case letter “b” refers to
“bits” and the upper-case “B” to “bytes”. However, these conventions are not always universally
followed, so context must be used to interpret a particular measurement.
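
As a quick worked example, here is a minimal Python sketch (illustrative only) of the factor-of-eight difference between the two units:

    # A nominal "100 Mb/s" link is measured in megabits per second;
    # dividing by 8 gives the equivalent rate in megabytes per second.
    link_speed_mbps = 100                  # lower-case b: megabits per second
    link_speed_MBps = link_speed_mbps / 8  # upper-case B: megabytes per second
    print(f"{link_speed_mbps} Mb/s = {link_speed_MBps} MB/s")  # 100 Mb/s = 12.5 MB/s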
OSI LAYERS

PHYSICAL LAYER:

Located at the lowest layer of the Open Systems Interconnection (OSI) communications
model, the physical layer's function is to transport data using electrical, mechanical or procedural
interfaces.

For example, the physical layer determines how to use electricity to place a stream of raw bits from Layer 2, the data link layer, onto the pins and across the wires of a twisted-pair cable. On optical media, the physical layer converts the stream of 0s and 1s into pulses of light on a fiber. On wireless media, the physical layer uses a transmitter to convert these bits into radio waves for transport.

The physical layer is usually a combination of software and hardware programming and may
include electromechanical devices. Popular transport technology types include 1000BASE-T Ethernet,
1000BASE-SX Ethernet, T1, SONET/SDH, DSL and 802.11 physical layer variants.

Functions of the physical layer

The physical layer is responsible for sending computer bits from one device to another along the network. It does not understand the bits; rather, its role is to determine how physical connections to the network are set up and how bits are represented as predictable signals as they are transmitted electrically, optically or via radio waves.

To do this, the physical layer performs a variety of functions, including:

Defining bits: Determines how bits are converted from 0s and 1s to a signal.

Data rate: Determines how fast the data flows, in bits per second.

Synchronization: Ensures that sending and receiving devices are synchronized.

Transmission mode: Determines the direction of transmission and whether it is simplex (one signal is transmitted in one direction), half-duplex (data goes in both directions, but not at the same time) or full-duplex (data is transmitted in both directions simultaneously).

Interface: Determines how devices are connected to a transmission medium such as Ethernet or radio
waves.

Physical layer devices: The physical layer covers a variety of devices and mediums, among them
cabling, connectors, receivers, transceivers and repeaters.
DATA LINK LAYER:

Sublayers: Logical Link Control (LLC) and Media Access Control (MAC)

The data link layer is often conceptually divided into two sublayers: logical link control (LLC)
and media access control (MAC). This split is based on the architecture used in the IEEE 802 Project,
which is the IEEE working group responsible for creating the standards that define many networking
technologies (including all of the ones I mentioned above except FDDI). By separating LLC and MAC
functions, interoperability of different network technologies is made easier, as explained in our earlier
discussion of networking model concepts.

Data Link Layer Functions:

The following are the key tasks performed at the data link layer:

Logical Link Control (LLC):

Logical link control refers to the functions required for the establishment and control of logical
links between local devices on a network. As mentioned above, this is usually considered a DLL
sublayer; it provides services to the network layer above it and hides the rest of the details of the data
link layer to allow different technologies to work seamlessly with the higher layers. Most local area
networking technologies use the IEEE 802.2 LLC protocol.

Media Access Control (MAC):

This refers to the procedures used by devices to control access to the network medium. Since
many networks use a shared medium (such as a single network cable, or a series of cables that are
electrically connected into a single virtual medium) it is necessary to have rules for managing the
medium to avoid conflicts. For example. Ethernet uses the CSMA/CD method of media access control,
while Token Ring uses token passing.

Data Framing:

The data link layer is responsible for the final encapsulation of higher-level messages into
frames that are sent over the network at the physical layer.

Addressing:

The data link layer is the lowest layer in the OSI model that is concerned with addressing:
labeling information with a particular destination location. Each device on a network has a unique
number, usually called a hardware address or MAC address, that is used by the data link layer protocol
to ensure that data intended for a specific machine gets to it properly.

Error Detection and Handling:

The data link layer handles errors that occur at the lower levels of the network stack. For
example, a cyclic redundancy check (CRC) field is often employed to allow the station receiving data
to detect if it was received correctly.

Some of the most popular technologies and protocols generally associated with layer 2 are Ethernet, Token Ring, FDDI (plus CDDI), HomePNA, IEEE 802.11, ATM, and TCP/IP's Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP).
NETWORK LAYER:

Some of the specific jobs normally performed by the network layer include:

Logical Addressing:

Every device that communicates over a network has associated with it a logical address,
sometimes called a layer three address. For example, on the Internet, the Internet Protocol (IP) is the
network layer protocol and every machine has an IP address. Note that addressing is done at the data
link layer as well, but those addresses refer to local physical devices. In contrast, logical addresses are
independent of particular hardware and must be unique across an entire internetwork.

Routing:

Moving data across a series of interconnected networks is probably the defining function of
the network layer. It is the job of the devices and software routines that function at the network layer
to handle incoming packets from various sources, determine their final destination, and then figure
out where they need to be sent to get them where they are supposed to go. I discuss routing in the OSI model more completely in the topic on indirect device connection, and show how it works by way of an OSI model analogy.

Datagram Encapsulation:

The network layer normally encapsulates messages received from higher layers by placing
them into datagrams (also called packets) with a network layer header.

Fragmentation and Reassembly:

The network layer must send messages down to the data link layer for transmission. Some
data link layer technologies have limits on the length of any message that can be sent. If the packet
that the network layer wants to send is too large, the network layer must split the packet up, send
each piece to the data link layer, and then have pieces reassembled once they arrive at the network
layer on the destination machine. A good example is how this is done by the Internet Protocol.

Error Handling and Diagnostics:

Special protocols are used at the network layer to allow devices that are logically connected,
or that are trying to route traffic, to exchange information about the status of hosts on the network
or the devices themselves.

Key Concept: The OSI Reference Model’s third layer is called the network layer. This is one of the
most important layers in the model; it is responsible for the tasks that link together individual
networks into internetworks. Network layer functions include internetwork-level addressing, routing,
datagram encapsulation, fragmentation and reassembly, and certain types of error handling and
diagnostics. The network layer and transport layer are closely related to each other.
TRANSPORT LAYER

The fourth and middle OSI Reference Model layer is the transport layer. This is another very
important conceptual layer in the model; it represents the transition point between the lower layers
that deal with data delivery issues, and the higher layers that work with application software. The
transport layer is responsible for enabling end-to-end communication between application processes,
which it accomplishes in part through the use of process-level addressing and
multiplexing/demultiplexing. Transport layer protocols are responsible for dividing application data
into blocks for transmission, and may be either connection-oriented or connectionless. Protocols at
this layer also often provide data delivery management services such as reliability and flow control.

Transport Layer Functions

Let’s look at the specific functions often performed at the transport layer in more detail:

Process-Level Addressing:

Addressing at layer two deals with hardware devices on a local network, and layer three
addressing identifies devices on a logical internetwork. Addressing is also performed at the transport
layer, where it is used to differentiate between software programs. This is part of what enables many
different software programs to use a network layer protocol simultaneously, as mentioned above. The
best example of transport-layer process-level addressing is the TCP and UDP port mechanism used in
TCP/IP, which allows applications to be individually referenced on any TCP/IP device.
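
To illustrate, here is a small Python sketch using the standard socket library (the host name is just an example). It shows that each end of a TCP conversation is identified by an (IP address, port) pair, the server on a well-known port and the client on an ephemeral one:

    import socket

    # Connect to the web server process, identified by port 80 on the remote host.
    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        local_ip, local_port = sock.getsockname()    # our address and ephemeral port
        remote_ip, remote_port = sock.getpeername()  # the server's address and port 80
        print(f"local endpoint:  {local_ip}:{local_port}")
        print(f"remote endpoint: {remote_ip}:{remote_port}")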

Multiplexing and Demultiplexing:

Using the addresses I just mentioned, transport layer protocols on a sending device multiplex
the data received from many application programs for transport, combining them into a single stream
of data to be sent. The same protocols receive data and then demultiplex it from the incoming stream
of datagrams, and direct each package of data to the appropriate recipient application processes.

Segmentation, Packaging and Reassembly:

The transport layer segments the large amounts of data it sends over the network into smaller pieces on the source machine, and then reassembles them on the destination machine. This function
is similar conceptually to the fragmentation function of the network layer; just as the network layer
fragments messages to fit the limits of the data link layer, the transport layer segments messages to
suit the requirements of the underlying network layer.

Connection Establishment, Management and Termination:

Transport layer connection-oriented protocols are responsible for the series of communications required to establish a connection, maintain it as data is sent over it, and then terminate the connection when it is no longer required.

Acknowledgments and Retransmissions:

As mentioned above, the transport layer is where many protocols are implemented that
guarantee reliable delivery of data. This is done using a variety of techniques, most commonly the
combination of acknowledgments and retransmission timers. Each time data is sent a timer is started;
if it is received, the recipient sends back an acknowledgment to the transmitter to indicate successful
transmission. If no acknowledgment comes back before the timer expires, the data is retransmitted.
Other algorithms and techniques are usually required to support this basic process.
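
The following Python sketch shows the basic timer-and-retransmit loop in the abstract. The send_segment and wait_for_ack callables are hypothetical placeholders rather than a real transport API; real protocols such as TCP are considerably more elaborate:

    def reliable_send(segment, send_segment, wait_for_ack,
                      timeout=1.0, max_retries=5):
        """Send one segment, retransmitting until an ACK arrives or we give up."""
        for _ in range(max_retries):
            send_segment(segment)       # transmit the data and start the timer
            if wait_for_ack(timeout):   # ACK arrived before the timer expired
                return True
            # timer expired with no ACK: fall through and retransmit
        return False                    # delivery could not be confirmed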
Flow Control:

Transport layer protocols that offer reliable delivery also often implement flow control
features. These features allow one device in a communication to specify to another that it must
"throttle back" the rate at which it is sending data, to avoid bogging down the receiver with data.
These allow mismatches in speed between sender and receiver to be detected and dealt with.

SESSION LAYER (LAYER 5)

The fifth OSI Reference Model layer is the session layer. As its name suggests, it is
the layer intended to provide functions for establishing and managing sessions between software
processes. Session layer technologies are often implemented as sets of software tools called
application program interfaces (APIs), which provide a consistent set of services that allow
programmers to develop networking applications without needing to worry about lower-level details
of transport, addressing and delivery.

PRESENTATION LAYER (LAYER 6)

The sixth OSI model layer is called the presentation layer. Protocols at this layer take care of
manipulation tasks that transform data from one representation to another, such as translation,
compression and encryption. In many cases, no such functions are required in a particular networking
stack; if so, there may not be any protocol active at layer six.

APPLICATION LAYER (LAYER 7)

At the very top of the OSI Reference Model stack of layers, we find layer 7, the application
layer. Continuing the trend that we saw in layers 5 and 6, this one too is named very appropriately:
the application layer is the one that is used by network applications. These programs are what actually
implement the functions performed by users to accomplish various tasks over the network.

It's important to understand that what the OSI model calls an “application” is not exactly the
same as what we normally think of as an “application”. In the OSI model, the application layer provides
services for user applications to employ. For example, when you use your Web browser, that actual
software is an application running on your PC. It doesn't really “reside” at the application layer. Rather,
it makes use of the services offered by a protocol that operates at the application layer, which is called
the Hypertext Transfer Protocol (HTTP). The distinction between the browser and HTTP is subtle, but
important.

The reason for pointing this out is because not all user applications use the application layer
of the network in the same way. Sure, your Web browser does, and so does your e-mail client and
your Usenet news reader. But if you use a text editor to open a file on another machine on your
network, that editor is not using the application layer. In fact, it has no clue that the file you are using
is on the network: it just sees a file addressed with a name that has been mapped to a network
somewhere else. The operating system takes care of redirecting what the editor does, over the
network.

There are dozens of different application layer protocols that enable various functions at this
layer. Some of the most popular ones include HTTP, FTP, SMTP, DHCP, NFS, Telnet, SNMP, POP3, NNTP
and IRC.

Key Concept: The seventh and highest layer in the OSI Reference Model is the application layer.
Application protocols are defined at this layer, which implement specific user applications and other
high-level functions. Since they are at the top of the stack, application protocols are the only ones that
do not provide services to a higher layer; they make use of services provided by the layers below.
Ethernet frame:

Ethernet frame is a data link layer protocol data unit and uses the underlying Ethernet physical layer
transport mechanisms. In other words, a data unit on an Ethernet link transports an Ethernet frame
as its payload.

An Ethernet frame is preceded by a preamble and start frame delimiter (SFD), which are both part of
the Ethernet packet at the physical layer. Each Ethernet frame starts with an Ethernet header, which
contains destination and source MAC addresses as its first two fields. The middle section of the frame
is payload data including any headers for other protocols (for example, Internet Protocol) carried in
the frame. The frame ends with a frame check sequence (FCS), which is a 32-bit cyclic redundancy
check used to detect any in-transit corruption of data.

Preamble: The preamble contains a group of 56 bits (seven bytes of alternating 1s and 0s) that are used to help the hardware synchronize itself with the data on the network. If a few bits of the preamble are lost during transmission, no harm occurs to the message itself. The preamble therefore also acts as a buffer for the remainder of the frame.

Start frame delimiter: is a single byte, 10101011, which is a frame flag, indicating the start of a frame.

Destination address: The destination address (48 bits) contains the physical address of the device that
is to receive the frame.

The first two bits of this field have special meaning. If the first bit is 0, then the address represents the hardware address of a single device on the network. However, if the first bit is 1, then the address is what is known as a multicast address, and the frame is addressed to a group of devices. The second bit indicates how physical device addresses are assigned: if the value is 0, the addresses have been set by the hardware manufacturer (global addressing); if the value is 1, the addresses have been set by those maintaining the network (local addressing).
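
Because Ethernet transmits each octet least-significant bit first, the "first two bits" described above are the two low-order bits of the first address octet. A small Python sketch of the check (the addresses are examples):

    def describe_mac(mac: str) -> str:
        first_octet = int(mac.split(":")[0], 16)
        kind = "multicast" if first_octet & 0x01 else "unicast"           # I/G bit
        admin = "local" if first_octet & 0x02 else "global (vendor-set)"  # U/L bit
        return f"{mac}: {kind}, {admin} addressing"

    print(describe_mac("00:C0:B0:02:15:75"))  # unicast, global addressing
    print(describe_mac("ff:ff:ff:ff:ff:ff"))  # the broadcast address is a multicast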

Note: A device's physical address is distinct from its software address, such as the addresses used by
the Internet layer of the TCP/IP protocol stack. (For example, the author's printer has an Internet layer
address of 192.168.1.105 and an Ethernet address of 00:C0:B0:02:15:75.) One job of data
communications protocols is therefore to translate between hardware and software addresses.
TCP/IP, for example, uses Address Resolution Protocol (ARP) to map TCP/IP addresses onto Ethernet
addresses.

Source address: The 48 bits of the source address field contain the hardware address of the device
sending the frame.

Length field: The contents of the length field depend on the type of frame. If the frame is carrying
data, then the length field indicates how many bytes of meaningful data are present. However, if the
frame is carrying management information, then the length field indicates the type of management
information in the frame.

Data field: The data field carries a minimum of 46 bytes and a maximum of 1500 bytes. If there are fewer than 46 bytes of data, the field will be padded to the minimum length.

Frame check sequence (FCS): The last field (also known as a cyclical redundancy check, or CRC, field)
contains 32 bits used for error checking. The bits in this field are set by the transmitting device based
on the pattern of bits in the data field. The receiving device then regenerates the FCS. If what the
receiving device obtains does not match what is in the frame, then some bits were changed during
transmission and some type of transmission error has occurred.
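
Conceptually, the check works like the sketch below, which uses the CRC-32 from Python's standard zlib module. Ethernet's FCS uses the same polynomial, although real hardware differs in bit-ordering details, so this is illustrative rather than wire-exact:

    import zlib

    frame = b"destination|source|length|payload..."  # stand-in for the frame fields
    fcs = zlib.crc32(frame)                          # sender computes the FCS

    received = frame                                 # what arrives at the receiver
    if zlib.crc32(received) != fcs:
        print("bits changed in transit: discard the frame")
    else:
        print("FCS matches: frame accepted")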
VLAN
What is VLAN?

A VLAN is a logical grouping of networking devices. When we create VLANs, we break a large broadcast domain into smaller broadcast domains. Consider a VLAN as a subnet: just as two different subnets cannot communicate with each other without a router, different VLANs also require a router to communicate.

Advantages of VLAN

• Solve broadcast problem
• Reduce the size of broadcast domains
• Allow us to add additional layer of security
• Make device management easier
• Allow us to implement the logical grouping of devices by function instead of location

Solve broadcast problem

When we connect devices to switch ports, the switch creates a separate collision domain for each port and a single broadcast domain for all ports. A switch forwards a broadcast frame out of all possible ports. In a large network with hundreds of computers, this can create performance issues. Of course, we could use routers to solve the broadcast problem, but that would be a costly solution, since each broadcast domain would require its own router port. Switches have a unique solution to the broadcast issue, known as the VLAN. In practical environments we use VLANs, rather than routers, to solve the broadcast issue.

Each VLAN is a separate broadcast domain. Logically, VLANs are also subnets. Each VLAN requires a unique identifier known as the VLAN ID. Devices with the same VLAN ID are members of the same broadcast domain and receive all of its broadcasts. These broadcasts are filtered from all ports on a switch that aren’t members of the same VLAN.

Reduce the size of broadcast domains

VLANs increase the number of broadcast domains while reducing their size. For example, suppose we have a network of 100 devices. Without any VLAN implementation, we have a single broadcast domain containing 100 devices. If we create two VLANs and assign 50 devices to each, we now have two broadcast domains with fifty devices each. Thus, more VLANs means more broadcast domains with fewer devices in each.
Allow us to add additional layer of security

VLANs enhance network security. In a typical layer 2 network, all users can see all devices by default. Any user can see network broadcasts and respond to them. Users can access any network resources located on that network, and could join a workgroup just by attaching their system to an existing switch. This can create real security trouble. Properly configured VLANs give us total control over each port and user. With VLANs, you can prevent users from gaining unwanted access to resources: we can put a group of users that needs a high level of security into its own VLAN, so that users outside that VLAN cannot communicate with them.

Make device management easier

Device management is easier with VLANs. Since VLANs are a logical approach, a device can be located anywhere in the switched network and still belong to the same broadcast domain. We can move a user from one switch to another in the same network while keeping his original VLAN. For example, suppose our company has a five-story building and a single layer 2 network. In this scenario, VLANs allow us to move users from one floor to another while keeping their original VLAN ID. The only limitation is that a device, when moved, must still be connected to the same layer 2 network.

Allow us to implement the logical grouping of devices by function instead of location

VLANs allow us to group users by their function instead of their geographic location. Switches maintain the integrity of your VLANs: users will see only what they are supposed to see, regardless of their physical location.

VLAN Examples

To understand VLAN more clearly let's take an example.

• Our company has three offices.
• All offices are connected by backbone links.
• The company has three departments: Development, Production and Administration.
• The Development department has six computers.
• The Production department has three computers.
• The Administration department also has three computers.
• Each office has two PCs from the Development department and one each from the Production and Administration departments.
• The Administration and Production departments have sensitive information and need to be separated from the Development department.

With the default configuration, all computers share the same broadcast domain, and the Development department can access the Administration and Production department resources. With VLANs we can create logical boundaries over the physical network. Assume that we create three VLANs for our network and assign them to the related computers:

• VLAN Admin for the Administration department
• VLAN Dev for the Development department
• VLAN Pro for the Production department

Physically we changed nothing, but logically we grouped devices according to their function. These groups (VLANs) need a router to communicate with each other. Logically, our network now behaves like three separate networks.

With the help of VLANs, we have separated our single network into three small networks. These networks do not share broadcasts with each other, which improves network performance. VLANs also enhance security: the Development department can no longer access the Administration and Production departments directly. Different VLANs can communicate only via a router, where we can configure a wide range of security options.

So far in this article we have explained what a VLAN is; in the following sections we explain VLAN terms in more detail.

VLAN Membership

VLAN membership can be assigned to a device by one of two methods

1. Static
2. Dynamic

These methods decide how a switch will associate its ports with VLANs.

Static

Assigning VLANs statically is the most common and secure method. It is fairly easy to set up and supervise. In this method we manually assign a VLAN to a switch port. VLANs configured in this way are usually known as port-based VLANs.

The static method is also the most secure, since any switch port that we have assigned to a VLAN keeps that association until we manually change it. It works well in a networking environment where user movement within the network needs to be controlled.
Dynamic

In the dynamic method, VLANs are assigned to ports automatically depending on the connected device. In this method we configure one switch in the network as a server. The server contains device-specific information, such as MAC addresses and IP addresses, mapped to VLANs. The switch acting as the server is known as the VMPS (VLAN Membership Policy Server). Only a high-end switch can be configured as a VMPS; low-end switches work as clients and retrieve VLAN information from the VMPS.

Dynamic VLANs support plug-and-play mobility. For example, if we move a PC from one port to another, the new switch port will automatically be configured for the VLAN to which the user belongs. In the static method we would have to do this manually.

VLAN Connections

When configuring a VLAN on a port, we need to know what type of connection it has. Switches support two types of VLAN connections:

• Access link
• Trunk link

Access link

An access link connection is a connection where the switch port is connected to a device that has a standard Ethernet NIC. Standard NICs understand only IEEE 802.3 or Ethernet II frames. An access link connection can be assigned to only a single VLAN. That means all devices connected to this port will be in the same broadcast domain.

For example, if twenty users are connected to a hub, and we connect that hub to an access link port on a switch, then all of these users belong to the same VLAN. If we want to put ten of those users in another VLAN, we have to purchase another hub, plug those ten users into it, and then connect it to another access link port on the switch.

Trunk link

A trunk link connection is a connection where the switch port is connected to a device that is capable of understanding multiple VLANs. Usually a trunk link connection is used to connect two switches, or a switch to a router. Remember, earlier in this article I said that a VLAN can span anywhere in the network; that is possible because of trunk link connections. Trunking allows us to send and receive VLAN information across the network. To support trunking, the original Ethernet frame is modified to carry VLAN information.
Trunk Tagging

In trunking, a separate logical connection is created for each VLAN instead of a single physical connection. In tagging, the switch adds the source port’s VLAN identifier to the frame so that the device at the other end understands which VLAN originated the frame. Based on this information, the destination switch can make intelligent forwarding decisions using not just the destination MAC address, but also the source VLAN identifier.

Since the original Ethernet frame is modified to carry this information, standard NICs will not understand it and will typically drop the frame. Therefore, when we set up a trunk connection on a switch port, we need to ensure that the device at the other end also supports the same trunking protocol and has it configured. If the device at the other end does not understand these modified frames, it will drop them. The modification of these frames is commonly called tagging. Tagging is done in hardware by application-specific integrated circuits (ASICs).

Switches support two types of Ethernet trunking methods:

• ISL [Inter-Switch Link, Cisco’s proprietary protocol for Ethernet]
• Dot1q [IEEE’s 802.1Q protocol for Ethernet]

Links to other switches are known as “trunk” ports, and links to end devices such as PCs are known as “access” ports.

On an access port, the untagged VLAN is called the access VLAN.

On a trunk port, the untagged VLAN is called the native VLAN.

In VLAN configuration, a switch port can operate in two modes: access and trunk. In access mode it carries traffic for only a single VLAN, while in trunk mode it can carry traffic for multiple VLANs. Access mode is used to connect the port to end devices, while trunk mode is used to connect two switching devices.
Access Link and Trunk Link

An access link can carry information for a single VLAN, while a trunk link can carry information for multiple VLANs. Configuring VLANs on a single switch does not require a trunk link; one is required only when you configure VLANs across multiple switches.

For example, if we do not connect the switches in our network to each other, we do not need to configure trunk links. In that case, however, PC0, PC2 and PC4 cannot communicate with each other: although they all belong to the same VLAN group, they have no link over which to share this information.

Trunk link connections are used to connect multiple switches that share the same VLAN information.

You may wonder why we cannot use access links to connect these switches. Of course we can, but in that case we would need a separate link for each VLAN: if we have two VLANs, we need two links. With this implementation we need as many links as we have VLANs, which does not scale well. For example, if our design required 30 VLANs, we would have to use 30 links to connect the switches.

In short:

• An access link can carry information for a single VLAN.
• Theoretically, we can use access links to connect switches.
• If we use access links to connect switches, we have to use as many links as there are VLANs.
• Because that does not scale, we do not use access links to connect switches.
• A trunk link can carry information for multiple VLANs.
• In practice, we use trunk links to connect switches.

VLAN Tagging

Trunk links use VLAN tagging to carry traffic for multiple VLANs separately. In the VLAN tagging process, the sending switch adds a VLAN identifier header to the original Ethernet frame. The receiving switch reads the VLAN information from this header and removes the header before forwarding the frame to the associated ports. Thus the original Ethernet frame remains unchanged, and the destination PC receives it in its original shape.
VLAN Tagging process with example

• PC1 generates a broadcast frame.
• The Office1 switch receives it and knows that it is a broadcast frame for VLAN20.
• It forwards this frame from all of its ports associated with VLAN20, including trunk links.
• While forwarding the frame from access links, the switch makes no change to the original frame, so any other port on the switch with the same VLAN ID receives the frame in its original shape.
• While forwarding the frame from trunk links, the switch adds a VLAN identifier header to the original frame. In our case, the switch adds a header indicating that this frame belongs to VLAN20 before forwarding it from the trunk link.
• The Office2 switch receives this frame from the trunk link.
• It reads the VLAN identifier header to learn the VLAN information.
• From the header it learns that this is a broadcast frame belonging to VLAN20.
• It removes the header after learning the VLAN information; it now has the original broadcast frame along with the necessary VLAN information.
• The Office2 switch forwards this frame from all of its ports associated with VLAN20, including trunk links. For trunk links, the same process is repeated.
• Any device connected to a port with VLAN ID 20 on the Office2 switch receives the original frame.

Now we know that in the VLAN tagging process, the sending switch adds a VLAN identifier header to the original frame, while the receiving switch removes it after reading the necessary VLAN information. Switches use a VLAN trunking protocol for the VLAN tagging process.
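
To see what the tag looks like in practice, here is a sketch using the third-party Scapy library (this assumes Scapy is installed; the addresses and VLAN ID are examples). The 802.1Q tag sits between the source MAC address and the EtherType of the original frame:

    from scapy.all import Ether, Dot1Q, Raw

    # An untagged broadcast frame, and the same frame as it would travel a trunk
    # link with an 802.1Q tag carrying VLAN ID 20.
    untagged = Ether(dst="ff:ff:ff:ff:ff:ff", src="00:C0:B0:02:15:75") / Raw(b"hello")
    tagged = Ether(dst="ff:ff:ff:ff:ff:ff", src="00:C0:B0:02:15:75") / Dot1Q(vlan=20) / Raw(b"hello")

    tagged.show()  # the Dot1Q layer carries the VLAN identifier (20)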

VLAN Trunking Protocol

Cisco switches support two trunking protocols: ISL and 802.1Q.

ISL

ISL (Inter-Switch Link) is a Cisco proprietary protocol. It was developed long before 802.1Q. It adds a 26-byte header (containing a 15-bit VLAN identifier) and a 4-byte CRC trailer to the frame.

802.1Q

802.1Q is an open standard protocol developed by the IEEE. It inserts a 4-byte tag into the original Ethernet frame. Over time, 802.1Q has become the more popular trunking protocol.

Key difference between ISL and 802.1Q

• ISL was developed by Cisco, while 802.1Q was developed by the IEEE.
• ISL is a proprietary protocol; it works only on Cisco switches. 802.1Q is an open-standard protocol; it works on switches from any vendor.
• ISL adds a 26-byte header and a 4-byte trailer to the frame.
• 802.1Q inserts a 4-byte tag into the original frame.

802.1Q is a lightweight, advanced protocol with several enhanced security features. Even Cisco has adopted it as the standard tagging protocol in newer switches; the 2960 switch, for example, supports only 802.1Q tagging.
What happens inside the switch

A switch has an FDB (forwarding database); a small sketch of both FDB shapes follows the list below.

• In a switch that is not VLAN-capable (sometimes called "unmanaged" or "dumb"), the FDB associates a host (MAC address) with a port: it is a table of two-element tuples (MAC, port).
• In a switch that is VLAN-capable (sometimes called "managed" or "smart"), the FDB associates (VLAN, MAC) tuples with a port: it is a table of three-element tuples (MAC, port, VLAN).
• The only restriction is that one MAC address cannot appear in the same VLAN twice, even on different ports (essentially, the VLAN in VLAN-capable switches replaces the notion of the port in non-VLAN-capable switches). In other words:
• There can be multiple VLANs per port (which is why there need to be tags at some point).
• There can be multiple VLANs per port and per MAC: the same MAC address can appear in different VLANs and on the same port (although I wouldn't recommend that, for sanity's sake).
• The same MAC address still cannot appear on the same VLAN but on different ports (that would mean different hosts having the same MAC address in the same layer 2 network).
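
A conceptual Python sketch of these two FDB shapes, using plain dictionaries (a model of the idea, not a real switch implementation; the addresses are examples):

    # Non-VLAN-capable switch: the FDB maps MAC -> port.
    fdb_dumb = {
        "00:c0:b0:02:15:75": 1,
        "aa:bb:cc:dd:ee:01": 2,
    }

    # VLAN-capable switch: the FDB maps (VLAN, MAC) -> port. The same MAC may
    # appear in several VLANs, but only once within any one VLAN.
    fdb_smart = {
        (10, "00:c0:b0:02:15:75"): 1,
        (20, "00:c0:b0:02:15:75"): 1,  # same MAC and port, different VLAN: allowed
        (10, "aa:bb:cc:dd:ee:01"): 2,
    }

    def lookup(fdb, vlan, mac):
        """Return the egress port, or None (meaning: flood within the VLAN)."""
        return fdb.get((vlan, mac))

    print(lookup(fdb_smart, 20, "00:c0:b0:02:15:75"))  # 1
    print(lookup(fdb_smart, 20, "aa:bb:cc:dd:ee:01"))  # None -> flood in VLAN 20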
ARP:
ARP was developed to facilitate dynamic address resolution between IP and Ethernet, and can
now be used on other layer two technologies as well. It works by allowing an IP device to send a
broadcast on the local network, requesting that another device on the same local network respond
with its hardware address.
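
As an illustration, an ARP lookup can be sketched with the third-party Scapy library (this assumes Scapy is installed and the script is run with sufficient privileges; the target address is an example):

    from scapy.all import ARP, Ether, srp

    # Broadcast an ARP request asking "who has 192.168.1.105?"; the owner of
    # that IP on the local network answers with its hardware address.
    request = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.1.105")
    answered, _ = srp(request, timeout=2, verbose=False)

    for _, reply in answered:
        print(f"{reply.psrc} is at {reply.hwsrc}")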

GRATUITOUS ARP:
Gratuitous ARP is a sort of "advance notification": it updates the ARP caches of other systems before they ask for the information (no ARP request is involved), or it updates outdated information.

When talking about gratuitous ARP, the packets are actually special ARP request packets, not ARP
reply packets as one would perhaps expect. Some reasons for this are explained in RFC 5227.

The gratuitous ARP packet has the following characteristics:

• Both source and destination IP in the packet are the IP of the host issuing the gratuitous ARP
• The destination MAC address is the broadcast MAC address (ff:ff:ff:ff:ff:ff)
• This means the packet will be flooded to all ports on a switch
• No reply is expected

Gratuitous ARP is used for some reasons:

• Update ARP tables after a MAC address for an IP changes (failover, new NIC, etc.)
• Update MAC address tables on L2 devices (switches) that a MAC address is now on a different
port
• Send gratuitous ARP when interface goes up to notify other hosts about new MAC/IP bindings
in advance so that they don't have to use ARP requests to find out
• When a reply to a gratuitous ARP request is received you know that you have an IP address
conflict in your network
Protocols such as HSRP and VRRP use gratuitous ARP to update the MAC address tables on L2 devices (switches). There is also the option to use the burned-in MAC address for HSRP instead of the "virtual" one; in that case the gratuitous ARP would also update the ARP tables on L3 devices and hosts.

PROXY ARP

Proxy Address Resolution Protocol, as defined in RFC 1027, was implemented to enable devices that
are separated into physical network segments connected by a router in the same IP network or
subnetwork to resolve IP-to-MAC addresses. When devices are not in the same data link layer network
but are in the same IP network, they try to transmit data to each other as if they were on the local
network. However, the router that separates the devices will not send a broadcast message because
routers do not pass hardware-layer broadcasts. Therefore, the addresses cannot be resolved.

Proxy ARP is enabled by default, so the "proxy router" that resides between the local networks responds with its own MAC address, as if it were the device to which the broadcast is addressed. When the sending device receives the MAC address of the proxy router, it sends the datagram to the proxy router, which in turn sends the datagram to the designated device.

Proxy ARP is invoked by the following conditions:

• The target IP address is not on the same physical network (LAN) on which the request is
received.
• The networking device has one or more routes to the target IP address.
• All of the routes to the target IP address go through interfaces other than the one on which
the request is received.

When proxy ARP is disabled, a device responds to ARP requests received on its interface only if the
target IP address is the same as its IP address or if the target IP address in the ARP request has a
statically configured ARP alias.

ARP was designed to be used by devices that are directly connected on a local network. Each device on the network should be capable of sending both unicast and broadcast transmissions directly to each of the others. Normally, if device A and device B are separated by a router, they would not be considered local to each other. Device A would not send directly to B or vice versa; they would send to the router instead at layer two, and would be considered “two hops apart” at layer three.
RARP
The Reverse Address Resolution Protocol (RARP) is the earliest and simplest protocol designed
to allow a device to obtain an IP address for use on a TCP/IP network. It is based directly on ARP and
works in basically the same way, but in reverse: a device sends a request containing its hardware
address and a device set up as an RARP server responds back with the device’s assigned IP address.

INVERSE ARP
Frame Relay uses Inverse ARP. Inverse ARP associates a DLCI with an IP address. From the perspective of the router performing Inverse ARP, the DLCI is known; Inverse ARP lets it ask what IP address is at the other end of the PVC associated with that DLCI. ARP, on the other hand, is an Ethernet concept and works the other way around: the router performing ARP knows the IP address it needs to communicate with, but not the MAC address, so it sends out a broadcast requesting the layer 2 information. Inverse ARP sends out a request asking for layer 3 information.
NETWORK LAYER
IP:
IP is a connectionless protocol. This means that when A wants to send data to B, it doesn't first set up a connection to B and then send the data; it just makes the datagram and sends it.

An IP address is not itself classful or classless; those terms apply to networks and routing protocols.

Classful addressing:

In the classful addressing system, all available IP addresses are divided into five classes: A, B, C, D and E. Class A, B and C addresses are the ones frequently used, because class D is for multicast and is rarely used, and class E is reserved and not currently used. Each IP address belongs to a particular class, which is why these are called classful addresses. This addressing system originally had no name; when the classless addressing system came into existence, the older scheme was named classful addressing. The main disadvantage of classful addressing is that it limits the flexibility and the number of addresses that can be assigned to any device. Another major disadvantage is that classful routing protocols do not send subnet information; they send only the complete network address. The router supplies its own subnet mask based on its locally configured subnets. As long as you use the same subnet mask everywhere and the network is contiguous, you can use subnets of a classful network address.

Classless Addressing (CIDR is sometimes called supernetting):

The classless addressing system is also known as CIDR (Classless Inter-Domain Routing). Classless addressing is a way to allocate and specify the Internet addresses used in inter-domain routing more flexibly than with the original system of Internet Protocol (IP) address classes. Under classful addressing, if a company needed more than 254 host addresses but far fewer than the 65,534 host addresses a class B provides, its only option was to take a class B address. If the company then needed only 1,000 IP addresses for its hosts, the remaining 65,534 − 1,000 = 64,534 IP addresses were wasted. For this reason, the Internet was, until the arrival of CIDR, running out of address space much more quickly than necessary. CIDR effectively solved the problem by providing a new and more flexible way to specify network addresses in routers.

A CIDR network address looks like this: 192.30.250.0/18

Example: the RIP (Routing Information Protocol) version 1 protocol uses classful addressing.

Example: BGP (Border Gateway Protocol) and RIPv2 use classless addressing.
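
Python's standard ipaddress module makes the arithmetic easy to verify. A sketch using a hypothetical /22 allocation, which provides 1,024 addresses, enough for the 1,000-host company above without wasting most of a class B block:

    import ipaddress

    net = ipaddress.ip_network("192.30.248.0/22")  # example allocation
    print(net.num_addresses)  # 1024 addresses in the block
    print(net.netmask)        # 255.255.252.0
    for subnet in net.subnets(new_prefix=24):
        print(subnet)         # the block splits cleanly into four /24s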
IP DATAGRAM GENERAL FORMAT

Version (4 bits)

Identifies the version of IP used to generate the datagram. For IPv4, this is of course the
number 4. The purpose of this field is to ensure compatibility between devices that may be running
different versions of IP. In general, a device running an older version of IP will reject datagrams created
by newer implementations, under the assumption that the older version may not be able to interpret
the newer datagram correctly.

Internet Header Length (IHL):

Specifies the length of the IP header, in 32-bit words. This includes the length of any options fields and
padding. The normal value of this field when no options are used is 5 (5 32-bit words = 5*4 = 20 bytes).
Contrast to the longer Total Length field below.

Type Of Service (TOS):

A field designed to carry information to provide quality of service features, such as prioritized
delivery, for IP datagrams. It was never widely used as originally defined, and its meaning has been
subsequently redefined for use by a technique called Differentiated Services (DS). See below for more
information.

Total Length (TL):

Specifies the total length of the IP datagram, in bytes. Since this field is 16 bits wide, the
maximum length of an IP datagram is 65,535 bytes, though most are much smaller.

Identification:

This field contains a 16-bit value that is common to each of the fragments belonging to a
particular message; for datagrams originally sent unfragmented it is still filled in, so it can be used if
the datagram must be fragmented by a router during delivery. This field is used by the recipient to
reassemble messages without accidentally mixing fragments from different messages. This is needed
because fragments may arrive from multiple messages mixed together, since IP datagrams can be
received out of order from any device. See the discussion of IP message fragmentation.
Flags:

MORE FRAGMENTS (MF):

The More Fragments (MF) flag is a single bit in the Flags field used together with the Fragment Offset for the fragmentation and reconstruction of packets. When the More Fragments flag bit is set, it means that this is not the last fragment of a packet. When a receiving host sees a packet arrive with MF = 1, it examines the Fragment Offset to see where this fragment is to be placed in the reconstructed packet. When a receiving host receives a fragment with MF = 0 and a non-zero value in the Fragment Offset, it places that fragment as the last part of the reconstructed packet. An unfragmented packet has all-zero fragmentation information (MF = 0, Fragment Offset = 0).

DON'T FRAGMENT (DF):

The Don't Fragment (DF) flag is a single bit in the Flags field that indicates that fragmentation of the packet is not allowed. If the Don't Fragment flag bit is set, then fragmentation of this packet is not permitted. If a router needs to fragment a packet to pass it down to the data link layer but the DF bit is set to 1, the router will discard the packet.

Fragment Offset:

When fragmentation of a message occurs, this field specifies the offset, or position, in the
overall message where the data in this fragment goes. It is specified in units of 8 bytes (64 bits). The
first fragment has an offset of 0. Again, see the discussion of fragmentation for a description of how
the field is used.

Protocol

In the layered protocol model, the Protocol field indicates which higher-layer protocol the data belongs to. This field does not identify the application itself; it identifies the protocol that sits above the IP layer (for example, TCP or UDP), which is in turn used for application identification.

Time To Live (TTL) Field

Since IP datagrams are sent from router to router as they travel across an internetwork, it is
possible that a situation could result where a datagram gets passed from router A to router B to router
C and then back to router A. Router loops are not supposed to happen, and rarely do, but are possible.

To ensure that datagrams don't circle around endlessly, the TTL field was intended to be filled
in with a time value (in seconds) when a datagram was originally sent. Routers would decrease the
time value periodically, and if it ever hit zero, the datagram would be destroyed. This was also
intended to be used to ensure that time-critical datagrams wouldn’t linger past the point where they
would be “stale”.

In practice, this field is not used in exactly this manner. Routers today are fast and usually take
far less than a second to forward a datagram; measuring the time that a datagram “lives” would be
impractical. Instead, this field is used as a “maximum hop count” for the datagram. Each time a router
processes a datagram, it reduces the value of the TTL field by one. If doing this results in the field being
zero, the datagram is said to have expired. It is dropped, and usually an ICMP Time Exceeded message
is sent to inform the originator of the message that this happened.

The TTL field is one of the primary mechanisms by which networks are protected from router
loops (see the description of ICMP Time Exceeded messages for more on how TTL helps IP handle
router loops.)
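
A conceptual Python sketch of the per-hop rule each router applies to the TTL field:

    def forward_datagram(ttl: int):
        """Apply the per-hop TTL rule; return the new TTL, or None if expired."""
        ttl -= 1                   # every router decrements the TTL by one
        if ttl == 0:
            print("TTL expired: drop datagram, send ICMP Time Exceeded to sender")
            return None
        return ttl                 # otherwise forward with the decremented TTL

    ttl = 3
    while ttl is not None:
        ttl = forward_datagram(ttl)  # survives two hops, expires at the third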
Header Checksum:

A checksum computed over the header to provide basic protection against corruption in
transmission. This is not the more complex CRC code typically used by data link layer technologies
such as Ethernet; it's just a 16-bit checksum. It is calculated by dividing the header bytes into words (a
word is two bytes) and then adding them together. The data is not checksummed, only the header. At
each hop the device receiving the datagram does the same checksum calculation and on a mismatch,
discards the datagram as damaged.
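
This is the standard Internet checksum of RFC 1071. A small Python sketch of the calculation; the sample header is a 20-byte IPv4 header with its checksum field zeroed out:

    def ip_header_checksum(header: bytes) -> int:
        """16-bit ones'-complement sum of the header, complemented at the end."""
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) | header[i + 1]  # add each 16-bit word
            total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
        return ~total & 0xFFFF

    header = bytes.fromhex(
        "45000073" "00004000" "4011" "0000" "c0a80001" "c0a800c7"
    )
    print(hex(ip_header_checksum(header)))  # 0xb861 for this sample header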

Source Address:

The 32-bit IP address of the originator of the datagram. Note that even though intermediate
devices such as routers may handle the datagram, they do not normally put their address into this
field—it is always the device that originally sent the datagram.

Destination Address

The 32-bit IP address of the intended recipient of the datagram. Again, even though devices
such as routers may be the intermediate targets of the datagram, this field is always for the ultimate
destination.

Options: One or more of several types of options may be included after the standard headers in certain
IP datagrams. I discuss them in the topic that follows this one.

Padding: If one or more options are included, and the number of bits used for them is not a multiple
of 32, enough zero bits are added to “pad out” the header to a multiple of 32 bits (4 bytes).

Data: The data to be transmitted in the datagram, either an entire higher-layer message or a fragment
of one.

Fragmentation:

Key Concept: The size of the largest IP datagram that can be transmitted over a physical network is
called that network’s maximum transmission unit (MTU). If a datagram is passed from a network with
a high MTU to one with a low MTU, it must be fragmented to fit the network with the smaller MTU.

The Ethernet frame format limits the size of the payload it carries to 1,500 bytes. This
means Ethernet can't deal with IP datagrams greater than 1,500 bytes in size.

Internet Minimum MTU: 576 Bytes

Each router must be able to fragment as needed to handle IP datagrams up to the size of the
largest MTU used by networks to which they attach. Routers are also required, as a minimum, to
handle an MTU of at least 576 bytes. This value is specified in RFC 791, and was chosen to allow a
“reasonable sized” data block of at least 512 bytes, plus room for the standard IP header and options.
Since it is the minimum size specified in the IP standard, 576 bytes has become a common default
MTU value used for IP datagrams. Even if a host is connected over a local network with an MTU larger
than 576, it may choose to use an MTU value of 576 anyway, to ensure that no further fragmentation
will be required by intermediate routers.

Maximum size allowed for an IP datagram: 65,535 bytes (the Total Length field is 16 bits).


Path MTU Discovery

The source node typically sends a datagram that has the MTU of its local physical link, since
that represents an upper bound on the MTU of any path to or from that device. If this goes through
without any errors, it knows it can use that value for future datagrams to that destination. If it gets
back any Destination Unreachable - Fragmentation Needed and DF Set messages, this means some
other link between it and the destination has a smaller MTU. It tries again using a smaller datagram
size, and continues until it finds the largest MTU that can be used on the path.
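The discovery loop is simple enough to simulate. In the sketch below, path_mtus are assumed link MTUs along the path, and each "Fragmentation Needed" reply is modeled as returning the bottleneck link's MTU, as RFC 1191 routers do:

def path_mtu_discovery(path_mtus, initial_mtu):
    """Simulate PMTUD: shrink the datagram size until no hop on the
    path reports 'Fragmentation Needed and DF Set'."""
    size = initial_mtu
    while True:
        # Find the first link whose MTU is too small for this datagram.
        bottleneck = next((m for m in path_mtus if m < size), None)
        if bottleneck is None:
            return size          # the datagram fits every link on the path
        # The router's ICMP reply carries the next-hop MTU; retry with it.
        size = bottleneck

print(path_mtu_discovery([1500, 1400, 1500], initial_mtu=1500))  # -> 1400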

Key Concept: When an MTU requirement forces a datagram to be fragmented, it is split into
several smaller IP datagrams, each containing part of the original. The header of the original datagram
is changed into the header of the first fragment, and new headers are created for the other fragments.
Each is set to the same Identification value to mark them as part of the same original datagram. The
Fragment Offset of each is set to the location where the fragment belongs in the original. The More
Fragments field is set to 1 for all fragments but the last, to let the recipient know when it has received
all the fragments.
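The arithmetic is easy to check in a short sketch. This uses the same 9000-byte IOS ping that appears in the Wireshark walkthrough later in this document (20-byte header, 8980-byte payload, 1500-byte MTU):

def fragment(payload_len, header_len, mtu):
    """Split an IP payload into fragments that fit a given MTU.

    The Fragment Offset field stores offset // 8, so every fragment's
    data (except the last) must be a multiple of 8 bytes long.
    """
    max_data = (mtu - header_len) // 8 * 8   # round down to 8-byte units
    frags, offset = [], 0
    while offset < payload_len:
        data = min(max_data, payload_len - offset)
        mf = 1 if offset + data < payload_len else 0
        frags.append((offset, data, mf))
        offset += data
    return frags

for off, length, mf in fragment(8980, 20, 1500):
    print(f"byte offset={off:5}  field value={off // 8:5}  data={length:5}  MF={mf}")

The last line printed is byte offset 8880 with MF=0 and 100 bytes of data, matching the final fragment seen in the capture discussed below.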

The Don’t Fragment Flag

This flag can be set to 1 by a transmitting device to specify that a datagram not be fragmented
in transit. This may be used in certain circumstances where the entire message must be delivered
intact as pieces may not make sense. It may also be used if the destination device has a limited IP
implementation and can't reassemble fragments, and is also used for testing the maximum
transmission unit (MTU) of a link. Normally, however, devices don't care about fragmentation and this
field is left at zero.

What happens if a router encounters a datagram too large to pass over the next physical
network but with the Don't Fragment bit set to 1? It can't fragment the datagram and it can't pass it
along either, so it is “stuck”. It will generally drop the datagram, and then send back a special ICMP
Destination Unreachable error message: “Fragmentation Needed and Don't Fragment Bit Set”. This is
used in Path MTU Discovery as described in the preceding section.

Asymmetry of Fragmentation and Reassembly

Intermediate devices do not perform reassembly. This is done only by the ultimate destination of the
IP message.

Fragment Recognition and Fragmented Message Identification:

The recipient knows it has received a message fragment the first time it sees a datagram with
the More Fragments bit set to one or the Fragment Offset field set to a value other than zero. It identifies the
message based on: the source and destination IP addresses; the protocol specified in the header; and
the Identification field generated by the sender.

Reassembly is finished when the entire buffer has been filled and the fragment with the More
Fragments bit set to zero is received, indicating that it is the last fragment of the datagram. The
reassembled datagram is then processed like a normal, unfragmented datagram would be. On the
other hand, if the timer for the reassembly expires with any of the fragments missing, the message
cannot be reconstructed. The fragments are discarded, and an ICMP Time Exceeded message
generated. Since IP is unreliable, it relies on higher layer protocols such as TCP to determine that the
message was not properly received and then retransmit it.
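A minimal sketch of that bookkeeping, ignoring the reassembly timer and overlapping or duplicate fragments for simplicity:

# Fragments are identified by (source, destination, protocol, identification);
# reassembly completes once the MF=0 fragment has arrived and no gaps remain.
buffers = {}

def on_fragment(src, dst, proto, ident, offset8, data, mf):
    key = (src, dst, proto, ident)
    buf = buffers.setdefault(key, {"parts": {}, "total": None})
    buf["parts"][offset8 * 8] = data
    if mf == 0:                              # the last fragment fixes the total size
        buf["total"] = offset8 * 8 + len(data)
    if buf["total"] is not None:
        received = sum(len(d) for d in buf["parts"].values())
        if received == buf["total"]:         # no gaps: join the pieces in order
            payload = b"".join(d for _, d in sorted(buf["parts"].items()))
            del buffers[key]
            return payload
    return None                              # still waiting for fragments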

Note: There is a difference between a routable protocol and a routing protocol. IP is a routable
protocol, which means its messages (datagrams) can be routed. Examples of routing protocols are RIP
or BGP, which are used to exchange routing information between routers.
IP FRAGMENTATION IN WIRESHARK

Fragmentation. It's what happens when a big packet spawns a lot of smaller baby packets
because the MTU is not big enough, be it anywhere in transit (IPv4) or only at the source (IPv6). It also
might cause engineers to lose their sanity while troubleshooting weird problems.

Up until recently, I have to shamefully admit, I had no idea how to read a Wireshark capture
of fragmented packets. It always looked dodgy to me and I didn't make the effort to make some sense
out of it.

You can see a bunch of fragments, which it says are Reassembled in #7, but packet number 7
has a size of 134. Errr, what? Worry not, it shall all be explained below!

What's the capture about?

First things first: the screenshot above shows a capture of a ping between two routers in GNS3 with
a size of 9000. As the link between those two routers runs a 1500 MTU, this bad boy has to be
fragmented.

One tiny bit of information: a ping command in IOS with a size of 9000 will calculate the ICMP payload
so that the total IP packet is 9000 Bytes in length. The ping command on Linux or Windows will
put 9000 Bytes inside the ICMP packet, resulting in a 9028 Byte IP packet.

GNS3 allows you to take live packet captures on any link (extremely handy) and it's also a very
controlled environment. I do have a couple of mind-boggling examples from the real world, but I'm
saving those for later.

How do I read all of that?

A few fields in the IP header are of particular interest, so here's a quick refresher:

Identification - this value identifies a group of fragments. It's what tells the reassembling device which
fragments make up the original packet.

Fragment offset - once all the fragments have been received, they need to be put back in the correct
order. This field tells the reassembling device where in the original packet to place the data from each
fragment (after stripping the L2&L3 headers).

The value for the first fragment will be 0

Flags - MF bit - More Fragments means that there are additional fragments coming after this one.

It is set (1) in all but the last fragment (0).


The most important information is in the last entry (#7 for the request and #14 for the reply). It shows
a combination of the contents (and size) of the last fragment to arrive (134 bytes), but it also shows
the reassembled packet in all its glory (8980 bytes).

The key to that is noticing the tab that appears at the bottom which says Reassembled IPv4 (8980
bytes).

To make matters worse, the IP header shown inside the reassembled packet is the one from the last
fragment (notice Fragment offset is 8880 and MF is 0). On the flip side, it does tell you that the packet
has been reassembled from 7 fragments and it gives you the sizes and links to the fragments
themselves. Convenient.

The ICMP header is there and the 8972 bytes of garbage that come with it for you to analyze. In the
fragmentation process, everything coming after the IP header will be split up - in this case the ICMP
header (8 bytes) and the data (8972 bytes).
This means that the ICMP header will only be present in the first fragment (offset=0). You can check
by taking the next 8 bytes after the IP header in the reassembled frame (08 00 25 f1 00 03 00 00)
and looking for them in the first fragment. It's nowhere to be seen in the following fragments, as
expected.

What is NAT?

NAT (Network Address Translation) is a process of changing the source and destination IP addresses
and ports. Address translation reduces the need for IPv4 public addresses and hides private network
address ranges. The process is usually done by routers or firewalls.

There are three types of address translation:

1. Static NAT: translates one private IP address to a public one. The public IP address is always the
same.
2. Dynamic NAT: Private IP addresses are mapped to the pool of public IP addresses.
3. Port Address Translation (PAT): one public IP address is used for all internal devices, but a different
port is assigned to each private IP address. Also known as NAT Overload.

An example will help you understand the concept.

Computer A requests a web page from an Internet server. Because Computer A uses private IP
addressing, the source address of the request has to be changed by the router because private IP
addresses are not routable on the Internet. Router R1 receives the request, changes the source IP
address to its public IP address and sends the packet to server S1. Server S1 receives the packet and
replies to router R1. Router R1 receives the packet, changes the destination IP addresses to the private
IP address of Computer A and sends the packet to Computer A.
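A toy PAT table makes the idea concrete. The public address and port range below are made up for the sketch:

import itertools

PUBLIC_IP = "203.0.113.1"           # assumed public address for this sketch
next_port = itertools.count(49152)  # hand out ports from the dynamic range
nat_table = {}                      # (private_ip, private_port) -> public_port

def translate_outbound(src_ip, src_port):
    # Reuse an existing binding or create a new one for this inside host.
    key = (src_ip, src_port)
    if key not in nat_table:
        nat_table[key] = next(next_port)
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    # Reverse lookup for a reply coming back from the Internet.
    for (ip, port), pub in nat_table.items():
        if pub == public_port:
            return ip, port
    return None                     # no binding: drop the packet

print(translate_outbound("192.168.0.10", 51000))  # ('203.0.113.1', 49152)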
IP Security (IPSec) Protocols (IPSec works at the IP/Network layer)
IPSec modes (transport and tunnel)

IPSec core protocols: AH and ESP.

IPSec subcomponents: key exchange, encryption/hashing, SA.

IPSec suite:
Encryption/Hashing Algorithms:

AH and ESP are generic and do not specify the exact mechanism used for encryption. This
gives them the flexibility to work with a variety of such algorithms, and to negotiate which is used as
needed. Two common ones used with IPSec are Message Digest 5 (MD5) and Secure Hash Algorithm
1 (SHA-1). These are hashing algorithms: they compute a fixed-size value, called a hash, from the input
data, and in IPSec they are keyed (used as HMACs) so that only holders of the shared key can produce
the correct value.
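As a quick illustration, here is how a keyed hash works in Python; the key and message are made up:

import hmac, hashlib

# HMAC-MD5 and HMAC-SHA1 over the same message: only a holder of the
# shared key can reproduce the value, which is what gives AH/ESP their
# origin-authentication property.
key = b"shared-secret"
msg = b"IP datagram contents"
print(hmac.new(key, msg, hashlib.md5).hexdigest())
print(hmac.new(key, msg, hashlib.sha1).hexdigest())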

Security Policies and Associations, and Management Methods:

Since IPSec provides flexibility in letting different devices decide how they want to implement
security, some means is required to keep track of the security relationships between devices. This is
done in IPSec using constructs called security policies and security associations, and by providing ways
to exchange security association information (see below).

Key Exchange Framework and Mechanism:

For two devices to exchange encrypted information they need to be able to share keys for
unlocking the encryption. They also need a way to exchange security association information. In IPSec,
a protocol called the Internet Key Exchange (IKE) provides these capabilities.

ISAKMP (Internet Security Association and Key Management Protocol) exists for the purpose
of securely establishing keying material, over an insecure medium (Internet), for IPsec to use.

If IPsec keys are to be determined auto-magically across the Internet, then there needs to be a
way of first validating that the other party you are securely exchanging keys with is who they say they
are. That is the other part of what ISAKMP does.

Transport mode

In transport mode, IPSec protection is applied only to the IP payload, not to the IP header. The
AH and/or ESP headers appear between the original, single IP header and the IP payload.

Tunnel Mode

In tunnel mode, IPSec is used to protect a complete encapsulated IP datagram after the IP
header has already been applied to it. The IPSec headers appear in front of the original IP header, and
then a new IP header is added in front of the IPSec header. That is to say, the entire original IP
datagram is secured and then encapsulated within another IP datagram.

Security Policies:

A security policy is a rule that is programmed into the IPSec implementation that tells it how
to process different datagrams received by the device. For example, security policies are used to
decide whether a particular packet needs to be processed by IPSec or not; those that do not need it
bypass AH and ESP entirely. If security is required, the security policy provides general guidelines for how it should be
provided, and if necessary, links to more specific detail.

Security policies for a device are stored in the device's Security Policy Database (SPD).
Security Associations:

A Security Association (SA) is a set of security information that describes a particular kind of secure
connection between one device and another. You can consider it a "contract", if you will, that specifies
the particular security mechanisms that are used for secure communications between the two.

A device's security associations are contained in its Security Association Database (SAD).

Security Association Triples and the Security Parameter Index (SPI)

Security associations don't actually have names, however. They are instead defined by a set of three
parameters, called a triple:

Security Parameter Index (SPI):

A 32-bit number that is chosen to uniquely identify a particular SA for any connected device. The SPI
is placed in AH or ESP datagrams and thus links each secure datagram to the security association. It is
used by the recipient of a transmission so it knows what SA governs the datagram.

IP Destination Address: The address of the device for whom the SA is established.

Security Protocol Identifier:

Specifies whether this association is for AH or ESP. If both are in use with this device they have
separate SAs.

Authentication Header:

We use a special hashing algorithm and a specific key known only to the source and the
destination. A security association between two devices is set up that specifies these particulars so
that the source and destination know how to perform the computation but nobody else can. On the
source device, AH performs the computation and puts the result (called the Integrity Check Value or
ICV) into a special header with other fields for transmission. The destination device does the same
calculation using the key the two devices share, which enables it to see immediately if any of the fields
in the original datagram were modified (either due to error or malice).

Encapsulating Security Payload Fields:

ESP has several fields that are the same as those used in AH, but packages its fields in a very
different way. Instead of having just a header, it divides its fields into three components:

ESP Header:

This contains two fields, the SPI and Sequence Number, and comes before the encrypted data.
Its placement depends on whether ESP is used in transport mode or tunnel mode, as explained in the
topic on IPSec modes.

ESP Trailer:

This section is placed after the encrypted data. It contains padding that is used to align the encrypted
data, through a Padding and Pad Length field. Interestingly, it also contains the Next Header field for
ESP.
ESP Authentication Data:

This field contains an Integrity Check Value (ICV), computed in a manner similar to how the AH
protocol works, for when ESP's optional authentication feature is used.

There are two reasons why these fields are broken into pieces like this. The first is that some
encryption algorithms require the data to be encrypted to have a certain block size, and so padding
must appear after the data and not before it. That's why padding appears in the ESP Trailer. The
second is that the ESP Authentication Data appears separately because it is used to authenticate the
rest of the encrypted datagram after encryption. This means it cannot appear in the ESP Header or
ESP Trailer.

Key Concept: The IPSec Encapsulating Security Payload protocol allows the contents of a datagram to
be encrypted, to ensure that only the intended recipient is able to see the data. It is implemented
using three components: an ESP Header added to the front of a protected datagram, an ESP Trailer
that follows the protected data, and an optional ESP Authentication Data field that provides
authentication services similar to those provided by the Authentication Header (AH).

Ref: http://www.tcpipguide.com/free/t_IPSecKeyExchangeIKE.htm

IPSec Key Exchange (IKE)


IPSec, like many secure networking protocols sets, is based on the concept of a “shared
secret”. Two devices that want to send information securely encode and decode it using a piece of
information that only they know. Anyone who isn't “in” on the secret is able to intercept the
information but is prevented either from reading it (if ESP is used to encrypt the payload) or from
tampering with it undetected (if AH is used). Before either AH or ESP can be used, however, it is
necessary for the two devices to exchange the “secret” that the security protocols themselves will use.
The primary support protocol used for this purpose in IPSec is called Internet Key Exchange (IKE).

IKE is defined in RFC 2409, and is one of the more complicated of the IPSec protocols to
comprehend. In fact, it is simply impossible to truly understand more than a gross simplification of its
operation without significant background in cryptography. I don't have a background in cryptography
and I must assume that you, my reader, do not either. So rather than fill this topic with baffling
acronyms and unexplained concepts, I will just provide a brief outline of IKE and how it is used.

IKE Overview and Relationship to Other Key Exchange Methods

The purpose of IKE is to allow devices to exchange information required for secure
communication. As the title suggests, this includes cryptographic keys used for encoding
authentication information and performing payload encryption. IKE works by allowing IPSec-capable
devices to exchange security associations (SAs), to populate their security association databases
(SADs). These are then used for the actual exchange of secured datagrams with the AH and ESP
protocols.

IKE is considered a “hybrid” protocol because it combines (and supplements) the functions of
three other protocols. The first of these is the Internet Security Association and Key Management
Protocol (ISAKMP). This protocol provides a framework for exchanging encryption keys and security
association information. It operates by allowing security associations to be negotiated through a series
of phases.
ISAKMP is a generic protocol that supports many different key exchange methods. In IKE, the
ISAKMP framework is used as the basis for a specific key exchange method that combines features
from two key exchange protocols:

OAKLEY: Describes a specific mechanism for exchanging keys through the definition of various key
exchange “modes”. Most of the IKE key exchange process is based on OAKLEY.

SKEME: Describes a different key exchange mechanism than OAKLEY. IKE uses some features from
SKEME, including its method of public key encryption and its fast re-keying feature.

IKE Operation

So, IKE doesn't strictly implement either OAKLEY or SKEME but takes bits of each to form its
own method of using ISAKMP. Clear as mud, I know. Since IKE functions within the framework of
ISAKMP, its operation is based on the ISAKMP phased negotiation process. There are two phases:

ISAKMP Phase 1:

The first phase is a “setup” stage where two devices agree on how to exchange further
information securely. This negotiation between the two units creates a security association for
ISAKMP itself; an ISAKMP SA. This security association is then used for securely exchanging more
detailed information in Phase 2.

ISAKMP Phase 2:

In this phase the ISAKMP SA established in Phase 1 is used to create SAs for other security
protocols. Normally, this is where the parameters for the “real” SAs for the AH and ESP protocols
would be negotiated.

An obvious question is why IKE bothers with this two-phased approach; why not just negotiate
the security association for AH or ESP in the first place? Well, even though the extra phase adds
overhead, multiple Phase 2 negotiations can be conducted after one Phase 1, which amortizes the
extra “cost” of the two-phase approach. It is also possible to use a simpler exchange method for Phase
2 once the ISAKMP security association has been established in Phase 1.

The ISAKMP security association negotiated during Phase 1 includes the negotiation of the
following attributes used for subsequent negotiations:

An encryption algorithm to be used, such as the Data Encryption Standard (DES).

A hash algorithm (MD5 or SHA, as used by AH or ESP).

An authentication method, such as authentication using previously shared keys.

A Diffie-Hellman group. Diffie and Hellman were two pioneers in the industry who invented public-
key cryptography. In public-key methods, instead of encrypting and decrypting with the same key, data is
encrypted using a public key knowable to anyone and decrypted using a private key that is kept secret.
A Diffie-Hellman group defines the parameters (such as the prime modulus and generator) used to
perform this type of cryptography. Four predefined groups derived from OAKLEY are specified in IKE,
and provision is allowed for defining new groups as well.

Note that even though security associations in general are unidirectional, the ISAKMP SA is established
bidirectionally. Once Phase 1 is complete, then, either device can set up a subsequent SA for AH or
ESP using it.
RFC # 2408
The Internet Security Association and Key Management Protocol (ISAKMP) defines the
procedures for authenticating a communicating peer, creation and management of Security Associations, key
generation techniques, and threat mitigation (e.g. denial of service and replay attacks). All of these are
necessary to establish and maintain secure communications (via IP Security Service or any other security
protocol) in an Internet environment.

ISAKMP Header.
Next Payload Types:

Value  Next Payload Type
1      Security Association (SA)
2      Proposal (P)
4      Key Exchange (KE)
5      Identification (ID)
8      Hash (HASH)
11     Notification (N)

Exchange Types:
Value  Exchange Type
2      Identity Protection
4      Aggressive
32     Quick Mode

Notify Messages - Error Types (1-8191)

Value Notify Messages - Error Types Reference

1 INVALID-PAYLOAD-TYPE [RFC2408]
2 DOI-NOT-SUPPORTED [RFC2408]
3 SITUATION-NOT-SUPPORTED [RFC2408]
4 INVALID-COOKIE [RFC2408]
5 INVALID-MAJOR-VERSION [RFC2408]
6 INVALID-MINOR-VERSION [RFC2408]
7 INVALID-EXCHANGE-TYPE [RFC2408]
8 INVALID-FLAGS [RFC2408]
9 INVALID-MESSAGE-ID [RFC2408]
10 INVALID-PROTOCOL-ID [RFC2408]
11 INVALID-SPI [RFC2408]
12 INVALID-TRANSFORM-ID [RFC2408]
13 ATTRIBUTES-NOT-SUPPORTED [RFC2408]
14 NO-PROPOSAL-CHOSEN [RFC2408]
15 BAD-PROPOSAL-SYNTAX [RFC2408]
16 PAYLOAD-MALFORMED [RFC2408]
17 INVALID-KEY-INFORMATION [RFC2408]
18 INVALID-ID-INFORMATION [RFC2408]
19 INVALID-CERT-ENCODING [RFC2408]
20 INVALID-CERTIFICATE [RFC2408]
21 CERT-TYPE-UNSUPPORTED [RFC2408]
22 INVALID-CERT-AUTHORITY [RFC2408]
23 INVALID-HASH-INFORMATION [RFC2408]
24 AUTHENTICATION-FAILED [RFC2408]
25 INVALID-SIGNATURE [RFC2408]
26 ADDRESS-NOTIFICATION [RFC2408]
27 NOTIFY-SA-LIFETIME [RFC2408]
28 CERTIFICATE-UNAVAILABLE [RFC2408]
29 UNSUPPORTED-EXCHANGE-TYPE [RFC2408]
30 UNEQUAL-PAYLOAD-LENGTHS [RFC2408]
31-8191 RESERVED (Future Use)
The following notation is used throughout this memo.

HDR is an ISAKMP header whose exchange type is the mode. When written as HDR* it indicates
payload encryption.

SA is an SA negotiation payload with one or more proposals. An initiator MAY provide multiple
proposals for negotiation; a responder MUST reply with only one.

<P>_b indicates the body of payload <P> -- the ISAKMP generic payload header is not included.

SAi_b is the entire body of the SA payload (minus the ISAKMP generic header) -- i.e. the DOI, situation,
all proposals and all transforms offered by the Initiator.

CKY-I and CKY-R are the Initiator's cookie and the Responder's cookie, respectively, from the ISAKMP
header.

g^xi and g^xr are the Diffie-Hellman ([DH]) public values of the initiator and responder respectively.

g^xy is the Diffie-Hellman shared secret.

KE is the key exchange payload which contains the public information exchanged in a Diffie-Hellman
exchange. There is no particular encoding (e.g. a TLV) used for the data of a KE payload.

Nx is the nonce payload; x can be: i or r for the ISAKMP initiator and responder respectively. (Ni_b is
the Initiator's nonce, and Nr_b is the Responder's nonce.)

IDx is the identification payload for "x". x can be: "ii" or "ir" for the ISAKMP initiator and responder
respectively during phase one negotiation; or "ui" or "ur" for the user initiator and responder
respectively during phase two. The ID payload format for the Internet DOI is defined in [Pip97].

SIG is the signature payload. The data to sign is exchange-specific.

CERT is the certificate payload.

HASH (and any derivative such as HASH(2) or HASH_I) is the hash payload. The contents of the hash
are specific to the authentication method.

prf(key, msg) is the keyed pseudo-random function-- often a keyed hash function-- used to generate
a deterministic output that appears pseudo-random. prf's are used both for key derivations and for
authentication (i.e. as a keyed MAC). (See [KBC96]).

SKEYID is a string derived from secret material known only to the active players in the exchange.

SKEYID_e is the keying material used by the ISAKMP SA to protect the confidentiality of its messages.

SKEYID_a is the keying material used by the ISAKMP SA to authenticate its messages.

SKEYID_d is the keying material used to derive keys for non-ISAKMP security associations.

The length of nonce payload MUST be between 8 and 256 bytes inclusive.
EXCHANGES
Main Mode is an instantiation of the ISAKMP Identity Protect Exchange: the first two messages
negotiate policy; the next two exchange Diffie-Hellman public values and ancillary data (e.g., nonces)
necessary for the exchange; and the last two messages authenticate the Diffie-Hellman exchange. The
authentication method negotiated as part of the initial ISAKMP exchange influences the composition
of the payloads but not their purpose. The XCHG for Main Mode is ISAKMP Identity Protect.

Similarly, Aggressive Mode is an instantiation of the ISAKMP Aggressive Exchange. The first two
messages negotiate policy, exchange Diffie-Hellman public values and ancillary data necessary for the
exchange, and identities. In addition the second message authenticates the responder. The third
message authenticates the initiator and provides a proof of participation in the exchange. The XCHG
for Aggressive Mode is ISAKMP Aggressive.

For signatures: SKEYID = prf(Ni_b | Nr_b, g^xy)

For public key encryption: SKEYID = prf(hash(Ni_b | Nr_b), CKY-I | CKY-R)

For pre-shared keys: SKEYID = prf(pre-shared-key, Ni_b | Nr_b)

The result of either Main Mode or Aggressive Mode is three groups of authenticated keying material:

SKEYID_d = prf(SKEYID, g^xy | CKY-I | CKY-R | 0)

SKEYID_a = prf(SKEYID, SKEYID_d | g^xy | CKY-I | CKY-R | 1)

SKEYID_e = prf(SKEYID, SKEYID_a | g^xy | CKY-I | CKY-R | 2)

and agreed upon policy to protect further communications. The values of 0, 1, and 2 above are
represented by a single octet. The key used for encryption is derived from SKEYID_e in an algorithm-
specific manner (see appendix B).

To authenticate either exchange the initiator of the protocol generates HASH_I and the responder
generates HASH_R where:

HASH_I = prf(SKEYID, g^xi | g^xr | CKY-I | CKY-R | SAi_b | IDii_b )

HASH_R = prf(SKEYID, g^xr | g^xi | CKY-R | CKY-I | SAi_b | IDir_b )
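These derivations are easy to mirror in code. Below is a sketch, using HMAC-SHA1 as the prf and made-up byte strings standing in for the real exchange values, of the pre-shared-key SKEYID and the three keying groups:

import hmac, hashlib

def prf(key: bytes, msg: bytes) -> bytes:
    # RFC 2409 uses the negotiated hash as a keyed prf; HMAC-SHA1 here.
    return hmac.new(key, msg, hashlib.sha1).digest()

# Toy stand-ins for the real exchange values.
pre_shared_key = b"secret"
Ni_b, Nr_b = b"initiator-nonce", b"responder-nonce"
g_xy = b"diffie-hellman-shared-secret"
CKY_I, CKY_R = b"cookie-I", b"cookie-R"

# Pre-shared-key authentication: SKEYID = prf(pre-shared-key, Ni_b | Nr_b)
SKEYID = prf(pre_shared_key, Ni_b + Nr_b)

# The three keying groups, chained exactly as in the formulas above;
# 0, 1, and 2 are each a single octet.
SKEYID_d = prf(SKEYID, g_xy + CKY_I + CKY_R + b"\x00")
SKEYID_a = prf(SKEYID, SKEYID_d + g_xy + CKY_I + CKY_R + b"\x01")
SKEYID_e = prf(SKEYID, SKEYID_a + g_xy + CKY_I + CKY_R + b"\x02")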


Phase 1 Authenticated With a Pre-Shared Key:

A key derived by some out-of-band mechanism may also be used to authenticate the exchange. The
actual establishment of this key is out of the scope of this document.

When doing a pre-shared key authentication, Main Mode is defined as follows:

Initiator Responder

---------- -----------

HDR, SA -->

<-- HDR, SA

HDR, KE, Ni -->

<-- HDR, KE, Nr

HDR*, IDii, HASH_I -->

<-- HDR*, IDir, HASH_R

Aggressive mode with a pre-shared key is described as follows:

Initiator Responder

----------- -----------

HDR, SA, KE, Ni, IDii -->

<-- HDR, SA, KE, Nr, IDir, HASH_R

HDR*, HASH_I -->

When using pre-shared key authentication with Main Mode the key can only be identified by the IP
address of the peers since HASH_I must be computed before the initiator has processed IDir.
Aggressive Mode allows for a wider range of identifiers of the pre-shared secret to be used. In
addition, Aggressive Mode allows two parties to maintain multiple, different pre-shared keys and
identify the correct one for a particular exchange.
NAT-T (RFC – 3947)
Detecting Support of NAT-Traversal:

The NAT-Traversal capability of the remote host is determined by an exchange of vendor ID
payloads. In the first two messages of Phase 1, the vendor ID payload for this specification MUST be
sent if supported (and it MUST be received by both sides) for the NAT-Traversal probe to continue.
The content of the payload is the MD5 hash of "RFC 3947".

The exact content in hex for the payload is 4a131c81070358455c5728f20e95452f
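You can verify that value yourself with Python's hashlib:

import hashlib
# The NAT-T vendor ID is the MD5 hash of the string "RFC 3947".
print(hashlib.md5(b"RFC 3947").hexdigest())
# -> 4a131c81070358455c5728f20e95452f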

Detecting the Presence of NAT

The NAT-D payload not only detects the presence of NAT between the two IKE peers, but also detects
where the NAT is. The location of the NAT device is important, as the keepalives have to initiate from
the peer "behind" the NAT.

To detect NAT between the two hosts, we have to detect whether the IP address or the port changes
along the path. This is done by sending the hashes of the IP addresses and ports of both IKE peers
from each end to the other. If both ends calculate those hashes and get same result, they know there
is no NAT between. If the hashes do not match, somebody has translated the address or port. This
means that we have to do NAT-Traversal to get IPsec packets through.

If the sender of the packet does not know his own IP address (in case of multiple interfaces, and the
implementation does not know which IP address is used to route the packet out), the sender can
include multiple local hashes to the packet (as separate NAT-D payloads). In this case, NAT is detected
if and only if none of the hashes match.

The hashes are sent as a series of NAT-D (NAT discovery) payloads. Each payload contains one hash,
so in case of multiple hashes, multiple NAT-D payloads are sent. In the normal case there are only
two NAT-D payloads.

The NAT-D payloads are included in the third and fourth packets of Main Mode, and in the second and
third packets in the Aggressive Mode.

RFC 3947 Negotiation of NAT-Traversal in the IKE January 2005

The format of the NAT-D payload is defined in RFC 3947; the payload type for the NAT discovery
payload is 20.

The HASH is calculated as follows: HASH = HASH(CKY-I | CKY-R | IP | Port)

This uses the negotiated HASH algorithm. All data inside the HASH is in the network byte-order. The
IP is 4 octets for an IPv4 address and 16 octets for an IPv6 address. The port number is encoded as a
2 octet number in network byte-order. The first NAT-D payload contains the remote end's IP address
and port (i.e., the destination address of the UDP packet). The remaining NAT-D payloads contain
possible local-end IP addresses and ports (i.e., all possible source addresses of the UDP packet).
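A sketch of the hash computation, with SHA-1 standing in for whatever hash the peers negotiated; the cookies and address are made up:

import hashlib, socket, struct

def nat_d_hash(cky_i: bytes, cky_r: bytes, ip: str, port: int) -> bytes:
    """HASH = HASH(CKY-I | CKY-R | IP | Port), all in network byte order."""
    ip_bytes = socket.inet_aton(ip)          # 4 octets for an IPv4 address
    port_bytes = struct.pack("!H", port)     # 2-octet port number
    return hashlib.sha1(cky_i + cky_r + ip_bytes + port_bytes).digest()

# Each side compares received NAT-D hashes against locally computed ones;
# a mismatch means an address or port was translated somewhere en route.
local = nat_d_hash(b"cookie-I", b"cookie-R", "198.51.100.7", 500)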

If there is no NAT between the peers, the first NAT-D payload received should match one of the local
NAT-D payloads (i.e., the local NAT-D payloads this host is sending out), and one of the other NAT-D
payloads must match the remote end's IP address and port. If the first check fails (i.e., first NAT-D
payload does not match any of the local IP addresses and ports), it means that there is dynamic NAT
between the peers, and this end should start sending keepalives as defined in the [RFC3948] (this end
is behind the NAT).

If a NAT device has been determined to exist, NAT-T changes the ISAKMP transport starting with
ISAKMP Main Mode messages five and six, at which point all ISAKMP packets change from UDP port 500 to
UDP port 4500. NAT-T encapsulates the Quick Mode (IPsec Phase 2) exchange inside UDP 4500 as
well. After Quick Mode completes, data that gets encrypted on the IPsec Security Association is
encapsulated inside UDP port 4500 too, thus providing a port to be used in the PAT device for
translation.

NAT-T encapsulates ESP packets inside UDP and assigns both the Source and Destination ports as
4500. After this encapsulation there is enough information for the PAT database binding to build
successfully. Now ESP packets can be translated through a PAT device.

When a packet with source and destination port of 4500 is sent through a PAT device (from inside to
outside), the PAT device will change the source port from 4500 to a random high port, while keeping
the destination port of 4500. When a different NAT-T session passes through the PAT device, it will
change the source port from 4500 to a different random high port, and so on. This way each local host
has a unique database entry in the PAT device, mapping its RFC 1918 IP address/port 4500 to the public
IP address/high-port.
Perfect Forward Secrecy (PFS)
PFS performs an additional Diffie-Hellman exchange in Phase 2. Perfect Forward Secrecy (PFS) is
the method that the device uses to generate encryption keys: PFS generates each new encryption
key independently from the previous key.

Instead of making use of the DH Keys Calculated during Phase-1, PFS forces DH-Key calculation
during Phase-2 Setup as well as Phase-2 periodic Rekey. The PFS ensures that the same key will not be
generated and used again.

Think about a scenario where a private key has been compromised by a hacker. The hacker would be
able to access the data in network transit that is protected by that key. If we keep using the
same key, all future data will be compromised as well. By utilizing PFS, we force the IPSec VPN tunnel
to generate and use a different key when it is first set up as well as during each periodic rekey. No
future data is compromised, because each period uses a new, independently generated key.

On a Cisco ASA, if the peer initiates the negotiation and the local configuration specifies PFS,
the peer must perform a PFS exchange or the negotiation fails. If the local configuration does not
specify a group, the ASA assumes a default of group2. If the local configuration does not specify PFS,
it accepts any offer of PFS from the peer. The best practice is to configure all VPN peers with PFS and
matching group.

With PFS, every time a new security association (SA) is negotiated, a new Diffie-Hellman
exchange occurs, which requires additional processing time. On most modern hardware-based VPN
appliances the overhead is negligible.

What is the difference between the AH and ESP protocols of IPSec?

Authentication Header (AH):

The AH protocol provides a mechanism for authentication only (It does not encrypt any data at all).
AH provides data integrity, data origin authentication, and an optional replay protection service. Data
integrity is ensured by using a message digest that is generated by an algorithm such as HMAC-MD5
or HMAC-SHA. Data origin authentication is ensured by using a shared secret key to create the
message digest. Replay protection is provided by using a sequence number field with the AH header.
AH authenticates IP headers and their payloads, with the exception of certain header fields that can
be legitimately changed in transit, such as the Time To Live (TTL) field.

Note: AH does not work through a NATed network because it hashes both the payload and the header
of a packet, while NAT changes the IP header during translation; as a result, the receiving device
will believe the packet has been altered in transit and will reject it.

Encapsulating Security Payload (ESP):

The ESP protocol provides data confidentiality (encryption) and authentication (data integrity, data
origin authentication, and replay protection). ESP can be used with confidentiality only, authentication
only, or both confidentiality and authentication. When ESP provides authentication functions, it uses
the same algorithms as AH, but the coverage is different. AH-style authentication authenticates the
entire IP packet, including the outer IP header, while the ESP authentication mechanism authenticates
only the IP datagram portion of the IP packet.

Note: ESP provides all of confidentiality, authentication, and integrity services; because the hash
ESP uses for data integrity does not include the outer IP header of the packet, ESP works
normally through a NATed device.
AH Header

ESP Header

IPSec Protocol suite


IPSec Transport mode
IPSec Tunnel mode
ICMP
ICMP is a network layer protocol. There is no TCP or UDP port number associated with ICMP packets
as these numbers are associated with the transport layer above

ICMP uses the basic support of IP as if it were a higher level protocol, however, ICMP is actually an
integral part of IP. Although ICMP messages are contained within standard IP packets, ICMP messages
are usually processed as a special case, distinguished from normal IP processing. In many cases, it is
necessary to inspect the contents of the ICMP message and deliver the appropriate error message to
the application responsible for transmission of the IP packet that prompted the sending of the ICMP
message.

The Traceroute Command:

The traceroute command is used to discover the routes that packets actually take when
traveling to their destination. The device (for example, a router or a PC) sends out a sequence of User
Datagram Protocol (UDP) datagrams to an invalid port address at the remote host.

The source using UDP traceroute sends UDP packet to an "invalid port number". The source
does not expect the end device to recognize this port and expects the end device to send an ICMP
"port unreachable message" back to the source, suggesting it does not recognize the UDP port number
it is supposed to look into. However, the "destination has been contacted" and we have the path all
along the way. Again this is done by incrementing the TTL value till the destination device is reached
and can send a “port unreachable” message.

So now what if there is an ACL at the destination that denies the ICMP "echo reply" from the
destination or the incoming "echo request"? The source would not be able to trace the path because
it would not receive the ICMP "echo reply" from the destination device. So UDP traceroute is used.

Traceroute gives insight into your network problems:

• The entire path that a packet travels through.
• Names and identities of the routers and devices in your path.
• Network latency, or more specifically the time taken to send and receive data to each device
on the path.

"It’s a tool that can be used to verify the path that your data will take to reach its destination, without
actually sending your data."
How Trace Route Works:

Let’s say you have to reach the destination R4 from R1 - - -(R1-R2-R3-R4)

1. An ICMP request packet is generated by R1. The initial TTL value in the IP header is set to 1.

2. The first router (R2) on the route to the destination receives this packet, but drops it because
decrementing the TTL brings it to "0". R2 then sends an ICMP Time Exceeded message back to the
source (always back to the source).

3. The source (R1) receives this, and now knows the details of the first router (R2) on the path to
the destination.

4. Next, the source (R1) increments the TTL value to "2", so the ICMP request is able to reach the
next router (R3) on the way before the TTL is decremented to "0" there and a Time Exceeded
message is sent back to R1. Now, R1 is aware of R3. This goes on until R4 (the destination/target)
echo-replies back to R1 (the source), at a TTL value of 3.
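The same TTL-increment logic can be sketched with sockets. Below is a minimal UDP traceroute, a sketch only: it assumes Linux/macOS and needs root privileges for the raw ICMP receive socket, and real implementations vary the destination port per probe:

import socket

def traceroute(dest, max_hops=30, port=33434):
    """Send UDP probes with increasing TTL and read the ICMP Time
    Exceeded replies to learn each hop's address."""
    dest_ip = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        rx = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                           socket.getprotobyname("icmp"))
        rx.settimeout(2.0)
        tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        tx.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        tx.sendto(b"", (dest_ip, port))      # probe to an unlikely port
        try:
            _, (hop, _) = rx.recvfrom(512)   # Time Exceeded or Unreachable
            print(ttl, hop)
        except socket.timeout:
            hop = None
            print(ttl, "*")
        finally:
            tx.close(); rx.close()
        if hop == dest_ip:                   # Port Unreachable: we arrived
            break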
PMTU

ICMP type 3 code 4 messages are "fragmentation needed but don't fragment set". This means your
device sent a packet larger than the MTU of the next-hop link on the device sending the ICMP message
to you. Normally the packet could be fragmented, but the DF bit was set.

ICMP Source Quench message (type 4, code 0)

In IPv4, a device that is forced to drop datagrams due to congestion provides feedback to the sources
that overwhelmed it by sending them ICMPv4 Source Quench messages. Just as we use water to
quench a fire, a Source Quench method is a signal that attempts to quench a source device that is
sending too fast. In other words, it's a polite way for one IP device to tell another: “SLOW DOWN!”
When a device receives one of these messages it knows it needs to cut down on how fast it is sending
datagrams to the device that sent it.

PING
The original PING command stood for "Packet Internet Groper”.

Ping is a basic Internet tool that allows a user to verify that a particular IP address exists and can accept
requests. The verb ping means the act of using the ping utility or command. Ping is used diagnostically
to ensure that a host computer you are trying to reach is actually operating. If, for example, a user
cannot ping a host, then the user will be unable to use the File Transfer Protocol (FTP) to send files to
that host. Ping can also be used with a host that is operating to see how long it takes to get a response
back. Using ping, you can learn the number form of the IP address from the symbolic domain name.

Loosely, ping means "to get the attention of" or "to check for the presence of" another party online.
Ping operates by sending a packet to a designated address and waiting for a response. The computer
acronym (for Packet Internet or Inter-Network Groper) was contrived to match the submariners' term
for the sound of a returned sonar pulse.

Ping can also refer to the process of sending a message to all the members of a mailing list requesting
an ACK (acknowledgment code). This is done before sending e-mail in order to confirm that all of the
addresses are reachable.

The Internet Ping command bounces a small packet off a domain or IP address to test network
communications, and then determines how long the packet took to make the round trip. The Ping
command is one of the most commonly used utilities on the Internet by both people and automated
programs for conducting the most basic network test: can your computer reach another computer on
the network, and if so how long does it take?

Every second of the day there are untold millions of pings flashing back and forth between computers
on the Internet like a continuous shower of electronic neural sparks. The following subsections provide
information on how Ping was invented, how Ping works, how to use Ping, Ping web sites, and info on
the original Unix Ping version.

How Ping works:

The Internet Ping program works much like a sonar echo-location, sending a small packet of
information containing an ICMP ECHO_REQUEST to a specified computer, which then sends an
ECHO_REPLY packet in return. The IP address 127.0.0.1 is set by convention to always indicate your
own computer. Therefore, a ping to that address will always ping yourself and the delay should be
very short. This provides the most basic test of your local communications.
How to use Ping:

You can use the Ping command to perform several useful Internet network diagnostic tests, such as
the following:

Access: You can use Ping to see if you can reach another computer. If you cannot ping a site at all, but
you can ping other sites, then it is a pretty good sign that your Internet network is working, and that
site is down. On the other hand, if you cannot ping any site, then likely your entire network connection
is down due to a bad connection.

Time & distance: You can use the Ping command to determine how long it takes to bounce a packet
off of another site, which tells you its Internet distance in network terms. For example, a web site
hosted on your neighbor's computer next door with a different Internet service provider might go
through more routers and be farther away in network distance than a site on the other side of the
ocean with a direct connection to the Internet backbone.

If a site seems slow, you can compare ping distances to other Internet sites to determine whether it
is the site, the network, or your system that is slow. You can also compare ping times to get an idea of
which sites have the fastest network access and would be most efficient for downloading, chat, and
other applications.

Domain IP address: You can use the Ping command to probe either a domain name or an IP address.
If you ping a domain name, it helpfully displays the corresponding IP address in the response.

You can run the ping command on a Windows computer by opening a command prompt window and
then typing "ping" followed by the domain name or IP address of the computer you wish to ping.
DHCP
DHCP (Dynamic Host Configuration Protocol) is one of the most common protocols; everyone
understands what it does, but very few spend the time to learn how it works. So in this post we
will look at how DHCP works in wired & wireless networks. I have set up a simple lab (as shown below)
with a switch, WLC, AP & DHCP server (Microsoft DHCP server on a VM). The switch has been configured
with basic SVI interfaces with the listed gateway addresses.

First we will check how DHCP works in a wired environment by capturing packets in Wireshark on a
wired PC's Ethernet interface while it is acquiring an IP from the DHCP server.

As you can see there are 4 types of packets (Discover, Offer, Request, ACK, i.e., DORA) exchanged
before the PC gets an IP. We will look at each of these packets in detail.

Here is the inside of the DHCP Discover packet. As you can see, at layer 4 it uses UDP with src
port 68 & dst port 67, which are bootpc (client) & bootps (server). Actually, DHCP is an extension of
the BootP protocol. This Discover msg includes certain options (53, 61, 12, 60, 55); sometimes these
fields are used to identify the client to the DHCP server. At layer 3 the src is 0.0.0.0 (as the client
has not yet acquired an IP) & the dst (255.255.255.255) is an all-subnets broadcast. At layer 2 the src
MAC is the PC's NIC MAC address whereas the dst MAC is the broadcast MAC.
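To make the BootP layout concrete, here is a minimal sketch that builds such a Discover message; the MAC address is made up, and real clients add more options than the single option 53 shown:

import struct, os

def build_dhcp_discover(mac: bytes) -> bytes:
    """Build a minimal DHCPDISCOVER (BOOTP fields + option 53), as seen
    in the capture: UDP 68 -> 67, src IP 0.0.0.0, dst 255.255.255.255."""
    xid = os.urandom(4)                       # random transaction ID
    bootp = struct.pack("!BBBB4sHH4s4s4s4s16s64s128s",
        1, 1, 6, 0,        # op=BOOTREQUEST, htype=Ethernet, hlen=6, hops=0
        xid, 0, 0x8000,    # secs=0, flags: broadcast bit set
        b"\x00" * 4,       # ciaddr: client has no IP yet
        b"\x00" * 4, b"\x00" * 4, b"\x00" * 4,   # yiaddr/siaddr/giaddr
        mac.ljust(16, b"\x00"),                  # chaddr
        b"\x00" * 64, b"\x00" * 128)             # sname, file
    magic = b"\x63\x82\x53\x63"                  # DHCP magic cookie
    options = bytes([53, 1, 1]) + b"\xff"        # option 53 = DISCOVER, end
    return bootp + magic + options

pkt = build_dhcp_discover(bytes.fromhex("001122334455"))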
This layer 2 broadcast message goes to all hosts in that subnet & reaches the switch SVI (int vlan
13 - the gateway). Since the DHCP server is in a different subnet (vlan 200), this DHCP Discover msg will
not reach it (the broadcast is limited to the local subnet). Once you configure the “ip helper-address
192.168.200.1” command under interface vlan 13, the DHCP Discover msg is sent as a unicast packet to
the DHCP server. This function of forwarding the DHCP Discover msg to the DHCP server is called DHCP
relaying. The DHCP server will then send a DHCP Offer msg.

As the switch is acting as a DHCP relay (note that the switch's int vlan 13 IP is listed as the relay-agent
IP in this packet), it receives the DHCP Offer msg from the DHCP server & then sends it to the client.
This packet includes BootP options like IP address, subnet mask, lease time, DHCP server IP, domain
name, default gateway, etc. The UDP src port is 67 (as it is coming from the server) & the dst port is
68 (to the client). At layer 3, the switch sets its vlan 13 IP address as the src IP of this packet & the
dst IP is a layer 3 broadcast (255.255.255.255). At layer 2 it goes as a broadcast frame.
Once the client receives this Offer message, it sends a DHCP Request message asking for that IP. By
this time the client knows what the “offered client IP” in the DHCP Offer was & therefore the Request
msg includes that IP (10.10.13.10 in this case). It also lists the DHCP server address (this way, even if
multiple DHCP servers responded, the client can choose which DHCP server to ask for the IP). Since
traffic is going from the client, the UDP src port is 68 & the dst port is 67. The layer 3 src is still
0.0.0.0 & the dst 255.255.255.255. At layer 2 this goes as a broadcast.
Finally the client gets the DHCP ACK, confirming it can use the requested IP. This packet's dst IP is
still a layer 3 broadcast (as the client does not yet have an IP) & hence the layer 2 frame goes as a
broadcast as well.

Once the client gets this frame & processes it, it can confirm that its MAC address is listed as the
client MAC in the BootP fields. Then it assigns the given IP to the NIC. As you can see, the next thing
it does is send an ARP request to find its gateway's (10.10.13.1, listed in the BootP options) MAC
address. Then the client knows everything (layer 2 & 3) needed to communicate with the rest of the
network.

As you can see, these DHCP messages go as local subnet broadcasts, so any host (acting as a rogue
DHCP server) in that subnet can respond to a client's DHCP request & could potentially issue a wrong
IP to the client (usually faster than the proper DHCP server, which sits outside of the user subnet).
To prevent this, the “DHCP snooping” feature needs to be enabled (I will describe this in a separate
post).

Now we will look at how things work in a wireless setup. This time I am capturing packets at the
WLC-connected switch port (G1/0/1). Here is my Wireshark capture while a wireless client is getting an
IP. Since every packet is CAPWAP-encapsulated between AP <-> WLC, you will see each type of packet
twice at the switch port (i.e., AP -> WLC, WLC -> DHCP server & vice versa).

Looking at the DHCP Discover msg as it goes to the WLC, the AP encapsulates the original packet in
CAPWAP (UDP dst port 5247). Traffic goes from the AP to the AP-Manager IP address. The inside
information is identical to what you saw in the wired DHCP Discover message.

As you can see, the WLC acts as a DHCP relay for the client and forwards this Discover msg to the
DHCP server. It uses the interface (vlan 14) IP assigned to the WLAN where the client is trying to
connect. Note that both the src & dst UDP ports are 67 as the traffic goes from the DHCP relay to
the DHCP server.
Then the WLC gets the DHCP Offer msg from the DHCP server and forwards it to the AP with CAPWAP
encapsulation.

When the WLC forwards this Offer message to the AP, it uses its virtual interface IP (1.1.1.1) as the
source of the DHCP Offer msg. This is called “DHCP proxy”. Therefore the wireless client will think
that is the DHCP server IP and will request it (in the BootP fields) in the DHCP Request msg.
Here is the DHCP Request msg coming from the wireless client to the WLC. Once the WLC forwards
this to the DHCP server, the server replies with the DHCP ACK msg.

Here is the DHCP ACK coming from the DHCP server to the WLC.


Finally, the wireless client gets this DHCP ACK from the virtual IP of the WLC (which acts as the
DHCP server for the wireless client).

IP helper
A very basic job of a router is to stop broadcast traffic. Since DHCP relies on broadcast traffic, it
will be stopped by the router. By enabling an IP helper address we allow that broadcast traffic to pass
through the router.

The IP helper turns a router into a relay agent: it takes a broadcast from one domain and forwards it
as a unicast to the helper address listed, on behalf of the original broadcast domain. The receiving
device sends the applicable data back to the relay agent, which relays it to the client.

Example:

A PC broadcasts, “please give me an IP address”, since it has no knowledge of the LAN. A DHCP server
sitting on the segment replies with an IP, subnet mask, and gateway. Putting a DHCP server on every
LAN or VLAN does seem expensive, so the IP helper sees the broadcast and passes it to a DHCP server
many networks away, including the LAN network in the request. The DHCP server tells the IP helper the
IP, subnet mask, and gateway, and the IP helper, which knows the MAC of the PC, sends the information
on to the PC.

DHCP isn't the only service that is forwarded by default with the ip helper-address command; several
other UDP protocols are forwarded by default as well.
DHCP Lease Renewal and Rebinding Processes

Once a DHCP client completes the allocation or reallocation process, it enters the BOUND
state. The client is now in its regular operating mode, with a valid IP address and other configuration
parameters it received from the DHCP server and can be used like any regular TCP/IP host.

While the client is in the BOUND state, DHCP essentially lies dormant. As long as the client
stays on and functioning normally, no real DHCP activity will occur while in this state. The most
common occurrence that causes DHCP to “wake up” and come active again is arrival of the time when
the lease is to be renewed. Renewal ensures that a lease is perpetuated so it can be used for a
prolonged period of time and involves its own message exchange procedure. (The other way that a
client can leave the BOUND state is when it terminates the lease early.)

If DHCP's automatic allocation is used, or if dynamic allocation is used with an infinite lease
period, the client's lease will never expire, so it never needs to be renewed. Short of early termination,
the device will remain in the BOUND state forever, or at least until it is rebooted. However, as we've
already discussed, most leases are finite in nature. A client must take action to ensure that its lease is
extended, and normal operation continues.

To manage the lease extension process, two timers are set at the time that a lease is allocated.
The renewal timer (T1) goes off to tell the client it is time to try to renew the lease with the server that
initially granted it. The rebinding timer (T2) goes off if the client is not successful in renewing with that
server and tells it to try any server to have the lease extended. If the lease is renewed or rebound, the
client goes back to normal operation. If it cannot be rebound, it will expire, and the client will need to
seek a new lease.

First of all, please note that a client sends the lease renewal request before the lease actually
expires. Two timers (T1 and T2) are used for lease renewal. When the T1 timer expires, the client
sends a unicast DHCPREQUEST to the DHCP server that gave it the original IP address, asking to renew
the lease for that address. If no DHCPACK message is received by the time the T2 timer expires, the
client sends a broadcast DHCPREQUEST message. If no DHCPACK message is received within the lease
time, the client stops using that IP address and sends a broadcast DHCPDISCOVER message as if it had
never been assigned an IP address. More about that in RFC 2131 and RFC 2132.
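RFC 2131 gives defaults for those timers: T1 fires at 50% of the lease time and T2 at 87.5%. A quick sketch:

def renewal_timers(lease_seconds: int):
    """RFC 2131 default timers: T1 (renew with the original server) at
    50% of the lease, T2 (rebind with any server) at 87.5%."""
    t1 = 0.5 * lease_seconds
    t2 = 0.875 * lease_seconds
    return t1, t2

print(renewal_timers(86400))   # one-day lease -> (43200.0, 75600.0)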

After a client reboots, it always tries to reuse the same IP address that was previously assigned
to it. It sends a broadcast DHCPREQUEST message including its previously assigned IP address inside
the message; if the client's lease is still valid, the DHCP server responds to the client with a
DHCPACK message.

Key Concept: If a client starts up and already has a lease, it need not go through the full lease
allocation process; instead, it can use the shorter reallocation process. The client broadcasts a request
to find the server that has the current information on its lease; that server responds back to confirm
that the client’s lease is still valid.
The Server Identification Option for DHCP
<draft-ietf-dhc-sio-00.txt>

The DHCP Server Identification Option

DHCP provides a powerful mechanism for automating and centralizing the administration of
IP host configuration and has become a critical service in many large IP networks. Because of its
importance in networks and because it can create a single point of failure for network operations
(from a DHCP client's perspective), many network administrators choose to deploy many DHCP servers
in order to enhance availability and/or performance of DHCP services.

However, for networks with multiple DHCP servers, the DHCP protocol does not provide a
means by which a DHCP client may "pre-specify" a preference for offers from a particular DHCP server
-- or set of servers -- on the network. Such a means would allow, for example, clients on a large,
switched LAN subnet to choose DHCPOFFERs from a preferred, "local" DHCP server (e.g.,one located
on the same floor of the building and administered by the client host user's department).

The DHCP protocol specification [see RFC1541 or current internet draft] currently states that:

"DHCP clients are free to use any strategy in selecting a DHCP server among those from which the
client receives a DHCPOFFER message."

Thus, currently, client "policy" -- of which there is essentially no standardization -- determines which
of many offers is selected. In practice, most vendors' implementation of "policy" here is very basic
(e.g., first offer received) and is "hard-coded" (i.e., non-configurable).

In order for a client to choose a DHCPOFFER from a particular DHCP server, it must have a means of
identifying the server. That is, unless a DHCP client can identify an individual server, the client has no
means by which to select it.

Thus, the problem of a client specifying a preference for a particular server is simply that of identifying
DHCP servers to the client so that the client can select a DHCPOFFER from a particular server (e.g., by
matching a pre-configured, preferred server identity against the set of server identities contained in
DHCPOFFERs received).

This document specifies an option that can be specified at DHCP servers by network administrators to identify a particular DHCP server (or servers) to DHCP clients in order to enable the DHCP clients to select from available identities. The option, known as the DHCP Server Identification Option, specifies a simple DHCP server identification value to be included in DHCPOFFERs so that DHCP clients can distinguish among DHCP servers when making an offer selection decision.

DHCP Server Identification Option Format

The code for this option is TBD, and its length is 4 bytes.

 Code    Len     DHCP Server ID
+-------+-------+---------------+
|  TBD  |   2   |   server_id   |
+-------+-------+---------------+

where:

server_id is an unsigned integer (x'00' thru x'FF', inclusive) which identifies the DHCP server originating the DHCPOFFER packet in which the option is contained.
DHCP Server Behavior

A DHCP Server which supports the DHCP Server Identification Option MUST include the option
in (and only in?) DHCPOFFER packets to requesting clients. Note that there is no requirement for the
server_id values to be unique in a subnet or across the network. That is, two or more DHCP servers
may share the same server_id value and therefore be considered equivalent from the perspective of
the DHCP client's selection decision.

In the case where a DHCP Server Identification Option with a server_id value is included in a client's DHCPDISCOVER message and the server_id value does not match that of the server, the server MAY ignore the DHCPDISCOVER. If the DHCP Server Identification Option is included (in the requested parameter list) without a server_id value, then the DHCP Server SHOULD respond with a DHCPOFFER and include the appropriate server_id value in the DHCP Server Identification Option (assuming an available address/binding and a defined server_id value exist).

DHCP Client Behavior

A DHCP client MAY use the DHCP Server Identification Option to make a DHCPOFFER selection
decision. If two DHCPOFFERs have equivalent DHCP Server Identification Option values or if no DHCP
Server Identification Option is included, then the DHCP client SHOULD report the error and SHOULD
use another mechanism to choose from among the multiple offers.

Also, note that a client may specify a DHCP Server Identification Option in a DHCPDISCOVER
to express a preference for a particular DHCP server (Is this a good idea? ...seems harmless, but what's
the point...unless a particular implied behavior?).

Application Notes

The DHCP Server Identification option allows a DHCP client to select a DHCPOFFER from a preferred
server or servers. The following sections outline some useful applications of this capability:

DHCP Server Segregation within (large) Subnets

In large, flat networks (e.g., large, switched LANs), the DHCP Server Identification option can
be used to "assign" groups of clients to be served by a particular DHCP server (e.g., one which serves
a particular workgroup/department/organization or a particular building or floor of a building). This
is accomplished by configuring clients to prefer DHCPOFFERs with a designated DHCP server
identification option value.

Pre-production Testing of DHCP Servers

Similarly, in networks where a DHCP Server is being introduced into production, DHCP clients
which support the DHCP server identification option can be used to specifically exercise that newly
introduced DHCP server for the purposes of testing configuration correctness.
Transport Layer
Transmission Control Protocol (TCP):

A full-featured, connection-oriented, reliable transport protocol for TCP/IP applications. TCP provides transport-layer addressing to allow multiple software applications to simultaneously use a single IP address. It allows a pair of devices to establish a virtual connection and then pass data bidirectionally. Transmissions are managed using a special sliding window system, with unacknowledged transmissions detected and automatically retransmitted. Additional functionality allows the flow of data between devices to be managed, and special circumstances to be addressed.

User Datagram Protocol (UDP):

A very simple transport protocol that provides transport-layer addressing like TCP, but little
else. UDP is barely more than a “wrapper” protocol that provides a way for applications to access the
Internet Protocol. No connection is established, transmissions are unreliable, and data can be lost.

Key Concept: The primary transport layer protocol in the TCP/IP suite is the Transmission Control
Protocol (TCP). TCP is a connection-oriented, acknowledged, reliable, fully-featured protocol designed
to provide applications with a reliable way to send data using the unreliable Internet Protocol. It allows
applications to send bytes of data as a stream of bytes, and automatically packages them into
appropriately-sized segments for transmission. It uses a special sliding window acknowledgment
system to ensure that all data is received by its recipient, to handle necessary retransmissions, and to
provide flow control so each device in a connection can manage the rate at which it is sent data.

Functions Performed By TCP:

1. Port Addressing
2. Connection Establishment, Management and Termination
3. Data Handling and Packaging
4. Providing Reliability and Transmission Quality Services
5. Providing Flow Control and Congestion Avoidance Features

TCP Characteristics:

1. Connection-Oriented
2. Bi-directional
3. Reliable
4. Acknowledged
5. Stream-Oriented
6. Data-Flow-Managed

The size of the segment is controlled by two primary factors. The first issue is that there is an
overall limit to the size of a segment, chosen to prevent unnecessary fragmentation at the IP layer.
This is governed by a parameter called the maximum segment size (MSS), which is determined during
connection establishment. The second is that TCP is designed so that once a connection is set up, each
of the devices tells the other how much data it is ready to accept at any given time. If this is lower than
the MSS value, a smaller segment must be sent. This is part of the sliding window system.
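A minimal sketch of this sizing rule, with illustrative names (not taken from any particular TCP implementation):

def next_segment_size(mss: int, receive_window: int, bytes_queued: int) -> int:
    # a segment can never exceed the negotiated MSS, the space the peer
    # currently advertises, or the amount of data actually waiting to be sent
    return min(mss, receive_window, bytes_queued)

print(next_segment_size(mss=1460, receive_window=512, bytes_queued=4000))  # 512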

Key Concept: Since TCP works with individual bytes of data rather than discrete messages, it must use
an identification scheme that works at the byte level to implement its data transmission and tracking
system. This is accomplished by assigning each byte TCP processes a sequence number.
Reliability: Ensuring that data that is sent actually arrives at its destination, and if not, detecting this
and re-sending the data.

Data Flow Control: Managing the rate at which data is sent so that it does not overwhelm the device
that is receiving it.

Key Concept: A basic technique for ensuring reliability in communications uses a rule that requires a
device to send back an acknowledgment each time it successfully receives a transmission. If a
transmission is not acknowledged after a period of time, it is retransmitted by its sender. This system
is called positive acknowledgment with retransmission (PAR). One drawback with this basic scheme is
that the transmitter cannot send a second message until the first has been acknowledged.

Key Concept: The send window is the key to the entire TCP sliding window system: it represents the
maximum number of unacknowledged bytes a device is allowed to have outstanding at once. The
usable window is the amount of the send window that the sender is still allowed to send at any point
in time; it is equal to the size of the send window less the number of unacknowledged bytes already
transmitted.
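Expressed as a small sketch (illustrative names; sequence numbers simplified to plain byte counts):

def usable_window(send_window: int, last_ack: int, next_seq: int) -> int:
    bytes_in_flight = next_seq - last_ack   # sent but not yet acknowledged
    return send_window - bytes_in_flight

# a 6,000-byte send window with 2,000 bytes outstanding leaves 4,000 usable
print(usable_window(send_window=6000, last_ack=1000, next_seq=3000))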

Fast Retransmission:

The TCP sender SHOULD use the "fast retransmit" algorithm to detect and repair loss, based
on incoming duplicate ACKs. The fast-retransmit algorithm uses the arrival of 3 duplicate ACKs (4
identical ACKs without the arrival of any other intervening packets) as an indication that a segment
has been lost. After receiving 3 duplicate ACKs, TCP performs a retransmission of what appears to be
the missing segment, without waiting for the retransmission timer to expire.
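A minimal sketch of the trigger logic (illustrative only, ignoring sequence-number wraparound and the recovery phase that follows the retransmission):

DUP_ACK_THRESHOLD = 3  # the third duplicate ACK (fourth identical ACK) fires

def retransmit(seq_no: int) -> None:
    print(f"fast retransmit: resending the segment starting at byte {seq_no}")

def on_ack(state: dict, ack_no: int) -> None:
    if ack_no == state["last_ack"]:
        state["dup_acks"] += 1
        if state["dup_acks"] == DUP_ACK_THRESHOLD:
            retransmit(ack_no)  # do not wait for the retransmission timer
    else:
        state["last_ack"], state["dup_acks"] = ack_no, 0

state = {"last_ack": 1000, "dup_acks": 0}
for ack in (1000, 1000, 1000):  # three duplicates of a previously seen ACK
    on_ack(state, ack)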
Source Port: 16 bits - The source port number.

Destination Port: 16 bits - The destination port number.

Sequence Number: 32 bits

The sequence number of the first data octet in this segment (except when SYN is present). If SYN is
present the sequence number is the initial sequence number (ISN) and the first data octet is ISN+1.

Acknowledgment Number: 32 bits

If the ACK control bit is set this field contains the value of the next sequence number the sender of
the segment is expecting to receive. Once a connection is established this is always sent.

Data Offset: 4 bits

The number of 32 bit words in the TCP Header. This indicates where the data begins. The TCP header
(even one including options) is an integral number of 32 bits long.

Reserved: 6 bits - Reserved for future use. Must be zero.

Control Bits: 6 bits (from left to right):

URG: Urgent Pointer field significant

ACK: Acknowledgment field significant

PSH: Push Function

RST: Reset the connection

SYN: Synchronize sequence numbers

FIN: No more data from sender

Window: 16 bits

The number of data octets beginning with the one indicated in the acknowledgment field which the
sender of this segment is willing to accept.

Checksum: 16 bits

The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words
in the header and text. If a segment contains an odd number of header and text octets to be
checksummed, the last octet is padded on the right with zeros to form a 16 bit word for checksum
purposes. The pad is not transmitted as part of the segment. While computing the checksum, the
checksum field itself is replaced with zeros.

The TCP Length is the TCP header length plus the data length in octets (this is not an explicitly
transmitted quantity, but is computed), and it does not count the 12 octets of the pseudo header.
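A minimal sketch of the computation just described, assuming the caller has already built the 12-octet pseudo header and zeroed the checksum field inside the segment:

def ones_complement_sum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad an odd-length input on the right, as described above
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return total

def tcp_checksum(pseudo_header: bytes, tcp_segment: bytes) -> int:
    # one's complement of the one's complement sum over header and text
    return ~ones_complement_sum(pseudo_header + tcp_segment) & 0xFFFF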

Urgent Pointer: 16 bits

This field communicates the current value of the urgent pointer as a positive offset from the sequence number in this segment. The urgent pointer points to the sequence number of the octet following the urgent data. This field is only interpreted in segments with the URG control bit set.

RFC 793 (INCORRECT): The urgent pointer points to the sequence number of the octet following the
urgent data.

RFC 1122 (CORRECT): The urgent pointer points to the sequence number of the LAST octet (not
LAST+1) in a sequence of urgent data.
A TCP MUST support a sequence of urgent data of any length.

TCP Options:

Maximum Segment Size Option Data: 16 bits

If this option is present, then it communicates the maximum receive segment size at the TCP which
sends this segment. This field must only be sent in the initial connection request (i.e., in segments
with the SYN control bit set).

If an MSS option is not received at connection setup, TCP MUST assume a default send MSS of 536
(576-40) [TCP:4].

TCP provides an option that may be used at the time a connection is established (only) to indicate the
maximum size TCP segment that can be accepted on that connection. This Maximum Segment Size
(MSS) announcement (often mistakenly called a negotiation) is sent from the data receiver to the data
sender and says "I can accept TCP segments up to size X". The size (X) may be larger or smaller than
the default. The MSS can be used completely independently in each direction of data flow. The result
may be quite different maximum sizes in the two directions.

The MSS counts only data octets in the segment, it does not count the TCP header or the IP header.

A footnote: The MSS value counts only data octets, thus it does not count the TCP SYN and FIN control
bits even though SYN and FIN do consume TCP sequence numbers.

TCP Window Scale:

The TCP header value allocated for the window size is two bytes long. This means that the highest
possible numeric value for a receive window is 65,535 bytes. In today’s networks, this window size is
not enough to provide optimal traffic flow, especially on long, fat networks (links that have high
bandwidth and high latency). In its native state, TCP cannot take advantage of these high-performance
links since it can only send a maximum of 65,535 bytes at a time.

For this reason, TCP Options were introduced in RFC 1323 that enable the TCP receive window to be
increased exponentially. The specific function is called TCP Window Scaling, which is advertised in the
handshake process. When advertising its window, a client or server will also advertise the scale factor
(multiplier) that will be used for the life of the connection.
In the example capture referenced above, the sender of this packet is advertising a TCP window of 63,792 bytes and is using a scaling factor of four. This means that the true window size is 63,792 x 4 (255,168 bytes). Using scaled windows allows endpoints to advertise a window size of over 1 GB. To use window scaling, both
sides of the connection must advertise this capability in the handshake process. If one side or the
other cannot support scaling, then neither will use this function. The scale factor, or multiplier, will
only be sent in the SYN packets during the handshake and will be used for the life of the connection.
This is one reason why it is so important to capture the handshake process when performing TCP
analysis.

Actual window size = Window size value * Window scaling factor

Actual window size = 63792 * 4 = 255168
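On the wire, the option actually carries a shift count rather than a multiplier (a factor of four corresponds to a shift count of 2), so the computation looks like this sketch:

def true_window(advertised_window: int, shift_count: int) -> int:
    # the Window Scale option carries a shift count; the multiplier is 2**shift
    return advertised_window << shift_count

print(true_window(63792, 2))  # 255168, i.e. 63792 * 4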

What Is a Zero Window?

When a client (or server – but it is usually the client) advertises a zero value for its window
size, this indicates that the TCP receive buffer is full and it cannot receive any more data. It may have
a stuck processor or be busy with some other task, which can cause the TCP receive buffer to fill. Zero
Windows can also be caused by a problem within the application, where the TCP buffer is not being
retrieved.

A TCP Zero Window from a client will halt the data transmission from the server side, allowing
time for the problem station to clear its buffer. When the client begins to digest the data, it will let the
server know to resume the data flow by sending a TCP Window Update packet. This will advertise an
increased window size and the flow will resume.
TCP Selective Acknowledgments (SACK)
With selective acknowledgments, the data receiver can inform the sender about all segments
that have arrived successfully, so the sender need retransmit only the segments that have actually
been lost.

The selective acknowledgment extension uses two TCP options. The first is an enabling option,
"SACK-permitted", which may be sent in a SYN segment to indicate that the SACK option can be used
once the connection is established. The other is the SACK option itself, which may be sent over an
established connection once permission has been given by SACK-permitted.

The SACK option is to be included in a segment sent from a TCP that is receiving data to the
TCP that is sending that data; we will refer to these TCP's as the data receiver and the data sender,
respectively. We will consider a particular simplex data flow; any data flowing in the reverse direction
over the same connection can be treated independently.

The diagram below illustrates a TCP connection taking place between a client and server separated by
a network. Time progresses vertically from top to bottom as packets are sent.

The client sends some request to the server, and the server formulates a response broken into four
TCP segments (packets). The server transmits all four packets in response to the request. However,
the second response packet is dropped somewhere on the network and never reaches the host. Let's
walk through what happens.

Step 1: Response segment #2 is lost.

Step 2: The client receives segment #3. Upon examining the segment's sequence number, the client
realizes this segment is out of order; there is data missing between the last segment received and this
one. The client transmits a duplicate acknowledgment for packet #1 to alert the server that it has not
received any (reliable) data beyond packet #1.

Step 3: As the server is not yet aware that anything is wrong (because it has not yet received the client's duplicate acknowledgment), it continues by sending segment #4. The client realizes that it is still missing data, and repeats its behavior from step 2 by sending another duplicate acknowledgment for packet #1.

Step 4: The server receives the client's first duplicate acknowledgment for packet #1. Because the
client has only confirmed receipt of the first of the four segments, the server must retransmit all three
remaining segments in the response. The second duplicate acknowledgment received from the client
is ignored.
Step 5: The client successfully receives and acknowledges the three remaining segments.

Enter Selective Acknowledgments

You've probably noticed that this design is inefficient: although only packet #2 was lost, the server was
required to retransmit packets #3 and #4 as well, because the client had no way to confirm that it had
received those packets.

This problem was originally addressed by RFC 1072, and more recently by RFC 2018, by introducing
the selective acknowledgment (SACK) TCP option. SACKs work by appending to a duplicate
acknowledgment packet a TCP option containing a range of noncontiguous data received. In other
words, it allows the client to say "I only have up to packet #1 in order, but I also have received packets
#3 and #4". This allows the server to retransmit only the packet(s) that were not received by the client.

Support for SACK is negotiated at the beginning of a TCP connection; if both hosts support it, it may
be used. Let's look at how our earlier example plays out with SACK enabled:

Step 1: Response segment #2 is lost.

Step 2: The client realizes it is missing a segment between segments #1 and #3. It sends a duplicate
acknowledgment for segment #1, and attaches a SACK option indicating that it has received segment
#3.

Step 3: The client receives segment #4 and sends another duplicate acknowledgment for segment #1,
but this time expands the SACK option to show that it has received segments #3 through #4.

Step 4: The server receives the client's duplicate ACK for segment #1 and SACK for segment #3 (both
in the same TCP packet). From this, the server deduces that the client is missing segment #2, so
segment #2 is retransmitted. The next SACK received by the server indicates that the client has also
received segment #4 successfully, so no more segments need to be transmitted.

Step 5: The client receives segment #2 and sends an acknowledgment to indicate that it has received all data up to and including segment #4.
Enough Theory, Here's a Capture

This packet capture contains a demonstration of SACKs in action. We know that both end hosts
support selective acknowledgments by the presence of the SACK permitted option in the two SYN
packets, #1 and #2.

Toward the end of the capture, we can see that packet #30 was received out of order, and the client
has sent a duplicate acknowledgment in packet #31. This packet includes a SACK option indicating that
the segment in packet #30 was received.

Of course, the SACK option cannot simply specify which segment(s) were received. Rather, it specifies
the left and right edges of data that has been received beyond the packet's acknowledgment number.
A single SACK option can specify multiple noncontiguous blocks of data (e.g. bytes 200-299 and 400-
499).
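A minimal sketch of how a sender might turn a cumulative ACK plus a set of SACK blocks (left edge, right edge pairs, right edge exclusive) into the list of byte ranges still needing retransmission; the names and the block representation are illustrative:

def bytes_to_retransmit(ack: int, sack_blocks, send_next: int):
    # everything between the cumulative ACK and the highest byte sent is a
    # candidate for retransmission, except ranges covered by SACK blocks
    missing, cursor = [], ack
    for left, right in sorted(sack_blocks):
        if cursor < left:
            missing.append((cursor, left))
        cursor = max(cursor, right)
    if cursor < send_next:
        missing.append((cursor, send_next))
    return missing

# ACKed through byte 99, SACKed 200-299 and 400-499, sent through 499:
print(bytes_to_retransmit(100, [(200, 300), (400, 500)], 500))
# [(100, 200), (300, 400)] -- only the gaps are resent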

We can see this duplicate acknowledgment repeated in packets #33, #35, and #37. In each, the SACK
is expanded to include the noncontiguous segments the server has continued sending. Finally, the
server retransmits the missing segment in packet #38, and the client updates its acknowledgment
number appropriately in packet #39.
DNS
Definition - What does DNS Record mean?

A DNS record is a database record used to map a domain name to an IP address. DNS records are stored in DNS servers and work to help users connect their websites to the outside world. When a URL is entered and searched in the browser, the domain name in that URL is forwarded to the DNS servers and then directed to the specific Web server. This Web server then serves the queried website outlined in the URL or directs the user to an email server that manages the incoming mail.

The most common record types are A (address), CNAME (canonical name), MX (mail exchange), NS
(name server), PTR (pointer), SOA (start of authority) and TXT (text record).

Different types of DNS records are as follows:

NS Resource Records

The name server (NS) resource record indicates the servers authoritative for the zone. They
indicate primary and secondary servers for the zone specified in the SOA resource record, and they
indicate the servers for any delegated zones. Every zone must contain at least one NS record at the
zone root.

For example, when the administrator on reskit.com delegated authority for the noam.reskit.com
subdomain to noamdc1.noam.reskit.com., the following line was added to the zones reskit.com and
noam.reskit.com:

noam.reskit.com. IN NS noamdc1.noam.reskit.com.

A Resource Records

The address (A) resource record maps an FQDN to an IP address, so the resolvers can request the
corresponding IP address for an FQDN. For example, the following A resource record, located in the
zone noam.reskit.com, maps the FQDN of the server to its IP address:

noamdc1 IN A 172.16.48.1
PTR Records

The pointer (PTR) resource record, in contrast to the A resource record, maps an IP address to an
FQDN. For example, the following PTR resource record maps the IP address of
noamdc1.noam.reskit.com to its FQDN:

1.48.16.172.in-addr.arpa. IN PTR noamdc1.noam.reskit.com.

CNAME Resource Records

The canonical name (CNAME) resource record creates an alias (synonymous name) for the
specified FQDN. You can use CNAME records to hide the implementation details of your network from
the clients that connect to it. For example, suppose you want to put an FTP server named
ftp1.noam.reskit.com on your noam.reskit.com subdomain, but you know that in six months you will
move it to a computer named ftp2.noam.reskit.com, and you do not want your users to have to know
about the change. You can just create an alias called ftp.noam.reskit.com that points to
ftp1.noam.reskit.com, and then when you move your computer, you need only change the CNAME
record to point to ftp2.noam.reskit.com. For example, the following CNAME resource record creates
an alias for ftp1.noam.reskit.com:

ftp.noam.reskit.com. IN CNAME ftp1.noam.reskit.com.


Once a DNS client queries for the A resource record for ftp.noam.reskit.com, the DNS server finds
the CNAME resource record, resolves the query for the A resource record for ftp1.noam.reskit.com,
and returns both the A and CNAME resource records to the client.

MX Resource Records

The mail exchange (MX) resource record specifies a mail exchange server for a DNS domain
name. A mail exchange server is a host that will either process or forward mail for the DNS domain
name. Processing the mail means either delivering it to the addressee or passing it to a different type
of mail transport. Forwarding the mail means sending it to its final destination server, sending it using
Simple Mail Transfer Protocol (SMTP) to another mail exchange server that is closer to the final
destination, or queuing it for a specified amount of time.

Note

Only mail exchange servers use MX records.

If you want to use multiple mail exchange servers in one DNS domain, you can have multiple MX
resource records for that domain. The following example shows MX resource records for the mail
servers for the domain noam.reskit.com.:

*.noam.reskit.com. IN MX 0 mailserver1.noam.reskit.com.

*.noam.reskit.com. IN MX 10 mailserver2.noam.reskit.com.

*.noam.reskit.com. IN MX 10 mailserver3.noam.reskit.com.

The first three fields in this resource record are the standard owner, class, and type fields. The fourth field is the mail server priority, or preference value. The preference value specifies the preference given to this MX record relative to the other MX records for the domain. Lower-priority records are preferred. Thus, when a mailer needs to send mail to a certain DNS domain, it first contacts a DNS server for that domain and retrieves all the MX records. It then attempts delivery to the mail exchange server with the lowest preference value.

For example, suppose Jane Doe sends an e-mail message to JohnDoe@noam.reskit.com on a day that
mailserver1 is down, but mailserver2 is working. Her mailer tries to deliver the message to
mailserver1, because it has the lowest preference value, but it fails because mailserver1 is down. This
time, Jane's mailer can choose either mailserver2 or mailserver3, because their preference values are
equal. It successfully delivers the message to mailserver2.

To prevent mail loops, if the mailer is on a host that is listed as an MX for the destination host, the
mailer can deliver only to an MX with a lower preference value than its own host.
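A minimal sketch of the preference-ordered delivery attempt (equal preference values may be tried in any order, so a real mailer would typically randomize among them):

def delivery_order(mx_records):
    # lower preference values are preferred, so sort ascending
    return [host for pref, host in sorted(mx_records)]

print(delivery_order([(0, "mailserver1.noam.reskit.com."),
                      (10, "mailserver2.noam.reskit.com."),
                      (10, "mailserver3.noam.reskit.com.")]))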

Note

The sendmail program requires special configuration if a CNAME is not referenced in the MX record.
SOA Resource Records

Every zone contains a Start of Authority (SOA) resource record at the beginning of the zone. SOA
resource records include the following fields:

• The Owner, TTL, Class, and Type fields, as described in "Resource Record Format" earlier in
this chapter.
• The authoritative server field shows the primary DNS server authoritative for the zone.
• The responsible person field shows the e-mail address of the administrator responsible for
the zone. It uses a period (.) instead of an at symbol (@).
• The serial number field shows how many times the zone has been updated. When a zone's
secondary server contacts the master server for that zone to determine whether it needs to
initiate a zone transfer, the zone's secondary server compares its own serial number with that
of the master. If the serial number of the master is higher, the secondary server initiates a
zone transfer.
• The refresh field shows how often the secondary server for the zone checks to see whether
the zone has been changed.
• The retry field shows how long after sending a zone transfer request the secondary server for
the zone waits for a response from the master server before retrying.
• The expire field shows how long after the previous zone transfer the secondary server for the
zone continues to respond to queries for the zone before discarding its own zone as invalid.
• The minimum TTL field applies to all the resource records in the zone whenever a time to live
value is not specified in a resource record. Whenever a resolver queries the server, the server
sends back resource records along with the minimum time to live. Negative responses are
cached for the minimum TTL of the SOA resource record of the authoritative zone.
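As an illustration, an SOA record combining these fields might look like the following zone-file entry (all values here are hypothetical):

noam.reskit.com. IN SOA noamdc1.noam.reskit.com. hostmaster.noam.reskit.com. (
        2023041001 ; serial number
        3600       ; refresh (1 hour)
        600        ; retry (10 minutes)
        86400      ; expire (1 day)
        3600 )     ; minimum TTL (1 hour)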

What are the different types of DNS queries?

DNS queries can be classified according to the manner in which a complete request is processed. Generally, queries can be classified as follows.

1. Recursive query
2. Iterative query OR Non-recursive query
3. Inverse queries

What is a recursive query?

A recursive query is a kind of query in which the DNS server that received your query does all the work of fetching the answer and returning it to you. During this process, the DNS server may also query other DNS servers on the Internet on your behalf.

Let's understand the entire process of recursive queries through the following steps. Suppose you want to browse www.example.com, and your resolv.conf file has the following entries:

[root@myvm ~]# cat /etc/resolv.conf
nameserver 172.16.200.30
nameserver 172.16.200.31

The above resolv.conf entries mean that your DNS servers are 172.16.200.30 and 172.16.200.31. Whatever application you use, the operating system will send DNS queries to those two DNS servers.

STEP 1: You enter www.example.com in the browser, so the operating system's resolver sends a DNS query for the A record to the DNS server 172.16.200.30.

STEP 2: On receiving the query, the DNS server 172.16.200.30 looks through its tables (cache) to find the IP address (A record) for the domain www.example.com, but it does not have the entry.

STEP 3: As the answer for the query is not available with the DNS server 172.16.200.30, this server sends a query to one of the DNS root servers for the answer. An important fact to note here is that root servers are always iterative servers.

Related: DNS root servers and their Locations

STEP 4: The DNS root servers reply with a list of servers (a referral) that are responsible for handling the .COM gTLD.

STEP 5: Our DNS server 172.16.200.30 selects one of the .COM gTLD servers from the list given by the root server, to query for the answer for www.example.com.

STEP 6: Like the root servers, the gTLD servers are also iterative in nature, so the gTLD server replies to our DNS server 172.16.200.30 with a list of IP addresses of the DNS servers responsible for the domain www.example.com (its authoritative name servers).

Related: DNS Zone File And Its Contents

STEP 7: This time, too, our DNS server selects one IP from the given list of authoritative name servers, and queries the A record for www.example.com. The authoritative name server queried replies with the A record, as below.

www.example.com = <XXX:XX:XX:XX> (some IP address)

STEP 8: Our DNS server 172.16.200.30 replies to us with the IP/domain pair (and any other resources, if available). Now the browser sends a request to the given IP for the web page www.example.com.

As the steps above show, our DNS server (172.16.200.30) queries through other DNS servers on our behalf.

Note: The recursive query scenario explained above happened only because our DNS server 172.16.200.30 was configured as a recursive name server. You can also disable this feature for your DNS server.
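As a quick illustration, the sketch below asks the operating system's stub resolver (which uses the nameservers from /etc/resolv.conf) to perform exactly this kind of lookup on our behalf:

import socket

# The stub resolver hands the query to the configured (recursive) DNS server,
# which performs all of the iterative work described in the steps above.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80,
                                                    type=socket.SOCK_STREAM):
    print(sockaddr[0])  # one resolved address per line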
How does the name server select one from the given list of servers to query?

In the above case, you may have noticed that our DNS server 172.16.200.30 had to select one server, from a given list of servers to query, multiple times.

For example, there are 13 root servers. (When we say 13 root servers, 13 is the number of addresses that is universal; there are hundreds of servers at different locations in the world, and these 13 root server addresses are anycast addresses.) Which root server will be queried for an answer?

Related: What is IP Anycast, and how it works?


Almost all DNS servers use an algorithm to select one server from the list, in order to distribute the load and improve response time.

The most famous DNS server software, BIND, uses a technique called the RTT metric (Round Trip Time metric). Using this technique, the server tracks the RTT of each root server and selects the one with the lowest RTT.
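A minimal sketch of that selection, with hypothetical smoothed RTT values:

# hypothetical smoothed round-trip times, in milliseconds, per root server
rtt_ms = {"a.root-servers.net": 42.0,
          "b.root-servers.net": 18.5,
          "c.root-servers.net": 67.3}

best = min(rtt_ms, key=rtt_ms.get)  # pick the server with the lowest RTT
print(best)  # b.root-servers.net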

What is an iterative or Non-recursive query?

Before beginning the explanation of iterative queries, an important thing to note is that all DNS servers must support iterative (non-recursive) queries.

In an iterative query, the name server will not go and fetch the complete answer for your query, but will give back a referral to other DNS servers which might have the answer. In our previous example, our DNS server 172.16.200.30 went to fetch the answer on behalf of our resolver and provided us with the final answer.

But if our DNS server 172.16.200.30 is not a recursive name server (which means it is iterative), it will give us the answer if it has it in its records; otherwise it will give us a referral to the root servers (it will not query the root servers and other servers by itself).

Now it is the job of our resolver to query the root servers, the .COM TLD servers, and the authoritative name servers for the answer.

Let's go through the steps involved.

STEP 1: You enter www.example.com in the browser, so the operating system's resolver sends a DNS query for the A record to the DNS server 172.16.200.30.

STEP 2: On receiving the query, the DNS server 172.16.200.30 looks through its tables (cache) to find the IP address (A record) for the domain www.example.com, but it does not have the entry.

STEP 3: Now, instead of querying the root servers, our DNS server replies to us with a referral to the root servers. Our operating system's resolver then queries the root servers for the answer.

The rest of the steps are all the same. The only differences in an iterative query are these:

If the DNS server does not have the answer, it will not query any other server for the answer; rather, it will reply with a referral to the DNS root servers.

If the DNS server has the answer, it will give back the answer (the same as in a recursive query).

In an iterative query, the job of finding the answer (from the given referral) lies with the local operating system resolver.

In other words, in an iterative query, a DNS server that is queried will never go and fetch the answer for you (though it will give you the answer if it already has it); it will instead give your resolver a referral to other DNS servers (the root servers in our case).

We will be discussing inverse queries in another post. Hopefully this post was helpful in understanding iterative (non-recursive) and recursive DNS queries.

When does DNS use TCP instead of UDP?

UDP is used when you need a translation of a domain name: the client sends a question, and the DNS server answers with an IP address (if it has it in its records). This is all done with simple datagram packets; there is no worry if a packet is lost or corrupted, since the client can simply ask again.

When DNS servers need to synchronize the information in their records, there can be a lot of data, and it is important that it is transferred correctly; otherwise all the clients that the DNS server serves could be handed faulty information until the next DNS-to-DNS zone transfer. That is why servers doing a "zone transfer" need the fault-correction capabilities of the Transmission Control Protocol. TCP guarantees that the information is transferred and received without faults and in the correct order.
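One concrete difference on the wire: per RFC 1035 (section 4.2.2), every DNS message carried over TCP is preceded by a two-byte length field so the receiver can delimit messages in the byte stream, something UDP datagrams do not need. A minimal sketch:

import struct

def frame_dns_over_tcp(message: bytes) -> bytes:
    # RFC 1035 section 4.2.2: prefix the message with its length (two bytes,
    # network byte order) before writing it to the TCP connection
    return struct.pack("!H", len(message)) + message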
SSL
What is SSL?

The Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are the most widely deployed security protocols used today. SSL is essentially a protocol that provides a secure channel between two machines operating over the Internet or an internal network. In today's Internet-focused world, the SSL protocol is typically used when a web browser needs to securely connect to a web server over the inherently insecure Internet.

Technically, SSL is a transparent protocol which requires little interaction from the end user when
establishing a secure session. In the case of a browser for instance, users are alerted to the presence
of SSL when the browser displays a padlock, or, in the case of Extended Validation SSL, when the
address bar displays both a padlock and a green bar. This is the key to the success of SSL – it is an
incredibly simple experience for end users.

SSL Record Overview


The basic unit of data in SSL is a record. Each record consists of a five-byte record header, followed by
data.

Record Type

There are four record types in SSL:


• Handshake (22, 0x16)
• Change Cipher Spec (20, 0x14)
• Alert (21, 0x15)
• Application Data (23, 0x17)
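A minimal sketch of parsing the five-byte record header described above, using these type values (the version pair (3, 3) corresponds to TLS 1.2):

import struct

RECORD_TYPES = {20: "ChangeCipherSpec", 21: "Alert",
                22: "Handshake", 23: "ApplicationData"}

def parse_record_header(header: bytes):
    # five bytes: content type (1), protocol version (2), payload length (2)
    rtype, major, minor, length = struct.unpack("!BBBH", header)
    return RECORD_TYPES.get(rtype, "unknown"), (major, minor), length

# e.g. a TLS 1.2 handshake record carrying 512 bytes of payload
print(parse_record_header(bytes([0x16, 0x03, 0x03, 0x02, 0x00])))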

Handshake Records

Handshake records contain a set of messages that are used in order to handshake. These are the
messages and their values:
• Hello Request (0, 0x00)
• Client Hello (1, 0x01)
• Server Hello (2, 0x02)
• Certificate (11, 0x0B)
• Server Key Exchange (12, 0x0C)
• Certificate Request (13, 0x0D)
• Server Hello Done (14, 0x0E)
• Certificate Verify (15, 0x0F)
• Client Key Exchange (16, 0x10)
• Finished (20, 0x14)
In the simple case, handshake records are not encrypted. However, a handshake record that contains
a finished message is always encrypted, as it always occurs after a Change Cipher Spec (CCS) record.
CCS Records

CCS records are used in order to indicate a change in cryptographic ciphers. Immediately after the CCS
record, all data is encrypted with the new cipher. CCS records might or might not be encrypted; in a
simple connection with a single handshake, the CCS record is not encrypted.

Alert Records

Alert records are used in order to indicate to the peer that a condition has occurred. Some alerts are
warnings, while others are fatal and cause the connection to fail. Alerts might or might not be
encrypted and might occur during a handshake or during data transfer. There are two types of alerts:
• Closure Alerts: The connection between the client and the server must be properly closed in order to avoid any kind of truncation attack. A close_notify message is sent to indicate to the recipient that the sender will not send any more messages on that connection.
• Error Alerts: When an error is detected, the detecting party sends a message to the other
party. Upon transmission or receipt of a fatal alert message, both parties immediately close the
connection. Some examples of error alerts are:
• unexpected_message (fatal)
• decompression_failure
• handshake_failure

Application Data Record

These records contain the actual application data. These messages are carried by the record layer and
are fragmented, compressed, and encrypted, based on the current connection state.
This section describes a sample transaction between the client and server.
The Hello Exchange

When an SSL client and server begin to communicate, they agree on a protocol version, select
cryptographic algorithms, optionally authenticate each other, and use public key encryption
techniques in order to generate shared secrets. These processes are performed in the handshake
protocol. In summary, the client sends a Client Hello message to the server, which must respond with
a Server Hello message or a fatal error occurs and the connection fails. The Client Hello and Server
Hello are used to establish security enhancement capabilities between the client and server.
Client Hello
The Client Hello sends these attributes to the server:
• Protocol Version: The version of the SSL protocol by which the client wishes to communicate during
this session.
• Session ID: The ID of a session the client wishes to use for this connection. In the first Client Hello
of the exchange, the session ID is empty (refer to the packet capture screen shot after the note).
• If the ClientHello.session_id was non-empty, the server will look in its session cache for a
match. If a match is found and the server is willing to establish the new connection using the
specified session state, the server will respond with the same value as was supplied by the
client. This indicates a resumed session and dictates that the parties must proceed directly to
the Finished messages. Otherwise, this field will contain a different value identifying the new
session. The server may return an empty session_id to indicate that the session will not be
cached and therefore cannot be resumed. If a session is resumed, it must be resumed using
the same cipher suite it was originally negotiated with. Note that there is no requirement that
the server resume any session even if it had formerly provided a session_id. Clients MUST be
prepared to do a full negotiation -- including negotiating new cipher suites -- during any
handshake.
• Cipher Suite: This is passed from the client to the server in the Client Hello message. It contains the
combinations of cryptographic algorithms supported by the client in order of the client's preference
(first choice first). Each cipher suite defines both a key exchange algorithm and a cipher spec. The
server selects a cipher suite or, if no acceptable choices are presented, returns a handshake failure
alert and closes the connection.
• Compression Method: Includes a list of compression algorithms supported by the client. If the
server does not support any method sent by the client, the connection fails. The compression
method can also be null.

Note: The server IP address in the captures is 10.0.0.2 and the client IP address is 10.0.0.1.

Server Hello
The server sends back these attributes to the client:
• Protocol Version: The chosen version of the SSL protocol that the client supports.
• Session ID: This is the identity of the session that corresponds to this connection. If the session ID
sent by the client in the Client Hello is not empty, the server looks in the session cache for a match.
If a match is found and the server is willing to establish the new connection using the specified
session state, the server responds with the same value that was supplied by the client. This indicates
a resumed session and dictates that the parties must proceed directly to the finished messages.
Otherwise, this field contains a different value that identifies the new session. The server might
return an empty session_id in order to indicate that the session will not be cached, and therefore
cannot be resumed.
• Cipher Suite: As selected by the server from the list that was sent from the client.
• Compression Method: As selected by the server from the list that was sent from the client.
• Certificate Request: The server sends the client a list of all the certificates that are configured on it,
and allows the client to select which certificate it wants to use for authentication.
For SSL renegotiation requests:
• The server can send a Hello request to the client as well. This is only to remind the client that it
should start the renegotiation with a Client Hello request when convenient. The client ignores the
Hello request from the server if the handshake process is already underway.
• Handshake messages take precedence over the transmission of application data. The renegotiation must begin in no more than one or two times the transmission time of a maximum-length application data message.
Server Hello Done
The Server Hello Done message is sent by the server in order to indicate the end of the server hello
and associated messages. After it sends this message, the server waits for a client response. Upon
receipt of the Server Hello Done message, the client verifies that the server provided a valid certificate,
if required, and checks that the Server Hello parameters are acceptable.

Server Certificate, Server Key Exchange, and Certificate Request (Optional)


• Server Certificate: If the server must be authenticated (which is generally the case), the server
sends its certificate immediately after the Server Hello message. The certificate type must be
appropriate for the selected cipher suite key exchange algorithm and is generally an X.509.v3
certificate.
• Server Key Exchange: The Server Key Exchange message is sent by the server only when its certificate (if any) does not contain enough data to allow the client to exchange a premaster secret. If the Diffie–Hellman (DH) parameters are included with the server certificate, this message is not used.
• Certificate Request: A server can optionally request a certificate from the client, if appropriate for
the selected cipher suite.

Client Exchange

Client Certificate (Optional)


This is the first message that the client sends after it receives a Server Hello Done message. This
message is only sent if the server requests a certificate. If no suitable certificate is available, the client
sends a no_certificate alert instead. This alert is only a warning; however, the server might respond
with a fatal handshake failure alert if client authentication is required. Client DH certificates must
match the server specified DH parameters.
Client Key Exchange
The content of this message depends on the public key algorithm selected between the Client Hello
and the Server Hello messages. The client uses either a premaster key encrypted by the Rivest-Shamir-Adleman (RSA) algorithm or DH for key agreement and authentication. When RSA is used for server
authentication and key exchange, a 48-byte pre_master_secret is generated by the client, encrypted
under the server public key, and sent to the server. The server uses the private key in order to decrypt
the pre_master_secret. Both parties then convert the pre_master_secret into the master_secret.

Certificate Verify (Optional)


If the client sends a certificate with signing ability, a digitally-signed Certificate Verify message is sent
in order to explicitly verify the certificate.

Cipher Change

Change Cipher Spec Messages


The Change Cipher Spec message is sent by the client, and the client copies the pending Cipher Spec
(the new one) into the current Cipher Spec (the one that was previously used). Change Cipher Spec
protocol exists in order to signal transitions in ciphering strategies. The protocol consists of a single
message, which is encrypted and compressed under the current (not the pending) Cipher Spec. The
message is sent by both the client and server in order to notify the receiving party that subsequent
records are protected under the most recently negotiated Cipher Spec and keys. Reception of this
message causes the receiver to copy the read pending state into the read current state. The client
sends a Change Cipher Spec message after the handshake key exchange and Certificate Verify
messages (if any), and the server sends one after it successfully processes the key exchange message
it received from the client. When a previous session is resumed, the Change Cipher Spec message is
sent after the Hello messages. In the captures, the Client Exchange, Change Cipher, and Finished
messages are sent as a single message from the client.
Computing the Master Secret:

For all key exchange methods, the same algorithm is used to convert the pre_master_secret into the
master_secret. The pre_master_secret should be deleted from memory once the master_secret has
been computed.

master_secret = PRF(pre_master_secret, "master secret", ClientHello.random + ServerHello.random)

The master secret is always exactly 48 bytes in length. The length of the premaster secret will vary
depending on key exchange method.
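For TLS 1.2 (RFC 5246) the PRF is P_SHA256; the following sketch implements the computation described above (variable names are illustrative):

import hashlib
import hmac

def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    out, a = b"", seed                                    # A(0) = seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()  # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    return p_sha256(secret, label + seed, length)

def master_secret(pre_master: bytes, client_random: bytes,
                  server_random: bytes) -> bytes:
    # the master secret is always exactly 48 bytes
    return prf(pre_master, b"master secret", client_random + server_random, 48)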

Finished Messages

A Finished message is always sent immediately after a Change Cipher Spec message in order to verify
that the key exchange and authentication processes were successful. The Finished message is the first
protected packet with the most recently negotiated algorithms, keys, and secrets. No
acknowledgment of the Finished message is required; parties can begin to send encrypted data
immediately after they send the Finished message. Recipients of Finished messages must verify that
the contents are correct.
TLS (RFC # 5246)
      Client                                               Server

      ClientHello                  -------->
                                                      ServerHello
                                                     Certificate*
                                               ServerKeyExchange*
                                              CertificateRequest*
                                   <--------      ServerHelloDone
      Certificate*
      ClientKeyExchange
      CertificateVerify*
      [ChangeCipherSpec]
      Finished                     -------->
                                               [ChangeCipherSpec]
                                   <--------             Finished
      Application Data             <------->     Application Data

             Figure 1. Message flow for a full handshake

When the client and server decide to resume a previous session or duplicate an existing session
(instead of negotiating new security parameters), the message flow is as follows:

The client sends a ClientHello using the Session ID of the session to be resumed. The server then
checks its session cache for a match. If a match is found, and the server is willing to re-establish the
connection under the specified session state, it will send a ServerHello with the same Session ID value.
At this point, both client and server MUST send ChangeCipherSpec messages and proceed directly to
Finished messages. Once the re-establishment is complete, the client and server MAY begin to
exchange application layer data. (See flow chart below.) If a Session ID match is not found, the server
generates a new session ID, and the TLS client and server perform a full handshake.

      Client                                               Server

      ClientHello                  -------->
                                                      ServerHello
                                               [ChangeCipherSpec]
                                   <--------             Finished
      [ChangeCipherSpec]
      Finished                     -------->
      Application Data             <------->     Application Data

         Figure 2. Message flow for an abbreviated handshake



What is an SSL Certificate?

SSL stands for Secure Sockets Layer and, in short, it's the standard technology for keeping an internet
connection secure and safeguarding any sensitive data that is being sent between two systems,
preventing criminals from reading and modifying any information transferred, including potential
personal details. The two systems can be a server and a client (for example, a shopping website and
browser) or server to server (for example, an application with personal identifiable information or
with payroll information). TLS (Transport Layer Security) is just an updated, more secure, version of
SSL.

Digital certificates offer a scalable and secure option for managing identity and encryption information
by automating the distribution of cryptographic key material and by offering effective identity
authenticity mechanisms.

Occasionally a Public Key Infrastructure (PKI) must revoke a certificate issued under certain conditions, such as compromise of a certificate's encryption keys or a change in status of the encryption peer that holds the certificate (e.g., termination of an employee or theft of an encryption device). Certificate Revocation List (CRL) checking functionality can be implemented in order to ascertain the validity status of digital certificates presented by encryption peers. Establishing certificate validity by checking CRLs is effective for most circumstances, but some applications may require more frequent updates of certificate revocation information. The Online Certificate Status Protocol (OCSP) offers an online mechanism for determining certificate validity without placing an undue burden on the PKI and associated network.

CERTIFICATE REVOCATION LISTS

A CRL is a list of revoked certificates that have been issued and subsequently revoked by a given
Certification Authority. Certificates may be revoked for a number of reasons including failure or
compromise of a device that is using a given cert, compromise of the key pair used by a certificate, or
errors within an issued certificate, such as an incorrect identity or the need to accommodate a name
change. The mechanism used for certificate revocation depends on the Certification Authority. Most
Certification Authorities support cert revocation from the management interface.
Revoked certificates are represented in the CRL by their serial numbers. If a network device is attempting to verify the validity of a certificate, it will download and scan the current CRL for the serial number of the presented certificate. The CRL is signed by the Certification Authority to ensure the authenticity of the document and may be distributed through a variety of protocols, such as HTTP, LDAP, TFTP, or other services. CRLs are generally published on a periodic interval, or Certification Authorities may publish a new CRL any time a certificate they are responsible for is revoked. Like most documents created by a PKI, the CRL has an expiration time and date, and all components of a PKI that verify certificates should download a new copy of the CRL when the old one expires.

A new, "fresh" CRL is downloaded when a certificate is presented for verification again and the cached CRL has been deleted. Unfortunately, the router's cached CRL causes one of the problems with using CRLs: if a newer version of the CRL that lists the certificate under examination is present on the server, but the router is still using the cached CRL, which does not list the revoked certificate, the certificate will pass its revocation check even though it should have been disallowed.

CRLs are practical for most PKI applications, but may not be appropriate for some uses. Some instances
where CRLs are not adequate include:

• Large numbers of revoked certificates or multiple CRLs. CRLs in cache on devices can consume
a large quantity of memory. Downloading large CRLs over low-speed links may use excessive
bandwidth, which causes network congestion.
• Frequent CRL expiration. If CRLs expire frequently, the Certificate Distribution Point (CDP) will
be heavily loaded, and frequent CRL download will burden network devices and bandwidth
with non-production traffic.
• Immediate notification of cert revocation is required. Some high-security applications require
more immediate notification of cert revocation. If CRL has a two day expiration interval, it may
be up to 48 hours before a router downloads a new CRL. This leaves a long period of time
before a router is notified that a certificate is no longer valid.

These are circumstances where a CRL is an inadequate mechanism for certificate revocation notification. In cases where CRLs are inappropriate for checking certificate status, OCSP offers a better choice.
ONLINE CERTIFICATE STATUS PROTOCOL (OCSP):

OCSP addresses some of the shortcomings of CRLs. It offers a real-time mechanism for certificate status checking: an end host can query the OCSP server when a certificate is presented to find out if the certificate has been revoked. This resolves many of the issues that arise from the use of CRLs, but some other problems may appear from the use of OCSP.

Some OCSP servers still use the CRL published by a Certification Authority to advise clients on the
revocation status of a digital certificate, whereas other OCSP servers integrate tightly enough with the
PKI to be able to query the certificate database directly for certificate revocation status. When crypto
peers need to check the revocation status of certificates, they transmit a query to the OCSP server with
the serial number of the certificate in question. The OCSP server examines its copy or copies of the
CRL to determine if the Certification Authority has listed the certificate as being revoked and replies
with a message to the crypto peer that the certificate’s status is “revoked”, “good”, or “unknown”.

The "good" state indicates a positive response to the status inquiry. At a minimum, this positive
response indicates that no certificate with the requested certificate serial number currently within its
validity interval is revoked. This state does not necessarily mean that the certificate was ever issued
or that the time at which the response was produced is within the certificate's validity interval.
Response extensions may be used to convey additional information on assertions made by the
responder regarding the status of the certificate, such as a positive statement about issuance, validity,
etc.

The "revoked" state indicates that the certificate has been revoked, either temporarily (the revocation
reason is certificateHold) or permanently. This state MAY also be returned if the associated CA has no
record of ever having issued a certificate with the certificate serial number in the request, using any
current or previous issuing key (referred to as a "non-issued" certificate in this document).

The "unknown" state indicates that the responder doesn't know about the certificate being requested,
usually because the request indicates an unrecognized issuer that is not served by this responder.

NOTE: The "revoked" status indicates that a certificate with the requested serial number should be
rejected, while the "unknown" status indicates that the status could not be determined by this
responder, thereby allowing the client to decide whether it wants to try another source of status
information (such as a CRL).

This dialogue between the crypto peer and the OCSP server will consume less bandwidth than all but
the smallest of CRL downloads. It also consumes no memory on the crypto peer, as the peer does not
have to cache CRLs. In cases where an OCSP server relies on the CRL, the Certification Authority need
only publish the CRL for the OCSP server’s use. This allows the CRL to be updated at a more frequent
interval and to offer more “real-time” certificate revocation status, without consuming large
quantities of network bandwidth with frequent, large CRL downloads to all the cryptographic peers
in a network.
If the OCSP server integrates directly with the PKI to have immediate access to certificate revocation
information, cryptographic peers will receive an immediate response to certificate revocation status
any time they query the OCSP server.
OCSP over HTTP

This section describes the formatting that will be done to the request and response to support HTTP.

OCSP over HTTP is used for obtaining an X.509 digital certificate’s revocation status. The messages
transmitted via OCSP over HTTP are encoded in ASN.1 (Abstract Syntax Notation One), a standard
notation for describing data structures in telecommunications and networking.

Request

HTTP based OCSP requests can use either the GET or the POST method to submit their requests. To
enable HTTP caching, small requests (that after encoding are less than 255 bytes), MAY be submitted
using GET. If HTTP caching is not important, or the request is greater than 255 bytes, the request
SHOULD be submitted using POST. Where privacy is a requirement, OCSP transactions exchanged
using HTTP MAY be protected using either TLS/SSL or some other lower layer protocol.

An OCSP request using the GET method is constructed as follows:

GET {url}/{url-encoding of base-64 encoding of the DER encoding of the OCSPRequest} where {url} may
be derived from the value of AuthorityInfoAccess or other local configuration of the OCSP client.

An OCSP request using the POST method is constructed as follows: The Content-Type header has the
value "application/ocsp-request" while the body of the message is the binary value of the DER
encoding of the OCSPRequest.

Response

An HTTP-based OCSP response is composed of the appropriate HTTP headers, followed by the binary
value of the DER encoding of the OCSPResponse. The Content-Type header has the value
"application/ocsp-response". The Content-Length header SHOULD specify the length of the response.
Other HTTP headers MAY be present and MAY be ignored if not understood by the requestor.

What are the benefits of OCSP over certificate revocation lists (CRL)?

• OCSP can provide more timely information regarding the revocation status of a certificate
• OCSP removes the need for clients to retrieve the CRL themselves (better bandwidth
management)
• OCSP allows users with an expired certificate a grace period (decreasing any downtime with
expired certificates)
Wildcard certificate

In SSL/TLS, domain name verification occurs by matching the FQDN of the system with the name
specified in the certificate. The certificate name can be in two locations, either the Subject or the
Subject Alternative Name (subjectAltName) extension. When present in the Subject, the name that is
used is the Common Name (CN) component of the X.500 Distinguished Name (DN). A second place
that is often checked is the Subject Alternative Name (SAN) extension which can contain a list of DNS
names, IP addresses, email addresses or URIs.

Both wildcard domains and subject alternative names are techniques to enable certificates to
authenticate more than one domain name. This is often useful as it is common for a system to have
more than one domain name.

• Wildcard Domains: Wildcard domains such as *.example.com are useful when protecting
multiple services on one domain such as www.example.com,
mail.example.com, mx.example.com, ftp.example.com, blog.example.com, etc. However, they
have several limitations:
• Non-zero length subdomain: first, many sites will use a combination
of www.example.com and example.com, and a *.example.com wildcard will not match the
latter.
• Only flat subdomain support: wildcards will not span multiple subdomain levels; for
example, a name such as foo.m.example.com (covered by *.m.example.com) will not be
matched by *.example.com.
• Only one domain: finally, *.example.com will not support an entirely different domain
such as foobar.com.

Subject Alternative Name: The X.509 subjectAltName extension addresses some of the limitations of
wildcard domains: it can contain multiple FQDNs of all types, so names with differing numbers of
subdomains and entirely different domains can be supported. To make SANs even more useful,
wildcard domain names can also be used within the SAN extension.
What is a SSL Wildcard Certificate?

An SSL Wildcard certificate is a single certificate with a wildcard character in the domain name field.
This allows the certificate to secure multiple subdomain names (hosts) pertaining to the same base
domain.

For example, a wildcard certificate for *.(domainname).com could be used for
www.(domainname).com, mail.(domainname).com, store.(domainname).com, and any other
subdomain of (domainname).com. When a client checks the subdomain name in this type of
certificate, it uses a shell-expansion-style matching procedure to see if it matches, as sketched below.
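The matching rules described above can be illustrated in a few lines of Python; this is a sketch of the
rules themselves, not any particular TLS library's implementation:

def wildcard_match(pattern: str, hostname: str) -> bool:
    # The wildcard may only be the entire left-most label and matches
    # exactly one non-empty label.
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False                       # only flat subdomain support
    if p_labels[0] != "*":
        return p_labels == h_labels        # no wildcard: exact match only
    return bool(h_labels[0]) and p_labels[1:] == h_labels[1:]

assert wildcard_match("*.example.com", "www.example.com")
assert not wildcard_match("*.example.com", "example.com")        # non-zero length subdomain
assert not wildcard_match("*.example.com", "foo.m.example.com")  # only one subdomain level
assert not wildcard_match("*.example.com", "foobar.com")         # entirely different domain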

What is the difference between a SAN certificate and a Wildcard certificate?

A Subject Alternative Name (SAN) certificate is capable of supporting multiple domains and multiple
host names within those domains. SAN certificates are more flexible than Wildcard certificates since
they are not limited to a single domain. Combining the functionality of both allows you to secure a
much broader set of domains along with the capability to use them on any number of subdomains.
Note: Only non-Wildcard names can be added as SANs.

When should I request a SSL Wildcard Certificate?

An SSL Wildcard certificate should be considered an option when looking to secure a number of
subdomains, such as 'secure.(domainname).com', 'www.(domainname).com', and
'mail.(domainname).com', with a single certificate. The format of the common name entered for the
SSL Wildcard Certificate will be '*.(domainname).com'.

How do I add SAN?


SAN is an optional feature available during your Wildcard SSL purchase; you can add up to 24
additional SANs to a single certificate.

Note: It is imperative that software documentation be referenced to ensure that the server on which
the certificate will be installed supports wildcard certificates.

Do SSL Wildcard Certificates work with all servers and browsers?

SSL Wildcard certificates work with most servers. If unsure, check with your server vendor for further
assistance.

Can I share the IP address with all the sub domain names?

Yes. As the same certificate will be used to secure all the subdomain names associated with a domain
name, an IP address can be shared among all of the subdomains. SSL is IP-based by nature of the
protocol, but in this case, where the same certificate is used by all subdomain names, a Wildcard
certificate can be configured for use with name-based virtual hosts instead of IP-based virtual hosts.
HTTP

HTTP stands for Hypertext Transfer Protocol. It's a stateless, application-layer protocol for
communicating between distributed systems, and is the foundation of the modern web.

HTTP is a Stateless Protocol

HTTP is called a stateless protocol because each command is executed independently, without any
knowledge of the commands that came before it. This is the main reason it is difficult to implement
websites that react intelligently to user input. This shortcoming of HTTP is addressed by a number
of technologies, including ActiveX, Java, JavaScript and cookies.

HTTP cookie

An HTTP cookie (web cookie, browser cookie) is a small piece of data that a server sends to the user's
web browser. The browser may store it and send it back with the next request to the same server.
Typically, it's used to tell if two requests came from the same browser — keeping a user logged-in, for
example. It remembers stateful information for the stateless HTTP protocol.

Cookies are mainly used for three purposes:

Session management:

• Logins, shopping carts, game scores, or anything else the server should remember

Personalization:

• User preferences, themes, and other settings

Tracking:

• Recording and analyzing user behavior
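As a sketch of the mechanics, Python's standard http.cookies module can build the Set-Cookie
header a server sends and parse the Cookie header a browser returns (the session value below is a
placeholder):

from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header for session management.
cookie = SimpleCookie()
cookie["sessionid"] = "abc123"            # placeholder session token
cookie["sessionid"]["httponly"] = True    # hide the cookie from JavaScript
cookie["sessionid"]["secure"] = True      # send only over HTTPS
cookie["sessionid"]["max-age"] = 3600     # expire after one hour
print(cookie.output())  # e.g. Set-Cookie: sessionid=abc123; HttpOnly; Max-Age=3600; Secure

# Server side on the next request: parse the Cookie header sent back.
incoming = SimpleCookie("sessionid=abc123; theme=dark")
print(incoming["sessionid"].value, incoming["theme"].value)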


HTTP Methods:

GET: fetch an existing resource. The URL contains all the necessary information the server needs to
locate and return the resource.
POST: create a new resource. POST requests usually carry a payload that specifies the data for the
new resource.
PUT: update an existing resource. The payload may contain the updated data for the resource.
DELETE: delete an existing resource.
The above four verbs are the most popular, and most tools and frameworks explicitly expose these
request verbs. PUT and DELETE are sometimes considered specialized versions of the POST verb, and
they may be packaged as POST requests with the payload containing the exact
action: create, update or delete.
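As an illustration, the four main verbs map naturally onto calls in Python's widely used requests
package; the REST endpoint below is hypothetical:

import requests

base = "https://api.example.com/articles"            # hypothetical endpoint

r = requests.get(f"{base}/42")                       # GET: fetch an existing resource
r = requests.post(base, json={"title": "Hello"})     # POST: create a new resource
r = requests.put(f"{base}/42", json={"title": "Hi"}) # PUT: update an existing resource
r = requests.delete(f"{base}/42")                    # DELETE: delete the resource
print(r.status_code)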
There are some lesser used verbs that HTTP also supports:
HEAD: this is similar to GET, but without the message body. It's used to retrieve the server headers for
a particular resource, generally to check if the resource has changed, via timestamps.
TRACE: used to retrieve the hops that a request takes to round trip from the server. Each intermediate
proxy or gateway would inject its IP or DNS name into the Via header field. This can be used for
diagnostic purposes.
CONNECT: The CONNECT method establishes a tunnel to the server identified by the target resource.

HTTP CONNECT method:


The most common form of HTTP tunneling is the standardized HTTP CONNECT method. In this
mechanism, the client asks an HTTP proxy server to forward the TCP connection to the desired
destination. The server then proceeds to make the connection on behalf of the client. Once the
connection has been established by the server, the proxy server continues to proxy the TCP stream to
and from the client. Only the initial connection request is HTTP - after that, the server simply proxies
the established TCP connection.
This mechanism is how a client behind an HTTP proxy can access websites using SSL or TLS (i.e. HTTPS).
Proxy servers may also limit connections by only allowing connections to the default HTTPS port 443,
whitelisting hosts, or blocking traffic which doesn't appear to be SSL.
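A minimal sketch of this mechanism in Python, using only the standard library (the proxy address is
a placeholder, and the target must accept TLS on port 443):

import socket
import ssl

PROXY = ("proxy.example.com", 3128)   # hypothetical HTTP proxy
TARGET = "www.example.com"

sock = socket.create_connection(PROXY)
# Only this initial exchange is HTTP; afterwards the proxy just relays bytes.
sock.sendall(f"CONNECT {TARGET}:443 HTTP/1.1\r\nHost: {TARGET}:443\r\n\r\n".encode())
status_line = sock.recv(4096).decode(errors="replace").splitlines()[0]
if " 200 " not in status_line:
    raise RuntimeError(f"tunnel refused: {status_line}")

# TLS now runs end to end through the tunnel, so the proxy cannot read it.
tls = ssl.create_default_context().wrap_socket(sock, server_hostname=TARGET)
tls.sendall(f"HEAD / HTTP/1.1\r\nHost: {TARGET}\r\nConnection: close\r\n\r\n".encode())
print(tls.recv(4096).decode(errors="replace").splitlines()[0])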
PATCH: The PATCH method is used to apply partial modifications to a resource.
OPTIONS: used to retrieve the server capabilities. On the client-side, it can be used to modify the
request based on what the server can support.

The GET verb is meant to retrieve the content of the resource, while the HEAD verb will not return any
content and may be used, for example, to see if a resource has changed, to know its size or its type,
to check if it exists, and so on.
HTTP response status codes:

All HTTP response status codes are separated into five classes (or categories). The first digit of the
status code defines the class of response. The last two digits do not have any class or categorization
role. There are five values for the first digit:

1xx (Informational): The request was received, continuing process


2xx (Successful): The request was successfully received, understood, and accepted
3xx (Redirection): Further action needs to be taken in order to complete the request
4xx (Client Error): The request contains bad syntax or cannot be fulfilled
5xx (Server Error): The server failed to fulfill an apparently valid request
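Because only the first digit carries the classification, mapping a code to its class is a one-liner, as
this small Python sketch shows:

CLASSES = {1: "Informational", 2: "Successful", 3: "Redirection",
           4: "Client Error", 5: "Server Error"}

def status_class(code: int) -> str:
    # Integer division keeps only the first digit of a three-digit code.
    return CLASSES.get(code // 100, "Unknown")

assert status_class(101) == "Informational"
assert status_class(204) == "Successful"
assert status_class(301) == "Redirection"
assert status_class(404) == "Client Error"
assert status_class(503) == "Server Error"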

1xx Informational response

An informational response indicates that the request was received and understood. It is issued on a
provisional basis while request processing continues. It alerts the client to wait for a final response.
The message consists only of the status line and optional header fields and is terminated by an empty
line. As the HTTP/1.0 standard did not define any 1xx status codes, servers must not send a
1xx response to an HTTP/1.0 compliant client except under experimental conditions.

100 Continue

The server has received the request headers and the client should proceed to send the request body
(in the case of a request for which a body needs to be sent; for example, a POST request). Sending a
large request body to a server after a request has been rejected for inappropriate headers would be
inefficient. To have a server check the request's headers, a client must send Expect: 100-continue as
a header in its initial request and receive a 100 Continue status code in response before sending the
body. If the client receives an error code such as 403 (Forbidden) or 405 (Method Not Allowed) then
it shouldn't send the request's body. The response 417 Expectation Failed indicates that the request
should be repeated without the Expect header as it indicates that the server doesn't support
expectations (this is the case, for example, of HTTP/1.0 servers).
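Since high-level HTTP libraries often hide this handshake, here is a rough socket-level sketch of the
exchange in Python (host, path and body are placeholders, and real code would need timeouts and
more careful response parsing):

import socket

HOST, BODY = "www.example.com", b"name=value"   # placeholder upload

headers = (
    "POST /upload HTTP/1.1\r\n"                 # hypothetical path
    f"Host: {HOST}\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(BODY)}\r\n"
    "Expect: 100-continue\r\n"
    "\r\n"
)
with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(headers.encode())              # send headers only, withhold the body
    interim = sock.recv(4096).decode(errors="replace")
    if interim.startswith("HTTP/1.1 100"):
        sock.sendall(BODY)                      # server approved: now send the body
        print(sock.recv(4096).decode(errors="replace").splitlines()[0])
    else:
        print("Headers rejected:", interim.splitlines()[0])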

101 Switching Protocols

The requester has asked the server to switch protocols and the server has agreed to do so.

2xx Success
This class of status codes indicates the action requested by the client was received, understood and
accepted.

200 OK
Standard response for successful HTTP requests. The actual response will depend on the request
method used. In a GET request, the response will contain an entity corresponding to the requested
resource. In a POST request, the response will contain an entity describing or containing the result of
the action.
201 Created
The request has been fulfilled, resulting in the creation of a new resource.

202 Accepted
The request has been accepted for processing, but the processing has not been completed. The
request might or might not be eventually acted upon and may be disallowed when processing occurs.

203 Non-Authoritative Information (since HTTP/1.1)


The server is a transforming proxy (e.g. a Web accelerator) that received a 200 OK from its origin but
is returning a modified version of the origin's response.

204 No Content
The server successfully processed the request and is not returning any content.

3xx Redirection
This class of status code indicates the client must take additional action to complete the request. Many
of these status codes are used in URL redirection.

A user agent may carry out the additional action with no user interaction only if the method used in
the second request is GET or HEAD. A user agent may automatically redirect a request. A user agent
should detect and intervene to prevent cyclical redirects.

300 Multiple Choices


Indicates multiple options for the resource from which the client may choose (via agent-driven content
negotiation). For example, this code could be used to present multiple video format options, to list
files with different filename extensions, or to suggest word-sense disambiguation.

301 Moved Permanently


This and all future requests should be directed to the given URI.

4xx Client errors

This class of status code is intended for situations in which the error seems to have been caused by
the client. Except when responding to a HEAD request, the server should include an entity containing
an explanation of the error situation, and whether it is a temporary or permanent condition. These
status codes are applicable to any request method. User agents should display any included entity to
the user.

400 Bad Request

The server cannot or will not process the request due to an apparent client error (e.g., malformed
request syntax, size too large, invalid request message framing, or deceptive request routing).
401 Unauthorized (RFC 7235)

Similar to 403 Forbidden, but specifically for use when authentication is required and has failed or has
not yet been provided. The response must include a WWW-Authenticate header field containing a
challenge applicable to the requested resource. See Basic access authentication and Digest access
authentication. 401 semantically means "unauthenticated", i.e. the user does not have the necessary
credentials.
Note: Some sites incorrectly issue HTTP 401 when an IP address is banned from the website and that
specific address is refused permission to access it.

402 Payment Required

Reserved for future use. The original intention was that this code might be used as part of some form
of digital cash or micropayment scheme, as proposed for example by GNU Taler, but that has not yet
happened, and this code is not usually used. Google Developers API uses this status if a particular
developer has exceeded the daily limit on requests. Sipgate uses this code if an account does not have
sufficient funds to start a call. Shopify uses this code when the store has not paid their fees and is
temporarily disabled.

403 Forbidden
The request was valid, but the server is refusing action. The user might not have the necessary
permissions for a resource or may need an account of some sort.

404 Not Found


The requested resource could not be found but may be available in the future. Subsequent requests
by the client are permissible.

408 Request Timeout

The server timed out waiting for the request. According to HTTP specifications: "The client did not
produce a request within the time that the server was prepared to wait. The client MAY repeat the
request without modifications at any later time."

5xx Server errors

The server failed to fulfill a request.

Response status codes beginning with the digit "5" indicate cases in which the server is aware that it
has encountered an error or is otherwise incapable of performing the request. Except when
responding to a HEAD request, the server should include an entity containing an explanation of the
error situation and indicate whether it is a temporary or permanent condition. Likewise, user agents
should display any included entity to the user. These response codes are applicable to any request
method.
500 Internal Server Error

The 500 (Internal Server Error) status code indicates that the server encountered an unexpected
condition that prevented it from fulfilling the request.
501 Not Implemented

The 501 (Not Implemented) status code indicates that the server does not support the functionality
required to fulfill the request. This is the appropriate response when the server does not recognize
the request method and is not capable of supporting it for any resource.
A 501 response is cacheable by default; i.e., unless otherwise indicated by the method definition or
explicit cache controls (see Section 4.2.2 of [RFC7234]).

502 Bad Gateway


The 502 (Bad Gateway) status code indicates that the server, while acting as a gateway or proxy,
received an invalid response from an inbound server it accessed while attempting to fulfill the request.

503 Service Unavailable

The 503 (Service Unavailable) status code indicates that the server is currently unable to handle the
request due to a temporary overload or scheduled maintenance, which will likely be alleviated after
some delay. The server MAY send a Retry-After header field (Section 7.1.3) to suggest an appropriate
amount of time for the client to wait before retrying the request.

Note: The existence of the 503 status code does not imply that a server has to use it when becoming
overloaded. Some servers might simply refuse the connection.

504 Gateway Timeout

The 504 (Gateway Timeout) status code indicates that the server, while acting as a gateway or proxy,
did not receive a timely response from an upstream server it needed to access in order to complete
the request.

505 HTTP Version Not Supported

The 505 (HTTP Version Not Supported) status code indicates that the server does not support, or
refuses to support, the major version of HTTP that was used in the request message. The server is
indicating that it is unable or unwilling to complete the request using the same major version as the
client, as described in Section 2.6 of [RFC7230], other than with this error message. The server
SHOULD generate a representation for the 505 response that describes why that version is not
supported and what other protocols are supported by that server.
Difference between HTTP 1.0 and 1.1:

Persistent connections:

HTTP 1.1 allows persistent connections, which means that you can have more than one
request/response exchange on the same HTTP connection.

In HTTP 1.0 you had to open a new connection for each request/response pair, and after each
response the connection would be closed. This led to significant efficiency problems because of TCP
slow start.
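The difference is easy to see with Python's standard http.client, which keeps the underlying TCP
connection open between requests (the paths below are illustrative):

import http.client

# One TCP connection carries several request/response pairs (HTTP/1.1 keep-alive).
conn = http.client.HTTPSConnection("www.example.com")
for path in ("/", "/about", "/contact"):    # illustrative paths
    conn.request("GET", path)
    response = conn.getresponse()
    print(path, response.status)
    response.read()   # drain the body before reusing the connection
conn.close()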

OPTIONS method:

HTTP/1.1 introduces the OPTIONS method. An HTTP client can use this method to determine the
abilities of the HTTP server. It's mostly used for Cross Origin Resource Sharing in web applications.

Caching:

HTTP 1.0 had support for caching via the header: If-Modified-Since.

HTTP 1.1 expands caching support significantly by using entity tags (ETags). If two resources are
the same, they will have the same entity tag.

HTTP 1.1 also adds the If-Unmodified-Since, If-Match, If-None-Match conditional headers.

There are also further additions relating to caching like the Cache-Control header.
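A sketch of entity-tag revalidation using the requests package (the URL is a placeholder, and the
server must actually emit an ETag for the 304 path to trigger):

import requests

url = "https://www.example.com/"
first = requests.get(url)
etag = first.headers.get("ETag")

if etag:
    # Revalidate: the server answers 304 Not Modified if the tag still matches.
    second = requests.get(url, headers={"If-None-Match": etag})
    print(second.status_code)  # 304 if unchanged, 200 with a fresh body otherwise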

100 Continue status:

There is a new return code in HTTP/1.1, 100 Continue. This prevents a client from sending a large
request when that client is not even sure the server can process, or is authorized to process, the
request. In this case the client sends only the headers, and the server tells the client 100 Continue:
go ahead with the body.

HTTP 1.1 (1996-2015)

• Formalizes many extensions to version 1.0


• Supports persistent and pipelined connections
• Supports chunked transfers, compression/decompression
• Supports virtual hosting (a server with a single IP Address hosting multiple domains)
• Supports multiple languages
• Supports byte-range transfers; useful for resuming interrupted data transfers

HTTP 1.1 is an enhancement of HTTP 1.0. The following lists the four major improvements:

• Efficient use of IP addresses, by allowing multiple domains to be served from a single IP
address.
• Faster response, by allowing a web browser to send multiple requests over a single persistent
connection.
• Faster response for dynamically-generated pages, by support for chunked encoding, which
allows a response to be sent before its total length is known.
• Faster response and greater bandwidth savings, by adding cache support.
HTTP Strict Transport Security (HSTS)
What is HSTS?

HTTP Strict Transport Security (HSTS) is a web server directive that informs user agents and web
browsers, via a response header sent at the very beginning of the exchange, that the site must only
be reached over HTTPS.

Setting the Strict-Transport-Security policy field forces connections over HTTPS encryption,
disregarding any script's call to load any resource in that domain over HTTP. HSTS is but one arrow
in a bundled sheaf of security settings for your web server or your web hosting service.

Why Should Your Company Implement HSTS?

If a website accepts a connection through HTTP and redirects to HTTPS, visitors may initially
communicate with the non-encrypted version of the site before being redirected, if, for example, the
visitor types http://www.foo.com/ or even just foo.com. This creates an opportunity for a man-in-the-
middle attack. The redirect could be exploited to direct visitors to a malicious site instead of the secure
version of the original site.

The HTTP Strict Transport Security header informs the browser that it should never load a site using
HTTP and should automatically convert all attempts to access the site using HTTP to HTTPS requests
instead.

Note: The Strict-Transport-Security header is ignored by the browser when your site is accessed using
HTTP; this is because an attacker may intercept HTTP connections and inject the header or remove it.
When your site is accessed over HTTPS with no certificate errors, the browser knows your site is HTTPS
capable and will honor the Strict-Transport-Security header.

Padlocking your website is sometimes not enough, as people will still find a way to reach your website
over http://. HSTS forces browsers and app connections to use HTTPS if it is available, even if someone
just types in the www or http:// form of the address.

Setting up 301 redirects from http:// to https:// is not enough to completely secure your domain
name. The window of opportunity still exists in the insecure redirection of HTTP.

$ curl --head http://www.facebook.com
HTTP/1.1 301 Moved Permanently
Location: https://www.facebook.com/

Hackers are still able to capture site cookies, session IDs (usually sent as URL parameters) or force
a redirection to their phishing site that looks exactly like your website. Ouch!

By having a Strict-Transport-Security header installed, it will be nearly impossible for the bad guys to
glean any information at all! Not even your Yoga schedule!

$ curl --head https://www.facebook.com
HTTP/1.1 200 OK
Strict-Transport-Security: max-age=15552000; preload

An example scenario

You log into a free WiFi access point at an airport and start surfing the web, visiting your online banking
service to check your balance and pay a couple of bills. Unfortunately, the access point you're using is
actually a hacker's laptop, and they're intercepting your original HTTP request and redirecting you to
a clone of your bank's site instead of the real thing. Now your private data is exposed to the hacker.

Strict Transport Security resolves this problem; as long as you've accessed your bank's web site once
using HTTPS, and the bank's web site uses Strict Transport Security, your browser will know to
automatically use only HTTPS, which prevents hackers from performing this sort of man-in-the-middle
attack.
How does the browser handle it?

The first time your site is accessed using HTTPS and it returns the Strict-Transport-Security header, the
browser records this information, so that future attempts to load the site using HTTP will automatically
use HTTPS instead.

When the expiration time specified by the Strict-Transport-Security header elapses, the next attempt
to load the site via HTTP will proceed as normal instead of automatically using HTTPS.

Whenever the Strict-Transport-Security header is delivered to the browser, it will update the
expiration time for that site, so sites can refresh this information and prevent the timeout from
expiring. Should it be necessary to disable Strict Transport Security, setting max-age to 0 (over an
HTTPS connection) will immediately expire the Strict-Transport-Security header, allowing access via
HTTP.

How to Implement HSTS for Your Website:

If you employ subdomains in your content structure, you will need a Wildcard Certificate to cover
them all. Otherwise, you're pretty safe with a Domain Validated, Organization Validated or
Extended Validation SSL Certificate. Make sure you have these installed and working correctly.

In the initial stages, test your web applications, user login and session management with a short
policy lifetime that expires HSTS every 5 minutes (max-age=300). Continue to test for one week and
one month, fixing any issues that arise in your deployment and modifying max-age accordingly (one
week = 604800; one month = 2592000). Append preload only after your tests are completed.

After you are confident that HSTS is working with your web applications, modify max-age to 63072000.
That will be two years. This is what the Chromium Project wants to see in your preload submission!
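As a rough sketch of what serving an HSTS header means, here is a minimal handler using Python's
standard http.server module; in a real deployment the header must be sent on HTTPS responses (for
example behind a TLS-terminating proxy), since browsers ignore it over plain HTTP:

from http.server import BaseHTTPRequestHandler, HTTPServer

# Start with the short test value; raise to max-age=63072000 (two years),
# plus includeSubDomains and preload, once testing is complete.
HSTS_VALUE = "max-age=300; includeSubDomains"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Browsers only honor this header on valid HTTPS responses.
        self.send_header("Strict-Transport-Security", HSTS_VALUE)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"HSTS header served\n")

if __name__ == "__main__":
    HTTPServer(("", 8443), Handler).serve_forever()  # TLS termination assumed elsewhere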

HSTS Perpetual Requirements:

• Your website must have a valid SSL Certificate. You can check the validity of your SSL at
GlobalSign's SSL Checker.
• Redirect ALL HTTP links to HTTPS with a 301 Permanent Redirect.
• All subdomains must be covered in your SSL Certificate. Consider ordering a Wildcard
Certificate.
• Serve an HSTS header on the base domain for HTTPS requests.
• Max-age must be at least 10886400 seconds (18 weeks). Go for the two-year value, as
mentioned above!
• The includeSubDomains directive must be specified if you have them!
• The preload directive must be specified.

What is HSTS Preloading?

HSTS preloading is a function built into the browser whereby a global list of hosts is shipped with
the browser, and HTTPS ONLY is enforced for those sites.

This list is compiled by the Chromium Project and is utilized by Chrome, Firefox and Safari. These sites do
not depend on the issuing of the HSTS response headers to enforce the policy. Instead, the browser is
already aware that the domain name requires the use of HTTPS ONLY and pushes HSTS before any
connection or communication even takes place.

This removes the opportunity an attacker has to intercept and tamper with redirects over HTTP. The
HSTS response header is still needed in this scenario and must be left in place for those browsers that
don’t use preloaded HSTS lists.
