Computer Networks Notes Unit 3
Subnetting
TCP/IP Model:
The U.S. Department of Defense (DoD) created the TCP/IP reference model because it wanted a network that
could survive any conditions.
Application Layer:
The application layer handles high-level protocols, representation,
encoding, and dialog control. The TCP/IP protocol suite combines all
application related issues into one layer. It ensures that the data is
properly packaged before it is passed on to the next layer. TCP/IP
includes Internet and transport layer specifications such as IP and TCP
as well as specifications for common applications. TCP/IP has protocols
to support file transfer, e-mail, and remote login, in addition to the
following:
● File Transfer Protocol (FTP) – FTP is a reliable, connection-
oriented service that uses TCP to transfer files between systems
that support FTP. It supports bi-directional binary file and
ASCII file transfers.
● Trivial File Transfer Protocol (TFTP) – TFTP is a
connectionless service that uses the User Datagram Protocol
(UDP). TFTP is used on the router to transfer configuration files
and Cisco IOS images, and to transfer files between systems
that support TFTP. It is useful in some LANs because it
operates faster than FTP in a stable environment.
● Network File System (NFS) – NFS is a distributed file system
protocol suite developed by Sun Microsystems that allows file access to a remote storage device such as
a hard disk across a network.
● Simple Mail Transfer Protocol (SMTP) – SMTP administers the transmission of e-mail over computer
networks. It does not provide support for transmission of data other than plain text.
● Telnet – Telnet provides the capability to remotely access another computer. It enables a user to log into
an Internet host and execute commands. A Telnet client is referred to as a local host. A Telnet server is
referred to as a remote host.
● Simple Network Management Protocol (SNMP) – SNMP is a protocol that provides a way to monitor
and control network devices. SNMP is also used to manage configurations, statistics, performance, and
security.
● Domain Name System (DNS) – DNS is a system used on the Internet to translate domain names and
publicly advertised network nodes into IP addresses.
Transport Layer:
The transport layer provides a logical connection between a source host and a destination host. Transport
protocols segment and reassemble data sent by upper-layer applications into the same data stream, or logical
connection, between end points.
• Creates segments from the byte stream received from the application layer.
• Uses port numbers to provide process-to-process communication.
• Uses a sliding window protocol to achieve flow control.
• Uses acknowledgement packets, timeouts, and retransmission to achieve error control.
The primary duty of the transport layer is to provide end-to-end control and reliability as data travels through
this cloud. This is accomplished through the use of sliding windows, sequence numbers, and acknowledgments.
The transport layer also defines end-to-end connectivity between host applications. Transport layer protocols
include TCP and UDP.
TCP is a connection-oriented transport layer protocol that provides reliable full-duplex data transmission. TCP is
part of the TCP/IP protocol stack. In a connection-oriented environment, a connection is established between
both ends before the transfer of information can begin. TCP breaks messages into segments, reassembles them at
the destination, and resends anything that is not received. TCP supplies a virtual circuit between end-user
applications.
● Option – One option is currently defined: the maximum TCP segment size
● Data – Upper-layer protocol data
Code Bits or Flags (6 bits).
• URG: Urgent pointer field significant.
• ACK: Acknowledgment field significant.
• PSH: Push function.
• RST: Reset the connection.
• SYN: Synchronize the sequence numbers.
• FIN: No more data from sender.
TCP vs UDP:
S.no | TCP (Transmission Control Protocol) | UDP (User Datagram Protocol)
1 | connection-oriented, reliable (virtual circuit) | connectionless, unreliable; does not check message delivery
2 | divides outgoing messages into segments | sends "datagrams"
3 | reassembles messages at the destination | does not reassemble incoming messages
4 | re-sends anything not received | does not acknowledge
5 | provides flow control | provides no flow control
6 | more overhead than UDP (less efficient) | low overhead; faster than TCP
7 | Examples: HTTP, NFS, SMTP | Examples: VoIP, DNS, TFTP
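The first row of the table shows up directly in the socket API: TCP requires a connection before any data moves, while UDP just sends standalone datagrams. A minimal Python sketch (the host names and ports are placeholders, not from the notes):

```python
import socket

# TCP: connection-oriented; a virtual circuit must be set up first.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("example.com", 80))      # three-way handshake happens here

# UDP: connectionless; each datagram stands alone, delivery is unchecked.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# udp.sendto(b"hello", ("example.com", 9999))  # no handshake, no ACK

tcp.close()
udp.close()
```

The connect and sendto calls are commented out so the sketch runs without network access; the point is only that the stream socket models TCP's connection and the datagram socket models UDP's fire-and-forget delivery.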
Internet Layer:
The purpose of the Internet layer is to select the best path through the network for packets to travel. The main
protocol that functions at this layer is IP. Best path determination and packet switching occur at this layer.
The following protocols operate at the TCP/IP Internet layer:
● IP provides connectionless, best-effort delivery routing of packets. IP is not concerned with the content
of the packets but looks for a path to the destination.
● Internet Control Message Protocol (ICMP) provides control and messaging capabilities.
● Address Resolution Protocol (ARP) determines the data link layer address, or MAC address, for known
IP addresses.
● Reverse Address Resolution Protocol (RARP) determines the IP address for a known MAC address.
IP performs the following operations:
● Defines a packet and an addressing scheme
● Transfers data between the Internet layer and network access layer
● Routes packets to remote hosts
Here are some differences between the OSI and TCP/IP models:
● TCP/IP combines the OSI application, presentation, and session layers into its application layer.
● TCP/IP combines the OSI data link and physical layers into its network access layer.
● TCP/IP appears simpler because it has fewer layers.
● When the TCP/IP transport layer uses UDP it does not provide reliable delivery of packets. The transport
layer in the OSI model always does.
The Internet was developed based on the standards of the TCP/IP protocols. The TCP/IP model gains credibility
because of its protocols. The OSI model is not generally used to build networks. The OSI model is used as a
guide to help students understand the communication process.
IP Address:
Each computer in a TCP/IP network must be given a unique identifier, or IP address. This address, which
operates at Layer 3, allows one computer to locate another computer on a network. All computers also have a
unique physical address, which is known as a MAC address. These are assigned by the manufacturer of the NIC.
MAC addresses operate at Layer 2 of the OSI model.
An IP address (IPv4) is a 32-bit sequence of ones and zeros. To make the IP address easier to work with, it is
usually written as four decimal numbers separated by periods. For example, an IP address of one computer is
192.168.1.2. Another computer might have the address 128.10.2.1. This is called the dotted decimal format. Each
part of the address is called an octet because it is made up of eight binary digits. For example, the IP address
192.168.1.8 would be 11000000.10101000.00000001.00001000 in binary notation. The dotted decimal notation
is an easier method to understand than the binary ones and zeros method. This dotted decimal notation also
prevents a large number of transposition errors that would result if only the binary numbers were used.
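The conversion between dotted decimal and binary notation is mechanical; a short Python sketch:

```python
# Convert a dotted-decimal IPv4 address to binary notation: each of the
# four octets becomes one 8-bit group.
def to_binary(ip: str) -> str:
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

print(to_binary("192.168.1.8"))
# 11000000.10101000.00000001.00001000
```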
IPv4 Header:
Fig: IPv4 Header
Version (4 bits): Indicates the version number, to allow evolution of the protocol.
Internet Header Length (IHL, 4 bits): Length of the header in 32-bit words. The minimum value is five, for a
minimum header length of 20 octets.
Type-of-Service :
The Type-of-Service field contains an 8-bit binary value that is used to determine the priority of each packet.
This value enables a Quality-of-Service (QoS) mechanism to be applied to high priority packets, such as those
carrying telephony voice data. The router processing the packets can be configured to decide which packets to
forward first based on the Type-of-Service value.
Identifier (16 bits): A sequence number that, together with the source address, destination address, and user
protocol, is intended to uniquely identify a datagram. Thus, the identifier should be unique for the datagram's
source address, destination address, and user protocol for the time during which the datagram will remain in the
internet.
Fragment Offset : A router may have to fragment a packet when forwarding it from one medium to another
medium that has a smaller MTU. When fragmentation occurs, the IPv4 packet uses the Fragment Offset field and
the MF flag in the IP header to reconstruct the packet when it arrives at the destination host. The fragment offset
field identifies the order in which to place the packet fragment in the reconstruction.
Flags(3 bits): Only two of the bits are currently defined: MF(More Fragments) and DF(Don't Fragment):
More Fragments flag (MF): The More Fragments (MF) flag is a single bit in the Flag field used with the
Fragment Offset for the fragmentation and reconstruction of packets. If the More Fragments flag bit is set, it
means that this is not the last fragment of a packet. When a receiving host sees a packet arrive with MF = 1, it
examines the Fragment Offset to see where this fragment is to be placed in the reconstructed packet. When a
receiving host receives a frame with the MF = 0 and a non-zero value in the Fragment offset, it places that
fragment as the last part of the reconstructed packet. An unfragmented packet has all zero fragmentation
information (MF = 0, fragment offset =0).
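A sketch of the reassembly rule in Python; the three fragment sizes below are invented for illustration (offsets are in 8-byte units, as the Fragment Offset field requires):

```python
# Reassemble fragments using Fragment Offset (in 8-byte units) and the
# MF flag, regardless of the order in which fragments arrive.
fragments = [
    {"offset": 370, "mf": 0, "data": b"C" * 520},   # MF=0: last fragment
    {"offset": 0,   "mf": 1, "data": b"A" * 1480},  # first fragment
    {"offset": 185, "mf": 1, "data": b"B" * 1480},  # middle fragment
]

# The MF=0 fragment fixes the total length of the original payload.
last = next(f for f in fragments if f["mf"] == 0)
payload = bytearray(last["offset"] * 8 + len(last["data"]))

# Place each fragment's data at offset * 8 in the reassembly buffer.
for frag in fragments:
    start = frag["offset"] * 8
    payload[start:start + len(frag["data"])] = frag["data"]

print(len(payload))  # 3480
```

The 1480-byte fragments correspond to a typical Ethernet MTU of 1500 minus the 20-byte IP header; each full fragment advances the offset by 1480 / 8 = 185 units.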
Don't Fragment flag (DF): The Don't Fragment (DF) flag is a single bit in the Flag field that indicates that
fragmentation of the packet is not allowed. If a router needs to fragment a packet to pass it down to the Data
Link layer but the DF bit is set to 1, the router discards the packet.
IP Destination Address
The IP Destination Address field contains a 32-bit binary value that represents the packet destination Network
layer host address.
IP Source Address
The IP Source Address field contains a 32-bit binary value that represents the packet source Network layer host
address.
Time-to-Live
The Time-to-Live (TTL) is an 8-bit binary value that indicates the remaining "life" of the packet. The TTL value
is decreased by at least one each time the packet is processed by a router (that is, each hop). When the value
becomes zero, the router discards or drops the packet and it is removed from the network data flow. This
mechanism prevents packets that cannot reach their destination from being forwarded indefinitely between
routers in a routing loop. If routing loops were permitted to continue, the network would become congested with
data packets that will never reach their destination. Decrementing the TTL value at each hop ensures that it
eventually becomes zero and that the packet with the expired TTL field will be dropped.
Protocol:
This 8-bit binary value indicates the data payload type that the packet is carrying. The Protocol field enables the
Network layer to pass the data to the appropriate upper-layer protocol.
Header checksum (16 bits): An error-detecting code applied to the header only. Because some header fields
may change during transit (e.g., time to live, segmentation-related fields), this is reverified and recomputed at
each router. The checksum field is the 16-bit one's complement addition of all 16-bit words in the header. For
purposes of computation, the checksum field is itself initialized to a value of zero.
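The computation described above can be sketched in Python. The 20-byte sample header used below is a commonly cited illustrative IPv4 header, not taken from these notes:

```python
def ipv4_checksum(header: bytes) -> int:
    # One's complement of the one's-complement sum of all 16-bit words,
    # with the checksum field itself zeroed before computing.
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# 20-byte header with the checksum field (bytes 10-11) set to zero:
hdr = bytes.fromhex("4500007300004000" "40110000" "c0a80001" "c0a800c7")
print(hex(ipv4_checksum(hdr)))  # 0xb861
```

A receiver verifies the header by running the same sum over all words including the stored checksum; a correct header yields 0.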
Class A 1-126 Unicast (Large Network)
Class B 128-191 Unicast (Medium Network)
Class C 192-223 Unicast (Small Network)
Class D 224-239 Multicast
Class E 240-255 Reserved
Class A Blocks
A class A address block was designed to support extremely large networks with more than 16 million host
addresses. Class A IPv4 addresses used a fixed /8 prefix with the first octet to indicate the network address. The
remaining three octets were used for host addresses.
The first bit of a Class A address is always 0. With that first bit a 0, the lowest number that can be represented is
00000000, decimal 0. The highest number that can be represented is 01111111, decimal 127. The numbers 0 and
127 are reserved and cannot be used as network addresses. Any address that starts with a value between 1 and
126 in the first octet is a Class A address.
No. of Class A networks: 2^7
No. of usable host addresses per network: 2^24 – 2 (minus 2 because two addresses are reserved for the
network and broadcast addresses)
Class B Blocks
Class B address space was designed to support the needs of moderate to large size networks with more than
65,000 hosts. A class B IP address used the two high-order octets to indicate the network address. The other two
octets specified host addresses. As with class A, address space for the remaining address classes needed to be
reserved.
The first two bits of the first octet of a Class B address are always 10. The remaining six bits may be populated
with either 1s or 0s. Therefore, the lowest number that can be represented with a Class B address is 10000000,
decimal 128. The highest number that can be represented is 10111111, decimal 191. Any address that starts with
a value in the range of 128 to 191 in the first octet is a Class B address.
No. of Class B networks: 2^14
No. of usable host addresses per network: 2^16 – 2
Class C Blocks:
The class C address space was the most commonly available of the historic address classes. This address space
was intended to provide addresses for small networks with a maximum of 254 hosts.
Class C address blocks used a /24 prefix. This meant that a class C network used only the last octet as host
addresses with the three high-order octets used to indicate the network address.
A Class C address begins with binary 110. Therefore, the lowest number that can be represented is 11000000,
decimal 192. The highest number that can be represented is 11011111, decimal 223. If an address contains a
number in the range of 192 to 223 in the first octet, it is a Class C address.
No. of Class C networks: 2^21
No. of usable host addresses per network: 2^8 – 2
Class D Blocks:
The Class D address class was created to enable multicasting in an IP address. A multicast address is a unique
network address that directs packets with that destination address to predefined groups of IP addresses.
Therefore, a single station can simultaneously transmit a single stream of data to multiple recipients.
The Class D address space, much like the other address spaces, is mathematically constrained. The first four bits
of a Class D address must be 1110. Therefore, the first octet range for Class D addresses is 11100000 to
11101111, or 224 to 239. An IP address that starts with a value in the range of 224 to 239 in the first octet is a
Class D address.
Class E Block:
A Class E address has been defined. However, the Internet Engineering Task Force (IETF) reserves these
addresses for its own research. Therefore, no Class E addresses have been released for use in the Internet. The
first four bits of a Class E address are always set to 1s. Therefore, the first octet range for Class E addresses is
11110000 to 11111111, or 240 to 255.
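The first-octet ranges above translate directly into a classification routine. A sketch, with 0 and 127 treated as reserved per the Class A rules:

```python
# Map an IPv4 address to its historic class from the first octet.
def ip_class(ip: str) -> str:
    first = int(ip.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"
    if 240 <= first <= 255:
        return "E"
    return "reserved"  # first octet 0 or 127

print(ip_class("10.0.0.1"))    # A
print(ip_class("224.0.0.5"))   # D
```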
Every IP address also has two parts. The first part identifies the network (Network ID) where the system is
connected, and the second part identifies the system (Host ID).
Fig: Network and host portions of Class A, B, and C addresses
Within the address range of each IPv4 network, we have three types of addresses:
Network address - The address by which we refer to the network
Broadcast address - A special address used to send data to all hosts in the network
Host addresses - The addresses assigned to the end devices in the network
Special IPv4 addresses:
Default Route: We represent the IPv4 default route as 0.0.0.0. The default route is used as a "catch all" route
when a more specific route is not available. The use of this address also reserves all addresses in the 0.0.0.0 -
0.255.255.255 (0.0.0.0 /8) address block.
Network and Broadcast Addresses: As explained earlier, within each network the first and last addresses
cannot be assigned to hosts. These are the network address and the broadcast address, respectively.
Loopback: One such reserved address is the IPv4 loopback address 127.0.0.1. The loopback is a special address
that hosts use to direct traffic to themselves. Although only the single 127.0.0.1 address is used, addresses
127.0.0.0 to 127.255.255.255 are reserved. Any address within this block will loop back within the local host.
No address within this block should ever appear on any network.
Link-Local Addresses: IPv4 addresses in the address block 169.254.0.0 to 169.254.255.255 (169.254.0.0 /16)
are designated as link-local addresses. These addresses can be automatically assigned to the local host by the
operating system in environments where no IP configuration is available. These might be used in a small peer-to-
peer network or for a host that could not automatically obtain an address from a Dynamic Host Configuration
Protocol (DHCP) server.
TEST-NET Addresses: The address block 192.0.2.0 to 192.0.2.255 (192.0.2.0 /24) is set aside for teaching and
learning purposes. These addresses can be used in documentation and network examples.
Network Prefixes: An important question is: How do we know how many bits represent the network portion
and how many bits represent the host portion? When we express an IPv4 network address, we add a prefix length
to the network address. The prefix length is the number of bits in the address that gives us the network portion.
For example, in 172.16.4.0 /24, the /24 is the prefix length - it tells us that the first 24 bits are the network
address. This leaves the remaining 8 bits, the last octet, as the host portion.
Subnet Mask:
To define the network and host portions of an address, the devices use a separate 32-bit pattern called a subnet
mask. We express the subnet mask in the same dotted decimal format as the IPv4 address. The subnet mask is
created by placing a binary 1 in each bit position that represents the network portion and placing a binary 0 in
each bit position that represents the host portion.
The prefix and the subnet mask are different ways of representing the same thing - the network portion of an
address.
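Python's standard-library ipaddress module treats the two notations as the same thing, using the 172.16.4.0/24 example from the previous section:

```python
import ipaddress

# The prefix length and the subnet mask carry the same information.
net = ipaddress.ip_network("172.16.4.0/24")
print(net.prefixlen)   # 24
print(net.netmask)     # 255.255.255.0
```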
CIDR:
A routing system used by routers and gateways on the backbone of the Internet for routing packets. CIDR
replaces the old class method of allocating 8, 16, or 24 bits to the network ID, and instead allows any number of
contiguous bits in the IP address to be allocated as the network ID. For example, if a company needs a few
thousand IP addresses for its network, it can allocate 11 or 12 bits of the address for the network ID instead of 8
bits for a class C (which wouldn’t work because you would need to use several class C networks) or 16 bits for
class B (which is wasteful).
How It Works
CIDR assigns a numerical prefix to each IP address. For example, a typical destination IP address using CIDR
might be 177.67.5.44/13. The prefix 13 indicates that the first 13 bits of the IP address identify the network,
while the remaining 32 - 13 = 19 bits identify the host. The prefix helps to identify the Internet destination
gateway or group of gateways to which the packet will be forwarded. Prefixes vary in size, with longer prefixes
indicating more specific destinations. Routers use the longest possible prefix in their routing tables when
determining how to forward each packet. CIDR enables packets to be sent to groups of networks instead of to
individual networks, which considerably simplifies the complex routing tables of the Internet’s backbone routers.
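Longest-prefix matching can be sketched with Python's standard-library ipaddress module. The routing-table entries and gateway names below are invented for illustration; note that the /13 block containing the text's example address 177.67.5.44 is 177.64.0.0/13:

```python
import ipaddress

# Hypothetical routing table: network prefix -> next-hop name.
table = {
    ipaddress.ip_network("177.64.0.0/13"): "gateway-A",
    ipaddress.ip_network("177.67.5.0/24"): "gateway-B",
    ipaddress.ip_network("0.0.0.0/0"): "default",
}

def lookup(ip: str) -> str:
    # Among all matching prefixes, the longest (most specific) wins.
    dest = ipaddress.ip_address(ip)
    matches = [net for net in table if dest in net]
    return table[max(matches, key=lambda net: net.prefixlen)]

print(lookup("177.67.5.44"))  # gateway-B: the /24 beats the /13
print(lookup("177.70.1.1"))   # gateway-A: only the /13 (and default) match
```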
1. No. of subnetworks = 2^BB
2. No. of usable hosts per subnetwork = 2^BR – 2
where TB = BB + BR, and:
TB = total bits in the host portion
BB = bits borrowed (for the subnet ID)
BR = bits remaining (for host addresses)
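A sketch of the two formulas in Python, with TB, BB, and BR as defined above:

```python
# No. of subnetworks = 2^BB; usable hosts per subnetwork = 2^BR - 2,
# where BR = TB - BB.
def subnet_counts(total_host_bits: int, borrowed: int):
    remaining = total_host_bits - borrowed
    return 2 ** borrowed, 2 ** remaining - 2

# Class C (TB = 8) borrowing 2 bits: 4 subnets, 62 usable hosts each.
print(subnet_counts(8, 2))   # (4, 62)
```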
Class C subnet masks can be the following:
Binary Decimal CIDR
00000000 = 0 /24
10000000 = 128 /25
11000000 = 192 /26
11100000 = 224 /27
11110000 = 240 /28
11111000 = 248 /29
11111100 = 252 /30
We can’t use a /31 or /32 because we have to have at least 2 host bits for assigning IP addresses
to hosts.
All you need to do is answer five simple questions:
1. How many subnets does the chosen subnet mask produce?
2. How many valid hosts per subnet are available?
3. What are the valid subnets?
4. What's the broadcast address of each subnet?
5. What are the valid hosts in each subnet?
Example: a Class C network with subnet mask 255.255.255.192 (/26).
How many subnets? Two subnet bits are on (11000000), so 2^2 = 4 subnets.
How many hosts per subnet? We have 6 host bits off (11000000), so the equation would be 2^6 – 2 = 62 hosts.
What are the valid subnets? 256 – 192 = 64. Remember, we start at zero and count in our block size, so our
subnets are 0, 64, 128, and 192. (Magic number = 256 – subnet mask.)
What's the broadcast address for each subnet? The number right before the value of the next subnet is all host
bits turned on and equals the broadcast address. For the zero subnet, the next subnet is 64, so the broadcast
address for the zero subnet is 63.
What are the valid hosts? These are the numbers between the subnet and broadcast address. The easiest way to
find the hosts is to write out the subnet address and the broadcast address. This way, the valid hosts are obvious.
The following table shows the 0, 64, 128, and 192 subnets, the valid host ranges of each, and the broadcast
address of each subnet:
Subnet 0 64 128 192
Broadcast 63 127 191 255
First host 1 65 129 193
Last host 62 126 190 254
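The five answers can be checked with Python's standard-library ipaddress module; 192.168.1.0 is an assumed Class C network address, since the notes don't name one:

```python
import ipaddress

# Enumerate the four /26 subnets of an assumed Class C network and print
# each subnet's network address, first host, last host, and broadcast.
net = ipaddress.ip_network("192.168.1.0/24")
for sub in net.subnets(new_prefix=26):
    hosts = list(sub.hosts())
    print(sub.network_address, hosts[0], hosts[-1], sub.broadcast_address)
# 192.168.1.0 192.168.1.1 192.168.1.62 192.168.1.63
# 192.168.1.64 192.168.1.65 192.168.1.126 192.168.1.127
# 192.168.1.128 192.168.1.129 192.168.1.190 192.168.1.191
# 192.168.1.192 192.168.1.193 192.168.1.254 192.168.1.255
```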
Subnetting a Class B network: 172.16.0.0/17
255.255.128.0 (/17)
172.16.0.0 = Network address
255.255.128.0 = Subnet mask
Subnets? 2^1 = 2 (same as Class C).
Hosts? 2^15 – 2 = 32,766 (7 bits in the third octet, and 8 in the fourth).
Valid subnets? 256 – 128 = 128, giving 0 and 128. Remember that subnetting is performed in the third octet, so
the subnet numbers are really 0.0 and 128.0, as shown in the next table.
These are the exact numbers we used with Class C; we use them in the third octet and add a 0 in the fourth octet
for the network address.
Subnet 0.0 128.0
Broadcast 127.255 255.255
First host 0.1 128.1
Last host 127.254 255.254
Another example: 172.16.0.0/18
255.255.192.0 (/18)
172.16.0.0 = Network address
255.255.192.0 = Subnet mask
Subnets? 2^2 = 4.
Hosts? 2^14 – 2 = 16,382 (6 bits in the third octet, and 8 in the fourth).
Valid subnets? 256 – 192 = 64, giving 0, 64, 128, and 192. Remember that the subnetting is performed in the
third octet, so the subnet numbers are really 0.0, 64.0, 128.0, and 192.0, as shown in the next table.
The following table shows the four subnets available, the valid host range, and the broadcast address of each:
Subnet 0.0 64.0 128.0 192.0
Broadcast 63.255 127.255 191.255 255.255
First host 0.1 64.1 128.1 192.1
Last host 63.254 127.254 191.254 255.254
Another example: 172.16.0.0/25
255.255.255.128 (/25)
This is one of the hardest subnet masks you can play with. And worse, it actually is a really
good subnet to use in production because it creates over 500 subnets with 126 hosts for each
subnet—a nice mixture. So, don’t skip over it!
172.16.0.0 = Network address
255.255.255.128 = Subnet mask
Subnets? 2^9 = 512.
Hosts? 2^7 – 2 = 126.
Valid subnets? Okay, now for the tricky part. 256 – 255 = 1. 0, 1, 2, 3, etc. for the third octet. But you can’t
forget the one subnet bit used in the fourth octet. You actually get two subnets for each third octet value, hence
the 512 subnets. For example, if the third octet is showing subnet 3, the two subnets would actually be 3.0 and
3.128.
Broadcast address for each subnet?
Valid hosts?
The following table shows how you can create subnets, valid hosts, and broadcast addresses using the Class B
255.255.255.128 subnet mask (the first eight subnets are shown, and then the last two subnets):
Subnet 0.0 0.128 1.0 1.128 2.0 2.128 3.0 3.128 ... 255.0 255.128
Broadcast 0.127 0.255 1.127 1.255 2.127 2.255 3.127 3.255 ... 255.127 255.255
First host 0.1 0.129 1.1 1.129 2.1 2.129 3.1 3.129 ... 255.1 255.129
Last host 0.126 0.254 1.126 1.254 2.126 2.254 3.126 3.254 ... 255.126 255.254
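As a cross-check, Python's ipaddress module reproduces the 512 subnets and their broadcast addresses; 172.16.3.128 is the eighth subnet, matching the 3.0/3.128 discussion above:

```python
import ipaddress

# Split the Class B network into /25 subnets and verify the counts.
net = ipaddress.ip_network("172.16.0.0/16")
subs = list(net.subnets(new_prefix=25))
print(len(subs))                   # 512
print(subs[7])                     # 172.16.3.128/25
print(subs[7].broadcast_address)   # 172.16.3.255
```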
Subnetting Class A network: 10.0.0.0/16
255.255.0.0 (/16)
Class A addresses use a default mask of 255.0.0.0, which leaves 22 bits for subnetting since you must leave 2
bits for host addressing. The 255.255.0.0 mask with a Class A address is using 8 subnet bits.
Subnets? 2^8 = 256.
Hosts? 2^16 – 2 = 65,534.
Valid subnets? What is the interesting octet? 256 – 255 = 1. 0, 1, 2, 3, etc. (all in the second octet). The subnets
would be 10.0.0.0, 10.1.0.0, 10.2.0.0, 10.3.0.0, etc., up to 10.255.0.0.
Broadcast address for each subnet?
Valid hosts?
The following table shows the first two and last two subnets, valid host range, and broadcast addresses for the
private Class A 10.0.0.0 network:
Subnet 10.0.0.0 10.1.0.0 ... 10.254.0.0 10.255.0.0
Broadcast 10.0.255.255 10.1.255.255 ... 10.254.255.255 10.255.255.255
First host 10.0.0.1 10.1.0.1 ... 10.254.0.1 10.255.0.1
Last host 10.0.255.254 10.1.255.254 ... 10.254.255.254 10.255.255.254
IPv6:
Features of IPv6:
• Larger address space: Offers improved global reachability and flexibility; aggregation of the prefixes
that are announced in routing tables; multihoming to several Internet service providers (ISPs);
autoconfiguration that can include link-layer addresses in the address space; plug-and-play options;
public-to-private readdressing end to end without address translation; and simplified mechanisms for
address renumbering and modification.
• Simpler header: Provides better routing efficiency; no broadcasts and thus no potential threat of
broadcast storms; no requirement for processing checksums; simpler and more efficient extension header
mechanisms; and flow labels for per-flow processing with no need to open the transport inner packet to
identify the various traffic flows.
• Mobility and security: Ensures compliance with mobile IP and IPsec standards functionality; mobility
is built in, so any IPv6 node can use it when necessary; and enables people to move around in networks
with mobile network devices—with many having wireless connectivity.
Mobile IP is an Internet Engineering Task Force (IETF) standard available for both IPv4 and IPv6. The
standard enables mobile devices to move without breaks in established network connections. Because
IPv4 does not automatically provide this kind of mobility, you must add it with additional
configurations.
IPsec is the IETF standard for IP network security, available for both IPv4 and IPv6. Although the
functionalities are essentially identical in both environments, IPsec is mandatory in IPv6. IPsec is
enabled on every IPv6 node and is available for use. The availability of IPsec on all nodes makes the
IPv6 Internet more secure. IPsec also requires keys for each party, which implies a global key
deployment and distribution.
• Transition richness: You can incorporate existing IPv4 capabilities in IPv6 in the following ways:
• Configure a dual stack with both IPv4 and IPv6 on the interface of a network device.
• Use the technique IPv6 over IPv4 (also called 6to4 tunneling), which uses an IPv4 tunnel to
carry IPv6 traffic. This method (RFC 3056) replaces IPv4-compatible tunneling (RFC 2893).
Cisco IOS Software Release 12.3(2)T (and later) also allows protocol translation (NAT-PT)
between IPv6 and IPv4. This translation allows direct communication between hosts speaking
different protocols.
IPv4 VS IPv6
An IPv4 header contains the following fields:
Version: The IP version number, 4.
Length: The length of the datagram header in 32-bit words.
Type of service: Contains five subfields that specify the precedence, delay, throughput, reliability, and cost
desired for a packet. (The Internet does not guarantee this request.) This field is not widely used on the Internet.
Total length: The length of the datagram in bytes, including the header, options, and the appended transport
protocol segment or packet.
Identification: An integer that identifies the datagram.
Flags: Controls datagram fragmentation together with the identification field. The flags indicate whether the
datagram may be fragmented, whether the datagram is fragmented, and whether the current fragment is the final
one.
Fragment offset: The relative position of this fragment measured from the beginning of the original datagram,
in units of 8 bytes.
Time to live: How many routers a datagram can pass through. Each router decrements this value by 1 until it
reaches 0, when the datagram is discarded. This keeps misrouted datagrams from remaining on the Internet
forever.
Protocol: The high-level protocol type.
Header checksum: A number that is computed to ensure the integrity of the header values.
Source address: The 32-bit IPv4 address of the sending host.
Destination address: The 32-bit IPv4 address of the receiving host.
Options: A list of optional specifications for security restrictions, route recording, and source routing. Not every
datagram specifies an options field.
Padding: Null bytes added to make the header length an integral multiple of 32 bits, as required by the header
length field.
IPv6 header:
The IPv6 header omits several IPv4 fields, including:
• fragment offset (this is moved into fragmentation extension headers)
• header checksum (the upper-layer protocol or security extension header handles data integrity)
IPv6 options improve over IPv4 by being placed in separate extension headers that are located between the IPv6
header and the transport-layer header in a packet. Most extension headers are not examined or processed by any
router along a packet's delivery path until it arrives at its final destination. This mechanism improves router
performance for packets containing options. In IPv4, the presence of any options requires the router to examine
all options.
Another improvement is that IPv6 extension headers, unlike IPv4 options, can be of arbitrary length and the total
amount of options that a packet carries is not limited to 40 bytes. This feature, and the manner in which it is
processed, permit IPv6 options to be used for functions that were not practical in IPv4, such as the IPv6
Authentication and Security Encapsulation options.
By using extension headers, instead of a protocol specifier and options fields, newly defined extensions can be
integrated more easily into IPv6.
IPV6 Addressing:
Address Representation:
Represented by breaking the 128 bits into eight 16-bit segments (four hex characters each).
Each segment is written in hexadecimal, separated by colons.
Hex digits are not case sensitive.
Rule 1:
Drop leading zeros:
2001:0050:0000:0235:0ab4:3456:456b:e560
2001:050:0:235:ab4:3456:456b:e560
Rule 2:
Successive fields of zeros can be represented as "::", but the double colon may appear only once in the address.
FF01:0:0:0:0:0:0:1
FF01::1
Note : An address parser identifies the number of missing zeros by separating the two parts and entering 0 until
the 128 bits are complete. If two “::” notations are placed in the address, there is no way to identify the size of
each block of zeros.
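Python's standard-library ipaddress module applies both rules automatically, which makes it easy to check a compressed form:

```python
import ipaddress

# The module accepts any valid IPv6 notation and can emit both the
# compressed form (rules 1 and 2 applied) and the fully expanded form.
addr = ipaddress.ip_address("FF01:0:0:0:0:0:0:1")
print(addr.compressed)  # ff01::1
print(addr.exploded)    # ff01:0000:0000:0000:0000:0000:0000:0001
```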
IPv4 vs IPv6
1. IPv4: Source and destination addresses are 32 bits. IPv6: Source and destination addresses are 128 bits.
2. IPv4: Supports a small address space. IPv6: Supports a very large address space, sufficient for every person
on earth.
3. IPv4: The header includes a checksum. IPv6: The header does not include a checksum (the upper-layer
protocol or security extension header handles data integrity).
4. IPv4: Addresses are represented in dotted decimal format (e.g. 192.168.5.1). IPv6: Addresses are represented
as 16-bit segments written in hexadecimal and separated by colons (e.g.
2001:0050:020c:0235:0ab4:3456:456b:e560).
5. IPv4: The header includes options. IPv6: All optional data is moved to IPv6 extension headers.
6. IPv4: Broadcast addresses are used to send traffic to all nodes on a subnet. IPv6: There is no IPv6 broadcast
address; instead, a link-local scope all-nodes multicast address is used.
7. IPv4: No identification of packet flow for QoS handling by routers is present within the IPv4 header. IPv6:
Packet flow identification for QoS handling by routers is present within the IPv6 header, using the Flow Label
field.
8. IPv4: Uses host address (A) resource records in the Domain Name System (DNS) to map host names to IPv4
addresses. IPv6: Uses AAAA records in the DNS to map host names to IPv6 addresses.
9. IPv4: Both routers and the sending host fragment packets. IPv6: Only the sending host fragments packets;
routers do not.
10. IPv4: ICMP Router Discovery is used to determine the IPv4 address of the best default gateway, and it is
optional. IPv6: ICMPv6 Router Solicitation and Router Advertisement messages are used to determine the IP
address of the best default gateway, and they are required.
Dual Stack:
Dual stack is an integration method in which a node implements, and has connectivity to, both IPv4 and IPv6 networks. If both IPv4 and IPv6 are configured on an interface, that interface is dual-stacked.
Tunneling Technique
With manually configured IPv6 tunnels, an IPv6 address is configured on a tunnel interface, and manually
configured IPv4 addresses are assigned to the tunnel source and the tunnel destination. The host or router at each
end of a configured tunnel must support both the IPv4 and IPv6 protocol stacks.
NAT-Protocol Translation (NAT-PT)
NAT-PT is a translation mechanism that sits between an IPv6 network and an IPv4 network. The translator translates IPv6 packets into IPv4 packets and vice versa.
Chapter: 7 Network and Internet Layer
Network Layer and Design Issues; Virtual Circuits and Datagrams; Introduction to Routing –
Shortest Path Routing Algorithm, Flow-Based Routing Algorithm, Distance Vector Routing Algorithm,
Spanning Tree Routing; Congestion Control; Traffic Shaping and the Leaky Bucket Algorithm.
It is interesting to note here that it is easy to provide a connection-oriented service on top of an inherently connectionless service, so in fact defining the service of the network layer as connectionless is the general solution. However, at the time the network layer was defined, the controversy between the two camps was (and still is) unresolved, and so instead of deciding on one service, the ISO allowed both.
Circuit Switching:
A dedicated path between the source node and the destination node is set up for the duration of the communication session to transfer data. That path is a connected sequence of links between network nodes. On each physical link, a logical channel is dedicated to the connection. Communication via circuit switching involves three phases:
1. Circuit Establishment: Before any signals can be transmitted, an end-to-end (station-to-station) circuit must be established.
2. Data Transfer: The data may be analog or digital, depending on the nature of the network.
3. Circuit Disconnect: After some period of data transfer, the connection is terminated, usually by the action of one of the two stations.
Circuit Switching
1. In a typical user/host data connection (e.g., a personal computer user logged on to a database server), much of the time the line is idle. Thus, with data connections, a circuit-switching approach is inefficient.
2. In a circuit-switching network, the connection provides for transmission at a constant data rate. Thus, each of the two devices that are connected must transmit and receive at the same data rate as the other; this limits the utility of the network in interconnecting a variety of host computers and terminals.
Packet Switching:
Messages are divided into subsets of equal length called packets. In the packet switching approach, data are transmitted in short packets (a few kilobytes). A long message is broken up into a series of packets, as shown in the figure. Every packet contains some control information in its header, which is required for routing and other purposes.
The main difference between packet switching and circuit switching is that the communication lines are not dedicated to passing messages from the source to the destination. In packet switching, different messages (and even different packets) can pass through different routes, and when there is a "dead time" in the communication between the source and the destination, the lines can be used by other sources.
There are two basic approaches to packet switching: virtual circuit packet switching and datagram packet switching. In virtual-circuit packet switching, a virtual circuit is made before actual data is transmitted, but it differs from circuit switching in the sense that in circuit switching the call accept signal comes only from the final destination to the source, while in virtual-circuit packet switching this call accept signal is transmitted between each pair of adjacent intermediate nodes, as shown in the figure. Other features of virtual circuit packet switching are discussed in the following subsection.
Virtual Circuit:
An initial setup phase is used to set up a route between the intermediate nodes for all the packets passed during the session between the two end nodes. In each intermediate node, an entry is registered in a table to indicate the route for the connection that has been set up. Thus, packets passed through this route can have short headers, containing only a virtual circuit identifier (VCI) rather than their full destination address. Each intermediate node forwards the packets according to the information that was stored in it during the setup phase. In this way, packets arrive at the destination in the correct sequence, and it is guaranteed that there will be essentially no errors. This approach is slower than circuit switching, since different virtual circuits may compete for the same resources, and an initial setup phase is needed to establish the circuit. As in circuit switching, if an intermediate node fails, all virtual circuits that pass through it are lost. The most common forms of virtual circuit networks are X.25 and Frame Relay, which are commonly used for public data networks (PDN).
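The table-driven forwarding described above can be sketched as follows; the port names and VCI numbers are invented for illustration and do not come from any real X.25 or Frame Relay configuration:

```python
# Hypothetical per-node virtual-circuit table, filled in during the
# setup phase: (incoming port, incoming VCI) -> (outgoing port, outgoing VCI).
# Port names and VCI numbers are invented for illustration.
vc_table = {
    ("port1", 14): ("port3", 22),
    ("port2", 71): ("port4", 41),
}

def switch_cell(in_port, in_vci, payload):
    """Forward a packet using only its short VCI header, not a full address."""
    out_port, out_vci = vc_table[(in_port, in_vci)]
    return out_port, out_vci, payload

print(switch_cell("port1", 14, "data"))  # ('port3', 22, 'data')
```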
Virtual circuit Packet Switching techniques
Datagram:
This approach uses a different, more dynamic scheme, to determine the route through the network links. Each
packet is treated as an independent entity, and its header contains full information about the destination of the
packet. The intermediate nodes examine the header of the packet, and decide to which node to send the packet so
that it will reach its destination.
In this method, the packets do not follow a pre-established route, and the intermediate nodes (the routers) do not have pre-defined knowledge of the routes that the packets should be passed through. Packets can follow different routes to the destination, and delivery is not guaranteed. Due to the nature of this method, the packets can reach the destination in a different order than they were sent, so they must be sorted at the destination to form the original message. This approach is time consuming, since every router has to decide where to send each packet.
The main implementation of Datagram Switching network is the Internet, which uses the IP network protocol.
Router: (Introduction)
A Router is a computer, just like any other computer including a PC. Routers have many of the same hardware
and software components that are found in other computers including:
• CPU
• RAM
• ROM
• Operating System
The router is the basic backbone of the Internet. The main function of the router is to connect two or more networks and forward packets from one network to another. A router connects multiple networks. This means
that it has multiple interfaces that each belong to a different IP network. When a router receives an IP packet on
one interface, it determines which interface to use to forward the packet onto its destination. The interface that
the router uses to forward the packet may be the network of the final destination of the packet (the network with
the destination IP address of this packet), or it may be a network connected to another router that is used to reach
the destination network.
A router uses IP to forward packets from the source network to the destination network. The packets must
include an identifier for both the source and destination networks. A router uses the IP address of the destination
network to deliver a packet to the correct network. When the packet arrives at a router connected to the
destination network, the router uses the IP address to locate the specific computer on the network.
Routing and Routing Protocols:
The primary responsibility of a router is to direct packets destined for local and remote networks by:
• Determining the best path to send packets
• Forwarding packets toward their destination
The router uses its routing table to determine the best path to forward the packet. When the router receives a
packet, it examines its destination IP address and searches for the best match with a network address in the
router's routing table. The routing table also includes the interface to be used to forward the packet. Once a
match is found, the router encapsulates the IP packet into the data link frame of the outgoing or exit interface,
and the packet is then forwarded toward its destination.
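The best-match lookup described above can be sketched with Python's standard ipaddress module; the routing table entries and interface names below are invented for illustration:

```python
# Sketch of a routing-table lookup: among all entries whose network
# contains the destination, the most specific (longest) prefix wins.
# Entries and interface names are invented for illustration.
import ipaddress

routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "Serial0/0/0"),
    (ipaddress.ip_network("10.1.0.0/16"), "FastEthernet0/0"),
    (ipaddress.ip_network("0.0.0.0/0"),   "Serial0/0/1"),  # default route
]

def lookup(dest_ip):
    """Return the exit interface for the longest matching prefix."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(net, iface) for net, iface in routing_table if dest in net]
    net, iface = max(matches, key=lambda m: m[0].prefixlen)
    return iface

print(lookup("10.1.2.3"))   # most specific match (the /16) wins
print(lookup("192.0.2.9"))  # no specific match, so the default route is used
```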
Static Routes:
Static routes are configured manually; network administrators must add and delete static routes to reflect any
network topology changes. In a large network, the manual maintenance of routing tables could require a lot of
administrative time. On small networks with few possible changes, static routes require very little maintenance.
Static routing is not as scalable as dynamic routing because of the extra administrative requirements. Even in
large networks, static routes that are intended to accomplish a specific purpose are often configured in
conjunction with a dynamic routing protocol.
A network is connected to the Internet only through a single ISP. There is no need to use a dynamic routing
protocol across this link because the ISP represents the only exit point to the Internet.
Connected Routes:
Networks that are directly connected to the router are called connected routes and do not need to be configured on the router for routing. The router adds them to its routing table automatically.
Dynamic Routes:
A dynamic route is one that a routing protocol adjusts automatically for topology or traffic changes.
Non-adaptive routing algorithm: When a router uses a non-adaptive routing algorithm, it consults a static table in order to determine to which computer it should send a packet of data. This is in contrast to an adaptive routing algorithm, which bases its decisions on data that reflect current traffic conditions. (Also called a static route.)
Adaptive routing algorithm: When a router uses an adaptive routing algorithm to decide the next computer to which to transfer a packet of data, it examines the traffic conditions in order to determine a route that is as near optimal as possible. For example, it tries to pick a route that involves communication lines with light traffic. This strategy is in contrast to a non-adaptive routing algorithm. (Also called a dynamic route.)
Routing Protocol:
A routing protocol is the communication used between routers. A routing protocol allows routers to share
information about networks and their proximity to each other. Routers use this information to build and maintain
routing tables.
Autonomous System:
An AS is a collection of networks under a common administration that share a common routing strategy. To the outside world, an AS is viewed as a single entity. The AS may be run by one or more operators while it presents a consistent view of routing to the external world.
The American Registry for Internet Numbers (ARIN), a service provider, or an administrator assigns a 16-bit identification number to each AS.
Dynamic Routing Protocol:
1. Interior Gateway Protocol (IGP)
I). Distance Vector Protocol
II). Link State Protocol
2. Exterior Gateway Protocol (EGP)
Metric:
There are cases when a routing protocol learns of more than one route to the same destination. To select the best
path, the routing protocol must be able to evaluate and differentiate between the available paths. For this purpose
a metric is used. A metric is a value used by routing protocols to assign costs to reach remote networks. The
metric is used to determine which path is most preferable when there are multiple paths to the same remote
network.
Each routing protocol uses its own metric. For example, RIP uses hop count, EIGRP uses a combination of
bandwidth and delay, and Cisco's implementation of OSPF uses bandwidth.
R1 learns about the subnet, and a metric associated with that subnet, and nothing more. R1 must then pick the
best route to reach subnet X. In this case, it picks the two-hop route through R7, because that route has the
lowest metric.
Distance vector protocols typically use the Bellman-Ford algorithm for the best path route determination.
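A minimal sketch of the Bellman-Ford computation behind this choice, using the subnet-X example above; the topology and link costs are invented for illustration:

```python
# Sketch of Bellman-Ford over a hop-count metric: repeatedly relax every
# link until distances settle. Topology and costs are invented.
def bellman_ford(edges, nodes, source):
    """Return the lowest-cost distance from source to every node."""
    INF = float("inf")
    dist = {n: INF for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):          # relax all edges |V|-1 times
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
            if dist[v] + cost < dist[u]:     # links are bidirectional
                dist[u] = dist[v] + cost
    return dist

# R1 can reach subnet X either in two hops via R7 or in three hops via R5 and R6.
edges = [("R1", "R7", 1), ("R7", "X", 1),
         ("R1", "R5", 1), ("R5", "R6", 1), ("R6", "X", 1)]
nodes = {"R1", "R5", "R6", "R7", "X"}
print(bellman_ford(edges, nodes, "R1")["X"])  # 2 -- the route through R7 wins
```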
Initial Update:
R1
• Sends an update about network 10.1.0.0 out the Serial0/0/0 interface
• Sends an update about network 10.2.0.0 out the FastEthernet0/0 interface
• Receives update from R2 about network 10.3.0.0 with a metric of 1
• Stores network 10.3.0.0 in the routing table with a metric of 1
R2
• Sends an update about network 10.3.0.0 out the Serial 0/0/0 interface
• Sends an update about network 10.2.0.0 out the Serial 0/0/1 interface
• Receives an update from R1 about network 10.1.0.0 with a metric of 1
• Stores network 10.1.0.0 in the routing table with a metric of 1
• Receives an update from R3 about network 10.4.0.0 with a metric of 1
• Stores network 10.4.0.0 in the routing table with a metric of 1
R3
• Sends an update about network 10.4.0.0 out the Serial 0/0/0 interface
• Sends an update about network 10.3.0.0 out the FastEthernet0/0
• Receives an update from R2 about network 10.2.0.0 with a metric of 1
• Stores network 10.2.0.0 in the routing table with a metric of 1
After this first round of update exchanges, each router knows about the connected networks of its directly connected neighbors. However, did you notice that R1 does not yet know about 10.4.0.0 and that R3 does not yet
know about 10.1.0.0? Full knowledge and a converged network will not take place until there is another
exchange of routing information.
Next Update:
R1
• Sends an update about network 10.1.0.0 out the Serial 0/0/0 interface.
• Sends an update about networks 10.2.0.0 and 10.3.0.0 out the FastEthernet0/0 interface.
• Receives an update from R2 about network 10.4.0.0 with a metric of 2.
• Stores network 10.4.0.0 in the routing table with a metric of 2.
• The same update from R2 contains information about network 10.3.0.0 with a metric of 1. There is no change; therefore, the routing information remains the same.
R2
• Sends an update about networks 10.3.0.0 and 10.4.0.0 out the Serial 0/0/0 interface.
• Sends an update about networks 10.1.0.0 and 10.2.0.0 out the Serial 0/0/1 interface.
• Receives an update from R1 about network 10.1.0.0. There is no change; therefore, the routing information remains the same.
• Receives an update from R3 about network 10.4.0.0. There is no change; therefore, the routing information remains the same.
R3
• Sends an update about network 10.4.0.0 out the Serial 0/0/0 interface.
• Sends an update about networks 10.2.0.0 and 10.3.0.0 out the FastEthernet0/0 interface.
• Receives an update from R2 about network 10.1.0.0 with a metric of 2.
• Stores network 10.1.0.0 in the routing table with a metric of 2.
• The same update from R2 contains information about network 10.2.0.0 with a metric of 1. There is no change; therefore, the routing information remains the same.
Note: Distance vector routing protocols typically implement a technique known as split horizon. Split horizon
prevents information from being sent out the same interface from which it was received. For example, R2 would
not send an update out Serial 0/0/0 containing the network 10.1.0.0 because R2 learned about that network
through Serial 0/0/0.
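The split-horizon rule can be sketched as a simple filter on outgoing updates; the interface names and learned routes below loosely follow the R2 example but are otherwise invented:

```python
# Sketch of split horizon: a route is never advertised out of the
# interface it was learned on. Interfaces and routes are invented,
# loosely following the R2 example above.

# route -> interface the route was learned on (None = directly connected)
learned_on = {
    "10.1.0.0": "Serial0/0/0",   # learned from R1
    "10.2.0.0": None,            # directly connected
    "10.4.0.0": "Serial0/0/1",   # learned from R3
}

def routes_to_advertise(out_interface):
    """Return the routes that may be sent out of out_interface."""
    return sorted(r for r, iface in learned_on.items() if iface != out_interface)

# 10.1.0.0 is suppressed on Serial0/0/0, the interface it came from
print(routes_to_advertise("Serial0/0/0"))  # ['10.2.0.0', '10.4.0.0']
```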
Link-state routing protocols go through the following process to reach a state of convergence:
1. Each router learns about its own links, its own directly connected networks. This is done by
detecting that an interface is in the up state.
2. Each router is responsible for meeting its neighbors on directly connected networks. Link-state
routers do this by exchanging Hello packets with other link-state routers on directly connected networks.
3. Each router builds a Link-State Packet (LSP) containing the state of each directly connected link.
This is done by recording all the pertinent information about each neighbor, including neighbor ID, link
type, and bandwidth.
4. Each router floods the LSP to all neighbors, who then store all LSPs received in a database.
Neighbors then flood the LSPs to their neighbors until all routers in the area have received the LSPs.
Each router stores a copy of each LSP received from its neighbors in a local database.
5. Each router uses the database to construct a complete map of the topology and computes the best
path to each destination network. Like having a road map, the router now has a complete map of all
destinations in the topology and the routes to reach them. The SPF algorithm is used to construct the
map of the topology and to determine the best path to each network.
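Step 5 above, running SPF over the link-state database, can be sketched with Dijkstra's algorithm; the four-node topology and link costs are invented for illustration:

```python
# Sketch of the SPF computation: Dijkstra's algorithm over a link-state
# database held as an adjacency map. Topology and costs are invented.
import heapq

# LSDB: node -> [(neighbor, link cost), ...]
lsdb = {
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 1)],
    "D": [("B", 4), ("C", 1)],
}

def spf(root):
    """Compute the lowest-cost distance from root to every node."""
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, already improved
        for v, cost in lsdb[u]:
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                heapq.heappush(heap, (d + cost, v))
    return dist

# From A, the best path to D costs 4 (A-B-C-D), beating the direct A-C link.
print(spf("A"))
```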
Faster Convergence:
When receiving a Link-State Packet (LSP), link-state routing protocols immediately flood the LSP out all interfaces except the interface from which the LSP was received. In this way, they achieve faster convergence. With a distance vector routing algorithm, a router needs to process each routing update and update its routing table before flooding the update out other interfaces.
5. Distance vector: Susceptible to routing loops.
   Link state: Not as susceptible to routing loops.
6. Distance vector: Easy to configure and administer.
   Link state: Difficult to configure and administer.
7. Distance vector: Requires less memory and processing power on routers.
   Link state: Requires more processing power and memory than distance vector.
8. Distance vector: Consumes a lot of bandwidth.
   Link state: Consumes less bandwidth than distance vector.
9. Distance vector: Passes copies of the routing table to neighbor routers.
   Link state: Passes link-state routing updates to other routers.
10. Distance vector: e.g., RIP.
    Link state: e.g., OSPF.
across the river creates a redundant topology. The suburb is not cut off from the town center if one bridge is
impassable.
Issues with Redundancy:
Redundant link
Layer 2 loops
Ethernet frames do not have a time to live (TTL) field like IP packets traversing routers. As a result, if they are not terminated properly on a switched network, they continue to bounce from switch to switch endlessly, or until a link is disrupted and breaks the loop.
Broadcast storms
A broadcast storm occurs when there are so many broadcast frames caught in a Layer 2 loop that all available bandwidth is consumed. Consequently, no bandwidth is available for legitimate traffic, and the network becomes unavailable for data communication.
What is STP?
Redundancy increases the availability of the network topology by protecting the network from a single point of
failure, such as a failed network cable or switch. When redundancy is introduced into a Layer 2 design, loops
and duplicate frames can occur. Loops and duplicate frames can have severe consequences on a network. The
Spanning Tree Protocol (STP) was developed to address these issues.
STP ensures that there is only one logical path between all destinations on the network by intentionally blocking
redundant paths that could cause a loop. A port is considered blocked when network traffic is prevented from
entering or leaving that port. This does not include bridge protocol data unit (BPDU) frames that are used by
STP to prevent loops. You will learn more about STP BPDU frames later in the chapter. Blocking the redundant
paths is critical to preventing loops on the network. The physical paths still exist to provide redundancy, but
these paths are disabled to prevent the loops from occurring. If the path is ever needed to compensate for a
network cable or switch failure, STP recalculates the paths and unblocks the necessary ports to allow the
redundant path to become active.
Congestion control:
Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets a network can handle). Congestion control refers to the mechanisms and techniques used to control the congestion and keep the load below the capacity.
• When too many packets are pumped into the system, congestion occurs, leading to degradation of performance.
• Congestion tends to feed upon itself and build up.
• Congestion reflects a lack of balance between various pieces of networking equipment.
• It is a global issue.
In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion
control (prevention) and closed-loop congestion control (removal) as shown in Figure
Retransmission Policy
Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or corrupted, the packet
needs to be retransmitted. Retransmission in general may increase congestion in the network. However, a good
retransmission policy can prevent congestion. The retransmission policy and the retransmission timers must be
designed to optimize efficiency and at the same time prevent congestion. For example, the retransmission policy
used by TCP is designed to prevent or alleviate congestion.
Window Policy
The type of window at the sender may also affect congestion. The Selective Repeat window is better than the
Go-Back-N window for congestion control. In the Go-Back-N window, when the timer for a packet times out,
several packets may be resent, although some may have arrived safe and sound at the receiver. This duplication
may make the congestion worse. The Selective Repeat window, on the other hand, tries to send the specific
packets that have been lost or corrupted.
Acknowledgment Policy :
The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver does not
acknowledge every packet it receives, it may slow down the sender and help prevent congestion. Several
approaches are used in this case. A receiver may send an acknowledgment only if it has a packet to be sent or a
special timer expires. A receiver may decide to acknowledge only N packets at a time. We need to know that the
acknowledgments are also part of the load in a network. Sending fewer acknowledgments means imposing less
load on the network.
Discarding Policy :
A good discarding policy by the routers may prevent congestion and at the same time may not harm the integrity
of the transmission. For example, in audio transmission, if the policy is to discard less sensitive packets when
congestion is likely to happen, the quality of sound is still preserved and congestion is prevented or alleviated.
Admission Policy:
An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit networks. A switch first checks the resource requirements of a flow before admitting it to the network. A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there is a possibility of future congestion.
Back-pressure:
The technique of backpressure refers to a congestion control mechanism in which a congested node stops receiving data from the immediate upstream node or nodes. This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their upstream nodes, and so on. Backpressure is a node-to-node congestion control that starts with a node and propagates, in the opposite direction of data flow, to the source. The backpressure technique can be applied only to virtual circuit networks, in which each node knows the upstream node from which a flow of data is coming.
Node III in the figure has more input data than it can handle. It drops some packets in its input buffer and
informs node II to slow down. Node II, in turn, may be congested because it is slowing down the output flow of
data. If node II is congested, it informs node I to slow down, which in turn may create congestion. If so, node I
informs the source of data to slow down. This, in time, alleviates the congestion. Note that the pressure on node
III is moved backward to the source to remove the congestion. None of the virtual-circuit networks we studied in
this book use backpressure. It was, however, implemented in the first virtual-circuit network, X.25. The
technique cannot be implemented in a datagram network because in this type of network, a node (router) does
not have the slightest knowledge of the upstream router.
Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion. Note the difference between the backpressure and choke packet methods. In backpressure, the warning is from one node to its upstream node, although the warning may eventually reach the source station. In the choke packet method, the warning is from the router that has encountered congestion to the source station directly; the intermediate nodes through which the packet has traveled are not warned. We have seen an example of this type of control in ICMP. When a router in the Internet is overwhelmed with datagrams, it may discard some of them, but it informs the source host using an ICMP source quench message. The warning message goes directly to the source station; the intermediate routers do not take any action. The figure shows the idea of a choke packet.
Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes and the source. The
source guesses that there is a congestion somewhere in the network from other symptoms. For example, when a
source sends several packets and there is no acknowledgment for a while, one assumption is that the network is
congested. The delay in receiving an acknowledgment is interpreted as congestion in the network; the source
should slow down. We will see this type of signaling when we discuss TCP congestion control later in the
chapter.
Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or destination. The explicit
signaling method, however, is different from the choke packet method. In the choke packet method, a separate
packet is used for this purpose; in the explicit signaling method, the signal is included in the packets that carry
data. Explicit signaling, as we will see in Frame Relay congestion control, can occur in either the forward or the
backward direction.
Backward Signaling A bit can be set in a packet moving in the direction opposite to the congestion. This bit can
warn the source that there is congestion and that it needs to slow down to avoid the discarding of packets.
Forward Signaling A bit can be set in a packet moving in the direction of the congestion. This bit can warn the
destination that there is congestion. The receiver in this case can use policies, such as slowing down the
acknowledgments, to alleviate the congestion.
Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. Two
techniques can shape traffic: leaky bucket and token bucket.
Leaky Bucket
If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as long as there is
water in the bucket. The rate at which the water leaks does not depend on the rate at which the water is input to
the bucket unless the bucket is empty. The input rate can vary, but the output rate remains constant. Similarly, in
networking, a technique called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket
and sent out at an average rate. Figure shows a leaky bucket and its effects.
In the figure, we assume that the network has committed a bandwidth of 3 Mbps for a host. The use of the leaky
bucket shapes the input traffic to make it conform to this commitment. In Figure 24.19 the host sends a burst of
data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data. The host is silent for 5 s and then sends data at a
rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky
bucket smooths the traffic by sending out data at a rate of 3 Mbps during the same 10 s. Without the leaky
bucket, the beginning burst may have hurt the network
by consuming more bandwidth than is set aside for this host. We can also see that the leaky bucket may prevent
congestion.
Leaky Bucket Implementation
A simple leaky bucket implementation is shown in Figure 24.20. A FIFO queue holds the packets. If the traffic
consists of fixed-size packets (e.g., cells in ATM networks), the process removes a fixed number of packets from
the queue at each tick of the clock. If the traffic consists of variable-length packets, the fixed output rate must be
based on the number of bytes or bits.
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may drop the
packets if the bucket is full.
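The byte-counting variant for variable-length packets can be sketched as follows, assuming for simplicity that no single packet exceeds the per-tick byte budget; the packet sizes and rate are invented for illustration:

```python
# Sketch of a leaky bucket for variable-length packets: on each clock
# tick, send packets from the FIFO queue until a fixed byte budget is
# used up. Assumes no packet is larger than bytes_per_tick.
from collections import deque

def leaky_bucket(packets, bytes_per_tick):
    """Return the list of packet sizes sent on each tick."""
    queue = deque(packets)       # FIFO queue holding packet sizes in bytes
    sent_per_tick = []
    while queue:
        budget = bytes_per_tick
        sent = []
        while queue and queue[0] <= budget:
            size = queue.popleft()
            budget -= size
            sent.append(size)
        sent_per_tick.append(sent)
    return sent_per_tick

# A bursty arrival of five packets drained at a steady 300 bytes per tick
print(leaky_bucket([200, 100, 250, 50, 300], 300))
# [[200, 100], [250, 50], [300]]
```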