Routing&Switching Notes
NETWORK SWITCHES
CSMA/CD
Ethernet signals are transmitted to every host connected to the LAN using a
special set of rules to determine which station can access the network. The set
of rules that Ethernet uses is based on the IEEE carrier sense multiple
access/collision detect (CSMA/CD) technology. CSMA/CD is only used with half-
duplex communication typically found in hubs. Full-duplex switches do not use
CSMA/CD.
Carrier Sense
In the CSMA/CD access method, all network devices that have messages to send
must listen before transmitting.
If a device detects a signal from another device, it waits for a specified amount
of time before attempting to transmit.
When there is no traffic detected, a device transmits its message. While this
transmission is occurring, the device continues to listen for traffic or collisions on
the LAN.
A Layer 2 LAN switch performs switching and filtering based only on the OSI
data link layer (Layer 2) MAC address. A switch is completely transparent to
network protocols and user applications. A Layer 2 switch builds a MAC address
table that it uses to make forwarding decisions. Layer 2 switches depend on
routers to pass data between independent IP subnetworks.
Once a MAC address for a specific node on a specific port is recorded in the
address table, the switch then knows to send traffic destined for that specific
node out the port mapped to that node for subsequent transmissions.
When an incoming data frame is received by a switch and the destination MAC address
is not in the table, the switch forwards the frame out all ports, except for the port
on which it was received.
When the destination node responds, the switch records the node's MAC address
in the address table from the frame's source address field.
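The address table built this way can be inspected from privileged EXEC mode. A sketch of what the output looks like on a Catalyst switch (the MAC addresses and port numbers shown are placeholders):

```
S1# show mac address-table
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
   1    000a.f38e.74b3    DYNAMIC     Fa0/5
   1    00e0.b0be.1181    DYNAMIC     Fa0/11
```

Entries marked DYNAMIC were learned from frame source addresses and age out if the node stays silent.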
Half Duplex
Half-duplex communication allows data to flow in only one direction at a time, so collisions are possible and CSMA/CD is used.
Full Duplex
Full-duplex communication allows data to flow in both directions simultaneously. It requires a switch port that supports full
duplex or a direct connection between two nodes that each support full duplex.
A Cisco Catalyst switch supports three duplex settings: full, half, and auto.
When the auto-MDIX feature is enabled, the switch detects the required cable
type for copper Ethernet connections and configures the interfaces accordingly.
Therefore, you can use either a crossover or a straight-through cable for
connections to a copper 10/100/1000 port on the switch, regardless of the type
of device on the other end of the connection.
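Assuming a switch named S1 and interface FastEthernet0/1 (both hypothetical), duplex, speed, and auto-MDIX might be configured like this; note that auto-MDIX requires speed and duplex to be set to auto:

```
S1(config)# interface FastEthernet0/1
S1(config-if)# duplex auto
S1(config-if)# speed auto
S1(config-if)# mdix auto
```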
In the past, switches used one of the following forwarding methods for switching data
between network ports:
Store-and-forward switching
Cut-through switching
Store-and-forward switching
In store-and-forward switching, when the switch receives the frame, it stores the
data in buffers until the complete frame has been received. During the storage
process, the switch analyzes the frame for information about its destination. In
this process, the switch also performs an error check using the Cyclic Redundancy
Check (CRC) trailer portion of the Ethernet frame.
CRC uses a mathematical formula, based on the number of bits (1s) in the
frame, to determine whether the received frame has an error. After confirming
the integrity of the frame, the frame is forwarded out the appropriate port
toward its destination. When an error is detected in a frame, the switch discards
the frame. Discarding frames with errors reduces the amount of bandwidth
consumed by corrupt data. Store-and-forward switching is required for Quality of
Service (QoS) analysis on converged networks where frame classification for
traffic prioritization is necessary. For example, voice over IP data streams need
to have priority over web-browsing traffic.
Cut-through switching
In cut-through switching, the switch acts upon the data as soon as it is received,
even if the transmission is not complete. The switch buffers just enough of the
frame to read the destination MAC address so that it can determine to which
port to forward the data. The destination MAC address is located in the first 6
bytes of the frame following the preamble.
The switch looks up the destination MAC address in its switching table,
determines the outgoing interface port, and forwards the frame onto its
destination through the designated switch port. The switch does not perform any
error checking on the frame. Because the switch does not have to wait for the
entire frame to be completely buffered, and because the switch does not perform
any error checking, cut-through switching is faster than store-and-forward
switching. However, because the switch does not perform any error checking, it
forwards corrupt frames throughout the network. The corrupt frames consume
bandwidth while they are being forwarded. The destination NIC eventually
discards the corrupt frames.
When selecting a switch, it is important to understand the key features of the switch
options available. This means that it is necessary to decide on features such as whether
Power over Ethernet (PoE) is necessary, and the preferred "forwarding rate".
PoE allows a switch to deliver power to a device, such as IP phones and some
wireless access points, over the existing Ethernet cabling. This allows more
flexibility for installation.
FORWARDING RATES
The forwarding rate defines the processing capabilities of a switch by rating how much
data the switch can process per second. Switch product lines are classified by forwarding
rates. Entry-layer switches have lower forwarding rates than enterprise-layer switches.
Other considerations include whether the device is stackable or non-stackable as well as
the thickness of the switch (expressed in number of rack units), and port density, or
the number of ports available on a single switch. The port density of a device can vary
depending on whether the device is a fixed configuration device or a modular device.
TYPES OF SWITCHES
1. Fixed Configuration Switches
Fixed configuration switches are just as you might expect: fixed in their configuration.
What that means is that you cannot add features or options to the switch beyond
those that originally came with the switch.
2. Modular Switches
Modular switches offer more flexibility in their configuration. Modular switches typically
come with different sized chassis that allow for the installation of different numbers of
modular line cards. The line cards actually contain the ports.
As a security feature, Cisco IOS software separates the EXEC sessions into these access
levels:
User EXEC: Allows a person to access only a limited number of basic monitoring
commands. User EXEC mode is the default mode you enter after logging in to a Cisco
switch from the CLI. User EXEC mode is identified by the > prompt.
Privileged EXEC: Allows a person to access all device commands, such as those used for
configuration and management, and can be password-protected to allow only authorized
users to access the device. Privileged EXEC mode is identified by the # prompt.
To change from user EXEC mode to privileged EXEC mode, enter the enable command.
To change from privileged EXEC mode to user EXEC mode, enter the disable command.
On a real network, the switch prompts for the password. Enter the correct password.
By default, the password is not configured.
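Moving between the two EXEC modes looks like this (prompts shown for a switch named S1, a hypothetical hostname):

```
S1> enable
S1# disable
S1>
```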
Once you have entered privileged EXEC mode on the Cisco switch, you can access other
configuration modes. Cisco IOS software uses a hierarchy of commands in its command-
mode structure. Each command mode supports specific Cisco IOS commands related to a
type of operation on the device.
There are many configuration modes. For now, you will explore how to navigate two
common configuration modes: global configuration mode and interface configuration
mode.
To configure global switch parameters such as the switch hostname or the switch IP
address used for switch management purposes, use global configuration mode. To access
global configuration mode, enter the configure terminal command in privileged EXEC
mode. The prompt changes to (config)#.
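Navigating into and out of global configuration mode and interface configuration mode might look like the following (the hostname S1 and interface FastEthernet0/1 are assumptions):

```
S1# configure terminal
S1(config)# hostname S1
S1(config)# interface FastEthernet0/1
S1(config-if)# exit
S1(config)# exit
S1#
```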
1. Hostname
2. Banner
9. Configure SSH using an RSA 1024-bit key and a domain name, and configure port security
by allowing a maximum of 2 devices per port.
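A sketch of item 9, assuming the domain name example.com, the username admin, the password cisco123, and interface FastEthernet0/1 (all hypothetical values):

```
S1(config)# ip domain-name example.com
S1(config)# crypto key generate rsa general-keys modulus 1024
S1(config)# username admin secret cisco123
S1(config)# line vty 0 15
S1(config-line)# transport input ssh
S1(config-line)# login local
S1(config-line)# exit
S1(config)# interface FastEthernet0/1
S1(config-if)# switchport mode access
S1(config-if)# switchport port-security
S1(config-if)# switchport port-security maximum 2
```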
The figure above shows the switch LEDs and the Mode button for a Cisco Catalyst
2960 switch. The Mode button is used to toggle through port status, port duplex,
port speed, and PoE (if supported) status of the port LEDs. The following describes
the purpose of the LED indicators, and the meaning of their colors:
System LED - Shows whether the system is receiving power and is functioning
properly. If the LED is off, it means the system is not powered on. If the LED
is green, the system is operating normally. If the LED is amber, the system is
receiving power but is not functioning properly.
Redundant Power System (RPS) LED - Shows the RPS status. If the LED is
off, the RPS is off or not properly connected. If the LED is green, the RPS is
connected and ready to provide back-up power. If the LED is blinking green, the
RPS is connected but is unavailable because it is providing power to another
device. If the LED is amber, the RPS is in standby mode or in a fault condition.
If the LED is blinking amber, the internal power supply in the switch has failed,
and the RPS is providing power.
Port Status LED - Indicates that the port status mode is selected when the
LED is green. This is the default mode. When selected, the port LEDs will
display colors with different meanings. If the LED is off, there is no link, or the
port was administratively shut down. If the LED is green, a link is present. If
the LED is blinking green, there is activity and the port is sending or receiving
data. If the LED is alternating green-amber, there is a link fault. If the LED is
amber, the port is blocked to ensure a loop does not exist in the forwarding
domain and is not forwarding data (typically, ports will remain in this state for
the first 30 seconds after being activated). If the LED is blinking amber, the
port is blocked to prevent a possible loop in the forwarding domain.
Port Duplex LED - Indicates the port duplex mode is selected when the LED is
green. When selected, port LEDs that are off are in half-duplex mode. If the
port LED is green, the port is in full-duplex mode.
Port Speed LED - Indicates the port speed mode is selected. When selected, the
port LEDs will display colors with different meanings. If the LED is off, the port
is operating at 10 Mb/s. If the LED is green, the port is operating at 100
Mb/s. If the LED is blinking green, the port is operating at 1000 Mb/s.
Power over Ethernet (PoE) Mode LED - If PoE is supported, a PoE mode LED
will be present. If the LED is off, it indicates the PoE mode is not selected and
that none of the ports have been denied power or placed in a fault condition. If
the LED is blinking amber, the PoE mode is not selected but at least one of the
ports has been denied power, or has a PoE fault. If the LED is green, it
indicates the PoE mode is selected and the port LEDs will display colors with
different meanings. If the port LED is off, the PoE is off. If the port LED is
green, the PoE is on. If the port LED is alternating green-amber, PoE is denied
because providing power to the powered device will exceed the switch power
capacity. If the LED is blinking amber, PoE is off due to a fault. If the LED is
amber, PoE for the port has been disabled.
VLANS
VLAN stands for Virtual Local Area Network; a VLAN is a logical partition of a Layer 2
network.
The partitioning of the Layer 2 network takes place inside a Layer 2 device,
usually via a switch.
The hosts grouped within a VLAN are unaware of the VLAN’s existence.
Before VLANs
To appreciate why VLANs are being widely used today, consider a small community
college with student dorms and the faculty offices all in one building. The figure shows
the student computers in one LAN and the faculty computers in another LAN. This
works fine because each department is physically together, so it is easy to provide them
with their network resources.
A year later, the college has grown and now has three buildings. In the figure, the
original network is the same, but student and faculty computers are spread out across
three buildings. The student dorms remain on the fifth floor and the faculty offices
remain on the third floor. However, now the IT department wants to ensure that
student computers all share the same security features and bandwidth controls. How
can the network accommodate the shared needs of the geographically separated
departments?
The solution for the community college is to use a networking technology called a virtual
LAN (VLAN). A VLAN allows a network administrator to create groups of logically
networked devices that act as if they are on their own independent network, even if
they share a common infrastructure with other VLANs. When you configure a VLAN,
you can name it to describe the primary role of the users for that VLAN. As another
example, all of the student computers in a school can be configured in the "Student"
VLAN. Using VLANs, you can logically segment switched networks based on functions,
departments, or project teams.
These VLANs allow the network administrator to implement access and security policies
to particular groups of users. For example, the faculty, but not the students, can be
allowed access to e-learning management servers for developing online course materials.
VLAN Details
A VLAN is a logically separate IP subnetwork. VLANs allow multiple IP networks and
subnets to exist on the same switched network. For computers to communicate on the
same VLAN, each must have an IP address and a subnet mask that is consistent for
that VLAN. The switch has to be configured with the VLAN and each port in the
VLAN must be assigned to the VLAN. A switch port with a singular VLAN configured
on it is called an access port. Remember, just because two computers are physically
connected to the same switch does not mean that they can communicate. Devices on
two separate networks and subnets must communicate via a router (Layer 3).
Benefits of a VLAN
Security - Groups that have sensitive data are separated from the rest of the network,
decreasing the chances of confidential information breaches. Faculty computers are on
VLAN 10 and completely separated from student and guest data traffic.
Cost reduction - Cost savings result from less need for expensive network upgrades and
more efficient use of existing bandwidth and uplinks.
Higher performance - Dividing flat Layer 2 networks into multiple logical workgroups
(broadcast domains) reduces unnecessary traffic on the network and boosts performance.
Broadcast storm mitigation - Dividing a network into VLANs reduces the number of
devices that may participate in a broadcast storm.
Improved IT staff efficiency - VLANs make it easier to manage the network because
users with similar network requirements share the same VLAN.
VLAN ID Ranges
Normal range VLANs are used in small- and medium-sized business and enterprise
networks. They are identified by a VLAN ID between 1 and 1005.
IDs 1 and 1002 to 1005 are automatically created and cannot be removed. IDs 1002 to
1005 are reserved for Token Ring and FDDI.
Extended range VLANs are identified by a VLAN ID between 1006 and 4094.
A Cisco Catalyst 2960 switch supports 255 normal-range and extended-range VLANs.
VLAN TYPES
Within a network, there are a number of terms for VLANs. Some terms define
the type of network traffic they carry and others define a specific function a VLAN
performs. The following describes common VLAN terminology:
Data VLAN
A data VLAN is a VLAN that is configured to carry user-generated traffic. It is
sometimes referred to as a user VLAN.
Default VLAN
All switch ports become a member of the default VLAN after the initial boot up of
the switch. Having all the switch ports participate in the default VLAN makes them all
part of the same broadcast domain. This allows any device connected to any switch port
to communicate with other devices on other switch ports. The default VLAN for Cisco
switches is VLAN 1. VLAN 1 has all the features of any VLAN, except that you cannot
rename it and you cannot delete it.
NATIVE VLANs
A native VLAN is assigned to an 802.1Q trunk port. An 802.1Q trunk port supports
traffic coming from many VLANs (tagged traffic) as well as traffic that does not come
from a VLAN (untagged traffic). The 802.1Q trunk port places untagged traffic on the
native VLAN.
MANAGEMENT VLANS
A management VLAN is any VLAN you configure to access the management capabilities
of a switch. VLAN 1 would serve as the management VLAN if you did not proactively
define a unique VLAN to serve as the management VLAN. You assign the management
VLAN an IP address and subnet mask.
Voice VLANs
A separate voice VLAN is typically configured to carry voice over IP (VoIP) traffic so
that it can be prioritized over ordinary data traffic.
Note: By default there is a default VLAN, VLAN 1, which cannot be renamed or deleted.
All switch ports belong to VLAN 1 before any configuration.
CONFIGURING VLANS
S1#configure terminal
4. Return to privileged EXEC mode. You must end your configuration session for the
configuration to be saved in the vlan.dat file and for configuration to take effect.
S1(config-vlan)#end
After you have created a VLAN, assign one or more ports to the VLAN. When you
manually assign a switch port to a VLAN, it is known as a static access port. A static
access port can belong to only one VLAN at a time.
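A sketch of creating a VLAN and assigning a port to it (the VLAN ID 10, the name Student, and interface FastEthernet0/11 are assumptions):

```
S1# configure terminal
S1(config)# vlan 10
S1(config-vlan)# name Student
S1(config-vlan)# end
S1# configure terminal
S1(config)# interface FastEthernet0/11
S1(config-if)# switchport mode access
S1(config-if)# switchport access vlan 10
S1(config-if)# end
```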
Verifying Vlans
After you configure the VLAN, you can validate the VLAN configuration using Cisco
IOS show commands such as show vlan, show vlan brief, and show interfaces switchport.
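A typical verification session might look like this (output abbreviated; the VLAN name and port assignments are hypothetical):

```
S1# show vlan brief

VLAN Name                             Status    Ports
---- -------------------------------- --------- ------------------------
1    default                          active    Fa0/1, Fa0/2, Fa0/3
10   Student                          active    Fa0/11
```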
There are a number of ways to manage VLANs and VLAN port memberships. The figure
shows the syntax for the no switchport access vlan command.
CHANGING VLAN PORTS MEMBERSHIP
When removing a VLAN from a switch port, go into the specific interface whose VLAN
membership you want to change, and issue the command no switchport access vlan as shown
below.
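A sketch of that removal (interface name hypothetical); the port then falls back to the default VLAN 1:

```
S1(config)# interface FastEthernet0/11
S1(config-if)# no switchport access vlan
S1(config-if)# end
```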
DELETING VLANS
To remove or delete a VLAN, enter global configuration mode and type the
command no vlan vlan-id.
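For example, deleting a single VLAN versus resetting the entire VLAN database stored in the vlan.dat file (VLAN 10 is an assumption):

```
S1(config)# no vlan 10
S1(config)# end
S1# delete flash:vlan.dat
```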
Vlan Trunks
A VLAN trunk carries traffic for multiple VLANs but is not itself associated with any
single VLAN; neither are the trunk ports used to establish the trunk link.
VLANs help control the reach of broadcast frames and their impact in the
network.
Unicast and multicast frames are forwarded within the originating VLAN.
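An 802.1Q trunk might be configured as sketched below (the interface, the native VLAN 99, and the allowed VLAN list are assumptions):

```
S1(config)# interface GigabitEthernet0/1
S1(config-if)# switchport mode trunk
S1(config-if)# switchport trunk native vlan 99
S1(config-if)# switchport trunk allowed vlan 10,20,99
```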
ROUTING CONCEPTS
Why Routing?
Routers use static routes and dynamic routing protocols to learn about remote
networks and build their routing tables.
Routers use routing tables to determine the best path to send packets.
STATIC ROUTING
Routers learn about remote networks either dynamically using routing protocols or
manually using static routes. In many cases routers use a combination of both dynamic
routing protocols and static routes. This chapter focuses on static routing. Static
routes are very common and do not require the same amount of processing and
overhead as we will see with dynamic routing protocols.
The router is a special-purpose computer that plays a key role in the operation of any
data network. Routers are primarily responsible for interconnecting networks by:
Routers perform packet forwarding by learning about remote networks and maintaining
routing information. The router is the junction or intersection that connects multiple IP
networks. The router's primary forwarding decision is based on Layer 3 information, the
destination IP address.
The router's routing table is used to find the best match between the destination IP of
a packet and a network address in the routing table. The routing table will ultimately
determine the exit interface to forward the packet and the router will encapsulate that
packet in the appropriate data link frame for that outgoing interface.
Switch-to-router
Switch-to-PC
Hub-to-PC
Hub-to-server
Switch-to-switch
PC-to-PC
Switch-to-hub
Hub-to-hub
Router-to-router
Router-to-server
Static routes are not advertised over the network, resulting in better security.
Static routes use less bandwidth than dynamic routing protocols, and no CPU cycles
are used to calculate and communicate routes.
Does not scale well with growing networks; maintenance becomes cumbersome.
Providing ease of routing table maintenance in smaller networks that are not
expected to grow significantly.
Using a single default route to represent a path to any network that does not
have a more specific match with another route in the routing table. Default
routes are used to send traffic to any destination beyond the next upstream
router.
Stub network
A stub network is a network accessed by a single route; its router has only one
neighbor, making it an ideal candidate for a static or default route.
The show ip route command is used to display the routing table. Initially, the routing
table is empty if no interfaces have been configured.
As you can see in the routing table for R1, no interfaces have been configured with an
IP address and subnet mask.
Note: Static routes and dynamic routes will not be added to the routing table until
the appropriate local interfaces, also known as the exit interfaces, have been configured
on the router.
show interfaces command - The show interfaces command shows the status and gives a
detailed description for all interfaces on the router. As you can see, the output from
the command can be rather lengthy. To view the same information, but for a specific
interface, such as FastEthernet 0/0, use the show interfaces command with a
parameter that specifies the interface. For example:
Notice that the interface is administratively down and the line protocol is down.
Administratively down means that the interface is currently in the shutdown mode, or
turned off. Line protocol is down means, in this case, that the interface is not
receiving a carrier signal from a switch or the hub.
show ip interface brief command - The show ip interface brief command can be used to
see a portion of the interface information in a condensed format.
show running-config command - The show running-config command displays the current
configuration file that the router is using. Configuration commands are temporarily
stored in the running configuration file and implemented immediately by the router.
As shown, R1 does not yet have any routes. Let's add a route by configuring an
interface and explore exactly what happens when that interface is activated. By default,
all router interfaces are shutdown, or turned off. To enable this interface, use the no
shutdown command, which changes the interface from administratively down to up.
R1(config-if)#no shutdown
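The full interface configuration leading up to that no shutdown might look like this, assuming FastEthernet 0/0 attaches to the 172.16.3.0/24 network mentioned below and that R1 takes the first host address:

```
R1# configure terminal
R1(config)# interface FastEthernet0/0
R1(config-if)# ip address 172.16.3.1 255.255.255.0
R1(config-if)# no shutdown
```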
The C at the beginning of the route indicates that this is a directly connected network.
With very few exceptions, routing tables have routes for network addresses rather than
individual host addresses. The 172.16.3.0/24 route in the routing table means that
this route matches all packets with a destination address belonging to this network.
Having a single route represent an entire network of host IP addresses makes the
routing table smaller, with fewer routes, which results in faster routing table lookups.
The routing table could contain all 254 individual host IP addresses for the
172.16.3.0/24 network, but that is an inefficient way of storing addresses.
The process we use for the configuration of the serial interface 0/0/0 is similar to the
process we used to configure the FastEthernet 0/0 interface.
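A sketch of that serial configuration, assuming R1's address is 172.16.2.1/24 (its neighbor 172.16.2.2 appears later in these notes); the clock rate command applies only on the DCE end of the serial link:

```
R1(config)# interface Serial0/0/0
R1(config-if)# ip address 172.16.2.1 255.255.255.0
R1(config-if)# clock rate 64000
R1(config-if)# no shutdown
```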
A default route identifies the gateway IP address to which the router sends all
IP packets for which it does not have a learned or static route.
Floating static routes are static routes that are used to provide a backup path
to a primary static or dynamic route, in the event of a link failure.
The floating static route is only used when the primary route is not available. To
accomplish this, the floating static route is configured with a higher
administrative distance than the primary route.
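A default route and a floating static backup might be configured as follows (the next-hop addresses are assumptions); the trailing 5 sets an administrative distance higher than the primary route's AD of 1, so the second route stays out of the routing table until the first fails:

```
R1(config)# ip route 0.0.0.0 0.0.0.0 172.16.2.2
R1(config)# ip route 0.0.0.0 0.0.0.0 172.16.4.2 5
```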
Configuring IPV4 Static Routes
Ip route command
The next hop can be identified by an IP address, exit interface, or both. How the
destination is specified creates one of the three following route types:
Next-hop static route - Only the next-hop IP address is specified.
Directly connected static route - Only the router exit interface is specified.
Fully specified static route - The next-hop IP address and exit interface are
specified.
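The three forms side by side, for a hypothetical destination network 192.168.2.0/24 reachable via next hop 172.16.2.2 out Serial0/0/0:

```
R1(config)# ip route 192.168.2.0 255.255.255.0 172.16.2.2
R1(config)# ip route 192.168.2.0 255.255.255.0 Serial0/0/0
R1(config)# ip route 192.168.2.0 255.255.255.0 Serial0/0/0 172.16.2.2
```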
1. Looks for a match in the routing table and finds that it has to forward the
packets to the next-hop IPv4 address 172.16.2.2.
2. R1 must determine how to reach 172.16.2.2; therefore, it searches a second
time for a 172.16.2.2 match.
Along with ping and traceroute, useful commands to verify static routes include:
show ip route
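Related commands that narrow the output to static routes (the filtered form assumes an IOS release that supports the section output modifier):

```
R1# show ip route
R1# show ip route static
R1# show running-config | section ip route
```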
DYNAMIC ROUTING
This chapter introduces dynamic routing protocols, including how different routing
protocols are classified, what metrics they use to determine best path, and the benefits
of using a dynamic routing protocol.
Dynamic routing protocols are usually used in larger networks to ease the administrative
and operational overhead of using only static routes. Typically, a network uses a
combination of both a dynamic routing protocol and static routes.
Dynamic routing protocols have been used in networks since the early 1980s. One of
the earliest routing protocols was Routing Information Protocol (RIP). RIP has evolved
into a newer version, RIPv2. However, the newer version of RIP still does not scale to
larger network implementations. To address the needs of larger networks, two advanced
routing protocols were developed: Open Shortest Path First (OSPF) and Intermediate
System-to-Intermediate System (IS-IS).
Routing protocols are used to facilitate the exchange of routing information between
routers. Routing protocols allow routers to dynamically share information about remote
networks and automatically add this information to their own routing tables.
Routing protocols determine the best path to each network which is then added to the
routing table. One of the primary benefits to using a dynamic routing protocol is that
routers exchange routing information whenever there is a topology change. This exchange
allows routers to automatically learn about new networks and also to find alternate
paths when there is a link failure to a current network.
A routing protocol is a set of processes, algorithms, and messages that are used to
exchange routing information and populate the routing table with the routing protocol's
choice of best paths. The purpose of a routing protocol includes:
Data structures - Some routing protocols use tables and/or databases for their
operations. This information is kept in RAM.
Routing protocol messages - Routing protocols use various types of messages to discover
neighboring routers, exchange routing information, and other tasks to learn and maintain
accurate information about the network.
The router shares routing messages and routing information with other routers that are
using the same routing protocol.
When a router detects a topology change the routing protocol can advertise this change
to other routers.
Administrator has less work maintaining the configuration when adding or deleting
networks.
More scalable, growing the network usually does not present a problem.
Router resources are used (CPU cycles, memory and link bandwidth).
Distance vector means that routes are advertised as vectors of distance and direction.
Distance is defined in terms of a metric such as hop count and direction is simply the
next-hop router or exit interface. Distance vector protocols typically use the Bellman-
Ford algorithm for the best path route determination.
Some distance vector protocols periodically send complete routing tables to all connected
neighbors. In large networks, these routing updates can become enormous, causing
significant traffic on the links.
Distance vector protocols use routers as signposts along the path to the final
destination. The only information a router knows about a remote network is the
distance or metric to reach that network and which path or interface to use to get
there. Distance vector routing protocols do not have an actual map of the network
topology.
Classful routing protocols do not send subnet mask information in routing updates. The
first routing protocols such as RIP, were classful. This was at a time when network
addresses were allocated based on classes, class A, B, or C. A routing protocol did not
need to include the subnet mask in the routing update because the network mask could
be determined based on the first octet of the network address.
Classful routing protocols can still be used in some of today's networks, but because
they do not include the subnet mask they cannot be used in all situations. Classful
routing protocols cannot be used when a network is subnetted using more than one
subnet mask, in other words classful routing protocols do not support variable length
subnet masks (VLSM).
Classless routing protocols include the subnet mask with the network address in routing
updates. Today's networks are no longer allocated based on classes and the subnet mask
cannot be determined by the value of the first octet. Classless routing protocols are
required in most networks today because of their support for VLSM.
In the figure, notice that the classless version of the network is using both /30 and
/27 subnet masks in the same topology. Also notice that this topology is using a
discontiguous design.
What is Convergence?
Convergence is when all routers' routing tables are at a state of consistency. The
network has converged when all routers have complete and accurate information about
the network. Convergence time is the time it takes routers to share information,
calculate best paths, and update their routing tables. A network is not completely
operable until the network has converged; therefore, most networks require short
convergence times.
Routing protocols can be rated based on the speed to convergence; the faster the
convergence, the better the routing protocol. Generally, RIP and IGRP are slow to
converge, whereas EIGRP and OSPF are faster to converge.
There are cases when a routing protocol learns of more than one route to the same
destination. To select the best path, the routing protocol must be able to evaluate and
differentiate between the available paths. For this purpose a metric is used. A metric is
a value used by routing protocols to assign costs to reach remote networks. The metric
is used to determine which path is most preferable when there are multiple paths to
the same remote network.
Each routing protocol uses its own metric. For example, RIP uses hop count, EIGRP
uses a combination of bandwidth and delay, and Cisco's implementation of OSPF uses
bandwidth.
Load Balancing
We have discussed that individual routing protocols use metrics to determine the best
route to reach remote networks. But what happens when two or more routes to the
same destination have identical metric values? How will the router decide which path to
use for packet forwarding? In this case, the router does not choose only one route.
Instead, the router "load balances" between these equal cost paths. The packets are
forwarded using all equal-cost paths.
Administrative distance (AD) defines the preference of a routing source. Each routing
source - including specific routing protocols, static routes, and even directly connected
networks - is prioritized in order of most- to least-preferable using an administrative
distance value. Cisco routers use the AD feature to select the best path when it learns
about the same destination network from two or more different routing sources.
Administrative distance is an integer value from 0 to 255. The lower the value the
more preferred the route source. An administrative distance of 0 is the most
preferred. Only a directly connected network has an administrative distance of 0, which
cannot be changed.
static routes are entered by an administrator who wants to manually configure the best
path to the destination. For that reason, static routes have a default AD value of 1.
This means that after directly connected networks, which have a default AD value of 0,
static routes are the most preferred route source.
Directly connected networks appear in the routing table as soon as the IP address on
the interface is configured and the interface is enabled and operational. The AD value of
directly connected networks is 0, meaning that this is the most preferred routing
source. There is no better route for a router than having one of its interfaces directly
connected to that network. For that reason, the administrative distance of a directly
connected network cannot be changed and no other route source can have an
administrative distance of 0.
CHARACTERISTICS OF RIP
Distance Vector Routing Protocols," RIP has the following key characteristics:
RIP is a distance vector routing protocol.
RIP uses hop count as its only metric for path selection.
Advertised routes with hop counts greater than 15 are unreachable.
Messages are broadcast every 30 seconds.
RIP uses the Bellman-Ford algorithm as its routing algorithm. Both RIP1 and RIP2 use
hop count as the metric, and the maximum hop count is 15. Both RIP1 and RIP2 have an
administrative distance of 120.
RIP1 VS RIP2
Configuring RIP
Use the privileged EXEC command show ip protocols to view the default RIP settings.
Enabling RIP2
The output of show ip protocols now states that automatic network summarization is not
in effect.
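A minimal RIPv2 configuration consistent with the notes above, assuming the classful network 192.168.1.0 is attached (the address is illustrative):

```
R1(config)# router rip
R1(config-router)# version 2          ! enable RIPv2
R1(config-router)# network 192.168.1.0
R1(config-router)# no auto-summary    ! disable automatic network summarization
```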
Sending out unneeded updates on a LAN interface impacts the network in three ways:
Wasted Bandwidth
Wasted Resources
Security Risk
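The usual remedy for all three is the passive-interface router subcommand, which suppresses updates out an interface while still advertising its network. A sketch, assuming the LAN interface is GigabitEthernet0/0:

```
R1(config)# router rip
R1(config-router)# passive-interface g0/0   ! stop sending RIP updates out the LAN interface
```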
Open Shortest Path First (OSPF ) is a link-state routing protocol that was developed
as a replacement for the distance vector routing protocol RIP. RIP was an acceptable
routing protocol in the early days of networking and the Internet, but its reliance on
hop count as the only measure for choosing the best route quickly became unacceptable
in larger networks that needed a more robust routing solution. OSPF is a classless
routing protocol that uses the concept of areas for scalability. RFC 2328 defines the
OSPF metric as an arbitrary value called cost.
OSPF's major advantages over RIP are its fast convergence and its scalability to much
larger network implementations.
There are five types of OSPF packets. Each packet serves a specific purpose in
the OSPF routing process:
1. Hello - Hello packets are used to establish and maintain adjacency with other OSPF
routers. The hello protocol is discussed in detail in the next topic.
2. DBD - The Database Description (DBD) packet contains an abbreviated list of the
sending router's link-state database and is used by receiving routers to check against the
local link-state database.
3. LSR - Receiving routers can then request more information about any entry in the
DBD by sending a Link-State Request (LSR).
4. LSU - Link-State Update (LSU) packets are used to reply to LSRs as well as to
announce new information. LSUs contain seven different types of Link-State
Advertisements (LSAs). LSUs and LSAs are briefly discussed in a later topic.
5. LSAck - When an LSU is received, the router sends a Link-State Acknowledgment
(LSAck) to confirm receipt of the LSU.
Neighbor Establishment
Before an OSPF router can flood its link-states to other routers, it must first
determine if there are any other OSPF neighbors on any of its links. In the figure, the
OSPF routers are sending Hello packets on all OSPF-enabled interfaces to determine if
there are any neighbors on those links. The information in the OSPF Hello includes the
OSPF Router ID of the router sending the Hello packet.
Receiving an OSPF Hello packet on an interface confirms for a router that there is
another OSPF router on this link. OSPF then establishes an adjacency with the neighbor.
OSPF Hello and Dead Intervals
Before two routers can form an OSPF neighbor adjacency, they must agree on three
values: Hello interval, Dead interval, and network type. The OSPF Hello interval
indicates how often an OSPF router transmits its Hello packets. By default, OSPF Hello
packets are sent every 10 seconds on multiaccess and point-to-point segments and every
30 seconds on non-broadcast multiaccess (NBMA) segments (Frame Relay, X.25,
ATM).
In most cases, OSPF Hello packets are sent as multicast to an address reserved for
ALLSPFRouters at 224.0.0.5. Using a multicast address allows a device to ignore the
packet if its interface is not enabled to accept OSPF packets. This saves CPU processing
time on non-OSPF devices.
The Dead interval is the period, expressed in seconds, that the router will wait to
receive a Hello packet before declaring the neighbor "down." Cisco uses a default of four
times the Hello interval. For multiaccess and point-to-point segments, this period is 40
seconds. For NBMA networks, the Dead interval is 120 seconds.
If the Dead interval expires before the routers receive a Hello packet, OSPF will remove
that neighbor from its link-state database. The router floods the link-state information
about the "down" neighbor out all OSPF enabled interfaces.
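The Hello and Dead intervals can be tuned per interface; both neighbors must agree on the values. A sketch that simply restates the multiaccess defaults explicitly:

```
R1(config)# interface serial 0/0/0
R1(config-if)# ip ospf hello-interval 10   ! seconds between Hello packets
R1(config-if)# ip ospf dead-interval 40    ! four times the Hello interval
```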
In the figure above, R1, R2, and R3 are connected through point-to-point links.
Therefore, no DR/BDR election occurs.
OSPF ALGORITHM
Each OSPF router maintains a link-state database containing the LSAs received from all
other routers. Once a router has received all of the LSAs and built its local link-state
database, OSPF uses Dijkstra's shortest path first (SPF) algorithm to create an SPF
tree. The SPF tree is then used to populate the IP routing table with the best paths
to each network.
Administrative Distance
OSPF has a default administrative distance of 110, making it preferred over RIP (120)
but less preferred than internal EIGRP (90) and static routes (1).
OSPF is enabled with the router ospf process-id global configuration command. The
process-id is a number between 1 and 65535 and is chosen by the network
administrator. The process-id is locally significant, which means that it does not have to
match other OSPF routers in order to establish adjacencies with those neighbors.
In our topology, we will enable OSPF on all three routers using the same process ID of
1. We are using the same process ID simply for consistency.
R1(config)#router ospf 1
R1(config-router)#
Any interfaces on a router that match the network address in the network command
will be enabled to send and receive OSPF packets.
The network address along with the wildcard mask is used to specify the interface or
range of interfaces that will be enabled for OSPF using this network command.
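As a sketch, enabling OSPF on any interface in the 172.16.1.16/28 subnet (the addressing is an assumption, not from the notes):

```
R1(config)# router ospf 1
R1(config-router)# network 172.16.1.16 0.0.0.15 area 0   ! wildcard mask 0.0.0.15 = /28
```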
Routers derive the router ID using the following criteria, in order:
1. If configured, the router uses the IP address set with the OSPF router-id command.
2. If the router-id is not configured, the router chooses the highest IP address of any of
its loopback interfaces.
3. If no loopback interfaces are configured, the router chooses the highest active IP
address of any of its physical interfaces.
If an OSPF router is not configured with an OSPF router-id command and there are no
loopback interfaces configured, the OSPF router ID will be the highest active IP address
on any of its interfaces. The interface does not need to be enabled for OSPF, meaning
that it does not need to be included in one of the OSPF network commands. However,
the interface must be active - it must be in the up state.
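The router ID can be set explicitly to avoid depending on interface addresses. A sketch with an illustrative ID; on an already-running process, clear ip ospf process may be needed before the new ID takes effect:

```
R1(config)# router ospf 1
R1(config-router)# router-id 1.1.1.1   ! explicit 32-bit router ID (illustrative value)
```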
Loopback Address
If the OSPF router-id command is not used and loopback interfaces are configured,
OSPF will choose the highest IP address of any of its loopback interfaces. A loopback address
is a virtual interface and is automatically in the up state when configured. You already
know the commands to configure a loopback interface:
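The snippet this colon introduces appears to be missing from the notes; a plausible reconstruction with an illustrative address:

```
R1(config)# interface loopback 0
R1(config-if)# ip address 10.1.1.1 255.255.255.255
```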
OSPF Metric
The OSPF metric is called cost. "A cost is associated with the output side of each
router interface. This cost is configurable by the system administrator. The lower the
cost, the more likely the interface is to be used to forward data traffic."
The Cisco IOS uses the cumulative bandwidths of the outgoing interfaces from the
router to the destination network as the cost value. At each router, the cost for an
interface is calculated as 10 to the 8th power divided by bandwidth in bps. This is
known as the reference bandwidth. Dividing 10 to the 8th power by the interface
bandwidth is done so that interfaces with the higher bandwidth values will have a lower
calculated cost.
The reference bandwidth defaults to 10 to the 8th power, 100,000,000 bps or 100
Mbps. This results in interfaces with a bandwidth of 100 Mbps and higher having the
same OSPF cost of 1. The reference bandwidth can be modified to accommodate
networks with links faster than 100,000,000 bps (100 Mbps) using the OSPF
command auto-cost reference-bandwidth. When this command is necessary, it is
recommended that it is used on all routers so the OSPF routing metric remains
consistent.
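For example, raising the reference bandwidth so that 10 Gb/s links receive a cost of 1 (the value is entered in Mbps, and the command should be repeated on every router):

```
R1(config)# router ospf 1
R1(config-router)# auto-cost reference-bandwidth 10000   ! units are Mbps (10 Gb/s)
```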
The cost of an OSPF route is the accumulated value from one router to the
destination network.
For example, in the figure, the routing table on R1 shows a cost of 65 to reach the
10.10.10.0/24 network on R2. Because 10.10.10.0/24 is attached to a FastEthernet
interface, R2 assigns the value 1 as the cost for 10.10.10.0/24. R1 then adds the
additional cost value of 64 to send data across the default T1 link between R1 and R2.
When the serial interface is not actually operating at the default T1 speed, the
interface requires manual modification. Both sides of the link should be configured to
have the same value. Either the bandwidth interface command or the ip ospf cost
interface command achieves this purpose, providing an accurate value for use by OSPF
in determining the best route.
The bandwidth command is used to modify the bandwidth value used by the IOS in
calculating the OSPF cost metric. The interface command syntax is the same syntax
that you learned in Chapter 9, "EIGRP":
Router(config-if)#bandwidth bandwidth-kbps
The figure shows the bandwidth commands used to modify the costs of all the serial
interfaces in the topology. For R1, the show ip ospf interface command shows that the
cost of the Serial 0/0/0 link is now 1562, the result of the Cisco OSPF cost
calculation 100,000,000/64,000.
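For instance, setting the serial link to 64 kbps yields the cost of 1562 mentioned above (100,000,000 / 64,000 = 1562):

```
R1(config)# interface serial 0/0/0
R1(config-if)# bandwidth 64   ! value in kbps; does not change the actual line rate
```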
An alternative method to using the bandwidth command is to use the ip ospf cost
command, which allows you to directly specify the cost of an interface. For example, on
R1 we could configure Serial 0/0/0 with the following command:
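The command this sentence refers to is missing from the notes; a plausible reconstruction using the cost value computed earlier:

```
R1(config)# interface serial 0/0/0
R1(config-if)# ip ospf cost 1562   ! set the OSPF cost directly on the interface
```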
INTER-VLAN ROUTING
In this chapter, you will learn about inter-VLAN routing and how it is used to permit
devices on separate VLANs to communicate. Now that you know how to configure
VLANs on a network switch, the next step is to allow devices connected to the various
VLANs to communicate with each other. In a previous chapter, you learned that each
VLAN is a unique broadcast domain, so computers on separate VLANs are, by default,
not able to communicate. There is a way to permit these end stations to communicate;
it is called inter-VLAN routing.
There are two common methods of inter-VLAN routing: the legacy system and the
router-on-a-stick method.
1. Legacy system
In the past:
Packets would arrive on the router through one interface, be routed, and leave
through another.
Because the router interfaces were connected to VLANs and had IP addresses
from that specific VLAN, routing between VLANs was achieved.
Large networks with a large number of VLANs required many router interfaces.
Network devices use the router as a gateway to access the devices connected to
the other VLANs.
Preparation
The physical interface of the router must be connected to a trunk link on the
adjacent switch.
Each subinterface is assigned an IP address and subnet mask specific to its subnet or
VLAN and is also configured to tag frames for that VLAN. A subinterface must be
created and configured in this way for every VLAN.
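As a sketch, a router-on-a-stick subinterface for VLAN 10 (the interface and addressing are illustrative):

```
R1(config)# interface g0/0.10
R1(config-subif)# encapsulation dot1q 10                 ! tag frames for VLAN 10
R1(config-subif)# ip address 192.168.10.1 255.255.255.0  ! gateway for the VLAN 10 subnet
```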
The router interface is configured to operate as a trunk link and is connected to a
switch port configured in trunk mode. The router performs the inter-VLAN routing by
accepting VLAN tagged traffic on the trunk interface coming from the adjacent switch
and internally routing between the VLANs using subinterfaces. The router then forwards
the routed traffic, VLAN-tagged for the destination VLAN, out the same physical
interface.
Subinterfaces are multiple virtual interfaces associated with one physical interface.
They are configured in software on the router, and each is independently
configured with an IP address and VLAN assignment to operate on a specific VLAN.
Subinterfaces are configured for different subnets corresponding to their VLAN
assignment to facilitate logical routing before the data frames are VLAN tagged and
sent back out the physical interface.
Both physical interfaces and subinterfaces are used to perform inter-VLAN routing.
There are advantages and disadvantages to each method.
Port Limits
Physical interfaces are configured to have one interface per VLAN on the network. On
networks with many VLANs, using a single router to perform inter-VLAN routing is not
possible. Routers have physical limitations that prevent them from containing large
numbers of physical interfaces. Instead, you could use multiple routers to perform
inter-VLAN routing for all VLANs if avoiding the use of subinterfaces is a priority.
Subinterfaces allow a router to scale to accommodate more VLANs than the physical
interfaces permit. Inter-VLAN routing in large environments with many VLANs can
usually be better accommodated by using a single physical interface with many
subinterfaces.
Performance
Because there is no contention for bandwidth on separate physical interfaces, physical
interfaces have better performance when compared to using subinterfaces. Traffic from
each connected VLAN has access to the full bandwidth of the physical router interface
connected to that VLAN for inter-VLAN routing.
When subinterfaces are used for inter-VLAN routing, the traffic being routed competes
for bandwidth on the single physical interface. On a busy network, this could cause a
bottleneck for communication. To balance the traffic load on a physical interface,
subinterfaces are configured on multiple physical interfaces resulting in less contention
between VLAN traffic.
Connecting physical interfaces for inter-VLAN routing requires that the switch ports be
configured as access ports. Subinterfaces require the switch port to be configured as a
trunk port so that it can accept VLAN tagged traffic on the trunk link. Using
subinterfaces, many VLANs can be routed over a single trunk link rather than a single
physical interface for each VLAN.
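On the adjacent switch, the port facing the router is configured as a trunk. A sketch with an assumed port:

```
S1(config)# interface g0/1
S1(config-if)# switchport mode trunk   ! carry tagged traffic for all VLANs to the router
```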
Cost
Router-on-a-stick is also more cost-effective: many VLANs can share a single physical
router interface and a single switch port, whereas routing with physical interfaces
requires one router interface and one switch port per VLAN.
Complexity
On the other hand, using subinterfaces with a trunk port results in a more complex
software configuration, which can be difficult to troubleshoot. In the router-on-a-stick
model, only a single interface is used to accommodate all the different VLANs.