
ROUTING & SWITCHING NOTES

NETWORK SWITCHES

Switches provide segmentation of a LAN, dividing the LAN into independent collision domains. Each port on a switch represents a separate collision domain and provides the full media bandwidth to the node or nodes connected on that port. While switches do not by default prevent broadcasts from propagating to connected devices, they do isolate unicast Ethernet communications so that they are only "heard" by the source and destination devices. So if there are a large number of ARP requests, each ARP reply will only be between two devices.

To help mitigate the various types of broadcast attacks to which Ethernet networks are prone, network engineers implement Cisco switch security technologies such as specialized access lists and port security.

Recall that the logical topology of an Ethernet network is a multi-access bus in which devices all share access to the same medium. This logical topology determines how hosts on the network view and process frames sent and received on the network. However, the physical topology of most Ethernet networks today is that of a star or extended star. This means that on most Ethernet networks, end devices are typically connected, on a point-to-point basis, to a Layer 2 LAN switch.

CSMA/CD

Ethernet signals are transmitted to every host connected to the LAN using a
special set of rules to determine which station can access the network. The set
of rules that Ethernet uses is based on the IEEE carrier sense multiple
access/collision detect (CSMA/CD) technology. CSMA/CD is only used with half-
duplex communication typically found in hubs. Full-duplex switches do not use
CSMA/CD.

Carrier Sense
In the CSMA/CD access method, all network devices that have messages to send
must listen before transmitting.
If a device detects a signal from another device, it waits for a specified amount
of time before attempting to transmit.

When there is no traffic detected, a device transmits its message. While this
transmission is occurring, the device continues to listen for traffic or collisions on
the LAN.

A Layer 2 LAN switch performs switching and filtering based only on the OSI
data link layer (Layer 2) MAC address. A switch is completely transparent to
network protocols and user applications. A Layer 2 switch builds a MAC address
table that it uses to make forwarding decisions. Layer 2 switches depend on
routers to pass data between independent IP subnetworks.

Switches use MAC addresses to direct network communications through their switch fabric to the appropriate port toward the destination node. The switch fabric is the integrated circuits and the accompanying machine programming that allows the data paths through the switch to be controlled. For a switch to know which port to use to transmit a unicast frame, it must first learn which nodes exist on each of its ports.
A switch determines how to handle incoming data frames by using its MAC
address table. A switch builds its MAC address table by recording the MAC
addresses of the nodes connected to each of its ports.

Once a MAC address for a specific node on a specific port is recorded in the
address table, the switch then knows to send traffic destined for that specific
node out the port mapped to that node for subsequent transmissions.

When an incoming data frame is received by a switch and the destination MAC address
is not in the table, the switch forwards the frame out all ports, except for the port
on which it was received.

When the destination node responds, the switch records the node's MAC address
in the address table from the frame's source address field.
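The learning process described above can be observed on a Cisco Catalyst switch. The following is a sketch; the MAC address and interface number are example values:

S1# show mac address-table dynamic
S1# configure terminal
S1(config)# mac address-table static 0050.56be.0001 vlan 1 interface FastEthernet 0/5

The first command lists the addresses the switch has learned dynamically; the second adds a static entry that maps a specific MAC address to a port, bypassing the learning process for that node.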

In networks with multiple interconnected switches, the MAC address tables record multiple MAC addresses for the ports connecting the switches, reflecting the nodes beyond them. Typically, switch ports used to interconnect two switches have multiple MAC addresses recorded in the MAC address table.

Though transparent to network protocols and user applications, switches can operate in different modes that can have both positive and negative effects when forwarding Ethernet frames on a network. One of the most basic settings of a switch is the duplex setting of each individual port connected to each host device. A port on a switch must be configured to match the duplex settings of the media type. There are two types of duplex settings used for communications on an Ethernet network: half duplex and full duplex.

Half Duplex

Half-duplex communication relies on unidirectional data flow where sending and receiving data are not performed at the same time. This is similar to how walkie-talkies or two-way radios function in that only one person can talk at any one time. If someone talks while someone else is already speaking, a collision occurs. As a result, half-duplex communication implements CSMA/CD to help reduce the potential for collisions and detect them when they do happen. Half-duplex communications have performance issues due to the constant waiting, because data can only flow in one direction at a time. Half-duplex connections are typically seen in older hardware, such as hubs.

Because of these limitations, full-duplex communication has replaced half duplex in more current hardware.

Full Duplex

In full-duplex communication, data flow is bidirectional, so data can be sent and received at the same time. The bidirectional support enhances performance by reducing the wait time between transmissions.
Most Ethernet, Fast Ethernet, and Gigabit Ethernet NICs sold today offer full-
duplex capability.
In full-duplex mode, the collision detect circuit is disabled. Frames sent by the
two connected end nodes cannot collide because the end nodes use two separate
circuits in the network cable. Each full-duplex connection uses only one port.
Full-duplex connections require a switch that supports full duplex or a direct connection between two nodes that each support full duplex.
A Cisco Catalyst switch supports three duplex settings:
A Cisco Catalyst switch supports three duplex settings:

The full option sets full-duplex mode.

The half option sets half-duplex mode.

The auto option sets autonegotiation of duplex mode. With autonegotiation enabled, the two ports communicate to decide the best mode of operation.
In addition to having the correct duplex setting, it is also necessary to have the correct cable type defined for each port. Connections between specific devices, such as switch-to-switch, switch-to-router, and switch-to-host, once required the use of specific cable types (crossover or straight-through). Instead, most switch devices now support the mdix auto interface configuration command in the CLI to enable the automatic medium-dependent interface crossover (auto-MDIX) feature.

When the auto-MDIX feature is enabled, the switch detects the required cable
type for copper Ethernet connections and configures the interfaces accordingly.
Therefore, you can use either a crossover or a straight-through cable for
connections to a copper 10/100/1000 port on the switch, regardless of the type
of device on the other end of the connection.
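For example, duplex, speed, and auto-MDIX can be set on a single port as follows; the interface number is an example value, and note that auto-MDIX requires speed and duplex to be set to auto:

S1# configure terminal
S1(config)# interface FastEthernet 0/1
S1(config-if)# duplex auto
S1(config-if)# speed auto
S1(config-if)# mdix auto
S1(config-if)# end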
In the past, switches used one of the following forwarding methods for switching data
between network ports:

Store-and-forward switching

Cut-through switching

Store-and-forward switching

In store-and-forward switching, when the switch receives the frame, it stores the
data in buffers until the complete frame has been received. During the storage
process, the switch analyzes the frame for information about its destination. In
this process, the switch also performs an error check using the Cyclic Redundancy
Check (CRC) trailer portion of the Ethernet frame.
CRC uses a mathematical formula, based on the number of bits (1s) in the
frame, to determine whether the received frame has an error. After confirming
the integrity of the frame, the frame is forwarded out the appropriate port
toward its destination. When an error is detected in a frame, the switch discards
the frame. Discarding frames with errors reduces the amount of bandwidth
consumed by corrupt data. Store-and-forward switching is required for Quality of
Service (QoS) analysis on converged networks where frame classification for
traffic prioritization is necessary. For example, voice over IP data streams need to have priority over web-browsing traffic.

Cut-through switching
In cut-through switching, the switch acts upon the data as soon as it is received,
even if the transmission is not complete. The switch buffers just enough of the
frame to read the destination MAC address so that it can determine to which
port to forward the data. The destination MAC address is located in the first 6
bytes of the frame following the preamble.
The switch looks up the destination MAC address in its switching table,
determines the outgoing interface port, and forwards the frame onto its
destination through the designated switch port. The switch does not perform any
error checking on the frame. Because the switch does not have to wait for the
entire frame to be completely buffered, and because the switch does not perform
any error checking, cut-through switching is faster than store-and-forward
switching. However, because the switch does not perform any error checking, it
forwards corrupt frames throughout the network. The corrupt frames consume
bandwidth while they are being forwarded. The destination NIC eventually
discards the corrupt frames.

There are two variants of cut-through switching:

1. Fast-forward switching: Fast-forward switching offers the lowest level of latency.


Fast-forward switching immediately forwards a packet after reading the
destination address. Because fast-forward switching starts forwarding before the
entire packet has been received, there may be times when packets are relayed
with errors. This occurs infrequently, and the destination network adapter
discards the faulty packet upon receipt.
2. Fragment-free switching: In fragment-free switching, the switch stores the first 64 bytes of the frame before forwarding. The reason fragment-free switching stores only the first 64 bytes of the frame is that most network errors and collisions occur during the first 64 bytes. Fragment-free switching tries to enhance fast-forward switching by performing a small error check on the first 64 bytes of the frame to ensure that a collision has not occurred before forwarding the frame.

Switches also perform buffering, that is, temporarily storing frames when they are received or before they are forwarded.

When selecting a switch, it is important to understand the key features of the switch
options available. This means that it is necessary to decide on features such as whether
Power over Ethernet (PoE) is necessary, and the preferred "forwarding rate".

PoE allows a switch to deliver power to a device, such as IP phones and some
wireless access points, over the existing Ethernet cabling. This allows more
flexibility for installation.
FORWARDING RATES

The forwarding rate defines the processing capabilities of a switch by rating how much
data the switch can process per second. Switch product lines are classified by forwarding
rates. Entry-layer switches have lower forwarding rates than enterprise-layer switches.
Other considerations include whether the device is stackable or non-stackable as well as
the thickness of the switch (expressed in number of rack units), and port density, or
the number of ports available on a single switch. The port density of a device can vary
depending on whether the device is a fixed configuration device or a modular device.

TYPES OF SWITCHES

1. Fixed Configuration Switches

Fixed configuration switches are just as you might expect, fixed in their configuration.
What that means is that you cannot add features or options to the switch beyond
those that originally came with the switch.
2. Modular Switches

Modular switches offer more flexibility in their configuration. Modular switches typically
come with different sized chassis that allow for the installation of different numbers of
modular line cards. The line cards actually contain the ports.

Navigating Command-Line Interface Modes

As a security feature, Cisco IOS software separates EXEC sessions into these access levels:

User EXEC: Allows a person to access only a limited number of basic monitoring
commands. User EXEC mode is the default mode you enter after logging in to a Cisco
switch from the CLI. User EXEC mode is identified by the > prompt.

Privileged EXEC: Allows a person to access all device commands, such as those used for
configuration and management, and can be password-protected to allow only authorized
users to access the device. Privileged EXEC mode is identified by the # prompt

To change from user EXEC mode to privileged EXEC mode, enter the enable command.
To change from privileged EXEC mode to user EXEC mode, enter the disable command.
On a real network, the switch prompts for the password. Enter the correct password.
By default, the password is not configured.
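For example, on a switch named S1, moving between the two EXEC modes looks like this (the prompt changes from > to # and back):

S1> enable
Password:
S1#
S1# disable
S1>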

Once you have entered privileged EXEC mode on the Cisco switch, you can access other
configuration modes. Cisco IOS software uses a hierarchy of commands in its command-
mode structure. Each command mode supports specific Cisco IOS commands related to a
type of operation on the device.
There are many configuration modes. For now, you will explore how to navigate two
common configuration modes: global configuration mode and interface configuration
mode.

To configure global switch parameters such as the switch hostname or the switch IP
address used for switch management purposes, use global configuration mode. To access
global configuration mode, enter the configure terminal command in privileged EXEC
mode. The prompt changes to (config)#.

Interface Configuration Mode

Configuring interface-specific parameters is a common task. To access interface configuration mode from global configuration mode, enter the interface <interface-name> command. The prompt changes to S1(config-if)#. To exit interface configuration mode, use the exit command. The prompt switches back to (config)#, letting you know that you are in global configuration mode. To exit global configuration mode, enter the exit command again. The prompt switches to #, signifying privileged EXEC mode.
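Navigating into and out of interface configuration mode, with the interface number as an example value:

S1# configure terminal
S1(config)# interface FastEthernet 0/1
S1(config-if)# exit
S1(config)# exit
S1#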

BASIC SWITCH CONFIGURATIONS

1. Hostname
2. Banner

3. Configure the console, aux, vty ports

4. Configure privileged EXEC mode with a secret password.

5. Encrypt all passwords.

6. Configure switch virtual interfaces.

7. Configure default gateway

8. Shutdown unused ports.

9. Configure SSH using an RSA 1024-bit key and a domain name, and configure port security by allowing a maximum of 2 devices per port.
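The checklist above might be implemented as follows. This is a sketch; all names, passwords, addresses, and interface numbers are example values:

S1# configure terminal
S1(config)# hostname S1
S1(config)# banner motd #Authorized access only#
S1(config)# line console 0
S1(config-line)# password cisco
S1(config-line)# login
S1(config-line)# exit
S1(config)# line vty 0 15
S1(config-line)# password cisco
S1(config-line)# login
S1(config-line)# exit
S1(config)# enable secret class
S1(config)# service password-encryption
S1(config)# interface vlan 1
S1(config-if)# ip address 192.168.1.2 255.255.255.0
S1(config-if)# no shutdown
S1(config-if)# exit
S1(config)# ip default-gateway 192.168.1.1
S1(config)# interface range FastEthernet 0/11 - 24
S1(config-if-range)# shutdown
S1(config-if-range)# exit
S1(config)# ip domain-name example.com
S1(config)# crypto key generate rsa
How many bits in the modulus [512]: 1024
S1(config)# line vty 0 15
S1(config-line)# transport input ssh
S1(config-line)# exit
S1(config)# interface FastEthernet 0/5
S1(config-if)# switchport mode access
S1(config-if)# switchport port-security
S1(config-if)# switchport port-security maximum 2
S1(config-if)# end

Note that the hostname and domain name must be configured before the RSA key can be generated, because the key is named after them.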

SWITCH LED LIGHTS


Cisco Catalyst switches have several status LED indicator lights. You can use the switch
LEDs to quickly monitor switch activity and its performance. Switches of different
models and feature sets will have different LEDs and their placement on the front
panel of the switch may also vary.

The figure above shows the switch LEDs and the Mode button for a Cisco Catalyst
2960 switch. The Mode button is used to toggle through port status, port duplex,
port speed, and PoE (if supported) status of the port LEDs. The following describes
the purpose of the LED indicators, and the meaning of their colors:

System LED - Shows whether the system is receiving power and is functioning
properly. If the LED is off, it means the system is not powered on. If the LED
is green, the system is operating normally. If the LED is amber, the system is
receiving power but is not functioning properly.

Redundant Power System (RPS) LED - Shows the RPS status. If the LED is
off, the RPS is off or not properly connected. If the LED is green, the RPS is
connected and ready to provide back-up power. If the LED is blinking green, the
RPS is connected but is unavailable because it is providing power to another
device. If the LED is amber, the RPS is in standby mode or in a fault condition.
If the LED is blinking amber, the internal power supply in the switch has failed,
and the RPS is providing power.

Port Status LED - Indicates that the port status mode is selected when the
LED is green. This is the default mode. When selected, the port LEDs will
display colors with different meanings. If the LED is off, there is no link, or the
port was administratively shut down. If the LED is green, a link is present. If
the LED is blinking green, there is activity and the port is sending or receiving
data. If the LED is alternating green-amber, there is a link fault. If the LED is
amber, the port is blocked to ensure a loop does not exist in the forwarding
domain and is not forwarding data (typically, ports will remain in this state for
the first 30 seconds after being activated). If the LED is blinking amber, the
port is blocked to prevent a possible loop in the forwarding domain.

Port Duplex LED - Indicates the port duplex mode is selected when the LED is
green. When selected, port LEDs that are off are in half-duplex mode. If the
port LED is green, the port is in full-duplex mode.
Port Speed LED - Indicates the port speed mode is selected. When selected, the
port LEDs will display colors with different meanings. If the LED is off, the port
is operating at 10 Mb/s. If the LED is green, the port is operating at 100
Mb/s. If the LED is blinking green, the port is operating at 1000 Mb/s.

Power over Ethernet (PoE) Mode LED - If PoE is supported, a PoE mode LED
will be present. If the LED is off, it indicates the PoE mode is not selected and
that none of the ports have been denied power or placed in a fault condition. If
the LED is blinking amber, the PoE mode is not selected but at least one of the
ports has been denied power, or has a PoE fault. If the LED is green, it
indicates the PoE mode is selected and the port LEDs will display colors with
different meanings. If the port LED is off, the PoE is off. If the port LED is
green, the PoE is on. If the port LED is alternating green-amber, PoE is denied
because providing power to the powered device will exceed the switch power
capacity. If the LED is blinking amber, PoE is off due to a fault. If the LED is
amber, PoE for the port has been disabled.

Switch Boot Sequence

1. Power-on self test (POST).


2. Run boot loader software.
3. Boot loader performs low-level CPU initialization.
4. Boot loader initializes the flash file system
5. Boot loader locates and loads a default IOS operating system software image into
memory and passes control of the switch over to the IOS.

VLANS

VLAN means Virtual Local Area Network, which is a logical partition of a Layer 2 network.

Some of the characteristics of VLANs are as follows.

Multiple partitions can be created, allowing for multiple VLANs to co-exist.

Each VLAN is a broadcast domain, usually with its own IP network.

VLANs are mutually isolated and packets can only pass between them via a router.

The partitioning of the Layer 2 network takes place inside a Layer 2 device,
usually via a switch.

The hosts grouped within a VLAN are unaware of the VLAN’s existence.

Network performance can be a factor in an organization's productivity and its reputation for delivering as promised. One of the contributing technologies to excellent network performance is the separation of large broadcast domains into smaller ones with VLANs. Smaller broadcast domains limit the number of devices participating in broadcasts and allow devices to be separated into functional groupings, such as database services for an accounting department and high-speed data transfer for an engineering department.

Before VLANs

To appreciate why VLANs are being widely used today, consider a small community
college with student dorms and the faculty offices all in one building. The figure shows
the student computers in one LAN and the faculty computers in another LAN. This
works fine because each department is physically together, so it is easy to provide them
with their network resources.
A year later, the college has grown and now has three buildings. In the figure, the
original network is the same, but student and faculty computers are spread out across
three buildings. The student dorms remain on the fifth floor and the faculty offices
remain on the third floor. However, now the IT department wants to ensure that
student computers all share the same security features and bandwidth controls. How
can the network accommodate the shared needs of the geographically separated
departments?
The solution for the community college is to use a networking technology called a virtual
LAN (VLAN). A VLAN allows a network administrator to create groups of logically
networked devices that act as if they are on their own independent network, even if
they share a common infrastructure with other VLANs. When you configure a VLAN,
you can name it to describe the primary role of the users for that VLAN. As another
example, all of the student computers in a school can be configured in the "Student"
VLAN. Using VLANs, you can logically segment switched networks based on functions,
departments, or project teams.

These VLANs allow the network administrator to implement access and security policies
to particular groups of users. For example, the faculty, but not the students, can be
allowed access to e-learning management servers for developing online course materials.

VLAN Details
A VLAN is a logically separate IP subnetwork. VLANs allow multiple IP networks and
subnets to exist on the same switched network. For computers to communicate on the
same VLAN, each must have an IP address and a subnet mask that is consistent for
that VLAN. The switch has to be configured with the VLAN and each port in the
VLAN must be assigned to the VLAN. A switch port with a singular VLAN configured
on it is called an access port. Remember, just because two computers are physically
connected to the same switch does not mean that they can communicate. Devices on
two separate networks and subnets must communicate via a router (Layer 3).

Benefits of a VLAN

Security - Groups that have sensitive data are separated from the rest of the network,
decreasing the chances of confidential information breaches. Faculty computers are on
VLAN 10 and completely separated from student and guest data traffic.

Cost reduction - Cost savings result from less need for expensive network upgrades and
more efficient use of existing bandwidth and uplinks.
Higher performance - Dividing flat Layer 2 networks into multiple logical workgroups
(broadcast domains) reduces unnecessary traffic on the network and boosts performance.

Broadcast storm mitigation - Dividing a network into VLANs reduces the number of
devices that may participate in a broadcast storm.

Improved IT staff efficiency - VLANs make it easier to manage the network because users with similar network requirements share the same VLAN.

VLAN ID Ranges

Normal Range VLANs

Used in small- and medium-sized business and enterprise networks. Identified by a VLAN
ID between 1 and 1005.

IDs 1 and 1002 to 1005 are automatically created and cannot be removed. IDs 1002 – 1005 are reserved for Token Ring and FDDI.

Extended Range VLANs

Ranges from 1006 to 4094. They are typically used by service providers. They have fewer options than normal VLANs. The VLAN configurations are stored in the running configuration file.

A Cisco Catalyst 2960 switch supports 255 normal and extended VLANs.

VLAN TYPES

In a network, however, there are a number of terms for VLANs. Some terms define the type of network traffic they carry and others define a specific function a VLAN performs. The following describes common VLAN terminology:

Data VLAN

A data VLAN is a VLAN that is configured to carry only user-generated traffic. A VLAN could carry voice-based traffic or traffic used to manage the switch, but this traffic would not be part of a data VLAN.

Default VLAN

All switch ports become a member of the default VLAN after the initial boot-up of the switch. Having all the switch ports participate in the default VLAN makes them all part of the same broadcast domain. This allows any device connected to any switch port to communicate with other devices on other switch ports. The default VLAN for Cisco switches is VLAN 1. VLAN 1 has all the features of any VLAN, except that you cannot rename it and you cannot delete it.

NATIVE VLANs

A native VLAN is assigned to an 802.1Q trunk port. An 802.1Q trunk port supports
traffic coming from many VLANs (tagged traffic) as well as traffic that does not come
from a VLAN (untagged traffic). The 802.1Q trunk port places untagged traffic on the
native VLAN.

MANAGEMENT VLANS

A management VLAN is any VLAN you configure to access the management capabilities
of a switch. VLAN 1 would serve as the management VLAN if you did not proactively
define a unique VLAN to serve as the management VLAN. You assign the management
VLAN an IP address and subnet mask.
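For example, to use VLAN 99 as the management VLAN, create the VLAN and configure its switch virtual interface (the VLAN number and address are example values):

S1# configure terminal
S1(config)# vlan 99
S1(config-vlan)# exit
S1(config)# interface vlan 99
S1(config-if)# ip address 192.168.99.2 255.255.255.0
S1(config-if)# no shutdown
S1(config-if)# end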

Voice VLANs

It is easy to appreciate why a separate VLAN is needed to support Voice over IP (VoIP). Imagine you are receiving an emergency call and suddenly the quality of the transmission degrades so much you cannot understand what the caller is saying. VoIP traffic requires:

Assured bandwidth to ensure voice quality

Transmission priority over other types of network traffic

Ability to be routed around congested areas on the network

Delay of less than 150 milliseconds (ms) across the network
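On a Catalyst access port, a voice VLAN can be configured alongside the data VLAN, so an attached IP phone tags voice traffic separately from the PC behind it. The interface and VLAN numbers are example values:

S1# configure terminal
S1(config)# interface FastEthernet 0/18
S1(config-if)# switchport mode access
S1(config-if)# switchport access vlan 20
S1(config-if)# switchport voice vlan 150
S1(config-if)# end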

Default VLAN (VLAN 1)

By default there is a VLAN called VLAN 1, which cannot be renamed or deleted. All switch ports belong to VLAN 1 by default (before any configuration).
CONFIGURING VLANS

1. Enter global configuration mode by typing:

S1#configure terminal

2. Create the VLAN and give it an ID:

S1(config)# vlan vlan-id

3. Give the VLAN a name with the command:

S1(config-vlan)#name vlan-name

4. Return to privileged EXEC mode. You must end your configuration session for the
configuration to be saved in the vlan.dat file and for configuration to take effect.

S1(config-vlan)#end
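Putting the steps together, creating a VLAN named Student might look like this; the VLAN ID and name are example values:

S1# configure terminal
S1(config)# vlan 20
S1(config-vlan)# name Student
S1(config-vlan)# end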

ASSIGNING A SWITCH PORT TO A VLAN

After you have created a VLAN, assign one or more ports to the VLAN. When you manually assign a switch port to a VLAN, it is known as a static access port. A static access port can belong to only one VLAN at a time.

Verifying VLANs

After you configure the VLAN, you can validate the VLAN configurations using Cisco
IOS show commands. The following are the various show commands.

S1# show vlan brief

S1# show interface vlan vlan-id

S1# show vlan name vlan-name

S1# show vlan summary
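For example, a port can be assigned to a VLAN as a static access port and the result verified; the interface and VLAN ID are example values:

S1# configure terminal
S1(config)# interface FastEthernet 0/18
S1(config-if)# switchport mode access
S1(config-if)# switchport access vlan 20
S1(config-if)# end
S1# show vlan brief

The show vlan brief output should now list the interface under the assigned VLAN rather than under VLAN 1.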

MANAGE PORT MEMBERSHIP

There are a number of ways to manage VLANs and VLAN port memberships. The figure
shows the syntax for the no switchport access vlan command.
CHANGING VLAN PORT MEMBERSHIP

When removing a VLAN from a switch port, go into the specific interface whose VLAN membership you want to change, and issue the command no switchport access vlan as shown below.
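The interface number here is an example value:

S1# configure terminal
S1(config)# interface FastEthernet 0/18
S1(config-if)# no switchport access vlan
S1(config-if)# end

The port then returns to the default VLAN (VLAN 1).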

DELETING VLANS
To remove or delete a vlan, enter into the global configuration mode and type the
command, no vlan vlan-id.

S1(config)# no vlan vlan-id

VLAN Trunks

A VLAN trunk carries more than one VLAN.

A VLAN trunk is usually established between switches so same-VLAN devices can communicate, even if physically connected to different switches.

A VLAN trunk is not associated with any VLANs; neither are the trunk ports used to establish the trunk link.

Cisco IOS supports IEEE 802.1Q, a popular VLAN trunk protocol.

Controlling Broadcast Domains with VLANs

VLANs can be used to limit the reach of broadcast frames.

A VLAN is a broadcast domain of its own.

A broadcast frame sent by a device in a specific VLAN is forwarded within that VLAN only.

VLANs help control the reach of broadcast frames and their impact in the
network.

Unicast and multicast frames are forwarded within the originating VLAN.

Configuring IEEE 802.1q Trunk Links
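A minimal 802.1Q trunk configuration on a Catalyst 2960 might look like the following; the interface number, native VLAN, and allowed VLAN list are example values:

S1# configure terminal
S1(config)# interface GigabitEthernet 0/1
S1(config-if)# switchport mode trunk
S1(config-if)# switchport trunk native vlan 99
S1(config-if)# switchport trunk allowed vlan 10,20,99
S1(config-if)# end
S1# show interfaces trunk

The show interfaces trunk command verifies the trunking mode, the native VLAN, and the VLANs allowed on the link. The native VLAN should match on both ends of the trunk.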

ROUTING CONCEPTS

Why Routing?

The router is responsible for the routing of traffic between networks.
Routers are Computers

Routers are specialized computers containing the following required components to operate:

• Central processing unit (CPU)

• Operating system (OS) - Routers use Cisco IOS

• Memory and storage (RAM, ROM, NVRAM, Flash, hard drive)


Routers use specialized ports and network interface cards to interconnect to other
networks.

Routers Interconnect Networks

Routers can connect multiple networks.

Routers have multiple interfaces, each on a different IP network.

Every interface connects to a different network.

The interface can be a LAN interface or WAN interface.


Routers determine the best path

Routers use static routes and dynamic routing protocols to learn about remote
networks and build their routing tables.

Routers use routing tables to determine the best path to send packets.

Routers encapsulate the packet and forward it to the interface indicated in the routing table.

STATIC ROUTING

Routing is at the core of every data network, moving information across an internetwork from source to destination. Routers are the devices responsible for the transfer of packets from one network to the next.

Routers learn about remote networks either dynamically using routing protocols or
manually using static routes. In many cases routers use a combination of both dynamic
routing protocols and static routes. This chapter focuses on static routing. Static
routes are very common and do not require the same amount of processing and
overhead as we will see with dynamic routing protocols.

Role of the Router

The router is a special-purpose computer that plays a key role in the operation of any
data network. Routers are primarily responsible for interconnecting networks by:

Determining the best path to send packets.

Forwarding packets toward their destination.

Routers perform packet forwarding by learning about remote networks and maintaining routing information. The router is the junction or intersection that connects multiple IP networks. The router's primary forwarding decision is based on Layer 3 information, the destination IP address.

The router's routing table is used to find the best match between the destination IP of
a packet and a network address in the routing table. The routing table will ultimately
determine the exit interface to forward the packet and the router will encapsulate that
packet in the appropriate data link frame for that outgoing interface.

Two types of cables can be used with Ethernet LAN interfaces:

Straight-through cables are used for:

Switch-to-router

Switch-to-PC

Hub-to-PC

Hub-to-server

Crossover cables are used for:

Switch-to-switch

PC-to-PC
Switch-to-hub

Hub-to-hub

Router-to-router

Router-to-server

Static routing provides some advantages over dynamic routing, including:

Static routes are not advertised over the network, resulting in better security.

Static routes use less bandwidth than dynamic routing protocols, and no CPU cycles
are used to calculate and communicate routes.

The path a static route uses to send data is known.

Static routing has the following disadvantages:

Initial configuration and maintenance are time-consuming.

Configuration is error-prone, especially in large networks.

Administrator intervention is required to maintain changing route information.

Does not scale well with growing networks; maintenance becomes cumbersome.

Requires complete knowledge of the whole network for proper implementation.

When to use static routing

Static routing has three primary uses:

Providing ease of routing table maintenance in smaller networks that are not
expected to grow significantly.

Routing to and from stub networks. A stub network is a network accessed by a
single route, and the router has no other neighbors.

Using a single default route to represent a path to any network that does not
have a more specific match with another route in the routing table. Default
routes are used to send traffic to any destination beyond the next upstream
router.

Static Routes are often used to:

Connect to a specific network.

Provide a Gateway of Last Resort for a stub network.

Reduce the number of routes advertised by summarizing several contiguous
networks as one static route.

Create a backup route in case a primary route link fails.

Stub network

Examining Router Interfaces

The show ip route command is used to display the routing table. Initially, the routing
table is empty if no interfaces have been configured.
As you can see in the routing table for R1, no interfaces have been configured with an
IP address and subnet mask.

Note: Static routes and dynamic routes will not be added to the routing table until
the appropriate local interfaces, also known as the exit interfaces, have been configured
on the router.

Interfaces and their Status

show interfaces command - The show interfaces command shows the status and gives a
detailed description for all interfaces on the router. As you can see, the output from
the command can be rather lengthy. To view the same information, but for a specific
interface, such as FastEthernet 0/0, use the show interfaces command with a
parameter that specifies the interface. For example:

R1#show interfaces fastethernet 0/0

FastEthernet0/0 is administratively down, line protocol is down

Notice that the interface is administratively down and the line protocol is down.
Administratively down means that the interface is currently in the shutdown mode, or
turned off. Line protocol is down means, in this case, that the interface is not
receiving a carrier signal from a switch or a hub.
show ip interface brief command - The show ip interface brief command can be used to
see a portion of the interface information in a condensed format.

show running-config command - The show running-config command displays the current
configuration file that the router is using. Configuration commands are temporarily
stored in the running configuration file and implemented immediately by the router.

Configuring an Ethernet Interface

As shown, R1 does not yet have any routes. Let's add a route by configuring an
interface and explore exactly what happens when that interface is activated. By default,
all router interfaces are shutdown, or turned off. To enable this interface, use the no
shutdown command, which changes the interface from administratively down to up.

R1(config)#interface fastethernet 0/0

R1(config-if)#ip address 172.16.3.1 255.255.255.0

R1(config-if)#no shutdown

Reading the Routing Table


Now look at the routing table shown in the figure. Notice that R1 now has a "directly
connected" network on its FastEthernet 0/0 interface. The interface was configured
with the 172.16.3.1/24 IP address which makes it a member of the 172.16.3.0/24
network.

Examine the following line of output from the table:

C 172.16.3.0 is directly connected, FastEthernet0/0

The C at the beginning of the route indicates that this is a directly connected network.

Routers Usually Store Network Addresses

With very few exceptions, routing tables have routes for network addresses rather than
individual host addresses. The 172.16.3.0/24 route in the routing table means that
this route matches all packets with a destination address belonging to this network.
Having a single route represent an entire network of host IP addresses makes the
routing table smaller, with fewer routes, which results in faster routing table lookups.
The routing table could contain all 254 individual host IP addresses for the
172.16.3.0/24 network, but that is an inefficient way of storing addresses.

Configuring a Serial Interface

The process we use for the configuration of the serial interface 0/0/0 is similar to the
process we used to configure the FastEthernet 0/0 interface.
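As a sketch (the addresses, interface numbers, and clock rate are illustrative assumptions, not taken from a specific figure), a serial interface configuration might look like this; the clock rate command is only needed on the DCE end of the cable:

R1(config)#interface serial 0/0/0
R1(config-if)#ip address 172.16.2.1 255.255.255.0
R1(config-if)#clock rate 64000
R1(config-if)#no shutdown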

Default Static Route

A default static route is a route that matches all packets.

A default route identifies the gateway IP address to which the router sends all
IP packets for which it does not have a learned or static route.

A default static route is simply a static route with 0.0.0.0/0 as the
destination IPv4 address.
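For example, a default static route pointing at a hypothetical next-hop address of 172.16.2.2 would be configured as:

R1(config)#ip route 0.0.0.0 0.0.0.0 172.16.2.2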

Summary Static Route
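The figure for this section is not reproduced here, but the idea is that several contiguous networks reachable through the same next hop can be replaced by a single route. As an illustrative sketch (all addresses assumed), the four networks 172.16.0.0/24 through 172.16.3.0/24 can be summarized as one /22 static route:

! One summary route replaces four /24 routes
R1(config)#ip route 172.16.0.0 255.255.252.0 192.168.1.2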


Floating Static Route

Floating static routes are static routes that are used to provide a backup path
to a primary static or dynamic route, in the event of a link failure.

The floating static route is only used when the primary route is not available. To
accomplish this, the floating static route is configured with a higher
administrative distance than the primary route.
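For instance, if the primary route is a static route with the default AD of 1, a floating static backup through a hypothetical alternate next hop could be given an AD of 5 so it is only installed in the routing table when the primary route is withdrawn:

! The trailing 5 is the administrative distance of this backup route
R1(config)#ip route 0.0.0.0 0.0.0.0 10.10.10.2 5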
Configuring IPV4 Static Routes

The ip route command

Next Hop Options

The next hop can be identified by an IP address, exit interface, or both. How the
destination is specified creates one of the three following route types:

Next-hop route - Only the next-hop IP address is specified.

Directly connected static route - Only the router exit interface is specified.

Fully specified static route - The next-hop IP address and exit interface are
specified.
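Using hypothetical network and next-hop addresses, the three forms of the ip route command look like this:

! Next-hop static route
R1(config)#ip route 192.168.2.0 255.255.255.0 172.16.2.2
! Directly connected static route
R1(config)#ip route 192.168.2.0 255.255.255.0 serial 0/0/0
! Fully specified static route
R1(config)#ip route 192.168.2.0 255.255.255.0 serial 0/0/0 172.16.2.2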

Configuring Next-Hop static route

When a packet is destined for the 192.168.2.0/24 network, R1:

1. Looks for a match in the routing table and finds that it has to forward the
packets to the next-hop IPv4 address 172.16.2.2.
2. R1 must determine how to reach 172.16.2.2; therefore, it searches a second
time for a 172.16.2.2 match.

Configuring a directly connected static route


Verify a Static route

Along with ping and traceroute, useful commands to verify static routes include:

show ip route

show ip route static

show ip route network

DYNAMIC ROUTING

This chapter introduces dynamic routing protocols, including how different routing
protocols are classified, what metrics they use to determine best path, and the benefits
of using a dynamic routing protocol.

Dynamic routing protocols are usually used in larger networks to ease the administrative
and operational overhead of using only static routes. Typically, a network uses a
combination of both a dynamic routing protocol and static routes.

Dynamic routing protocols have been used in networks since the early 1980s. One of
the earliest routing protocols was Routing Information Protocol (RIP). RIP has evolved
into a newer version RIPv2. However, the newer version of RIP still does not scale to
larger network implementations. To address the needs of larger networks, two advanced
routing protocols were developed: Open Shortest Path First (OSPF) and Intermediate
System-to-Intermediate System (IS-IS).

Why Dynamic Routing?

Routing protocols are used to facilitate the exchange of routing information between
routers. Routing protocols allow routers to dynamically share information about remote
networks and automatically add this information to their own routing tables.

Routing protocols determine the best path to each network, which is then added to the
routing table. One of the primary benefits of using a dynamic routing protocol is that
routers exchange routing information whenever there is a topology change. This exchange
allows routers to automatically learn about new networks and also to find alternate
paths when there is a link failure to a current network.

Compared to static routing, dynamic routing protocols require less administrative
overhead. However, the expense of using dynamic routing protocols is dedicating part of
a router's resources for protocol operation including CPU time and network link
bandwidth. Despite the benefits of dynamic routing, static routing still has its place.
There are times when static routing is more appropriate and other times when dynamic
routing is the better choice.

A routing protocol is a set of processes, algorithms, and messages that are used to
exchange routing information and populate the routing table with the routing protocol's
choice of best paths. The purpose of a routing protocol includes:

Discovery of remote networks


Maintaining up-to-date routing information
Choosing the best path to destination networks
Ability to find a new best path if the current path is no longer available

What are the components of a routing protocol?

Data structures - Some routing protocols use tables and/or databases for their
operations. This information is kept in RAM.

Algorithm - An algorithm is a finite list of steps used in accomplishing a task. Routing
protocols use algorithms for facilitating routing information and for best path
determination.

Routing protocol messages - Routing protocols use various types of messages to discover
neighboring routers, exchange routing information, and other tasks to learn and maintain
accurate information about the network.

Dynamic Routing Protocol Operation


All routing protocols have the same purpose - to learn about remote networks and to
quickly adapt whenever there is a change in the topology. The method that a routing
protocol uses to accomplish this depends upon the algorithm it uses and the operational
characteristics of that protocol. The operations of a dynamic routing protocol vary
depending upon the type of routing protocol and the routing protocol itself.

In general, the operations of a dynamic routing protocol can be described as follows:

The router sends and receives routing messages on its interfaces.

The router shares routing messages and routing information with other routers that are
using the same routing protocol.

Routers exchange routing information to learn about remote networks.

When a router detects a topology change the routing protocol can advertise this change
to other routers.

Dynamic Routing Advantages and Disadvantages

Dynamic routing advantages:

Administrator has less work maintaining the configuration when adding or deleting
networks.

Protocols automatically react to the topology changes.

Configuration is less error-prone.

More scalable, growing the network usually does not present a problem.

Dynamic routing disadvantages:

Router resources are used (CPU cycles, memory and link bandwidth).

More administrator knowledge is required for configuration, verification, and
troubleshooting.

INTERIOR ROUTING PROTOCOLS


Interior Gateway Protocols (IGPs) can be classified as two types:

1. Distance vector routing protocols

2. Link-state routing protocols

Distance Vector Routing Protocol Operation

Distance vector means that routes are advertised as vectors of distance and direction.
Distance is defined in terms of a metric such as hop count and direction is simply the
next-hop router or exit interface. Distance vector protocols typically use the Bellman-
Ford algorithm for the best path route determination.

Some distance vector protocols periodically send complete routing tables to all connected
neighbors. In large networks, these routing updates can become enormous, causing
significant traffic on the links.

Distance vector protocols use routers as sign posts along the path to the final
destination. The only information a router knows about a remote network is the
distance or metric to reach that network and which path or interface to use to get
there. Distance vector routing protocols do not have an actual map of the network
topology.

Link-state Protocol Operation

In contrast to distance vector routing protocol operation, a router configured with a
link-state routing protocol can create a "complete view" or topology of the network by
gathering information from all of the other routers. To continue our analogy of sign
posts, using a link-state routing protocol is like having a complete map of the network
topology. The sign posts along the way from source to destination are not necessary,
because all link-state routers are using an identical "map" of the network. A link-state
router uses the link-state information to create a topology map and to select the best
path to all destination networks in the topology.

Link state vs Distance vector


With some distance vector routing protocols, routers send periodic updates of their
routing information to their neighbors. Link-state routing protocols do not use periodic
updates. After the network has converged, a link-state update is only sent when there is
a change in the topology.

Classful Routing Protocols

Classful routing protocols do not send subnet mask information in routing updates. The
first routing protocols, such as RIP, were classful. This was at a time when network
addresses were allocated based on classes, class A, B, or C. A routing protocol did not
need to include the subnet mask in the routing update because the network mask could
be determined based on the first octet of the network address.

Classful routing protocols can still be used in some of today's networks, but because
they do not include the subnet mask they cannot be used in all situations. Classful
routing protocols cannot be used when a network is subnetted using more than one
subnet mask, in other words classful routing protocols do not support variable length
subnet masks (VLSM).

Classless Routing Protocols

Classless routing protocols include the subnet mask with the network address in routing
updates. Today's networks are no longer allocated based on classes and the subnet mask
cannot be determined by the value of the first octet. Classless routing protocols are
required in most networks today because of their support for VLSM.

In the figure, notice that the classless version of the network is using both /30 and
/27 subnet masks in the same topology. Also notice that this topology is using a
discontiguous design.

What is Convergence?

Convergence is when all routers' routing tables are at a state of consistency. The
network has converged when all routers have complete and accurate information about
the network. Convergence time is the time it takes routers to share information,
calculate best paths, and update their routing tables. A network is not completely
operable until the network has converged; therefore, most networks require short
convergence times.

Routing protocols can be rated based on the speed to convergence; the faster the
convergence, the better the routing protocol. Generally, RIP and IGRP are slow to
converge, whereas EIGRP and OSPF are faster to converge.

Purpose of Routing Metrics

There are cases when a routing protocol learns of more than one route to the same
destination. To select the best path, the routing protocol must be able to evaluate and
differentiate between the available paths. For this purpose a metric is used. A metric is
a value used by routing protocols to assign costs to reach remote networks. The metric
is used to determine which path is most preferable when there are multiple paths to
the same remote network.

Each routing protocol uses its own metric. For example, RIP uses hop count, EIGRP
uses a combination of bandwidth and delay, and Cisco's implementation of OSPF uses
bandwidth.

Load Balancing

We have discussed that individual routing protocols use metrics to determine the best
route to reach remote networks. But what happens when two or more routes to the
same destination have identical metric values? How will the router decide which path to
use for packet forwarding? In this case, the router does not choose only one route.
Instead, the router "load balances" between these equal cost paths. The packets are
forwarded using all equal-cost paths.

The Purpose of Administrative Distance

Administrative distance (AD) defines the preference of a routing source. Each routing
source - including specific routing protocols, static routes, and even directly connected
networks - is prioritized in order of most- to least-preferable using an administrative
distance value. Cisco routers use the AD feature to select the best path when it learns
about the same destination network from two or more different routing sources.

Administrative distance is an integer value from 0 to 255. The lower the value the
more preferred the route source. An administrative distance of 0 is the most
preferred. Only a directly connected network has an administrative distance of 0, which
cannot be changed.

Static routes are entered by an administrator who wants to manually configure the best
path to the destination. For that reason, static routes have a default AD value of 1.
This means that after directly connected networks, which have a default AD value of 0,
static routes are the most preferred route source.
Directly connected networks appear in the routing table as soon as the IP address on
the interface is configured and the interface is enabled and operational. The AD value of
directly connected networks is 0, meaning that this is the most preferred routing
source. There is no better route for a router than having one of its interfaces directly
connected to that network. For that reason, the administrative distance of a directly
connected network cannot be changed and no other route source can have an
administrative distance of 0.

RIP1 AND RIP2

CHARACTERISTICS OF RIP

RIP has the following key characteristics:
RIP is a distance vector routing protocol.
RIP uses hop count as its only metric for path selection.
Advertised routes with hop counts greater than 15 are unreachable.
Messages are broadcast every 30 seconds.

RIP uses the Bellman-Ford algorithm as its routing algorithm. Both RIP1 and RIP2 use
hop count as a metric, and the maximum hop count is 15. Both RIP1 and RIP2 have an
administrative distance of 120.

RIP1 VS RIP2
Configuring RIP

Examining default RIP settings

Use the privileged EXEC command show ip protocols to view the default RIP settings.
Enabling RIP2

Disabling Automatic Summarization


Like RIPv1, RIPv2 automatically summarizes networks at major network
boundaries by default.

To modify the default RIPv2 behavior of automatic summarization, use the no
auto-summary router configuration mode command.

This command has no effect when using RIPv1.

When automatic summarization has been disabled, RIPv2 no longer summarizes
networks to their classful address at boundary routers. RIPv2 now includes all
subnets and their appropriate masks in its routing updates.

The show ip protocols command output now states that automatic network
summarization is not in effect.
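Putting the pieces together, a minimal RIPv2 configuration with automatic summarization disabled might look like this (the network numbers are assumptions for illustration):

R1(config)#router rip
R1(config-router)#version 2
R1(config-router)#network 172.16.0.0
R1(config-router)#network 192.168.1.0
R1(config-router)#no auto-summary

Note that the network command takes classful network addresses; disabling auto-summary is what allows the individual subnets and masks to be advertised.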

Configuring passive interfaces

Sending out unneeded updates on a LAN interface impacts network in three ways:

Wasted Bandwidth

Wasted Resources

Security Risk
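To suppress these unneeded updates, an interface can be made passive under the routing process; a sketch for RIP (the interface name is assumed):

R1(config)#router rip
R1(config-router)#passive-interface fastethernet 0/0

RIP still advertises the network attached to FastEthernet 0/0, but no longer sends updates out that interface.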

Propagating a default route
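Under RIP, a default static route can be propagated to neighboring routers with the default-information originate command; a sketch (the next-hop address is an assumption):

R1(config)#ip route 0.0.0.0 0.0.0.0 10.10.10.2
R1(config)#router rip
R1(config-router)#default-information originate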
OSPF Routing Protocol

Open Shortest Path First (OSPF) is a link-state routing protocol that was developed
as a replacement for the distance vector routing protocol RIP. RIP was an acceptable
routing protocol in the early days of networking and the Internet, but its reliance on
hop count as the only measure for choosing the best route quickly became unacceptable
in larger networks that needed a more robust routing solution. OSPF is a classless
routing protocol that uses the concept of areas for scalability. RFC 2328 defines the
OSPF metric as an arbitrary value called cost.

OSPF's major advantages over RIP are its fast convergence and its scalability to much
larger network implementations.

OSPF Packet Types

There are five different types of OSPF LSPs. Each packet serves a specific purpose in
the OSPF routing process:
1. Hello - Hello packets are used to establish and maintain adjacency with other OSPF
routers. The hello protocol is discussed in detail in the next topic.

2. DBD - The Database Description (DBD) packet contains an abbreviated list of the
sending router's link-state database and is used by receiving routers to check against the
local link-state database.

3. LSR - Receiving routers can then request more information about any entry in the
DBD by sending a Link-State Request (LSR).

4. LSU - Link-State Update (LSU) packets are used to reply to LSRs as well as to
announce new information. LSUs contain seven different types of Link-State
Advertisements (LSAs). LSUs and LSAs are briefly discussed in a later topic.

5. LSAck - When an LSU is received, the router sends a Link-State Acknowledgement
(LSAck) to confirm receipt of the LSU.

Neighbor Establishment

Before an OSPF router can flood its link-states to other routers, it must first
determine if there are any other OSPF neighbors on any of its links. In the figure, the
OSPF routers are sending Hello packets on all OSPF-enabled interfaces to determine if
there are any neighbors on those links. The information in the OSPF Hello includes the
OSPF Router ID of the router sending the Hello packet.

Receiving an OSPF Hello packet on an interface confirms for a router that there is
another OSPF router on this link. OSPF then establishes adjacency with the neighbor.

OSPF Hello and Dead Intervals

Before two routers can form an OSPF neighbor adjacency, they must agree on three
values: Hello interval, Dead interval, and network type. The OSPF Hello interval
indicates how often an OSPF router transmits its Hello packets. By default, OSPF Hello
packets are sent every 10 seconds on multiaccess and point-to-point segments and every
30 seconds on non-broadcast multiaccess (NBMA) segments (Frame Relay, X.25,
ATM).

In most cases, OSPF Hello packets are sent as multicast to an address reserved for
AllSPFRouters at 224.0.0.5. Using a multicast address allows a device to ignore the
packet if its interface is not enabled to accept OSPF packets. This saves CPU processing
time on non-OSPF devices.

The Dead interval is the period, expressed in seconds, that the router will wait to
receive a Hello packet before declaring the neighbor "down." Cisco uses a default of four
times the Hello interval. For multiaccess and point-to-point segments, this period is 40
seconds. For NBMA networks, the Dead interval is 120 seconds.
If the Dead interval expires before the routers receive a Hello packet, OSPF will remove
that neighbor from its link-state database. The router floods the link-state information
about the "down" neighbor out all OSPF enabled interfaces.
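If the timers must be changed, they are set per interface, and both neighbors must agree on the values for the adjacency to form; a sketch with assumed values:

R1(config)#interface serial 0/0/0
R1(config-if)#ip ospf hello-interval 5
R1(config-if)#ip ospf dead-interval 20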

Electing a DR and BDR

To reduce the amount of OSPF traffic on multiaccess networks, OSPF elects a
Designated Router (DR) and Backup Designated Router (BDR). The DR is responsible
for updating all other OSPF routers (called DROthers) when a change occurs in the
multiaccess network. The BDR monitors the DR and takes over as DR if the current DR
fails.

In the figure above, R1, R2, and R3 are connected through point-to-point links.
Therefore, no DR/BDR election occurs.

OSPF ALGORITHM

Each OSPF router maintains a link-state database containing the LSAs received from all
other routers. Once a router has received all of the LSAs and built its local link-state
database, OSPF uses Dijkstra's shortest path first (SPF) algorithm to create an SPF
tree. The SPF tree is then used to populate the IP routing table with the best paths
to each network.

Administrative Distance

Administrative distance (AD) is the trustworthiness (or preference) of the route
source. OSPF has a default administrative distance of 110.

The Router OSPF Command

OSPF is enabled with the router ospf process-id global configuration command. The
process-id is a number between 1 and 65535 and is chosen by the network
administrator. The process-id is locally significant, which means that it does not have to
match other OSPF routers in order to establish adjacencies with those neighbors.
In our topology, we will enable OSPF on all three routers using the same process ID of
1. We are using the same process ID simply for consistency.

R1(config)#router ospf 1

R1(config-router)#

The Network Command

Any interfaces on a router that match the network address in the network command
will be enabled to send and receive OSPF packets.

This network (or subnet) will be included in OSPF routing updates.

The network command is used in router configuration mode.

Router(config-router)#network network-address wildcard-mask area area-id

The network address along with the wildcard mask is used to specify the interface or
range of interfaces that will be enabled for OSPF using this network command.
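For example, to enable OSPF on any interface with an address in the 172.16.1.0/24 subnet (the subnet and area number are assumptions for illustration):

R1(config)#router ospf 1
R1(config-router)#network 172.16.1.0 0.0.0.255 area 0

The wildcard mask 0.0.0.255 is the inverse of the subnet mask 255.255.255.0.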

Determining the Router-ID


The OSPF router ID is used to uniquely identify each router in the OSPF routing
domain. A router ID is simply an IP address. Cisco routers derive the router ID based
on three criteria and with the following precedence:

1. Use the IP address configured with the OSPF router-id command.

2. If the router-id is not configured, the router chooses the highest IP address of any
of its loopback interfaces.

3. If no loopback interfaces are configured, the router chooses the highest active IP
address of any of its physical interfaces.
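A sketch of explicitly setting the router ID (the value is an assumption):

R1(config)#router ospf 1
R1(config-router)#router-id 10.1.1.1

If OSPF is already running, the new router ID does not take effect until the router is reloaded or the OSPF process is restarted with clear ip ospf process.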

Highest Active IP Address

If an OSPF router is not configured with an OSPF router-id command and there are no
loopback interfaces configured, the OSPF router ID will be the highest active IP address
on any of its interfaces. The interface does not need to be enabled for OSPF, meaning
that it does not need to be included in one of the OSPF network commands. However,
the interface must be active - it must be in the up state.

Loopback Address

If the OSPF router-id command is not used and loopback interfaces are configured,
OSPF will choose the highest IP address of any of its loopback interfaces. A loopback address
is a virtual interface and is automatically in the up state when configured. You already
know the commands to configure a loopback interface:

Router(config)#interface loopback number

Router(config-if)#ip address ip-address subnet-mask

OSPF Metric

The OSPF metric is called cost. "A cost is associated with the output side of each
router interface. This cost is configurable by the system administrator. The lower the
cost, the more likely the interface is to be used to forward data traffic."
The Cisco IOS uses the cumulative bandwidths of the outgoing interfaces from the
router to the destination network as the cost value. At each router, the cost for an
interface is calculated as 10 to the 8th power divided by bandwidth in bps. This is
known as the reference bandwidth. Dividing 10 to the 8th power by the interface
bandwidth is done so that interfaces with the higher bandwidth values will have a lower
calculated cost.

The reference bandwidth defaults to 10 to the 8th power, 100,000,000 bps or 100
Mbps. This results in interfaces with a bandwidth of 100 Mbps and higher having the
same OSPF cost of 1. The reference bandwidth can be modified to accommodate
networks with links faster than 100,000,000 bps (100 Mbps) using the OSPF
command auto-cost reference-bandwidth. When this command is necessary, it is
recommended that it is used on all routers so the OSPF routing metric remains
consistent.
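For example, to give 10 Gb/s links a distinct cost, the reference bandwidth could be raised to 10,000 Mbps on every router in the OSPF domain (the value is an illustrative assumption):

! Value is in Mbps: 10000 = 10 Gb/s reference bandwidth
R1(config)#router ospf 1
R1(config-router)#auto-cost reference-bandwidth 10000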

OSPF Accumulates cost

The cost of an OSPF route is the accumulated value from one router to the
destination network.
For example, in the figure, the routing table on R1 shows a cost of 65 to reach the
10.10.10.0/24 network on R2. Because 10.10.10.0/24 is attached to a FastEthernet
interface, R2 assigns the value 1 as the cost for 10.10.10.0/24. R1 then adds the
additional cost value of 64 to send data across the default T1 link between R1 and R2.

Modifying the cost of an interface

When the serial interface is not actually operating at the default T1 speed, the
interface requires manual modification. Both sides of the link should be configured to
have the same value. Either the bandwidth interface command or the ip ospf cost
interface command achieves this purpose - an accurate value for use by OSPF in
determining the best route.

The bandwidth Command

The bandwidth command is used to modify the bandwidth value used by the IOS in
calculating the OSPF cost metric. The interface command syntax is the same syntax
that you learned in Chapter 9, "EIGRP":
Router(config-if)#bandwidth bandwidth-kbps

The figure shows the bandwidth commands used to modify the costs of all the serial
interfaces in the topology. For R1, the show ip ospf interface command shows that the
cost of the Serial 0/0/0 link is now 1562, the result of the Cisco OSPF cost
calculation 100,000,000/64,000.
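A sketch of the command behind that result (the interface and value are assumed from the example above):

! bandwidth is in kbps: 64 kbps yields cost 100,000,000 / 64,000 = 1562
R1(config)#interface serial 0/0/0
R1(config-if)#bandwidth 64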

The ip ospf cost Command

An alternative method to using the bandwidth command is to use the ip ospf cost
command, which allows you to directly specify the cost of an interface. For example, on
R1 we could configure Serial 0/0/0 with the following command:

R1(config)#interface serial 0/0/0

R1(config-if)#ip ospf cost 1562

INTER-VLAN ROUTING
In this chapter, you will learn about inter-VLAN routing and how it is used to permit
devices on separate VLANs to communicate. Now that you know how to configure
VLANs on a network switch, the next step is to allow devices connected to the various
VLANs to communicate with each other. In a previous chapter, you learned that each
VLAN is a unique broadcast domain, so computers on separate VLANs are, by default,
not able to communicate. There is a way to permit these end stations to communicate;
it is called inter-VLAN routing.

We define inter-VLAN routing as a process of forwarding network traffic from one
VLAN to another VLAN using a router. VLANs are associated with unique IP subnets
on the network.

There are two types of inter-VLAN routing: the legacy method and the
router-on-a-stick method.

1. Legacy system

In the past:

Actual routers were used to route between VLANs.

Each VLAN was connected to a different physical router interface.

Packets would arrive on the router through one interface, be routed, and
leave through another.

Because the router interfaces were connected to VLANs and had IP addresses
from that specific VLAN, routing between VLANs was achieved.

Large networks with a large number of VLANs required many router interfaces.

Configure Legacy Inter-VLAN Routing

Legacy inter-VLAN routing requires routers to have multiple physical interfaces.

Each one of the router’s physical interfaces is connected to a unique VLAN.


Each interface is also configured with an IP address for the subnet associated
with the particular VLAN.

Network devices use the router as a gateway to access the devices connected to
the other VLANs.

Switch Configurations for Legacy Inter-VLAN Routing
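The original figure is not reproduced here, but in the legacy approach the switch ports facing the router and hosts are simply access ports assigned to the proper VLANs; a sketch (interface and VLAN numbers are assumptions):

S1(config)#interface fastethernet 0/4
S1(config-if)#switchport mode access
S1(config-if)#switchport access vlan 10
S1(config-if)#exit
S1(config)#interface fastethernet 0/5
S1(config-if)#switchport mode access
S1(config-if)#switchport access vlan 30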


Router Interface Configurations for Legacy Inter-VLAN Routing

Every VLAN is assigned a router interface, as shown in the configurations below.
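A sketch of the per-VLAN physical interface configuration (the addresses are assumptions; each interface's subnet matches the VLAN it connects to):

R1(config)#interface fastethernet 0/0
R1(config-if)#ip address 172.17.10.1 255.255.255.0
R1(config-if)#no shutdown
R1(config-if)#exit
R1(config)#interface fastethernet 0/1
R1(config-if)#ip address 172.17.30.1 255.255.255.0
R1(config-if)#no shutdown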

2. Router on a Stick method

The router-on-a-stick approach uses a different path to route between VLANs.

One of the router’s physical interfaces is configured as an 802.1Q trunk port so it
can understand VLAN tags.

Logical subinterfaces are created; one subinterface per VLAN.

Each subinterface is configured with an IP address from the VLAN it represents.

VLAN members (hosts) are configured to use the subinterface address as a
default gateway.

Only one of the router’s physical interfaces is used.

Preparation

An alternative to legacy inter-VLAN routing is to use VLAN trunking and
subinterfaces.
VLAN trunking allows a single physical router interface to route traffic for
multiple VLANs.

The physical interface of the router must be connected to a trunk link on the
adjacent switch.

On the router, subinterfaces are created for each unique VLAN.

Each subinterface is assigned an IP address specific to its subnet or VLAN and is also
configured to tag frames for that VLAN.

Switch Configuration for Router-on-a-Stick
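
On the switch, only the single link toward the router needs to change compared to the legacy method: it must carry tagged frames for all VLANs, so it is configured as a trunk. A minimal sketch (the port number f0/5 is illustrative):

```
Switch(config)# interface f0/5
Switch(config-if)# switchport mode trunk
```

Host-facing ports remain ordinary access ports assigned to their respective VLANs.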

Router Subinterface Configuration for Router-on-a-Stick

For every VLAN, a subinterface has to be created and configured to tag frames for
that VLAN, and assigned an IP address and subnet mask.
The router interface is configured to operate as a trunk link and is connected to a
switch port configured in trunk mode. The router performs inter-VLAN routing by
accepting VLAN-tagged traffic on the trunk interface coming from the adjacent switch
and internally routing between the VLANs using subinterfaces. The router then forwards
the routed traffic, tagged for the destination VLAN, out the same physical
interface.

Subinterfaces are multiple virtual interfaces associated with one physical interface.
Each subinterface is configured in software on the router and is independently
assigned an IP address and a VLAN, so that it operates on that specific VLAN.
Because subinterfaces are configured for the different subnets corresponding to their
VLAN assignments, traffic can be logically routed between VLANs before the frames
are VLAN-tagged and sent back out the physical interface.

Configuring router subinterfaces is similar to configuring physical interfaces, except that
you need to create the subinterface and assign it to a VLAN.

In the example, create the router subinterface by entering the interface f0/0.10
command in global configuration mode. The syntax for the subinterface is always the
physical interface, in this case f0/0, followed by a period and a subinterface number.
The subinterface number is configurable, but it is typically chosen to match the
VLAN number. In the example, the subinterfaces use 10 and 30 as subinterface
numbers to make it easier to remember which VLANs they are associated with. The
physical interface is specified because there could be multiple interfaces in the router,
each of which could be configured to support many subinterfaces.

Before assigning an IP address to a subinterface, the subinterface needs to be configured
to operate on a specific VLAN using the encapsulation dot1q vlan-id command.

Legacy Inter-VLAN Routing vs. Router-on-a-Stick

Both physical interfaces and subinterfaces are used to perform inter-VLAN routing.
There are advantages and disadvantages to each method.

Port Limits

Legacy inter-VLAN routing requires one physical router interface per VLAN. On
networks with many VLANs, using a single router to perform inter-VLAN routing is not
feasible, because routers have physical limitations that prevent them from providing
large numbers of interfaces. If avoiding the use of subinterfaces is a priority, multiple
routers would instead be needed to perform inter-VLAN routing for all VLANs.

Subinterfaces allow a router to scale to accommodate more VLANs than the physical
interfaces permit. Inter-VLAN routing in large environments with many VLANs can
usually be better accommodated by using a single physical interface with many
subinterfaces.

Performance
Because there is no contention for bandwidth on separate physical interfaces,
inter-VLAN routing over physical interfaces performs better than routing over
subinterfaces. Traffic from each connected VLAN has access to the full bandwidth
of the physical router interface connected to that VLAN.

When subinterfaces are used for inter-VLAN routing, the traffic being routed competes
for bandwidth on the single physical interface. On a busy network, this could cause a
bottleneck for communication. To balance the traffic load, subinterfaces can be spread
across multiple physical interfaces, resulting in less contention between VLAN traffic.

Access Ports and Trunk Ports

Connecting physical interfaces for inter-VLAN routing requires that the switch ports be
configured as access ports. Subinterfaces require the switch port to be configured as a
trunk port so that it can accept VLAN tagged traffic on the trunk link. Using
subinterfaces, many VLANs can be routed over a single trunk link rather than a single
physical interface for each VLAN.

Cost

Financially, it is more cost-effective to use subinterfaces than separate physical
interfaces. Routers that have many physical interfaces cost more than routers with a
single interface. Additionally, if you have a router with many physical interfaces, each
interface is connected to a separate switch port, consuming extra switch ports on the
network. Switch ports are an expensive resource on high performance switches. By
consuming additional ports for inter-VLAN routing functions, both the switch and the
router drive up the overall cost of the inter-VLAN routing solution.

Complexity

Using subinterfaces for inter-VLAN routing results in a less complex physical
configuration than using separate physical interfaces, because there are fewer physical
network cables interconnecting the router to the switch. With fewer cables, there is
less confusion about where the cable is connected on the switch. Because the VLANs are
being trunked over a single link, it is easier to troubleshoot the physical connections.

On the other hand, using subinterfaces with a trunk port results in a more complex
software configuration, which can be difficult to troubleshoot. In the router-on-a-stick
model, only a single interface is used to accommodate all the different VLANs.
