Chapter 5 Spanning Tree Scalability
STP Active Logical Ports and Virtual Ports per Line Card
Figure 5-3 shows a pair of aggregation switches (single aggregation module) connecting to 45 access
layer switches in a looped access topology, each access switch using a four-port Gig-EtherChannel 802.1Q trunk. This might seem
like a large number of access switches to have on a single aggregation pair, but with 1RU switch
implementations, this amount or more is not uncommon. The following formula is used to determine the
total active logical interfaces on the system:
(trunks on the switch * active VLANs on trunks) + number of non-trunking interfaces on the switch
Using Figure 5-3 as an example, the calculation for aggregation 1 and 2 is as follows:
46 trunks * 120 VLANs + 2 = 5,522 active logical interfaces
This value is below the maximum values of both 802.1s MST and 802.1w Rapid-PVST+.
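The resulting count can be verified directly on a live system; a minimal check, assuming an aggregation switch hostname of AGG1:
AGG1# show spanning-tree summary totals
The totals line of the output reports the number of STP active logical interfaces, which can be compared against the calculated value and the platform maximums.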
Note An STP instance for all 120 VLANs defined in the system configuration is present on each trunk unless
manual VLAN pruning is performed. For example, on each trunk configuration the switchport trunk
allowed vlan X,Y command must be performed to reduce the number of spanning tree logical interfaces
being used on that port. The VTP Pruning feature does not remove STP logical instances from the port.
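As a hedged sketch of such manual pruning, assuming a trunk on interface TenGigabitEthernet 1/1 that needs to carry only VLANs 10 and 20 (interface and VLAN values are illustrative):
AGG1(config)# interface tenGigabitEthernet 1/1
AGG1(config-if)# switchport trunk allowed vlan 10,20
With this configuration, STP logical interfaces are allocated on the trunk only for VLANs 10 and 20, rather than for all 120 VLANs defined in the system.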
Calculating Virtual Ports per Line Card
Virtual ports are instances allocated to each trunk port on a line card. These ports are used to
communicate the spanning tree-related state to the switch processor on the Sup720. A maximum number
can be supported on each particular line card, as shown in Table 5-2. The following formula is used to
determine the number of spanning tree virtual instances used per line card:
sum of all ports used as trunks or part of a port-channel in a trunk * active VLANs on trunks
Figure 5-4 shows a single line card on the Aggregation 1 switch connecting to 12 access layer switches in a
looped access topology, each access switch using a four-port Gig-EtherChannel 802.1Q trunk.
Figure 5-4 Calculating STP Virtual Ports per Line Card
A similar example could show two 6748 line cards connecting to 24 access switches using distributed
EtherChannel. The same formula applies equally to both scenarios.
Using Figure 5-4 as an example, the calculation for the 6748 line card is as follows:
12 trunks with 4 ports each = 48 ports; 48 ports * 120 VLANs = 5,760 virtual ports
Note Virtual ports are allocated for each VLAN for every port participating in a trunk.
This value is well over the maximum supported with 802.1w Rapid-PVST+ and very close to the maximum
for 802.1s MST. Such a scenario experiences various issues, such as long convergence times and possibly
degraded system-level stability.
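The per-line card virtual port count can be checked on the switch; a minimal sketch, assuming the 6748 line card sits in slot 3:
AGG1# show vlan virtual-port slot 3
The output lists the virtual ports allocated on that slot, which can be compared against the per-line card maximums in Table 5-2.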
Steps to Resolve Logical Port Count Implications
The following steps can be taken to reduce the total number of logical ports or to resolve the issues
related to a large number of logical ports being used in a system:
Implementing multiple aggregation modules: As covered in Chapter 2, Data Center Multi-Tier
Model Design, using multiple aggregation modules permits the spanning tree domain to be
distributed, thus reducing total port count implications.
Performing manual pruning on switchport trunk configurations: Although this can be somewhat
cumbersome, it dramatically reduces the total number of both active logical and virtual port
instances used.
Using 802.1s MST: MST supports a very large number of logical port instances and is used in some
of the largest data centers in the world. The drawbacks of using MST are that it does not have as
much flexibility as other STP protocols, such as Rapid-PVST+, and it might not be supported in
certain service module configurations.
Note Refer to the Release Notes for information on MST interoperability with service modules.
Distributing trunks and non-trunk ports across line cards: This can reduce the number of virtual
ports used on a particular line card.
Removing unused VLANs going to Content Switching Modules (CSMs): The CSM automatically
has all available VLANs defined in the system configuration extended to it via the internal 4GEC
bus connection. This is essentially the same as any other trunk configured in the system. Although
there is no officially documented method of removing unnecessary VLANs, it can be performed.
Figure 5-5 provides an example of the steps to remove VLANs from the CSM configuration. Note
that the "command rejected" output is not valid.
Figure 5-5 Removing VLANs from CSM Configuration
Note There is no way to view which VLANs are attached to the CSM module via the configuration on the CLI.
The only way to determine which VLANs are present is with the show spanning-tree interface
port-channel 259 command.
Chapter 6 Data Center Access Layer Design
This chapter provides details of Cisco-tested access layer solutions in the enterprise data center. It
includes the following topics:
Overview of Access Layer Design Options
Layer 2 Looped Access Layer Model
Layer 2 Loop-Free Access Layer Model
FlexLinks Access Model
Overview of Access Layer Design Options
Access layer switches are primarily deployed in Layer 2 mode in the data center. A Layer 2 access
topology provides the following unique capabilities required in the data center:
VLAN extension: The Layer 2 access topology provides the flexibility to extend VLANs between
switches that are connected to a common aggregation module. This makes provisioning a server
to a particular subnet/VLAN simple, without concern for the physical placement of the server in a
particular rack or row.
Layer 2 adjacency requirements: NIC teaming, high availability clusters, and database clusters are
application examples that typically require their NICs to be in the same broadcast domain (VLAN).
The list of applications used in a clustered environment is growing, and Layer 2 adjacency is a
common requirement.
Custom applications: Many developers write custom applications without considering the Layer 3
network environment, whether because of a lack of skills or a lack of available tools. This can create challenges
in a Layer 3 IP access topology. These servers usually depend on Layer 2 adjacency with other
servers and could require rewriting code when changing IP addresses.
Service modules: A Layer 2 access permits services provided by service modules or appliances to
be shared across the entire access layer. Examples of this are when using the FWSM, CSM, and
SSLSM. The active-standby modes of operation used by service modules require Layer 2 adjacency
with the servers that use them.
Administrative reasons: Large enterprise customers commonly consist of multiple business units
and departments, often with their own individual set of IT personnel, which might be the result of
acquisitions or scaling of a business. IP address space is often divided and used by these business
units with specific boundaries defined, or it might be completely overlapping. As data center
consolidations occur, these business units/departments begin to share common floor and rack space.
The ability to group these departments with Layer 2 VLANs across multiple access switches could
be a critical requirement in these environments.
The table in Figure 6-1 outlines the available access layer design models and provides a comparison of
various factors to consider with each. Each access layer design model is covered in more detail in the
remainder of this chapter.
Note It might be more valuable to institute a point system in place of the plus-minus rating to determine which
access layer model would be more appropriate for a particular design.
Figure 6-1 Comparison Chart of Access Layer Designs
The table in Figure 6-1 contains the following column headings:
Uplinks in blocking or standby state: Some access layer designs can use both uplinks
(active-active), while others have one link active and the other blocked on a per-VLAN basis by
spanning tree, or completely unused in a backup mode only. A plus is given to those models that
have both uplinks active.
(Figure 6-1 rates the Looped Triangle, Looped Square, Loop-free U, Loop-free Inverted U, and
FlexLinks designs with plus/minus values against each of the criteria described below. Figure notes:
1. Use of Distributed EtherChannel greatly reduces the chances of a black holing condition. 2. NIC
teaming can eliminate a black holing condition. 3. When service modules are used and active service
modules are aligned to Agg1. 4. The ACE module permits L2 loop-free access with per-context
switchover on uplink failure. 5. Applies when using the CSM or FWSM in an active/standby arrangement.)
VLAN extension across the access layer: A plus is given to those access design models that permit
a VLAN to be extended to all access switches that are connected to a common aggregation module.
Service module black holing: An uplink failure on the access layer switch could break connectivity
between the servers and the service modules being used.
Single attached server black holing: If an access switch has a single uplink, it could be a large
failure exposure point. Uplinks that use Distributed EtherChannel can reduce the chances of black
holing. Server load balancing to a VIP that includes servers physically connected across multiple
access switches is another technique that can be used, as well as server NIC teaming.
Access switch density per aggregation module: When 10GE uplinks are used, port density at the
aggregation layer can be a challenge. Some access layer designs permit a larger number of access
layer switches per aggregation module than others.
Inter-switch link bandwidth scaling: Some access layer designs send all traffic towards the primary
root aggregation switch, while other designs send traffic towards both aggregation switches. When
sending to both aggregation switches, 50 percent of the traffic typically passes over the inter-switch
link to reach the active HSRP default gateway and active service module pair. The amount of
bandwidth used for the inter-switch links becomes very important in these designs and can create
scaling challenges.
Service Module Influence on Design
This section contains recommendations for service module implementations for each of the access layer
design models described. Because service modules can be implemented in many different ways or none
at all, the focus is on a single service module design that is commonly implemented using the FWSM
and CSM modules (see Figure 6-2).
Note The Application Control Engine (ACE) is a new module that introduces several enhancements with
respect to load balancing and security services. A key difference between the CSM, FWSM release 2.x,
and ACE is the ability to support active-active contexts across the aggregation module with per context
failover. The ACE module is not released at the time of this writing, so it is not covered.
Figure 6-2 CSM One-arm and FWSM Transparent Mode Design
The CSM one-arm combined with the FWSM transparent mode is a common implementation in the
enterprise data center and has particular advantages over other designs. The main advantages lie in the
areas of performance and virtualization.
Regarding performance, the CSM in one-arm mode allows client-to-server traffic to use the load
balancer as necessary to balance web layer server access while allowing server-to-server traffic to
bypass the CSM and use the switching capacity on the MSFC. The one-arm design also permits the real
client IP address to remain intact (no client NAT) so that server applications can use it for demographic or
other purposes.
The FWSM can be virtualized when operating in transparent mode. This allows individual contexts of
firewall instances to be created, configured, and managed independently of each other. This allows a
single FWSM module to be used across different lines of business or operations as if multiple physical
firewalls existed.
Service Module/Appliance and Path Preferences
To achieve redundancy, service modules are deployed in pairs. One module in the pair acts as the
primary/active service module while the other acts as the secondary/standby. Although service
module pairs can be deployed in the same aggregation chassis, they are typically placed in separate chassis
to achieve the highest level of redundancy. Service modules are required to be Layer 2 adjacent on their
configured VLAN interfaces to permit session state and monitoring to occur. For example, in Figure 6-2,
vlan 11 on the FWSM in aggregation 1 must be extended to vlan 11 on the FWSM in aggregation 2 via
the 802.1Q trunk inter-switch link. This is also true for vlans 12,13,21,22, and 23. This also applies to
the server vlan 44 used by the CSM module in Figure 6-2.
(Figure 6-2 shows both aggregation switches connecting to Core 1 and Core 2 over L3 links and to each
other over an 802.1Q trunk. Aggregation 1 is the STP primary root, HSRP active, and active service
module switch; Aggregation 2 is the STP secondary root, HSRP standby, and standby service module
switch. Each MSFC uses policy-based routing, the CSM uses server vlan 44, and the FWSM carries
contexts 1 through 3 on vlans 11, 12, 13 and 21, 22, 23.)
Because only one service module in one aggregation switch can be active at any one time, Cisco
recommends aligning traffic flow towards the primary service module(s). The active default gateway
and spanning tree root bridge are two components that influence path selection in a Layer 2 network. If
primary service modules are located in the aggregation 1 switch, it is desirable to define the HSRP
primary default gateway and spanning tree root bridge to also be on the aggregation 1 switch. This
prevents session flow from hopping back and forth between aggregation switches, optimizing
inter-switch link usage and providing a more deterministic environment.
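A minimal sketch of this alignment on the aggregation 1 switch, assuming VLAN 10 with HSRP group 1 and a gateway address of 10.10.10.1 (hostname, addresses, and values are illustrative):
AGG1(config)# spanning-tree vlan 10 root primary
AGG1(config)# interface vlan 10
AGG1(config-if)# ip address 10.10.10.2 255.255.255.0
AGG1(config-if)# standby 1 ip 10.10.10.1
AGG1(config-if)# standby 1 priority 110
AGG1(config-if)# standby 1 preempt
The aggregation 2 switch would carry the matching spanning-tree vlan 10 root secondary command and a lower HSRP priority, keeping the root bridge, default gateway, and active service modules aligned.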
Note It is possible to double up on service modules and create a design such that active service modules are
in each aggregation switch. This permits load balancing of access layer VLANs across uplinks to each
aggregation switch without the need for flows to cross the inter-switch link between them. The
disadvantage of this type of design is that there are twice the number of devices, with a corresponding
increase in management and complexity.
When service modules/appliances are not used, access layer VLANs can be distributed across uplinks
without concern for traffic flow issues. This can be achieved by alternating the HSRP active default
gateway and spanning tree root configurations for each VLAN between the aggregation 1 and
aggregation 2 switch, or by using Gateway Load Balancing Protocol (GLBP) in place of HSRP.
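As an illustrative sketch of the alternating approach (hostnames and VLAN numbers hypothetical), the root roles are simply mirrored per VLAN between the two aggregation switches, with HSRP priorities set to match:
AGG1(config)# spanning-tree vlan 10 root primary
AGG1(config)# spanning-tree vlan 20 root secondary
AGG2(config)# spanning-tree vlan 10 root secondary
AGG2(config)# spanning-tree vlan 20 root primary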
Because most data center implementations use service modules or appliances, the remainder of this
chapter focuses on access layer topologies using service modules.
General Recommendations
The remainder of this chapter covers the details of the various access layer design models. Although each
meets their own specific requirements, the following general recommendations apply to all:
Spanning tree pathcost: Cisco recommends optimizing the spanning tree design by implementing
the spanning-tree pathcost method long global feature. The pathcost method long option causes
spanning tree to use a 32-bit value in determining port path costs, compared to the default 16-bit
value, which improves root path selection when various EtherChannel configurations exist.
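A sketch of the corresponding global configuration, applied on the aggregation and access switches:
AGG1(config)# spanning-tree pathcost method long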
EtherChannel protocol: Cisco also recommends using Link Aggregation Control Protocol (LACP)
as the link aggregation protocol for EtherChannel configurations. With LACP, the total available
bandwidth of an EtherChannel path is presented to STP for use in determining the path cost. This is
advantageous in situations where only a portion of an EtherChannel link fails and the blocked
alternate link can provide a higher bandwidth path.
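A minimal sketch of an LACP-negotiated EtherChannel uplink, assuming two member ports in channel group 1 (interface numbers illustrative):
ACCESS1(config)# interface range gigabitEthernet 1/1 - 2
ACCESS1(config-if-range)# channel-protocol lacp
ACCESS1(config-if-range)# channel-group 1 mode active
The mode active keyword causes the ports to initiate LACP negotiation; the aggregation switch end can be configured as active or passive.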
Failover tracking with service modules and HSRP: HSRP tracking of an interface can be used to
control switchover of the primary default gateway between aggregation switches. Service modules
can also track interface state and be configured to failover based on various up/down criteria.
Unfortunately, the service modules and HSRP do not work together and have different mechanisms
to determine failover, which can create situations where active and standby components are
misaligned across the aggregation layer.
There are specific situations where tracking can be of benefit, but for the most part Cisco
recommends not using the various failover tracking mechanisms and relying instead on using the
inter-switch aggregation links to reach active default gateway and service module(s) during failure
conditions. For this reason, it is important to consider failure scenarios when determining the proper
inter-switch link bandwidth to be used.
Service module timers: The convergence characteristics of various failure scenarios are influenced
by the service module(s) failover timer configurations. Test lab results show that average service
module failover times with these values are under ~6 seconds. The recommended service module
failover timer configurations are as follows:
CSM
module ContentSwitchingModule 3
ft group 1 vlan 102
priority 20
heartbeat-time 1
failover 3
preempt
FWSM
Unit Poll frequency 500 milliseconds, holdtime 3 seconds
Interface Poll frequency 3 seconds
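A hedged sketch of the FWSM CLI equivalent of the timer values above, assuming an FWSM software release that supports the polltime syntax:
failover polltime unit msec 500 holdtime 3
failover polltime interface 3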
Using Distributed EtherChannel (DEC): Cisco generally recommends that the inter-switch link
between aggregation switches be implemented with a DEC connection to provide the highest level
of resiliency. There are known caveats in certain Cisco IOS releases related to using DEC when
service modules are present. For more details, refer to the Release Notes for this guide.
Layer 2 Looped Access Layer Model
This section covers Layer 2 looped access topologies, and includes the following topics:
Layer 2 Looped Access Topologies
Triangle Looped Topology
Square Looped Topology
Layer 2 Looped Access Topologies
In a Layer 2 looped access topology, a pair of access layer switches are connected to the aggregation
layer using 802.1Q trunks. Looped access topologies consist of a triangle and square design, as shown
in Figure 6-3.
Figure 6-3 Triangle and Square Looped Access Topologies
In Figure 6-3, a VLAN is configured on each access switch on the corresponding 802.1Q uplink, and is
also extended between aggregation switches, forming a looped topology for that VLAN. The left side of
the diagram shows an access layer with a triangle looped topology, and the right side shows a square
looped topology. In the triangle looped topology, the access switch is dual homed to each aggregation
switch. In the square looped topology, a pair of access switches are interconnected together, with each
connected to a single aggregation switch.
Because a loop is present, not all links can be in a forwarding state at all times. Because
broadcast/multicast packets and unknown unicast MAC address packets must be flooded, they would
travel in an endless loop, completely saturating the VLAN and adversely affecting network performance.
A spanning tree protocol such as Rapid PVST+ or MST is required to automatically block a particular
link and break this loop condition.
The dashed black lines on the aggregation layer switches represent the demarcation between Layer 2 and
Layer 3 for the VLANs that are extended to the access layer switches. All packets processed in the
VLAN beneath this line are in the same Layer 2 broadcast domain and are Layer 3 routed above the line.
As denoted by the double solid lines, spanning tree automatically blocks one path to break the loop
condition.
In both looped topologies, the service module fault-tolerant VLANs are extended between aggregation
switches over the 802.1Q inter-switch link. This permits active-standby hellos and session state
communications to take place to support redundancy.
Triangle Looped Topology
The triangle looped topology is currently the most widely implemented in the enterprise data center. This
topology provides a deterministic design that makes it easy to troubleshoot while providing a high level
of flexibility (see Figure 6-4).
Figure 6-4 Triangle Looped Access Topology
Spanning Tree, HSRP, and Service Module Design
In a triangle looped access layer design, it is desirable to align the spanning tree root, HSRP default
gateway, and active service modules on the same aggregation switch, as shown in Figure 6-4. Aligning
the access layer switch uplink that is in the forwarding state directly to the same switch that is the
primary default gateway and active service module/appliance optimizes the traffic flows. Otherwise,
traffic flows can hop back and forth between aggregation switches, creating undesirable conditions and
difficulty in troubleshooting.
Failure Scenarios
The level of resiliency that is incorporated into the access layer design can vary based on the model used.
Other features such as route health injection and route tuning can influence this. This section describes
the four main failure scenarios that can occur in a looped access design. Understanding the amount of
exposure a customer faces in these scenarios helps in selecting the best access layer design.
Failure 1: Access Layer Uplink Failure
In this failure scenario, spanning tree unblocks the uplink to aggregation 2 because no loop exists (see
Figure 6-5).
Figure 6-5 Triangle Looped Failure Scenario 1: Uplink Down
Default gateway and active service modules remain on aggregation 1 unless tracking mechanisms are
configured and triggered. Traffic flow goes through aggregation 2 and uses the inter-switch link to
aggregation 1 to reach the active HSRP default gateway and active service module.
The convergence characteristics of this failure scenario depend on spanning tree. Test lab results show
that with Rapid-PVST+ implementations, this value should be under ~1.5 seconds, but can vary based
on the number of spanning tree logical and virtual ports per line card values used.
Failure 2: Service Module Failure (Using CSM One-arm and FWSM Transparent Mode)
In this failure scenario, there is no spanning tree convergence, and the primary default gateway remains
active on the aggregation 1 switch (see Figure 6-6).
Figure 6-6 Triangle Looped Failure Scenario 2: Service Modules
The backup service module moves to the active state on aggregation 2 because it no longer receives hello
packets from the active service module, and times out.
Figure 6-6 shows the following two failure instances:
2.1 (FWSM failure): Traffic flow goes through aggregation 1 and across the inter-switch link to
aggregation 2, through the now active FWSM module context, and back across the inter-switch link
to the active HSRP default gateway on the aggregation 1 MSFC. Because the CSM is still active in
aggregation 1, return traffic flow is directed to the CSM based on the PBR configuration on the
MSFC VLAN interface, and on to the client via the core.
2.2 (CSM failure): Traffic flow goes through aggregation 1, through the active FWSM module
context in aggregation 1, and to the MSFC VLAN interface. The MSFC VLAN interface PBR
configuration forces the return CSM traffic to travel across the inter-switch link to aggregation 2
and through the now active CSM module. Because the active default gateway of the CSM server
VLAN is still active on aggregation 1, the traffic must flow back across the inter-switch link to the
MSFC on aggregation 1 and then on to the client via the core.
Failure 3: Inter-Switch Link Failure
Figure 6-7 shows failure scenario 3.
Figure 6-7 Triangle Looped Failure Scenario 3: Inter-Switch Link Failure
This failure scenario has many side effects to consider. First, spanning tree unblocks the uplink to
aggregation 2 because no loop exists. RootGuard on the aggregation switch then automatically disables
the link to access 2 because it sees root BPDUs on the now-unblocked path to Aggregation 1 via the
access layer switch.
With the inter-switch link down and RootGuard disabling the path to Aggregation 1 via access 2, HSRP
multicast hello messages no longer have a path between Aggregation 1 and 2, so HSRP goes into an
active state on both switches for all VLANs.
Because the service module failover VLANs are configured across the inter-switch link only, service
modules in both aggregation switches determine that the other has failed and become active (this is
referred to as a split-brain effect).
If inbound traffic from the core flows into the aggregation 2 switch during this failure scenario, it
attempts to flow through the now-active service modules and stops, because RootGuard has the path to
the servers blocked. If for some reason RootGuard is not configured, this still results in asymmetrical
flows and breaks connectivity. It is for these reasons that Cisco recommends tuning the aggregation-core
routing configuration such that the aggregation 1 switch is the primary route advertised to the core for
the primary service module-related VLANs.
Route tuning plus RootGuard prevents asymmetrical connections and black holing in a split-brain
scenario because traffic flows are aligned with the same default gateway and service module
combination, preventing asymmetrical conditions. More detail on route tuning can be found in Establishing
Path Preference with RHI, page 7-1.
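A minimal sketch of enabling RootGuard on an aggregation switch port facing the access layer (interface number illustrative):
AGG2(config)# interface tenGigabitEthernet 2/1
AGG2(config-if)# spanning-tree guard root
With RootGuard enabled, the port is placed in a root-inconsistent (blocked) state whenever superior root BPDUs are received on it, and recovers automatically when they stop.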
Failure 4: Switch Power or Sup720 Failure (Non-redundant)
Figure 6-8 shows failure scenario 4.
Figure 6-8 Triangle Looped Failure Scenario 4: Single Sup720 or Power Failure
In this failure scenario, the spanning tree root, primary default gateway, and active service modules
transition to the aggregation 2 switch.
The convergence characteristics of this failure scenario depend on spanning tree, HSRP, and service
module failover times. Because the spanning tree and HSRP failover times are expected to be under that
of service modules, the actual convergence time depends on service module timer configurations.
Square Looped Topology
The square-based looped topology is not as common today in the enterprise data center but has recently
gained more interest. The square looped topology increases the access layer switch density when compared
to a triangle loop topology while retaining the same loop topology characteristics. This becomes particularly
important when 10GE uplinks are used. This topology is very similar to the triangle loop topology, with
differences in where spanning tree blocking occurs (see Figure 6-9).
Figure 6-9 Square Looped Access Topology
Spanning tree blocks the link between the access layer switches, with the lowest cost path to root being
via the uplinks to the aggregation switches, as shown in Figure 6-9. This allows both uplinks to be active
to the aggregation layer switches while providing a backup path in the event of an uplink failure. The
backup path can also be a lower bandwidth path because it is used only in a backup situation. This might
also permit configurations such as 10GE uplinks with GEC backup.
The possible disadvantages of the square loop design relate to inter-switch link use, because 50 percent
of access layer traffic might cross the inter-switch link to reach the default gateway/active service
module. There can also be degradation in performance in the event of an uplink failure because, in this
case, the oversubscription ratio doubles.
Figure 6-9 shows the spanning tree blocking point on the link between the access switch pair. This is
ideal if active services are deployed in each aggregation switch because it permits the uplinks to be load
balanced without traversing the aggregation layer inter-switch trunk. If active services are only on Agg1,
it might be desirable to adjust the STP cost such that the uplink to Agg2 is blocking instead of the link
between the access pair. This forces all traffic to the Agg1 switch without having to traverse the
aggregation layer inter-switch trunk.
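A hedged sketch of this cost adjustment on the access 2 uplink toward Agg2, assuming VLAN 10 and an illustrative interface and cost value (with pathcost method long in use, the value only needs to exceed the cost of the path through the access pair link to Agg1):
ACCESS2(config)# interface tenGigabitEthernet 1/1
ACCESS2(config-if)# spanning-tree vlan 10 cost 4000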
Spanning Tree, HSRP, and Service Module Design
Similar to a triangle design, it is desirable in a square looped access layer design to align the spanning
tree root, HSRP default gateway, and active service modules on the same aggregation switch, as shown
in Figure 6-9. By aligning the access layer switch uplink that is in the forwarding state directly to the
same switch that is the primary default gateway and active service module/appliance, traffic flows are
optimized. Otherwise, traffic flows can hop back and forth between aggregation switches, creating
undesirable conditions that are unpredictable and difficult to troubleshoot.
Failure Scenarios
This section examines the square loop design in various failure scenarios.
Failure 1: Access Layer Uplink Failure
Figure 6-10 shows failure scenario 1.
Figure 6-10 Square Looped Failure Scenario 1: Uplink Down
In this failure scenario, spanning tree unblocks the link between access switches because a loop no
longer exists. The default gateway and active service modules remain on aggregation 1 unless tracking
mechanisms are configured and triggered. Traffic flows go through aggregation 2 and use the
inter-switch link to aggregation 1 to reach the active HSRP default gateway and active service module.
The convergence characteristics of this failure scenario depend on spanning tree. Test lab results show
that with Rapid-PVST+ implementations, this value should be under ~1.5 seconds, but can vary based
on the number of spanning tree logical and virtual ports per line card values present.
Failure 2: Service Module Failure (Using CSM One-arm and FWSM Transparent Mode)
In the failure scenario shown in Figure 6-11, there is no spanning tree convergence, and the primary
default gateway remains active on the aggregation 1 switch.
Figure 6-11 Square Looped Failure Scenario 2: Service Modules
The backup service module moves to the active state on aggregation 2 because it no longer receives hello
packets from the active service module, and times out.
The following failure scenarios are shown:
2.1 (FWSM failure): Traffic flow goes through aggregation 1 and across the inter-switch link to
aggregation 2, through the now-active FWSM module context, and back across the inter-switch link
to the active HSRP default gateway on the aggregation 1 MSFC. Because the CSM is still active in
aggregation 1, return traffic flow is directed to the CSM based on the PBR configuration on the
MSFC VLAN interface, and then on to the client via the core.
2.2 (CSM failure): Traffic flow goes through aggregation 1, through the active FWSM module
context in aggregation 1, and to the MSFC VLAN interface. The MSFC VLAN interface PBR
configuration forces return CSM traffic to travel across the inter-switch link to aggregation 2 and
through the now-active CSM module. Because the active default gateway of the CSM server VLAN
is still active on aggregation 1, the traffic must flow back across the inter-switch link to the MSFC
on aggregation 1, and then on to the client via the core.
Failure 3: Inter-Switch Link Failure
Figure 6-12 shows failure scenario 3.
Figure 6-12 Square Looped Failure Scenario 3: Inter-Switch Link Failure
This failure scenario has many side effects to consider. First, spanning tree unblocks the access layer
inter-switch link because a loop no longer exists. RootGuard on the aggregation switch then
automatically disables the link to access 2 because it sees root BPDUs via the now-unblocked path to
aggregation 1.
With the inter-switch link down and RootGuard disabling the path to aggregation 1 via access 2, HSRP
multicast hello messages no longer have a path between aggregation 1 and 2, so HSRP goes into an active
state on both switches for all VLANs.
Because the service module failover VLANs are configured across the inter-switch link only, service
modules in both aggregation switches determine that the other has failed. This results in service modules
in aggregation 1 remaining in the active state, and service modules in aggregation 2 moving from standby
to the active state as well. This is commonly referred to as a split-brain effect and is very undesirable.
If inbound traffic from the core flows into the aggregation 2 switch during this failure scenario, it
attempts to flow through the now-active service modules and stops, because RootGuard has the path to
the servers blocked. If for some reason RootGuard is not configured, this still results in asymmetrical
flows and breaks connectivity. For these reasons, Cisco recommends tuning the aggregation-core routing
configuration such that the aggregation 1 switch is the primary route advertised to the core for the
primary service module-related VLANs.
Route tuning plus RootGuard prevents asymmetrical connections and black holing in a split-brain
scenario because traffic flows are aligned with the same default gateway and service module
combination, preventing asymmetrical conditions.
Failure 4: Switch Power or Sup720 Failure (Non-redundant)
Figure 6-13 shows failure scenario 4.
Figure 6-13 Square Looped Failure Scenario 4: Switch Power or Sup720 Failure
In this failure scenario, the spanning tree root, primary default gateway, and active service modules
transition to the aggregation 2 switch.
The convergence characteristics of this failure scenario depend on spanning tree, HSRP, and service
module failover times. Because the spanning tree and HSRP failover times are expected to be under that
of service modules, the actual convergence time depends on service module timer configurations.
Layer 2 Loop-Free Access Layer Model
This section covers Layer 2 loop-free access topologies and includes the following topics:
Layer 2 Loop-Free Access Topologies
Layer 2 Loop-Free U Topology
Layer 2 Loop-Free Inverted U Topology
Layer 2 Loop-Free Access Topologies
Figure 6-14 illustrates the access layer using the Layer 2 loop-free model, with loop-free U and loop-free
inverted U topologies.
Figure 6-14 Access Layer Loop-Free Topologies
Note that the Layer 2/Layer 3 line of demarcation is different in each design. In a loop-free U, a VLAN
is configured on each access switch, on the 802.1Q inter-switch link between the access switches, and
on the corresponding 802.1Q uplink, but it is not extended between the aggregation switches, thereby
avoiding a looped topology.
In a loop-free inverted U design, a VLAN is configured on each access switch and its corresponding
802.1Q uplink, and is also extended between aggregation switches, but is not extended between access
switches, avoiding a looped topology.
Although no loop is present in either loop-free design topology, it is still necessary to run STP as a loop
prevention tool. If a cabling or configuration error creates a loop, STP prevents the loop from possibly
bringing down the network.
Note In the loop-free U design, you cannot use RootGuard on the aggregation-to-access layer links, because
the aggregation 2 switch would automatically disable these links when root BPDUs are seen.
Details on spanning tree protocol types and comparisons are covered in version 1.1 of this guide.
In both loop-free topologies, the service module fault-tolerant VLANs are extended between aggregation
switches over the 802.1Q inter-switch link. This permits active-standby hellos and session state
communications to take place to support redundancy.
Layer 2 Loop-Free U Topology
The loop-free U topology design provides a Layer 2 access solution with active uplinks and redundancy
via an inter-switch link between the access layer switches. The chance of a loop condition is reduced but
spanning tree is still configured in the event of cabling or configuration errors occur (see Figure 6-15).
Figure 6-15 Loop-Free U Access Topology
With a loop-free U topology, there are no blocked paths by spanning tree because a loop does not exist.
The VLANs are configured on the access layer uplink 802.1Q trunks and access layer inter-switch
802.1Q trunks but are not extended between the aggregation layer switches (note the dashed line
designating the Layer 2 and Layer 3 boundaries). The service module fault tolerant VLANs are carried
across the 802.1Q trunk for redundancy operations.
This topology allows both uplinks to be active for all VLANs to the aggregation layer switches while
providing a backup path in the event of an uplink failure. This also permits a higher density of access
switches to be supported on the aggregation module.
The main disadvantages of the loop-free U design are the inability to extend VLANs outside of an access
pair, and failure conditions that can create black holes in the event of an uplink failure when service
modules are used. Extending VLANs outside of a single access pair creates a loop through the
aggregation layer, essentially creating a four-node looped topology with blocked links. The black hole
condition is covered in the failure scenarios later in this section.
Spanning Tree, HSRP, and Service Module Design
Because a loop does not exist in the topology, it does not actually require a spanning tree protocol to be
running. However, it is very wise to maintain spanning tree in case an error creates a loop condition. It
is also still recommended to maintain spanning tree primary root and secondary root configurations just
as in the triangle and square looped topology designs. This way, if a loop error condition does exist, the
service module and default gateway still operate optimally.
Note Cisco does not recommend using the loop-free U design in the presence of service modules because of
black holing in the event of an uplink failure. More detail is covered in the failure scenarios part of this
section. Service modules can be used with a loop-free inverted U topology when the design can tolerate
server black holing conditions or offsets them with other mechanisms, such as load balancers combined
with server distribution across the access layer.
Failure Scenarios
This section describes the loop-free U design in various failure scenarios.
Failure 1: Access Layer Uplink Failure
Figure 6-16 shows failure scenario 1.
Figure 6-16 Loop-Free U Failure Scenario 1: Uplink Failure
In this failure scenario, HSRP multicast hellos are no longer exchanged between the aggregation
switches, which creates an active-active HSRP state for the vlan 5 and 10 MSFC interfaces on both
aggregation switches.
The servers on access pair 1 are not able to reach the active FWSM context on aggregation 1 because
there is no Layer 2 path for vlan 105 across the aggregation layer inter-switch links. Although the FWSM
can be configured to switch over the active-standby roles by using the interface monitoring features, this
requires the entire module (all contexts) to switch over on a single uplink failure. This is not a desirable
condition and is further complicated if there are multiple uplink failures, or when maintenance requires
taking down an access layer switch or uplink.
Note Because of the lack of single-context failover and granular tracking, and the misaligned active
components that can result, Cisco does not recommend using service modules with the loop-free U topology.
(Figure 6-16 legend: vlans 5 and 10 are the outside VLANs and vlans 105 and 110 the inside VLANs,
with the CSM on vlan 44; access pair 1 and access pair 2 uplink to the aggregation switches, the FT
VLANs are carried across the aggregation inter-switch link, and after the failure both aggregation
switches show HSRP active, with active services on aggregation 1 and standby services on aggregation 2.)
Failure 2: Inter-Switch Link Failure
Figure 6-17 shows failure scenario 2.
Figure 6-17 Loop-Free U Failure Scenario 2: Inter-Switch Link Failure
This failure scenario has many side effects to consider. Because the service module failover VLANs are
configured across the inter-switch link only, service modules in both aggregation switches determine that
the other has failed. This results in service modules in aggregation 1 remaining in the active state, and
service modules in aggregation 2 moving from standby to the active state as well. This is commonly
referred to as a split-brain effect, and is very undesirable because the opportunity for asymmetrical
connection failure exists.
The HSRP heartbeats travel along the access layer path, so HSRP remains in the same state with primary
on aggregation 1 and standby on aggregation 2.
If inbound traffic from the core flows into the aggregation 2 switch during this failure scenario, it reaches
the MSFC and then attempts to flow through the now-active service modules. By default, the core
switches are performing CEF-based load balancing, thereby distributing sessions to both aggregation 1
and 2. Because state is maintained on the service modules, it is possible that asymmetrical connection
failures can occur. For these reasons, Cisco recommends tuning the aggregation-core routing
configuration such that the aggregation 1 switch is the primary route from the core for the primary
service module-related VLANs.
Route tuning prevents asymmetrical connections and black holing in a split-brain scenario because
traffic flows are aligned with the same default gateway and service module combination, preventing
asymmetrical conditions. More information on route tuning can be found in Establishing Path Preference
with RHI, page 7-1.
Failure 3: Switch Power or Sup720 Failure (Non-redundant)
Figure 6-18 shows failure scenario 3.
Figure 6-18 Loop-Free U Failure Scenario 3: Single Sup720 or Power Failure
In this failure scenario, the spanning tree root, primary default gateway, and active service modules
transition to the aggregation 2 switch.
The convergence characteristics of this failure scenario depend on spanning tree, HSRP, and service
module failover times. Because the spanning tree and HSRP failover times are expected to be under that
of service modules, the actual convergence time depends on service module timer configurations. Test
lab results show this convergence time to be ~6 seconds.
Layer 2 Loop-Free Inverted U Topology
The loop-free inverted-U topology design provides a Layer 2 access solution with a single active access
layer uplink to a single aggregation switch, as shown in Figure 6-19.
Figure 6-19 Loop-Free Inverted-U Access Topology
With a loop-free inverted-U topology, there are no blocked paths by spanning tree because a loop does
not exist. The VLANs are configured on the access layer uplink 802.1Q trunks and are extended between
the aggregation layer switches (note the dashed line designating the Layer 2 and Layer 3 boundaries).
The service module fault tolerant VLANs are carried across the aggregation inter-switch 802.1Q trunk
for redundancy operations. This topology allows both uplinks to be active for all VLANs to the
aggregation layer switches and permits VLAN extension across the access layer. The loop-free
inverted-U design does not provide a backup link at the access layer, but resiliency can be improved by
the use of distributed EtherChannel (DEC), as shown in Figure 6-19.
The main disadvantage of the loop-free inverted-U design can be attributed to an aggregation switch
failure or access switch uplink failure that black holes servers because there is no alternate path
available. The following improvements to the design can offset the effects of these failures and improve
overall resiliency:
Aggregation nodes with redundant Sup720s using NSF/SSO
Distributed EtherChannel uplinks
NIC teaming
Server load balancing with real servers (reals) spread across access switches
Spanning Tree, HSRP, and Service Module Design
Because a loop does not exist in the topology, it does not require a spanning tree protocol to be running.
However, Cisco recommends maintaining spanning tree in case an error creates a loop condition. Cisco
also still recommends maintaining spanning tree primary root and secondary root configurations just as
in the triangle and square looped topology designs. This way if a loop error condition does exist, the
service module and default gateway still operate optimally.
As in all other access layer designs that use service modules, Cisco recommends aligning the HSRP
default gateway, STP root, and active service modules on the same aggregation switch. If the primary
default gateway and active service modules are not aligned, it creates session flows that travel across the
inter-switch links unnecessarily.
When HSRP, STP root, and primary service modules are aligned, the session flows are more optimal,
easier to troubleshoot, and deterministic, as shown in Figure 6-20. Note that in a loop-free inverted-U
topology, 50 percent of the session flows use the aggregation layer inter-switch link to reach the active
HSRP default gateway and active service modules.
Figure 6-20 Loop-Free Inverted-U with HSRP and Service Modules Aligned (Recommended)
Failure Scenarios
This section describes the loop-free inverted-U design in various failure scenarios.
Failure 1: Access Layer Uplink Failure
Figure 6-21 shows failure scenario 1.
Figure 6-21 Loop-Free Inverted-U Failure Scenario 1: Uplink Failure
This failure is fairly obvious and straightforward. If servers are single attached, this results in a black
hole condition. If servers use NIC teaming, they should experience a fairly short outage as they transition
to the backup NIC and access switch.
As mentioned earlier, the use of DEC is recommended to reduce the chances of this failure scenario.
Convergence times with a single link failure within a port channel group are under one second. The use
of redundant supervisors in the access layer can also increase the resiliency of this design.
Failure 2: Service Module Failure (Using CSM One-arm and FWSM Transparent Mode)
Figure 6-22 shows failure scenario 2.
Figure 6-22 Failure Scenario 2: Service Module Failure with Loop-Free Inverted U Topology
In this failure scenario, the backup service module moves to the active state on aggregation 2 because it
no longer receives hello packets from the active service module, and times out.
Figure 6-22 shows the following two scenarios:
2.1 (FWSM failure): Sessions cross the inter-switch link to aggregation 2 through the now-active
FWSM module context and return back through the inter-switch link to the active HSRP default
gateway on the aggregation 1 MSFC. Because the CSM is still active in aggregation 1, return traffic
flow is directed to the CSM based on the PBR configuration on the MSFC VLAN interface, and on
to the client via the core.
2.2 (CSM failure): Sessions flow through the active FWSM module context in aggregation 1 and
to the MSFC VLAN interface. The MSFC VLAN interface PBR configuration forces return CSM
traffic to travel across the inter-switch link to aggregation 2 and through the now-active CSM
module. Because the active default gateway of the CSM server VLAN is still active on aggregation
1, the traffic must return back across the aggregation layer inter-switch link to the MSFC on
aggregation 1, and then on to the client via the core.
Failure 3: Inter-Switch Link Failure
Figure 6-23 shows failure scenario 3.
Figure 6-23 Loop-Free Inverted-U Failure Scenario 3: Inter-Switch Link Failure
This failure scenario has many side effects to consider. Because the service module fault tolerant
(failover) VLANs are configured across the inter-switch link only, service modules in both aggregation
switches determine that the other has failed. This results in service modules in aggregation 1 remaining
in the active state and service modules in aggregation 2 moving from standby to the active state as well.
This is commonly referred to as a split-brain effect, and is very undesirable because the opportunity for
asymmetrical connection failure exists.
The path for HSRP heartbeats between MSFCs is also broken so both MSFC VLAN interfaces go into
the HSRP active state without a standby.
If inbound traffic from the core flows into the aggregation 2 switch during this failure scenario, it reaches
the MSFC and then attempts to flow through the now-active service modules. The core switches are
performing CEF-based load balancing, thereby distributing sessions to both aggregation 1 and 2.
Because state is maintained on the service modules, it is possible that asymmetrical connection failures
can occur. For these reasons, Cisco recommends tuning the aggregation-core routing configuration such
that the aggregation 1 switch is the primary route from the core for the primary service module-related
VLANs.
Route tuning prevents asymmetrical connections and black holing in a split-brain scenario because
traffic flows are aligned with the same default gateway and service module combinations, thus
preventing asymmetrical conditions. More information on route tuning is provided in Establishing Path
Preference with RHI, page 7-1.
Failure 4: Switch Power or Sup720 Failure (Non-redundant)
Figure 6-24 shows failure scenario 4.
Figure 6-24 Loop-Free Inverted-U Failure Scenario 4: Single Sup720 or Power Failure
In this failure scenario, the primary HSRP default gateway and active service modules transition to the aggregation 2 switch. Servers that are single-attached to an access layer switch are black-holed; NIC teaming can be used to prevent this failure.
The convergence characteristics of this failure scenario depend on HSRP and service module failover times. Because the HSRP failover time is expected to be under that of the service modules, the actual convergence time depends on service module timer configurations. Test lab results show this convergence time to be ~5 to 6 seconds.
FlexLinks Access Model
FlexLinks are an alternative to the looped access layer topology. FlexLinks provide an active-standby pair of uplinks defined on a common access layer switch. After an interface is configured to be part of an active-standby FlexLink pair, spanning tree is turned off on both links and the secondary link is placed in a standby state, which prevents it from forwarding packets. An interface can participate in only one FlexLink pair at a time, and a pair can consist of mixed interface types with mixed bandwidth. FlexLinks have local significance only, because the opposite end of a FlexLink is not aware of its configuration or operation. FlexLinks also have no support for preemption, so the primary link does not automatically return to the active state after a failure condition is restored.
The main advantage of using FlexLinks is that the design contains no loop and spanning tree is not enabled on the uplinks. Although this reduces complexity and reliance on STP, it introduces the possibility of undetected loop conditions, which is covered in more detail later in this chapter. Other disadvantages are a slightly longer convergence time than Rapid PVST+ and the inability to balance traffic across both uplinks. Failover times measured using FlexLinks were usually under two seconds.
Note When FlexLinks are enabled on the access layer switch, the configuration is locally significant only. The aggregation switch ports to which FlexLinks are connected have no knowledge of this state, and the link state
appears as up and active on both the active and standby links. CDP and UDLD packets still traverse and
operate as normal. Spanning tree is disabled (no BPDUs flow) on the access layer ports configured for
FlexLink operation, but spanning tree logical and virtual ports are still allocated on the aggregation
switch line card. VLANs are in the forwarding state as type P2P on the aggregation switch ports.
Figure 6-25 shows the FlexLinks access topology.
Figure 6-25 FlexLinks Access Topology
The configuration steps for FlexLinks are as follows. FlexLinks are configured only on the primary
interface.
ACCESS1#conf t
ACCESS1(config)#interface tenGigabitEthernet 1/1
ACCESS1(config-if)#switchport backup interface tenGigabitEthernet 1/2
ACCESS1(config-if)#
May 2 09:04:14: %SPANTREE-SP-6-PORTDEL_ALL_VLANS: TenGigabitEthernet1/2 deleted from all
Vlans
May 2 09:04:14: %SPANTREE-SP-6-PORTDEL_ALL_VLANS: TenGigabitEthernet1/1 deleted from all
Vlans
ACCESS1(config-if)#end
To view the current status of interfaces configured as FlexLinks:
ACCESS1#show interfaces switchport backup
Switch Backup Interface Pairs:
Active Interface Backup Interface State
------------------------------------------------------------------------
TenGigabitEthernet1/1 TenGigabitEthernet1/2 Active Up/Backup Standby
Note that both the active and backup interfaces are in the up/up state in the show interface output:
ACCESS1#sh interfaces tenGigabitEthernet 1/1
TenGigabitEthernet1/1 is up, line protocol is up (connected)
Hardware is C6k 10000Mb 802.3, address is 000e.83ea.b0e8 (bia 000e.83ea.b0e8)
Description: to_AGG1
MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 10Gb/s
input flow-control is off, output flow-control is off
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:09, output 00:00:09, output hang never
Last clearing of "show interface" counters 00:00:30
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 32000 bits/sec, 56 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
1150 packets input, 83152 bytes, 0 no buffer
Received 1137 broadcasts (1133 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
26 packets output, 2405 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
ACCESS1#sh interfaces tenGigabitEthernet 1/2
TenGigabitEthernet1/2 is up, line protocol is up (connected)
Hardware is C6k 10000Mb 802.3, address is 000e.83ea.b0e9 (bia 000e.83ea.b0e9)
Description: to_AGG2
MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 10Gb/s
input flow-control is off, output flow-control is off
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:51, output 00:00:03, output hang never
Last clearing of "show interface" counters 00:00:33
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 32000 bits/sec, 55 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
1719 packets input, 123791 bytes, 0 no buffer
Received 1704 broadcasts (1696 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
7 packets output, 1171 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
ACCESS1#
Note that spanning tree is no longer running on the interfaces in the FlexLink pair, so BPDUs are not sent to detect loops:
ACCESS1#sh spanning-tree interface tenGigabitEthernet 1/1
no spanning tree info available for TenGigabitEthernet1/1
ACCESS1#sh spanning-tree interface tenGigabitEthernet 1/2
no spanning tree info available for TenGigabitEthernet1/2
CDP and UDLD packets are still transmitted across FlexLinks, as shown below:
ACCESS1#show cdp neighbor
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone
Device ID Local Intrfce Holdtme Capability Platform Port ID
Aggregation-1.cisco.com
Ten 1/1 156 R S WS-C6509 Ten 7/4
Aggregation-2.cisco.com
Ten 1/2 178 R S WS-C6509 Ten 7/4
ACCESS1#show udld neighbor
Port Device Name Device ID Port ID Neighbor State
---- ----------- --------- ------- --------------
Te1/1 TBM06108988 1 Te7/4 Bidirectional
Te1/2 SCA0332000T 1 Te7/4 Bidirectional
ACCESS1#
Spanning Tree, HSRP, and Service Module Design
FlexLinks automatically disable spanning tree BPDUs on both the active and standby links, as noted in
the preceding section. Cisco still recommends enabling spanning tree on the aggregation switches that
are connected to FlexLink-enabled access switches. It is also desirable to align the spanning tree root,
HSRP default gateway, and active service modules on the same aggregation switch just as recommended
in looped access layer designs. This is shown in Figure 6-25. By aligning the primary access layer switch
uplink directly to the same switch that is the primary default gateway and active service
module/appliance, traffic flows are optimized. Otherwise, traffic flows can hop back and forth between
aggregation switches, creating undesirable conditions and difficulty in troubleshooting.
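As a minimal sketch, this alignment can be expressed on the aggregation 1 switch as follows; the VLAN number, addressing, and HSRP group are assumptions for illustration, following the standby command style used in the Chapter 8 reference configurations:
! Aggregation 1 (sketch): align the STP root and the HSRP primary gateway
! on the same switch (VLAN 10 and addresses assumed for illustration)
spanning-tree vlan 10 priority 24576
!
interface Vlan10
 ip address 10.20.10.2 255.255.255.0
 standby 1 ip 10.20.10.1
 standby 1 timers 1 3
 standby 1 priority 120
 standby 1 preempt delay minimum 60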
Implications Related to Possible Loop Conditions
Because spanning tree is disabled on FlexLinks, there is the possibility that a loop condition can exist in
particular scenarios, such as a patch cable that is mistakenly connected between access layer switches
that are configured for FlexLinks. This is shown in Figure 6-26.
Figure 6-26 Possible Loop Condition
Figure 6-26 demonstrates two possible loop conditions that can be introduced by configuration error or patch cable error. The first example demonstrates a connection between the aggregation switch and an access switch. This can be the result of an incorrect patch/uplink cable or simply the configuration of a separate link that is not part of the FlexLink pair. Because STP BPDUs are not passed along the FlexLink path, a loop in the topology cannot be detected, and an endless replication of broadcast/multicast frames occurs that can have a very negative impact on the whole aggregation module. Note that RootGuard is ineffective in this scenario because Agg1 does not see a path to the root (Agg2) through the access switch with FlexLinks enabled.
The second example demonstrates a patch cable connection error between access switches.
If BPDU Guard is supported and enabled on access layer server ports, the port is automatically disabled
when BPDUs are detected, as shown in the following console message:
ACCESS1#
Apr 13 16:07:33: %SPANTREE-SP-2-BLOCK_BPDUGUARD: Received BPDU on port GigabitEthernet2/2
with BPDU Guard enabled. Disabling port.
Apr 13 16:07:33: %PM-SP-4-ERR_DISABLE: bpduguard error detected on Gi2/2, putting Gi2/2 in
err-disable state
ACCESS1#sh int g2/2
GigabitEthernet2/2 is administratively down, line protocol is down (disabled)
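For reference, the following sketch shows how BPDU Guard is typically enabled on an access layer server port; the interface and description values are assumed for illustration:
! Access layer server port (sketch): PortFast edge port with BPDU Guard
interface GigabitEthernet2/2
 description server_port
 switchport
 switchport mode access
 spanning-tree portfast
 spanning-tree bpduguard enable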
If BPDU Guard is not supported or is not enabled on access layer server ports, a loop condition occurs.
This loop condition endlessly forwards multicast and broadcast packets through the aggregation 1 switch
and back through the access switches via the patch cable link that now extends between them. This could
create negative conditions that affect all servers connected to this aggregation module.
Note Because spanning tree BPDUs are not passed from the access layer switch when using FlexLinks, cabling or configuration errors can create a loop condition with negative implications. Although cabling mistakes such as these might be considered rare, the degree of change control in your data center environment can be the barometer for determining whether FlexLinks are a proper solution.
Failure Scenarios
The level of resiliency that is incorporated into the access layer design can vary based on the model used.
Other features such as route health injection and route tuning can influence this. The four main failure scenarios that can occur in a FlexLinks access design are covered in this section. Understanding the amount of exposure in these scenarios helps to determine the best access layer design selection.
Failure 1: Access Layer Uplink Failure
Figure 6-27 shows failure scenario 1.
Figure 6-27 FlexLinks Failure Scenario 1: Uplink Down
In this failure scenario, the backup FlexLink goes active and immediately begins to pass packets over its
interface. Default gateway and active service modules remain on aggregation 1 unless tracking
mechanisms are configured and triggered. Traffic flow goes through aggregation 2 and uses the
inter-switch link to aggregation 1 to reach the active HSRP default gateway and active service modules.
Convergence in this failure scenario is typically less than 2 seconds.
Failure 2: Service Module Failure (Using CSM One-Arm and FWSM Transparent Mode)
Figure 6-28 shows failure scenario 2.
Figure 6-28 FlexLinks Failure Scenario 2: Service Modules
In this failure scenario, there is no FlexLink convergence and the primary default gateway remains active
on the aggregation 1 switch. The backup service module moves to the active state on aggregation 2
because it no longer receives hello packets from the failed active service module and times out.
Figure 6-28 shows the following two failure instances:
2.1 (FWSM failure): Traffic flow goes through aggregation 1 and across the inter-switch link to
aggregation 2, through the now-active FWSM module context, and back across the inter-switch link
to the active HSRP default gateway on the aggregation 1 MSFC. Because the CSM is still active in
aggregation 1, return traffic flow is directed to the CSM based on the PBR configuration on the
MSFC VLAN interface and on to the client via the core.
2.2 (CSM failure): Traffic flow goes through aggregation 1, through the active FWSM module
context in aggregation 1, and to the MSFC VLAN interface. The MSFC VLAN interface PBR
configuration forces return CSM traffic to travel across the inter-switch link to aggregation 2 and
through the now-active CSM module. Because the active default gateway of the CSM server VLAN
is still active on aggregation 1, the traffic must flow back across the inter-switch link to the MSFC
on aggregation 1 and then on to the client via the core.
The convergence characteristics of these failure scenarios depend on the service module(s) failover time.
The recommended service module failover timer configurations are as follows:
CSM:
module ContentSwitchingModule 3
ft group 1 vlan 102
priority 20
heartbeat-time 1
failover 3
preempt
FWSM:
Unit Poll frequency 500 milliseconds, holdtime 3 seconds
Interface Poll frequency 3 seconds
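The FWSM values above reflect show failover output. As a sketch, the corresponding failover timer commands would be similar to the following; the exact syntax varies by FWSM software release, so verify it against the release in use:
! FWSM failover timers (sketch; syntax assumed, verify per release)
failover polltime unit msec 500 holdtime 3
failover polltime interface 3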
Test lab results show that average service module failover times with these values are under ~5 seconds.
Failure 3: Inter-Switch Link Failure
Figure 6-29 shows failure scenario 3.
Figure 6-29 FlexLinks Failure Scenario 3: Inter-Switch Link Failure
FlexLinks do not converge in this failure scenario.
With the inter-switch link down, HSRP multicast hello messages no longer have a path between
aggregation 1 and 2, so HSRP goes into an active state on both switches for all VLANs.
Service modules in both aggregation switches determine that the other has failed and become active (this
is referred to as a split-brain effect).
If inbound traffic from the core flows into the aggregation 2 switch during this failure scenario, it
attempts to flow through the now-active service modules and stops because the path to the servers is
blocked by a standby FlexLink. For these reasons, Cisco recommends tuning the aggregation-core
routing configuration such that the aggregation 1 switch is the primary route advertised to the core for
the primary service module-related VLANs.
Route tuning helps to prevent asymmetrical connections and black holing in a split-brain scenario
because traffic flows are aligned with the same default gateway and service module combination,
preventing asymmetrical conditions.
Failure 4: Switch Power or Sup720 Failure (Non-redundant)
Figure 6-30 shows failure scenario 4.
Figure 6-30 FlexLinks Failure Scenario 4: Switch Power or Sup720 Failure
In this failure scenario, the active FlexLinks, primary default gateway, and active service modules
transition to the aggregation 2 switch.
The convergence characteristics of this failure scenario depend on FlexLink failure detection, HSRP
failover, and service module failover times. Because the FlexLink and HSRP failover times are expected
to be under that of service modules, the actual convergence time depends on service module timer
configurations.
Using EtherChannel Min-Links
EtherChannel Min-Links is a new feature as of Cisco IOS Release 12.2(18)SXF. EtherChannel
Min-Links permit you to designate the minimum number of member ports that must be in the link-up
state and bundled in an LACP EtherChannel for a port channel interface to be in a link-up state. In the
data center access layer, this can be useful in making sure that a higher bandwidth uplink path is chosen
as the active path. For example, consider the diagram in Figure 6-31.
Figure 6-31 Using EtherChannel Min-Links
In the above example, 4G EtherChannels connect the access layer switch to both aggregation layer
switches. A failure has occurred that has taken down two of the port members on the EtherChannel to
the aggregation 1 switch. Because two members of the port channel are still up, the port channel itself
remains up and server traffic uses this path as normal, although it is a path with less available bandwidth.
With EtherChannel Min-Links, you can designate a minimum number of ports that must be active; otherwise, the port channel is taken down. In this example, if EtherChannel Min-Links is set to 3, the
port channel is taken down and server traffic is forced to use the higher 4G bandwidth path towards the
aggregation 2 switch.
The EtherChannel Min-Links feature requires the LACP EtherChannel protocol to be used. The access
layer topology can consist of looped, loop-free, or FlexLink models. The Min-Links feature works at the
physical interface level and is independent of spanning tree path selection.
Consider the following when deciding whether Min-Links should be used:
Active/standby service modules are used: If active services are primarily on the aggregation 1 switch, a failure that forces Min-Links to use the path to aggregation 2 will likely cause all traffic to also traverse the inter-switch link between the aggregation switches.
Looped topologies with spanning tree: If a looped access topology is used, a similar capability can be provided by using the spanning-tree pathcost method long global option (see the sketch after this list). This permits spanning tree to use larger cost values when comparing the cost of different paths to root, which in turn can differentiate the cost value of various paths when a port member fails.
Dual failures: With Min-Links, if both EtherChannels fall below the minimum required number of port members, both uplinks are forced down, which black-holes all connected servers.
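As a minimal sketch of the spanning tree alternative mentioned above, the long path-cost method is enabled with a single global command, which also appears in the reference configurations in Chapter 8:
! Use 32-bit STP path costs so that losing an EtherChannel member changes
! the aggregate port cost enough to influence path selection
spanning-tree pathcost method long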
The configuration steps for EtherChannel Min-Links are as follows:
ACCESS2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
ACCESS2(config)#interface port-channel 2
ACCESS2(config-if)#port-channel ?
min-links Minimun number of bundled ports needed to bring up this port
channel
ACCESS2(config-if)#port-channel min-links ?
<2-8> The minimum number of bundled ports needed before this port channel
can come up.
ACCESS2(config-if)#port-channel min-links 3
ACCESS2(config-if)#end
Chapter 7 Increasing HA in the Data Center
This chapter provides details of Cisco-tested high availability solutions in the enterprise data center. It
includes the following topics:
Establishing Path Preference with RHI
Service Module FT Paths
NSF-SSO in the Data Center
Establishing Path Preference with RHI
When active/standby service module pairs are used, it becomes important to align traffic flows such that the active/primary service modules are the preferred path to a particular server application. This is desirable because it creates a design that is more deterministic and easier to troubleshoot, but it also becomes particularly important in failure scenarios such as the inter-switch trunk failure described previously.
Figure 7-1 shows an aggregation layer with route preference established toward the primary service
modules in an active/standby pair.
Figure 7-1 Route Preference toward Primary Service Modules in an Active/Standby Pair
By using Route Health Injection (RHI) combined with specific route map attributes, a path preference
is established with the core so that all sessions to a particular VIP go to agg1 where the primary service
modules are located.
The RHI configuration is accomplished by defining a probe type and related values in the Cisco Content
Switching Module (CSM) portion of the Cisco IOS configuration. The following probe types are
supported:
dns slb dns probe
ftp slb ftp probe
http slb http probe
icmp slb icmp probe
kal-ap-tcp KAL-AP TCP probe
kal-ap-udp KAL-AP UDP probe
name probe with this name
real SLB probe real suspects information
script slb script probe
smtp slb smtp probe
tcp slb tcp probe
telnet slb telnet probe
udp slb udp probe
In the following configuration, a simple ICMP probe is defined, after which it is attached to the server
farm configuration. This initiates the probe packets and the monitoring of each real server defined in the
server farm.
The last step in configuring RHI is indicating that you want the VIP address to be advertised based on the probe status being operational. If the probe determines that the servers are active and healthy, the CSM inserts a /32 static route for the VIP address on the MSFC.
Aggregation 1 CSM Configuration
module ContentSwitchingModule 3
ft group 1 vlan 102
priority 20
heartbeat-time 1
failover 3
preempt
!
vlan 44 server
ip address 10.20.44.42 255.255.255.0
gateway 10.20.44.1
alias 10.20.44.44 255.255.255.0
!
probe RHI icmp
interval 3
failed 10
!
serverfarm SERVER200
nat server
no nat client
real 10.20.6.56
inservice
probe RHI
!
vserver SERVER200
virtual 10.20.6.200 any
vlan 44
serverfarm SERVER200
advertise active
sticky 10
replicate csrp sticky
replicate csrp connection
persistent rebalance
inservice
With the static host route installed by the CSM into the MSFC based on the health of the server farm,
you can now advertise the host route to the core with a route path preference to the active VIP in
aggregation 1. This is accomplished with the redistribute command in the router process that points to
a specific route map. The route map points to an access list that identifies the VIP addresses to match
against, and also permits metric attributes to be set to establish path preference. In the following OSPF
configuration, set metric-type type-1 is used to set the host route advertisement to an OSPF external
type-1. Alternative methods can include setting the actual OSPF metric value. By setting the host route
to an OSPF external type-1, the route appears in the core with the actual accumulated cost of the path
used. This approach could prove to be more desirable than attempting to set specific values because it
better reflects actual path costs in the case of link failures.
Note The VIP server subnet itself should not be included in the router network statements. If 10.20.6.0 were advertised, it would become the next most exact route when the VIP host route, 10.20.6.200, is withdrawn in an actual failure situation. This would defeat the purpose of advertising the VIP only while it is healthy, because sessions would continue to be directed to the agg1 switch.
Aggregation 1 OSPF and Route Map Configurations
router ospf 10
log-adjacency-changes
auto-cost reference-bandwidth 10000
nsf
area 10 authentication message-digest
area 10 nssa
timers throttle spf 1000 1000 1000
redistribute static subnets route-map rhi
passive-interface default
no passive-interface Vlan3
no passive-interface TenGigabitEthernet7/2
no passive-interface TenGigabitEthernet7/3
network 10.10.20.0 0.0.0.255 area 10
network 10.10.40.0 0.0.0.255 area 10
network 10.10.110.0 0.0.0.255 area 10
(note: server subnet 10.20.6.0 is not advertised)
access-list 44 permit 10.20.6.200 log
route-map rhi permit 10
match ip address 44
set metric-type type-1
set metric +(value) (on Agg2)
Aggregation Inter-switch Link Configuration
The design in Figure 7-1 uses VLAN 3 between the aggregation switches to establish a Layer 3 OSPF
peering between them. This VLAN traverses a 10GE-802.1Q trunk. With VLANs that cross a 10GE
trunk, OSPF sets the bandwidth value equal to a GE interface, not a 10GE interface as one might expect.
If the bandwidth on the VLAN configuration is not adjusted to reflect an equal value to the
aggregation-core 10GE links, the route from agg2 to the active VIP appears better via the core instead
of via VLAN 3 on the inter-switch trunk. At first, this might not appear to be a real problem because the core is going to use the preferred paths directly to agg1 anyway. However, certain link failures can create a scenario where sessions come through the agg2 switch, which would then need to be routed to agg1, so it makes sense to keep the optimal path via the inter-switch link rather than hopping unnecessarily back through the core. The following configuration reflects the bandwidth change:
interface Vlan3
description AGG1_to_AGG2_L3-RP
bandwidth 10000000
ip address 10.10.110.1 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 1
ip ospf dead-interval 3
logging event link-status
Aggregation 2 Route Map Configuration
The aggregation 2 switch requires the same configurations as those outlined for aggregation 1. The only difference is the route map configuration for RHI. Because you want path preference to be toward the active VIP in the aggregation 1 switch, you need to adjust the metric for the advertised RHI route to be less favorable on agg2. This is necessary to ensure that, in an active-active service module scenario, connections remain symmetric; asymmetrical flows would break connections through the CSM and Cisco Firewall Services Module (FWSM). The following route map adds cost to the OSPF route advertised by using the set metric + command. Note that type-1 is also set for the same reasons mentioned previously.
route-map rhi permit 10
match ip address 44
set metric +30
set metric-type type-1
The following command can be used to view the status of an RHI probe configuration:
Aggregation-1#sh module contentswitchingModule 3 probe detail
probe type port interval retries failed open receive
---------------------------------------------------------------------
RHI icmp 3 3 10 10
real vserver serverfarm policy status
------------------------------------------------------------------------------
10.20.6.56:0 SERVER200 SERVER200 (default) OPERABLE
10.20.6.25:0 SERVER201 SERVER201 (default) OPERABLE
Aggregation-1#
Service Module FT Paths
Service module redundant pairs monitor each other to ensure availability as well as to maintain state for
all sessions that are currently active. The availability is provided by a hello protocol that is exchanged
between the modules across a VLAN. Session state is provided by the active service module replicating
the packet headers to the standby across the same VLAN as the hellos, as in the case of the CSM, or on
a separate VLAN as in the case of the FWSM.
If the availability path between the redundant pairs becomes heavily congested or misconfigured, each service module is likely to believe that its peer has failed. This can create a split-brain scenario where both service modules move into an active state, which creates undesirable conditions, including asymmetric connection attempts.
The CSM exchanges hellos and session state on a common VLAN. The FWSM uses separate VLANs for hello exchange and state replication. The standby FWSM assumes the active role when it no longer sees its redundant peer on at least two VLAN interfaces; these VLANs can be any combination of the context, failover, or state VLANs.
If a second inter-switch link is added with only the service module FT VLANs provisioned across it, as shown in Figure 7-2, the chance of a split-brain scenario caused by these conditions is reduced.
Figure 7-2 Congestion on FT Path
The bandwidth required for the FT link must also be considered. In the worst case, the maximum required bandwidth can equal the CSM bus interface (4G) or the FWSM bus interface (6G). The actual requirement is based on the number and type of sessions being replicated.
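As a sketch, such a dedicated FT link can be provisioned as a trunk carrying only the service module FT VLANs. The port channel number below is an assumption; VLANs 100 through 102 correspond to the FWSM failover, FWSM state, and CSM FT VLANs defined in the Chapter 8 reference configurations:
interface Port-channel5
 description FT_VLANS_ONLY_to_AGG2
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100-102
 switchport mode trunk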
Note More detail on access layer design is covered in Chapter 6, Data Center Access Layer Design.
NSF-SSO in the Data Center
Note The testing performed in support of this section included the use of the CSM one-arm mode and FWSM transparent mode design, which influences NSF/SSO failover behavior.
The data center solutions that are covered in this and other data center guides are designed for a high
level of resiliency. For example, the core and aggregation layer switches are always in groups of two,
and are interconnected such that no individual module or full system failure can bring the network down.
The service modules and other software and hardware components are configured in redundant pairs that
are located in each of the aggregation switches to further remove any single point of failure. The access
layer also has many options that permit dual homing to the aggregation layer and leverage spanning tree,
FlexLinks, and NIC teaming to achieve high availability.
The main objective in building a highly available data center network design is to avoid TCP session breakage while providing convergence that is unnoticeable, or as fast as possible. Each of the TCP/IP stacks built into the various operating systems has a different level of tolerance for determining when TCP breaks a session. The least tolerant are the Windows Server and Windows XP client stacks, which have been determined to have a ~9 second tolerance. Other TCP/IP stacks, such as those found in Linux, HP, and IBM operating systems, are more tolerant and have a longer window before tearing down a TCP session. This does not necessarily mean that the data center network should be designed to converge in less than 9 seconds, but it can serve as a worst-case guideline. The optimal convergence time is, of course, zero. Although each network has its own particular convergence time requirements, many network designers break acceptable convergence times into the following categories:
Minor failures: Failures that would be expected to happen because of more common events, such as configuration errors or link outages caused by cable pulls or GBIC/XENPAK failure, might be considered minor failures. A minor failure convergence time is usually expected to be in the sub-second to 1 second range.
Major failures: Any failure that can affect a large number of users or applications is considered a major failure. This could be because of a power loss or a supervisor or module failure. This type of failure usually has a longer convergence time, which is usually expected to be under 3 to 5 seconds.
The following describes the various recovery times of the components in the Cisco data center design:
HSRP: With recommended timers of Hello=1/Holddown=3, convergence occurs in under 3 seconds. This can be adjusted down to sub-second values, but CPU load must be considered.
Routing protocols: OSPF and EIGRP can both achieve sub-second convergence time with recommended timer configurations.
Rapid PVST+ spanning tree: 802.1w permits sub-second convergence time for minor failures when logical ports are under watermarks, and usually 1 to 2 seconds for major failure conditions.
CSM: Convergence time is ~5 seconds with recommended timers.
FWSM: Convergence time is ~3 seconds with recommended timers.
Figure 7-3 shows the various failover times of the components of the Cisco data center design.
Figure 7-3 Failover Times
The worst case convergence time for an individual component failure is the CSM at ~5 seconds. In the event of a Supervisor 720 failure on the aggregation 1 switch, all of these components must converge to the aggregation 2 switch, resulting in a minimum convergence time of ~5 seconds. The actual convergence time will most likely be longer because of the tables that must be rebuilt (ARP, CAM, and so on), so the maximum convergence time can approach the 9 second limit of the Windows TCP/IP stack. This convergence time and the possible lost sessions can be avoided by using dual Sup720s with NSF-SSO on the primary aggregation switch of the data center.
The Supervisor 720 supports a feature called Non-Stop Forwarding with Stateful Switchover (NSF-SSO) that can dramatically improve the convergence time in a Sup720 failure condition. NSF with SSO is a supervisor redundancy mechanism on the Supervisor Engine 720, available as of Cisco IOS Release 12.2(18)SXD, that provides intra-chassis stateful switchover. This technology demonstrates extremely fast supervisor switchover, with lab tests resulting in approximately 1.6 to 2 seconds of packet loss.
The recommended data center design that uses service modules has a minimum convergence time of ~6 to 7 seconds, primarily because of the service modules. With NSF/SSO, the service modules do not converge. This alone represents a large reduction in convergence time, making dual supervisors with NSF-SSO a tool for achieving increased high availability in the data center network.
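As a sketch, SSO supervisor redundancy and NSF for OSPF are enabled with the following commands, which also appear in the Aggregation Switch 1 reference configuration in Chapter 8:
redundancy
 mode sso
 main-cpu
  auto-sync running-config
!
router ospf 10
 nsf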
Possible Implications
HSRP
With the current 12.2(18) release train, HSRP state is not maintained by NSF-SSO. The HSRP static MAC address is statefully maintained between supervisors, but the state of HSRP between aggregation nodes is not. This means that if the primary Sup720 fails, the SSO-enabled secondary Sup720 takes over and continues to forward traffic sent to the HSRP MAC address, but the HSRP hellos that are normally communicated to the standby HSRP instance on agg2 are not sent. As a result, during a switchover on aggregation 1, the aggregation 2 switch HSRP instances take over as primary during the SSO control plane recovery.
Because the HSRP MAC address was statefully maintained on the agg1 standby Sup720 module, sessions continue to flow through agg1, regardless of the active state that appears on agg2. After the control plane comes up on the agg1 switch, the HSRP hello messages begin to flow, and preemption moves the active state back to the agg1 switch. The control plane recovery time is ~2 minutes.
With looped access layer topologies (triangle and square) that align HSRP, STP primary root, and active
service modules on the agg1 switch, this does not create an issue because the access layer to aggregation
layer traffic flow continues to be directed to the agg1 switch. If a loop-free access layer design is used,
the active HSRP default gateway instance on agg2 responds to ARP requests.
Note A square looped access also has active-active uplinks, but the FWSM transparent mode active context
on agg1 prevents packets from reaching the active HSRP default gateway on agg2. The VLAN on the
south side of the context follows the spanning tree path from agg2, across the inter-switch link to the
active FWSM on agg1, then to the HSRP default gateway instance on agg1.
IGP Timers
It is possible that IGP timers can be tuned low enough such that NSF/SSO is defeated because the failure
is detected by adjacent nodes before it is determined to be an SSO stateful switchover.
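For example, the core-facing links in this guide's reference configurations use 2-second hellos and a 6-second dead interval, which leaves room for SSO to complete. Tuning hellos toward sub-second values could cause a neighbor to tear down the adjacency before the stateful switchover finishes, defeating NSF:
! Conservative OSPF timers from the Chapter 8 reference configurations;
! significantly more aggressive (sub-second) timers could defeat NSF/SSO
interface TenGigabitEthernet7/3
 ip ospf hello-interval 2
 ip ospf dead-interval 6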
Slot Usage versus Improved HA
Placing two Sup720s in a data center aggregation node also has drawbacks in terms of available slot density. Particularly with 10GE port density challenges, consuming a slot that could be used for other purposes might not be a worthwhile trade-off. The decision to use dual Sup720s should be based on actual customer requirements. The recommendation is to use the dual supervisor NSF-SSO
solution in the primary aggregation node with service modules when slot density is not an issue or when
HA is critical at this level. If a Network Analysis Module is used, it can be placed in the agg2 switch
while the dual Sup720s are in the agg1 switch, balancing the slot usage across the two aggregation nodes.
Recommendations
NSF-SSO can provide a very robust solution to data centers that require a very high level of resiliency
and are willing to use available slots to achieve it. The processes and functions that are involved for
NSF/SSO to work correctly are very complex and have many inter-related dependencies. These
dependencies involve the access layer design and the service module modes used, for example. This guide describes a solution in which NSF/SSO provides a lower convergence time than the base recommended design, which is bounded by service module failover times. Cisco recommends testing
NSF/SSO to ensure that it works as expected in a specific customer design.
Chapter 8 Configuration Reference
This chapter provides the test bed diagram and configurations used in tests to support this guide. The chapter is broken down into two main sections: Integrated Services Design Configurations and Services Switch Design Configurations.
Integrated Services Design Configurations
The following configurations were used in testing the integrated services design:
Core Switch 1
Aggregation Switch 1
Core Switch 2
Aggregation Switch 2
Access Switch 4948-7
Access Switch 4948-8
Access Switch 6500-1
FWSM 1-Aggregation Switch 1 and 2
Figure 8-1 shows the test bed used without services switches.
Figure 8-1 Integrated Services Configuration Test Bed
Core Switch 1
version 12.2
no service pad
service timestamps debug datetime msec localtime
service timestamps log datetime msec localtime
no service password-encryption
service counters max age 10
!
hostname CORE1
!
boot system sup-bootflash:s720_18SXD3.bin
logging snmp-authfail
enable secret 5 $1$3OjN$l/80W4JIQJf7l7fRlS7A2.
!
no aaa new-model
clock timezone PST -8
clock summer-time PDT recurring
vtp domain datacenter
vtp mode transparent
udld enable
ip subnet-zero
no ip source-route
!
!
no ip ftp passive
no ip domain-lookup
ip domain-name cisco.com
!
no ip bootp server
ip multicast-routing
mls ip cef load-sharing full simple
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
spanning-tree pathcost method long
!
vlan internal allocation policy descending
vlan dot1q tag native
vlan access-log ratelimit 2000
!
vlan 2
!
vlan 15
name testgear
!
vlan 16
name testgear2
!
vlan 20
name DNS-CA
!
vlan 802
name mgmt_vlan
!
!
interface Loopback0
ip address 10.10.3.3 255.255.255.0
!
interface Port-channel1
description to 4948-1 testgear
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
!
interface Port-channel2
description to 4948-4 testgear
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
!
interface GigabitEthernet3/33
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet3/34
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet3/41
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 2 mode active
!
interface GigabitEthernet3/42
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 2 mode active
!
interface TenGigabitEthernet4/1
description to Agg1
ip address 10.10.20.2 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface TenGigabitEthernet4/2
description to Agg2
ip address 10.10.30.2 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface TenGigabitEthernet4/3
description to core2
ip address 10.10.55.1 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface GigabitEthernet6/1
no ip address
shutdown
!
interface GigabitEthernet6/2
********************
!
interface Vlan1
no ip address
shutdown
!
interface Vlan15
description test_client_subnet
ip address 10.20.15.1 255.255.255.0
no ip redirects
no ip proxy-arp
!
interface Vlan16
description test_client_ subnet2
ip address 10.20.16.2 255.255.255.0
no ip redirects
no ip proxy-arp
!
router ospf 10
log-adjacency-changes
auto-cost reference-bandwidth 1000000
nsf
area 10 authentication message-digest
area 10 nssa default-information-originate
timers throttle spf 1000 1000 1000
passive-interface default
no passive-interface TenGigabitEthernet4/1
no passive-interface TenGigabitEthernet4/2
no passive-interface TenGigabitEthernet4/3
network 10.10.3.0 0.0.0.255 area 10
network 10.10.20.0 0.0.0.255 area 10
network 10.10.30.0 0.0.0.255 area 10
network 10.10.55.0 0.0.0.255 area 10
network 10.20.15.0 0.0.0.255 area 0
network 10.20.16.0 0.0.0.255 area 0
!
ip classless
no ip http server
ip pim send-rp-discovery scope 2
!
!
control-plane
!
!
line con 0
exec-timeout 0 0
line vty 0 4
exec-timeout 60 0
password 7 05080F1C2243
login local
transport input telnet ssh
!
ntp authentication-key 1 md5 02050D480809 7
ntp trusted-key 1
ntp clock-period 17180053
ntp master 1
ntp update-calendar
end
Aggregation Switch 1
Current configuration : 22460 bytes
!
! No configuration change since last restart
!
upgrade fpd auto
version 12.2
service timestamps debug datetime msec localtime
service timestamps log datetime msec localtime
no service password-encryption
service counters max age 10
!
hostname Aggregation-1
!
boot system disk0:s720_18SXD3.bin
logging snmp-authfail
no aaa new-model
clock timezone PST -8
clock summer-time PDT recurring
clock calendar-valid
firewall multiple-vlan-interfaces
firewall module 4 vlan-group 1
firewall vlan-group 1 5-6,20,100,101,105-106
analysis module 9 management-port access-vlan 20
analysis module 9 data-port 1 capture allowed-vlan 5,6,105,106
analysis module 9 data-port 2 capture allowed-vlan 106
ip subnet-zero
no ip source-route
ip icmp rate-limit unreachable 2000
!
!
!
ip multicast-routing
udld enable
udld message time 7
vtp domain datacenter
vtp mode transparent
mls ip cef load-sharing full
mls ip multicast flow-stat-timer 9
no mls flow ip
no mls flow ipv6
mls acl tcam default-result permit
no mls acl tcam share-global
mls cef error action freeze
!
redundancy
mode sso
main-cpu
auto-sync running-config
auto-sync standard
!
spanning-tree mode rapid-pvst
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
spanning-tree pathcost method long
spanning-tree vlan 1-4094 priority 24576
module ContentSwitchingModule 3
ft group 1 vlan 102
priority 20
heartbeat-time 1
failover 3
preempt
!
vlan 44 server
ip address 10.20.44.42 255.255.255.0
gateway 10.20.44.1
alias 10.20.44.44 255.255.255.0
!
probe RHI icmp
interval 3
failed 10
!
serverfarm SERVER200
nat server
no nat client
real 10.20.6.56
inservice
probe RHI
!
serverfarm SERVER201
nat server
no nat client
real 10.20.6.25
inservice
probe RHI
!
vserver SERVER200
virtual 10.20.6.200 any
vlan 44
serverfarm SERVER200
advertise active
sticky 10
replicate csrp sticky
replicate csrp connection
persistent rebalance
inservice
!
vserver SERVER201
virtual 10.20.6.201 any
vlan 44
serverfarm SERVER201
advertise active
sticky 10
replicate csrp sticky
replicate csrp connection
persistent rebalance
inservice
!
port-channel load-balance src-dst-port
!
vlan internal allocation policy descending
vlan dot1q tag native
vlan access-log ratelimit 2000
!
vlan 3
name AGG1_to_AGG2_L3-OSPF
!
vlan 5
!
vlan 6
name WebappInside
!
vlan 7
!
vlan 10
name DatabaseInside
!
vlan 20
!
vlan 44
name CSM_Onearm_Server_VLAN
!
vlan 45
name Service_switch_CSM_Onearm
!
vlan 46
name SERV-CSM2-onearm
!
vlan 100
name AGG_FWSM_failover_interface
!
vlan 101
name AGG_FWSM_failover_state
!
vlan 102
name AGG_CSM_FT_Vlan
!
vlan 106
name WebappOutside
!
vlan 110
name DatabaseOutside
!
interface Loopback0
ip address 10.10.1.1 255.255.255.0
!
interface Null0
no ip unreachables
!
interface Port-channel1
description ETHERCHANNEL_TO_AGG2
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 1-19,21-4094
switchport mode trunk
no ip address
logging event link-status
arp timeout 200
spanning-tree guard loop
!
interface Port-channel10
description to SERVICE_SWITCH1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
no ip address
logging event link-status
spanning-tree guard loop
!
interface Port-channel12
description to SERVICE_SWITCH2
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
no ip address
logging event link-status
spanning-tree guard loop
!
!
interface GigabitEthernet1/13
description to Service_1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
no ip address
channel-protocol lacp
channel-group 10 mode active
!
interface GigabitEthernet1/14
description to Service_1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
no ip address
channel-protocol lacp
channel-group 10 mode active
!
interface GigabitEthernet1/19
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 1-5,7-105,107-300,1010-1110
switchport mode trunk
no ip address
channel-protocol lacp
channel-group 12 mode active
!
!
interface GigabitEthernet5/1
***************
!
interface GigabitEthernet5/2
****************
!
interface GigabitEthernet6/1
no ip address
shutdown
!
interface GigabitEthernet6/2
no ip address
shutdown
media-type rj45
!
interface TenGigabitEthernet7/2
description to Core2
ip address 10.10.40.1 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 7 112A481634424A
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface TenGigabitEthernet7/3
description to Core1
ip address 10.10.20.1 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 7 15315A1F277A6A
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface TenGigabitEthernet7/4
description TO_ACCESS1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 105
switchport mode trunk
no ip address
logging event link-status
!
interface TenGigabitEthernet8/1
description TO_AGG2
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 1-19,21-4094
switchport mode trunk
no ip address
logging event link-status
channel-protocol lacp
channel-group 1 mode active
!
interface TenGigabitEthernet8/2
description TO_4948-7
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 106
switchport mode trunk
no ip address
logging event link-status
spanning-tree guard root
!
interface TenGigabitEthernet8/3
description TO_4948-8
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 106
switchport mode trunk
no ip address
logging event link-status
spanning-tree guard root
!
interface TenGigabitEthernet8/4
description TO_AGG2
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 1-19,21-4094
switchport mode trunk
no ip address
logging event link-status
channel-protocol lacp
channel-group 1 mode active
!
interface Vlan1
no ip address
shutdown
!
interface Vlan3
description AGG1_to_AGG2_L3-RP
bandwidth 10000000
ip address 10.10.110.1 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface Vlan6
description Outside_Webapp_Tier
ip address 10.20.6.2 255.255.255.0
no ip redirects
no ip proxy-arp
ip policy route-map csmpbr
ntp disable
standby 1 ip 10.20.6.1
standby 1 timers 1 3
standby 1 priority 120
standby 1 preempt delay minimum 60
!
!
interface Vlan44
description AGG_CSM_Onearm
ip address 10.20.44.2 255.255.255.0
no ip redirects
no ip proxy-arp
standby 1 ip 10.20.44.1
standby 1 timers 1 3
standby 1 priority 120
standby 1 preempt delay minimum 60
!
router ospf 10
log-adjacency-changes
auto-cost reference-bandwidth 1000000
nsf
area 10 authentication message-digest
area 10 nssa
timers throttle spf 1000 1000 1000
redistribute static subnets route-map rhi
passive-interface default
no passive-interface Vlan3
no passive-interface TenGigabitEthernet7/2
no passive-interface TenGigabitEthernet7/3
network 10.10.1.0 0.0.0.255 area 10
network 10.10.20.0 0.0.0.255 area 10
network 10.10.40.0 0.0.0.255 area 10
network 10.10.110.0 0.0.0.255 area 10
distribute-list 1 in TenGigabitEthernet7/2 (for PBR testing purposes)
distribute-list 1 in TenGigabitEthernet7/3 (for PBR testing purposes)
!
ip classless
ip pim accept-rp auto-rp
!
access-list 1 deny 10.20.16.0
access-list 1 deny 10.20.15.0
access-list 1 permit any
access-list 44 permit 10.20.6.200 log
access-list 44 permit 10.20.6.201 log
!
route-map csmpbr permit 10
set ip default next-hop 10.20.44.44
!
route-map rhi permit 10
match ip address 44
set metric-type type-1
!
privilege exec level 1 show
!
line con 0
exec-timeout 0 0
password 7 110D1A16021F060510
login local
line vty 0 4
no motd-banner
exec-timeout 0 0
password 7 110D1A16021F060510
login local
transport input telnet ssh
!
!
no monitor session servicemodule
ntp authentication-key 1 md5 104D000A0618 7
ntp authenticate
ntp trusted-key 1
ntp clock-period 17179928
ntp update-calendar
ntp server *********.42 key 1
end
Core Switch 2
Current configuration : 10867 bytes
!
version 12.2
no service pad
service timestamps debug datetime msec localtime
service timestamps log datetime msec localtime
no service password-encryption
service counters max age 10
!
hostname CORE2
!
boot system sup-bootflash:s720_18SXD3.bin
enable secret 5 $1$k2Df$vfhT/CMz0IqFqluRCENw//
!
no aaa new-model
clock timezone PST -8
clock summer-time PDT recurring
vtp domain datacenter
vtp mode transparent
udld enable
!
ip subnet-zero
no ip source-route
!
!
no ip domain-lookup
ip domain-name cisco.com
!
no ip bootp server
ip multicast-routing
mls ip multicast flow-stat-timer 9
no mls flow ip
no mls flow ipv6
mls cef error action freeze
!
power redundancy-mode combined
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
spanning-tree pathcost method long
!
vlan internal allocation policy descending
vlan dot1q tag native
vlan access-log ratelimit 2000
!
vlan 2,15-16
!
!
interface Loopback0
ip address 10.10.4.4 255.255.255.0
!
interface Port-channel1
description to 4948-1
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
!
interface Port-channel2
description to 4948-4
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
!
interface GigabitEthernet2/9
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet2/10
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet2/13
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 2 mode active
!
interface GigabitEthernet2/14
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 2 mode active
!
interface TenGigabitEthernet4/1
description to Agg1
ip address 10.10.40.2 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface TenGigabitEthernet4/2
description to Agg2
ip address 10.10.50.2 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface TenGigabitEthernet4/3
description to core1
ip address 10.10.55.2 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface GigabitEthernet6/1
no ip address
shutdown
!
interface GigabitEthernet6/2
*****************
!
interface Vlan1
no ip address
shutdown
!
interface Vlan15
ip address 10.20.15.2 255.255.255.0
!
interface Vlan16
description test_client_subnet
ip address 10.20.16.1 255.255.255.0
no ip redirects
no ip proxy-arp
!
router ospf 10
log-adjacency-changes
auto-cost reference-bandwidth 1000000
nsf
area 10 authentication message-digest
area 10 nssa default-information-originate
timers throttle spf 1000 1000 1000
passive-interface default
no passive-interface TenGigabitEthernet4/1
no passive-interface TenGigabitEthernet4/2
no passive-interface TenGigabitEthernet4/3
no passive-interface TenGigabitEthernet4/4
network 10.10.4.0 0.0.0.255 area 10
network 10.10.40.0 0.0.0.255 area 10
network 10.10.50.0 0.0.0.255 area 10
network 10.10.55.0 0.0.0.255 area 10
network 10.20.15.0 0.0.0.255 area 0
network 10.20.16.0 0.0.0.255 area 0
!
ip classless
no ip http server
ip pim send-rp-discovery scope 2
!
!
line con 0
exec-timeout 0 0
line vty 0 4
exec-timeout 60 0
password cisco
login local
transport input telnet ssh
!
ntp authentication-key 1 md5 104D000A0618 7
ntp authenticate
ntp trusted-key 1
ntp clock-period 17179940
ntp update-calendar
ntp server ********* key 1
end
Aggregation Switch 2
Current configuration : 18200 bytes
version 12.2
service timestamps debug datetime msec localtime
service timestamps log datetime msec
no service password-encryption
service counters max age 10
!
hostname Aggregation-2
!
boot system disk0:s720_18SXD3.bin
no aaa new-model
clock timezone PST -8
clock summer-time PDT recurring
clock calendar-valid
firewall multiple-vlan-interfaces
firewall module 4 vlan-group 1
firewall vlan-group 1 5,6,20,100,101,105,106
vtp domain datacenter
vtp mode transparent
udld enable
!
udld message time 7
!
ip subnet-zero
no ip source-route
ip icmp rate-limit unreachable 2000
!
!
ip multicast-routing
no ip igmp snooping
mls ip cef load-sharing full
mls ip multicast flow-stat-timer 9
no mls flow ip
no mls flow ipv6
mls acl tcam default-result permit
mls cef error action freeze
!
!
spanning-tree mode rapid-pvst
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
spanning-tree pathcost method long
spanning-tree vlan 1-4094 priority 28672
port-channel load-balance src-dst-port
module ContentSwitchingModule 3
ft group 1 vlan 102
priority 10
heartbeat-time 1
failover 3
preempt
!
vlan 44 server
ip address 10.20.44.43 255.255.255.0
gateway 10.20.44.1
alias 10.20.44.44 255.255.255.0
!
probe RHI icmp
interval 3
failed 10
!
serverfarm SERVER200
nat server
no nat client
real 10.20.6.56
inservice
probe RHI
!
serverfarm SERVER201
nat server
no nat client
real 10.20.6.25
inservice
probe RHI
!
vserver SERVER200
virtual 10.20.6.200 any
vlan 44
serverfarm SERVER200
advertise active
sticky 10
replicate csrp sticky
replicate csrp connection
persistent rebalance
inservice
!
vserver SERVER201
virtual 10.20.6.201 any
vlan 44
serverfarm SERVER201
advertise active
sticky 10
replicate csrp sticky
replicate csrp connection
persistent rebalance
inservice
!
!
vlan internal allocation policy descending
vlan dot1q tag native
vlan access-log ratelimit 2000
!
vlan 3
name AGG1_to_AGG2_L3-RP
!
vlan 5
name Outside_Webapp
!
vlan 6
name Outside_Webapp
!
!
vlan 10
name Outside_Database_Tier
!
vlan 20
!
vlan 44
name AGG_CSM_Onearm
!
vlan 45
name Service_switch_CSM_Onearm
!
vlan 46
name SERV-CSM2-onearm
!
vlan 100
name AGG_FWSM_failover_interface
!
vlan 101
name AGG_FWSM_failover_state
!
vlan 102
name AGG_CSM_FT_Vlan
!
vlan 105
name Inside_Webapp_Tier
!
vlan 106
name Inside_Webapp
!
vlan 110
name Inside_Database_Tier
!
!
interface Loopback0
ip address 10.10.2.2 255.255.255.0
!
interface Null0
no ip unreachables
!
interface Port-channel1
description ETHERCHANNEL_TO_AGG1
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 1-19,21-299,301-4094
switchport mode trunk
arp timeout 200
spanning-tree guard loop
!
interface Port-channel11
description to SERVICE_SWITCH1
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
!
interface Port-channel13
description to SERVICE_SWITCH2
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
!
interface GigabitEthernet1/13
description to Service_2
no ip address
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 13 mode active
!
interface GigabitEthernet1/14
description to Service_2
no ip address
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 13 mode active
!
interface GigabitEthernet1/19
description to Service_1
no ip address
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 11 mode active
!
interface GigabitEthernet1/20
description to Service_1
no ip address
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 11 mode active
!
interface GigabitEthernet5/1
!
interface GigabitEthernet5/2
************
!
interface TenGigabitEthernet7/2
description to Core2
ip address 10.10.50.1 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface TenGigabitEthernet7/3
description to Core1
ip address 10.10.30.1 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface TenGigabitEthernet7/4
description TO_ACCESS1
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 5,6
switchport mode trunk
channel-protocol lacp
!
interface TenGigabitEthernet8/1
description TO_AGG1
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 1-19,21-299,301-4094
switchport mode trunk
channel-protocol lacp
channel-group 1 mode passive
!
!
interface TenGigabitEthernet8/3
description TO_4948-8
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 106
switchport mode trunk
spanning-tree guard root
!
interface TenGigabitEthernet8/4
description TO_AGG1
no ip address
logging event link-status
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 1-19,21-299,301-4094
switchport mode trunk
channel-protocol lacp
channel-group 1 mode passive
!
interface Vlan1
no ip address
shutdown
!
interface Vlan3
description AGG1_to_AGG2_L3-RP
bandwidth 10000000
ip address 10.10.110.2 255.255.255.0
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface Vlan5
description Outside_Webapp_Tier
no ip address
no ip redirects
ntp disable
standby 1 ip 10.20.5.1
standby 1 timers 1 3
standby 1 priority 115
standby 1 preempt delay minimum 60
!
interface Vlan6
ip address 10.20.6.3 255.255.255.0
no ip redirects
no ip proxy-arp
ip policy route-map csmpbr
ntp disable
standby 1 ip 10.20.6.1
standby 1 timers 1 3
standby 1 priority 115
standby 1 preempt delay minimum 60
!
interface Vlan44
description AGG_CSM_Onearm
ip address 10.20.44.3 255.255.255.0
no ip redirects
no ip proxy-arp
standby 1 ip 10.20.44.1
standby 1 timers 1 3
standby 1 priority 115
standby 1 preempt delay minimum 60
!
!
router ospf 10
log-adjacency-changes
auto-cost reference-bandwidth 1000000
nsf
area 10 authentication message-digest
area 10 nssa
timers throttle spf 1000 1000 1000
redistribute static subnets route-map rhi
passive-interface default
no passive-interface Vlan3
no passive-interface TenGigabitEthernet7/2
no passive-interface TenGigabitEthernet7/3
network 10.10.2.0 0.0.0.255 area 10
network 10.10.30.0 0.0.0.255 area 10
network 10.10.50.0 0.0.0.255 area 10
network 10.10.110.0 0.0.0.255 area 10
distribute-list 1 in TenGigabitEthernet7/2
distribute-list 1 in TenGigabitEthernet7/3
!
ip classless
ip pim accept-rp auto-rp
!
access-list 1 deny 10.20.16.0
access-list 1 deny 10.20.15.0
access-list 1 permit any
access-list 44 permit 10.20.6.200 log
access-list 44 permit 10.20.6.201 log
!
route-map csmpbr permit 10
set ip default next-hop 10.20.44.44
!
route-map rhi permit 10
match ip address 44
set metric +40
set metric-type type-1
!
line con 0
exec-timeout 0 0
password dcsummit
login local
line vty 0 4
exec-timeout 0 0
password dcsummit
login local
transport input telnet ssh
transport output pad telnet ssh acercon
!
no monitor session servicemodule
ntp authentication-key 1 md5 08701C1A2D495547335B5A5572 7
ntp authenticate
ntp clock-period 17179998
ntp update-calendar
ntp server *********** key 1
end
Access Switch 4948-7
Current configuration : 4612 bytes
version 12.2
no service pad
service timestamps debug datetime localtime
service timestamps log datetime localtime
no service password-encryption
service compress-config
!
hostname 4948-7
!
boot-start-marker
boot system bootflash:cat4000-i5k91s-mz.122-25.EWA2.bin
boot-end-marker
!
logging snmp-authfail
no aaa new-model
clock timezone PST -8
clock summer-time PDT recurring
clock calendar-valid
vtp domain datacenter
vtp mode transparent
udld enable
ip subnet-zero
no ip source-route
no ip domain-lookup
ip domain-name cisco.com
!
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
spanning-tree portfast bpduguard default
spanning-tree extend system-id
spanning-tree pathcost method long
port-channel load-balance src-dst-port
power redundancy-mode redundant
!
!
!
vlan internal allocation policy descending
vlan dot1q tag native
!
vlan 5-6
!
vlan 105
name Outside_Webapp
!
vlan 106
name Outside_Webapp
!
vlan 110
name Outside_Database_Tier
!
interface Port-channel1
description inter_4948
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
logging event link-status
!
interface GigabitEthernet1/1 (all ports)
switchport access vlan 106
switchport mode access
no cdp enable
spanning-tree portfast
!
interface GigabitEthernet1/45
description to 4948-8
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet1/46
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet1/47
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet1/48
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode active
!
interface TenGigabitEthernet1/49
description to_AGG1
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
!
interface TenGigabitEthernet1/50
shutdown
!
interface Vlan1
no ip address
shutdown
!
!
line con 0
exec-timeout 0 0
stopbits 1
line vty 0 4
exec-timeout 0 0
password dcsummit
login local
!
ntp authenticate
ntp trusted-key 1
ntp update-calendar
ntp server *********** key 1
!
end
Access Switch 4948-8
Current configuration : 4646 bytes
!
version 12.2
no service pad
service timestamps debug datetime localtime
service timestamps log datetime localtime
no service password-encryption
service compress-config
!
hostname 4948-8
!
boot-start-marker
boot system bootflash:cat4000-i5k91s-mz.122-25.EWA2.bin
boot-end-marker
!
no aaa new-model
clock timezone PST -8
clock summer-time PDT recurring
clock calendar-valid
vtp domain datacenter
vtp mode transparent
udld enable
!
ip subnet-zero
no ip source-route
no ip domain-lookup
ip domain-name cisco.com
!
no ip bootp server
!
no file verify auto
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
spanning-tree portfast bpduguard default
spanning-tree extend system-id
spanning-tree pathcost method long
port-channel load-balance src-dst-port
power redundancy-mode redundant
!
!
vlan internal allocation policy descending
vlan dot1q tag native
!
vlan 2,5-6
!
vlan 105
name Outside_Webapp_Tier
!
vlan 106
name Outside_Webapp_Tier
!
vlan 110
name Outside_Database_Tier
!
interface Port-channel1
description inter_4948
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
logging event link-status
!
interface GigabitEthernet1/1 (all ports)
switchport access vlan 106
switchport trunk encapsulation dot1q
switchport mode access
no cdp enable
spanning-tree portfast
!
interface GigabitEthernet1/45
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode passive
!
interface GigabitEthernet1/46
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode passive
!
interface GigabitEthernet1/47
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode passive
!
interface GigabitEthernet1/48
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
channel-protocol lacp
channel-group 1 mode passive
!
interface TenGigabitEthernet1/49
shutdown
!
interface TenGigabitEthernet1/50
description to_AGG2
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
!
interface Vlan1
no ip address
shutdown
!
line con 0
exec-timeout 0 0
stopbits 1
line vty 0 4
exec-timeout 0 0
password dcsummit
login local
!
ntp authenticate
ntp trusted-key 1
ntp update-calendar
ntp server ********* key 1
!
end
Access Switch 6500-1
ACCESS1-6500#
Building configuration...
Current configuration : 11074 bytes
!
! Last configuration change at 13:33:08 PST Thu Feb 9 2006
! NVRAM config last updated at 16:58:39 PST Thu Nov 17 2005
!
upgrade fpd auto
version 12.2
no service pad
service timestamps debug datetime localtime
service timestamps log datetime localtime
service password-encryption
service counters max age 10
!
hostname ACCESS1-6500
!
boot system sup-bootflash:s720_18SXD3.bin
no aaa new-model
clock timezone PST -8
clock summer-time PDT recurring
clock calendar-valid
ip subnet-zero
no ip source-route
!
!
!
no ip bootp server
ip domain-list cisco.com
no ip domain-lookup
ip domain-name cisco.com
udld enable
!
udld message time 7
!
vtp domain datacenter
vtp mode transparent
no mls acl tcam share-global
mls cef error action freeze
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
spanning-tree portfast bpduguard default
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
spanning-tree pathcost method long
!
power redundancy-mode combined
no diagnostic cns publish
no diagnostic cns subscribe
fabric buffer-reserve queue
port-channel load-balance src-dst-port
!
vlan internal allocation policy descending
vlan dot1q tag native
vlan access-log ratelimit 2000
!
vlan 5
name Outside_Webapp_Tier
!
vlan 105
name Outside_Webapp_Tier
!
vlan 110
name Outside_Database_Tier
!
interface TenGigabitEthernet1/1
description to_AGG1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
no ip address
logging event link-status
!
interface TenGigabitEthernet1/2
description to_AGG2
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
no ip address
logging event link-status
logging event spanning-tree status
!
interface GigabitEthernet2/1 (all test ports)
description webapp_penguin_kvm5
switchport
switchport access vlan 5
switchport mode access
no ip address
no cdp enable
spanning-tree portfast
!
!
interface Vlan1
no ip address
shutdown
!
no ip http server
!
line con 0
exec-timeout 0 0
line vty 0 4
exec-timeout 0 0
password 7 05080F1C2243
login local
transport input telnet ssh
!
no monitor event-trace timestamps
ntp authentication-key 1 md5 110A1016141D 7
ntp authenticate
ntp trusted-key 1
ntp clock-period 17179938
ntp update-calendar
ntp server *********** key 1
no cns aaa enable
end
FWSM 1-Aggregation Switch 1 and 2
FWSM Version 2.3(2) <system>
firewall transparent
resource acl-partition 12
enable password 2KFQnbNIdI.2KYOU encrypted
passwd 2KFQnbNIdI.2KYOU encrypted
hostname FWSM1-AGG1and2
ftp mode passive
pager lines 24
logging buffer-size 4096
logging console debugging
class default
limit-resource PDM 5
limit-resource All 0
limit-resource IPSec 5
limit-resource Mac-addresses 65535
limit-resource SSH 5
limit-resource Telnet 5
!
failover
failover lan unit primary
failover lan interface failover vlan 100
failover polltime unit msec 500 holdtime 3
failover polltime interface 3
failover interface-policy 100%
failover replication http
failover link state vlan 101
failover interface ip failover 10.20.100.1 255.255.255.0 standby 10.20.100.2
failover interface ip state 10.20.101.1 255.255.255.0 standby 10.20.101.2
arp timeout 14400
!
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 rpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00
timeout uauth 0:05:00 absolute
sysopt nodnsalias inbound
sysopt nodnsalias outbound
terminal width 511
admin-context admin
context admin
allocate-interface vlan20 outside
config-url disk:/admin.cfg
!
context vlan6-106
description vlan6-106 context
allocate-interface vlan6 outside
allocate-interface vlan106 inside
config-url disk:/vlan6-106.cfg
!
Cryptochecksum:a73fe039e4dbeb45a9c6730bc2a55201
: end
[OK]
FWSM1-AGG1and2# ch co vlan6-106
FWSM1-AGG1and2/vlan6-106# wr t
Building configuration...
: Saved
:
FWSM Version 2.3(2) <context>
firewall transparent
nameif outside vlan6 security0
nameif inside vlan106 security100
enable password 8Ry2YjIyt7RRXU24 encrypted
passwd 2KFQnbNIdI.2KYOU encrypted
hostname vlan6-106
fixup protocol dns maximum-length 512
fixup protocol ftp 21
fixup protocol h323 H225 1720
fixup protocol h323 ras 1718-1719
fixup protocol rsh 514
fixup protocol sip 5060
no fixup protocol sip udp 5060
fixup protocol skinny 2000
fixup protocol smtp 25
fixup protocol sqlnet 1521
names
access-list deny-flow-max 4096
access-list alert-interval 300
access-list IP extended permit ip any any
access-list IP extended permit icmp any any
access-list BPDU ethertype permit bpdu
pager lines 24
logging on
logging timestamp
logging buffer-size 4096
logging trap informational
logging device-id hostname
mtu vlan6 1500
mtu vlan106 1500
ip address 10.20.6.104 255.255.255.0 standby 10.20.6.105
icmp permit any vlan6
icmp permit any vlan106
no pdm history enable
arp timeout 14400
access-group BPDU in interface vlan6
access-group IP in interface vlan6
access-group BPDU in interface vlan106
access-group IP in interface vlan106
!
interface vlan6
!
!
interface vlan106
!
!
route vlan6 0.0.0.0 0.0.0.0 10.20.6.1 1
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 rpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00
timeout uauth 0:05:00 absolute
aaa-server TACACS+ protocol tacacs+
aaa-server TACACS+ max-failed-attempts 3
aaa-server TACACS+ deadtime 10
aaa-server RADIUS protocol radius
aaa-server RADIUS max-failed-attempts 3
aaa-server RADIUS deadtime 10
aaa-server LOCAL protocol local
no snmp-server location
no snmp-server contact
snmp-server community public
snmp-server enable traps snmp
floodguard enable
fragment size 200 vlan6
fragment chain 24 vlan6
fragment size 200 vlan106
fragment chain 24 vlan106
telnet timeout 5
ssh 0.0.0.0 0.0.0.0 vlan6
ssh timeout 60
terminal width 511
Cryptochecksum:00000000000000000000000000000000
: end
[OK]
FWSM1-AGG1and2/vlan6-106# ch co admin
FWSM1-AGG1and2/admin# wr t
Building configuration...
: Saved
:
FWSM Version 2.3(2) <context>
firewall transparent
nameif outside vlan20 security0
enable password 8Ry2YjIyt7RRXU24 encrypted
passwd 2KFQnbNIdI.2KYOU encrypted
hostname admin
domain-name example.com
fixup protocol dns maximum-length 512
fixup protocol ftp 21
fixup protocol h323 H225 1720
fixup protocol h323 ras 1718-1719
fixup protocol rsh 514
fixup protocol sip 5060
fixup protocol sip udp 5060
fixup protocol skinny 2000
fixup protocol smtp 25
fixup protocol sqlnet 1521
names
access-list deny-flow-max 4096
access-list alert-interval 300
access-list IP extended permit ip any any
access-list IP extended permit icmp any any
access-list IP extended permit udp any any
access-list BPDU ethertype permit bpdu
pager lines 24
logging on
logging timestamp
logging buffer-size 4096
logging trap informational
logging device-id hostname
mtu vlan20 1500
ip address *********.34 255.255.255.0 standby *********.35
icmp permit any vlan20
no pdm history enable
arp timeout 14400
access-group IP in interface vlan20
!
interface vlan20
!
!
route vlan20 0.0.0.0 0.0.0.0 *********.1 1
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 rpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00
timeout uauth 0:05:00 absolute
username mshinn password fgXai3fBCmTT1r2e encrypted privilege 15
aaa-server TACACS+ protocol tacacs+
aaa-server TACACS+ max-failed-attempts 3
aaa-server TACACS+ deadtime 10
aaa-server RADIUS protocol radius
aaa-server RADIUS max-failed-attempts 3
aaa-server RADIUS deadtime 10
aaa-server LOCAL protocol local
http server enable
http 0.0.0.0 0.0.0.0 vlan20
no snmp-server location
no snmp-server contact
snmp-server community public
snmp-server enable traps snmp
floodguard enable
fragment size 200 vlan20
fragment chain 24 vlan20
sysopt nodnsalias inbound
sysopt nodnsalias outbound
telnet timeout 5
ssh 0.0.0.
Services Switch Design Configurations
The following configurations were used in support of the service chassis testing:
Core Switch 1
Core Switch 2
Distribution Switch 1
Distribution Switch 2
Service Switch 1
Service Switch 2
Access Switch 6500
ACE and FWSM
Figure 8-2 shows the test bed used with services switches.
Figure 8-2 Service Switches Configuration Test Bed
Core Switch 1
hostname dcb-core-1
!
boot system flash disk0:s72033-adventerprisek9_wan-vz.122-18.SXF9.bin
!
no aaa new-model
clock timezone EDT -5
clock summer-time EDT recurring
ip subnet-zero
no ip source-route
!
no ip bootp server
ip multicast-routing
no ip domain-lookup
ip domain-name ese.cisco.com
udld enable
vtp domain datacenter
vtp mode transparent
mls ip cef load-sharing full simple
mls ip multicast flow-stat-timer 9
no mls flow ip
no mls flow ipv6
no mls acl tcam share-global
mls cef error action freeze
!
redundancy
mode sso
main-cpu
auto-sync running-config
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
spanning-tree pathcost method long
!
fabric buffer-reserve queue
port-channel per-module load-balance
!
vlan internal allocation policy descending
vlan dot1q tag native
vlan access-log ratelimit 2000
!
interface Loopback0
ip address 10.151.1.10 255.255.255.255
!
interface TenGigabitEthernet1/2
description To DCb-Dist-1 - Ten 1/8
ip address 10.160.1.1 255.255.255.252
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface TenGigabitEthernet1/3
description to DCB-Dist-2 Ten 1/8
ip address 10.160.1.5 255.255.255.252
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface TenGigabitEthernet1/4
description TO DCB-Core-2 - Ten 1/4
ip address 10.199.0.5 255.255.255.252
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface GigabitEthernet6/1
description flashnet
ip address 10.150.1.3 255.255.255.0
no mop enabled
media-type rj45
!
interface GigabitEthernet6/2
no ip address
shutdown
!
interface Vlan1
no ip address
shutdown
!
router ospf 2
log-adjacency-changes
auto-cost reference-bandwidth 1000000
nsf
area 0 authentication message-digest
area 0 nssa default-information-originate
area 0 range 10.199.0.0 255.255.0.0
area 2 authentication message-digest
area 2 nssa default-information-originate
area 2 range 10.160.0.0 255.255.255.0
area 2 range 10.161.0.0 255.255.0.0
area 2 range 10.151.1.0 255.255.255.0
timers throttle spf 1000 1000 1000
passive-interface default
no passive-interface TenGigabitEthernet1/1
no passive-interface TenGigabitEthernet1/2
no passive-interface TenGigabitEthernet1/3
no passive-interface TenGigabitEthernet1/4
network 10.160.1.0 0.0.0.3 area 2
network 10.161.0.0 0.0.0.3 area 2
network 10.199.0.0 0.0.0.3 area 0
!
ip classless
!
no ip http server
!
snmp-server community public RO
snmp-server community cisco RW
!
control-plane
!
dial-peer cor custom
!
line con 0
line vty 0 4
exec-timeout 0 0
password cisco
login
line vty 5 15
exec-timeout 0 0
password cisco
login
!
no cns aaa enable
end
Core Switch 2
hostname dcb-core-2
!
no aaa new-model
clock timezone EST -5
clock summer-time EDT recurring
ip subnet-zero
no ip source-route
!
boot system flash disk0:s72033-adventerprisek9_wan-vz.122-18.SXF9.bin
!
no ip ftp passive
no ip bootp server
ip multicast-routing
no ip domain-lookup
ip domain-name cisco.com
udld enable
!
vtp domain datacenter
vtp mode transparent
mls ip cef load-sharing full simple
mls ip multicast flow-stat-timer 9
no mls flow ip
no mls flow ipv6
no mls acl tcam share-global
mls cef error action freeze
!
redundancy
mode sso
main-cpu
auto-sync running-config
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
spanning-tree pathcost method long
!
fabric buffer-reserve queue
port-channel per-module load-balance
!
vlan internal allocation policy descending
vlan dot1q tag native
vlan access-log ratelimit 2000
!
interface Loopback0
ip address 10.151.1.11 255.255.255.255
!
interface TenGigabitEthernet1/2
description To DCb-Dist-1 - Ten 1/7
ip address 10.160.1.9 255.255.255.252
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
load-interval 30
!
interface TenGigabitEthernet1/3
description To DCb-Dist-2 - Ten 1/7
ip address 10.160.1.13 255.255.255.252
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
load-interval 30
!
interface TenGigabitEthernet1/4
description DCB-Core-1 - Ten 1/4
ip address 10.199.0.6 255.255.255.252
no ip redirects
no ip proxy-arp
ip pim sparse-dense-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf network point-to-point
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
!
interface GigabitEthernet6/1
description flashnet
ip address 10.150.1.4 255.255.255.0
media-type rj45
!
interface GigabitEthernet6/2
no ip address
shutdown
!
interface Vlan1
no ip address
shutdown
!
router ospf 2
log-adjacency-changes
auto-cost reference-bandwidth 1000000
nsf
area 0 authentication message-digest
area 0 nssa default-information-originate
area 0 range 10.199.0.0 255.255.0.0
area 2 authentication message-digest
area 2 nssa default-information-originate
area 2 range 10.160.0.0 255.255.0.0
area 2 range 10.161.0.0 255.255.0.0
area 2 range 10.151.1.0 255.255.255.0
timers throttle spf 1000 1000 1000
passive-interface default
no passive-interface TenGigabitEthernet1/1
no passive-interface TenGigabitEthernet1/2
no passive-interface TenGigabitEthernet1/4
no passive-interface TenGigabitEthernet1/3
network 10.160.1.0 0.0.0.3 area 2
network 10.161.0.0 0.0.0.3 area 2
network 10.199.0.0 0.0.0.3 area 0
!
ip classless
!
no ip http server
!
snmp-server community public RO
snmp-server community cisco RW
!
control-plane
!
dial-peer cor custom
!
line con 0
line vty 0 4
exec-timeout 0 0
password cisco
login
line vty 5 15
exec-timeout 0 0
password cisco
login
!
no cns aaa enable
end
Distribution Switch 1
upgrade fpd auto
version 12.2
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
service counters max age 5
!
hostname dcb-Dist-1
!
boot system flash disk0:s72033-adventerprisek9_wan-vz.122-18.SXF10.bin
enable secret 5 $1$wVQ/$8nsaKkBneJbHVrph5VnS41
enable password cisco
!
no aaa new-model
clock timezone EDT -5
clock summer-time EDT recurring
vtp domain datacenter
vtp mode transparent
ip subnet-zero
no ip source-route
ip icmp rate-limit unreachable 2000
!
no ip domain-lookup
ip domain-name cisco.com
ip multicast-routing
no ip igmp snooping
!
udld enable
udld message time 7
no mls flow ip
mls acl tcam default-result permit
no mls acl tcam share-global
mls ip cef load-sharing full simple
mls ip multicast flow-stat-timer 9
mls cef error action freeze
!
fabric switching-mode force bus-mode
fabric buffer-reserve queue
port-channel per-module load-balance
port-channel load-balance src-dst-port
diagnostic cns publish cisco.cns.device.diag_results
diagnostic cns subscribe cisco.cns.device.diag_commands
!
redundancy
mode sso
main-cpu
auto-sync running-config
!
power redundancy-mode combined
!
spanning-tree mode rapid-pvst
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
spanning-tree pathcost method long
spanning-tree vlan 1-4094 priority 24576
!
vlan internal allocation policy descending
vlan dot1q tag native
vlan access-log ratelimit 2000
!
vlan 2-7,106,107,206,207
!
no crypto ipsec nat-transparency udp-encaps
!
interface Loopback0
ip address 10.151.1.12 255.255.255.255
!
interface TenGigabitEthernet1/1
description to_dcb-Acc-1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
spanning-tree guard loop
!
interface TenGigabitEthernet1/2
description dcb-dist2-6k Te1/2
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,7,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
spanning-tree guard loop
!
interface TenGigabitEthernet1/5
description dcb-svc1-6k Te9/1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,7,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
logging event bundle-status
spanning-tree guard root
!
interface TenGigabitEthernet1/6
description dcb-svc2-6k Te9/1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,7,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
logging event bundle-status
spanning-tree guard root
!
interface TenGigabitEthernet1/7
description dcb-core-2 Te1/2
ip address 10.160.1.10 255.255.255.252
no ip redirects
no ip proxy-arp
ip pim sparse-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
load-interval 30
!
interface TenGigabitEthernet1/8
description dcb-core-1 Te1/2
ip address 10.160.1.2 255.255.255.252
no ip redirects
no ip proxy-arp
ip pim sparse-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
load-interval 30
!
interface Vlan7
ip address 10.80.1.2 255.255.0.0
no ip redirects
no ip proxy-arp
ip flow ingress
ip route-cache flow
logging event link-status
load-interval 30
standby 1 ip 10.80.1.1
standby 1 timers 1 3
standby 1 priority 51
standby 1 preempt delay minimum 120
!
router ospf 2
log-adjacency-changes
auto-cost reference-bandwidth 1000000
nsf
area 2 authentication message-digest
area 2 nssa default-information-originate
area 2 range 10.151.1.0 255.255.255.0
area 2 range 10.151.0.0 255.255.0.0
area 2 range 10.160.0.0 255.255.255.0
area 2 range 10.161.0.0 255.255.0.0
timers throttle spf 1000 1000 1000
redistribute static subnets route-map rhi
passive-interface default
no passive-interface TenGigabitEthernet1/7
no passive-interface TenGigabitEthernet1/8
no passive-interface GigabitEthernet3/24
network 10.74.0.0 0.0.255.255 area 2
network 10.80.0.0 0.0.255.255 area 2
network 10.81.0.0 0.0.255.255 area 2
network 10.151.1.0 0.0.0.0 area 2
network 10.151.0.0 0.0.255.255 area 2
network 10.160.1.0 0.0.0.255 area 2
network 10.161.0.0 0.0.0.0 area 2
!
ip classless
!
no ip http server
!
snmp-server community public RO
snmp-server community cisco RW
!
control-plane
!
dial-peer cor custom
!
line con 0
line vty 0 4
password cisco
login
!
exception core-file
no cns aaa enable
end
Distribution Switch 2
upgrade fpd auto
version 12.2
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
service counters max age 5
!
hostname dcb-Dist-2
!
boot system flash disk0:s72033-adventerprisek9_wan-vz.122-18.SXF10.bin
enable secret 5 $1$VUjJ$onovPQGW3pDtcxU2GlqY5.
enable password cisco
!
no aaa new-model
clock timezone EDT -5
clock summer-time EDT recurring
vtp domain datacenter
vtp mode transparent
ip subnet-zero
no ip source-route
ip icmp rate-limit unreachable 2000
!
no ip domain-lookup
ip domain-name cisco.com
ip multicast-routing
no ip igmp snooping
!
udld enable
udld message time 7
no mls flow ip
mls acl tcam default-result permit
no mls acl tcam share-global
mls ip cef load-sharing full
mls ip multicast flow-stat-timer 9
mls cef error action freeze
!
fabric switching-mode force bus-mode
fabric buffer-reserve queue
port-channel per-module load-balance
port-channel load-balance src-dst-port
diagnostic cns publish cisco.cns.device.diag_results
diagnostic cns subscribe cisco.cns.device.diag_commands
!
redundancy
mode sso
main-cpu
auto-sync running-config
!
power redundancy-mode combined
!
spanning-tree mode rapid-pvst
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
spanning-tree pathcost method long
spanning-tree vlan 1-4094 priority 28672
!
vlan internal allocation policy descending
vlan dot1q tag native
vlan access-log ratelimit 2000
!
vlan 2-7,106,107,206,207
!
no crypto ipsec nat-transparency udp-encaps
!
interface Loopback0
ip address 10.151.1.13 255.255.255.255
!
!
interface TenGigabitEthernet1/1
description to_dcb-Acc-1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
spanning-tree guard loop
!
interface TenGigabitEthernet1/2
description dcb-dist1-6k Te1/2
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,7,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
spanning-tree guard loop
!
!
interface TenGigabitEthernet1/4
no ip address
!
interface TenGigabitEthernet1/5
description dcb-svc1-6k Te9/1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,7,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
logging event bundle-status
spanning-tree guard root
!
interface TenGigabitEthernet1/6
description dcb-svc2-6k Te9/1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,7,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
logging event bundle-status
spanning-tree guard root
!
interface TenGigabitEthernet1/7
description dcb-core-2 Te1/2
ip address 10.160.1.14 255.255.255.252
no ip redirects
no ip proxy-arp
ip pim sparse-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
load-interval 30
!
interface TenGigabitEthernet1/8
description dcb-core-1 Te1/2
ip address 10.160.1.6 255.255.255.252
no ip redirects
no ip proxy-arp
ip pim sparse-mode
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 C1sC0!
ip ospf hello-interval 2
ip ospf dead-interval 6
logging event link-status
load-interval 30
!
!
interface Vlan7
ip address 10.80.1.3 255.255.0.0
no ip redirects
no ip proxy-arp
ip flow ingress
logging event link-status
load-interval 30
standby 1 ip 10.80.1.1
standby 1 timers 1 3
standby 1 priority 50
standby 1 preempt
!
router ospf 2
log-adjacency-changes
auto-cost reference-bandwidth 1000000
nsf
area 2 authentication message-digest
area 2 nssa default-information-originate
area 2 range 10.151.0.0 255.255.0.0
area 2 range 10.160.0.0 255.255.255.0
area 2 range 10.161.0.0 255.255.0.0
timers throttle spf 1000 1000 1000
redistribute static subnets route-map rhi
passive-interface default
no passive-interface TenGigabitEthernet1/7
no passive-interface TenGigabitEthernet1/8
no passive-interface GigabitEthernet3/24
network 10.80.0.0 0.0.255.255 area 2
network 10.81.0.0 0.0.255.255 area 2
network 10.151.0.0 0.0.255.255 area 2
network 10.160.1.0 0.0.0.0 area 2
network 10.160.1.0 0.0.0.255 area 2
network 10.161.0.0 0.0.0.0 area 2
network 10.161.0.0 0.0.255.255 area 2
!
ip classless
!
no ip http server
!
snmp-server community public RO
snmp-server community cisco RW
!
control-plane
!
dial-peer cor custom
!
line con 0
line vty 0 4
password cisco
login
!
exception core-file
no cns aaa enable
end
Service Switch 1
upgrade fpd auto
version 12.2
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
service counters max age 5
!
hostname Svc-1
boot system flash disk0:s72033-adventerprisek9_wan-vz.122-18.SXF10.bin
!
enable secret 5 $1$rPXa$F4EKAVs1cCaD.X5WG68iK0
enable password cisco
!
no aaa new-model
ip subnet-zero
!
ipv6 mfib hardware-switching replication-mode ingress
vtp domain datacenter
vtp mode transparent
mls ip multicast flow-stat-timer 9
no mls flow ip
no mls flow ipv6
no mls acl tcam share-global
mls cef error action freeze
!
redundancy
mode sso
main-cpu
auto-sync running-config
spanning-tree mode pvst
diagnostic cns publish cisco.cns.device.diag_results
diagnostic cns subscribe cisco.cns.device.diag_commands
fabric buffer-reserve queue
port-channel per-module load-balance
!
vlan internal allocation policy ascending
vlan access-log ratelimit 2000
!
vlan 2-7,106,107,206,207
!
svclc autostate
svclc multiple-vlan-interfaces
svclc module 3 vlan-group 1,2
svclc vlan-group 1 6,206,207
svclc vlan-group 2 106,107
svclc vlan-group 3 3,4,5,7
firewall multiple-vlan-interfaces
firewall module 2 vlan-group 2,3
!
interface Loopback0
ip address 10.151.1.17 255.255.255.255
!
!
interface TenGigabitEthernet9/1
description conx to dist1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,7,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
logging event bundle-status
spanning-tree guard root
!
interface TenGigabitEthernet9/2
description conx to dist2
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,7,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
logging event bundle-status
spanning-tree guard root
!
interface TenGigabitEthernet9/3
description connx to svc2 switch
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 4,5,6
switchport mode trunk
no ip address
logging event link-status
logging event bundle-status
!
no ip http server
!
snmp-server community public RO
!
control-plane
!
dial-peer cor custom
!
line con 0
line vty 0 4
password cisco
login
!
no cns aaa enable
end
Service Switch 2
upgrade fpd auto
version 12.2
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
service counters max age 5
!
hostname Svc-2
boot system flash disk0:s72033-adventerprisek9_wan-vz.122-18.SXF10.bin
!
enable secret 5 $1$lB0P$HAIQrXSPQjLQtTDklRg2V.
enable password cisco
!
no aaa new-model
ip subnet-zero
!
ipv6 mfib hardware-switching replication-mode ingress
vtp domain datacenter
vtp mode transparent
mls ip multicast flow-stat-timer 9
no mls flow ip
no mls flow ipv6
no mls acl tcam share-global
mls cef error action freeze
!
redundancy
mode sso
main-cpu
auto-sync running-config
spanning-tree mode pvst
diagnostic cns publish cisco.cns.device.diag_results
diagnostic cns subscribe cisco.cns.device.diag_commands
fabric buffer-reserve queue
port-channel per-module load-balance
!
vlan internal allocation policy ascending
vlan access-log ratelimit 2000
!
vlan 2-7,106,107,206,207
!
svclc autostate
svclc multiple-vlan-interfaces
svclc module 3 vlan-group 1,2
svclc vlan-group 1 6,206,207
svclc vlan-group 2 106,107
svclc vlan-group 3 3,4,5,7
firewall multiple-vlan-interfaces
firewall module 2 vlan-group 2,3
!
interface Loopback0
ip address 10.151.1.18 255.255.255.255
!
!
interface TenGigabitEthernet9/1
description connection to 6500 dist1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,7,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
logging event bundle-status
spanning-tree guard root
!
interface TenGigabitEthernet9/2
description connection to 6500 dist 2
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 2,3,7,106,107,206,207
switchport mode trunk
no ip address
logging event link-status
logging event bundle-status
spanning-tree guard root
!
interface TenGigabitEthernet9/3
description connx to svc1 switch
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport trunk allowed vlan 4,5,6
switchport mode trunk
no ip address
logging event link-status
logging event bundle-status
!
no ip http server
!
snmp-server community public RO
!
control-plane
!
dial-peer cor custom
!
line con 0
line vty 0 4
password cisco
login
!
!
no cns aaa enable
end
Access Switch 6500
upgrade fpd auto
version 12.2
no service pad
service timestamps debug datetime localtime
service timestamps log datetime localtime
service password-encryption
service counters max age 10
!
hostname DCB-Access-1
!
boot system flash disk0:s72033-adventerprisek9_wan-vz.122-18.SXF9.bin
no aaa new-model
clock timezone PST -8
clock summer-time PDT recurring
clock calendar-valid
ip subnet-zero
no ip source-route
!
no ip bootp server
ip domain-list cisco.com
no ip domain-lookup
ip domain-name cisco.com
udld enable
!
udld message time 7
!
vtp domain datacenter
vtp mode transparent
no mls acl tcam share-global
mls cef error action freeze
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
spanning-tree portfast bpduguard default
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
spanning-tree pathcost method long
!
power redundancy-mode combined
no diagnostic cns publish
no diagnostic cns subscribe
fabric buffer-reserve queue
port-channel load-balance src-dst-port
!
vlan internal allocation policy descending
vlan dot1q tag native
vlan access-log ratelimit 2000
!
vlan 207
name server_Tier
!
interface TenGigabitEthernet1/1
description to_dcb-Dist-1
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
no ip address
logging event link-status
!
interface TenGigabitEthernet1/2
description to_dcb-Dist-2
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 2
switchport mode trunk
no ip address
logging event link-status
logging event spanning-tree status
!
interface GigabitEthernet2/1 (all test ports)
switchport
switchport access vlan 207
switchport mode access
no ip address
no cdp enable
spanning-tree portfast
!
!
interface Vlan1
no ip address
shutdown
!
no ip http server
!
line con 0
exec-timeout 0 0
line vty 0 4
exec-timeout 0 0
password 7 05080F1C2243
login local
transport input telnet ssh
!
no monitor event-trace timestamps
ntp authentication-key 1 md5 110A1016141D 7
ntp authenticate
ntp trusted-key 1
ntp clock-period 17179938
ntp update-calendar
ntp server *********** key 1
no cns aaa enable
end
ACE and FWSM
FWSM Baseline
firewall transparent
!
interface Vlan107
nameif inside
bridge-group 1
security-level 100
!
interface Vlan7
nameif outside
bridge-group 1
security-level 0
!
interface BVI1
ip address 10.80.1.12 255.255.255.0 standby 10.80.1.13
!
access-list outside extended permit ip any any log
access-list inside extended permit ip any any log
access-list BPDU ethertype permit bpdu
!
access-group BPDU in interface inside
access-group inside in interface inside
access-group BPDU in interface outside
access-group outside in interface outside
route outside 0.0.0.0 0.0.0.0 10.80.1.1
ACE Baseline
access-list BPDU ethertype permit bpdu
access-list anyone line 10 extended permit ip any any
class-map type management match-any PING
description Allowed Admin Traffic
10 match protocol icmp any
11 match protocol telnet any
policy-map type management first-match PING-POLICY
class PING
permit
interface vlan 107
description "Client-side Interface"
bridge-group 1
access-group input BPDU
access-group input anyone
service-policy input PING-POLICY
interface vlan 207
description "Server-side Interface"
bridge-group 1
access-group input BPDU
access-group input anyone
interface bvi 1
ip address 10.80.1.14 255.255.255.0
alias 10.80.1.16 255.255.255.0
peer ip address 10.80.1.13 255.255.255.0
no shutdown
ip route 0.0.0.0 0.0.0.0 10.80.1.1
FWSM Failover
Table 8-1 FWSM Failover Configuration
Primary FWSM Failover Configuration
interface VLAN4
description LAN Failover Interface
!
interface VLAN5
description STATE Failover Interface
!
failover
failover lan unit primary
failover lan interface failover VLAN4
failover polltime unit msec 500 holdtime 3
failover polltime interface 3
failover replication http
failover link state VLAN5
failover interface ip failover 10.81.4.1 255.255.255.0 standby 10.81.4.2
failover interface ip state 10.81.5.1 255.255.255.0 standby 10.81.5.2
failover group 1
preempt
failover group 2
secondary
preempt 5
context V107
allocate-interface VLAN107
allocate-interface VLAN7
config-url disk:/V107.cfg
join-failover group 1
Secondary FWSM Failover Configuration
interface VLAN4
description LAN Failover Interface
!
interface VLAN5
description STATE Failover Interface
!
failover
failover lan unit secondary
failover lan interface failover VLAN4
failover polltime unit msec 500 holdtime 3
failover polltime interface 3
failover replication http
failover link state VLAN5
failover interface ip failover 10.81.4.1 255.255.255.0 standby 10.81.4.2
failover interface ip state 10.81.5.1 255.255.255.0 standby 10.81.5.2
failover group 1
preempt
failover group 2
secondary
preempt 5
context V107
allocate-interface VLAN107
allocate-interface VLAN7
config-url disk:/V107.cfg
join-failover group 1
ACE Failover
ft interface vlan 6
ip address 10.81.6.1 255.255.255.0
peer ip address 10.81.6.2 255.255.255.0
no shutdown
ft peer 1
heartbeat interval 100
heartbeat count 10
ft-interface vlan 6
ft group 2
peer 1
no preempt
priority 210
peer priority 200
associate-context Admin
inservice
context v107
allocate-interface vlan107
allocate-interface vlan207
ft group 3
peer 1
priority 220
peer priority 200
associate-context vlan107
inservice
Most of the configuration is done on the ACE module that is primary for the admin context. Only a
few items need to be defined on the secondary ACE module: the FT interface is defined with the
addresses reversed, the FT peer is configured identically, and the FT group for the admin context is
configured with the priorities reversed. With the FT VLAN up, this is enough for the ACE modules to
synchronize correctly; the rest of the configuration is then copied over with the priority values
reversed, as sketched below.
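The following is a minimal sketch of that secondary-module bootstrap, assuming the FT VLAN, peer, and
group numbering shown above; the mirrored addresses and reversed priorities are inferred from the
primary configuration rather than taken from the tested configs:
! Secondary ACE module bootstrap (sketch, inferred from the primary config)
ft interface vlan 6
! Same FT VLAN as the primary, with the local and peer addresses reversed (assumed)
ip address 10.81.6.2 255.255.255.0
peer ip address 10.81.6.1 255.255.255.0
no shutdown
ft peer 1
! FT peer parameters match the primary exactly
heartbeat interval 100
heartbeat count 10
ft-interface vlan 6
ft group 2
! Admin-context FT group with the priorities reversed (assumed values)
peer 1
no preempt
priority 200
peer priority 210
associate-context Admin
inservice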
Additional References
See the following URL for more information:
Cisco Catalyst 6500: http://www.cisco.com/en/US/products/hw/switches/ps708/index.html