NE9000 V800R023C00SPC500 Configuration Guide 10 MPLS
V800R023C00SPC500
Configuration Guide
Issue 01
Date 2023-09-30
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees
or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: https://www.huawei.com
Email: support@huawei.com
Contents
1 Configuration
1.1 MPLS
1.1.1 About This Document
1.1.2 MPLS Common Configurations
1.1.2.1 Overview of MPLS
1.1.2.2 Feature Requirements for MPLS Common
1.1.2.3 Specifying an MPLS TTL Handling Mode
1.1.2.3.1 Specifying the TTL Mode for MPLS
1.1.2.3.2 Configuring the Path for ICMP Reply Messages
1.1.2.4 Configuring a Packet Load Balancing Mode
1.1.2.5 Configuring Packet Fragmentation on an MPLS Label Switching Node
1.1.2.6 Configuring the TTL and EXP Processing Mode When the Explicit Null Label Is Used
1.1.2.7 Configuring TTL and EXP Processing Modes for MPLS Packets with Label 7
1.1.2.8 Enabling the Mode of Decoupling a Service Next Hop from LDP and SR-MPLS BE
1.1.2.9 Enabling Private-and-Public-Network Separation for MPLS Services
1.1.2.10 Optimizing MPLS
1.1.2.10.1 Configuring PHP
1.1.2.10.2 Configuring an MPLS MTU on an Interface
1.1.2.10.3 Verifying the Configuration of Optimizing MPLS
1.1.2.11 Configuring MPLS Resource Threshold-related Alarms
1.1.2.11.1 Configuring Alarm Thresholds for LDP LSPs
1.1.2.11.2 Configuring Alarm Thresholds for Dynamic Labels
1.1.2.11.3 Configuring Conditions That Trigger LDP Resource Threshold-Reaching Alarms
1.1.2.11.4 Configuring Alarm Thresholds for Other TE Resources
1.1.2.11.5 Configuring Alarm Thresholds for RSVP LSPs
1.1.3 MPLS TE Configuration
1.1.3.1 Overview of MPLS TE
1.1.3.2 Feature Requirements for MPLS TE
1.1.3.3 Configuring Static CR-LSP
1.1.3.3.1 Enabling MPLS TE
1.1.3.3.2 (Optional) Configuring Link Bandwidth
1.1.3.3.3 Configuring the MPLS TE Tunnel Interface
1.1.3.3.4 (Optional) Configuring Global Dynamic Bandwidth Pre-Verification
1.1.5.5.1 Establishing an MP-EBGP Peer Relationship Between Each AGG and MASG
1.1.5.5.2 Enabling BGP Peers to Exchange Labeled IPv4 Routes
1.1.5.5.3 Configuring a BGP LSP
1.1.5.5.4 (Optional) Configuring Traffic Statistics Collection for BGP LSPs
1.1.5.5.5 (Optional) Configuring the Mode in Which a BGP Label Inherits the QoS Priority in an Outer Tunnel Label
1.1.5.5.6 (Optional) Configuring the Protection Switching Function
1.1.5.5.7 (Optional) Configuring the Egress Protection Function
1.1.5.5.8 Verifying the Configuration
1.1.5.6 Configuring Dynamic BFD to Monitor a BGP Tunnel
1.1.5.6.1 Enabling an MPLS Device to Dynamically Establish a BGP BFD Session
1.1.5.6.2 Configuring a Policy for Dynamically Establishing a BGP BFD Session
1.1.5.6.3 (Optional) Adjusting BGP BFD Parameters
1.1.5.6.4 Verifying the Configuration of Dynamic BFD to Monitor a BGP Tunnel
1.1.5.7 Maintaining Seamless MPLS
1.1.5.7.1 Checking Network Connectivity and Reachability
1.1.5.7.2 Clearing the Traffic Statistics of BGP LSPs
1.1.5.8 Configuration Examples
1.1.5.8.1 Example for Configuring Intra-AS Seamless MPLS
1.1.5.8.2 Example for Configuring Inter-AS Seamless MPLS
1.1.5.8.3 Example for Configuring Inter-AS Seamless MPLS+HVPN
1.1.5.8.4 Example for Configuring Dynamic BFD to Monitor a BGP Tunnel
1.1.6 GMPLS UNI Configuration
1.1.6.1 Overview of GMPLS UNI
1.1.6.2 Feature Requirements for GMPLS UNI
1.1.6.3 Configuring a GMPLS UNI Tunnel
1.1.6.3.1 (Optional) Configuring PCE to Calculate a Path Crossing Both the IP and Optical Layers
1.1.6.3.2 Configuring a Service Interface
1.1.6.3.3 Configuring LMP and an LMP Neighbor
1.1.6.3.4 Configuring a Control Channel
1.1.6.3.5 Configuring an Explicit Path
1.1.6.3.6 Configuring a Forward GMPLS UNI Tunnel
1.1.6.3.7 Configuring a Reverse GMPLS UNI Tunnel
1.1.6.3.8 Verifying the GMPLS UNI Tunnel Configuration
1.1.6.4 Maintaining GMPLS UNI
1.1.6.4.1 Disabling a GMPLS UNI Tunnel
1.1.6.4.2 Resetting a GMPLS UNI Tunnel
1.1.6.5 Configuration Examples for GMPLS UNI Tunnels
1.1.6.5.1 Configuring an In-Band GMPLS UNI Tunnel
1.1.6.5.2 Configuring an Out-of-Band GMPLS UNI Tunnel
1 Configuration
1.1 MPLS
Purpose
This document describes the basic concepts, configuration procedures, and configuration examples of the MPLS feature in different application scenarios.
Licensing Requirements
For details about the License, see the License Guide.
● Enterprise users: License Usage Guide
Related Version
The following table lists the product version related to this document.
Intended Audience
This document is intended for:
Security Declaration
● Notice on Limited Command Permission
This documentation describes the commands used for network deployment and maintenance on Huawei devices. Interfaces and commands used for production, manufacturing, and the repair of returned products are not described here.
If advanced commands or compatibility commands intended for engineering or fault location are used incorrectly, exceptions may occur or services may be interrupted. It is recommended that advanced commands be used only by engineers with the required permissions. If necessary, you can apply to Huawei for the permissions needed to use advanced commands.
● Encryption algorithm declaration
The encryption algorithms DES/3DES/RSA (with a key length of less than 3072 bits)/MD5 (in digital signature scenarios and password encryption)/SHA1 (in digital signature scenarios) offer low security and may introduce security risks. If the protocols allow it, using more secure encryption algorithms, such as AES/RSA (with a key length of at least 3072 bits)/SHA2/HMAC-SHA2, is recommended.
For security purposes, the insecure protocols Telnet, FTP, and TFTP, as well as weak security algorithms in the BGP, LDP, PCEP, MSDP, DCN, TCP-AO, MSTP, VRRP, E-Trunk, AAA, IPsec, BFD, QX, port extension, SSH, SNMP, IS-IS, RIP, SSL, NTP, OSPF, and keychain features, are not recommended. To use such weak security algorithms, run the undo crypto weak-algorithm disable command to enable the weak security algorithm function. For details, see the Configuration Guide.
● Password configuration declaration
– When the password encryption mode is cipher, avoid setting both the start and end characters of a password to "%^%#". Otherwise, the password is displayed directly in the configuration file.
– To further improve device security, periodically change the password.
● MAC addresses and Public IP addresses Declaration
– For purposes of introducing features and giving configuration examples,
the MAC addresses and public IP addresses of real devices are used in the
product documentation. Unless otherwise specified, these addresses are used as examples only.
– Open-source and third-party software may contain public addresses
(including public IP addresses, public URLs/domain names, and email
addresses), but this product does not use these public addresses. This
complies with industry practices and open-source software usage
specifications.
– For purposes of implementing functions and features, the device uses the
following public IP addresses:
Special Declaration
● This document package contains information about the NE9000. For details
about hardware, such as devices or boards sold in a specific country/region,
see Hardware Description.
● This document serves only as a guide. The content is written based on device
information gathered under lab conditions. The content provided by this
document is intended to be taken as general guidance, and does not cover all
scenarios. The content provided by this document may be different from the
information on user device interfaces due to factors such as version upgrades
and differences in device models, board restrictions, and configuration files.
The actual user device information takes precedence over the content
provided by this document. The preceding differences are beyond the scope of
this document.
● The maximum values provided in this document are obtained in specific lab
environments (for example, only a certain type of board or protocol is
configured on a tested device). The actually obtained maximum values may
be different from the maximum values provided in this document due to
factors such as differences in hardware configurations and carried services.
● Interface numbers used in this document are examples. Use the existing
interface numbers on devices for configuration.
● The pictures of hardware in this document are for reference only.
● The supported boards are described in the document. Whether a
customization requirement can be met is subject to the information provided
at the pre-sales interface.
● In this document, public IP addresses may be used in feature introduction and
configuration examples and are for reference only unless otherwise specified.
● The configuration precautions described in this document may not accurately
reflect all scenarios.
● Log Reference and Alarm Reference respectively describe the logs and alarms
for which a trigger mechanism is available. The actual logs and alarms that
the product can generate depend on the types of services it supports.
● All device dimensions described in this document are designed dimensions and do not include dimension tolerances. During component manufacturing, the actual size may deviate from the designed size due to factors such as processing or measurement.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Command Conventions
The command conventions that may be found in this document are defined as
follows.
Convention Description
Change History
Changes between document issues are cumulative. The latest document issue
contains all the changes made in earlier issues.
V800R023C00SPC500, Issue 01, 2023-09-30
Usage Scenario
MPLS nodes handle TTLs as follows:
● Nodes process TTLs in either uniform or pipe mode before the TTLs expire.
You can configure an MPLS TTL processing mode on the ingress PE.
Configure the pipe mode on the ingress for an MPLS virtual private network
(VPN) so that the MPLS backbone network structure can be hidden, which
improves network security.
● Nodes process expired MPLS TTLs and reply with ICMP reply messages.
On an ISP backbone network that transmits VPN traffic over MPLS tunnels,
after an MPLS node receives an MPLS packet with two labels and an expired
TTL, the MPLS node replies to the source with an ICMP reply message.
Because the MPLS node cannot send the ICMP reply message over IP routes,
the ICMP reply message travels along an LSP to the egress. The egress
forwards the ICMP reply message to the source over IP routes.
On an ISP backbone that transmits VPN traffic over MPLS tunnels, after an
MPLS node receives MPLS packets each with a single label and an expired
TTL, the MPLS node replies to the source with an ICMP reply message over IP
routes, without forwarding the message to the egress along the LSP. If the
MPLS node has no reachable route to the transmit end, the ICMP reply
message is discarded. As a result, traceroute results do not contain node
information.
On an autonomous system boundary router (ASBR) in an inter-AS VPN Option B scenario and on a superstratum provider edge (SPE), each MPLS packet that contains VPN information carries a single label. If the TTL in the MPLS packet expires, the node replies with an ICMP reply message over an IP route, and the ICMP reply message does not carry MPLS node information. Therefore, before you perform a traceroute operation on the ASBR or SPE, run the undo ttl expiration pop command on the ASBR or SPE to enable ICMP reply messages to travel along the original LSPs.
Pre-configuration Task
Before configuring an MPLS TTL processing mode, configure MPLS or the MPLS IP
VPN.
Context
You can configure an MPLS TTL processing mode on the ingress PE or egress PE.
NOTE
After the TTL mode of an MPLS public network or VPN is changed, the new mode takes
effect only for new MPLS LDP sessions. To make the change take effect for previously
established MPLS LDP sessions, run the reset mpls ldp command to reestablish the
sessions.
Procedure
● Set a TTL processing mode for MPLS LDP packets.
Perform the following steps on the ingress:
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run mpls ldp ttl-mode { pipe | uniform }
An MPLS TTL processing mode is set.
d. Run commit
The configuration is committed.
● Set a TTL processing mode for MPLS TE packets.
Perform the following steps on the ingress:
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run mpls te ttl-mode { pipe | uniform }
An MPLS TTL processing mode is set.
d. Run commit
The configuration is committed.
● Set a TTL processing mode for an MPLS SR.
Perform the following steps on the ingress PE:
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run mpls sr ttl-mode { pipe | uniform }
An MPLS TTL processing mode is set.
d. Run commit
The configuration is committed.
● Set a TTL processing mode for a BGP LSP egress.
Perform the following steps on the egress PE:
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run mpls bgp ttl-mode { pipe | uniform } egress
A TTL processing mode is set for a BGP LSP egress.
d. Run commit
The configuration is committed.
– The MPLS TTL processing modes for both the private network label and the public network label are uniform.
When an IP packet passes through an MPLS network, the IP TTL decreases by one on the ingress and is copied to the MPLS TTL of the private network label. The MPLS TTL of the private network label is then copied to the MPLS TTL of the public network label. The packet is then processed in the standard TTL mode on the MPLS network. The egress removes the public network label, copies the MPLS TTL in the public network label to the MPLS TTL in the private network label, and removes the private network label. The egress then reduces the MPLS TTL by one and sets the IP TTL to the smaller of the MPLS TTL and the IP TTL. Figure 1-1 illustrates this MPLS TTL processing mode.
– The MPLS TTL processing mode for the private network label is pipe, and that for the public network label is uniform.
When an IP packet passes through an MPLS network, the ingress reduces the IP TTL and copies the MPLS TTL of the private network label, which is fixed at 255, to the MPLS TTL of the public network label. The packet is then processed in the standard TTL mode on the MPLS network. The egress removes the public network label, copies its MPLS TTL to the MPLS TTL in the private network label, and removes the private network label. The IP TTL is reduced by one only on the egress. Figure 1-2 illustrates this MPLS TTL processing mode.
– The MPLS TTL processing mode for the private network label is uniform, and that for the public network label is pipe.
When an IP packet passes through an MPLS network, the ingress reduces the IP TTL by one and copies the IP TTL to the MPLS TTL of the private network label. The MPLS TTL of the public network label is fixed at 255. The packet is then processed in the standard TTL mode on the MPLS network. The egress removes the public network label and the private network label in sequence, reduces the MPLS TTL by one, and sets the IP TTL to the smaller of the MPLS TTL and the IP TTL. Figure 1-3 illustrates this MPLS TTL processing mode.
– The MPLS TTL processing modes for both the private network label and the public network label are pipe.
When an IP packet passes through an MPLS network, the IP TTL decreases by one on the ingress. The MPLS TTLs in the private network label and the public network label are fixed at 255. The packet is then processed in the standard TTL mode on the MPLS network. The egress removes the public network label and the private network label in sequence. The IP TTL is reduced by one only on the egress. Figure 1-4 illustrates this MPLS TTL processing mode.
----End
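Each of the alternative procedures above boils down to three commands in the MPLS view. As an illustrative sketch only (the device name and two-stage commit prompts are assumptions), setting the pipe mode for LDP on the ingress looks like this:

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] mpls ldp ttl-mode pipe
[*HUAWEI-mpls] commit
```

If LDP sessions already exist, the new mode applies only to new sessions; run the reset mpls ldp command to re-establish existing sessions, as stated in the NOTE above.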
Context
Perform the following steps on the ingress and egress:
Procedure
Step 1 Run system-view
----End
Context
The router supports per-flow and per-packet load balancing modes.
● Per-packet load balancing: evenly balances packets, regardless of packet
sequences.
● Per-flow load balancing: balances packets by flow and forwards the packets of each flow in the correct sequence.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run load-balance { flow | packet } { slot slot-id | all }
A packet load balancing mode is configured.
Step 3 Run commit
The configuration is committed.
----End
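The procedure can be sketched as follows; this illustrative example applies per-flow load balancing to all slots:

```
<HUAWEI> system-view
[~HUAWEI] load-balance flow all
[*HUAWEI] commit
```

Per-flow mode keeps the packets of each flow in sequence, whereas per-packet mode spreads load more evenly at the cost of possible reordering.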
Context
As networks grow in scale and complexity, devices of different specifications are deployed on the same network. If the MTU configured on the ingress PE is greater than the MRU configured on the egress PE, and the egress PE is configured to discard packets larger than the MRU, packets are transparently transmitted from the ingress to the egress when packet fragmentation is not enabled on the MPLS label switching node, and some packets may be discarded.
Prerequisites
Complete the following task:
● Configure MPLS functions.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run slot slot-id
The slot view is displayed.
Step 3 Run mpls fragment enable
MPLS fragmentation is enabled for the MPLS label switching node.
The configuration takes effect only on MPLS packets that carry no more than five labels and whose inner packets are IPv4 packets.
Step 4 Run commit
The configuration is committed.
----End
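A minimal sketch of the procedure, assuming the label switching node's board is in slot 1 (the slot number is an example):

```
<HUAWEI> system-view
[~HUAWEI] slot 1
[*HUAWEI-slot-1] mpls fragment enable
[*HUAWEI-slot-1] commit
```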
1.1.2.6 Configuring the TTL and EXP Processing Mode When the Explicit Null
Label Is Used
This section describes how to configure the TTL and EXP processing mode when
the explicit null label is used.
Context
If the explicit null label and the uniform mode are configured, an egress PE copies
the TTL and EXP values to the IP or inner packets before forwarding them out of a
public network. If the explicit null label and the pipe mode are configured, an
egress PE does not copy the TTL or EXP values to the IP or inner packets.
Prerequisites
Complete the following tasks:
● Configure MPLS functions.
● Configure the egress PE to assign the explicit null label upstream.
Precautions
The configuration takes effect only on the egress PE.
Procedure
Step 1 Run system-view
The system view is displayed.
uniform: The egress PE copies the TTL value in an MPLS packet to the TTL field in
the IP or inner packet.
pipe: The egress PE does not copy the TTL value in an MPLS packet to the TTL
field in the IP or inner packet.
uniform: The egress PE copies the EXP value in an MPLS packet to the EXP field in
the IP or inner packet.
pipe: The egress PE does not copy the EXP value in an MPLS packet to the EXP
field in the IP or inner packet.
----End
1.1.2.7 Configuring TTL and EXP Processing Modes for MPLS Packets with
Label 7
This section describes how to configure the mode of handling the TTL and EXP
fields carried in MPLS packets with label 7 on the egress PE.
Context
When MPLS packets with label 7 are sent out of a public network, the mode of
handling TTL and EXP fields can be set to uniform or pipe.
Procedure
Step 1 Run system-view
----End
1.1.2.8 Enabling the Mode of Decoupling a Service Next Hop from LDP and
SR-MPLS BE
This section describes how to configure the mode of decoupling a service next hop
from LDP and SR-MPLS BE.
Context
When services with private-and-public-network separation recurse to an LDP or SR-MPLS BE tunnel, packets may be dropped when the outbound interface of the LDP or SR-MPLS BE tunnel changes. To reduce the packet loss rate and improve convergence performance upon link changes, decouple service next hops from LDP and SR-MPLS BE.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp-srbe convergence enhance
The mode of decoupling a service next hop from LDP and SR-MPLS BE is enabled.
Step 3 Run commit
The configuration is committed.
----End
Result
● Run the display mpls convergence mode command to check whether the
mode of decoupling a service next hop from LDP and SR-MPLS BE has taken
effect.
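The configuration and its verification can be sketched as:

```
<HUAWEI> system-view
[~HUAWEI] mpls ldp-srbe convergence enhance
[*HUAWEI] commit
[~HUAWEI] display mpls convergence mode
```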
Context
When both VPLS and MPLS services are configured, private-and-public-network
separation for MPLS services triggers the generation of a FES table on the
forwarding plane to implement the separation of the private and public networks
for MPLS services. This function helps improve convergence performance if a
tunnel fails.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls multistep separate enable
Private-and-public-network separation is enabled for MPLS services.
Before running this command, run the mpls vpls convergence separate enable
command to enable VPLS public-and-private network decoupling.
Step 3 Run commit
The configuration is committed.
----End
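Because this command requires VPLS public-and-private network decoupling to be enabled first, a sketch of the combined configuration looks like this:

```
<HUAWEI> system-view
[~HUAWEI] mpls vpls convergence separate enable
[*HUAWEI] mpls multistep separate enable
[*HUAWEI] commit
```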
Usage Scenario
The following MPLS parameters can be adjusted:
● Labels related to penultimate hop popping (PHP)
The PHP function is configured on the egress. The egress assigns labels to the
penultimate node based on the PHP status.
● MPLS maximum transmission unit (MTU) on MPLS interfaces
The MPLS MTU defines the maximum number of bytes in an MPLS packet
that an interface can forward without fragmenting the packet. The default
MPLS MTU on an interface is equal to the interface MTU.
Pre-configuration Task
Before adjusting MPLS parameters, configure MPLS.
Procedure
Step 1 Run system-view
PHP takes effect on LSPs that are to be established, but not on existing LSPs.
----End
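The PHP command itself is not shown in this excerpt. On Huawei VRP devices, the label assigned to the penultimate node is typically controlled with the label advertise command in the MPLS view on the egress; the following sketch is an assumption for illustration, not a confirmed procedure for this product version:

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] label advertise explicit-null
[*HUAWEI-mpls] commit
```

As stated above, the change takes effect only on LSPs established after the configuration, not on existing LSPs.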
Context
The MPLS MTU is a forwarding plane parameter and is irrelevant to LSP
establishment. The dependency between the MPLS MTU and the interface MTU is
as follows:
Procedure
Step 1 Run system-view
The configured MPLS MTU takes effect immediately, and there is no need to
restart the interface.
NOTE
If the MPLS MTU configured using the mpls mtu command is to be used as the MPLS forwarding MTU, run the mpls path-mtu independent command to allow the MPLS MTU to take effect without being affected by the interface MTU. The mpls path-mtu independent command is used when a Huawei device communicates with a non-Huawei device, which prevents low MPLS forwarding efficiency stemming from different MTU implementations.
----End
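As a sketch, assuming a hypothetical interface GigabitEthernet 0/1/0 and an MPLS MTU of 1500 bytes:

```
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet 0/1/0
[*HUAWEI-GigabitEthernet0/1/0] mpls mtu 1500
[*HUAWEI-GigabitEthernet0/1/0] commit
```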
Prerequisites
MPLS parameters have been adjusted.
Procedure
Step 1 Run the display mpls rsvp-te interface [ interface-type interface-number ]
command to check MPLS TE-enabled interface information, including the interface
MTU.
Step 2 Run the display mpls ldp interface [ interface-type interface-number | verbose ]
command to check the MTU information of MPLS LDP-enabled interfaces.
----End
Usage Scenario
If the proportion of used MPLS resources, such as LSPs and dynamic labels, to all supported resources reaches a specified upper limit, new MPLS services may fail to be established because of insufficient resources. To facilitate operation and maintenance, you can set an upper alarm threshold for MPLS resource usage. If MPLS resource usage reaches the specified upper alarm threshold, an alarm is generated.
Pre-configuration Tasks
Before configuring MPLS resource threshold-related alarms, configure basic MPLS
functions.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls ldp-lsp-number threshold-alarm upper-limit upper-limit-value lower-
limit lower-limit-value
The upper and lower thresholds of alarms for LDP LSP usage are configured.
The parameters in this command are described as follows:
● upper-limit-value specifies the upper threshold of alarms for LDP LSP usage.
An alarm is generated when the proportion of established LDP LSPs to total
supported LDP LSPs reaches the upper limit.
● lower-limit-value specifies the lower threshold of clear alarms for LDP LSP
usage. A clear alarm is generated when the proportion of established LDP
LSPs to total supported LDP LSPs falls below the lower limit.
● The value of upper-limit-value must be greater than that of lower-limit-value.
NOTE
● This command configures the alarm thresholds for LDP LSP usage. The alarm indicating
that the number of LSPs has reached the upper threshold is generated only when the
snmp-agent trap enable feature-name mpls_lspm trap-name
{ hwmplslspthresholdexceed } command is configured and the actual LDP LSP usage
reaches the upper alarm threshold. The alarm indicating that the number of LSPs has
fallen below the lower threshold is generated only when the snmp-agent trap enable
feature-name mpls_lspm trap-name { hwmplslspthresholdexceedclear } command is
configured and the actual LDP LSP usage falls below the lower clear-alarm threshold.
● After the snmp-agent trap enable feature-name mpls_lspm trap-name
{ hwmplslsptotalcountexceed | hwmplslsptotalcountexceedclear } command is run
to enable LSP limit-crossing alarm and LSP limit-crossing clear alarm, an alarm is
generated in the following situations:
– If the total number of LDP LSPs reaches the upper limit, a limit-crossing alarm is
generated.
– If the total number of LDP LSPs falls below 95% of the upper limit, a limit-crossing
clear alarm is generated.
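Combining the steps above with the trap-enabling commands from the note, a configuration sketch might look as follows (the 80/75 threshold values are illustrative):

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] mpls ldp-lsp-number threshold-alarm upper-limit 80 lower-limit 75
[*HUAWEI-mpls] quit
[*HUAWEI] snmp-agent trap enable feature-name mpls_lspm trap-name hwmplslspthresholdexceed
[*HUAWEI] snmp-agent trap enable feature-name mpls_lspm trap-name hwmplslspthresholdexceedclear
[*HUAWEI] commit
```

With this configuration, an alarm is generated when established LDP LSPs reach 80% of the supported maximum, and a clear alarm is generated when usage falls below 75%.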
----End
Context
If dynamic labels are exhausted but the system receives new dynamic label
requests, it cannot satisfy them. A module that fails to obtain labels works
abnormally. The modules that apply for labels include MPLS TE, MPLS LDP, BGP,
L3VPN, and L2VPN.
To facilitate operation and maintenance, you can set dynamic label thresholds for
triggering alarms to alert users.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls dynamic-label-number threshold-alarm upper-limit upper-limit-value
lower-limit lower-limit-value
The thresholds for triggering dynamic label alarms are set.
You can set the following parameters:
● Each command only configures the trigger conditions for an alarm and its clear alarm.
Although trigger conditions are met, the alarm and its clear alarm can be generated
only after the snmp-agent trap enable feature-name mpls_lspm trap-name
{ hwMplsDynamicLabelThresholdExceed |
hwMplsDynamicLabelThresholdExceedClear } command is run to enable the device to
generate a dynamic label insufficiency alarm and its clear alarm.
● After the snmp-agent trap enable feature-name mpls_lspm trap-name
{ hwMplsDynamicLabelTotalCountExceed |
hwMplsDynamicLabelTotalCountExceedClear } command is run to enable the device
to generate limit-reaching alarms and their clear alarms, the following situations occur:
– If the number of dynamic labels reaches the maximum number of dynamic labels
supported by a device, a limit-reaching alarm is generated.
– If the number of dynamic labels falls below 95% of the maximum number of
dynamic labels supported by the device, a clear alarm is generated.
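Analogous to the LDP LSP thresholds, a configuration sketch for dynamic label alarms (the threshold values are illustrative; the trap names are those listed above):

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] mpls dynamic-label-number threshold-alarm upper-limit 80 lower-limit 70
[*HUAWEI-mpls] quit
[*HUAWEI] snmp-agent trap enable feature-name mpls_lspm trap-name hwMplsDynamicLabelThresholdExceed
[*HUAWEI] snmp-agent trap enable feature-name mpls_lspm trap-name hwMplsDynamicLabelThresholdExceedClear
[*HUAWEI] commit
```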
----End
Procedure
Step 1 Run system-view
The upper and lower alarm thresholds for the number of outsegment entries
are configured.
● Run mpls mldp-tree-number threshold-alarm upper-limit upper-limit-value
lower-limit lower-limit-value
The upper and lower alarm thresholds for the number of mLDP LSPs are
configured.
Configure conditions that trigger threshold-reaching alarms and the
corresponding clear alarms for other LDP resources.
● Run mpls mldp-branch-number threshold-alarm upper-limit upper-limit-
value lower-limit lower-limit-value
The upper and lower alarm thresholds for the number of mLDP sub-LSPs are
configured.
By default, the upper threshold for alarms is 80% and the lower threshold for
clear alarms is 75%.
NOTE
● Each command only configures the trigger conditions for an alarm and its clear alarm.
Although trigger conditions are met, the alarm and its clear alarm can be generated
only after the snmp-agent trap enable feature-name mpls_lspm trap-name
{ hwmplsresourcethresholdexceed | hwmplsresourcethresholdexceedclear }
command is run to enable the device to generate an LDP resource insufficiency alarm
and its clear alarm.
● After the snmp-agent trap enable feature-name mpls_lspm trap-name
{ hwmplsresourcetotalcountexceed | hwmplsresourcetotalcountexceedclear }
command is run to enable the device to generate an LDP resource insufficiency alarm
and its clear alarm, note the following issues:
– If the number of used LDP resources reaches the maximum number of LDP
resources supported by a device, a maximum number-reaching alarm is generated.
– If the number of used LDP resources falls to 95% or below of the maximum
number of LDP resources supported by a device, a clear alarm is generated.
----End
Procedure
Step 1 Run system-view
● Each command only configures the trigger conditions for an alarm and its clear alarm.
Although trigger conditions are met, the alarm and its clear alarm can be generated
only after the snmp-agent trap enable feature-name mpls_lspm trap-name
{ hwmplsresourcethresholdexceed | hwmplsresourcethresholdexceedclear }
command is run to enable the device to generate an MPLS resource insufficiency alarm
and its clear alarm.
● After the snmp-agent trap enable feature-name mpls_lspm trap-name
{ hwmplsresourcetotalcountexceed | hwmplsresourcetotalcountexceedclear }
command is run to enable the device to generate limit-reaching alarms and their clear
alarms, the following situations occur:
– If the number of used TE resources reaches the maximum number of TE resources
supported by a device, a limit-reaching alarm is generated.
– If the number of used TE resources falls below 95% of the maximum number of TE
resources supported by a device, a clear alarm is generated.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls rsvp-lsp-number threshold-alarm upper-limit upper-limit-value
lower-limit lower-limit-value
The upper and lower thresholds of alarms for RSVP LSP usage are configured.
The parameters in this command are described as follows:
● upper-limit-value specifies the upper threshold of alarms for RSVP LSP usage.
An alarm is generated when the proportion of established RSVP LSPs to total
supported RSVP LSPs reaches the upper limit.
● lower-limit-value specifies the lower threshold of clear alarms for RSVP LSP
usage. A clear alarm is generated when the proportion of established RSVP
LSPs to total supported RSVP LSPs falls below the lower limit.
● The value of upper-limit-value must be greater than that of lower-limit-value.
Step 4 Run commit
The configuration is committed.
----End
TE
Network congestion is a major cause of backbone network performance
deterioration. Congestion results either from insufficient network resources or
from improper local resource allocation. In the former case, expanding network
capacity can prevent the problem. In the latter case, TE can be used to properly
allocate network resources and relieve the congestion.
MPLS TE
MPLS TE establishes constraint-based routed label switched paths (LSPs) and
transparently transmits traffic over the LSPs. Based on certain constraints, the LSP
path is controllable, and links along the LSP reserve sufficient bandwidth for
service traffic. In the case of resource insufficiency, the LSP with a higher priority
can preempt the bandwidth of the LSP with a lower priority to meet the
requirements of the service with a higher priority. In addition, when an LSP fails or
a node on the network is congested, MPLS TE can provide protection through Fast
Reroute (FRR) and a backup path. MPLS TE allows network administrators to
deploy LSPs to properly allocate network resources and prevent network
congestion. As the number of LSPs increases, you can use a dedicated offline tool
to analyze traffic.
Usage Scenario
A static CR-LSP is easy to configure: Labels are manually allocated, and no
signaling protocol is used to exchange control packets. The setup of a static CR-
LSP consumes only a few resources, and you do not need to configure IGP TE or
CSPF for the static CR-LSP. However, static CR-LSPs cannot be dynamically
adjusted according to network changes. Therefore, static CR-LSPs have limited
applications.
The static CR-LSP configurations involve the operations on the following types of
nodes:
● Ingress: An LSP forwarding entry is configured, and an LSP configured on the
ingress is bound to the TE tunnel interface.
● Transit node: An LSP forwarding entry is configured.
● Egress: An LSP forwarding entry is configured.
Pre-configuration Tasks
Before configuring a static CR-LSP, complete the following tasks:
● Configure the static route or IGP to implement the reachability between LSRs.
● Configure an LSR ID for each LSR.
● Enable MPLS globally and on interfaces on all LSRs.
Context
Perform the following steps on each node along a static CR-LSP:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls te
MPLS TE is enabled on the node globally.
Before you enable MPLS TE on each interface, enable MPLS TE globally in the
MPLS view.
Step 4 Run quit
Return to the system view.
Step 5 Run interface interface-type interface-number
The interface view is displayed.
Step 6 Run mpls
MPLS is enabled on the interface.
Step 7 Run mpls te
MPLS TE is enabled on the interface.
NOTE
When MPLS TE is disabled in the interface view, all CR-LSPs on the interface go
Down.
When MPLS TE is disabled in the MPLS view, MPLS TE is disabled on each interface,
and all CR-LSPs are torn down.
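The steps above can be sketched as follows (the interface name is illustrative):

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] mpls te
[*HUAWEI-mpls] quit
[*HUAWEI] interface GigabitEthernet0/1/8
[*HUAWEI-GigabitEthernet0/1/8] mpls
[*HUAWEI-GigabitEthernet0/1/8] mpls te
[*HUAWEI-GigabitEthernet0/1/8] commit
```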
----End
Context
Perform the following steps on each node along the CR-LSP:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The MPLS-TE-enabled interface view is displayed.
Step 3 Run mpls te bandwidth max-reservable-bandwidth max-bw-value
The maximum reservable bandwidth of the link is set.
Step 4 Run mpls te bandwidth bc0 bc0-bw-value
The BC bandwidth of the link is set.
NOTE
● The maximum reservable bandwidth of a link cannot be higher than the actual
bandwidth of the link. A maximum of 80% of the actual bandwidth of the link is
recommended for the maximum reservable bandwidth of the link.
● The BC0 bandwidth cannot be higher than the maximum reservable bandwidth of the
link.
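For example, on a link whose actual bandwidth is 100 Mbit/s, the recommended 80% cap corresponds to 80000 kbit/s. A sketch following the steps above (interface name and bandwidth values, in kbit/s, are illustrative):

```
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet0/1/8
[~HUAWEI-GigabitEthernet0/1/8] mpls te bandwidth max-reservable-bandwidth 80000
[*HUAWEI-GigabitEthernet0/1/8] mpls te bandwidth bc0 50000
[*HUAWEI-GigabitEthernet0/1/8] commit
```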
----End
Context
Perform the following steps on the ingress of a static CR-LSP:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel interface-number
The tunnel interface is created, and the tunnel interface view is displayed.
Step 3 To configure the IP address of the tunnel interface, select one of the following
commands.
● To specify the IP address of the tunnel interface, run ip address ip-address
{ mask | mask-length } [ sub ]
The secondary IP address of the tunnel interface can be configured only after
the primary IP address is configured.
● To borrow an IP address from another interface, run ip address unnumbered
interface interface-type interface-number
The destination address of the tunnel is configured, which is usually the LSR ID of
the egress node.
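A sketch of an ingress MPLS TE tunnel interface for a static CR-LSP (the tunnel number, tunnel ID, and destination address are illustrative; the destination is usually the egress LSR ID):

```
<HUAWEI> system-view
[~HUAWEI] interface Tunnel 10
[*HUAWEI-Tunnel10] ip address unnumbered interface LoopBack0
[*HUAWEI-Tunnel10] tunnel-protocol mpls te
[*HUAWEI-Tunnel10] destination 3.3.3.3
[*HUAWEI-Tunnel10] mpls te signal-protocol cr-static
[*HUAWEI-Tunnel10] mpls te tunnel-id 100
[*HUAWEI-Tunnel10] commit
```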
----End
Context
When dynamic services, or both static and dynamic services, are configured, a
device checks only static bandwidth usage when a static CR-LSP or a static
bidirectional co-routed LSP is configured. The configuration succeeds even if
the interface bandwidth is insufficient, and the interface status is Down. To
prevent such an issue, global dynamic bandwidth pre-verification can be
configured. With this function enabled, the device displays a message indicating
that the configuration fails in the preceding situation.
Procedure
Step 1 Run system-view
----End
Context
Perform the following steps on the ingress of a static CR-LSP:
Procedure
Step 1 Run system-view
tunnel interface-number specifies the MPLS TE tunnel interface that uses this
static CR-LSP. By default, the Bandwidth Constraints value is ct0, and the value of
bandwidth is 0. The bandwidth used by the tunnel cannot be higher than the
maximum reservable bandwidth of the link.
The next hop or outgoing interface is determined by the route from the ingress to
the egress. For the difference between the next hop and outbound interface, see
"Static Route Configuration" in Configuration Guide - IP Routing.
----End
Context
If the static CR-LSP has only the ingress and egress, configuring a transit node is
not needed. If the static CR-LSP has one or more transit nodes, perform the
following steps on each transit node:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run static-cr-lsp transit lsp-name incoming-interface interface-type interface-
number in-label in-label { nexthop next-hop-address | outgoing-interface
interface-type interface-number } * out-label out-label [ bandwidth [ ct0 ]
bandwidth ]
The transit node of the static CR-LSP is configured.
To modify any parameter except lsp-name, run the static-cr-lsp transit command
again; you do not need to run the undo static-cr-lsp transit command first.
The value of lsp-name on the transit node and the egress node cannot be the
same as an existing name on the node. There are no other restrictions on the
value.
If an Ethernet interface is used as the outbound interface of an LSP, the nexthop
next-hop-address parameter must be configured to ensure proper traffic
forwarding along the LSP.
Step 3 Run commit
The configuration is committed.
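A transit-node sketch based on the command above (the LSP name, interface, labels, and next-hop address are illustrative; the outgoing label must match the incoming label on the next hop):

```
<HUAWEI> system-view
[~HUAWEI] static-cr-lsp transit lsp1 incoming-interface GigabitEthernet0/1/8 in-label 100 nexthop 10.1.2.2 out-label 200
[*HUAWEI] commit
```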
----End
Context
Perform the following steps on the egress of the static CR-LSP:
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Prerequisites
The static CR-LSP has been configured.
Procedure
● Run the display mpls static-cr-lsp [ lsp-name ] [ verbose ] command to
check information about the static CR-LSP.
● Run the display mpls te tunnel [ destination ip-address ] [ lsp-id ingress-lsr-
id session-id local-lsp-id ] [ lsr-role { all | egress | ingress | remote |
transit } ] [ name tunnel-name ] [ { incoming-interface | interface |
outgoing-interface } interface-type interface-number ] [ verbose ] command
to check information about the tunnel.
● Run the display mpls te tunnel statistics or display mpls lsp statistics
command to check the tunnel statistics.
● Run the display mpls te tunnel-interface command to check information
about the tunnel interface on the ingress.
----End
Usage Scenario
A static CR-LSP is easy to configure because its labels are manually assigned, and
no signaling protocol is used to exchange control packets. The setup of a static
CR-LSP consumes only a few resources, and you do not need to configure IGP TE
or CSPF for the static CR-LSP. However, a static CR-LSP cannot dynamically adapt
to network changes, so its applications are limited.
NOTE
● The value of the outgoing label on each node is the value of the incoming label on its
next hop.
● The destination address of a static bidirectional co-routed LSP is the destination
address specified on the tunnel interface.
Pre-configuration Tasks
Before configuring a static bidirectional co-routed LSP, complete the following
tasks:
Context
Perform the following steps on each node along the CR-LSP:
Procedure
Step 1 Run system-view
To enable MPLS TE on each interface, you must first enable MPLS TE globally in
the MPLS view.
NOTE
When MPLS TE is disabled in the interface view, all CR-LSPs on the interface go
Down.
When MPLS TE is disabled in the MPLS view, MPLS TE is disabled on each interface,
and all CR-LSPs are deleted.
----End
Context
Plan bandwidths on links before you perform this procedure. The reserved
bandwidth must be higher than or equal to the bandwidth required by MPLS TE
traffic. Perform the following steps on each node along the CR-LSP to be
established:
Procedure
Step 1 Run system-view
----End
Context
Perform the following steps on the ingress of a static bidirectional co-routed LSP:
Procedure
Step 1 Run system-view
Step 3 To configure an IP address for the tunnel interface, run either of the following
commands:
● To assign an IP address to the tunnel interface, run ip address ip-address
{ mask | mask-length } [ sub ]
The primary IP address must be configured prior to the secondary IP address
for the tunnel interface.
● To configure the tunnel interface to borrow the IP address of another
interface, run ip address unnumbered interface interface-type interface-
number
The destination address is configured for the tunnel. It is usually the LSR ID of the
egress.
----End
Context
When dynamic services, or both static and dynamic services, are configured, a
device checks only static bandwidth usage when a static CR-LSP or a static
bidirectional co-routed LSP is configured. The configuration succeeds even if
the interface bandwidth is insufficient, and the interface status is Down. To
prevent such an issue, global dynamic bandwidth pre-verification can be
configured. With this function enabled, the device displays a message indicating
that the configuration fails in the preceding situation.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
MPLS is enabled globally, and the MPLS view is displayed.
Step 3 Run mpls te
MPLS TE is enabled globally.
Global dynamic bandwidth pre-verification can only be configured after MPLS TE
is enabled globally.
Step 4 Run mpls te static-cr-lsp bandwidth-check deduct-rsvp-bandwidth
Global dynamic bandwidth pre-verification is enabled.
Step 5 Run commit
The configuration is committed.
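The steps above condense to the following sketch:

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] mpls te
[*HUAWEI-mpls] mpls te static-cr-lsp bandwidth-check deduct-rsvp-bandwidth
[*HUAWEI-mpls] commit
```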
----End
Context
Perform the following steps on the ingress of a static bidirectional co-routed LSP:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bidirectional static-cr-lsp ingress tunnel-name
----End
Context
Skip this procedure if a static bidirectional co-routed LSP has only an ingress and
an egress. If a static bidirectional co-routed LSP has a transit node, perform the
following steps on this transit node:
Procedure
Step 1 Run system-view
The value of lsp-name cannot be the same as an existing LSP name on the device.
----End
Context
Perform the following steps on the egress of a static bidirectional co-routed CR-
LSP:
Procedure
Step 1 Run system-view
The bandwidth parameter specifies the reserved bandwidth for a reverse CR-LSP.
The bandwidth value cannot be higher than the maximum reservable link
bandwidth. If the specified bandwidth is higher than the maximum reservable link
bandwidth, the CR-LSP cannot go Up.
----End
Context
Perform the following steps on the egress of a static bidirectional co-routed LSP:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel interface-number
A tunnel interface is created, and the tunnel interface view is displayed.
Step 3 Run tunnel-protocol mpls te
MPLS TE is configured as a tunnel protocol to create an MPLS TE tunnel.
Step 4 Run destination ip-address
The destination address is configured for the tunnel. It is usually the LSR ID of the
ingress.
Various types of tunnels have different requirements for destination addresses. If a
tunnel protocol is changed to MPLS TE, the destination address configured using
the destination command is automatically deleted and needs to be reconfigured.
Step 5 Run mpls te tunnel-id tunnel-id
The tunnel ID is configured.
Step 6 Run mpls te signal-protocol cr-static
A static CR-LSP is configured.
Step 7 Run mpls te passive-tunnel
The reverse tunnel attribute is configured.
Step 8 Run mpls te binding bidirectional static-cr-lsp egress tunnel-name
The tunnel interface is bound to the specified static bidirectional co-routed LSP.
Step 9 Run commit
The configuration is committed.
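An egress-side sketch of the steps above (the tunnel number, tunnel ID, destination address, and LSP name lsp1 are illustrative; the destination is usually the ingress LSR ID):

```
<HUAWEI> system-view
[~HUAWEI] interface Tunnel 20
[*HUAWEI-Tunnel20] tunnel-protocol mpls te
[*HUAWEI-Tunnel20] destination 1.1.1.1
[*HUAWEI-Tunnel20] mpls te tunnel-id 200
[*HUAWEI-Tunnel20] mpls te signal-protocol cr-static
[*HUAWEI-Tunnel20] mpls te passive-tunnel
[*HUAWEI-Tunnel20] mpls te binding bidirectional static-cr-lsp egress lsp1
[*HUAWEI-Tunnel20] commit
```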
----End
Prerequisites
A static bidirectional co-routed LSP has been established.
Procedure
● Run the display mpls te bidirectional static-cr-lsp [ lsp-name ] [ verbose ]
command to check the specified static bidirectional co-routed LSP.
----End
Context
Usage Scenario
MPLS networks face the following challenges:
● MPLS TE tunnels that transmit services are unidirectional. The ingress
forwards services to the egress along an MPLS TE tunnel. The egress forwards
services to the ingress over IP routes. As a result, the services may be
congested because IP links do not reserve bandwidth for these services.
● Two MPLS TE tunnels in opposite directions are established between the
ingress and egress. If a fault occurs on an MPLS TE tunnel, a traffic switchover
can only be performed for the faulty tunnel, but not for the reverse tunnel. As
a result, traffic is interrupted.
A forward CR-LSP and a reverse CR-LSP between two nodes are established. Each
CR-LSP is bound to the ingress of its reverse CR-LSP. The two CR-LSPs then form
an associated bidirectional CR-LSP. The associated bidirectional CR-LSP is primarily
used to prevent traffic congestion. If a fault occurs on one end, the other end is
notified of the fault so that both ends trigger traffic switchovers, which ensures
that traffic transmission is uninterrupted.
Pre-configuration Tasks
Before configuring an associated bidirectional CR-LSP, complete either of the
following tasks:
Procedure
Step 1 Run system-view
----End
Usage Scenario
CR-LSP backup provides an end-to-end path protection for an entire CR-LSP.
Hot-standby CR-LSPs and ordinary backup CR-LSPs differ as follows:
● Time when a backup CR-LSP is established: A hot-standby CR-LSP is created
immediately after the primary CR-LSP is established, whereas an ordinary
backup CR-LSP is created only after the primary CR-LSP fails.
● Primary and backup explicit paths: For hot standby, you can specify whether
the primary and backup paths can overlap; if an explicit path is allowed for a
backup CR-LSP, the explicit path is used as the constraint to set up the
backup CR-LSP. For ordinary backup, the path of the backup CR-LSP can
partially overlap the path of the primary CR-LSP, regardless of whether the
backup CR-LSP is set up over an explicit path.
● Whether a best-effort path is supported: Hot standby supports a best-effort
path; ordinary backup does not.
● Best-effort path
The hot standby function supports the establishment of best-effort paths. If
both the primary and hot-standby CR-LSPs fail, a best-effort path is
established and takes over traffic.
As shown in Figure 1-5, the primary CR-LSP uses the path PE1 -> P1 -> PE2,
and the backup CR-LSP uses the path PE1 -> P2 -> PE2. If both the primary
and backup CR-LSPs fail, the router triggers the setup of a best-effort path
PE1 -> P2 -> P1 -> PE2.
NOTE
A best-effort path does not provide reserved bandwidth for traffic. The affinity attribute
and hop limit are configured as needed.
Pre-configuration Tasks
Before configuring CR-LSP backup, complete the following tasks:
Context
CR-LSP hot-standby is disabled by default. After CR-LSP hot-standby is configured
on the ingress of a primary CR-LSP, the system automatically selects a path for a
hot-standby CR-LSP.
Procedure
Step 1 Run system-view
Step 4 Run mpls te backup hot-standby [ mode { revertive [ wtr interval ] | non-
revertive } | wtr [ interval ]]
NOTE
The bypass and backup tunnels cannot be configured on the same tunnel interface. The
mpls te bypass-tunnel and mpls te backup commands cannot be configured on the same
tunnel interface. Also, the mpls te protected-interface and mpls te backup commands
cannot be configured on the same tunnel interface.
To enable the coexistence of FRR switching and MPLS TE HSB, TE FRR must be
deployed on the entire network, HSB must be deployed on the ingress, BFD for TE
LSP must be enabled, and the delayed down function must be enabled on the
outbound interface of the P node. Otherwise, rapid switching cannot be performed
in the case of dual points of failure.
In a scenario where BFD is not configured, only protocol convergence can trigger
TE hot-standby switching if the primary LSP fails. To improve the switching speed,
you can configure CSPF-based fast switching so that hot-standby switching can be
performed quickly.
----End
Context
In best-effort path mode, perform the following steps on the ingress of the
primary tunnel.
NOTE
A best-effort path does not provide bandwidth guarantee for traffic. Configure the affinity
attribute and hop limit as needed.
CR-LSP hot standby can work with a best-effort path to further enhance reliability. Ordinary CR-
LSP backup cannot work with a best-effort path.
Procedure
Step 1 Run system-view
Step 4 (Optional) Run mpls te affinity property properties [ mask mask-value ] best-
effort
----End
Context
NOTE
Ordinary backup and best-effort path cannot be configured at the same time for CR-LSPs.
Procedure
Step 1 Run system-view
----End
Prerequisites
CR-LSP backup has been configured.
Procedure
● Run the display mpls te tunnel-interface command to check information
about a tunnel interface on the ingress of a tunnel.
● Run the display mpls te hot-standby state { all [ verbose ] | interface
tunnel-interface-name } command to check information about the hot-
standby status.
----End
Usage Scenario
BFD for TE CR-LSP monitors the primary and hot-standby CR-LSPs and triggers
traffic switchovers between them.
NOTE
When static BFD for TE CR-LSP is used and the BFD status is Up, the BFD status remains Up
even after the tunnel interface of the CR-LSP is shut down.
Pre-configuration Tasks
Before configuring static BFD for TE CR-LSP, configure an RSVP-TE tunnel or CR-
LSP backup.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bfd
BFD is enabled globally on the local node, and the BFD view is displayed.
Configurations relevant to BFD can be performed only after the bfd command is
run globally.
----End
Procedure
Step 1 Run system-view
Step 2 Run bfd session-name bind mpls-te interface interface-type interface-number te-
lsp [ backup ]
If the backup parameter is specified, the BFD session is bound to the backup CR-
LSP.
NOTE
The local discriminator of the local device must be the same as the remote
discriminator of the remote device, and the remote discriminator of the local
device must be the same as the local discriminator of the remote device. A
discriminator inconsistency causes BFD session establishment to fail.
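A sketch of binding a BFD session to a TE LSP (the session name, tunnel number, and discriminator values are illustrative; the discriminator local and discriminator remote commands are not shown in this section and are assumed here, and the values must mirror each other on the two ends):

```
<HUAWEI> system-view
[~HUAWEI] bfd pe1tope2 bind mpls-te interface Tunnel 10 te-lsp
[*HUAWEI-bfd-lsp-session-pe1tope2] discriminator local 10
[*HUAWEI-bfd-lsp-session-pe1tope2] discriminator remote 20
[*HUAWEI-bfd-lsp-session-pe1tope2] commit
```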
● Effective local interval at which BFD packets are sent = MAX { Configured
local interval at which BFD packets are sent, Configured remote interval at
which BFD packets are received }
● Effective local interval at which BFD packets are received = MAX { Configured
remote interval at which BFD packets are sent, Configured local interval at
which BFD packets are received }
● Effective local detection interval = Effective local interval at which BFD
packets are received x Configured remote detection multiplier
For example:
● The local interval at which BFD packets are sent is set to 200 ms, the local
interval at which BFD packets are received is set to 300 ms, and the local
detection multiplier is set to 4.
● The remote interval at which BFD packets are sent is set to 100 ms, the
remote interval at which BFD packets are received is set to 600 ms, and the
remote detection multiplier is set to 5.
Then,
● Effective local interval at which BFD packets are sent = MAX { 200 ms, 600
ms } = 600 ms; effective local interval at which BFD packets are received =
MAX { 100 ms, 300 ms } = 300 ms; effective local detection period = 300 ms x
5 = 1500 ms
● Effective remote interval at which BFD packets are sent = MAX { 100 ms, 300
ms } = 300 ms; effective remote receiving interval = MAX { 200 ms, 600 ms } =
600 ms; effective remote detection period = 600 ms x 4 = 2400 ms
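The interval negotiation above can be reproduced with a short calculation. This sketch (plain Python; the function name is illustrative) applies the three MAX rules to the example values:

```python
def effective_intervals(local_tx, local_rx, remote_tx, remote_rx, remote_mult):
    """Compute effective BFD timing from configured values (all in ms)."""
    # Effective send interval = MAX{configured local tx, configured remote rx}
    tx = max(local_tx, remote_rx)
    # Effective receive interval = MAX{configured remote tx, configured local rx}
    rx = max(remote_tx, local_rx)
    # Effective detection period = effective rx interval x remote detection multiplier
    detect = rx * remote_mult
    return tx, rx, detect

# Local side: tx 200 ms, rx 300 ms; remote side: tx 100 ms, rx 600 ms, multiplier 5
print(effective_intervals(200, 300, 100, 600, 5))  # local view: (600, 300, 1500)
# Remote view swaps the roles; the local multiplier is 4
print(effective_intervals(100, 600, 200, 300, 4))  # remote view: (300, 600, 2400)
```

The outputs match the worked example in the text: a 600 ms effective local send interval, a 300 ms effective local receive interval, and a 1500 ms local detection period (and 300/600/2400 ms on the remote side).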
Step 6 (Optional) Run min-rx-interval rx-interval
The local minimum interval at which BFD packets are received is configured.
Step 7 (Optional) Run detect-multiplier multiplier
The local BFD detection multiplier is set.
Step 8 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 An IP link, an LSP, or a TE tunnel can be used as the reverse tunnel to inform the
ingress of a fault. If a reverse LSP or TE tunnel exists, use it. If no LSP or TE
tunnel is established, use an IP link as the reverse tunnel. If the configured
reverse tunnel requires BFD detection, you can configure a pair of BFD sessions
for it. Run the following commands as required:
● Configure a BFD session to monitor reverse channels.
– For an IP link, run bfd session-name bind peer-ip ip-address [ vpn-
instance vpn-name ] [ source-ip ip-address ]
– For an LDP LSP, run bfd session-name bind ldp-lsp peer-ip ip-address
nexthop ip-address [ interface interface-type interface-number ]
– For a CR-LSP, run bfd session-name bind mpls-te interface tunnel
interface-number te-lsp [ backup ]
– For a TE tunnel, run bfd session-name bind mpls-te interface tunnel
interface-number
– A BFD session group is used when two devices are connected through an Eth-Trunk
link and the two member interfaces of the Eth-Trunk interface are located on
different boards. That is, a BFD session group is used for an inter-board Eth-
Trunk interface.
– Two BFD sub-sessions created in a BFD session group are used to detect two inter-
board Eth-Trunk links. The status of a BFD session group depends on the status of
the two BFD sub-sessions. As long as one BFD sub-session is up, the BFD session
group is up. When both sub-sessions are down, the BFD session group is down.
NOTE
A BFD session group does not need to be configured with a local discriminator. Each sub-session
in a BFD session group has its own discriminator.
NOTE
The local discriminator of the local device must be the same as the remote
discriminator of the remote device, and the remote discriminator of the local
device must be the same as the local discriminator of the remote device. A
discriminator inconsistency causes BFD session establishment to fail.
A BFD session group does not need to be configured with a remote discriminator. Each sub-
session in a BFD session group has its own discriminator.
For example:
● The local interval at which BFD packets are sent is set to 200 ms, the local
interval at which BFD packets are received is set to 300 ms, and the local
detection multiplier is set to 4.
● The remote interval at which BFD packets are sent is set to 100 ms, the
remote interval at which BFD packets are received is set to 600 ms, and the
remote detection multiplier is set to 5.
Then,
● Effective local interval at which BFD packets are sent = MAX { 200 ms, 600
ms } = 600 ms; effective local interval at which BFD packets are received =
MAX { 100 ms, 300 ms } = 300 ms; effective local detection period = 300 ms x
5 = 1500 ms
● Effective remote interval at which BFD packets are sent = MAX { 100 ms, 300
ms } = 300 ms; effective remote receiving interval = MAX { 200 ms, 600 ms } =
600 ms; effective remote detection period = 600 ms x 4 = 2400 ms
Step 6 (Optional) Run min-rx-interval rx-interval
The local minimum interval at which BFD packets are received is set.
Step 7 (Optional) Run detect-multiplier multiplier
The BFD detection multiplier is set.
Step 8 Run commit
The configurations are committed.
----End
Prerequisites
The static BFD for TE CR-LSP has been configured.
Procedure
● Run the display bfd session mpls-te interface tunnel-name te-lsp
[ verbose ] command to check information about BFD sessions on the
ingress.
● Run the following commands to check information about BFD sessions on the
egress.
– Run the display bfd session all [ for-ip | for-lsp | for-te ] [ verbose ]
command to check information about all BFD sessions.
– Run the display bfd session static [ for-ip | for-lsp | for-te ] [ verbose ]
command to check information about static BFD sessions.
– Run the display bfd session peer-ip peer-ip [ vpn-instance vpn-name ]
[ verbose ] command to check information about BFD sessions with
reverse IP links.
----End
Usage Scenario
Compared with static BFD, dynamically creating BFD sessions simplifies
configurations and reduces configuration errors.
Currently, dynamic BFD for TE CR-LSP cannot detect faults in the entire TE tunnel.
NOTE
BFD for LSP can function properly even though the forward path is an LSP and the
reverse path is an IP link. However, if the forward and reverse paths are not
established over the same link, BFD cannot identify which path is faulty when a
fault occurs. Before deploying BFD, ensure that the forward and reverse paths are
over the same link so that BFD can correctly identify the faulty path.
Pre-configuration Tasks
Before configuring dynamic BFD for TE CR-LSP, configure an RSVP-TE tunnel.
Context
Perform the following steps on the ingress and the egress of a TE tunnel:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bfd
BFD is enabled globally.
Step 3 Run commit
The configuration is committed.
----End
Context
Perform either of the following operations:
● Enable MPLS TE BFD globally if most TE tunnels on the ingress need to
dynamically create BFD sessions.
● Enable MPLS TE BFD on a tunnel interface if some TE tunnels on the
ingress need to dynamically create BFD sessions.
Procedure
● Enable MPLS TE BFD globally on the ingress.
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run mpls te bfd enable
The capability of dynamically creating BFD sessions is enabled on the TE
tunnel.
After this command is run in the MPLS view, dynamic BFD for TE LSP is
enabled on all tunnel interfaces, excluding the interfaces on which
dynamic BFD for TE LSP is blocked.
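The two enabling options described above can be sketched as follows. This is a hedged example: the tunnel interface number is an assumption, and the per-interface command mirrors the global one as described in this section.

```
# Option 1: enable dynamic BFD for TE LSP on all tunnel interfaces
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te bfd enable
[~HUAWEI-mpls] commit
# Option 2: enable it only on a specific tunnel interface
[~HUAWEI] interface Tunnel1
[~HUAWEI-Tunnel1] mpls te bfd enable
[~HUAWEI-Tunnel1] commit
```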
1.1.3.8.3 Enabling the Capability of Passively Creating BFD Sessions on the Egress
On a unidirectional LSP, the ingress plays the active role and the egress plays the
passive role. Creating a BFD session on the ingress triggers the sending of LSP
ping request messages to the egress. A BFD session can be automatically
established only after the passive end receives the ping packets.
Context
Perform the following steps on the egress:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bfd
The BFD view is displayed.
Step 3 (Optional) Run passive-session udp-port 3784 peer peer-ip
The destination UDP port number is set for the specified passive BFD session.
After this command is run, a BFD session can be created only after the egress
receives an LSP ping request message containing the BFD TLV from the ingress.
----End
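The egress-side steps above can be sketched as follows (a hedged example; the peer address 10.1.1.1 is an assumption for illustration):

```
<HUAWEI> system-view
[~HUAWEI] bfd
[~HUAWEI-bfd] passive-session udp-port 3784 peer 10.1.1.1
[~HUAWEI-bfd] commit
```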
Context
Perform either of the following operations:
● Effective local interval at which BFD packets are sent = MAX { Configured local interval
at which BFD packets are sent, Configured remote interval at which BFD packets are
received }
● Effective local interval at which BFD packets are received = MAX { Configured remote
interval at which BFD packets are sent, Configured local interval at which BFD packets
are received }
● Effective local detection interval = Effective local interval at which BFD packets are
received x Configured remote detection multiplier
On the egress of the TE tunnel enabled with the capability of passively creating BFD
sessions, the default values of the receiving interval, the sending interval, and the detection
multiplier cannot be adjusted. The default values of these three parameters are the
configured minimum values on the ingress. Therefore, the BFD detection interval on the
ingress and that on the egress of a TE tunnel are as follows:
● Effective detection interval on the ingress = Configured interval at which BFD packets
are received on the ingress x 3
● Effective detection interval on the egress = Configured local interval at which BFD
packets are sent on the ingress x Configured detection multiplier on the ingress
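As a worked example of the formulas above (all values assumed for illustration): suppose the ingress is configured with a sending interval of 100 ms, a receiving interval of 200 ms, and a detection multiplier of 3.

```
Effective detection interval on the ingress = 200 ms x 3 = 600 ms
Effective detection interval on the egress  = 100 ms x 3 = 300 ms
```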
Procedure
● Adjust global BFD parameters on the ingress of a TE tunnel.
a. Run system-view
----End
Procedure
● Run the display bfd session dynamic [ verbose ] command to check
information about the BFD session on the ingress.
● Run the display bfd session passive-dynamic [ peer-ip peer-ip remote-
discriminator discriminator ] [ verbose ] command to check information
about the BFD session passively created on the egress.
● Run the following commands to check BFD statistics.
– Run the display bfd statistics command to check statistics about all BFD
sessions.
– Run the display bfd statistics session dynamic command to check
statistics about dynamic BFD sessions.
● Run the display mpls bfd session [ protocol rsvp-te | outgoing-interface
interface-type interface-number ] [ verbose ] command to check information
about the MPLS BFD session.
----End
Usage Scenario
The dynamic RSVP-TE signaling protocol adjusts a path of a TE tunnel to adapt to
network topology changes. To help implement advanced functions, such as TE FRR
or CR-LSP backup, use the RSVP-TE signaling protocol to set up an MPLS TE
tunnel.
Pre-configuration Tasks
Before configuring an RSVP-TE tunnel, complete the following tasks:
Context
NOTE
● If MPLS TE is disabled in the MPLS view, MPLS TE enabled in the interface view is also
disabled. As a result, all CR-LSPs configured on this interface go Down, and all
configurations associated with these CR-LSPs are deleted.
● If MPLS TE is disabled in the interface view, all CR-LSPs on the interface go Down.
● If RSVP-TE is disabled on an LSR, RSVP-TE is also disabled on all interfaces on this LSR.
Procedure
Step 1 Run system-view
NOTICE
The undo mpls command deletes all MPLS configurations, including the
established LDP sessions and LSPs.
----End
Context
To enable the ingress to calculate a complete path, CSPF needs to be configured
on all nodes along a path.
CSPF calculates only the shortest path to the specified tunnel destination. During
path computation, if there are multiple paths with the same weight, the optimal
path is selected using the tie-breaking function.
Tie-breaking is based on the percentage of the available bandwidth to the
maximum reservable bandwidth. The maximum reservable bandwidth is
configured using the mpls te bandwidth max-reservable-bandwidth command,
not the BC bandwidth configured using the mpls te bandwidth bc0 command on
an interface. The following tie-breaking policies are available:
● Most-fill: The path with a larger percentage of the available bandwidth to the
maximum reservable bandwidth is preferred. That is, the path with lower
bandwidth usage is preferred.
● Least-fill: The path with a smaller percentage of the available bandwidth to
the maximum reservable bandwidth is preferred. That is, the path with higher
bandwidth usage is preferred.
● Random: The device selects a path at random. This mode allows LSPs to be
evenly distributed among links, regardless of their bandwidth.
NOTE
The Most-fill and Least-fill modes take effect only when the difference in bandwidth
usage between two links exceeds 10%. For example, if the bandwidth usage of link A is
50% and that of link B is 45%, the difference is only 5%. In this case, the Most-fill
and Least-fill modes do not take effect, and the Random mode is used instead.
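A hedged configuration sketch of CSPF with a tie-breaking policy follows. The tunnel number and the placement of the mpls te tie-breaking command on the tunnel interface are assumptions for illustration:

```
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te cspf
[~HUAWEI-mpls] quit
[~HUAWEI] interface Tunnel1
[~HUAWEI-Tunnel1] mpls te tie-breaking least-fill
[~HUAWEI-Tunnel1] commit
```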
Procedure
Step 1 Run system-view
Step 4 (Optional) Run mpls te cspf preferred-igp { isis [ process-id [ level-1 | level-2 ] ]
| ospf [ process-id [ area area-id ] ] }
A preferred IGP is configured. Its process and area or level can also be configured.
Step 5 (Optional) Run mpls te cspf multi-instance shortest-path [ preferred-igp { isis |
ospf } [ process-id ] ]
CSPF is configured to calculate shortest paths among all IGP processes and areas.
NOTICE
explicit path scenarios. If multiple explicit paths are qualified and have the same
number of hops, the path with the smallest metric is preferentially selected.
CSPF provides a method for selecting a path in the MPLS domain. By default, the
optimization mode is used, and the path is calculated from the egress to the
ingress. Compared with the common calculation method, the optimization mode is
more efficient.
The mpls te cspf optimize-mode disable command disables the CSPF
optimization mode. After this command is run, the path is calculated from the
ingress to the egress.
----End
Context
Either OSPF TE or IS-IS TE can be used:
NOTE
If neither OSPF TE nor IS-IS TE is configured, LSRs generate no TE link state advertisement
(LSA) or TE Link State PDUs (LSPs) and construct no TEDBs.
TE tunnels cannot be used in inter-area scenarios. In an inter-area scenario, an explicit path
can be configured, and the inbound and outbound interfaces of the explicit path must be
specified to prevent a failure to establish the TE tunnel.
● OSPF TE
OSPF TE uses Opaque Type 10 LSAs to carry TE attributes. The OSPF Opaque
capability must be enabled on each LSR. In addition, TE LSAs are generated
only when at least one OSPF neighbor is in the Full state.
● IS-IS TE
IS-IS TE uses the sub-type-length-value (sub-TLV) in the IS-reachable TLV (22)
to carry TE attributes. The IS-IS wide metric attribute must be configured. Its
value can be wide, compatible, or wide-compatible.
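The two options above can be sketched as follows (the process IDs, area ID, and IS-IS level are assumptions for illustration):

```
# OSPF TE
<HUAWEI> system-view
[~HUAWEI] ospf 1
[~HUAWEI-ospf-1] opaque-capability enable
[~HUAWEI-ospf-1] area 0
[~HUAWEI-ospf-1-area-0.0.0.0] mpls-te enable
[~HUAWEI-ospf-1-area-0.0.0.0] commit
# IS-IS TE
[~HUAWEI] isis 1
[~HUAWEI-isis-1] cost-style wide
[~HUAWEI-isis-1] traffic-eng level-2
[~HUAWEI-isis-1] commit
```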
Procedure
● Configure OSPF TE.
a. Run system-view
IS-IS TE is enabled.
There are no unified standards for sub-TLVs that carry DS-TE attributes in
non-IETF mode. To ensure interconnection between devices of different
vendors, you need to manually configure TLV values for these sub-TLVs.
After TLV values are configured for these sub-TLVs, the traffic engineering
database (TEDB) is regenerated, and then TE tunnels are reestablished.
The sub-TLVs can be sent only after TLV values are configured for them.
f. Run commit
The configuration is committed.
----End
Context
TE link attributes are as follows:
● Link bandwidth
The link bandwidth attribute can be set to limit the CR-LSP bandwidth.
NOTE
If no bandwidth is set for a link, the CR-LSP bandwidth may be higher than the
maximum reservable link bandwidth. As a result, the CR-LSP cannot be established.
● TE metric of the link
The IGP metric or TE metric of a link can be used for path calculation of a TE
tunnel. In this manner, the path calculation of the TE tunnel is more
independent of the IGP, and the path of the TE tunnel can be controlled more
flexibly.
● Administrative group and affinity
An affinity determines attributes for links to be used by an MPLS TE tunnel.
The affinity property, together with the link administrative group attribute, is
used to determine which links a tunnel uses.
An affinity can be set using either a hexadecimal number or a name.
– Hexadecimal number: A 32-bit hexadecimal number is set for each
affinity and link administrative group attribute, which causes plan and
computation difficulties. This is the traditional configuration mode of the
NE9000.
– Name: This mode is newly supported by the NE9000. Each bit of the 32-
bit administrative group and affinity attribute is named, which simplifies
configuration and maintenance. This mode is recommended.
● SRLG
A shared risk link group (SRLG) is a set of links that share a physical resource
(for example, an optical fiber) and are therefore likely to fail concurrently. If
one link in an SRLG fails, the other links in the SRLG may also fail.
An SRLG enhances CR-LSP reliability on an MPLS TE network with CR-LSP hot
standby or TE FRR enabled. Two or more links are at the same risk if they
share physical resources. For example, links on an interface and its sub-
interfaces are in an SRLG. Sub-interfaces share risks with their interface. These
sub-interfaces will go down if the interface goes down. If the links of a
primary tunnel and a backup or bypass tunnel are in the same SRLG, the links
of the backup or bypass tunnel share risks with the links of the primary
tunnel. The backup or bypass tunnel will go down if the primary tunnel goes
down.
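The link bandwidth attribute described above can be sketched as follows. This is a hedged example: the interface name and the bandwidth values (in kbit/s) are assumptions, chosen so that the BC0 bandwidth does not exceed the maximum reservable bandwidth, and MPLS TE is assumed to be already enabled on the interface.

```
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet0/1/0
[~HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 80000
[~HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth bc0 50000
[~HUAWEI-GigabitEthernet0/1/0] commit
```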
Procedure
● Configure link bandwidth.
a. Run system-view
NOTE
● The maximum reservable link bandwidth cannot be greater than the physical
link bandwidth. A maximum of 80% of the link bandwidth is recommended
for the maximum reservable link bandwidth.
● The BC0 bandwidth cannot be higher than the maximum reservable link
bandwidth.
e. Run commit
NOTE
● The modified administrative group takes effect only on LSPs that will be
established, not on LSPs that have been established.
● After the modified affinity is committed, the system will recalculate a path for the
TE tunnel, and the established LSPs in this TE tunnel are affected.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The view of the interface on which the MPLS TE tunnel is established is
displayed.
c. Run mpls te link administrative group group-value
An administrative group is configured for the link.
d. Run quit
Return to the system view.
e. Run interface tunnel tunnel-number
The view of the MPLS TE tunnel interface is displayed.
f. Run mpls te affinity property properties [ mask mask-value ]
[ secondary | best-effort ]
An affinity is set for the MPLS TE tunnel.
g. Run commit
The configuration is committed.
● Name hexadecimal bits of an affinity and a link administrative group
attribute.
NOTE
● The modified administrative group takes effect only on LSPs that will be
established, not on LSPs that have been established.
● After the modified affinity is committed, the system will recalculate a path for the
TE tunnel, and the established LSPs in this TE tunnel are affected.
a. Run system-view
The system view is displayed.
b. Run path-constraint affinity-mapping
An affinity name template is configured, and the affinity mapping view is
displayed.
This template must be configured on each node involved in MPLS TE
path computation, and the global mappings between the names and
values of affinity bits must be the same on all the involved nodes.
c. Run attribute bit-name bit-sequence bit-number
A mapping between an affinity bit and name is configured.
There are 32 affinity bits in total. You can repeat this step to configure
some or all affinity bits.
NOTE
NOTE
To delete the SRLG attribute from all interfaces on a device, run the undo mpls te
srlg all-config command in the MPLS view.
g. Run commit
----End
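The affinity name template can be sketched as follows. This is a hedged example: the bit names red and green, the bit positions, and the view prompt are assumptions for illustration; the same mappings must be configured on every node involved in path computation.

```
<HUAWEI> system-view
[~HUAWEI] path-constraint affinity-mapping
[~HUAWEI-pc-affinity-map] attribute red bit-sequence 1
[~HUAWEI-pc-affinity-map] attribute green bit-sequence 2
[~HUAWEI-pc-affinity-map] commit
```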
Context
An explicit path consists of a series of nodes. These nodes are arranged in
sequence and form a vector path. An interface IP address on every node is used to
identify the node on an explicit path. The loopback IP address of the egress node
is usually used as the destination address of an explicit path.
Two adjacent nodes on an explicit path are connected in either of the following
modes:
The strict and loose modes are used either separately or simultaneously.
Procedure
Step 1 Run system-view
Step 3 Run next hop ip-address [ include [ [ strict | loose ] | [ incoming | outgoing ] ] *
| exclude ]
The include parameter indicates that the tunnel passes through the specified
node; the exclude parameter indicates that the tunnel does not pass through the
specified node.
Step 4 (Optional) Run add hop ip-address1 [ include [ [ strict | loose ] | [ incoming |
outgoing ] ] * | exclude ] { after | before } ip-address2
Step 5 (Optional) Run modify hop ip-address1 ip-address2 [ include [ [ strict | loose ] |
[ incoming | outgoing ] ] * | exclude ]
----End
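The explicit-path steps can be sketched as follows. This is a hedged example: the path name, the hop addresses, and the explicit-path command used to enter the explicit path view (which is elided from the procedure above) are assumptions for illustration.

```
<HUAWEI> system-view
[~HUAWEI] explicit-path pri-path
[~HUAWEI-explicit-path-pri-path] next hop 10.1.1.2 include strict
[~HUAWEI-explicit-path-pri-path] next hop 10.1.2.2 include loose
[~HUAWEI-explicit-path-pri-path] commit
```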
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
A tunnel interface is created, and the tunnel interface view is displayed.
Step 3 Run either of the following commands to configure the IP address of the tunnel
interface:
● To assign an IP address to the tunnel interface, run ip address ip-address
{ mask | mask-length } [ sub ]
The primary IP address must be configured before the secondary IP address
can be configured for the tunnel interface.
● To configure the tunnel interface to borrow the IP address of another
interface, run ip address unnumbered interface interface-type interface-
number
NOTE
The bandwidth used by the tunnel cannot be higher than the maximum reservable
link bandwidth.
The bandwidth used by a tunnel does not need to be set if only a path needs to
be configured for an MPLS TE tunnel.
An explicit path does not need to be configured if only the bandwidth needs to be
set for an MPLS TE tunnel.
The shared explicit (SE) style is used in the make-before-break scenario, and the
fixed filter (FF) style is used only in a few scenarios.
NOTE
The mpls te cspf disable command is only applicable in the inter-AS VPN Option C
scenario. In other scenarios, running this command is not recommended.
----End
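The tunnel interface setup described above can be sketched as follows. This is a hedged example: the tunnel number and the borrowed loopback interface are assumptions, and further commands such as the tunnel destination, which are not shown in this excerpt, are typically also required.

```
<HUAWEI> system-view
[~HUAWEI] interface Tunnel1
[~HUAWEI-Tunnel1] ip address unnumbered interface LoopBack0
[~HUAWEI-Tunnel1] tunnel-protocol mpls te
[~HUAWEI-Tunnel1] mpls te signal-protocol rsvp-te
[~HUAWEI-Tunnel1] commit
```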
Context
If there is no path meeting the bandwidth requirement of a desired tunnel, a
device can tear down an established tunnel and use bandwidth resources assigned
to that tunnel to establish a desired tunnel. This is called preemption. The
following preemption modes are supported:
● Hard preemption: A tunnel with a higher setup priority can preempt resources
assigned to a tunnel with a lower holding priority. Consequently, some traffic
is dropped on the tunnel with a lower holding priority during the hard
preemption process.
● Soft preemption: After a tunnel with a higher setup priority preempts the
bandwidth of a tunnel with a lower holding priority, the soft preemption
function retains the tunnel with a lower holding priority for a specified period
of time. If the ingress finds a better path for this tunnel after the time elapses,
the ingress uses the make-before-break (MBB) mechanism to reestablish the
tunnel over the new path. If the ingress fails to find a better path after the
time elapses, the tunnel goes Down.
Procedure
● Configure soft preemption in the RSVP-TE tunnel interface view.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The interface view is displayed.
c. Run tunnel-protocol mpls te
MPLS TE is configured as a tunneling protocol.
d. (Optional) Run mpls te signal-protocol rsvp-te
A signaling protocol is configured for the tunnel.
By default, the signaling protocol of a tunnel is RSVP-TE.
e. Run mpls te soft-preemption
Soft preemption is configured in the RSVP-TE tunnel interface view.
f. Run commit
The configuration is committed.
● Configure soft preemption in the MPLS view.
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run mpls te
MPLS TE is globally enabled.
d. Run mpls rsvp-te
RSVP-TE is enabled.
e. Run mpls te soft-preemption
Soft preemption is configured in the global MPLS view.
After the preceding configuration, the device performs soft preemption
for tunnels that meet the following conditions:
Procedure
Step 1 Configure graceful shutdown in the MPLS view.
1. Run system-view
The system view is displayed.
2. Run mpls
The MPLS view is displayed.
3. Run mpls te
MPLS TE is globally enabled.
4. Run mpls rsvp-te
RSVP-TE is enabled.
5. Run mpls rsvp-te graceful-shutdown
Graceful shutdown is enabled.
6. (Optional) Run mpls rsvp-te timer graceful-shutdown graceful-shutdown-
time
A graceful shutdown timeout period is set.
After a local device performs graceful shutdown and sends a reroute request,
the device deletes the RSVP LSP if the rerouting fails within the graceful
shutdown timeout period.
7. Run commit
The configuration is committed.
Step 2 Configure graceful shutdown in the interface view.
1. Run system-view
The system view is displayed.
----End
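The MPLS-view steps in Step 1 can be combined into the following sketch (the timeout value of 120 seconds is an assumption for illustration):

```
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te
[~HUAWEI-mpls] mpls rsvp-te
[~HUAWEI-mpls] mpls rsvp-te graceful-shutdown
[~HUAWEI-mpls] mpls rsvp-te timer graceful-shutdown 120
[~HUAWEI-mpls] commit
```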
Prerequisites
An RSVP-TE tunnel has been configured.
Procedure
● Run the display mpls te link-administration bandwidth-allocation
[ interface interface-type interface-number ] command to check the
allocated link bandwidth information.
● Run the display ospf [ process-id ] mpls-te [ area area-id ] [ self-
originated ] command to check OSPF TE information.
● Run either of the following commands to check the IS-IS TE status:
– display isis traffic-eng advertisements [ lsp-id | local ] [ level-1 |
level-2 | level-1-2 ] [ process-id | vpn-instance vpn-instance-name ]
– display isis traffic-eng statistics [ process-id | vpn-instance vpn-
instance-name ]
● Run the display explicit-path [ [ name ] path-name ] [ verbose ] command
to check the configured explicit paths.
● Run the display mpls te cspf destination ip-address [ affinity { properties
[ mask mask-value ] | { { include-all | include-any } { pri-in-name-string }
&<1-32> | exclude { pri-ex-name-string } &<1-32> } * } | bandwidth ct0 ct0-
bandwidth | explicit-path path-name | hop-limit hop-limit-number | metric-
type { igp | te } | priority setup-priority | srlg-strict exclude-path-name | tie-
breaking { random | most-fill | least-fill } ] * [ hot-standby [ explicit-path
hsb-path-name | overlap-path | affinity { hsb-properties [ mask hsb-mask-
value ] | { { include-all | include-any } { hsb-in-name-string } &<1-32> |
exclude { hsb-ex-name-string } &<1-32> } * } | hop-limit hsb-hop-limit-
number | srlg { preferred | strict } ] * ] command to check the path that is
calculated using CSPF based on specified conditions.
● Run the display mpls te cspf tedb { all | area area-id | interface ip-address |
network-lsa | node [ router-id ] | srlg [ srlg-number ] [ igp-type { isis |
ospf } ] | overload-node } command to check information about TEDBs that
meet specified conditions and can be used by CSPF to calculate a path.
● Run the display mpls rsvp-te command to check RSVP information.
● Run the display mpls rsvp-te psb-content [ ingress-lsr-id tunnel-id [ lsp-
id ] ] command to check information about the RSVP-TE PSB.
● Run the display mpls rsvp-te rsb-content [ ingress-lsr-id tunnel-id lsp-id ]
command to check information about the RSVP-TE RSB.
● Run the display mpls rsvp-te established [ interface interface-type
interface-number peer-ip-address ] command to check information about the
established RSVP LSPs.
● Run the display mpls rsvp-te peer [ interface interface-type interface-
number | peer-address ] command to check the RSVP neighbor parameters.
● Run the display mpls rsvp-te reservation [ interface interface-type
interface-number peer-ip-address ] command to check information about
RSVP resource reservation.
● Run the display mpls rsvp-te request [ interface interface-type interface-
number peer-ip-address ] command to check information about RSVP LSP
resource reservation requests.
● Run the display mpls rsvp-te sender [ interface interface-type interface-
number peer-ip-address ] command to check information about an RSVP
transmit end.
● Run the display mpls rsvp-te statistics { global | interface [ interface-type
interface-number ] } command to check RSVP-TE statistics.
● Run the display mpls te link-administration admission-control [ interface
interface-type interface-number ] command to check tunnels established on
the local node.
● Run the display affinity-mapping [ attribute affinity-name ] [ verbose ]
command to check information about an affinity name template.
● Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id
session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ]
[ name tunnel-name ] [ { incoming-interface | interface | outgoing-
interface } interface-type interface-number ] [ verbose ] command to check
tunnel information.
● Run the display mpls te tunnel statistics or display mpls lsp statistics
command to check tunnel statistics.
● Run the display mpls te tunnel-interface command to check information
about a tunnel interface on the ingress of a tunnel.
----End
Usage Scenario
In an SDN solution, a controller can run the PCE Initiated LSP protocol to generate
RSVP-TE tunnels, without manual tunnel configuration.
Pre-configuration Tasks
Before configuring an automatic RSVP-TE tunnel, complete the following tasks:
Procedure
Step 1 Run system-view
RSVP-TE is enabled.
----End
Context
After CSPF is configured, it can be used to calculate paths if a connection between
the forwarder and controller is disconnected. If CSPF is not configured and a
connection between the forwarder and controller is disconnected, no path can be
created because only the controller can calculate paths.
CSPF calculates only the shortest path to the specified tunnel destination. During
path computation, if there are multiple paths with the same weight, the optimal
path is selected using the tie-breaking function.
Tie-breaking is based on the percentage of the available bandwidth to the
maximum reservable bandwidth. The maximum reservable bandwidth is
configured using the mpls te bandwidth max-reservable-bandwidth command,
not the BC bandwidth configured using the mpls te bandwidth bc0 command on
an interface. The following tie-breaking policies are available:
● Most-fill: The path with a larger percentage of the available bandwidth to the
maximum reservable bandwidth is preferred. That is, the path with lower
bandwidth usage is preferred.
● Least-fill: The path with a smaller percentage of the available bandwidth to
the maximum reservable bandwidth is preferred. That is, the path with higher
bandwidth usage is preferred.
● Random: The device selects a path at random. This mode allows LSPs to be
evenly distributed among links, regardless of their bandwidth.
NOTE
The Most-fill and Least-fill modes take effect only when the difference in bandwidth
usage between two links exceeds 10%. For example, if the bandwidth usage of link A is
50% and that of link B is 45%, the difference is only 5%. In this case, the Most-fill
and Least-fill modes do not take effect, and the Random mode is used instead.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls te cspf
CSPF is enabled on the local node.
Step 4 (Optional) Run mpls te cspf preferred-igp { isis [ process-id [ level-1 | level-2 ] ]
| ospf [ process-id [ area area-id ] ] }
A preferred IGP is configured. Its process and area or level can also be configured.
Step 5 (Optional) Run mpls te cspf multi-instance shortest-path [ preferred-igp { isis |
ospf } [ process-id ] ]
CSPF is configured to calculate shortest paths among all IGP processes and areas.
NOTICE
----End
Context
Either OSPF TE or IS-IS TE can be used:
NOTE
If neither OSPF TE nor IS-IS TE is configured, LSRs generate no TE link state advertisement
(LSA) or TE Link State PDUs (LSPs) and construct no TEDBs.
TE tunnels cannot be used in inter-area scenarios. In an inter-area scenario, an explicit path
can be configured, and the inbound and outbound interfaces of the explicit path must be
specified to prevent a failure to establish the TE tunnel.
● OSPF TE
OSPF TE uses Opaque Type 10 LSAs to carry TE attributes. The OSPF Opaque
capability must be enabled on each LSR. In addition, TE LSAs are generated
only when at least one OSPF neighbor is in the Full state.
● IS-IS TE
IS-IS TE uses the sub-type-length-value (sub-TLV) in the IS-reachable TLV (22)
to carry TE attributes. The IS-IS wide metric attribute must be configured. Its
value can be wide, compatible, or wide-compatible.
Procedure
● Configure OSPF TE.
a. Run system-view
The system view is displayed.
b. Run ospf [ process-id ]
The OSPF view is displayed.
c. Run opaque-capability enable
The OSPF Opaque capability is enabled.
d. Run area area-id
The OSPF area view is displayed.
e. Run mpls-te enable [ standard-complying ]
TE is enabled in the OSPF area.
f. Run commit
The configuration is committed.
● Configure IS-IS TE.
a. Run system-view
The system view is displayed.
b. Run isis [ process-id ]
The IS-IS view is displayed.
c. Run cost-style { wide | compatible | wide-compatible }
The IS-IS wide metric attribute is set.
d. Run traffic-eng [ level-1 | level-2 | level-1-2 ]
IS-IS TE is enabled.
If no level is specified when IS-IS TE is enabled, IS-IS TE is valid for both
Level-1 and Level-2 routers.
There are no unified standards for sub-TLVs that carry DS-TE attributes in
non-IETF mode. To ensure interconnection between devices of different
vendors, you need to manually configure TLV values for these sub-TLVs.
After TLV values are configured for these sub-TLVs, the traffic engineering
database (TEDB) is regenerated, and then TE tunnels are reestablished.
The sub-TLVs can be sent only after TLV values are configured for them.
f. Run commit
----End
Context
The PCE Initiated LSP protocol is used to implement the automatic RSVP-TE tunnel
function. A PCE client (PCC) (ingress) establishes a PCE link to a PCE server
(controller). The controller delivers tunnel and path information to a forwarder
configured on the ingress. The ingress uses the information to automatically
establish a tunnel and reports LSP status information to the controller along the
PCE link.
Procedure
Step 1 Run system-view
----End
Context
NOTE
Procedure
Step 1 Run system-view
----End
Context
NOTE
Procedure
Step 1 Run system-view
----End
Context
To view traffic information about an automatic tunnel, perform the following
steps to enable a device to collect traffic statistics on the automatic tunnel.
Procedure
Step 1 Run system-view
MPLS traffic statistics collection is enabled globally, and the traffic statistics view is
displayed.
----End
Prerequisites
The automatic RSVP-TE tunnel functions have been configured.
Procedure
● Run the following commands to check the IS-IS-related label allocation
information:
– display isis traffic-eng advertisements [ { level-1 | level-2 | level-1-2 } |
{ lsp-id | local } ] * [ process-id | [ vpn-instance vpn-instance-name ] ]
– display isis traffic-eng statistics [ process-id | [ vpn-instance vpn-
instance-name ] ]
● Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id
session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ]
[ name tunnel-name ] [ { incoming-interface | interface | outgoing-
interface } interface-type interface-number ] [ verbose ] command to check
tunnel information.
● Run the display mpls te tunnel statistics command to view TE tunnel
statistics.
● Run the display mpls te tunnel-interface [ tunnel tunnel-number ]
command to check information about a tunnel interface on the ingress.
----End
Usage Scenario
RSVP-TE supports diversified signaling parameters that meet reliability and
network resource requirements and support some advanced MPLS TE features.
Pre-configuration Tasks
Before adjusting RSVP signaling parameters, enable MPLS TE and RSVP-TE.
Procedure
Step 1 Run system-view
The system view is displayed.
NOTE
If the refresh interval is changed, the modification takes effect after the existing refresh
timer expires.
The interval must be longer than the time the device takes to perform a master/slave main
control board switchover. If the interval is shorter than the switchover time, an
intermittent protocol interruption may occur during a switchover. The default timer value is
recommended.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
If the refresh interval is modified, the modification takes effect after the existing refresh
timer expires. Do not set a long refresh interval or frequently modify a refresh interval.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls te
MPLS TE is globally enabled.
Step 4 Run mpls rsvp-te
RSVP-TE is enabled.
Step 5 Run mpls rsvp-te reliable-delivery
Reliable RSVP message transmission is enabled.
Step 6 Run commit
The configuration is committed.
----End
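The steps above correspond to the following transcript-style sketch (prompts are illustrative):

```
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te
[~HUAWEI-mpls] mpls rsvp-te
[~HUAWEI-mpls] mpls rsvp-te reliable-delivery
[~HUAWEI-mpls] commit
```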
Procedure
Step 1 Run system-view
The system view is displayed.
NOTE
This command can only be run on a downstream node running V8 when its upstream node
runs a version earlier than V8, which ensures that Srefresh can be properly negotiated
between the two nodes.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
NOTE
Receiving a ResvConf message does not mean that resource reservation has succeeded
along the entire path. It means only that resources have been reserved successfully
on the farthest upstream node that the Resv message reached. These resources may be
preempted by other applications later.
----End
Procedure
Step 1 Run system-view
NOTE
Set the PSB and RSB timeout multiplier to a value greater than or equal to 5. This
setting prevents the PSB and RSB from aging out or being deleted if they fail to be
refreshed when a large number of services are transmitted.
----End
Prerequisites
RSVP signaling parameters have been adjusted.
Procedure
● Run the display mpls rsvp-te command to check RSVP-TE configurations.
● Run the display mpls rsvp-te psb-content [ ingress-lsr-id tunnel-id lsp-id ]
command to check RSVP-TE PSB information.
● Run the display mpls rsvp-te rsb-content [ ingress-lsr-id tunnel-id lsp-id ]
command to check RSVP-TE RSB information.
● Run the display mpls rsvp-te statistics { global | interface [ interface-type
interface-number ] } command to check RSVP-TE statistics.
----End
Usage Scenario
BFD for RSVP is used with TE FRR when a Layer 2 device exists on the primary LSP
between a PLR and its downstream RSVP neighbor.
The interval at which a neighbor is declared Down is three times as long as the
interval at which RSVP Hello messages are sent. This allows devices to detect a
fault in an RSVP neighbor in seconds.
If a Layer 2 device exists on a link between RSVP nodes, an RSVP node cannot
rapidly detect a link fault, which can result in heavy data loss.
BFD rapidly detects faults in links or nodes on which RSVP adjacencies are
deployed. If BFD detects a fault, it notifies the RSVP module of the fault and
instructs the RSVP module to switch traffic to a bypass tunnel.
Pre-configuration Tasks
Before you configure BFD for RSVP, configure an RSVP-TE tunnel.
Context
Perform the following steps on the two RSVP neighboring nodes between which a
Layer 2 device resides:
Procedure
Step 1 Run system-view
----End
Context
Perform either of the following operations:
● Enable BFD for RSVP globally if most RSVP interfaces on a node need BFD
for RSVP.
● Enable BFD for RSVP on an RSVP interface if some RSVP interfaces on a
node need BFD for RSVP.
Procedure
● Enable BFD for RSVP globally.
Perform the following steps on the two RSVP neighboring nodes between
which a Layer 2 device resides:
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run mpls rsvp-te bfd all-interfaces enable
BFD for RSVP is enabled globally.
After this command is run in the MPLS view, BFD for RSVP is enabled on
all RSVP interfaces except the interfaces with BFD for RSVP that are
blocked.
d. (Optional) Block BFD for RSVP on the RSVP interfaces that do not need
BFD for RSVP.
Perform the following steps on the two RSVP neighboring nodes between
which a Layer 2 device resides:
a. Run system-view
----End
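Following the global-enable steps above, the sequence on each of the two neighboring nodes can be sketched as follows (sysname and prompts are placeholders):

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] mpls rsvp-te bfd all-interfaces enable
[*HUAWEI-mpls] commit
```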
Context
Perform either of the following operations:
● Adjust global BFD parameters if most RSVP interfaces on a node use the
same BFD parameters.
● Adjust BFD parameters on an RSVP interface if some RSVP interfaces
require BFD parameters different from global BFD parameters.
Procedure
● Adjust BFD parameters globally.
Perform the following steps on the two RSVP neighboring nodes between
which a Layer 2 device resides:
a. Run system-view
NOTE
BFD detection parameters that take effect on the local node may be different
from the configured parameters:
● Effective local interval at which BFD packets are sent = MAX { min-tx-
interval configured for local device, min-rx-interval configured for remote
device }
● Effective local interval at which BFD packets are received = MAX { min-tx-
interval configured for remote device, min-rx-interval configured for local
device }
● Effective local detection interval = MAX { min-tx-interval configured for
remote device, min-rx-interval configured for local device } x Configured
remote detection multiplier
d. Run commit
The configuration is committed.
● Adjust BFD parameters on an RSVP interface.
Perform the following steps on the two RSVP neighboring nodes between
which a Layer 2 device resides:
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The view of the RSVP-enabled interface is displayed.
c. Run mpls rsvp-te bfd { min-tx-interval tx-interval | min-rx-interval rx-
interval | detect-multiplier multiplier } *
BFD parameters on the RSVP interface are adjusted.
d. Run commit
The configuration is committed.
----End
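As an illustration of the negotiation rules in the note above, the following sketch computes the intervals that take effect locally from configured values. The millisecond values are hypothetical, not defaults.

```python
# Sketch of the BFD parameter negotiation rules described in the note above.
# All interval values are hypothetical and given in milliseconds.

def effective_intervals(local_tx, local_rx, remote_tx, remote_rx,
                        remote_multiplier):
    """Return (tx, rx, detect) intervals that take effect on the local node."""
    tx = max(local_tx, remote_rx)      # effective local transmit interval
    rx = max(remote_tx, local_rx)      # effective local receive interval
    detect = rx * remote_multiplier    # effective local detection interval
    return tx, rx, detect

# Local device: min-tx 10 ms, min-rx 20 ms.
# Remote device: min-tx 30 ms, min-rx 15 ms, detect-multiplier 3.
print(effective_intervals(10, 20, 30, 15, 3))  # (15, 30, 90)
```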
Procedure
● Run the display mpls rsvp-te bfd session { all | interface interface-type
interface-number | peer ip-address } [ verbose ] command to check
information about the BFD for RSVP session.
● Run the display mpls rsvp-te interface [ interface-type interface-number ]
command to view BFD for RSVP configurations on a specific interface.
----End
Prerequisites
Before configuring self-ping for RSVP-TE, complete the following task:
● Configure RSVP-TE tunnels.
Context
After an RSVP-TE LSP is established, the system sets the LSP status to up, without
waiting for forwarding relationships to be completely established between nodes
on the forwarding path. If service traffic is imported to the LSP before all
forwarding relationships are established, some early traffic may be lost.
Self-ping can address this issue by checking whether the LSP can forward traffic.
Self-ping can be configured globally or for a specified tunnel. If both are
configured, the tunnel-specific configuration takes effect.
Procedure
Step 1 Configure self-ping globally.
1. Run system-view
The system view is displayed.
2. Run mpls
The MPLS view is displayed.
3. Run mpls te
MPLS TE is enabled for the device.
4. Run mpls te self-ping enable
Self-ping is enabled globally.
5. (Optional) Run mpls te self-ping duration self-ping-duration
The self-ping duration is set globally.
Value 65535 indicates an infinite detection duration.
6. Run commit
The configuration is committed.
Step 2 Configure self-ping for a specified tunnel.
1. Run system-view
The system view is displayed.
2. Run interface tunnel interface-number
A tunnel interface is created, and its view is displayed.
3. Run tunnel-protocol mpls te
MPLS TE is enabled.
4. Run mpls te self-ping enable
Self-ping is enabled for the specified tunnel.
----End
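The global and tunnel-specific steps above can be sketched together as follows. The duration value 300, the tunnel number Tunnel10, and the prompts are illustrative assumptions; the tunnel-specific configuration overrides the global one.

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] mpls te
[*HUAWEI-mpls] mpls te self-ping enable
[*HUAWEI-mpls] mpls te self-ping duration 300
[*HUAWEI-mpls] quit
[*HUAWEI] interface Tunnel10
[*HUAWEI-Tunnel10] tunnel-protocol mpls te
[*HUAWEI-Tunnel10] mpls te self-ping enable
[*HUAWEI-Tunnel10] commit
```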
Usage Scenario
RSVP authentication prevents the following problems:
● An unauthorized node attempts to establish an RSVP neighbor relationship
with a local node.
● A remote node constructs forged RSVP messages to establish an RSVP
neighbor relationship with a local node and then initiates attacks to the local
node.
RSVP key authentication alone cannot prevent replay attacks or RSVP message mis-
sequencing during network congestion. Mis-sequenced RSVP messages cause
authentication between RSVP neighbors to be terminated. To prevent these problems,
use the handshake and message window functions together with RSVP key
authentication.
Pre-configuration Tasks
Before configuring RSVP authentication, configure an RSVP-TE tunnel.
Context
RSVP authentication in the key mode is used to prevent an unauthorized node
from establishing an RSVP neighbor relationship with a local node. It can also
prevent a remote node from constructing forged packets to establish an RSVP
neighbor relationship with the local node.
The NE9000 supports three RSVP key authentication modes, as shown in Figure
1-6.
Each pair of RSVP neighbors must use the same key; otherwise, RSVP
authentication fails, and all received RSVP messages are discarded.
Table 1 Rules for RSVP authentication mode selection describes differences
between local interface-, neighbor node-, and neighbor address-based
authentication modes.
Procedure
● Configure RSVP key authentication in neighbor address-based mode.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The view of the interface on which the MPLS TE tunnel is established is
displayed.
c. Run mpls rsvp-te authentication { { cipher | plain } auth-key | keychain
keychain-name }
The key for RSVP authentication is configured.
HMAC-MD5 or keychain authentication can be configured based on the
selected parameter:
NOTICE
d. Run commit
NOTICE
g. Run commit
The configuration is committed.
----End
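A minimal sketch of the interface-based key configuration above follows. The interface name and the key string are illustrative assumptions; the same key must be configured on the RSVP neighbor, otherwise all received RSVP messages are discarded.

```
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet0/1/8
[~HUAWEI-GigabitEthernet0/1/8] mpls rsvp-te authentication cipher Key-123456
[*HUAWEI-GigabitEthernet0/1/8] commit
```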
Context
RSVP neighbors retain an RSVP neighbor relationship within a specified RSVP
authentication lifetime even if there are no CR-LSPs between the RSVP neighbors.
Configuring the RSVP authentication lifetime does not affect existing CR-LSPs.
Procedure
● Configure RSVP authentication lifetime in the interface view.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The view of an RSVP-enabled interface is displayed.
c. Run mpls rsvp-te authentication lifetime lifetime
The RSVP authentication lifetime is set.
d. Run commit
The configuration is committed.
● Configure the RSVP authentication lifetime in the MPLS RSVP-TE peer view.
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
----End
Context
If the handshake function is configured between neighbors and the lifetime is
configured, the lifetime must be greater than the interval at which RSVP update
messages are sent. If the lifetime is smaller than the interval at which RSVP
update messages are sent, authentication relationships may be deleted because
no RSVP update message is received within the lifetime. As a result, the
handshake mechanism is used again when a new update message is received. An
RSVP-TE tunnel may be deleted or fail to be established.
Procedure
● Configure the handshake function in the interface view.
a. Run system-view
Context
The message window function prevents RSVP message mis-sequence.
If the window size is greater than 1, the local device stores several latest valid
sequence numbers of RSVP messages from neighbors.
Procedure
● Configure the message window function in the interface view.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The view of the interface on which the MPLS TE tunnel is established is
displayed.
c. Run mpls rsvp-te authentication window-size window-size
The message window function is configured. The window size specifies how
many of the latest valid RSVP message sequence numbers the local device
stores for each neighbor.
----End
Prerequisites
RSVP authentication has been configured.
Procedure
● Run the display mpls rsvp-te command to check RSVP-TE configurations on a
specific interface.
----End
Context
When the RSVP-TE service suffers a traffic burst, bandwidth may be preempted
among RSVP-TE sessions. To resolve this problem, you can configure whitelist
session-CAR for RSVP-TE to isolate bandwidth resources by session. If the default
parameters of whitelist session-CAR for RSVP-TE do not meet service
requirements, you can adjust them as required.
Procedure
Step 1 Run system-view
Step 3 (Optional) Run whitelist session-car rsvp-te { cir cir-value | cbs cbs-value | pir
pir-value | pbs pbs-value } *
Parameters of whitelist session-CAR for RSVP-TE are configured.
----End
Run the display cpu-defend whitelist session-car rsvp-te statistics slot slot-id
command to check whitelist session-CAR statistics about RSVP-TE packets on a
specified interface board.
To check the statistics for a coming period of time, first run the reset cpu-
defend whitelist session-car rsvp-te statistics slot slot-id command to clear the
existing whitelist session-CAR statistics about RSVP-TE packets. Then, after the
period elapses, run the display cpu-defend whitelist session-car rsvp-te
statistics slot slot-id command. All the displayed statistics are newly generated,
facilitating statistics query.
NOTE
Cleared whitelist session-CAR statistics cannot be restored. Exercise caution when running
the reset command.
Context
Micro-isolation CAR for RSVP-TE is enabled by default to implement micro-
isolation protection for RSVP-TE connection establishment packets. If a device is
attacked, messages of one RSVP-TE session may preempt bandwidth of other
sessions. Therefore, you are advised to keep this function enabled.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run micro-isolation protocol-car rsvp-te { cir cir-value | cbs cbs-value | pir pir-
value | pbs pbs-value } *
Micro-isolation CAR parameters are configured for RSVP-TE.
In normal cases, you are advised to use the default values of these parameters.
pir-value must be greater than or equal to cir-value, and pbs-value must be
greater than or equal to cbs-value.
Step 3 (Optional) Run micro-isolation protocol-car rsvp-te disable
Micro-isolation CAR is disabled for RSVP-TE.
Micro-isolation CAR for RSVP-TE is enabled by default. To disable micro-isolation
for RSVP-TE packets, run the micro-isolation protocol-car rsvp-te disable
command. In normal cases, you are advised to keep micro-isolation CAR enabled
for RSVP-TE.
Step 4 Run commit
The configuration is committed.
----End
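The adjustment in Step 2 can be sketched as follows. The cir and cbs values are illustrative assumptions; in normal cases the default values are recommended, and pir-value/pbs-value must remain greater than or equal to cir-value/cbs-value.

```
<HUAWEI> system-view
[~HUAWEI] micro-isolation protocol-car rsvp-te cir 100 cbs 15000
[*HUAWEI] commit
```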
Usage Scenario
The NE9000 can function only as a GR Helper that helps a neighbor node
complete RSVP GR. The RSVP GR Helper needs to be configured only after GR is
enabled on the neighbor node that supports the RSVP GR Restarter. If a local
device is connected only to NE9000s running the same version as the local
device, there is no need to configure the RSVP GR Helper on the local device.
Pre-configuration Tasks
Before configuring an RSVP GR Helper, configure an RSVP-TE tunnel.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls rsvp-te hello
The RSVP Hello extension is enabled globally.
Step 4 Run quit
The system view is displayed.
Step 5 Run interface interface-type interface-number
The view of an RSVP-enabled interface is displayed.
Step 6 Run mpls rsvp-te hello
The RSVP Hello extension is enabled on an interface.
After RSVP Hello extension is enabled globally on a node, enable the RSVP Hello
extension on each interface of the node.
Step 7 Run commit
The configuration is committed.
----End
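The Hello-extension steps above, run first globally and then per interface, can be sketched as one sequence (sysname, interface name, and prompts are placeholders):

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] mpls rsvp-te hello
[*HUAWEI-mpls] quit
[*HUAWEI] interface GigabitEthernet0/1/8
[*HUAWEI-GigabitEthernet0/1/8] mpls rsvp-te hello
[*HUAWEI-GigabitEthernet0/1/8] commit
```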
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls rsvp-te
RSVP-TE is enabled.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Prerequisites
RSVP GR has been configured.
Procedure
● Run the display mpls rsvp-te graceful-restart command to check the RSVP-
TE GR status.
● Run the display mpls rsvp-te graceful-restart peer [ { interface interface-
type interface-number | node-id } [ ip-address ] ] command to check
information about the RSVP GR status on a neighbor.
----End
Usage Scenario
As user networks grow and network services expand, load-balancing techniques
are used to improve bandwidth between nodes. Heavy traffic can cause load
imbalance on transit nodes. To address this problem, the entropy label
capability can be configured to improve load balancing performance.
Pre-configuration Tasks
Before configuring the entropy label for tunnels, enable MPLS TE and RSVP-TE.
Context
After the entropy label function is enabled on the LSR, the LSR uses IP header
information to generate an entropy label and adds the label to the packets. The
entropy label is used as a key value by a transit node to load-balance traffic. If the
length of a data frame carried in a packet exceeds the parsing capability, the LSR
fails to parse the IP header or generate an entropy label. Perform the following
operations on the LSR:
Procedure
Step 1 Run system-view
----End
Context
The growth of user networks worsens load imbalance on transit nodes. To
address this problem, the entropy label capability can be configured. When the
entropy label capability is configured, it must also be enabled on the egress.
Procedure
Step 1 Run system-view
----End
Context
If severe load imbalance occurs, the entropy label can be configured for global
tunnels to help transit nodes properly load-balance traffic. The entropy label
capability is enabled on the egress for tunnels. An entropy label is configured in
the tunnel interface view to confirm the tunnel entropy label requirement, and the
ingress sends the requirement to the forwarding plane for processing.
Procedure
Step 1 Run system-view
----End
Context
If severe load imbalance occurs, the entropy label can be configured in the tunnel
interface view to help transit nodes properly load-balance traffic. The entropy
label capability is enabled on the egress for tunnels. An entropy label is set on the
ingress to confirm the tunnel entropy label requirement, and the ingress sends the
requirement to the forwarding plane for processing. If no entropy label is
configured in the tunnel interface view, the entropy label capability is determined
by the global entropy label capability.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls te
MPLS TE is globally enabled.
Step 4 Run quit
The system view is displayed.
Step 5 Run interface tunnel tunnel-number
The tunnel interface view is displayed.
Step 6 Run tunnel-protocol mpls te
MPLS TE is configured as a tunnel protocol.
Step 7 Run mpls te entropy-label
An entropy label is configured for a tunnel in the tunnel interface view.
Step 8 Run commit
The configuration is committed.
----End
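The tunnel-interface steps above can be sketched as one sequence. The tunnel number Tunnel10 and the prompts are placeholders; remember that the entropy label capability must also be enabled on the egress.

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] mpls te
[*HUAWEI-mpls] quit
[*HUAWEI] interface Tunnel10
[*HUAWEI-Tunnel10] tunnel-protocol mpls te
[*HUAWEI-Tunnel10] mpls te entropy-label
[*HUAWEI-Tunnel10] commit
```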
Prerequisites
The entropy label has been configured for tunnels.
Procedure
● Run the display mpls te tunnel-interface command to check the tunnel
entropy label capability.
----End
Usage Scenario
Carriers use more and more MPLS TE tunnels to connect PEs. These MPLS TE
tunnels pass through Ps. Because the soft state refresh mechanism of an MPLS TE
tunnel consumes a large amount of memory and CPU resources on Ps, P
scalability is reduced. If more PEs are required in the future and tunnels are used
to connect these PEs, the number of tunnels increases dramatically, which
increases the burden on devices. When the number of tunnels reaches the
maximum value, no new tunnels can be established.
Pre-configuration Tasks
Before configuring RSVP distribution, enable MPLS TE and RSVP-TE.
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Prerequisites
RSVP distribution has been configured.
Procedure
Step 1 Run the display mpls rsvp-te distributed-instance [ name distributed-instance-
name ] [ verbose ] command to check RSVP distribution configurations.
----End
Usage Scenario
When you want to create a large number of P2P RSVP-TE tunnels or create P2P
RSVP-TE tunnels to form a full-mesh network, creating them one by one is
laborious and complex. To simplify MPLS RSVP-TE tunnel configuration, configure
the IP-prefix tunnel function so that P2P RSVP-TE tunnels can be established in a
batch.
The full-mesh network indicates that a P2P RSVP-TE tunnel is established between
any two nodes on a network.
Pre-configuration Tasks
Before configuring the ip-prefix tunnel function, complete the following tasks:
Procedure
Step 1 Run system-view
----End
Usage Scenario
Before you create P2P TE tunnels in a batch, create a P2P TE tunnel template and
configure parameters, such as bandwidth and a path limit, in the template. The
mpls te auto-primary-tunnel command can then be run to reference this
template. The device automatically uses the parameters configured in the P2P TE
tunnel template to create P2P TE tunnels in a batch.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls te p2p-template template-name
A P2P TE tunnel template is created, and the P2P TE tunnel template view is
displayed.
Step 3 Select one or more operations.
Operation: Run the record-route [ label ] command to enable the route and label
record for MPLS TE tunnels.
Description: This step enables nodes along a P2P TE tunnel to use RSVP messages
to record detailed P2P TE tunnel information, including the IP address of each
hop. The label parameter in the record-route command enables RSVP messages
to record label values.
Operation: Run the affinity primary { include-all | include-any | exclude }
bit-name &<1-32> command to configure an affinity for an MPLS TE tunnel.
Description: Before this command is run, run the path-constraint affinity-mapping
command in the system view to create an affinity name template. In addition, run
the attribute affinity-name bit-sequence bit-number command to configure the
mappings between affinity bit values and names in the template view.
Operation: Run the priority setup-priority [ hold-priority ] command to set the
setup and holding priority values for MPLS TE tunnels.
Description: The setup priority of a tunnel must be no higher than its holding
priority. To be specific, a setup priority value must be greater than or equal to a
holding priority value. If resources are insufficient, setting the setup and holding
priority values helps a device release LSPs with lower priorities and use the
released resources to establish LSPs with higher priorities.
Operation: Run the bfd enable command to enable BFD for TE CR-LSP.
Description: To rapidly detect LSP faults and improve network reliability,
configuring BFD for TE CR-LSP is recommended.
Operation: Run the bfd { min-tx-interval tx-interval | min-rx-interval rx-interval |
detect-multiplier multiplier } * command to configure BFD for TE CR-LSP
parameters.
Description: BFD parameters can be set to control BFD detection sensitivity.
Operation: Run the lsp-tp outbound command to enable traffic policing for MPLS
TE tunnels.
Description: Physical links over which a TE tunnel is established may also transmit
traffic of other TE tunnels, non-CR-LSP traffic, or even IP traffic, in addition to the
TE tunnel traffic. To limit TE traffic within a configured bandwidth range, enable
traffic policing for a specific MPLS TE tunnel.
----End
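A minimal sketch of a P2P TE tunnel template combining several of the operations above follows. The template name template1, the priority values, and the template-view prompt are illustrative assumptions.

```
<HUAWEI> system-view
[~HUAWEI] mpls te p2p-template template1
[*HUAWEI-p2p-template-template1] record-route label
[*HUAWEI-p2p-template-template1] priority 1 1
[*HUAWEI-p2p-template-template1] bfd enable
[*HUAWEI-p2p-template-template1] commit
```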
Context
The automatic primary tunnel function uses a specified IP prefix list in which
destination IP addresses are defined so that tunnels to the destination IP
addresses can be established in a batch. The automatic primary tunnel function
can also use a specified tunnel template that defines public attributes before
creating tunnels in a batch.
Procedure
Step 1 Run system-view
----End
Follow-up Procedure
If errors occur in services transmitted on a TE tunnel and the services cannot be
restored, run the reset mpls te auto-primary-tunnel command in the user view to
reestablish the TE tunnel to restore the services.
NOTICE
After this command is run, all LSPs in the specified tunnel are torn down and
reestablished. If some LSPs are transmitting traffic, the operation causes a traffic
interruption. Exercise caution when using this command.
Prerequisites
The IP-prefix tunnel function has been configured.
Operations
Usage Scenario
The reservable bandwidth values configured on the interfaces along an MPLS TE
tunnel are used by the MPLS TE module to check whether a link meets all tunnel
bandwidth requirements. If a fixed bandwidth value is configured on an interface
and the physical bandwidth of the interface changes, the MPLS TE module cannot
correctly evaluate link bandwidth resources when the actual reservable bandwidth
differs from the configured bandwidth value. For example, the actual physical
bandwidth of a trunk interface on an MPLS TE tunnel is 1 Gbit/s. The maximum
reservable bandwidth is set to 800 Mbit/s, and the BC0 bandwidth is set to 600
Mbit/s for the interface. If a member of the trunk interface fails, the trunk
interface has its physical bandwidth reduced to 500 Mbit/s, which does not meet
the requirements for the maximum reservable bandwidth and BC0 bandwidth.
However, the MPLS TE module still attempts to reserve the bandwidth as
configured. As a result, bandwidth reservation fails.
To address this issue, you can configure the maximum reservable dynamic
bandwidth and BC dynamic bandwidth. The former is the proportion of the
maximum reservable bandwidth to the actual physical bandwidth, and the latter is
the proportion of the BC bandwidth to the maximum reservable bandwidth. Based
on the two proportions, the MPLS TE module can quickly detect physical
bandwidth changes along links and preempt the bandwidth of any MPLS TE
tunnel that requires more than the available interface bandwidth. If soft
preemption is supported by the preempted tunnel, traffic on the tunnel can be
smoothly switched to another link with sufficient bandwidth. The smooth traffic
switchover is also performed when an interface fails, which minimizes traffic loss.
Pre-configuration Tasks
Before configuring dynamic bandwidth reservation, enable MPLS TE on the
interface.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The view of an MPLS TE-enabled interface is displayed.
Step 3 Run mpls te bandwidth max-reservable-bandwidth dynamic max-dynamic-bw-
value
The maximum reservable dynamic bandwidth is configured for the link.
NOTE
If this command is run in the same interface view as the mpls te bandwidth max-
reservable-bandwidth command, the latest configuration overrides the previous one.
NOTE
If this command is run in the same interface view as the mpls te bandwidth bc0
command, the latest configuration overrides the previous one.
----End
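A minimal sketch of Step 3 on a trunk interface (matching the trunk scenario described above) follows. The interface name and the value 80, taken here as the proportion of the maximum reservable bandwidth to the actual physical bandwidth, are illustrative assumptions.

```
<HUAWEI> system-view
[~HUAWEI] interface Eth-Trunk1
[~HUAWEI-Eth-Trunk1] mpls te bandwidth max-reservable-bandwidth dynamic 80
[*HUAWEI-Eth-Trunk1] commit
```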
Usage Scenario
During the establishment of an MPLS TE tunnel, special configurations are
required.
Pre-configuration Tasks
Before adjusting parameters for establishing an MPLS TE tunnel, configure an
RSVP-TE tunnel.
Context
An explicit path consists of a series of nodes. These nodes are arranged in
sequence and form a vector path. An IP address for an explicit path is an interface
IP address on every node. The loopback IP address of the egress node is used as
the destination address of an explicit path.
Two adjacent nodes on an explicit path are connected in either of the following
modes:
Procedure
Step 1 Run system-view
Step 3 Run next hop ip-address [ include [ [ strict | loose ] | [ incoming | outgoing ] ] *
| exclude ]
The include parameter indicates that the tunnel passes through a specified
node; the exclude parameter indicates that the tunnel does not pass through the
specified node.
Step 4 (Optional) Run add hop ip-address1 [ include [ [ strict | loose ] | [ incoming |
outgoing ] ] * | exclude ] { after | before } ip-address2
Step 5 (Optional) Run modify hop ip-address1 ip-address2 [ include [ [ strict | loose ] |
[ incoming | outgoing ] ] * | exclude ]
----End
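An explicit path configured with the next hop and add hop steps above might look as follows. This is a sketch: the explicit-path path1 command that creates the path view is assumed (the creating step is not shown in the extracted procedure), and all IP addresses are hypothetical.

```
<HUAWEI> system-view
[~HUAWEI] explicit-path path1
[*HUAWEI-explicit-path-path1] next hop 10.1.1.2
[*HUAWEI-explicit-path-path1] next hop 10.2.1.2 include loose
[*HUAWEI-explicit-path-path1] add hop 10.1.3.2 after 10.1.1.2
[*HUAWEI-explicit-path-path1] commit
```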
Procedure
Step 1 Run system-view
Both the setup and holding priority values range from 0 to 7. The smaller the
value, the higher the priority.
NOTE
The setup priority value must be greater than or equal to the holding priority value. This
means the setup priority is lower than or equal to the holding priority.
----End
Procedure
Step 1 Run system-view
----End
Context
A node becomes overloaded in the following situations:
● When the node is transmitting a large number of services and its system
resources are exhausted, the node marks itself overloaded.
● When the node is transmitting a large number of services and its CPU is
overburdened, an administrator can run the set-overload command to mark
the node overloaded.
Procedure
Step 1 Run system-view
CR-LSP establishment is associated with the IS-IS overload setting. This association
allows CSPF to calculate paths excluding overloaded IS-IS nodes.
NOTE
Before the association is configured, the mpls te cspf command must be run to enable
CSPF and the mpls te record-route command must be run to enable the route and label
record.
Traffic travels through an existing CR-LSP before a new CR-LSP is established. After the new
CR-LSP is established, traffic switches to the new CR-LSP and the original CR-LSP is deleted.
This traffic switchover is performed based on the make-before-break mechanism. Traffic is
not dropped during the switchover.
This command does not take effect on bypass tunnels or P2MP TE tunnels.
If the ingress or egress is marked overloaded, the mpls te path-selection overload
command does not take effect. This means that the established CR-LSPs associated with
the ingress or egress will not be reestablished and new CR-LSPs associated with the ingress
or egress will also not be established.
----End
Procedure
Step 1 Run system-view
----End
Context
MPLS TE uses a make-before-break mechanism. If attributes of an MPLS TE
tunnel, such as its bandwidth or path, change, a new CR-LSP with the new
attributes is established. Such a CR-LSP is called a modified CR-LSP. The new
CR-LSP must be established before the original CR-LSP, also called the primary
CR-LSP, is torn
established before the original CR-LSP, also called the primary CR-LSP, is torn
down. This prevents data loss and additional bandwidth consumption during
traffic switching.
If a forwarding entry associated with the new CR-LSP does not take effect after
the original CR-LSP has been torn down, a temporary traffic interruption occurs.
The switching and deletion delays can be set on the ingress of the CR-LSP to
prevent the preceding problem.
Procedure
Step 1 Run system-view
----End
Prerequisites
The establishment of the MPLS TE tunnel has been adjusted.
Procedure
● Run the display mpls te tunnel-interface command to check information
about a tunnel interface on the ingress of a tunnel.
----End
Usage Scenario
An MPLS TE tunnel does not automatically import traffic. To enable traffic to
travel along an MPLS TE tunnel, use one of the methods listed in Table 1-6 to
import the traffic to the MPLS TE tunnel.
NOTE
The preceding methods to import traffic to MPLS TE tunnels apply only to P2P tunnels.
Pre-configuration Tasks
Before you import traffic to an MPLS TE tunnel, configure an RSVP-TE tunnel.
Context
During path calculation in a scenario where IGP shortcut is configured, the device
calculates an SPF tree based on the paths in the IGP physical topology, and then
finds the SPF nodes on which shortcut tunnels are configured. If the metric of a TE
tunnel is smaller than that of an SPF node, the device replaces the outbound
interfaces of the routes to this SPF node and those of the other routes passing
through the SPF node with the TE tunnel interface.
NOTE
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
The view of the MPLS TE tunnel interface is displayed.
Step 3 Run mpls te igp shortcut [ isis | ospf ] or mpls te igp shortcut isis hold-time
interval
IGP shortcut is configured.
hold-time interval specifies the period after which IS-IS responds to the Down
status of the TE tunnel.
If a TE tunnel goes Down and this parameter is not specified, IS-IS recalculates
routes immediately. If this parameter is specified, IS-IS responds to the Down
status of the TE tunnel only after the specified interval elapses. Whether IS-IS
recalculates routes depends on the TE tunnel status at that time:
● If the TE tunnel has gone Up again, IS-IS does not recalculate routes.
● If the TE tunnel is still Down, IS-IS recalculates routes.
Step 4 Run mpls te igp metric { absolute | relative } value
The IGP metric of the TE tunnel is configured.
You can set either of the following parameters when configuring the metric to be
used by a TE tunnel during IGP shortcut path calculation:
● If absolute is configured, the TE tunnel metric is equal to the configured
value.
● If relative is configured, the TE tunnel metric is equal to the sum of the IGP
route metric and relative TE tunnel metric.
Step 5 For IS-IS, run isis enable [ process-id ]
IS-IS is enabled on the tunnel interface.
----End
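The IS-IS variant of the steps above can be sketched as follows. The tunnel number, hold-time of 10 seconds, absolute metric of 10, and IS-IS process ID 1 are illustrative assumptions.

```
<HUAWEI> system-view
[~HUAWEI] interface Tunnel10
[~HUAWEI-Tunnel10] mpls te igp shortcut isis hold-time 10
[*HUAWEI-Tunnel10] mpls te igp metric absolute 10
[*HUAWEI-Tunnel10] isis enable 1
[*HUAWEI-Tunnel10] commit
```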
Follow-up Procedure
If a network fault occurs, IGP convergence is triggered. In this case, a transient
forwarding status inconsistency may occur among nodes because of their different
convergence rates, which poses the risk of microloops. To prevent microloops,
perform the following steps:
NOTE
Before you enable the OSPF TE tunnel anti-microloop function, configure CR-LSP backup
parameters.
● For IS-IS, run the following commands in sequence.
a. Run system-view
The system view is displayed.
b. Run isis [ process-id ]
An IS-IS process is created, and the IS-IS process view is displayed.
c. Run avoid-microloop te-tunnel
The IS-IS TE tunnel anti-microloop function is enabled.
d. (Optional) Run avoid-microloop te-tunnel rib-update-delay rib-update-
delay
The delay in delivering the IS-IS routes whose outbound interface is a TE
tunnel interface is set.
e. Run commit
The configuration is committed.
● For OSPF, run the following commands in sequence.
a. Run system-view
The system view is displayed.
b. Run ospf [ process-id ]
The OSPF view is displayed.
c. Run avoid-microloop te-tunnel
The OSPF TE tunnel anti-microloop function is enabled.
d. (Optional) Run avoid-microloop te-tunnel rib-update-delay rib-update-
delay
Context
A routing protocol performs bidirectional detection on a link. Therefore, the
forwarding adjacency must be enabled on both ends of a tunnel. The forwarding
adjacency allows a node to advertise a CR-LSP route to other nodes. Another
tunnel must also be configured to transfer data packets in the reverse direction.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
The view of an MPLS TE tunnel interface is displayed.
Set proper IGP metrics for TE tunnels to ensure that LSP routes are correctly advertised and
used. The metric of a TE tunnel should be smaller than that of an IGP route that is not
expected for use.
Step 5 You can select either of the following modes to enable the forwarding adjacency.
● For IS-IS, run the isis enable [ process-id ] command to enable the IS-IS
process of the tunnel interface.
● For OSPF, run the following commands in sequence.
a. Run the ospf enable [ process-id ] area { area-id | areaidipv4 } command
to enable OSPF on the tunnel interface.
b. Run the quit command to return to the system view.
c. Run the ospf [ process-id ] command to enter the OSPF view.
d. Run the enable traffic-adjustment advertise command to enable the
forwarding adjacency.
----End
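For OSPF, the Step 5 sequence might look like the following sketch (process ID 1 and area 0 are assumed values):

```
interface Tunnel10
 # Enable OSPF on the tunnel interface (assumed process and area)
 ospf enable 1 area 0
 quit
ospf 1
 # Enable the forwarding adjacency
 enable traffic-adjustment advertise
 commit
```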
Context
When services recurse to multiple TE tunnels, the mpls te service-class command
is run on the TE tunnel interface to set a service class so that a TE tunnel
transmits services of a specified service class.
DS-TE tunnels can be prioritized to receive traffic. One priority or multiple
priorities can be assigned to a tunnel to which services recurse. Table 1-7
describes the default mapping between DS-TE tunnel's CTs and flow queues.
Table 1-7 Default mapping between DS-TE tunnel's CTs and flow queues
CT Flow Queue
CT0 be
CT1 af1
CT2 af2
CT3 af3
CT4 af4
CT5 ef
CT6 cs6
CT7 cs7
If services recurse to multiple TE tunnels for load balancing, tunnel selection rules
are the same as those used in CBTS:
1. If the priority attribute of service traffic matches the priority attribute
configured for a tunnel, the service traffic is carried by the tunnel that
matches the priority attribute.
2. If the priority of service traffic does not match a configured priority of a
tunnel, the following rules apply:
a. If the priority of a tunnel among load-balancing tunnels is default, the
service traffic that does not match any priority is carried by the tunnel
with the default priority.
b. If none of load-balancing tunnels is assigned the default priority and
some tunnels are not configured with priorities, service traffic that does
not match any tunnel priorities is carried by the tunnels that are not
configured with priorities.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
The MPLS TE tunnel interface view is displayed.
Step 3 Run mpls te service-class { service-class & <1-8> | default }
A service class is set for packets that an MPLS TE tunnel allows to pass through.
----End
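For instance, the following sketch (tunnel numbers and service classes are assumptions) directs AF1 and EF traffic to one tunnel and uses another tunnel as the default:

```
interface Tunnel10
 # Carry only AF1 and EF traffic on this tunnel
 mpls te service-class af1 ef
 quit
interface Tunnel20
 # Carry traffic that matches no configured service class
 mpls te service-class default
 commit
```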
Usage Scenario
If a BFD session detects a fault in a TE tunnel, the BFD module instructs VPN FRR
to quickly switch traffic, which reduces the adverse impact on services.
Pre-configuration Tasks
Before configuring static BFD for TE tunnels, configure a static CR-LSP or an
MPLS TE tunnel.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bfd
BFD is enabled globally on the local node, and the BFD view is displayed.
Configurations relevant to BFD can be performed only after the bfd command is
run globally.
Step 3 Run commit
The configurations are committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Perform either of the following operations:
● Configure a BFD session to monitor a TE tunnel:
bfd session-name bind mpls-te interface interface-type interface-number
A BFD session cannot be configured if the specified tunnel is down.
● Configure a BFD session group to monitor a TE tunnel:
a. Run the bfd sessname-value bind mpls-te interface trackIfType
trackIfNum [ te-lsp [ backup ] ] group command to create binding
information about a BFD session group.
b. Run the sub-session discriminator local discr-value remote remote-
value select-board slot-id command to create sub-session 1 for the BFD
session group.
c. Run the sub-session discriminator local discr-value remote remote-
value select-board slot-id command to create sub-session 2 for the BFD
session group.
NOTE
– A BFD session group is used when two devices are connected through an Eth-Trunk
link, and the two member interfaces of the Eth-Trunk interface are located on
different boards. That is, a BFD session group is used for an inter-board Eth-
Trunk interface.
– Two BFD sub-sessions created in a BFD session group are used to detect two inter-
board Eth-Trunk links. The status of a BFD session group depends on the status of
the two BFD sub-sessions. As long as one BFD sub-session is up, the BFD session
group is up. When both sub-sessions are down, the BFD session group is down.
NOTE
A remote discriminator does not need to be set for a BFD session group. Each sub-session in
the BFD session group has its own discriminator.
The local discriminator of the local device must be the same as the remote discriminator of
the remote device, and the remote discriminator of the local device must be the same as the
local discriminator of the remote device. A discriminator inconsistency causes the BFD
session to fail to be established.
----End
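A minimal single-session sketch (the session name, tunnel number, and discriminator values are assumptions, not taken from this guide):

```
bfd
 quit
# Bind a static BFD session to the TE tunnel interface
bfd te10 bind mpls-te interface Tunnel10
 # Assumed discriminators; they must mirror the peer's configuration
 discriminator local 100
 discriminator remote 200
 commit
```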
Procedure
Step 1 Run system-view
Step 2 The IP link, LSP, or TE tunnel can be used as the reverse tunnel to inform the
ingress of a fault. If there is a reverse LSP or a TE tunnel, use the reverse LSP or
the TE tunnel. If no LSP or TE tunnel is established, use an IP link as a reverse
tunnel. If the configured reverse tunnel requires BFD detection, you can configure
a pair of BFD sessions for it. Run the following commands as required:
● Configure a BFD session to monitor reverse channels.
– For an IP link, run bfd session-name bind peer-ip ip-address [ vpn-
instance vpn-name ] [ source-ip ip-address ]
– For an LDP LSP, run bfd session-name bind ldp-lsp peer-ip ip-address
nexthop ip-address [ interface interface-type interface-number ]
– For a CR-LSP, run bfd session-name bind mpls-te interface tunnel
interface-number te-lsp [ backup ]
– For a TE tunnel, run bfd session-name bind mpls-te interface tunnel
interface-number
● Configure a BFD session group to monitor reverse channels.
a. Run the bfd sessname-value bind mpls-te interface trackIfType
trackIfNum [ te-lsp [ backup ] ] group command to create binding
information about a BFD session group.
b. Run the sub-session discriminator local discr-value remote remote-
value select-board slot-id command to create sub-session 1 for the BFD
session group.
c. Run the sub-session discriminator local discr-value remote remote-
value select-board slot-id command to create sub-session 2 for the BFD
session group.
NOTE
– A BFD session group is used when two devices are connected through an Eth-Trunk
link, and the two member interfaces of the Eth-Trunk interface are located on
different boards. That is, a BFD session group is used for an inter-board Eth-
Trunk interface.
– Two BFD sub-sessions created in a BFD session group are used to detect two inter-
board Eth-Trunk links. The status of a BFD session group depends on the status of
the two BFD sub-sessions. As long as one BFD sub-session is up, the BFD session
group is up. When both sub-sessions are down, the BFD session group is down.
NOTE
A BFD session group does not need to be configured with a local or remote discriminator.
Each sub-session in a BFD session group has its own discriminator.
The local discriminator of the local device must be the same as the remote discriminator of
the remote device, and the remote discriminator of the local device must be the same as the
local discriminator of the remote device. A discriminator inconsistency causes the BFD
session to fail to be established.
----End
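As a sketch for a reverse IP link (the session name and addresses are assumptions), the egress-side session mirrors the ingress discriminators, as the discriminator matching rule above requires:

```
# Reverse channel over an IP link toward the ingress (assumed peer address)
bfd te10-rev bind peer-ip 10.1.1.1
 # Mirrored relative to the ingress session's discriminators (assumed values)
 discriminator local 200
 discriminator remote 100
 commit
```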
Prerequisites
Static BFD for TE tunnel has been configured.
Procedure
● Run the display bfd session mpls-te interface tunnel-name [ verbose ]
command to check information about BFD sessions on the ingress.
● Run the following commands to check information about BFD sessions on the
egress.
– Run the display bfd session all [ for-ip | for-lsp | for-te ] [ verbose ]
command to check information about all BFD sessions.
– Run the display bfd session static [ for-ip | for-lsp | for-te ] [ verbose ]
command to check information about static BFD sessions.
– Run the display bfd session peer-ip peer-ip [ vpn-instance vpn-name ]
[ verbose ] command to check information about BFD sessions with
reverse IP links.
– Run the display bfd session ldp-lsp peer-ip peer-ip [ nexthop nexthop-
ip [ interface interface-type interface-number ] ] [ verbose ] command
to check information about BFD sessions with reverse LDP LSPs.
– Run the display bfd session mpls-te interface tunnel-name te-lsp
[ verbose ] command to check information about BFD sessions with
reverse CR-LSPs.
– Run the display bfd session mpls-te interface tunnel-name [ verbose ]
command to check information about BFD sessions with reverse TE
tunnels.
– Run the display bfd group session command to check the information
about the sessions and sub-sessions of BFD session groups with reverse
TE tunnels or CR-LSPs.
● Run the following commands to check BFD statistics.
– Run the display bfd statistics session all [ for-ip | for-lsp | for-te ]
command to check statistics about all BFD sessions.
– Run the display bfd statistics session static [ for-ip | for-lsp | for-te ]
command to check statistics about static BFD sessions.
– Run the display bfd statistics session peer-ip peer-ip [ vpn-instance
vpn-name ] command to check statistics about BFD sessions with reverse
IP links.
– Run the display bfd statistics session ldp-lsp peer-ip peer-ip [ nexthop
nexthop-ip [ interface interface-type interface-number ] ] command to
check statistics about BFD sessions with reverse LDP LSPs.
----End
Usage Scenario
FRR provides rapid local protection for MPLS TE networks requiring high reliability.
If a local failure occurs, FRR rapidly switches traffic to a bypass tunnel, minimizing
the impact on traffic.
A backbone network has a large capacity and its reliability requirements are high.
If a link or node failure occurs on the backbone network, a mechanism is required
to provide automatic protection and rapidly remove the fault. The Resource
Reservation Protocol (RSVP) usually establishes MPLS TE LSPs in Downstream on
Demand (DoD) mode. If a failure occurs, Constraint Shortest Path First (CSPF) can
re-calculate a reachable path only after the ingress is notified of the failure. The
failure may trigger the reestablishment of multiple LSPs, and the reestablishment
fails if bandwidth is insufficient. Either a CSPF calculation failure or a bandwidth
shortage delays the recovery of the MPLS TE network.
NOTE
● FRR requires reserved bandwidth for a bypass tunnel that needs to be pre-established. If
available bandwidth is insufficient, FRR protects only important nodes or links along a
tunnel.
● RSVP-TE tunnels using bandwidth reserved in Shared Explicit (SE) style support FRR, but
static TE tunnels do not.
Pre-configuration Tasks
Before configuring MPLS TE manual FRR, complete the following tasks:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
The view of the primary tunnel interface is displayed.
Step 3 Run mpls te fast-reroute [ bandwidth ]
TE FRR is enabled.
NOTE
After TE FRR is enabled using the mpls te fast-reroute command, run the mpls te bypass-
attributes command to set bypass LSP attributes.
----End
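Putting the step together with bypass LSP attributes (the tunnel number and bandwidth value are assumptions, not taken from this guide):

```
interface Tunnel10
 # Enable TE FRR with bandwidth protection on the primary tunnel
 mpls te fast-reroute bandwidth
 # Assumed bypass bandwidth; must not exceed the primary LSP bandwidth
 mpls te bypass-attributes bandwidth 10000
 commit
```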
Context
Bypass tunnels are established on selected links or nodes that are not on the
protected primary tunnel. If a link or node on the protected primary tunnel is used
for a bypass tunnel and fails, the bypass tunnel also fails to protect the primary
tunnel.
NOTE
● TE FRR does not take effect if multiple nodes or links fail simultaneously. After FRR
switching is performed to switch data from the primary tunnel to a bypass tunnel, the
bypass tunnel must remain Up when forwarding data. If the bypass tunnel goes Down,
the protected traffic is interrupted, and FRR fails. Even if the bypass tunnel goes
Up again, traffic does not flow through it; traffic is forwarded again only after the
primary tunnel recovers or is reestablished.
Procedure
Step 1 Run system-view
The system view is displayed.
NOTE
Physical links of a bypass tunnel cannot overlap protected physical links of the primary
tunnel.
----End
Follow-up Procedure
Routes and labels are automatically recorded after a bypass tunnel is configured.
If a primary tunnel fails, traffic switches to a bypass tunnel. If the bypass tunnel
goes Down, the protected traffic is interrupted, and FRR fails. Even if the
bypass tunnel goes Up again, traffic cannot be forwarded over it. Traffic is
forwarded only after the primary tunnel has been restored or reestablished.
NOTE
● The mpls te fast-reroute command and the mpls te bypass-tunnel command cannot
be configured on the same tunnel interface.
● After FRR switches traffic from a primary tunnel to a bypass tunnel, the bypass tunnel
must be kept Up, and its path must remain unchanged when transmitting traffic. If the
bypass tunnel goes Down, the protected traffic is interrupted, and FRR fails.
Context
The FRR switching delay time can be set to delay FRR entry delivery. This allows
traffic to be switched to the HSB path, not the FRR path, preventing traffic from
being switched twice.
Procedure
Step 1 Run system-view
----End
1.1.3.25.4 (Optional) Enabling the Coexistence of Rapid FRR Switching and MPLS
TE HSB
When FRR and HSB are enabled for MPLS TE tunnels, enabling the coexistence of
MPLS TE HSB and rapid FRR switching improves switching performance.
Context
To enable the coexistence of FRR switching and MPLS TE HSB, TE FRR must be
deployed on the entire network. HSB must be deployed on the ingress, BFD for TE
LSP must be enabled, and the delayed down function must be enabled on the
outbound interface of the P node. Otherwise, rapid switching cannot be performed
in the case of dual points of failure.
Procedure
Step 1 Run system-view
----End
Prerequisites
The MPLS TE manual FRR function has been configured.
Procedure
● Run the display mpls lsp command to check information about the primary
tunnel.
● Run the display mpls te tunnel-interface command to check information
about the tunnel interface on the ingress of a primary or bypass tunnel.
● Run the display mpls te tunnel path command to check information about
paths of a primary or bypass tunnel.
----End
Usage Scenario
On a network that requires high reliability, FRR is configured to improve network
reliability. If the network topology is complex and a great number of links must be
configured, the configuration procedure is complex.
MPLS TE Auto FRR, similar to MPLS TE manual FRR, can be performed in the RSVP
GR process. For details about MPLS TE manual FRR, see Configuring MPLS TE
Manual FRR.
NOTE
In this example, a bypass tunnel with a higher priority is available on the NE9000.
MPLS TE Auto FRR automatically deletes a binding between a primary tunnel and
a bypass tunnel with a lower priority and binds the primary tunnel to another
bypass tunnel with a higher priority. A bypass tunnel has a higher priority than
another based on the following conditions in descending order:
● SRLG
In MPLS TE Auto FRR, if the shared risk link group (SRLG) attribute is
configured, the primary and bypass tunnels must be in different SRLGs. If they
are in the same SRLG, the bypass tunnel cannot be established.
● Bandwidth protection takes precedence over non-bandwidth protection.
● Node protection takes precedence over link protection.
● Manual protection takes precedence over auto protection.
Pre-configuration Tasks
Before configuring MPLS TE Auto FRR, complete the following tasks:
Procedure
Step 1 Run system-view
After Auto FRR is enabled globally, all MPLS TE-enabled interfaces on the
device are automatically configured with mpls te auto-frr default. To disable
Auto FRR on some interfaces, run the mpls te auto-frr block command on
these interfaces. After the mpls te auto-frr block command is run on an
interface, the interface does not have the Auto FRR capability, regardless of
whether Auto FRR is enabled or re-enabled globally.
NOTE
– If the mpls te auto-frr default command is run, the Auto FRR capability status of
an interface is the same as the global Auto FRR capability status.
– After node protection is enabled, if a bypass tunnel for node protection fails to be
created because the topology does not meet requirements, the penultimate hop of
the primary tunnel attempts to create link protection, and other nodes do not
degrade to create link protection.
– If self-adapting is not specified and node protection is enabled, the penultimate
hop of the primary tunnel attempts to create link protection when a bypass tunnel
fails to be created because the topology does not meet requirements. Other nodes
do not degrade to create link protection.
----End
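A sketch of global enablement with a per-interface opt-out (the interface name is an assumption, not taken from this guide):

```
mpls
 # Enable Auto FRR globally; TE-enabled interfaces inherit auto-frr default
 mpls te auto-frr
 quit
interface GigabitEthernet0/1/0
 # Exclude this interface from Auto FRR regardless of the global setting
 mpls te auto-frr block
 commit
```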
Procedure
Step 1 Run system-view
The system view is displayed.
After TE FRR takes effect, traffic is switched to the bypass LSP when the primary
LSP fails. If the bypass LSP is not the optimal path, traffic congestion easily occurs.
To prevent traffic congestion, you can configure LDP to protect TE tunnels. To have
the LDP protection function take effect, you need to run the mpls te frr-switch
degrade command to enable the MPLS TE tunnel to mask the FRR function. After
the command is run:
1. If the primary LSP is in the FRR-in-use state (that is, traffic has been switched
to the bypass LSP), traffic cannot be switched to the primary LSP.
2. If HSB is configured for the tunnel and an HSB LSP is available, traffic is
switched to the HSB LSP.
3. If no HSB LSP is available for the tunnel, the tunnel is unavailable, and traffic
is switched to another tunnel like an LDP tunnel.
4. If no tunnels are available, traffic is interrupted.
NOTE
● The bandwidth attribute can only be set for the bypass LSP after the mpls te fast-
reroute bandwidth command is run for the primary LSP.
● The bypass LSP bandwidth cannot exceed the primary LSP bandwidth.
● If no attributes are configured for an automatic bypass LSP, by default, the automatic
bypass LSP uses the same bandwidth as that of the primary LSP.
● The setup priority of a bypass LSP must be lower than or equal to the holding priority.
These priorities cannot be higher than the corresponding priorities of the primary LSP.
● If TE FRR is disabled, the bypass LSP attributes are automatically deleted.
The interface view of the link through which the bypass LSP passes is
displayed.
2. Run mpls te auto-frr attributes { bandwidth bandwidth | priority setup-
priority [ hold-priority ] | hop-limit hop-limit-value }
Attributes are configured for the bypass LSP.
3. Run quit
Affinities determine link attributes of an automatic bypass LSP. Affinities and a link
administrative group attribute are used together to determine over which links the
automatic bypass LSP can be established.
There are 32 affinity bits in total. You can repeat this step to configure
some or all affinity bits.
c. Run quit
Return to the system view.
d. Run interface interface-type interface-number
The interface view of the link through which the bypass LSP passes is
displayed.
e. Run mpls te link administrative group name bit-name &<1-32>
An administrative group attribute is specified.
f. Run quit
Return to the system view.
g. Run interface tunnel tunnel-number
The tunnel interface view of the primary LSP is displayed.
h. Run mpls te bypass-attributes affinity { include-all | include-any |
exclude } bit-name &<1-32>
An affinity is configured for the bypass LSP.
NOTE
If an automatic bypass LSP that satisfies the specified affinity cannot be established, a
node will bind a manual bypass LSP satisfying the specified affinity to the primary LSP.
----End
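Steps d through h can be sketched as follows (the interface and affinity bit name are assumptions; the name-to-bit mapping from the earlier steps is taken as already configured):

```
interface GigabitEthernet0/1/0
 # Tag the link with the administrative group bit name (assumed name)
 mpls te link administrative group name green
 quit
interface Tunnel10
 # Bypass LSPs for this primary tunnel may use only links tagged "green"
 mpls te bypass-attributes affinity include-any green
 commit
```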
Context
Network changes often cause optimal paths to change. Auto bypass tunnel
re-optimization allows the system to re-optimize an auto bypass tunnel if a
better path to the same destination emerges, for example, because a link cost
changes. In this manner, network resources are optimized.
Procedure
Step 1 Run system-view
After you configure the automatic re-optimization in the MPLS view, you can
return to the user view and run the mpls te reoptimization command to
immediately re-optimize the tunnels on which the automatic re-optimization is
enabled. After you perform the manual re-optimization, the timer of the
automatic re-optimization is reset and counts again.
----End
Prerequisites
MPLS TE Auto FRR has been configured.
Procedure
● Run the display mpls te tunnel verbose command to check the binding of a
primary tunnel and an automatic bypass tunnel.
● Run the display mpls te tunnel-interface command to check detailed
information about an automatic bypass tunnel.
● Run the display mpls te tunnel path command to check information about
paths of a primary or bypass tunnel.
----End
Context
TE FRR provides local link or node protection for TE tunnels. TE FRR works in
either facility backup or one-to-one backup mode. Table 1-8 compares facility
backup and one-to-one backup.
TE FRR in one-to-one backup mode is also called MPLS detour FRR. Each eligible
node automatically creates a detour LSP.
This section describes how to configure MPLS detour FRR. For information about
how to configure TE FRR in facility backup mode, see 1.1.3.25 Configuring MPLS
TE Manual FRR and 1.1.3.26 Configuring MPLS TE Auto FRR.
NOTE
● The facility backup and one-to-one backup modes are mutually exclusive on the same
TE tunnel interface. If both modes are configured, the latest configured mode overrides
the previous one.
● After MPLS detour FRR is configured, nodes on a TE tunnel are automatically enabled to
record routes and labels. Before you disable the route and label record functions, disable
MPLS detour FRR.
Pre-configuration Tasks
Before configuring MPLS detour FRR, configure an RSVP-TE tunnel.
NOTE
CSPF must be enabled on each node along both the primary and backup RSVP-TE tunnels.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel interface-number
The TE tunnel interface view is displayed.
If you run the mpls te detour and mpls te fast-reroute commands on the same
tunnel interface, the latest configuration overrides the previous one.
----End
Context
After MPLS detour FRR is enabled on a tunnel, the ingress node calculates a
detour LSP to protect the tunnel if the tunnel fails. Some transit nodes or the
egress node may not support MPLS detour FRR, but they can still function as
protection nodes along a detour LSP.
To disable MPLS detour FRR, run the mpls rsvp-te detour disable command in
the MPLS view. After the mpls rsvp-te detour disable command is run, detour
LSPs that are not in the FRR-in-use state are deleted.
Procedure
Step 1 Run system-view
----End
Usage Scenario
A tunnel protection group provides end-to-end protection for traffic transmitted
along a TE tunnel. If a working tunnel fails, bidirectional automatic protection
switching switches traffic to the protection tunnel.
NOTE
In an MPLS OAM for associated or co-routed LSP scenario where tunnel APS is configured,
if the primary and backup tunnels use the same path and the path fails, both the tunnels
are affected, and services may be interrupted.
A protected tunnel is called a working tunnel. A tunnel that protects the working
tunnel is called a protection tunnel. The working and protection tunnels form a
tunnel protection group. A tunnel protection group works in 1:1 mode. In 1:1
mode, one protection tunnel protects only one working tunnel.
Pre-configuration Tasks
Before configuring an MPLS TE tunnel protection group, create an MPLS TE
working tunnel and a protection tunnel.
Context
A tunnel protection group can be configured on the ingress to protect a working
tunnel. The switchback delay time and a switchback mode can also be configured.
If the revertive mode is used, the wait to restore (WTR) time can be set.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel interface-number
The tunnel interface view is displayed.
Step 3 Run mpls te protection tunnel tunnel-id [ [ holdoff holdoff-time ] | [ mode
{ non-revertive | revertive [ wtr wtr-time ] } ] ] *
The working tunnel is added to a protection group.
The following parameters can be configured in this step:
● tunnel-id specifies the ID of a protection tunnel.
● holdoff-time specifies the period between the time when a signal failure
occurs and the time when the protection switching algorithm is initiated.
holdoff-time is expressed in multiples of 100 milliseconds.
----End
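For instance (the protection tunnel ID, hold-off time, and WTR time are assumed values, not taken from this guide):

```
interface Tunnel10
 # Protect this working tunnel with tunnel 200, revertive mode
 mpls te protection tunnel 200 holdoff 5 mode revertive wtr 10
 commit
```

Here holdoff 5 corresponds to 500 ms, because holdoff-time is expressed in multiples of 100 milliseconds.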
Follow-up Procedure
You can also perform the preceding steps to modify a protection group.
An MPLS TE tunnel protection group must be detected by MPLS OAM or MPLS-TP
OAM to rapidly trigger protection switching if a fault occurs.
After an MPLS TE tunnel protection group is created, properly select an MPLS
OAM mechanism:
● If both the working and protection tunnels are static bidirectional associated
LSPs, configure MPLS OAM for bidirectional associated LSPs.
● If both the working and protection tunnels are static bidirectional co-routed
LSPs, configure MPLS OAM for bidirectional co-routed LSPs.
● If OAM is deleted before APS is deleted, APS incorrectly considers that OAM
has detected a link fault, affecting protection switching. To resolve this issue,
re-configure OAM.
After an MPLS TE tunnel protection group is created, if MPLS-TP OAM is used to
detect faults and both the working and protection tunnels are static bidirectional
co-routed CR-LSPs, configure MPLS-TP OAM for bidirectional co-routed LSPs.
Context
Read switching rules before configuring the protection switching trigger
mechanism.
Perform the following steps on the ingress of the tunnel protection group as
needed:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel interface-number
The tunnel interface view is displayed.
Step 3 Select one of the following protection switching trigger methods as required:
● To forcibly switch traffic from the working tunnel to the protection tunnel, run
mpls te protect-switch force
● To prevent traffic on the working tunnel from switching to the protection
tunnel, run mpls te protect-switch lock
● To switch traffic to the protection tunnel, run mpls te protect-switch manual
● To cancel the configuration of the protection switching trigger mechanism,
run mpls te protect-switch clear
NOTE
The preceding commands can take effect immediately after being run, without the commit
command executed.
----End
Prerequisites
A tunnel protection group has been configured.
Procedure
Step 1 Run the display mpls te protection tunnel { all | tunnel-id | interface tunnel-
interface-name } [ verbose ] command to check information about a tunnel
protection group.
----End
Prerequisites
Before configuring an MPLS TE associated tunnel group, complete the following
task:
Context
The bandwidth of an MPLS TE tunnel is limited, but the bandwidth of services
carried by the tunnel cannot be limited. As a result, the tunnel bandwidth may
become insufficient in some service scenarios, for example, when routes or VPN
services recurse to the tunnel. To address this issue, you can create an associated
tunnel group, specify the current tunnel as the original tunnel of the group, and
specify split tunnels for the original tunnel. The split tunnels can carry services
together with the original tunnel, relieving bandwidth pressure.
Procedure
Step 1 Configure split tunnels.
1. Run system-view
The system view is displayed.
5. Run destination ip-address
The destination address of the tunnel is specified.
7. Run mpls te signal-protocol rsvp-te
RSVP-TE is enabled.
8. Run mpls te split-tunnel
The tunnel is configured as a split tunnel.
----End
Usage Scenario
To synchronize data between TEDBs in an IGP area, OSPF TE or IS-IS TE is
configured to update TEDB information and flood bandwidth information if the
remaining bandwidth changes on an MPLS interface.
The NE9000 supports the following methods of controlling bandwidth information
flooding:
● Configure flooding commands to enable immediate bandwidth information
flooding on a device.
● Configure a flooding interval to enable periodic bandwidth information
flooding on a device.
● Configure a flooding threshold to prevent frequent flooding.
– When the percentage of the bandwidth reserved for the MPLS TE tunnel
on a link to the remaining link bandwidth in the TEDB is greater than or
equal to the configured threshold (flooding threshold), OSPF TE and IS-IS
TE flood link bandwidth information to all devices in this area and update
TEDB information.
– When the percentage of the bandwidth released by the MPLS TE tunnel
to the remaining link bandwidth in the TEDB is greater than or equal to
the configured threshold, OSPF TE and IS-IS TE flood link bandwidth
information to all devices in this area and update TEDB information.
Pre-configuration Tasks
Before adjusting the flooding threshold, configure an RSVP-TE tunnel.
Procedure
● Configure forcible bandwidth information flooding.
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
Usage Scenario
Physical links over which a TE tunnel is established may also transmit traffic of
other TE tunnels, non-CR-LSP traffic, or even IP traffic, in addition to the TE tunnel
traffic. To limit the TE tunnel traffic within a bandwidth range that is actually
configured, set a limit rate for TE tunnel traffic.
After the rate limit is configured, TE traffic is restricted to the configured
bandwidth. TE traffic exceeding the configured bandwidth is dropped.
NOTE
Before you configure rate limiting for MPLS TE traffic, run the mpls te bandwidth
command on the corresponding tunnel interface. If this command is not run, rate limiting is
not performed for MPLS TE traffic.
Pre-configuration Tasks
Before configuring rate limiting for MPLS TE traffic, complete the following tasks:
● Configure an RSVP-TE tunnel.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
The tunnel interface view is displayed.
Step 3 Run mpls te bandwidth ctType ctValue
The bandwidth constraint of the MPLS TE tunnel is configured.
Step 4 Run mpls te lsp-tp outbound
Traffic policing for the MPLS TE tunnel is enabled.
Step 5 Run commit
The configuration is committed.
----End
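The complete sequence, with an assumed CT0 bandwidth constraint of 50000 kbit/s (the tunnel number and value are illustrative):

```
interface Tunnel10
 # Bandwidth constraint; required before rate limiting takes effect
 mpls te bandwidth ct0 50000
 # Police outbound TE traffic to the configured bandwidth
 mpls te lsp-tp outbound
 commit
```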
Usage Scenario
Topology and link attributes of an IP/MPLS network are changeable. As a result, a
path over which an MPLS TE tunnel has been established may not be optimal.
Tunnel re-optimization can be configured to allow the MPLS TE tunnel to be
reestablished over an optimal path.
Re-optimization is implemented in either of the following modes:
● Periodic re-optimization: The system attempts to reestablish tunnels over
better paths (if any exist) at a specified interval configured using the mpls te
reoptimization frequency command.
NOTE
Tunnel re-optimization is performed based on tunnel path constraints. During path re-
optimization, path constraints, such as explicit path constraints and bandwidth constraints,
are also considered.
Tunnel re-optimization cannot be used on tunnels for which a system selects paths in most-
fill tie-breaking mode.
Pre-configuration Tasks
Before configuring tunnel re-optimization, configure an RSVP-TE tunnel.
Procedure
● (Optional) Enable IGP metric-based re-optimization for an MPLS TE tunnel.
Perform this step if you want to re-optimize an MPLS TE tunnel based only on
the IGP metric. The following constraints are ignored during re-optimization:
The global MPLS configuration takes effect on all MPLS TE tunnels and is
used for batch configuration. A single tunnel configuration takes
precedence over the global MPLS configuration.
NOTE
This command takes effect only for link change-triggered tunnel re-optimization.
It does not take effect if the mpls te reoptimization (tunnel interface view)
command is run with the frequency interval parameter specified.
This command disables the re-optimization function configured for all tunnels.
h. Run commit
----End
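As a sketch, periodic re-optimization can be enabled on a single tunnel interface as follows. The tunnel number and the 300-second frequency are illustrative:

```
<HUAWEI> system-view
[~HUAWEI] interface Tunnel 10
[~HUAWEI-Tunnel10] mpls te reoptimization frequency 300
[~HUAWEI-Tunnel10] commit
```

After this configuration, you can also run the mpls te reoptimization command from the user view to trigger an immediate re-optimization of tunnels on which automatic re-optimization is enabled.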
Context
Most IP radio access networks (IP RANs) that use Multiprotocol Label Switching
(MPLS) TE have high reliability requirements for LSPs. However, the existing CSPF
algorithm computes LSP paths solely by minimizing link cost and cannot
automatically calculate completely disjoint primary and secondary LSP paths.
Specifying explicit paths can meet this reliability requirement; this method,
however, does not adapt to topology changes. Each time a node is added to or
deleted from the IP RAN, carriers must modify the explicit paths, which is time-
consuming and laborious.
To resolve these problems, you can configure isolated LSP computation. After this
feature is enabled, the disjoint and CSPF algorithms work together to compute
the primary and hot-standby LSPs at the same time and to eliminate any
overlapping segments of the two paths. The device then obtains isolated primary
and hot-standby LSPs.
NOTE
● Isolated LSP computation is a best-effort technique. If the disjoint and CSPF algorithms
cannot get isolated primary and hot-standby LSPs or two isolated LSPs do not exist, the
device uses the primary and hot-standby LSPs computed by CSPF.
● After you enable the disjoint algorithm, the shared risk link group (SRLG), if configured,
becomes ineffective.
Pre-configuration Tasks
Before configuring isolated LSP computation, complete the following tasks:
Procedure
Step 1 Run system-view
----End
Usage Scenario
Automatic bandwidth adjustment can be enabled so that the system adjusts a
tunnel's bandwidth automatically based on the traffic the tunnel actually carries.
The system periodically samples the traffic rate on the outbound interface of a TE
tunnel. The sampling interval is configured in the MPLS view using the mpls te
timer auto-bandwidth command and takes effect for all MPLS TE tunnels; the
rate of the tunnel's outbound interface is recorded at each sampling interval.
After multiple samples are collected within an automatic bandwidth adjustment
period, their average value is used as the new bandwidth constraint to request
the establishment of a new LSP. If the new LSP is successfully established, traffic
is switched to it; with make-before-break enabled, the original LSP is torn down
after the new LSP is established. If the new LSP fails to be established, traffic is
still transmitted along the original LSP, and the bandwidth is adjusted in the next
automatic bandwidth adjustment period.
Pre-configuration Tasks
Before configuring automatic bandwidth adjustment, configure an RSVP-TE
tunnel.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls te timer auto-bandwidth [ interval ]
The sampling interval is specified. The actual sampling interval is the larger of
the values set by the mpls te timer auto-bandwidth command and the set
flow-stat interval command.
Step 4 Run quit
Return to the system view.
Step 5 Run interface tunnel interface-number
The tunnel interface view of the MPLS TE tunnel is displayed.
Step 6 Run statistic enable
MPLS TE tunnel statistics can be collected.
----End
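The steps above can be sketched end to end as follows. The sampling interval value and tunnel number are illustrative; as noted in Step 3, the effective interval is the larger of this value and the set flow-stat interval value:

```
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te timer auto-bandwidth 1440
[~HUAWEI-mpls] quit
[~HUAWEI] interface Tunnel 10
[~HUAWEI-Tunnel10] statistic enable
[~HUAWEI-Tunnel10] commit
```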
Usage Scenario
In service delivery, a controller delivers tunnel configurations to a NE9000. The
NE9000 uses the obtained configurations to create a tunnel or modify an existing
tunnel. To prevent users from modifying such tunnel configurations, the controller
delivers the mpls te lock command together with the tunnel configurations to
lock them on the NE9000. Before you modify the configuration of a locked
tunnel, run the undo mpls te lock command on the tunnel interface to unlock
the tunnel configuration.
Pre-configuration Tasks
Before locking the tunnel configuration, complete the following tasks:
Procedure
Step 1 Run system-view
----End
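For example, to modify a tunnel whose configuration a controller has locked, first unlock it on the tunnel interface (the tunnel number is illustrative):

```
<HUAWEI> system-view
[~HUAWEI] interface Tunnel 20
[~HUAWEI-Tunnel20] undo mpls te lock
[~HUAWEI-Tunnel20] commit
```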
Usage Scenario
The demand for multicast services, such as IPTV, multimedia conferences, and
massively multiplayer online role-playing games (MMORPGs), has steadily
increased on IP/MPLS backbone networks. These services require sufficient
network bandwidth, assured quality of service (QoS), and high reliability. The
following multicast solutions are available, but are insufficient for the
requirements of multicast services or network carriers:
● IP multicast technology: deployed on a live IP network by upgrading
software. This solution reduces upgrade and maintenance costs. However, IP
multicast, like IP unicast, does not support QoS or TE capabilities and has low
reliability.
● Dedicated multicast network: deployed using synchronous optical network
(SONET)/synchronous digital hierarchy (SDH) technologies. This solution
provides high reliability and transmission rates, but has high construction
costs and requires separate maintenance.
IP/MPLS backbone network carriers require a multicast solution that has high TE
capabilities and can be implemented by upgrading existing devices.
Pre-configuration Tasks
Before configuring P2MP TE tunnels, complete the following tasks:
Configuration Procedures
Context
You can configure a P2MP TE tunnel only after P2MP TE is globally enabled on
each node.
Procedure
Step 1 Run system-view
MPLS TE is enabled.
----End
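The procedure above is truncated in this excerpt. As a sketch, globally enabling MPLS TE and P2MP TE on a node could look as follows; the global mpls te p2mp-te command shown here is inferred from the interface-level mpls te p2mp-te disable command discussed in the next procedure, so check it against the command reference:

```
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te
[~HUAWEI-mpls] mpls te p2mp-te
[~HUAWEI-mpls] commit
```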
Context
After P2MP TE is globally enabled, P2MP TE is automatically enabled on each
MPLS TE-enabled interface of the local node. If network planning requires P2MP
TE to be disabled on a specific interface, or an interface does not need P2MP TE
because it does not support P2MP forwarding, disable P2MP TE on that interface.
Procedure
Step 1 Run system-view
After the mpls te p2mp-te disable command is run, P2MP TE LSPs established on
the interface are torn down, and newly configured P2MP TE LSPs on the interface
fail to be established.
----End
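For example, to disable P2MP TE on one interface (the interface name is illustrative):

```
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet 0/1/0
[~HUAWEI-GigabitEthernet0/1/0] mpls te p2mp-te disable
[~HUAWEI-GigabitEthernet0/1/0] commit
```

Note that, as described above, running this command tears down P2MP TE LSPs established on the interface, and new P2MP TE LSPs on the interface fail to be established.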
Context
Before the primary sub-LSP is deleted, both the primary sub-LSP and the
modified sub-LSP carry traffic. If the egress cannot be restricted to receiving
traffic from only one sub-LSP, two copies of the traffic exist. To prevent duplicate
traffic, perform the following steps to set the leaf CR-LSP switchover hold-off
time and deletion hold-off time.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
MPLS is enabled globally and the MPLS view is displayed.
Step 3 Run mpls te
MPLS TE is enabled globally.
A leaf CR-LSP switchover hold-off time and deletion hold-off time can be set only
after MPLS TE is enabled globally.
Step 4 Run mpls te p2mp-te leaf switch-delay switch-time delete-delay delete-time
The leaf MBB switchover delay and deletion delay are set.
NOTICE
----End
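A sketch of Steps 1 to 4 follows. The switch-delay and delete-delay values are illustrative only, since their units and ranges are not stated in this excerpt:

```
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te
[~HUAWEI-mpls] mpls te p2mp-te leaf switch-delay 200 delete-delay 20000
[~HUAWEI-mpls] commit
```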
Context
For a P2MP TE tunnel, the path that originates from the ingress and is destined for
each leaf node can be calculated either by constraint shortest path first (CSPF) or
by planning an explicit path for a specific leaf node or each leaf node. After each
leaf node is configured on an ingress, the ingress sends signaling packets to each
leaf node and then establishes a P2MP TE tunnel. The NE9000 uses leaf lists to
configure and manage leaf nodes. All leaf nodes and their explicit paths are
integrated into a table, which helps you configure and manage the leaf nodes
uniformly.
An MPLS network that transmits multicast services dynamically selects leaf nodes
for an automatic P2MP TE tunnel and uses constrained shortest path first (CSPF)
to calculate a path destined for each leaf node. To control the leaf nodes of an
automatic P2MP TE tunnel, configure a leaf list.
Explicit path planning requires you to configure an explicit path for a specific leaf
node or each leaf node, and use the explicit path in the leaf list view.
NOTE
Procedure
Step 1 Run system-view
For a P2MP TE tunnel, an explicit path can be configured for a specific leaf node
or for each leaf node.
A leaf list is specified for the P2MP TE tunnel, and the leaf list view is displayed.
The leaf-address parameter specifies the MPLS LSR ID of each leaf node.
The explicit-path path-name parameter specifies the name of the explicit path
established in Step 2.
NOTE
Repeat Step 3 and Step 4 on a P2MP TE tunnel to configure all leaf nodes.
----End
Context
A P2MP TE tunnel is established by binding multiple sub-LSPs to a P2MP TE
tunnel interface. A network administrator configures a tunnel interface to manage
and maintain the tunnel. After a tunnel interface is configured on an ingress, the
ingress sends signaling packets to all leaf nodes to establish a tunnel.
Procedure
Step 1 Run system-view
Step 3 Run either of the following commands to assign an IP address to the tunnel
interface:
● Run ip address ip-address { mask | mask-length } [ sub ]
An IP address is assigned to the tunnel interface.
● Run ip address unnumbered interface interface-type interface-number
The tunnel interface is allowed to borrow the IP address of a specified
interface.
A tunnel ID is set.
Step 8 (Optional) Perform the following operations to set other tunnel attributes as
needed:
Operation: Run mpls te record-route [ label ]
The route and label recording function for a manual P2MP TE tunnel is enabled.
Description: This step enables nodes along a P2MP TE tunnel to use RSVP
messages to record detailed P2MP TE tunnel information, including the IP
address of each hop. The label parameter in the command enables RSVP
messages to record label values.
----End
Context
Attributes of an automatic P2MP TE tunnel can only be defined in a P2MP tunnel
template, but cannot be configured on a tunnel interface because the automatic
P2MP TE tunnel has no tunnel interface. When NG MVPN or multicast VPLS is
deployed on a network, nodes that transmit multicast traffic can reference the
template and use attributes defined in the template to automatically establish
P2MP TE tunnels.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls te p2mp-template template-name
A P2MP tunnel template is created, and the P2MP tunnel template view is
displayed.
Step 3 Select one or multiple operations.
Operation: Run resv-style { se | ff }
Description: A resource reservation style is specified.
NOTE
In an inter-AS scenario, if a loose explicit path is configured in a P2MP template, you need
to run this command only when NG MVPN triggers the establishment of a dynamic P2MP
tunnel. You are advised not to run this command in other scenarios.
----End
Context
If there is no path meeting the bandwidth requirement of a desired tunnel, a
device can tear down an established tunnel and use bandwidth resources assigned
to that tunnel to establish a desired tunnel. This is called preemption. The
following preemption modes are supported:
● Hard preemption: A CR-LSP with a higher setup priority can directly preempt
resources assigned to a CR-LSP with a lower holding priority. Some traffic is
dropped on the CR-LSP with a lower holding priority during the hard
preemption process.
● Soft preemption: After a CR-LSP with a higher setup priority preempts
bandwidth of a CR-LSP with a lower holding priority, the soft preemption
function retains the CR-LSP with a lower holding priority for a specified period
of time. If the ingress finds a better path for this CR-LSP after the time
elapses, the ingress uses the make-before-break mechanism to reestablish the
CR-LSP over the new path. If the ingress fails to find a better path after the
time elapses, the CR-LSP goes down.
Procedure
● Configure soft preemption in the P2MP TE tunnel template view.
a. Run system-view
----End
Context
To improve reliability of traffic transmitted along a P2MP tunnel, configure the
following reliability enhancement functions as needed:
Procedure
● Configure rapid MPLS P2MP switching.
a. Run system-view
The system view is displayed.
b. Run mpls p2mp fast-switch enable
Rapid MPLS P2MP switching is enabled.
c. Run commit
The configuration is committed.
● Configure multicast load balancing on a trunk interface.
a. Run system-view
The system view is displayed.
----End
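For example, rapid MPLS P2MP switching is a single global switch:

```
<HUAWEI> system-view
[~HUAWEI] mpls p2mp fast-switch enable
[~HUAWEI] commit
```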
Prerequisites
A P2MP TE tunnel has been configured.
Procedure
● Run the display mpls te p2mp tunnel-interface command to check
information about the P2MP TE tunnel interface on the ingress and all sub-
LSPs.
----End
Follow-up Procedure
If errors occur in tunnel services, perform the following to quickly restore the
services if no other workarounds are available.
Usage Scenario
If P2MP TE tunnels are established to transmit NG MVPN and multicast VPLS
services, BFD for P2MP TE can be configured to rapidly detect faults in P2MP TE
tunnels, which improves network reliability. To configure BFD for P2MP TE, run the
bfd enable command in the P2MP tunnel template view so that BFD sessions for
P2MP TE can be automatically established while P2MP TE tunnels are being
established.
Context
Before configuring BFD for P2MP TE, configure an automatic P2MP TE tunnel.
Procedure
Step 1 Configure the ingress.
1. Run system-view
The system view is displayed.
2. Run bfd
BFD is enabled.
3. Run quit
Return to the system view.
4. Run mpls te p2mp-template template-name
A P2MP tunnel template is created, and the MPLS TE P2MP template view is
displayed.
5. Run bfd enable
BFD for P2MP TE is enabled.
6. (Optional) Run bfd { min-tx-interval tx-interval | min-rx-interval rx-interval
| detect-multiplier multiplier } *
BFD for P2MP TE parameters are set.
7. Run commit
The configuration is committed.
Step 2 Configure each leaf node.
1. Run system-view
The system view is displayed.
2. Run bfd
BFD is enabled, and the BFD view is displayed.
3. Run mpls-passive
The egress is enabled to create a BFD session passively.
The egress has to receive an LSP ping request carrying a BFD TLV before
creating a BFD session.
4. Run commit
The configuration is committed.
----End
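Combining both steps, a sketch of BFD for P2MP TE might look as follows. The template name and BFD timer values are illustrative, and the view prompts are simplified:

```
# On the ingress
<HUAWEI> system-view
[~HUAWEI] bfd
[~HUAWEI-bfd] quit
[~HUAWEI] mpls te p2mp-template template1
[~HUAWEI-p2mp-template] bfd enable
[~HUAWEI-p2mp-template] bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
[~HUAWEI-p2mp-template] commit

# On each leaf node
<HUAWEI> system-view
[~HUAWEI] bfd
[~HUAWEI-bfd] mpls-passive
[~HUAWEI-bfd] commit
```

With mpls-passive configured, each leaf waits to receive an LSP ping request carrying a BFD TLV before creating its BFD session.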
Usage Scenario
P2MP TE FRR establishes a bypass tunnel to provide local link protection for the
P2MP TE tunnel called the primary tunnel. The bypass tunnel is a P2P TE tunnel.
The principles and concepts of P2MP TE FRR are similar to those of P2P TE FRR.
The NE9000 supports FRR link protection, not node protection, over a P2MP TE
tunnel. Therefore, path planning for the bypass tunnel is irrelevant to node
protection. For example, in Figure 1-9, the bypass tunnel path planned for the link
between P1 and P2 can provide link protection. However, the bypass tunnel path
planned for the link between P3 and PE4 traverses node P4; as a result, that
bypass tunnel cannot be bound to the primary tunnel or provide link protection.
P2P and P2MP TE tunnels can share a bypass tunnel. Therefore, when planning
bandwidth for the bypass tunnel, ensure that the bypass tunnel bandwidth is
equal to the total bandwidth of the bound P2P and P2MP tunnels.
NOTE
Pre-configuration Tasks
Before configuring P2MP TE FRR, complete the following task:
● Configure a P2MP TE tunnel.
Context
The process of configuring P2MP TE FRR is identical to that for configuring P2P TE
FRR, which includes the following two procedures:
● Enable the P2MP TE FRR function on the tunnel interface of the primary
tunnel (P2MP TE tunnel).
● Configure a bypass tunnel on the point of local repair (PLR) node and
bind the bypass tunnel to the primary tunnel.
NOTE
Procedure
● Enable the P2MP TE FRR function on the tunnel interface of the primary
tunnel (P2MP TE tunnel).
a. Run system-view
The system view is displayed.
b. Run interface tunnel interface-number
The MPLS TE tunnel interface view is displayed.
c. Run mpls te fast-reroute [ bandwidth ]
The P2MP TE FRR function is enabled.
NOTE
You can run the mpls te bypass-attributes command to configure bypass tunnel
attributes only after running the mpls te fast-reroute bandwidth command.
d. (Optional) Run mpls te bypass-attributes
Bypass tunnel attributes are configured.
e. Run commit
The configuration is committed.
● Configure a bypass tunnel on the PLR node and bind the bypass tunnel to the
primary tunnel.
a. Run system-view
The system view is displayed.
b. Run interface tunnel tunnel-number
NOTE
The explicit path planned for the bypass tunnel and the primary tunnel path to
be protected must use different physical links.
g. (Optional) Run mpls te bandwidth ct0 bandwidth
The bypass tunnel bandwidth is configured.
h. Run mpls te bypass-tunnel
A bypass tunnel is configured.
i. Run mpls te protected-interface interface-type interface-number
A link interface to be protected by a bypass tunnel is specified.
j. Run commit
The configuration is committed.
----End
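As a sketch, the two procedures above could be combined as follows. The tunnel numbers, bandwidth value, and protected interface are illustrative, and the bypass tunnel's destination and explicit-path configuration are omitted here:

```
# On the ingress of the primary P2MP TE tunnel
<HUAWEI> system-view
[~HUAWEI] interface Tunnel 10
[~HUAWEI-Tunnel10] mpls te fast-reroute bandwidth
[~HUAWEI-Tunnel10] commit

# On the PLR: create the bypass tunnel and bind it to the protected link
[~HUAWEI] interface Tunnel 11
[~HUAWEI-Tunnel11] mpls te bandwidth ct0 10000
[~HUAWEI-Tunnel11] mpls te bypass-tunnel
[~HUAWEI-Tunnel11] mpls te protected-interface GigabitEthernet 0/1/0
[~HUAWEI-Tunnel11] commit
```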
Context
The process of configuring FRR for automatic P2MP TE tunnels is as follows:
● Configure the ingress.
● Configure a bypass tunnel on the PLR and bind it to the primary tunnel.
Procedure
● Configure the ingress.
a. Run system-view
The system view is displayed.
b. Run mpls te p2mp-template template-name
A P2MP tunnel template is created, and the MPLS TE P2MP template
view is displayed.
The bandwidth parameter sets the bandwidth for the bypass tunnel. The
priority parameter sets the holding and setup priority values for the
bypass tunnel.
e. Run commit
NOTE
The explicit path planned for the bypass tunnel and the primary tunnel path to
be protected must use different physical links.
g. (Optional) Run mpls te bandwidth ct0 bandwidth
----End
Prerequisites
P2MP TE FRR has been configured.
Procedure
● Run the display mpls te p2mp tunnel frr [ tunnel-name ] [ lsp-id ingress-
lsr-id session-id local-lsp-id [ s2l-destination leaf-address ] ] command to
check bypass tunnel attributes.
----End
Usage Scenario
FRR protection is configured for networks requiring high reliability. If P2MP TE
manual FRR is used (configured by following the steps in Configuring P2MP TE
FRR), a lot of configurations are needed on a network with complex topology and
a great number of links to be protected. In this situation, P2MP TE Auto FRR can
be configured.
Unlike P2MP TE manual FRR, P2MP TE Auto FRR automatically creates a bypass
tunnel that meets traffic requirements, which simplifies configurations.
The NE9000 supports upgrade binding: if a bypass tunnel with a higher priority
than the existing bypass tunnel is calculated, the primary tunnel is automatically
unbound from the existing bypass tunnel and bound to the higher-priority one. A
bypass tunnel is selected based on the following rules, listed in descending order
of priority:
● An SRLG attribute is configured for a bypass tunnel.
● If P2MP TE Auto FRR and an SRLG attribute are configured, the primary and
bypass tunnels must be in different SRLGs. If these two tunnels are in the
same SRLG, the bypass tunnel may fail to be established.
● A bypass tunnel with bandwidth protection configured takes preference over
that with non-bandwidth protection configured.
A bypass tunnel provides link protection, not node protection, for its primary
tunnel.
Pre-configuration Tasks
Before configuring P2MP TE Auto FRR, complete the following tasks:
Context
Perform either of the following operations to enable P2MP TE Auto FRR on the
NE9000:
● Configure the entire device and its interface when Auto FRR needs to be
configured on most interfaces.
● Configure a specified interface only when Auto FRR needs to be enabled on
just a few interfaces.
Procedure
● Configure the entire device and its interface.
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run mpls te auto-frr self-adapting
MPLS TE Auto FRR is enabled globally.
P2MP TE FRR only supports link protection, while a bypass tunnel that
the ingress establishes supports node protection by default. As a result,
the bypass tunnel fails to be established. To prevent the establishment
failure, configure the self-adapting parameter in this command, which
enables the ingress to automatically switch from node protection to link
protection.
d. Run mpls te p2mp-te auto-frr enable
P2MP TE Auto FRR is enabled globally.
e. Run quit
Return to the system view.
f. Run interface interface-type interface-number
The view of the outbound interface on the primary tunnel is displayed.
g. (Optional) Run mpls te auto-frr { block | default }
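A sketch of the device-wide enablement branch of this procedure:

```
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te auto-frr self-adapting
[~HUAWEI-mpls] mpls te p2mp-te auto-frr enable
[~HUAWEI-mpls] commit
```

The self-adapting parameter lets the ingress fall back from node protection to link protection, which is the only mode P2MP TE FRR supports.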
1.1.3.40.2 Enabling TE FRR and Configuring Auto Bypass Tunnel Attributes
After MPLS TE FRR is enabled on the ingress of a primary LSP, a bypass LSP is
established automatically.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
The tunnel interface view of the primary LSP is displayed.
Step 3 Run mpls te fast-reroute [ bandwidth ]
TE FRR is enabled.
If TE FRR bandwidth protection is needed, configure the bandwidth parameter in
this command.
Step 4 (Optional) Run mpls te frr-switch degrade
The MPLS TE tunnel is enabled to mask the FRR function.
After TE FRR takes effect, traffic is switched to the bypass LSP when the primary
LSP fails. If the bypass LSP is not the optimal path, traffic congestion easily occurs.
To prevent traffic congestion, you can configure LDP to protect TE tunnels. To have
the LDP protection function take effect, you need to run the mpls te frr-switch
degrade command to enable the MPLS TE tunnel to mask the FRR function. After
the command is run:
1. If the primary LSP is in the FRR-in-use state (that is, traffic has been switched
to the bypass LSP), traffic cannot be switched to the primary LSP.
2. If HSB is configured for the tunnel and an HSB LSP is available, traffic is
switched to the HSB LSP.
3. If no HSB LSP is available for the tunnel, the tunnel is unavailable, and traffic
is switched to another tunnel like an LDP tunnel.
4. If no tunnels are available, traffic is interrupted.
NOTE
● The bandwidth attribute can only be set for the bypass LSP after the mpls te fast-
reroute bandwidth command is run for the primary LSP.
● The bypass LSP bandwidth cannot exceed the primary LSP bandwidth.
● If no attributes are configured for an automatic bypass LSP, by default, the automatic
bypass LSP uses the same bandwidth as that of the primary LSP.
● The setup priority of a bypass LSP must be lower than or equal to the holding priority.
These priorities cannot be higher than the corresponding priorities of the primary LSP.
● If TE FRR is disabled, the bypass LSP attributes are automatically deleted.
The interface view of the link through which the bypass LSP passes is
displayed.
2. Run mpls te auto-frr attributes { bandwidth bandwidth | priority setup-
priority [ hold-priority ] | hop-limit hop-limit-value }
Attributes are configured for the bypass LSP.
3. Run quit
Affinities determine link attributes of an automatic bypass LSP. Affinities and a link
administrative group attribute are used together to determine over which links the
automatic bypass LSP can be established.
NOTE
If an automatic bypass LSP that satisfies the specified affinity cannot be established, a
node will bind a manual bypass LSP satisfying the specified affinity to the primary LSP.
----End
Context
Network changes often cause the changes in optimal paths. Auto bypass tunnel
re-optimization allows the system to re-optimize an auto bypass tunnel if an
optimal path to the same destination is found due to some reasons, such as the
changes in the cost. In this manner, network resources are optimized.
NOTE
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls te auto-frr reoptimization [ frequency interval ]
Auto bypass tunnel re-optimization is enabled.
Step 4 (Optional) Run return
Return to the user view.
Step 5 (Optional) Run mpls te reoptimization [ auto-tunnel name tunnel-interface |
tunnel tunnel-number ]
Manual re-optimization is enabled.
After you configure the automatic re-optimization in the MPLS view, you can
return to the user view and run the mpls te reoptimization command to
immediately re-optimize the tunnels on which the automatic re-optimization is
enabled. After you perform the manual re-optimization, the timer of the
automatic re-optimization is reset and counts again.
Step 6 Run commit
----End
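For example, with an illustrative re-optimization frequency value (check the command reference for its unit and range):

```
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te auto-frr reoptimization frequency 3600
[~HUAWEI-mpls] commit
[~HUAWEI-mpls] return
<HUAWEI> mpls te reoptimization
```

The final command, run from the user view, triggers an immediate re-optimization and resets the automatic re-optimization timer, as described above.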
Prerequisites
P2MP TE Auto FRR has been configured.
Procedure
● Run the display mpls te p2mp tunnel frr [ tunnel-name ] [ lsp-id ingress-
lsr-id session-id local-lsp-id [ s2l-destination leaf-address ] ] command to
check bypass tunnel attributes.
----End
Usage Scenario
A static CR-LSP is easy to configure. Labels are manually allocated, and no
signaling protocol is used to exchange control packets. The setup of a static CR-
LSP consumes only a few resources, and you do not need to configure an IGP TE
extension or CSPF for the static CR-LSP. However, a static CR-LSP cannot
dynamically adapt to network changes, so its application is quite limited.
MPLS TE tunnels apply to one of the following VPN scenarios:
● A single TE tunnel transmits various types of services in a non-VPN scenario.
● A single TE tunnel transmits various types of services in a VPN instance.
● A single TE tunnel transmits various types of services in multiple VPN
instances.
● A single TE tunnel transmits various types of VPN and non-VPN services.
Traditional MPLS TE tunnels (non-standard DS-TE tunnels) cannot transmit
services based on service types in compliance with the quality of service (QoS). For
example, when a TE tunnel carries both voice and video flows, video flows may
have more duplicate frames than voice flows. Therefore, video flows require higher
drop precedence than the voice flows. The same drop precedence, however, is used
for voice and video flows on MPLS TE tunnels.
To prevent services over a tunnel from interfering with each other, establish a
tunnel for each type of service in a VPN instance or for each type of non-VPN
service. This solution wastes resources because a large number of tunnels are
established when many VPN instances carry various services.
In the preceding MPLS TE tunnel scenarios, the DS-TE tunnel solution is optimal.
An edge node in a DS-TE domain classifies services and adds service type
information in the EXP field in packets. A transit node merely checks the EXP field
to select a proper PHB to forward packets.
A DS-TE tunnel classifies services and reserves resources for each type of service,
which improves network resource use efficiency. A DS-TE tunnel carries a
maximum of eight types of services.
NOTE
● The IETF DS-TE tunnel configuration requires the ingress and egress hardware to
support HQoS. The non-IETF DS-TE tunnel has no such a restriction.
● If the same type of service in multiple VPN instances is carried using the same CT of a
DS-TE tunnel, the bandwidth of each type of service in each VPN instance can be set on
an access CE to prevent services of the same type but different VPN instances from
competing for resources.
● To prevent non-VPN services and VPN services from competing for resources,
you can configure DS-TE to carry VPN services only or configure the
bandwidth for non-VPN services in DS-TE.
Pre-configuration Tasks
Before configuring DS-TE, complete the following tasks:
● Configure unicast static routes or an IGP to ensure reachability between
LSRs at the network layer.
● Set an LSR ID on each LSR.
● Enable MPLS globally and on interfaces on all LSRs.
● Enable MPLS TE and RSVP-TE on all LSRs and their interfaces.
● Enable behavior aggregate (BA) traffic classification on each LSR interface
along an LSP.
Context
Perform the following steps on each LSR in a DS-TE domain:
NOTE
If bandwidth constraints are configured for a tunnel, the IETF and non-IETF modes cannot
be switched to each other.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls te ds-te mode { ietf | non-ietf }
----End
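For example, to select the IETF DS-TE mode on an LSR:

```
<HUAWEI> system-view
[~HUAWEI] mpls
[~HUAWEI-mpls] mpls te ds-te mode ietf
[~HUAWEI-mpls] commit
```

Keep in mind the note above: once bandwidth constraints are configured for a tunnel, the IETF and non-IETF modes cannot be switched to each other.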
Follow-up Procedure
The IETF mode and non-IETF mode can be switched between each other on the
NE9000. Table 1-13 describes switching between DS-TE modes. The arrow symbol
(—>) indicates "switched to."
Context
Perform the following steps on each LSR in a DS-TE domain:
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Context
Perform the following steps on each outbound interface on a DS-TE LSP:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The view of the link outbound interface is displayed.
Step 3 Run mpls te bandwidth max-reservable-bandwidth max-bw-value
The maximum reservable link bandwidth is set.
Step 4 Run mpls te bandwidth { bc0 bc0Bw | bc1 bc1Bw | bc2 bc2Bw | bc3 bc3Bw | bc4
bc4Bw | bc5 bc5Bw | bc6 bc6Bw | bc7 bc7Bw }*
BC bandwidth is configured for the link.
Step 5 Run commit
The configuration is committed.
----End
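A sketch with illustrative values; the interface name and kbit/s figures are examples only, chosen so that the RDM relationship max-reservable-bandwidth >= bc0Bw >= bc1Bw holds:

```
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet 0/1/0
[~HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth max-reservable-bandwidth 100000
[~HUAWEI-GigabitEthernet0/1/0] mpls te bandwidth bc0 100000 bc1 50000
[~HUAWEI-GigabitEthernet0/1/0] commit
```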
Follow-up Procedure
A distinct bandwidth constraints model determines a specific mapping between
the maximum reservable link bandwidth and BC bandwidth:
● RDM model: max-reservable-bandwidth ≥ bc0Bw ≥ bc1Bw ≥ bc2Bw ≥ bc3Bw
≥ bc4Bw ≥ bc5Bw ≥ bc6Bw ≥ bc7Bw
● MAM model: max-reservable-bandwidth ≥ bc0Bw + bc1Bw + bc2Bw + bc3Bw
+ bc4Bw + bc5Bw + bc6Bw + bc7Bw
The Bandwidth Constraint (BC) bandwidth refers to the bandwidth constraints on
a link, whereas the CT bandwidth refers to the bandwidth constraints of various
types of service traffic on a DS-TE tunnel. The BCi bandwidth (0 <= i <= 7) of a
link must be greater than or equal to the sum of all CTi bandwidth values of the
DS-TE tunnels passing through the link. For example, if three LSPs carrying CT1
pass through a link with bandwidth values x, y, and z, respectively, the link
interface's BC1 bandwidth must be greater than or equal to the sum of x, y, and z.
Context
Perform the following steps on the ingress of a TE tunnel to be established:
Procedure
Step 1 Run system-view
Step 4 Run either of the following commands to assign an IP address to the tunnel
interface:
The destination address of the tunnel is configured as the LSR ID of the egress.
A tunnel ID is set.
NOTE
The holding priority must be higher than or equal to the setup priority. If no
holding priority is set, its value is the same as the setup priority. If the
combination of the
bandwidth and priorities is not listed in the TE class mapping table, LSPs cannot be
established.
Each time you change an MPLS TE parameter, run the commit command to
commit the configuration.
----End
Procedure
● Configure IGP TE.
a. Run system-view
For the same node, the sum of CTi bandwidth values must not exceed the BCi
bandwidth values (0 <= i <= 7). CTi can use bandwidth resources only of BCi.
NOTE
If the bandwidth required by the MPLS TE tunnel is higher than 28,630 kbit/s, the
available bandwidth assigned to the tunnel may not be precise, but the tunnel can be
established successfully.
● (Optional) Configure an explicit path.
To limit the path over which an MPLS TE tunnel is established, perform the
following steps on the ingress of the tunnel:
----End
Context
Skip this section if the non-IETF DS-TE mode is used.
In IETF DS-TE mode, plan a TE-class mapping table. Configuring the same TE-class
mapping table on the whole DS-TE domain is recommended. Otherwise, LSPs may
be incorrectly established.
Procedure
Step 1 Run system-view
A TE-class mapping table is created, and the TE-class mapping view is displayed.
● To configure TE-class 0, run te-class 0 class-type { ct0 | ct1 | ct2 | ct3 | ct4 |
ct5 | ct6 | ct7 } priority priority [ description description-info ]
● To configure TE-class 1, run te-class1 class-type { ct0 | ct1 | ct2 | ct3 | ct4 |
ct5 | ct6 | ct7 } priority priority [ description description-info ]
● To configure TE-class 2, run te-class2 class-type { ct0 | ct1 | ct2 | ct3 | ct4 |
ct5 | ct6 | ct7 } priority priority [ description description-info ]
● To configure TE-class 3, run te-class3 class-type { ct0 | ct1 | ct2 | ct3 | ct4 |
ct5 | ct6 | ct7 } priority priority [ description description-info ]
● To configure TE-class 4, run te-class4 class-type { ct0 | ct1 | ct2 | ct3 | ct4 |
ct5 | ct6 | ct7 } priority priority [ description description-info ]
● To configure TE-class 5, run te-class5 class-type { ct0 | ct1 | ct2 | ct3 | ct4 |
ct5 | ct6 | ct7 } priority priority [ description description-info ]
● To configure TE-class 6, run te-class6 class-type { ct0 | ct1 | ct2 | ct3 | ct4 |
ct5 | ct6 | ct7 } priority priority [ description description-info ]
● To configure TE-class 7, run te-class7 class-type { ct0 | ct1 | ct2 | ct3 | ct4 |
ct5 | ct6 | ct7 } priority priority [ description description-info ]
Note the following information when you configure a TE-class mapping table:
● The TE-class mapping table is unique on each device.
● The TE-class mapping table takes effect globally. It takes effect on all DS-TE
tunnels passing through the local LSR.
● A TE-class refers to a combination of a CT and a priority, in the format of <CT,
priority>. The priority is the priority of a CR-LSP in the TE-class mapping table,
not the EXP value in the MPLS header. The priority value is an integer ranging
from 0 to 7; the smaller the value, the higher the priority.
When you create a CR-LSP, you can set the setup and holding priorities for it
(see Configuring a Tunnel Interface) and CT bandwidth values (see
Configuring an RSVP CR-LSP and Specifying Bandwidth Values).
A CR-LSP can be established only when both <CT, setup-priority> and <CT,
holding-priority> exist in the TE-class mapping table. For example, if the TE-class
mapping table of a node contains only TE-Class[0] = <CT0, 6> and TE-Class[1] =
<CT0, 7>, only the following three types of CR-LSPs can be successfully
set up:
– Class-Type = CT0, setup-priority = 6, holding-priority = 6
– Class-Type = CT0, setup-priority = 7, holding-priority = 6
– Class-Type = CT0, setup-priority = 7, holding-priority = 7
NOTE
The combination of setup-priority = 6 and holding-priority = 7 does not exist because the
setup priority of a CR-LSP cannot be higher than its holding priority.
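Continuing the example above, the following sketch sets one of the allowed combinations, setup-priority = 7 and holding-priority = 6, together with a CT0 bandwidth (the tunnel number and bandwidth value are hypothetical):

```text
[~LSRA] interface Tunnel 10
[~LSRA-Tunnel10] mpls te priority 7 6
[*LSRA-Tunnel10] mpls te bandwidth ct0 10000
[*LSRA-Tunnel10] commit
```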
● In the MAM model, a CT with a higher priority preempts bandwidth only of the
same CT, not bandwidth of different CTs.
● In the RDM model, CT bandwidth preemption is limited by the priorities of CR-
LSPs and the matching BCs. Assume that the priorities of CR-LSPs are set to m and n
and the CT values are set to i and j. If 0 <= m < n <= 7 and 0 <= i < j <= 7, the
following situations occur:
– CTi with priority m can preempt the bandwidth of CTi with priority n or
of CTj with priority n.
The default TE-class mapping table is as follows:
TE-Class CT Priority
TE-Class[0] 0 0
TE-Class[1] 1 0
TE-Class[2] 2 0
TE-Class[3] 3 0
TE-Class[4] 0 7
TE-Class[5] 1 7
TE-Class[6] 2 7
TE-Class[7] 3 7
NOTE
After the TE-class mapping is configured, to change TE-class descriptions, run the { te-
class0 | te-class1 | te-class2 | te-class3 | te-class4 | te-class5 | te-class6 | te-class7 }
description description-info command.
----End
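The procedure above can be sketched end-to-end as follows, using the <CT0, 6>/<CT0, 7> mapping from the earlier example (the device name is hypothetical, and it is assumed that the te-class-mapping command entered in the system view creates the table and displays the TE-class mapping view, as the procedure describes):

```text
[~LSRA] te-class-mapping
[*LSRA-te-class-mapping] te-class0 class-type ct0 priority 6
[*LSRA-te-class-mapping] te-class1 class-type ct0 priority 7
[*LSRA-te-class-mapping] commit
```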
Context
When services recurse to multiple TE tunnels, the mpls te service-class command
is run on the TE tunnel interface to set a service class so that a TE tunnel
transmits services of a specified service class.
DS-TE tunnels can be prioritized to receive traffic. One or more priorities can be
assigned to a tunnel to which services recurse. Table 1-15
describes the default mapping between DS-TE tunnels' CTs and flow queues.
Table 1-15 Default mapping between DS-TE tunnel's CTs and flow queues
CT Flow Queue
CT0 be
CT1 af1
CT2 af2
CT3 af3
CT4 af4
CT5 ef
CT6 cs6
CT7 cs7
If services recurse to multiple TE tunnels for load balancing, tunnel selection rules
are the same as those used in CBTS:
1. If the priority attribute of service traffic matches the priority attribute
configured for a tunnel, the service traffic is carried by the tunnel that
matches the priority attribute.
2. If the priority of service traffic does not match a configured priority of a
tunnel, the following rules apply:
a. If a tunnel among the load-balancing tunnels is assigned the default
priority, service traffic that does not match any priority is carried by the
tunnel with the default priority.
b. If none of the load-balancing tunnels is assigned the default priority and
some tunnels are not configured with priorities, service traffic that does
not match any tunnel priority is carried by the tunnels that are not
configured with priorities.
c. If none of the load-balancing tunnels is assigned the default priority but all
tunnels are configured with priorities, traffic that does not match any
tunnel priority is transmitted by the tunnel with the lowest priority.
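The selection rules can be illustrated with two load-balancing tunnels (tunnel numbers and the service-class values are hypothetical): traffic whose service class is af1 is carried by Tunnel 10, while unmatched traffic falls back to Tunnel 20, which is assigned the default service class.

```text
[~LSRA] interface Tunnel 10
[~LSRA-Tunnel10] mpls te service-class af1
[*LSRA-Tunnel10] quit
[*LSRA] interface Tunnel 20
[*LSRA-Tunnel20] mpls te service-class default
[*LSRA-Tunnel20] commit
```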
Procedure
Step 1 Run system-view
A service class is set for packets that an MPLS TE tunnel allows to pass through.
----End
Prerequisites
All DS-TE functions have been configured.
Procedure
● Run the display mpls te ds-te { summary | te-class-mapping [ default |
config | verbose ] } command to check DS-TE information.
● Run the display mpls te te-class-tunnel { all | { ct0 | ct1 | ct2 | ct3 | ct4 | ct5
| ct6 | ct7 } priority priority } command to check information about the TE
tunnel associated with TE-classes.
● Run the display interface tunnel interface-number command to check CT
traffic information on a specified tunnel interface.
● Run the display ospf [ process-id ] mpls-te [ area area-id ] [ self-
originated ] command to check OSPF TE information.
● Run either of the following commands to check the IS-IS TE status:
– display isis traffic-eng advertisements [ lsp-id | local ] [ level-1 |
level-2 | level-1-2 ] [ process-id | vpn-instance vpn-instance-name ]
– display isis traffic-eng statistics [ process-id | vpn-instance vpn-
instance-name ]
----End
Context
After configuring an MPLS TE tunnel, you can run the ping lsp command on the
ingress of the TE tunnel to verify that the ping from the ingress to the egress is
successful. If the ping fails, run the tracert lsp command to locate the fault.
Procedure
● Run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m
interval | -r reply-mode | -s packet-size | -t time-out | -v ] * te tunnel tunnel-
number [ hot-standby ] [ compatible-mode ] command to check the
connectivity of a TE tunnel from the ingress to the egress.
----End
Context
For information about configurations for monitoring a TE tunnel using NQA, see
"NQA Configuration" in Configuration Guide - System Management.
Context
Run the display mpls te tunnel-interface last-error command on the ingress to
view the last five recorded errors that occurred on a TE tunnel. The following
errors may occur:
● CSPF computation failures
● Errors that occurred when RSVP signaling was triggered
● Errors carried in received RSVP PathErr messages
Procedure
Step 1 Run the display mpls te tunnel-interface last-error [ tunnel-name ] command
to check error information on a tunnel interface.
----End
Context
NOTICE
RSVP-TE statistics cannot be restored after being deleted using the reset
command. Exercise caution when running the reset command.
To delete RSVP-TE statistics, run the reset command in the user view.
Procedure
Step 1 Run the reset mpls rsvp-te statistics { global | interface [ interface-type
interface-number ] } command in the user view to delete RSVP-TE statistics.
----End
Context
NOTICE
Resetting the RSVP process causes all RSVP CR-LSPs to be torn down and re-
established.
Procedure
Step 1 In the user view, run reset mpls rsvp-te [ lsp-id lspId-value ] [ tunnel-id
tunnelId-value ] [ ingress-id ingressId-value ] [ egress-id egressId-value ] [ name
name-value ]
The RSVP-TE process is restarted.
----End
Procedure
Step 1 Run the reset mpls te auto-frr { lsp-id ingress-lsrid tunnel-id | name bypass-
tunnel-name } command to tear down an automatic bypass tunnel and
establish a new one.
----End
Context
On a network with a static bidirectional co-routed CR-LSP used to transmit
services, if a few packets are dropped or bit errors occur on links, no alarms
indicating link or LSP failures are generated, which poses difficulties in locating
the faults. To locate the faults, loopback detection can be enabled for the static
bidirectional co-routed CR-LSP.
Procedure
Step 1 (Optional) In the MPLS view, run lsp-loopback autoclear period period-value
The timeout period is set, after which loopback detection for a static bidirectional
co-routed LSP is automatically disabled.
Step 2 In the specified static bidirectional LSP transit view, run lsp-loopback start.
Loopback detection is enabled for the specified static bidirectional co-routed CR-
LSP.
Loopback detection enables a transit node on the CR-LSP to loop traffic back to
the ingress. A professional monitoring device connected to the ingress monitors
data packets that the ingress sends and receives and checks whether a fault occurs
on the link between the ingress and transit node. Figure 1-10 illustrates the
network on which loopback is enabled to monitor a static bidirectional co-routed
CR-LSP.
Step 3 Perform one of the following operations to check the loopback status on a transit
node:
● Run the display mpls te bidirectional command.
● View the MPLS_LSPM_1.3.6.1.4.1.2011.5.25.121.2.1.75 hwMplsLspLoopBack
alarm that is generated after loopback detection is started.
● View the MPLS_LSPM_1.3.6.1.4.1.2011.5.25.121.2.1.76
hwMplsLspLoopBackClear alarm that is generated after loopback detection is
stopped.
----End
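On a transit node, the loopback steps above might look as follows (the device name, LSP name, and auto-clear period value are hypothetical, and it is assumed that the bidirectional static-cr-lsp transit command enters the static bidirectional LSP transit view, by analogy with the ingress view shown later in this guide):

```text
[~LSRB] mpls
[*LSRB-mpls] lsp-loopback autoclear period 30
[*LSRB-mpls] quit
[*LSRB] bidirectional static-cr-lsp transit Tunnel10
[*LSRB-bi-static-transit-Tunnel10] lsp-loopback start
[*LSRB-bi-static-transit-Tunnel10] commit
```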
Context
In an L3VPN over BGP over TE or IP over BGP over TE scenario, after an MPLS TE
tunnel is configured, the mpls load-balance wtr command is run in the system
view to prevent packet loss during an MPLS ECMP switchback and set a
switchback WTR time.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls load-balance wtr wtr-value
The delay time is configured for MPLS ECMP switchback.
----End
Networking Requirements
On the carrier network shown in Figure 1-11, some devices have low routing and
processing performance. The carrier wants to use an MPLS TE tunnel to transmit
services. To meet this requirement, a static TE tunnel from LSRA to LSRC and a
static TE tunnel from LSRC to LSRA can be established. A static TE tunnel is
established manually, without using a dynamic signaling protocol or IGP routes,
which consumes few device resources and imposes low requirements on device
performance.
Configuration Roadmap
The configuration roadmap is as follows:
NOTE
● The outgoing label of each node is the incoming label of the next node.
● When running the static-cr-lsp ingress { tunnel-interface tunnel interface-number |
tunnel-name } destination destination-address { nexthop next-hop-address | outgoing-
interface interface-type interface-number } * out-label out-label command to configure
the ingress of a CR-LSP, note that tunnel-name must be the same as the tunnel
interface name specified in the interface tunnel interface-number command. The value
of tunnel-name is a case-sensitive string without spaces. For example, the
name of the tunnel created by using the interface Tunnel 20 command is Tunnel20. In
this case, the parameter of the static CR-LSP on the ingress is Tunnel20. This restriction
does not apply to transit nodes or egresses.
Data Preparation
To complete the configuration, you need the following data:
● Inbound interface name, next-hop address, and outgoing label of the transit
node on the static CR-LSP
● Inbound interface name of the egress on the static CR-LSP
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address and a mask to each interface.
For configuration details, see Configuration Files in this section.
Step 2 Configure basic MPLS functions and enable MPLS TE.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls te
[*LSRA-GigabitEthernet1/0/0] commit
[~LSRA-GigabitEthernet1/0/0] quit
Repeat this step for LSRB and LSRC. For configuration details, see Configuration
Files in this section.
Step 3 Configure an MPLS TE tunnel.
# Create an MPLS TE tunnel from LSRA to LSRC.
[~LSRA] interface Tunnel 10
[*LSRA-Tunnel10] ip address unnumbered interface loopback 1
[*LSRA-Tunnel10] tunnel-protocol mpls te
[*LSRA-Tunnel10] destination 3.3.3.3
[*LSRA-Tunnel10] mpls te tunnel-id 100
[*LSRA-Tunnel10] mpls te signal-protocol cr-static
[*LSRA-Tunnel10] commit
[~LSRA-Tunnel10] quit
Run the display mpls lsp or display mpls static-cr-lsp command on each LSR to
view the establishment status of the static CR-LSP.
# Check the configuration on LSRA.
[~LSRA] display mpls static-cr-lsp
TOTAL :2 STATIC CRLSP(S)
UP :2 STATIC CRLSP(S)
When the static CR-LSP is used to establish the MPLS TE tunnel, the packets on
the transit node and the egress are forwarded directly based on the specified
incoming and outgoing labels. Therefore, no FEC information is displayed on LSRB
or LSRC.
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
#
static-cr-lsp ingress tunnel-interface Tunnel10 destination 3.3.3.3 nexthop 10.21.1.2 out-label 20
#
static-cr-lsp egress Tunnel20 incoming-interface GigabitEthernet1/0/0 in-label 130
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.21.1.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol cr-static
mpls te tunnel-id 100
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
#
static-cr-lsp transit Tunnel10 incoming-interface GigabitEthernet1/0/0 in-label 20 nexthop 10.32.1.2
out-label 30
#
static-cr-lsp transit Tunnel20 incoming-interface GigabitEthernet2/0/0 in-label 120 nexthop 10.21.1.1
out-label 130
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.32.1.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.21.1.2 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
return
Context
Static bidirectional co-routed CR-LSPs are used to establish static bidirectional
tunnels for services on an MPLS network.
On the network shown in Figure 1-12, a static bidirectional co-routed CR-LSP is
established between LSRA and LSRC. The link bandwidth between LSRA and LSRC
is required to be 10 Mbit/s.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure a routing protocol.
2. Configure basic MPLS functions and enable MPLS TE.
3. Configure MPLS TE attributes for links.
4. Configure MPLS TE tunnels.
5. Configure the ingress, transit nodes, and the egress for the static bidirectional
co-routed CR-LSP.
6. Bind the tunnel interface configured on LSRC to the static bidirectional co-
routed CR-LSP.
Data Preparation
To complete the configuration, you need the following data:
● Tunnel interface names, tunnel interface IP addresses, destination addresses,
tunnel IDs, and tunnel signaling protocol (CR-Static) on LSRA and LSRC
● Maximum reservable bandwidth and BC bandwidth of each link
● Next-hop address and outgoing label on the ingress
● Inbound interface, next-hop address, and outgoing label on the transit node
● Inbound interface on the egress
Procedure
Step 1 Assign an IP address to each interface and configure a routing protocol.
# Assign an IP address and a mask to each interface and configure a routing
protocol so that all LSRs are interconnected.
# Configure LSRB.
[~LSRB] interface gigabitethernet 1/0/0
[~LSRB-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRB-GigabitEthernet1/0/0] mpls te bandwidth bc0 100000
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 1/0/1
[*LSRB-GigabitEthernet1/0/1] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRB-GigabitEthernet1/0/1] mpls te bandwidth bc0 100000
[*LSRB-GigabitEthernet1/0/1] commit
[~LSRB-GigabitEthernet1/0/1] quit
# Configure LSRC.
[~LSRC] interface gigabitethernet 1/0/0
[*LSRC-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRC-GigabitEthernet1/0/0] mpls te bandwidth bc0 100000
[*LSRC-GigabitEthernet1/0/0] commit
[~LSRC-GigabitEthernet1/0/0] quit
Step 5 Configure the ingress, transit nodes, and the egress for the static bidirectional co-
routed CR-LSP.
# Configure LSRA as the ingress.
[~LSRA] bidirectional static-cr-lsp ingress Tunnel 10
[*LSRA-bi-static-ingress-Tunnel10] forward nexthop 10.21.1.2 out-label 20 bandwidth ct0 10000 pir
10000
[*LSRA-bi-static-ingress-Tunnel10] backward in-label 20
[*LSRA-bi-static-ingress-Tunnel10] commit
[~LSRA-bi-static-ingress-Tunnel10] quit
Step 6 Bind the static bidirectional co-routed CR-LSP to the tunnel interface on LSRC.
[~LSRC] interface Tunnel20
[~LSRC-Tunnel20] mpls te passive-tunnel
[*LSRC-Tunnel20] mpls te binding bidirectional static-cr-lsp egress Tunnel20
[*LSRC-Tunnel20] commit
[~LSRC-Tunnel20] quit
After completing the configuration, run the ping command on LSRA. The static
bidirectional co-routed CR-LSP is reachable.
[~LSRA] ping lsp -a 1.1.1.1 te Tunnel 10
LSP PING FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel10 : 100 data bytes, press CTRL_C to break
Reply from 3.3.3.3: bytes=100 Sequence=1 time = 56 ms
Reply from 3.3.3.3: bytes=100 Sequence=2 time = 53 ms
Reply from 3.3.3.3: bytes=100 Sequence=3 time = 3 ms
Reply from 3.3.3.3: bytes=100 Sequence=4 time = 60 ms
Reply from 3.3.3.3: bytes=100 Sequence=5 time = 5 ms
--- FEC: RSVP IPV4 SESSION QUERY Tunnel10 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 3/35/60 ms
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
#
bidirectional static-cr-lsp ingress Tunnel10
forward nexthop 10.21.1.2 out-label 20 bandwidth ct0 10000 pir 10000
backward in-label 20
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.21.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol cr-static
mpls te tunnel-id 100
mpls te bidirectional
#
ip route-static 2.2.2.2 255.255.255.255 10.21.1.2
ip route-static 3.3.3.3 255.255.255.255 10.21.1.2
#
return
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ip route-static 1.1.1.1 255.255.255.255 10.21.1.1
ip route-static 3.3.3.3 255.255.255.255 10.32.1.2
#
return
Networking Requirements
In Figure 1-13, a forward static CR-LSP is established along the path PE1 -> PE2,
and a reverse static CR-LSP is established along the path PE2 -> PE1. To allow a
traffic switchover to be performed on both CR-LSPs, bind the two static CR-LSPs to
each other to form an associated bidirectional static CR-LSP.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address and its mask to every interface and configure a loopback
interface address as an LSR ID on every node.
2. Configure a forward static CR-LSP and a reverse static CR-LSP.
3. Bind the forward and reverse static CR-LSPs to each other.
Data Preparation
To complete the configuration, you need the following data:
NOTE
In this example, a forward static CR-LSP is established along the path PE1 -> PE2, and a
reverse static CR-LSP is established along the path PE2 -> PE1.
Procedure
Step 1 Assign an IP address and a mask to each interface.
Assign IP addresses and masks to interfaces. For configuration details, see
Configuration Files in this section.
Step 2 Configure a forward static CR-LSP and a reverse static CR-LSP.
For configuration details, see Configuration Files in this section.
Step 3 Bind the forward and reverse static CR-LSPs to each other.
# Configure PE1.
[~PE1] interface Tunnel 10
[~PE1-Tunnel10] mpls te reverse-lsp protocol static lsp-name Tunnel20
[*PE1-Tunnel10] commit
# Configure PE2.
[~PE2] interface Tunnel 20
[~PE2-Tunnel20] mpls te reverse-lsp protocol static lsp-name Tunnel10
[*PE2-Tunnel20] commit
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.252
mpls
mpls te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol cr-static
mpls te reverse-lsp protocol static lsp-name Tunnel20
mpls te tunnel-id 100
#
static-cr-lsp ingress tunnel-interface Tunnel10 destination 3.3.3.3 nexthop 10.1.1.2 out-label 20
#
static-cr-lsp egress Tunnel20 incoming-interface GigabitEthernet1/0/0 in-label 130
#
return
● P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.252
mpls
mpls te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.252
mpls
mpls te
#
interface LoopBack1
Networking Requirements
On the network shown in Figure 1-14, LSRA, LSRB, LSRC, and LSRD are level-2
routers that run IS-IS.
RSVP-TE is used to establish a TE tunnel with 20 Mbit/s bandwidth between LSRA
and LSRD. The maximum reservable bandwidth for every link along the TE tunnel
is 100 Mbit/s and the BC0 bandwidth is 100 Mbit/s.
Configuration Notes
None
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address and its mask to every interface.
2. Configure IS-IS to advertise routes.
3. Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
4. Enable IS-IS TE.
5. Configure the maximum reservable bandwidth and BC0 bandwidth for each link along the tunnel.
6. Create a tunnel interface on the ingress and specify the tunnel bandwidth.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address and its mask to every interface.
Assign an IP address and a mask to each interface according to Figure 1-14. The
configuration details are not provided.
Step 2 Configure IS-IS.
# Configure LSRA.
[~LSRA] isis 1
[*LSRA-isis-1] network-entity 00.0005.0000.0000.0001.00
[*LSRA-isis-1] is-level level-2
[*LSRA-isis-1] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] isis enable 1
[*LSRA-GigabitEthernet1/0/0] quit
[*LSRA] interface loopback 1
[*LSRA-LoopBack1] isis enable 1
[*LSRA-LoopBack1] commit
[~LSRA-LoopBack1] quit
# Configure LSRB.
[~LSRB] isis 1
[*LSRB-isis-1] network-entity 00.0005.0000.0000.0002.00
[*LSRB-isis-1] is-level level-2
[*LSRB-isis-1] quit
[*LSRB] interface gigabitethernet 1/0/0
[*LSRB-GigabitEthernet1/0/0] isis enable 1
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 2/0/0
[*LSRB-GigabitEthernet2/0/0] isis enable 1
[*LSRB-GigabitEthernet2/0/0] quit
[*LSRB] interface loopback 1
[*LSRB-LoopBack1] isis enable 1
[*LSRB-LoopBack1] commit
[~LSRB-LoopBack1] quit
# Configure LSRC.
[~LSRC] isis 1
[*LSRC-isis-1] network-entity 00.0005.0000.0000.0003.00
[*LSRC-isis-1] is-level level-2
[*LSRC-isis-1] quit
[*LSRC] interface gigabitethernet 1/0/0
[*LSRC-GigabitEthernet1/0/0] isis enable 1
[*LSRC-GigabitEthernet1/0/0] quit
[*LSRC] interface gigabitethernet 2/0/0
[*LSRC-GigabitEthernet2/0/0] isis enable 1
[*LSRC-GigabitEthernet2/0/0] quit
[*LSRC] interface loopback 1
[*LSRC-LoopBack1] isis enable 1
[*LSRC-LoopBack1] commit
[~LSRC-LoopBack1] quit
# Configure LSRD.
[~LSRD] isis 1
[*LSRD-isis-1] network-entity 00.0005.0000.0000.0004.00
[*LSRD-isis-1] is-level level-2
[*LSRD-isis-1] quit
[*LSRD] interface gigabitethernet 1/0/0
[*LSRD-GigabitEthernet1/0/0] isis enable 1
[*LSRD-GigabitEthernet1/0/0] quit
[*LSRD] interface loopback 1
[*LSRD-LoopBack1] isis enable 1
[*LSRD-LoopBack1] commit
[~LSRD-LoopBack1] quit
Step 3 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Enable MPLS, MPLS TE, and RSVP-TE globally on each node and on all interfaces
along the tunnel, and enable CSPF on the ingress.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.9
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls te
[*LSRA-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRA-GigabitEthernet1/0/0] commit
[~LSRA-GigabitEthernet1/0/0] quit
# Configure LSRB.
[~LSRB] mpls lsr-id 2.2.2.9
[*LSRB] mpls
[*LSRB-mpls] mpls te
[*LSRB-mpls] mpls rsvp-te
[*LSRB-mpls] quit
[*LSRB] interface gigabitethernet 1/0/0
[*LSRB-GigabitEthernet1/0/0] mpls
[*LSRB-GigabitEthernet1/0/0] mpls te
[*LSRB-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 2/0/0
[*LSRB-GigabitEthernet2/0/0] mpls
[*LSRB-GigabitEthernet2/0/0] mpls te
[*LSRB-GigabitEthernet2/0/0] mpls rsvp-te
[*LSRB-GigabitEthernet2/0/0] commit
[~LSRB-GigabitEthernet2/0/0] quit
# Configure LSRC.
# Configure LSRD.
[~LSRD] mpls lsr-id 4.4.4.9
[*LSRD] mpls
[*LSRD-mpls] mpls te
[*LSRD-mpls] mpls rsvp-te
[*LSRD-mpls] quit
[*LSRD] interface gigabitethernet 1/0/0
[*LSRD-GigabitEthernet1/0/0] mpls
[*LSRD-GigabitEthernet1/0/0] mpls te
[*LSRD-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRD-GigabitEthernet1/0/0] commit
[~LSRD-GigabitEthernet1/0/0] quit
# Configure LSRB.
[~LSRB] isis 1
[~LSRB-isis-1] cost-style wide
[*LSRB-isis-1] traffic-eng level-2
[*LSRB-isis-1] commit
[~LSRB-isis-1] quit
# Configure LSRC.
[~LSRC] isis 1
[~LSRC-isis-1] cost-style wide
[*LSRC-isis-1] traffic-eng level-2
[*LSRC-isis-1] commit
[~LSRC-isis-1] quit
# Configure LSRD.
[~LSRD] isis 1
[~LSRD-isis-1] cost-style wide
[*LSRD-isis-1] traffic-eng level-2
[*LSRD-isis-1] commit
[~LSRD-isis-1] quit
# Configure LSRA.
[~LSRA] interface gigabitethernet 1/0/0
[~LSRA-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRA-GigabitEthernet1/0/0] mpls te bandwidth bc0 100000
[*LSRA-GigabitEthernet1/0/0] commit
[~LSRA-GigabitEthernet1/0/0] quit
# Configure LSRB.
[~LSRB] interface gigabitethernet 2/0/0
[~LSRB-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRB-GigabitEthernet2/0/0] mpls te bandwidth bc0 100000
[*LSRB-GigabitEthernet2/0/0] commit
[~LSRB-GigabitEthernet2/0/0] quit
# Configure LSRC.
[~LSRC] interface gigabitethernet 1/0/0
[~LSRC-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRC-GigabitEthernet1/0/0] mpls te bandwidth bc0 100000
[*LSRC-GigabitEthernet1/0/0] commit
[~LSRC-GigabitEthernet1/0/0] quit
Run the display mpls te cspf tedb all command on LSRA. Link information in the
TEDB is displayed.
[~LSRA] display mpls te cspf tedb all
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
interface Tunnel1
Networking Requirements
On the network shown in Figure 1-15, OSPF runs on DeviceB, DeviceC, and
DeviceD, and a GRE tunnel is established between DeviceB and DeviceD. A TE
tunnel with 10 Mbit/s bandwidth is required between DeviceA and DeviceE. The
maximum reservable link bandwidth of the tunnel is 10 Mbit/s, as is the BC0
bandwidth.
NOTE
Interface1 and interface2 in this example represent GE 1/0/0 and GE 2/0/0, respectively.
Precautions
In this example, an RSVP-TE over GRE tunnel is configured, and the GRE tunnel
interfaces cannot borrow the IP addresses of other interfaces. During
configuration, you can enable an IGP on GRE tunnel interfaces and configure
MPLS link attributes.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface, including the loopback interface whose
address is to be used as an LSR ID on each involved node.
2. Configure OSPF on DeviceB, DeviceC, and DeviceD, and establish a GRE tunnel
between DeviceB and DeviceD.
3. Enable IS-IS globally, configure a network entity title (NET), specify the cost
type, and enable IS-IS TE. Enable IS-IS on each interface (including loopback
interfaces and GRE tunnel interfaces).
4. Configure MPLS LSR-IDs, and enable MPLS, MPLS TE, MPLS RSVP-TE, and
MPLS CSPF globally.
5. Enable MPLS, MPLS TE, and MPLS RSVP-TE on each interface.
6. Configure the maximum reservable link bandwidth and BC bandwidth on the
outbound interfaces of each involved tunnel.
7. Create a tunnel interface on the ingress and configure an IP address, tunnel
protocol, destination IP address, and tunnel bandwidth.
Data Preparation
To complete the configuration, you need the following data:
● IS-IS area ID, originating system ID, and IS-IS level of each node
● Maximum bandwidth and maximum reservable bandwidth for each link along
the tunnel
● Tunnel interface number, IP address, destination IP address, tunnel ID, and
tunnel bandwidth
Procedure
Step 1 Configure an IP address for each interface.
Assign an IP address and a mask to each interface according to Figure 1-15. The
configuration details are not provided.
Step 2 Establish a GRE tunnel between DeviceB and DeviceD.
# Configure DeviceB.
[~DeviceB] ospf 1
[*DeviceB-ospf-1] area 0.0.0.0
[*DeviceB-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[*DeviceB-ospf-1-area-0.0.0.0] network 172.16.1.0 0.0.0.255
[*DeviceB-ospf-1-area-0.0.0.0] quit
[*DeviceB-ospf-1] quit
[*DeviceB] interface LoopBack1
[*DeviceB-LoopBack1] binding tunnel gre
[*DeviceB-LoopBack1] quit
[*DeviceB] interface Tunnel10
[*DeviceB-Tunnel10] ip address 10.2.1.1 255.255.255.252
[*DeviceB-Tunnel10] tunnel-protocol gre
[*DeviceB-Tunnel10] source 2.2.2.9
[*DeviceB-Tunnel10] destination 3.3.3.9
[*DeviceB-Tunnel10] quit
[*DeviceB] commit
# Configure DeviceC.
[~DeviceC] ospf 1
[*DeviceC-ospf-1] area 0.0.0.0
[*DeviceC-ospf-1-area-0.0.0.0] network 172.16.1.0 0.0.0.255
[*DeviceC-ospf-1-area-0.0.0.0] network 172.16.2.0 0.0.0.255
[*DeviceC-ospf-1-area-0.0.0.0] quit
[*DeviceC-ospf-1] quit
[*DeviceC] commit
# Configure DeviceD.
[~DeviceD] ospf 1
[*DeviceD-ospf-1] area 0.0.0.0
[*DeviceD-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[*DeviceD-ospf-1-area-0.0.0.0] network 172.16.2.0 0.0.0.255
[*DeviceD-ospf-1-area-0.0.0.0] quit
[*DeviceD-ospf-1] quit
[*DeviceD] interface LoopBack1
[*DeviceD-LoopBack1] binding tunnel gre
[*DeviceD-LoopBack1] quit
[*DeviceD] interface Tunnel10
[*DeviceD-Tunnel10] ip address 10.2.1.2 255.255.255.252
[*DeviceD-Tunnel10] tunnel-protocol gre
[*DeviceD-Tunnel10] source 3.3.3.9
[*DeviceD-Tunnel10] destination 2.2.2.9
[*DeviceD-Tunnel10] quit
[*DeviceD] commit
After completing the configuration, run the display interface tunnel command.
The command output shows that the tunnel interface is in the Up state. The
following example uses the command output on DeviceB.
[~DeviceB] display interface tunnel 10
Tunnel10 current state : UP (ifindex: 30)
Line protocol current state : UP
Last line protocol up time : 2021-05-12 03:38:08
Description:
Route Port,The Maximum Transmit Unit is 1500
Internet Address is 10.2.1.1/30
Run the display tunnel-info all command to check information about all tunnels.
The following example uses the command output on DeviceB.
[~DeviceB] display tunnel-info all
Tunnel ID Type Destination Status
----------------------------------------------------------------------------------------
0x00000000050000001e gre 3.3.3.9 UP
Step 3 Configure IS-IS. Note that IS-IS must also be enabled on the GRE tunnel interfaces.
# Configure DeviceA.
[~DeviceA] isis 1
[*DeviceA-isis-1] network-entity 00.0005.0000.0000.0001.00
[*DeviceA-isis-1] is-level level-2
[*DeviceA-isis-1] quit
[*DeviceA] interface gigabitethernet 1/0/0
[*DeviceA-GigabitEthernet1/0/0] isis enable 1
[*DeviceA-GigabitEthernet1/0/0] quit
[*DeviceA] interface loopback 1
[*DeviceA-LoopBack1] isis enable 1
[*DeviceA-LoopBack1] commit
[~DeviceA-LoopBack1] quit
# Configure DeviceB.
[~DeviceB] isis 1
[*DeviceB-isis-1] network-entity 00.0005.0000.0000.0002.00
[*DeviceB-isis-1] is-level level-2
[*DeviceB-isis-1] quit
[*DeviceB] interface Tunnel10
[*DeviceB-Tunnel10] isis enable 1
[*DeviceB-Tunnel10] quit
[*DeviceB] interface gigabitethernet 2/0/0
[*DeviceB-GigabitEthernet2/0/0] isis enable 1
[*DeviceB-GigabitEthernet2/0/0] quit
[*DeviceB] interface loopback 1
[*DeviceB-LoopBack1] isis enable 1
[*DeviceB-LoopBack1] commit
[~DeviceB-LoopBack1] quit
# Configure DeviceD.
[~DeviceD] isis 1
[*DeviceD-isis-1] network-entity 00.0005.0000.0000.0003.00
[*DeviceD-isis-1] is-level level-2
[*DeviceD-isis-1] quit
[*DeviceD] interface Tunnel10
[*DeviceD-Tunnel10] isis enable 1
[*DeviceD-Tunnel10] quit
[*DeviceD] interface gigabitethernet 2/0/0
[*DeviceD-GigabitEthernet2/0/0] isis enable 1
[*DeviceD-GigabitEthernet2/0/0] quit
[*DeviceD] interface loopback 1
[*DeviceD-LoopBack1] isis enable 1
[*DeviceD-LoopBack1] commit
[~DeviceD-LoopBack1] quit
# Configure DeviceE.
[~DeviceE] isis 1
[*DeviceE-isis-1] network-entity 00.0005.0000.0000.0004.00
[*DeviceE-isis-1] is-level level-2
[*DeviceE-isis-1] quit
[*DeviceE] interface gigabitethernet 1/0/0
[*DeviceE-GigabitEthernet1/0/0] isis enable 1
[*DeviceE-GigabitEthernet1/0/0] quit
[*DeviceE] interface loopback 1
[*DeviceE-LoopBack1] isis enable 1
[*DeviceE-LoopBack1] commit
[~DeviceE-LoopBack1] quit
Step 4 Configure basic MPLS functions, and enable MPLS TE, RSVP-TE, and CSPF.
Enable MPLS, MPLS TE, and RSVP-TE globally on each node and on all interfaces
along the tunnel, and enable CSPF on the ingress. Note that you also need to
perform the related configurations on the GRE tunnel interfaces.
# Configure DeviceA.
[~DeviceA] mpls lsr-id 1.1.1.9
[*DeviceA] mpls
[*DeviceA-mpls] mpls te
[*DeviceA-mpls] mpls rsvp-te
[*DeviceA-mpls] mpls te cspf
[*DeviceA-mpls] quit
[*DeviceA] interface gigabitethernet 1/0/0
[*DeviceA-GigabitEthernet1/0/0] mpls
[*DeviceA-GigabitEthernet1/0/0] mpls te
# Configure DeviceB.
[~DeviceB] mpls lsr-id 2.2.2.9
[*DeviceB] mpls
[*DeviceB-mpls] mpls te
[*DeviceB-mpls] mpls rsvp-te
[*DeviceB-mpls] quit
[*DeviceB] interface Tunnel10
[*DeviceB-Tunnel10] mpls
[*DeviceB-Tunnel10] mpls te
[*DeviceB-Tunnel10] mpls rsvp-te
[*DeviceB-Tunnel10] quit
[*DeviceB] interface gigabitethernet 2/0/0
[*DeviceB-GigabitEthernet2/0/0] mpls
[*DeviceB-GigabitEthernet2/0/0] mpls te
[*DeviceB-GigabitEthernet2/0/0] mpls rsvp-te
[*DeviceB-GigabitEthernet2/0/0] commit
[~DeviceB-GigabitEthernet2/0/0] quit
# Configure DeviceD.
[~DeviceD] mpls lsr-id 3.3.3.9
[*DeviceD] mpls
[*DeviceD-mpls] mpls te
[*DeviceD-mpls] mpls rsvp-te
[*DeviceD-mpls] quit
[*DeviceD] interface Tunnel10
[*DeviceD-Tunnel10] mpls
[*DeviceD-Tunnel10] mpls te
[*DeviceD-Tunnel10] mpls rsvp-te
[*DeviceD-Tunnel10] quit
[*DeviceD] interface gigabitethernet 2/0/0
[*DeviceD-GigabitEthernet2/0/0] mpls
[*DeviceD-GigabitEthernet2/0/0] mpls te
[*DeviceD-GigabitEthernet2/0/0] mpls rsvp-te
[*DeviceD-GigabitEthernet2/0/0] commit
[~DeviceD-GigabitEthernet2/0/0] quit
# Configure DeviceE.
[~DeviceE] mpls lsr-id 4.4.4.9
[*DeviceE] mpls
[*DeviceE-mpls] mpls te
[*DeviceE-mpls] mpls rsvp-te
[*DeviceE-mpls] mpls te cspf
[*DeviceE-mpls] quit
[*DeviceE] interface gigabitethernet 1/0/0
[*DeviceE-GigabitEthernet1/0/0] mpls
[*DeviceE-GigabitEthernet1/0/0] mpls te
[*DeviceE-GigabitEthernet1/0/0] mpls rsvp-te
[*DeviceE-GigabitEthernet1/0/0] commit
[~DeviceE-GigabitEthernet1/0/0] quit
# Configure DeviceB.
[~DeviceB] isis 1
[~DeviceB-isis-1] cost-style wide
[*DeviceB-isis-1] traffic-eng level-2
[*DeviceB-isis-1] commit
[~DeviceB-isis-1] quit
# Configure DeviceD.
[~DeviceD] isis 1
[~DeviceD-isis-1] cost-style wide
[*DeviceD-isis-1] traffic-eng level-2
[*DeviceD-isis-1] commit
[~DeviceD-isis-1] quit
# Configure DeviceE.
[~DeviceE] isis 1
[~DeviceE-isis-1] cost-style wide
[*DeviceE-isis-1] traffic-eng level-2
[*DeviceE-isis-1] commit
[~DeviceE-isis-1] quit
# Configure DeviceB.
[~DeviceB] interface Tunnel10
[~DeviceB-Tunnel10] bandwidth 100000
[*DeviceB-Tunnel10] mpls te bandwidth max-reservable-bandwidth 10000
[*DeviceB-Tunnel10] mpls te bandwidth bc0 10000
[*DeviceB-Tunnel10] quit
[*DeviceB] interface gigabitethernet 2/0/0
[*DeviceB-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 10000
[*DeviceB-GigabitEthernet2/0/0] mpls te bandwidth bc0 10000
[*DeviceB-GigabitEthernet2/0/0] commit
[~DeviceB-GigabitEthernet2/0/0] quit
# Configure DeviceD.
[~DeviceD] interface Tunnel10
[~DeviceD-Tunnel10] bandwidth 100000
[*DeviceD-Tunnel10] mpls te bandwidth max-reservable-bandwidth 10000
[*DeviceD-Tunnel10] mpls te bandwidth bc0 10000
[*DeviceD-Tunnel10] quit
[~DeviceD] interface gigabitethernet 2/0/0
[~DeviceD-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 10000
[*DeviceD-GigabitEthernet2/0/0] mpls te bandwidth bc0 10000
[*DeviceD-GigabitEthernet2/0/0] commit
[~DeviceD-GigabitEthernet2/0/0] quit
# Configure DeviceE.
[~DeviceE] interface gigabitethernet 1/0/0
[~DeviceE-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 10000
[*DeviceE-GigabitEthernet1/0/0] mpls te bandwidth bc0 10000
[*DeviceE-GigabitEthernet1/0/0] commit
[~DeviceE-GigabitEthernet1/0/0] quit
# Configure DeviceA.
[~DeviceA] interface tunnel1
[*DeviceA-Tunnel1] ip address unnumbered interface loopback 1
[*DeviceA-Tunnel1] tunnel-protocol mpls te
[*DeviceA-Tunnel1] destination 4.4.4.9
[*DeviceA-Tunnel1] mpls te tunnel-id 1
[*DeviceA-Tunnel1] mpls te bandwidth ct0 10000
[*DeviceA-Tunnel1] commit
[~DeviceA-Tunnel1] quit
# Configure DeviceE.
[~DeviceE] interface tunnel1
[*DeviceE-Tunnel1] ip address unnumbered interface loopback 1
[*DeviceE-Tunnel1] tunnel-protocol mpls te
[*DeviceE-Tunnel1] destination 1.1.1.9
[*DeviceE-Tunnel1] mpls te tunnel-id 1
[*DeviceE-Tunnel1] mpls te bandwidth ct0 10000
[*DeviceE-Tunnel1] commit
[~DeviceE-Tunnel1] quit
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0001.00
traffic-eng level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.9
mpls te bandwidth ct0 10000
mpls te tunnel-id 1
#
return
● DeviceB configuration file
#
sysname DeviceB
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
binding tunnel gre
#
interface Tunnel10
ip address 10.2.1.1 255.255.255.252
bandwidth 100000
tunnel-protocol gre
source 2.2.2.9
destination 3.3.3.9
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.16.1.0 0.0.0.255
#
return
● DeviceC configuration file
#
sysname DeviceC
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.16.2.1 255.255.255.0
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 172.16.1.0 0.0.0.255
network 172.16.2.0 0.0.0.255
#
return
● DeviceD configuration file
#
sysname DeviceD
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0003.00
traffic-eng level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.2.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
binding tunnel gre
#
interface Tunnel10
ip address 10.2.1.2 255.255.255.252
bandwidth 100000
tunnel-protocol gre
source 3.3.3.9
destination 2.2.2.9
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.16.2.0 0.0.0.255
#
return
● DeviceE configuration file
#
sysname DeviceE
#
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.9
mpls te bandwidth ct0 10000
mpls te tunnel-id 1
#
return
Networking Requirements
On the network shown in Figure 1-16, GE 1/0/1, GE 1/0/2, and GE 1/0/3 on LSRA
and LSRB join Eth-Trunk1. An MPLS TE tunnel from LSRA to LSRC is established
using RSVP-TE.
The handshake function, RSVP key authentication, and the message window are
configured on LSRA and LSRB. The handshake function allows LSRA and LSRB to
negotiate and perform RSVP key authentication. RSVP key authentication prevents
forged RSVP packets from reserving network resources. The message window
function protects against mis-sequenced RSVP messages.
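RSVP key authentication of this kind is typically based on a keyed digest carried in each message (RFC 2747 specifies HMAC-MD5 for the RSVP INTEGRITY object). The following Python sketch, using an illustrative key and message, shows the verification idea; it is not the device's actual implementation.

```python
import hmac
import hashlib

def sign(key: bytes, message: bytes) -> bytes:
    # Keyed digest over the message, analogous to the digest RSVP carries
    # in its INTEGRITY object (HMAC-MD5 per RFC 2747 assumed here).
    return hmac.new(key, message, hashlib.md5).digest()

def verify(key: bytes, message: bytes, digest: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(key, message), digest)

key = b"YsHsjx_202206"                  # shared key configured on both LSRs
msg = b"RSVP PATH message bytes"        # illustrative message contents
tag = sign(key, msg)
assert verify(key, msg, tag)            # genuine message accepted
assert not verify(key, b"forged", tag)  # forged message rejected
```

A forged message cannot produce a matching digest without the shared key, which is what prevents unauthorized resource reservations.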
Configuration Notes
None.
Configuration Roadmap
The configuration roadmap is as follows:
NOTE
A window size of 32 is recommended. If the window size is too small, received RSVP
messages that fall outside the window are discarded, which can terminate RSVP
neighbor relationships.
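The window's effect can be modeled with a small sketch (illustrative only; `ReplayWindow` and its behavior are simplifications of the RSVP authentication sequence-number check):

```python
class ReplayWindow:
    """Sliding acceptance window for RSVP message sequence numbers."""
    def __init__(self, size: int):
        self.size = size
        self.highest = 0      # highest sequence number accepted so far
        self.seen = set()     # sequence numbers accepted inside the window

    def accept(self, seq: int) -> bool:
        if seq > self.highest:
            self.highest = seq
            self.seen.add(seq)
            # Drop state for sequence numbers that fell out of the window.
            self.seen = {s for s in self.seen if s > self.highest - self.size}
            return True
        if seq <= self.highest - self.size or seq in self.seen:
            return False      # outside the window, or a replay: discard
        self.seen.add(seq)
        return True

w1 = ReplayWindow(size=1)
assert w1.accept(2) and not w1.accept(1)   # window 1: late message lost
w32 = ReplayWindow(size=32)
assert w32.accept(2) and w32.accept(1)     # window 32 tolerates reordering
```

With a window of 1, any mis-ordered message is discarded, which is why an overly small window can break the neighbor relationship.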
Data Preparation
To complete the configuration, you need the following data:
● OSPF process ID and area ID for every LSR
● Password and key for RSVP authentication
● RSVP message window size
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address and its mask to every interface as shown in Figure 1-16. For
configuration details, see Configuration Files in this section.
Step 2 Configure OSPF.
Configure OSPF to advertise every network segment route and host route. For
configuration details, see Configuration Files in this section.
After completing the configurations, run the display ip routing-table command
on every node. The command output shows that all nodes have learned routes
from each other.
Step 3 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] quit
[*LSRA] interface eth-trunk 1
[*LSRA-Eth-Trunk1] mpls
[*LSRA-Eth-Trunk1] mpls te
[*LSRA-Eth-Trunk1] mpls rsvp-te
[*LSRA-Eth-Trunk1] commit
[~LSRA-Eth-Trunk1] quit
NOTE
Repeat this step for LSRB and LSRC. For configuration details, see Configuration Files in
this section.
# Configure LSRB.
[~LSRB] ospf 1
[~LSRB-ospf-1] opaque-capability enable
[*LSRB-ospf-1] area 0
[*LSRB-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRB-ospf-1-area-0.0.0.0] commit
[~LSRB-ospf-1-area-0.0.0.0] quit
# Configure LSRC.
[~LSRC] ospf 1
[~LSRC-ospf-1] opaque-capability enable
[*LSRC-ospf-1] area 0
[*LSRC-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRC-ospf-1-area-0.0.0.0] commit
[~LSRC-ospf-1-area-0.0.0.0] quit
After completing the configuration, run the display interface tunnel command
on LSRA. The tunnel interface is Up.
[~LSRA] display interface tunnel1
Tunnel1 current state : UP (ifindex: 18)
Line protocol current state : UP
Last line protocol up time : 2012-02-23 10:00:00
Description:
Route Port,The Maximum Transmit Unit is 1500, Current BW: 0Mbps
Internet Address is unnumbered, using address of LoopBack1(1.1.1.1/32)
Encapsulation is TUNNEL, loopback not set
Tunnel destination 3.3.3.3
Tunnel up/down statistics 1
Tunnel protocol/transport MPLS/MPLS, ILM is available,
primary tunnel id is 0x161, secondary tunnel id is 0x0
Current system time: 2012-02-24 03:33:48
300 seconds output rate 0 bits/sec, 0 packets/sec
0 seconds output rate 0 bits/sec, 0 packets/sec
126 packets output, 34204 bytes
0 output error
18 output drop
Last 300 seconds input utility rate: 0.00%
Last 300 seconds output utility rate: 0.00%
# Configure LSRB.
[~LSRB] interface eth-trunk 1
[~LSRB-Eth-Trunk1] mpls rsvp-te authentication cipher YsHsjx_202206
[*LSRB-Eth-Trunk1] mpls rsvp-te authentication handshake
[*LSRB-Eth-Trunk1] mpls rsvp-te authentication window-size 32
[*LSRB-Eth-Trunk1] commit
Run the reset mpls rsvp-te and display interface tunnel commands in sequence
on LSRA. The tunnel interface is Up.
Run the display mpls rsvp-te interface command on LSRA or LSRB. RSVP
authentication information is displayed.
[~LSRA] display mpls rsvp-te interface eth-trunk 1
Interface: Eth-Trunk1
Interface Address: 10.1.1.1
Interface state: UP Interface Index: 0x15
Total-BW: 0 Used-BW: 0
Hello configured: NO Num of Neighbors: 1
SRefresh feature: DISABLE SRefresh Interval: 30 sec
Mpls Mtu: 1500 Retransmit Interval: 500 msec
Increment Value: 1
Authentication: ENABLE
Challenge: ENABLE WindowSize: 32
Next Seq # to be sent: 486866945 12 Key ID: 0x0101051d0101
Bfd Enabled: -- Bfd Min-Tx: --
Bfd Min-Rx: -- Bfd Detect-Multi: --
RSVP instance name: RSVP0
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
interface Eth-Trunk1
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
mpls rsvp-te authentication cipher O'W3[_\M"`!./a!1$H@GYA!!
mpls rsvp-te authentication handshake
mpls rsvp-te authentication window-size 32
#
interface GigabitEthernet1/0/1
undo shutdown
eth-trunk 1
#
interface GigabitEthernet1/0/2
undo shutdown
eth-trunk 1
#
interface GigabitEthernet1/0/3
undo shutdown
eth-trunk 1
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 1
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
#
interface Eth-Trunk1
ip address 10.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
mpls rsvp-te authentication cipher O'W3[_\M"`!./a!1$H@GYA!!
mpls rsvp-te authentication handshake
mpls rsvp-te authentication window-size 32
#
interface GigabitEthernet1/0/1
undo shutdown
eth-trunk 1
#
interface GigabitEthernet1/0/2
undo shutdown
eth-trunk 1
#
interface GigabitEthernet1/0/3
undo shutdown
eth-trunk 1
#
interface GigabitEthernet1/0/4
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 3.3.3.3 0.0.0.0
network 10.2.1.0 0.0.0.255
#
return
Networking Requirements
In Figure 1-17, a customer expects to establish MPLS TE tunnels to form a full-
mesh network, with Auto FRR configured for each tunnel. Establishing the
tunnels one by one is laborious and complex. In this case, the IP-prefix tunnel
function can be configured to establish the MPLS TE tunnels automatically in a
batch.
Figure 1-17 interface addressing: LSRA GE 1/0/0 10.1.1.1/24 and GE 1/0/1
10.1.2.1/24; LSRB GE 1/0/0 10.1.1.2/24 and GE 1/0/1 10.1.3.1/24; LSRC GE 1/0/1
10.1.2.2/24 and GE 1/0/2 10.1.3.2/24.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure IS-IS and IS-IS TE.
2. Enable MPLS TE and TE Auto FRR globally on each device.
3. Configure an IP prefix list.
4. Configure a P2P TE tunnel template.
5. Configure the automatic primary tunnel function.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface on each node: values shown in Figure 1-17
● LSR ID of each node: loopback addresses shown in Figure 1-17
● IS-IS process number (1), IS-IS level (level-2), and network entity name of
each node:
– LSRA: 10.0000.0000.0001.00
– LSRB: 10.0000.0000.0002.00
– LSRC: 10.0000.0000.0003.00
● IP prefix name on each node: te-tunnel
● P2P TE tunnel template name on each node: te-tunnel
Procedure
Step 1 Assign an IP address to each interface. For configuration details, see
Configuration Files in this section.
Step 2 Configure IS-IS and IS-IS TE. For configuration details, see Configuration Files in
this section.
Step 3 Enable MPLS TE and Auto FRR globally on each device. For configuration details,
see Configuration Files in this section.
# Configure an IP prefix list on LSRA.
[~LSRA] ip ip-prefix te-tunnel permit 2.2.2.9 32
[*LSRA] ip ip-prefix te-tunnel permit 3.3.3.9 32
[*LSRA] commit
The configurations on LSRB and LSRC are similar to the configuration on LSRA. For
configuration details, see Configuration Files in this section.
# Configure a P2P TE tunnel template on LSRA.
[~LSRA] mpls te p2p-template te-tunnel
[*LSRA-te-p2p-template-te-tunnel] bandwidth ct0 1000
[*LSRA-te-p2p-template-te-tunnel] fast-reroute
[*LSRA-te-p2p-template-te-tunnel] commit
[~LSRA-te-p2p-template-te-tunnel] quit
The configurations on LSRB and LSRC are similar to the configuration on LSRA. For
configuration details, see Configuration Files in this section.
# Configure the automatic primary tunnel function on LSRA.
[~LSRA] mpls te auto-primary-tunnel ip-prefix te-tunnel p2p-template te-tunnel
[*LSRA] commit
The configurations on LSRB and LSRC are similar to the configuration on LSRA. For
configuration details, see Configuration Files in this section.
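Conceptually, the automatic primary tunnel function creates one tunnel per remote LSR ID permitted by the IP prefix list. A minimal Python sketch of that matching step follows; the function name is illustrative, and only permit /32 entries are modeled:

```python
import ipaddress

def auto_primary_tunnels(prefix_list, lsr_ids):
    """Return the destinations to which automatic primary tunnels would be
    created: the LSR IDs matched by a permit entry of the IP prefix list."""
    permits = [ipaddress.ip_network(p) for p in prefix_list]
    return [i for i in lsr_ids
            if any(ipaddress.ip_address(i) in n for n in permits)]

# The te-tunnel prefix list configured on LSRA in this example
plist = ["2.2.2.9/32", "3.3.3.9/32"]
remote = ["2.2.2.9", "3.3.3.9", "4.4.4.9"]
assert auto_primary_tunnels(plist, remote) == ["2.2.2.9", "3.3.3.9"]
```

This is why the verification output shows AutoTunnel32769 and AutoTunnel32770 toward 2.2.2.9 and 3.3.3.9 only.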
# After completing the preceding configuration, run the display mpls te tunnel
command on LSRA. The command output shows that MPLS TE tunnels have been
established.
[~LSRA] display mpls te tunnel
* means the LSP is detour LSP
-------------------------------------------------------------------------------
Ingress LsrId Destination LSPID In/OutLabel R Tunnel-name
-------------------------------------------------------------------------------
1.1.1.9 2.2.2.9 16 -/3 I AutoTunnel32769
2.2.2.9 1.1.1.9 10 3/- E AutoTunnel32769
1.1.1.9 3.3.3.9 17 -/3 I AutoTunnel32770
3.3.3.9 1.1.1.9 9 3/- E AutoTunnel32770
1.1.1.9 2.2.2.9 13 -/48060 I AutoBypassTunnel_1.1.1.9_2.2.2.9_32771
2.2.2.9 3.3.3.9 8 48061/3 T AutoBypassTunnel_2.2.2.9_3.3.3.9_32771
3.3.3.9 2.2.2.9 7 48060/3 T AutoBypassTunnel_3.3.3.9_2.2.2.9_32771
1.1.1.9 3.3.3.9 15 -/48060 I AutoBypassTunnel_1.1.1.9_3.3.3.9_32772
2.2.2.9 1.1.1.9 9 3/- E AutoBypassTunnel_2.2.2.9_1.1.1.9_32772
3.3.3.9 1.1.1.9 8 3/- E AutoBypassTunnel_3.3.3.9_1.1.1.9_32772
-------------------------------------------------------------------------------
R: Role, I: Ingress, T: Transit, E: Egress
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
mpls te auto-frr
mpls rsvp-te
mpls te cspf
#
mpls te p2p-template te-tunnel
record-route label
bandwidth ct0 1000
fast-reroute
#
mpls te auto-primary-tunnel ip-prefix te-tunnel p2p-template te-tunnel
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0001.00
traffic-eng level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface LoopBack0
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
ip ip-prefix te-tunnel index 10 permit 2.2.2.9 32
ip ip-prefix te-tunnel index 20 permit 3.3.3.9 32
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.9
#
mpls
mpls te
mpls te auto-frr
mpls rsvp-te
mpls te cspf
#
mpls te p2p-template te-tunnel
record-route label
bandwidth ct0 1000
fast-reroute
#
mpls te auto-primary-tunnel ip-prefix te-tunnel p2p-template te-tunnel
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0002.00
traffic-eng level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.3.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface LoopBack0
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
ip ip-prefix te-tunnel index 10 permit 1.1.1.9 32
ip ip-prefix te-tunnel index 20 permit 3.3.3.9 32
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
mpls te auto-frr
mpls rsvp-te
mpls te cspf
#
mpls te p2p-template te-tunnel
record-route label
bandwidth ct0 1000
fast-reroute
#
mpls te auto-primary-tunnel ip-prefix te-tunnel p2p-template te-tunnel
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0003.00
traffic-eng level-2
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.1.3.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface LoopBack0
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
ip ip-prefix te-tunnel index 10 permit 1.1.1.9 32
ip ip-prefix te-tunnel index 20 permit 2.2.2.9 32
#
return
Networking Requirements
On the network shown in Figure 1-18, the bandwidth of the link between LSRA
and LSRB is 50 Mbit/s. The maximum reservable bandwidth of other links is 100
Mbit/s, and BC0 bandwidth is 100 Mbit/s.
Two tunnels named Tunnel1 and Tunnel2 from LSRA to LSRC are established on
LSRA. Both tunnels require 40 Mbit/s of bandwidth. The combined bandwidth of
these two tunnels is 80 Mbit/s, higher than the bandwidth of 50 Mbit/s provided
by the shared link between LSRA and LSRB. In addition, Tunnel2 has a higher
priority than Tunnel1, and preemption is enabled.
In this example, administrative group attributes, affinities, and masks for links are
used to allow Tunnel1 and Tunnel2 on LSRA to use separate links between LSRB
and LSRC.
Figure 1-18 Networking diagram for an MPLS TE tunnel with the affinity property
Configuration Notes
None.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an RSVP-TE tunnel. See "Configuration Roadmap" in Example for
Configuring an RSVP-TE Tunnel.
2. Configure an administrative group attribute on an outbound interface of
every LSR along each RSVP TE tunnel.
3. Configure the affinity and mask for each tunnel based on the administrative
groups of links and networking requirements.
4. Set a priority value for each tunnel.
Data Preparation
To complete the configuration, you need the following data:
● OSPF process ID and area ID for every LSR
● Maximum reservable bandwidth and BC bandwidth for every link along each
tunnel
● Administrative groups for links between LSRA and LSRB and between LSRB
and LSRC
● Affinity and mask for each tunnel
● Tunnel interface number, source and destination IP addresses, bandwidth,
priority values, and RSVP-TE signaling protocol of the tunnel
Procedure
Step 1 Assign an IP address and its mask to every interface.
Assign an IP address and its mask to every physical interface and configure a
loopback interface address as an LSR ID on every node according to Figure 1-18.
For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP.
Configure OSPF on every LSR to advertise every network segment route and host
route.
# Enable OSPF TE on every LSR. The following example uses the configuration
on LSRA.
[*LSRA] ospf
[*LSRA-ospf-1] opaque-capability enable
[*LSRA-ospf-1] area 0
[*LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRA-ospf-1-area-0.0.0.0] quit
[*LSRA-ospf-1] quit
Repeat this step for LSRB and LSRC. For configuration details, see Configuration
Files in this section.
# Enable CSPF on the ingress LSRA.
[*LSRA] mpls
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] commit
[~LSRA-mpls] quit
After completing the configurations, run the display mpls te cspf tedb node
command on LSRA. TEDB information contains maximum available and reservable
bandwidth for every link, and the administrative group attribute in the Color field.
[~LSRA] display mpls te cspf tedb node
Router ID: 1.1.1.1
IGP Type: OSPF Process ID: 1 IGP Area: 0
MPLS-TE Link Count: 1
Link[1]:
OSPF Router ID: 192.168.1.1 Opaque LSA ID: 1.0.0.1
Interface IP Address: 192.168.1.1
DR Address: 192.168.1.2
IGP Area: 0
Link Type: Multi-access Link Status: Active
IGP Metric: 1 TE Metric: 1 Color: 0x10001
Bandwidth Allocation Model : -
Maximum Link-Bandwidth: 50000 (kbps)
Maximum Reservable Bandwidth: 50000 (kbps)
Operational Mode of Router: TE
Bandwidth Constraints: Local Overbooking Multiplier:
BC[0]: 50000 (kbps) LOM[0]: 1
BW Unreserved:
Class ID:
[0]: 50000 (kbps), [1]: 50000 (kbps)
[2]: 50000 (kbps), [3]: 50000 (kbps)
[4]: 50000 (kbps), [5]: 50000 (kbps)
[6]: 50000 (kbps), [7]: 50000 (kbps)
Router ID: 2.2.2.2
IGP Type: OSPF Process ID: 1 IGP Area: 0
MPLS-TE Link Count: 3
Link[1]:
OSPF Router ID: 192.168.1.2 Opaque LSA ID: 1.0.0.1
Interface IP Address: 192.168.1.2
DR Address: 192.168.1.2
IGP Area: 0
Link Type: Multi-access Link Status: Active
IGP Metric: 1 TE Metric: 1 Color: 0x0
Bandwidth Allocation Model : -
Maximum Link-Bandwidth: 0 (kbps)
Maximum Reservable Bandwidth: 0 (kbps)
Operational Mode of Router: TE
Bandwidth Constraints: Local Overbooking Multiplier:
BC[0]: 0 (kbps) LOM[0]: 1
BW Unreserved:
Class ID:
[0]: 0 (kbps), [1]: 0 (kbps)
[2]: 0 (kbps), [3]: 0 (kbps)
[4]: 0 (kbps), [5]: 0 (kbps)
[6]: 0 (kbps), [7]: 0 (kbps)
Link[2]:
OSPF Router ID: 192.168.1.2 Opaque LSA ID: 1.0.0.3
Interface IP Address: 192.168.2.1
DR Address: 192.168.2.1
IGP Area: 0
Link Type: Multi-access Link Status: Active
IGP Metric: 1 TE Metric: 1 Color: 0x10101
Bandwidth Allocation Model : -
Maximum Link-Bandwidth: 100000 (kbps)
Maximum Reservable Bandwidth: 100000 (kbps)
Operational Mode of Router: TE
Bandwidth Constraints: Local Overbooking Multiplier:
BC[0]: 100000 (kbps) LOM[0]: 1
BW Unreserved:
Class ID:
[0]: 100000 (kbps), [1]: 100000 (kbps)
[2]: 100000 (kbps), [3]: 100000 (kbps)
[4]: 100000 (kbps), [5]: 100000 (kbps)
[6]: 100000 (kbps), [7]: 100000 (kbps)
Link[3]:
Run the display mpls te cspf tedb node command on LSRA again. The TEDB
information shows the remaining bandwidth of every link.
[~LSRA] display mpls te cspf tedb node
Router ID: 1.1.1.1
IGP Type: OSPF Process ID: 1 IGP Area: 0
MPLS-TE Link Count: 1
Link[1]:
OSPF Router ID: 192.168.1.1 Opaque LSA ID: 1.0.0.1
Interface IP Address: 192.168.1.1
DR Address: 192.168.1.2
IGP Area: 0
Link Type: Multi-access Link Status: Active
IGP Metric: 1 TE Metric: 1 Color: 0x10001
Bandwidth Allocation Model : -
Maximum Link-Bandwidth: 50000 (kbps)
Maximum Reservable Bandwidth: 50000 (kbps)
Operational Mode of Router: TE
Bandwidth Constraints: Local Overbooking Multiplier:
BC[0]: 50000 (kbps) LOM[0]: 1
BW Unreserved:
Class ID:
[0]: 50000 (kbps), [1]: 50000 (kbps)
[2]: 50000 (kbps), [3]: 50000 (kbps)
[4]: 50000 (kbps), [5]: 50000 (kbps)
[6]: 50000 (kbps), [7]: 10000 (kbps)
Router ID: 2.2.2.2
IGP Type: OSPF Process ID: 1 IGP Area: 0
MPLS-TE Link Count: 3
Link[1]:
OSPF Router ID: 192.168.1.2 Opaque LSA ID: 1.0.0.1
The BW Unreserved field indicates the remaining bandwidth reserved for tunnel
links with various priorities. The command output shows that the value of [7]
changes on the outbound interface of each node along the tunnel, indicating that
bandwidth of 40 Mbit/s has been successfully reserved for a tunnel. The
bandwidth information also matches the path of a tunnel. This proves that the
affinity and mask match the administrative group of every link.
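The per-priority accounting behind the BW Unreserved field can be sketched as follows: reserving bandwidth at holding priority h reduces the unreserved bandwidth at priorities h through 7, which is why only [7] changes for a tunnel using the default priority. This is a simplified model, not device code:

```python
def reserve(unreserved, bandwidth, holding_priority):
    """Reserving `bandwidth` at `holding_priority` h lowers the unreserved
    bandwidth at priorities h..7: setups at those (equal or lower)
    priorities can no longer use it."""
    if unreserved[holding_priority] < bandwidth:
        raise ValueError("insufficient bandwidth at this priority")
    return [bw if p < holding_priority else bw - bandwidth
            for p, bw in enumerate(unreserved)]

link = [50000] * 8                    # BC0 = 50 Mbit/s on the LSRA-LSRB link
after = reserve(link, 40000, 7)       # a 40 Mbit/s tunnel, default priority 7
assert after == [50000] * 7 + [10000] # only [7] changes, as in the output
```

A tunnel holding at a higher priority (smaller value) would also lower the entries for the priorities below it, leaving less room for lower-priority setups.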
Alternatively, run the display mpls te tunnel diagnostic command to check the
outbound interfaces of links along the tunnel on LSRB.
[~LSRB]display mpls te tunnel diagnostic
* means the LSP is detour LSP
--------------------------------------------------------------------------------
LSP-Id Destination In/Out-If
--------------------------------------------------------------------------------
1.1.1.1:1:3 3.3.3.3 GE1/0/0/GE2/0/0
--------------------------------------------------------------------------------
The mask of Tunnel2's affinity attribute is 0x11101. Therefore, the first three
bits and the last bit of the affinity attribute value are compared, whereas the
fourth bit is ignored. Because the affinity value of Tunnel2 is 0x10011, this tunnel
selects the link with the second and third bits of the administrative group attribute
being 0 and at least one of the first and fifth bits being 1. According to the
preceding rules, if the value of the administrative group attribute is 0x10001,
0x10000, 0x00001, 0x10011, 0x10010, or 0x00011, the value meets requirements.
Tunnel2 then selects the link between GE 1/0/0 of LSRA (the administrative group
value is 0x10001) and GE 3/0/0 of LSRB (the administrative group value is
0x10011).
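The bit-matching rule described above can be sketched as a small function (a simplified model of the default affinity behavior, not the device's implementation):

```python
def link_matches(affinity: int, mask: int, admin_group: int) -> bool:
    """Among the bits selected by the mask, the link's administrative group
    must have 0 wherever the affinity bit is 0, and at least one 1 among
    the positions where the affinity bits are 1."""
    if admin_group & mask & ~affinity:
        return False                   # a masked affinity-0 bit is set
    if affinity & mask:
        return bool(admin_group & mask & affinity)
    return True

AFFINITY, MASK = 0x10011, 0x11101                 # Tunnel2 on LSRA
assert link_matches(AFFINITY, MASK, 0x10001)      # LSRA GE 1/0/0: selected
assert link_matches(AFFINITY, MASK, 0x10011)      # LSRB GE 3/0/0: selected
assert not link_matches(AFFINITY, MASK, 0x10101)  # LSRB GE 2/0/0: rejected
```

Running the function over the administrative group values in this example reproduces the link choices described above: the GE 2/0/0 link (0x10101) fails because its masked third bit is set where the affinity bit is 0.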
Step 6 Verify the configuration.
After completing the configurations, run the display interface tunnel or display
mpls te tunnel-interface command on LSRA. The command output shows that
Tunnel1 is Down. This is because the maximum reservable bandwidth is
insufficient, and Tunnel2, which has a higher priority, has preempted the
bandwidth reserved for Tunnel1.
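The preemption decision can be modeled with a short sketch: a numerically smaller priority value is a higher priority and may tear down lower-priority reservations to free bandwidth. This is a simplified admission model with illustrative names, not the RSVP-TE implementation:

```python
def admit(tunnels, capacity, new_name, new_bw, new_priority):
    """Try to admit a new reservation, preempting tunnels whose holding
    priority is numerically larger (i.e., lower priority) if needed."""
    held = dict(tunnels)                        # name -> (bw, holding_priority)
    free = capacity - sum(bw for bw, _ in held.values())
    if free < new_bw:
        # Preempt lower-priority tunnels first (largest priority value).
        for name, (bw, prio) in sorted(held.items(), key=lambda kv: -kv[1][1]):
            if prio > new_priority and free < new_bw:
                free += bw
                del held[name]                  # preempted tunnel goes down
    if free < new_bw:
        return held, False
    held[new_name] = (new_bw, new_priority)
    return held, True

state = {"Tunnel1": (40000, 7)}                 # 40 Mbit/s, default priority 7
state, up = admit(state, 50000, "Tunnel2", 40000, 6)
assert up and "Tunnel1" not in state            # Tunnel2 preempts Tunnel1
```

On the 50 Mbit/s link, the second 40 Mbit/s request cannot fit alongside the first, so the priority-6 Tunnel2 displaces the priority-7 Tunnel1, matching the Down state observed above.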
Run the display mpls te cspf tedb node command. TEDB information contains
the bandwidth for every link, which indicates that Tunnel2 indeed passes through
GE 3/0/0 of LSRB.
Alternatively, run the display mpls te tunnel diagnostic command to check
outbound interfaces of links along the tunnel on LSRB.
[~LSRB] display mpls te tunnel diagnostic
* means the LSP is detour LSP
--------------------------------------------------------------------------------
LSP-Id Destination In/Out-If
--------------------------------------------------------------------------------
1.1.1.1:1:4 3.3.3.3 GE1/0/0/GE3/0/0
--------------------------------------------------------------------------------
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 1.1.1.1 0.0.0.0
network 192.168.1.0 0.0.0.255
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
mpls
mpls te
mpls te link administrative group 10001
mpls te bandwidth max-reservable-bandwidth 50000
mpls te bandwidth bc0 50000
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 1
mpls te affinity property 10101 mask 11011
mpls te bandwidth ct0 40000
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 101
mpls te priority 6
mpls te affinity property 10011 mask 11101
mpls te bandwidth ct0 40000
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 2.2.2.2 0.0.0.0
network 192.168.1.0 0.0.0.255
network 192.168.2.0 0.0.0.255
network 192.168.3.0 0.0.0.255
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.2.1 255.255.255.0
mpls
mpls te
mpls te link administrative group 10101
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
mpls rsvp-te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 192.168.3.1 255.255.255.0
mpls
mpls te
mpls te link administrative group 10011
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
return
● LSRC configuration file
#
sysname LSRC
#
Networking Requirements
Figure 1-19 illustrates an MPLS network. An RSVP-TE tunnel is established
between PE1 and PE2 along the path PE1 -> P1 -> PE2. The outbound interface
of the primary tunnel on P1 is GE 2/0/0.
The links on network segments 10.2.1.0/30 and 10.5.1.0/30 belong to SRLG 1.
TE Auto FRR is required on P1 to improve reliability. The automatic bypass tunnel
must use links in an SRLG different from that of the primary tunnel.
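The SRLG constraint above can be expressed as a simple set-intersection check: a candidate bypass path is acceptable only if none of its links shares an SRLG with the protected link. The following Python sketch illustrates the idea (the link labels are hypothetical names for this example's topology, not device identifiers):

```python
def srlg_disjoint(path_links, link_srlgs, protected_srlgs):
    """Return True if no link on the candidate bypass path shares an
    SRLG with the protected link."""
    for link in path_links:
        if link_srlgs.get(link, set()) & protected_srlgs:
            return False
    return True

# Topology data from this example (hypothetical link labels):
link_srlgs = {
    "P1->PE2:GE2/0/0": {1},   # 10.2.1.0/30, in SRLG 1
    "P1->PE2:GE1/0/1": {1},   # 10.5.1.0/30, in SRLG 1
    "P1->P2:GE2/0/1": set(),  # 10.3.1.0/30, no SRLG
}
protected = link_srlgs["P1->PE2:GE2/0/0"]

# The parallel link shares SRLG 1, so it cannot carry the bypass tunnel;
# the detour through P2 can.
print(srlg_disjoint(["P1->PE2:GE1/0/1"], link_srlgs, protected))  # False
print(srlg_disjoint(["P1->P2:GE2/0/1"], link_srlgs, protected))   # True
```

This is why the automatic bypass tunnel in this example is steered through P2 rather than over the parallel P1-PE2 link.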
Configuration Notes
None.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign IP addresses to interfaces and configure an IGP on all LSRs to
implement network connectivity.
2. Enable MPLS, MPLS TE, and RSVP-TE on all LSRs and their interfaces.
3. Configure IS-IS TE on all LSRs and enable CSPF on PE1 and P1.
4. Set SRLG numbers for SRLG member interfaces.
5. Configure an SRLG mode in the system view on the PLR.
6. Establish an RSVP-TE tunnel between PE1 and PE2 over an explicit path PE1 -
> P1 -> PE2.
7. Enable TE FRR in the tunnel interface view and TE Auto FRR on the outbound
interface of the primary tunnel on the PLR.
Data Preparation
To complete the configuration, you need an SRLG number.
Configuration Procedure
1. Assign an IP address and its mask to every interface.
Assign an IP address and its mask to every physical interface and configure a
loopback interface address as an LSR ID on every LSR according to Figure
1-19.
# Run the display mpls te srlg command to view information about the SRLG
and its member interfaces. The following example uses the command output on
P1.
[~P1] display mpls te srlg 1
SRLG 1: GE2/0/0 GE1/0/1
SRLGs on GigabitEthernet2/0/0:
1
SRLGs on GigabitEthernet1/0/1:
1
# Run the display mpls te cspf tedb srlg command on P1. Information about
the SRLG TEDB is displayed.
[~P1] display mpls te cspf tedb srlg 1
Interface-Address IGP-Type Area
10.2.1.1 ISIS 1
10.5.1.1 ISIS 1
10.2.1.1 ISIS 2
10.5.1.1 ISIS 2
NOTE
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
explicit-path main
next hop 10.1.1.2
next hop 10.2.1.2
next hop 5.5.5.5
#
isis 1
cost-style wide
network-entity 10.0000.0000.0004.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 5.5.5.5
mpls te tunnel-id 100
mpls te record-route
mpls te bandwidth ct0 10000
mpls te path explicit-path main
mpls te fast-reroute
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
mpls te srlg path-calculation preferred
#
isis 1
cost-style wide
network-entity 10.0000.0000.0001.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.252
mpls
mpls te
mpls te auto-frr link
mpls te srlg 1
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.5.1.1 255.255.255.252
mpls
mpls te
mpls te srlg 1
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/1
undo shutdown
ip address 10.3.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
network-entity 10.0000.0000.0002.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 5.5.5.5
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
network-entity 10.0000.0000.0006.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.5.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.4.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
isis enable 1
#
return
Networking Requirements
Figure 1-20 illustrates an MPLS network. An RSVP-TE tunnel is established
between PE1 and PE2 over an explicit path PE1 -> P4 -> PE2.
The links from PE1 to P1 and from PE1 to P4 are in SRLG 1.
Hot standby is enabled. The primary and hot-standby CR-LSPs must be in
different SRLGs.
Configuration Notes
None.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● SRLG numbers
● Either preferred or strict SRLG mode
Configuration Procedure
1. Assign an IP address and its mask to every interface.
Assign an IP address and its mask to every physical interface and configure a
loopback interface address as an LSR ID on every LSR according to Figure
1-20.
For configuration details, see Configuration Files in this section.
2. Configure an IGP.
Configure OSPF or IS-IS on every node to implement IP connectivity between
them. IS-IS is used as an example.
For configuration details, see Configuration Files in this section.
3. Configure basic MPLS functions.
Set an MPLS LSR ID for every node and enable MPLS in the system and
interface views.
For configuration details, see Configuration Files in this section.
4. Configure MPLS TE and RSVP-TE.
Enable MPLS TE and RSVP-TE in the system and interface views of each node.
For configuration details, see Configuration Files in this section.
5. Configure IS-IS TE and CSPF.
Configure IS-IS TE on each node and enable CSPF on PE1. For configuration
details, see Configuration Files in this section.
6. Configure an explicit path for the primary CR-LSP.
# Configure the explicit path for the primary CR-LSP on PE1.
<PE1> system-view
[~PE1] explicit-path main
[*PE1-explicit-path-main] next hop 10.3.1.2
[*PE1-explicit-path-main] next hop 10.6.1.2
[*PE1-explicit-path-main] next hop 6.6.6.6
[*PE1-explicit-path-main] commit
[~PE1-explicit-path-main] quit
Run the display interface tunnel1 command on PE1. The tunnel is Up.
[~PE1] display interface tunnel1
Tunnel1 current state : UP (ifindex: 26)
Line protocol current state : UP
...
8. Configure an SRLG.
# Add links from PE1 to P1 and from PE1 to P4 to SRLG1.
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] mpls te srlg 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] mpls te srlg 1
[*PE1-GigabitEthernet2/0/0] commit
# Configure an SRLG mode on PE1.
[~PE1] mpls
[~PE1-mpls] mpls te srlg path-calculation strict
[*PE1-mpls] commit
[~PE1-mpls] quit
# Run the display mpls te srlg command to view information about the SRLG
and its member interfaces. The following example uses the command output on
PE1.
[~PE1] display mpls te srlg all
Total SRLG supported : 1024
Total SRLG configured : 1
SRLGs on GigabitEthernet1/0/0:
1
SRLGs on GigabitEthernet2/0/0:
1
# Run the display mpls te cspf tedb srlg command. Information about the
SRLG TEDB is displayed. The following example uses the command output on
PE1.
[~PE1] display mpls te cspf tedb srlg 1
Interface-Address IGP-Type Area
10.1.1.1 ISIS Level-1
10.3.1.1 ISIS Level-1
10.1.1.1 ISIS Level-2
10.3.1.1 ISIS Level-2
9. Configure hot standby on the ingress PE1.
# Configure PE1.
[~PE1] interface tunnel1
[~PE1-Tunnel1] mpls te backup hot-standby
[~PE1-Tunnel1] commit
# Run the display mpls te hot-standby state interface tunnel1 command.
Information about hot standby is displayed.
10. Verify the configuration.
# Run the shutdown command on GE 1/0/1 on PE1.
[~PE1] interface gigabitethernet 1/0/1
[~PE1-GigabitEthernet1/0/1] shutdown
[*PE1-GigabitEthernet1/0/1] commit
[~PE1-GigabitEthernet1/0/1] quit
# Run the display mpls te hot-standby state interface tunnel1 command
on PE1. The hot-standby CR-LSP index is 0x0, indicating that no hot-standby
CR-LSP is established. In strict SRLG mode, a hot-standby CR-LSP is not set
up if it would have to share an SRLG with the primary CR-LSP.
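The difference between the strict and preferred SRLG path-calculation modes can be sketched in Python. This is a simplified illustration of the selection logic, not the router's CSPF implementation: strict mode returns no path when every candidate shares an SRLG with the primary CR-LSP, while preferred mode falls back to the best available path.

```python
def pick_backup_path(candidates, primary_srlgs, mode="strict"):
    """candidates: list of (path_name, srlg_set, cost) tuples.
    Prefer an SRLG-disjoint path; in strict mode give up if none
    exists, in preferred mode fall back to the best remaining path."""
    disjoint = [c for c in candidates if not (c[1] & primary_srlgs)]
    pool = disjoint if disjoint else ([] if mode == "strict" else candidates)
    return min(pool, key=lambda c: c[2])[0] if pool else None

# As in this example: every alternative path shares SRLG 1 with the
# primary CR-LSP (hypothetical path names and costs).
candidates = [("via P1-P2-P3", {1}, 30)]
print(pick_backup_path(candidates, {1}, mode="strict"))     # None
print(pick_backup_path(candidates, {1}, mode="preferred"))  # via P1-P2-P3
```

In strict mode the function returns None, matching the verification result above where the hot-standby CR-LSP index is 0x0.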
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 5.5.5.5
#
mpls
mpls te
mpls te srlg path-calculation strict
mpls te cspf
mpls rsvp-te
#
explicit-path main
next hop 10.3.1.2
next hop 10.6.1.2
next hop 6.6.6.6
#
isis 1
cost-style wide
network-entity 10.0000.0000.0005.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.252
mpls
mpls te
mpls te srlg 1
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.252
mpls
mpls te
mpls te srlg 1
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.8.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 6.6.6.6
mpls te tunnel-id 100
mpls te record-route
mpls te bandwidth ct0 10000
mpls te backup hot-standby
mpls te path explicit-path main
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
network-entity 10.0000.0000.0001.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
network-entity 10.0000.0000.0002.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.5.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
return
● P3 configuration file
#
sysname P3
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
network-entity 10.0000.0000.0003.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.7.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
● P4 configuration file
#
sysname P4
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
network-entity 10.0000.0000.0004.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.5.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
return
Networking Requirements
Figure 1-21 illustrates a network:
● IS-IS runs on LSRA, LSRB, LSRC, LSRD, and LSRE.
– LSRA and LSRE are level-1 routers.
– LSRB and LSRD are level-1-2 routers.
– LSRC is a level-2 router.
● RSVP-TE is used to establish a TE tunnel between LSRA and LSRE over IS-IS
areas. The bandwidth for the TE tunnel is 20 Mbit/s.
● Both the maximum reservable bandwidth and BC0 bandwidth for every link
along the TE tunnel are 100 Mbit/s.
Configuration Notes
None.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to every interface and configure a loopback address that
is used as an LSR ID on every LSR.
2. Enable IS-IS globally and enable IS-IS TE.
3. Configure a loose explicit path on which LSRB, LSRC, and LSRD functioning as
area border routers (ABRs) are located.
Data Preparation
To complete the configuration, you need the following data:
● Originating system ID, IS-IS level, and area ID of every LSR
● Maximum reservable bandwidth and BC bandwidth for every link along the
TE tunnel
● Tunnel interface number, IP address, destination address, tunnel ID, signaling
protocol (RSVP-TE), and tunnel bandwidth
Procedure
Step 1 Assign an IP address and its mask to every interface.
Assign an IP address and a mask to each interface according to Figure 1-21. The
configuration details are not provided.
Step 2 Configure IS-IS.
# Configure LSRA.
[~LSRA] isis 1
[*LSRA-isis-1] network-entity 00.0005.0000.0000.0001.00
[*LSRA-isis-1] is-level level-1
[*LSRA-isis-1] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] isis enable 1
[*LSRA-GigabitEthernet1/0/0] quit
[*LSRA] interface loopback 1
[*LSRA-LoopBack1] isis enable 1
[*LSRA-LoopBack1] commit
[~LSRA-LoopBack1] quit
# Configure LSRB.
[~LSRB] isis 1
[*LSRB-isis-1] network-entity 00.0005.0000.0000.0002.00
[*LSRB-isis-1] is-level level-1-2
[*LSRB-isis-1] import-route isis level-2 into level-1
[*LSRB-isis-1] quit
[*LSRB] interface gigabitethernet 1/0/0
[*LSRB-GigabitEthernet1/0/0] isis enable 1
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 2/0/0
[*LSRB-GigabitEthernet2/0/0] isis enable 1
[*LSRB-GigabitEthernet2/0/0] quit
[*LSRB] interface loopback 1
[*LSRB-LoopBack1] isis enable 1
[*LSRB-LoopBack1] commit
[~LSRB-LoopBack1] quit
# Configure LSRC.
[~LSRC] isis 1
[*LSRC-isis-1] network-entity 00.0006.0000.0000.0003.00
# Configure LSRD.
[~LSRD] isis 1
[*LSRD-isis-1] network-entity 00.0007.0000.0000.0004.00
[*LSRD-isis-1] is-level level-1-2
[*LSRD-isis-1] import-route isis level-2 into level-1
[*LSRD-isis-1] quit
[*LSRD] interface gigabitethernet 1/0/0
[*LSRD-GigabitEthernet1/0/0] isis enable 1
[*LSRD-GigabitEthernet1/0/0] quit
[*LSRD] interface gigabitethernet 2/0/0
[*LSRD-GigabitEthernet2/0/0] isis enable 1
[*LSRD-GigabitEthernet2/0/0] quit
[*LSRD] interface loopback 1
[*LSRD-LoopBack1] isis enable 1
[*LSRD-LoopBack1] commit
[~LSRD-LoopBack1] quit
# Configure LSRE.
[~LSRE] isis 1
[*LSRE-isis-1] network-entity 00.0007.0000.0000.0005.00
[*LSRE-isis-1] is-level level-1
[*LSRE-isis-1] quit
[*LSRE] interface gigabitethernet 1/0/0
[*LSRE-GigabitEthernet1/0/0] isis enable 1
[*LSRE-GigabitEthernet1/0/0] quit
[*LSRE] interface loopback 1
[*LSRE-LoopBack1] isis enable 1
[*LSRE-LoopBack1] commit
[~LSRE-LoopBack1] quit
# Configure LSRB.
[~LSRB] mpls lsr-id 2.2.2.2
[*LSRB] mpls
[*LSRB-mpls] mpls te
[*LSRB-mpls] mpls rsvp-te
[*LSRB-mpls] quit
[*LSRB] interface gigabitethernet 1/0/0
[*LSRB-GigabitEthernet1/0/0] mpls
[*LSRB-GigabitEthernet1/0/0] mpls te
[*LSRB-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 2/0/0
[*LSRB-GigabitEthernet2/0/0] mpls
[*LSRB-GigabitEthernet2/0/0] mpls te
[*LSRB-GigabitEthernet2/0/0] mpls rsvp-te
[*LSRB-GigabitEthernet2/0/0] commit
[~LSRB-GigabitEthernet2/0/0] quit
# Configure LSRC.
[~LSRC] mpls lsr-id 3.3.3.3
[*LSRC] mpls
[*LSRC-mpls] mpls te
[*LSRC-mpls] mpls rsvp-te
[*LSRC-mpls] quit
[*LSRC] interface gigabitethernet 1/0/0
[*LSRC-GigabitEthernet1/0/0] mpls
[*LSRC-GigabitEthernet1/0/0] mpls te
[*LSRC-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRC-GigabitEthernet1/0/0] quit
[*LSRC] interface gigabitethernet 2/0/0
[*LSRC-GigabitEthernet2/0/0] mpls
[*LSRC-GigabitEthernet2/0/0] mpls te
[*LSRC-GigabitEthernet2/0/0] mpls rsvp-te
[*LSRC-GigabitEthernet2/0/0] commit
[~LSRC-GigabitEthernet2/0/0] quit
# Configure LSRD.
[~LSRD] mpls lsr-id 4.4.4.4
[*LSRD] mpls
[*LSRD-mpls] mpls te
[*LSRD-mpls] mpls rsvp-te
[*LSRD-mpls] quit
[*LSRD] interface gigabitethernet 1/0/0
[*LSRD-GigabitEthernet1/0/0] mpls
[*LSRD-GigabitEthernet1/0/0] mpls te
[*LSRD-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRD-GigabitEthernet1/0/0] quit
[*LSRD] interface gigabitethernet 2/0/0
[*LSRD-GigabitEthernet2/0/0] mpls
[*LSRD-GigabitEthernet2/0/0] mpls te
[*LSRD-GigabitEthernet2/0/0] mpls rsvp-te
[*LSRD-GigabitEthernet2/0/0] commit
[~LSRD-GigabitEthernet2/0/0] quit
# Configure LSRE.
[~LSRE] mpls lsr-id 5.5.5.5
[*LSRE] mpls
[*LSRE-mpls] mpls te
[*LSRE-mpls] mpls rsvp-te
[*LSRE-mpls] quit
[*LSRE] interface gigabitethernet 1/0/0
[*LSRE-GigabitEthernet1/0/0] mpls
[*LSRE-GigabitEthernet1/0/0] mpls te
[*LSRE-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRE-GigabitEthernet1/0/0] commit
[~LSRE-GigabitEthernet1/0/0] quit
# Configure LSRA.
[~LSRA] isis 1
[~LSRA-isis-1] cost-style wide
[*LSRA-isis-1] traffic-eng level-1
[*LSRA-isis-1] commit
[~LSRA-isis-1] quit
# Configure LSRB.
[~LSRB] isis 1
[~LSRB-isis-1] cost-style wide
[*LSRB-isis-1] traffic-eng level-1-2
[*LSRB-isis-1] commit
[~LSRB-isis-1] quit
# Configure LSRC.
[~LSRC] isis 1
[~LSRC-isis-1] cost-style wide
[*LSRC-isis-1] traffic-eng level-2
[*LSRC-isis-1] commit
[~LSRC-isis-1] quit
# Configure LSRD.
[~LSRD] isis 1
[~LSRD-isis-1] cost-style wide
[*LSRD-isis-1] traffic-eng level-1-2
[*LSRD-isis-1] commit
[~LSRD-isis-1] quit
# Configure LSRE.
[~LSRE] isis 1
[~LSRE-isis-1] cost-style wide
[*LSRE-isis-1] traffic-eng level-1
[*LSRE-isis-1] commit
[~LSRE-isis-1] quit
# Set the maximum reservable bandwidth and BC0 bandwidth for the link on LSRB.
[~LSRB] interface gigabitethernet 2/0/0
[~LSRB-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRB-GigabitEthernet2/0/0] mpls te bandwidth bc0 100000
[*LSRB-GigabitEthernet2/0/0] commit
[~LSRB-GigabitEthernet2/0/0] quit
# Set the maximum reservable bandwidth and BC0 bandwidth for the link on LSRC.
[~LSRC] interface gigabitethernet 1/0/0
# Set the maximum reservable bandwidth and BC0 bandwidth for the link on LSRD.
[~LSRD] interface gigabitethernet 2/0/0
[~LSRD-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*LSRD-GigabitEthernet2/0/0] mpls te bandwidth bc0 100000
[*LSRD-GigabitEthernet2/0/0] commit
[~LSRD-GigabitEthernet2/0/0] quit
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
explicit-path atoe
next hop 10.1.1.2 include loose
next hop 10.2.1.2 include loose
next hop 10.3.1.2 include loose
next hop 10.4.1.2 include loose
#
isis 1
is-level level-1
cost-style wide
traffic-eng level-1
network-entity 00.0005.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 5.5.5.5
mpls te tunnel-id 1
mpls te bandwidth ct0 20000
mpls te path explicit-path atoe
#
return
● LSRD configuration file
#
sysname LSRD
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-1-2
cost-style wide
traffic-eng level-1-2
network-entity 00.0007.0000.0000.0004.00
import-route isis level-2 into level-1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
return
Networking Requirements
On the network shown in Figure 1-22, an RSVP-TE tunnel is established between
LSRA and LSRD. The tunnel bandwidth is 50 Mbit/s, and the maximum reservable
bandwidth and BC0 bandwidth for every link are 100 Mbit/s. The Russian dolls
model (RDM) is used.
The threshold for flooding bandwidth information is set to 20%. This reduces
the number of bandwidth flooding attempts and saves network resources. If the
ratio of the bandwidth used or released by an MPLS TE tunnel to the available
bandwidth in the TEDB reaches or exceeds 20%, the IGP floods the bandwidth
information, and CSPF updates the TEDB.
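The flooding decision described above reduces to a percentage comparison, sketched below in Python. This is an illustration of the rule as stated in this section, not the device's internal algorithm; the behavior when no bandwidth remains is an assumption noted in the comment.

```python
def should_flood(delta_kbps: float, available_kbps: float,
                 threshold_pct: float = 20.0) -> bool:
    """Flood an IGP TE update when the bandwidth used or released
    reaches the configured percentage of the bandwidth currently
    advertised as available in the TEDB."""
    if available_kbps <= 0:
        return True  # assumption: always advertise once nothing is left
    return abs(delta_kbps) / available_kbps * 100 >= threshold_pct

# Reserving 20,000 kbit/s out of 100,000 kbit/s available hits the
# 20% threshold, so the IGP floods; a 10,000 kbit/s change does not.
print(should_flood(20000, 100000))  # True
print(should_flood(10000, 100000))  # False
```

This matches the verification later in this example, where a 20,000 kbit/s tunnel triggers a CSPF TEDB update.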
Figure 1-22 Networking diagram for the threshold for flooding bandwidth
information
NOTE
Configuration Notes
None.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an RSVP-TE tunnel. See "Configuration Roadmap" in Example for
Configuring an RSVP-TE Tunnel.
2. Configure bandwidth and the threshold for flooding bandwidth information
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address and its mask to every interface.
Assign an IP address and its mask to every physical interface and configure a
loopback interface address as an LSR ID on every node according to Figure 1-22.
For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP.
Configure OSPF or IS-IS on every node to implement connectivity between them.
IS-IS is used in this example.
For configuration details, see Configuration Files in this section.
Step 3 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Enable MPLS, MPLS TE, and RSVP-TE on every LSR and their interfaces along a
tunnel, and enable CSPF in the system view of the ingress.
For configuration details, see Configuration Files in this section.
Step 4 Set MPLS TE bandwidth for links.
# Set the maximum reservable bandwidth and BC0 bandwidth for a link on every
interface along the TE tunnel.
For configuration details, see Configuration Files in this section.
Step 5 Configure the threshold for flooding bandwidth information.
# Set the threshold for flooding bandwidth information to 20% on a physical
interface on LSRA. If the ratio of the bandwidth used or released by an
MPLS TE tunnel to the available bandwidth in the TEDB reaches or exceeds
20%, the IGP floods the bandwidth information, and CSPF updates the TEDB.
[~LSRA] interface gigabitethernet 1/0/0
[~LSRA-GigabitEthernet1/0/0] mpls te bandwidth change thresholds up 20
[*LSRA-GigabitEthernet1/0/0] mpls te bandwidth change thresholds down 20
[*LSRA-GigabitEthernet1/0/0] commit
[~LSRA-GigabitEthernet1/0/0] quit
Run the display mpls te cspf tedb command on LSRA. TEDB information is
displayed.
[~LSRA] display mpls te cspf tedb interface 10.1.1.1
Router ID: 1.1.1.9
IGP Type: ISIS Process Id: 1
Link[1]:
ISIS System ID: 0000.0000.0001.00 Opaque LSA ID: 0000.0000.0001.00:00
Interface IP Address: 10.1.1.1
DR Address: 10.1.1.1
Run the display mpls te cspf tedb command on LSRA. Bandwidth information is
unchanged.
[~LSRA] display mpls te cspf tedb interface 10.1.1.1
Router ID: 1.1.1.9
IGP Type: ISIS Process Id: 1
Link[1]:
ISIS System ID: 0000.0000.0001.00 Opaque LSA ID: 0000.0000.0001.00:00
Interface IP Address: 10.1.1.1
DR Address: 10.1.1.1
DR ISIS System ID: 0000.0000.0001.01
IGP Area: Level-2
Link Type: Multi-access Link Status: Active
IGP Metric: 10 TE Metric: 10 Color: 0x0
Bandwidth Allocation Model : -
Maximum Link-Bandwidth: 100000 (kbps)
Maximum Reservable Bandwidth: 100000 (kbps)
Operational Mode of Router : TE
Bandwidth Constraints: Local Overbooking Multiplier:
BC[0]: 100000 (kbps) LOM[0]: 1
BW Unreserved:
Class ID:
Run the display mpls te cspf tedb interface 10.1.1.1 command on LSRA. The
TEDB information shows that the TE tunnel named Tunnel1 has been re-established
with a bandwidth of 20,000 kbit/s, which reaches the 20% threshold for
flooding bandwidth information. Therefore, the CSPF TEDB information has been
updated.
[~LSRA] display mpls te cspf tedb interface 10.1.1.1
Router ID: 1.1.1.9
IGP Type: ISIS Process Id: 1
Link[1]:
ISIS System ID: 0000.0000.0001.00 Opaque LSA ID: 0000.0000.0001.00:00
Interface IP Address: 10.1.1.1
DR Address: 10.1.1.1
DR ISIS System ID: 0000.0000.0001.01
IGP Area: Level-2
Link Type: Multi-access Link Status: Active
IGP Metric: 10 TE Metric: 10 Color: 0x0
Bandwidth Allocation Model : -
Maximum Link-Bandwidth: 100000 (kbps)
Maximum Reservable Bandwidth: 100000 (kbps)
Operational Mode of Router : TE
Bandwidth Constraints: Local Overbooking Multiplier:
BC[0]: 100000 (kbps) LOM[0]: 1
BW Unreserved:
Class ID:
[0]: 100000 (kbps), [1]: 100000 (kbps)
[2]: 100000 (kbps), [3]: 100000 (kbps)
[4]: 100000 (kbps), [5]: 100000 (kbps)
[6]: 100000 (kbps), [7]: 80000 (kbps)
----End
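The BW Unreserved values in the preceding output can be reproduced with a short priority-accounting sketch. This is an illustration under a common interpretation (reservations reduce the unreserved bandwidth at every priority numerically greater than or equal to their holding priority), not the router's exact algorithm:

```python
def unreserved(max_reservable_kbps, reservations):
    """reservations: list of (holding_priority, kbps) tuples.
    Unreserved bandwidth at priority p excludes reservations whose
    holding priority is numerically <= p (equal or more important)."""
    out = []
    for p in range(8):
        used = sum(bw for hold, bw in reservations if hold <= p)
        out.append(max_reservable_kbps - used)
    return out

# One 20,000 kbit/s CR-LSP at the default priority 7, as in this example:
print(unreserved(100000, [(7, 20000)]))
# [100000, 100000, 100000, 100000, 100000, 100000, 100000, 80000]
```

The result matches the output above: priorities 0 through 6 still advertise 100,000 kbit/s, while priority 7 drops to 80,000 kbit/s.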
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
#
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0003.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
return
Networking Requirements
On the network shown in Figure 1-23, a primary LSP is along the path LSRA ->
LSRB -> LSRC -> LSRD. FRR is enabled on LSRB to protect traffic on the link
between LSRB and LSRC.
A bypass CR-LSP is established over the path LSRB -> LSRE -> LSRC. LSRB is a PLR,
and LSRC is an MP.
Explicit paths are used to establish the primary and bypass CR-LSPs. RSVP-TE is
used as a signaling protocol.
Configuration Notes
None.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a primary CR-LSP and enable TE FRR on the tunnel interface of the
primary CR-LSP.
2. Configure a bypass CR-LSP on the PLR (ingress) and specify the interface of
the protected link.
Data Preparation
To complete the configuration, you need the following data:
● IS-IS area ID, originating system ID, and IS-IS level of each node
● Explicit paths for the primary and bypass CR-LSPs
● Tunnel interface number, source and destination IP addresses, ID, and RSVP-TE
signaling protocol for each of the primary and bypass CR-LSPs
● Protected bandwidth and type and number of the interface on the protected
link
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address and its mask to every physical interface and configure a
loopback interface address as an LSR ID on every node shown in Figure 1-23. For
configuration details, see Configuration Files in this section.
Step 2 Configure an IGP.
Configure IS-IS on all nodes to advertise host routes. For configuration details, see
Configuration Files in this section.
After completing the configurations, run the display ip routing-table command
on every node. All nodes have learned routes from each other.
Step 3 Configure basic MPLS functions and enable MPLS TE, CSPF, RSVP-TE, and IS-IS TE.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls te
[*LSRA-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRA-GigabitEthernet1/0/0] quit
[*LSRA] isis
[*LSRA-isis-1] cost-style wide
[*LSRA-isis-1] traffic-eng level-2
[*LSRA-isis-1] commit
NOTE
The configurations of LSRB, LSRC, LSRD, and LSRE are similar to the configuration of LSRA.
For configuration details, see Configuration Files in this section. CSPF needs to be enabled
only on LSRA and LSRB, which are ingress nodes of the primary and bypass CR-LSPs,
respectively.
# Enable FRR.
[*LSRA-Tunnel1] mpls te fast-reroute
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
After completing the configuration, run the display interface tunnel command
on LSRA. Tunnel1 is Up.
[~LSRA] display interface tunnel
Tunnel1 current state : UP (ifindex: 20)
Line protocol current state : UP
Last line protocol up time : 2011-05-31 06:30:58
Description:
Route Port,The Maximum Transmit Unit is 1500, Current BW: 50Mbps
Internet Address is unnumbered, using address of LoopBack1(1.1.1.1/32)
Encapsulation is TUNNEL, loopback not set
Tunnel destination 4.4.4.4
Tunnel up/down statistics 1
Tunnel protocol/transport MPLS/MPLS, ILM is available,
primary tunnel id is 0x321, secondary tunnel id is 0x0
Current system time: 2011-05-31 07:32:31
300 seconds output rate 0 bits/sec, 0 packets/sec
0 seconds output rate 0 bits/sec, 0 packets/sec
126 packets output, 34204 bytes
0 output error
18 output drop
Last 300 seconds input utility rate: 0.00%
Last 300 seconds output utility rate: 0.00%
After completing the configuration, run the display interface tunnel command
on LSRB. The tunnel named Tunnel3 is Up.
Run the display mpls te tunnel name Tunnel1 verbose command on LSRB. The
bypass tunnel is bound to the outbound interface GE 2/0/0 and is not in use.
[~LSRB] display mpls te tunnel name Tunnel1 verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : Tunnel1
TunnelIndex : -
Session ID : 1 LSP ID : 95
Lsr Role : Transit
Ingress LSR ID : 1.1.1.1
Egress LSR ID : 4.4.4.4
In-Interface : GE1/0/0
Out-Interface : GE2/0/0
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
ER-Hop Table Index : - AR-Hop Table Index: -
C-Hop Table Index : -
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : -
Run the display interface tunnel 1 command on LSRA. The tunnel interface of
the primary CR-LSP is still Up.
Run the tracert lsp te tunnel1 command on LSRA. The path through which the
primary CR-LSP passes is displayed.
[~LSRA] tracert lsp te tunnel1
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to break.
TTL Replier Time Type Downstream
0 Ingress 10.21.1.2/[25 ]
1 10.21.1.2 3 Transit 10.32.1.2/[16 16 ]
2 10.32.1.2 4 Transit 10.33.1.2/[3 ]
3 10.33.1.2 4 Transit 10.41.1.2/[3 ]
4 4.4.4.4 3 Egress
The preceding command output shows that traffic has switched to the bypass CR-
LSP.
NOTE
If the display mpls te tunnel-interface command is run immediately after FRR switching
has been performed, two CR-LSPs are both Up. This is because FRR uses the make-before-
break mechanism to establish a bypass CR-LSP. The original CR-LSP will be deleted after a
new CR-LSP has been established.
Run the display mpls te tunnel name Tunnel1 verbose command on LSRB. The
bypass CR-LSP is being used.
[~LSRB] display mpls te tunnel name Tunnel1 verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : Tunnel1
TunnelIndex : -
Session ID : 1 LSP ID : 95
Lsr Role : Transit
Ingress LSR ID : 1.1.1.1
Egress LSR ID : 4.4.4.4
In-Interface : GE1/0/0
Out-Interface : GE2/0/0
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
ER-Hop Table Index : - AR-Hop Table Index: -
C-Hop Table Index : -
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : -
Created Time : 2012/02/01 04:53:22
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Reserved
CT0 Bandwidth(Kbit/sec) : 10000 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0
Setup-Priority : 7 Hold-Priority : 7
--------------------------------
FRR Information
--------------------------------
Primary LSP Info
Bypass In Use : In Use
Bypass Tunnel Id : 1
BypassTunnel : Tunnel Index[Tunnel3], InnerLabel[16]
Bypass Lsp ID : 8 FrrNextHop : 3.3.3.3
ReferAutoBypassHandle : -
FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: -
Bypass Attribute
Setup Priority : 7 Hold Priority : 7
HopLimit : 32 Bandwidth : 0
IncludeAnyGroup : 0 ExcludeAnyGroup : 0
IncludeAllGroup : 0
Bypass Unbound Bandwidth Info(Kbit/sec)
CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: -
CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: -
CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: -
CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: -
--------------------------------
BFD Information
--------------------------------
NextSessionTunnelIndex : - PrevSessionTunnelIndex: -
NextLspId : - PrevLspId : -
Run the display interface tunnel1 command on LSRA. The tunnel interface of the
primary CR-LSP is Up.
After a specified period of time elapses, run the display mpls te tunnel name
Tunnel1 verbose command on LSRB. The Bypass In Use field of Tunnel1 displays
Not Used, indicating that traffic has switched back to GE 2/0/0.
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
explicit-path pri-path
next hop 10.21.1.2
next hop 10.31.1.2
next hop 10.41.1.2
next hop 4.4.4.4
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.21.1.1 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te tunnel-id 1
mpls te record-route label
mpls te path explicit-path pri-path
mpls te fast-reroute
#
return
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.31.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.33.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
● LSRD configuration file
#
sysname LSRD
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0004.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.41.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
return
● LSRE configuration file
#
sysname LSRE
#
mpls lsr-id 5.5.5.5
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0005.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.32.1.2 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.33.1.1 255.255.255.0
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
isis enable 1
#
return
Networking Requirements
On the network shown in Figure 1-24, a primary CR-LSP is established over an
explicit path LSRA -> LSRB -> LSRC. Bypass CR-LSPs need to be established on the
ingress LSRA and the transit node LSRB, respectively. These bypass CR-LSPs are
required to provide bandwidth protection. A node protection tunnel is a bypass
tunnel that originates at LSRA, terminates at LSRC, and bypasses the
intermediate node LSRB. A link protection tunnel is a bypass tunnel that
originates from LSRB's outbound interface, terminates at LSRC's inbound
interface, and either passes through the intermediate LSRD or is a direct link
between LSRB's outbound interface and LSRC's inbound interface.
Configuration Notes
None.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a primary CR-LSP, and enable TE FRR in the tunnel interface view
and MPLS auto FRR in the MPLS view.
2. Set the protected bandwidth and priorities for the bypass CR-LSP in the tunnel
interface view.
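On the ingress LSRA, these roadmap items correspond to the following commands. This is only a sketch of the FRR-related part of the tunnel configuration; the values shown here match the configuration files in this section.
# Enable MPLS auto FRR in the MPLS view.
[~LSRA] mpls
[~LSRA-mpls] mpls te auto-frr
[*LSRA-mpls] quit
# Enable TE FRR with bandwidth protection and set the bypass CR-LSP bandwidth and priorities in the tunnel interface view.
[*LSRA] interface Tunnel2
[*LSRA-Tunnel2] mpls te fast-reroute bandwidth
[*LSRA-Tunnel2] mpls te bypass-attributes bandwidth 200 priority 5 4
[*LSRA-Tunnel2] commit
[~LSRA-Tunnel2] quit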
Data Preparation
To complete the configuration, you need the following data:
● OSPF process ID and OSPF area ID for every node
● Path for the primary CR-LSP
● Tunnel interface number, source and destination IP addresses of the primary
tunnel, tunnel ID, RSVP-TE signaling protocol, and tunnel bandwidth
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address and its mask to every physical interface and configure a
loopback interface address as an LSR ID on every node shown in Figure 1-24. For
configuration details, see Configuration Files in this section.
Step 2 Configure OSPF to advertise every network segment route and host route.
Configure OSPF on all nodes to advertise host routes. For configuration details,
see Configuration Files in this section.
After completing the configurations, run the display ip routing-table command
on every node. All nodes have learned routes from one another.
Step 3 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
[*LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 2/0/0
[*LSRA-GigabitEthernet2/0/0] mpls
[*LSRA-GigabitEthernet2/0/0] mpls te
[*LSRA-GigabitEthernet2/0/0] mpls rsvp-te
[*LSRA-GigabitEthernet2/0/0] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls te
[*LSRA-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRA-GigabitEthernet1/0/0] commit
[~LSRA-GigabitEthernet1/0/0] quit
NOTE
Repeat this step for LSRB, LSRC, and LSRD. For configuration details, see Configuration
Files in this section.
# Configure LSRB.
[~LSRB] mpls
[~LSRB-mpls] mpls te auto-frr
[*LSRB-mpls] commit
FTid : 130
Tie-Breaking Policy : None Metric Type : None
Bfd Cap : None
Reopt : Disabled Reopt Freq : -
Inter-area Reopt : Disabled
Auto BW : Disabled Threshold : -
Current Collected BW: - Auto BW Freq : -
Min BW :- Max BW :-
Offload : Disabled Offload Freq : -
Low Value :- High Value : -
Readjust Value :-
Offload Explicit Path Name: -
Tunnel Group : Primary
Interfaces Protected: GigabitEthernet2/0/0
Excluded IP Address : 10.21.1.1
10.21.1.2
2.2.2.2
Referred LSP Count : 1
Primary Tunnel :- Pri Tunn Sum : -
Backup Tunnel :-
Group Status : Down Oam Status : None
IPTN InLabel :-
BackUp LSP Type : None BestEffort : Disabled
Secondary HopLimit : -
BestEffort HopLimit : -
Secondary Explicit Path Name: -
Secondary Affinity Prop/Mask: 0x0/0x0
BestEffort Affinity Prop/Mask: 0x0/0x0
IsConfigLspConstraint: -
Hot-Standby Revertive Mode: Revertive
Hot-Standby Overlap-path: Disabled
Hot-Standby Switch State: CLEAR
Bit Error Detection: Disabled
Bit Error Detection Switch Threshold: -
Bit Error Detection Resume Threshold: -
Ip-Prefix Name : -
P2p-Template Name : -
PCE Delegate : No LSP Control Status : Local control
Path Verification : --
Entropy Label : None
Associated Tunnel Group ID: - Associated Tunnel Group Type: -
Auto BW Remain Time : 200 s Reopt Remain Time : 100 s
Metric Inherit IGP : None
Binding Sid :- Reverse Binding Sid : -
Self-Ping : Disable Self-Ping Duration : 1800 sec
FRR Attr Source : - Is FRR degrade down : No
BFD Status :-
Soft Preemption : Disabled
Reroute Flag : Disabled
Pce Flag : Normal
Path Setup Type : EXPLICIT
Create Modify LSP Reason: -
Self-Ping Status : -
The automatic bypass CR-LSP protects traffic on GE 2/0/0, the outbound interface
of the primary CR-LSP, but not the other three interfaces. The bandwidth is 200
kbit/s, and the setup and holding priority values are 5 and 4, respectively.
Run the display mpls te tunnel path command on LSRA. The bypass CR-LSP is
providing both node and bandwidth protection for the primary CR-LSP.
[~LSRA] display mpls te tunnel path
Tunnel Interface Name : Tunnel2
Lsp ID : 1.1.1.1 :200 :164
Hop Information
Hop 0 10.21.1.1 Local-Protection available | bandwidth | node
Hop 1 10.21.1.2 Label 32846
Hop 2 2.2.2.2 Label 32846
Hop 3 10.31.1.1 Local-Protection available | bandwidth
Hop 4 10.31.1.2 Label 3
Hop 5 3.3.3.3 Label 3
Run the display mpls te tunnel name Tunnel2 verbose command on the transit
LSRB. Information about the primary and bypass CR-LSPs is displayed.
[~LSRB] display mpls te tunnel name Tunnel2 verbose
No : 1
Tunnel-Name : Tunnel2
Tunnel Interface Name : -
TunnelIndex : -
Session ID : 200 LSP ID : 164
LSR Role : Transit
Ingress LSR ID : 1.1.1.1
Egress LSR ID : 3.3.3.3
In-Interface : GE3/0/0
Out-Interface : GE2/0/0
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
ER-Hop Table Index : - AR-Hop Table Index: -
C-Hop Table Index : -
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : -
Created Time : 2015-01-28 11:10:32
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Reserved
CT0 Bandwidth(Kbit/sec) : 400 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0
Setup-Priority : 4 Hold-Priority : 3
--------------------------------
FRR Information
--------------------------------
Primary LSP Info
The automatic bypass CR-LSP protects traffic on GE 2/0/0, the outbound interface
of the primary CR-LSP. The bandwidth is 200 kbit/s, and the setup and holding
priority values are 5 and 4, respectively.
Run the display mpls te tunnel path command on LSRB. Information about the
path of both primary CR-LSP and automatic bypass CR-LSP is displayed.
[~LSRB] display mpls te tunnel path
Tunnel Interface Name : Tunnel2
Lsp ID : 1.1.1.1 :200 :164
Hop Information
Hop 0 1.1.1.1
Hop 1 10.21.1.1 Local-Protection available | bandwidth | node
Hop 2 10.21.1.2 Label 32846
Hop 3 2.2.2.2 Label 32846
Hop 4 10.31.1.1 Local-Protection available | bandwidth
Hop 5 10.31.1.2 Label 3
Hop 6 3.3.3.3 Label 3
Hop Information
Hop 0 10.32.1.1
Hop 1 10.32.1.2 Label 32839
Hop 2 4.4.4.4 Label 32839
Hop 3 10.41.1.1
Hop 4 10.41.1.2 Label 3
Hop 5 3.3.3.3 Label 3
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te auto-frr
mpls te cspf
mpls rsvp-te
#
explicit-path master
next hop 10.21.1.1
next hop 10.31.1.1
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 10.1.1.0 0.0.0.255
network 10.21.1.0 0.0.0.255
network 1.1.1.1 0.0.0.0
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.21.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 200
mpls te record-route label
mpls te priority 4 3
mpls te bandwidth ct0 400
mpls te path explicit-path master
mpls te fast-reroute bandwidth
mpls te bypass-attributes bandwidth 200 priority 5 4
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls te auto-frr
mpls te cspf
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 10.31.1.0 0.0.0.255
network 10.32.1.0 0.0.0.255
network 10.21.1.0 0.0.0.255
network 2.2.2.2 0.0.0.0
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.32.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.31.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.21.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 10.1.1.0 0.0.0.255
network 10.31.1.0 0.0.0.255
network 10.41.1.0 0.0.0.255
network 3.3.3.3 0.0.0.0
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.41.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.31.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
return
● LSRD configuration file
#
sysname LSRD
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
#
ospf 1
opaque-capability enable
area 0.0.0.0
mpls-te enable
network 10.32.1.0 0.0.0.255
network 10.41.1.0 0.0.0.255
network 4.4.4.4 0.0.0.0
#
interface GigabitEthernet2/0/0
undo shutdown
mpls
ip address 10.41.1.1 255.255.255.0
mpls te
mpls rsvp-te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.32.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
return
Networking Requirements
Traffic engineering (TE) fast reroute (FRR) provides local link and node protection
for MPLS TE tunnels. If a link or node fails, traffic is rapidly switched to a backup
path, which minimizes traffic loss. TE FRR works in either facility backup or
one-to-one backup mode. TE FRR in one-to-one backup mode is also called MPLS
detour FRR. MPLS detour FRR automatically creates a detour LSP on each eligible
node along the primary CR-LSP to protect downstream links or nodes. This mode is
easy to configure, eliminates manual network planning, and provides flexibility
on a complex network.
Figure 1-25 shows a primary RSVP-TE tunnel along the path LSRA -> LSRC ->
LSRE. To improve tunnel reliability, MPLS detour FRR must be configured.
NOTE
For information about how to configure TE FRR in facility backup mode, see 1.1.3.43.13
Example for Configuring MPLS TE Manual FRR and 1.1.3.43.14 Example for Configuring
MPLS TE Auto FRR.
Configuration Notes
● The facility backup and one-to-one backup modes are mutually exclusive on
the same TE tunnel interface. If both modes are configured, the latest
configured mode overrides the previous one.
● The shared explicit (SE) style must be used for the MPLS detour FRR-enabled
tunnel.
● CSPF must be enabled on each node along both the primary and backup
RSVP-TE tunnels.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an RSVP-TE tunnel.
2. Enable MPLS detour FRR on an RSVP-TE tunnel interface.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces
● IGP protocol (IS-IS), process ID (1), system ID (converted from the loopback
1 address), and IS-IS level (Level-2)
● LSR ID (loopback interface address) of every MPLS node
● Tunnel interface name (Tunnel1), tunnel IP address (loopback interface IP
address), tunnel ID (100), and destination IP address (5.5.5.5)
Procedure
Step 1 Assign an IP address and a mask to each interface.
Assign an IP address to each interface and create a loopback interface on each
node. For configuration details, see Configuration Files in this section.
Step 2 Configure IS-IS to advertise the route to each network segment to which each
interface is connected and to advertise the host route to each loopback address
that is used as an LSR ID.
Configure IS-IS on each node to implement network layer connectivity. For
configuration details, see Configuration Files in this section.
Step 3 Enable MPLS, MPLS TE, MPLS RSVP-TE, and CSPF globally on each node.
# Configure LSRA.
<LSRA> system-view
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] commit
[~LSRA-mpls] quit
Repeat this step for LSRB, LSRC, LSRD, LSRE, and LSRF. For configuration details,
see Configuration Files in this section.
Step 4 Enable IGP TE on each node.
# Configure LSRA.
[~LSRA] isis 1
[~LSRA-isis-1] cost-style wide
[*LSRA-isis-1] traffic-eng level-2
[*LSRA-isis-1] commit
[~LSRA-isis-1] quit
Repeat this step for LSRB, LSRC, LSRD, LSRE, and LSRF. For configuration details,
see Configuration Files in this section.
Step 6 Configure an RSVP-TE tunnel interface on LSRA (ingress).
# Configure LSRA.
[~LSRA] interface tunnel 1
[*LSRA-Tunnel1] ip address unnumbered interface loopback 1
[*LSRA-Tunnel1] tunnel-protocol mpls te
[*LSRA-Tunnel1] mpls te tunnel-id 100
[*LSRA-Tunnel1] destination 5.5.5.5
[*LSRA-Tunnel1] mpls te detour
[*LSRA-Tunnel1] commit
[~LSRA-Tunnel1] quit
Run the display mpls te tunnel path command on LSRA to view the primary CR-
LSP and detour LSP information. The command output shows that a detour LSP
has been established to provide node protection on LSRA, and another detour LSP
has been established to provide link protection on LSRC.
[~LSRA] display mpls te tunnel path
Tunnel Interface Name : Tunnel1
Lsp ID : 1.1.1.1 :100 :25
Hop Information
Hop 0 10.1.1.1 Local-Protection available | node
Hop 1 10.1.1.2 Label 32832
Hop 2 3.3.3.3 Label 32832
Hop 3 10.1.3.1 Local-Protection available
Hop 4 10.1.3.2 Label 3
Hop 5 5.5.5.5 Label 3
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0001.0010.0100.1001.00
traffic-eng level-2
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.1.2.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 5.5.5.5
mpls te record-route label
mpls te detour
mpls te tunnel-id 100
#
return
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.1.3.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
● LSRD configuration file
#
sysname LSRD
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0001.0040.0400.4004.00
traffic-eng level-2
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.6.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.1.4.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.1.7.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
return
● LSRE configuration file
#
sysname LSRE
#
mpls lsr-id 5.5.5.5
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0001.0050.0500.5005.00
traffic-eng level-2
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.3.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.1.5.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
isis enable 1
#
return
#
return
Networking Requirements
Figure 1-26 illustrates an MPLS VPN network. A TE tunnel is established from PE1
to PE2. A hot-standby CR-LSP and a best-effort path are configured. The
networking is as follows:
● The primary CR-LSP is along the path PE1 -> P1 -> PE2.
● The hot-standby CR-LSP is along the path PE1 -> P2 -> PE2.
If the primary CR-LSP fails, traffic switches to the backup CR-LSP. After the primary
CR-LSP recovers, traffic switches back to the primary CR-LSP after a 15-second
delay. If both the primary and backup CR-LSPs fail, traffic switches to the best-
effort path. Explicit paths can be configured for the primary and backup CR-LSPs.
A best-effort path can be generated automatically. In this example, the best-effort
path is PE1 -> P2 -> P1 -> PE2. The calculated best-effort path varies according to
the faulty node.
Configuration Notes
None.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure explicit paths for the primary and hot-standby CR-LSPs.
2. Configure an RSVP-TE tunnel for the primary CR-LSP.
3. Configure hot standby on the tunnel interface, with a 15-second switchback
delay, an explicit path for the hot-standby CR-LSP, and a best-effort path.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces and LSR IDs of all nodes
● Explicit paths for the primary and hot-standby CR-LSPs
● Tunnel interface number, tunnel ID, and switchback delay time (15 seconds)
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address and its mask to every interface and configure a loopback
interface address as an LSR ID on every node. For configuration details, see
Configuration Files in this section.
Set an LSR ID in the system view, and enable MPLS in the system and physical
interface views on every node. For configuration details, see Configuration Files
in this section.
Enable MPLS TE and RSVP-TE in the MPLS and interface views on every node. For
configuration details, see Configuration Files in this section.
Configure IS-IS TE on all nodes and enable CSPF on PE1. For configuration details,
see Configuring an RSVP-TE Tunnel.
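A minimal sketch of this step on PE1, consistent with the configuration files in this section (the IS-IS process ID and level are taken from the other nodes' files and are assumptions for PE1; the network entity and per-interface IS-IS enabling are omitted):
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] traffic-eng level-1-2
[*PE1-isis-1] quit
[*PE1] mpls
[*PE1-mpls] mpls te cspf
[*PE1-mpls] commit
[~PE1-mpls] quit
# Repeat the IS-IS TE part of this step on P1, P2, and PE2.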
Step 6 Configure explicit paths for the primary and hot-standby CR-LSPs.
# Configure an explicit path for the primary CR-LSP on PE1.
<PE1> system-view
[~PE1] explicit-path main
[*PE1-explicit-path-main] next hop 10.4.1.2
[*PE1-explicit-path-main] next hop 10.2.1.2
[*PE1-explicit-path-main] next hop 3.3.3.3
[*PE1-explicit-path-main] quit
# On the tunnel interface, configure hot standby, set the switchback delay to
15 seconds, specify an explicit path for the hot-standby CR-LSP, and configure a
best-effort path.
[*PE1-Tunnel1] mpls te backup hot-standby mode revertive wtr 15
[*PE1-Tunnel1] mpls te path explicit-path backup secondary
[*PE1-Tunnel1] mpls te backup ordinary best-effort
[*PE1-Tunnel1] commit
[~PE1-Tunnel1] quit
# Run the tracert lsp te command. The path for the hot-standby CR-LSP is
reachable.
[~PE1] tracert lsp te tunnel1 hot-standby
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to break.
Reconnect the cable to GE 2/0/0 and wait 15 seconds. Traffic switches back to
the primary CR-LSP.
If the cables connected to GE 2/0/0 on PE1 (or GE 2/0/0 on P1) and to PE2 (or
P2) are removed, the tunnel interface goes Down and then Up. A best-effort path
is established and takes over traffic.
[~PE1] display mpls te tunnel-interface tunnel1
Tunnel Name : Tunnel1
Signalled Tunnel Name: -
Tunnel State Desc : Backup CR-LSP In use and Primary CR-LSP setting Up
Tunnel Attributes :
Active LSP : BestEffort LSP
Traffic Switch :-
Session ID : 502
Ingress LSR ID : 4.4.4.4 Egress LSR ID: 3.3.3.3
Admin State : UP Oper State : UP
Signaling Protocol : RSVP
FTid : 161
Tie-Breaking Policy : None Metric Type : None
Bfd Cap : None
Reopt : Disabled Reopt Freq : -
Inter-area Reopt : Disabled
Auto BW : Disabled Threshold : 0 percent
Current Collected BW: 0 kbps Auto BW Freq : 0
Min BW : 0 kbps Max BW : 0 kbps
Offload : Disabled Offload Freq : 0 sec
Low Value : 0 kbps High Value : 0 kbps
Readjust Value : 0 kbps
Offload Explicit Path Name: -
Tunnel Group :-
Interfaces Protected: -
Excluded IP Address : -
Referred LSP Count : 0
Primary Tunnel :- Pri Tunn Sum : -
Backup Tunnel :-
Group Status :- Oam Status : -
IPTN InLabel :- Tunnel BFD Status : -
BackUp LSP Type : BestEffort BestEffort : Enabled
Secondary HopLimit : 32
BestEffort HopLimit : -
Secondary Explicit Path Name: backup
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
explicit-path backup
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
isis enable 1
ip address 1.1.1.1 255.255.255.255
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
traffic-eng level-1-2
network-entity 10.0000.0000.0002.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.5.1.1 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
traffic-eng level-1-2
network-entity 10.0000.0000.0003.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.5.1.2 255.255.255.252
mpls
mpls te
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
Context
A tunnel protection group consists of static bidirectional co-routed CR-LSPs. If the
working tunnel fails, forward traffic and reverse traffic are both switched to the
protection tunnel, which helps improve network reliability.
On the MPLS network shown in Figure 1-27, a working tunnel is established over
the path LSRA -> LSRB -> LSRC, and a protection tunnel is established over the
path LSRA -> LSRD -> LSRC. To ensure that MPLS TE traffic is not interrupted if a
fault occurs, configure static bidirectional co-routed CR-LSPs for both working and
protection tunnels and combine them into a tunnel protection group.
Figure 1-27 Networking diagram for a tunnel protection group consisting of static
bidirectional co-routed CR-LSPs
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure a routing protocol.
2. Configure basic MPLS functions and enable MPLS TE.
3. Configure the ingress, transit nodes, and egress for each static bidirectional
co-routed CR-LSP.
4. Configure MPLS TE tunnel interfaces for the working and protection tunnels
and bind a specific static bidirectional co-routed CR-LSP to each tunnel
interface.
5. Configure an MPLS TE tunnel protection group.
6. Configure a detection mechanism to monitor the configured tunnel protection
group. MPLS-TP OAM is used in this example.
Data Preparation
To complete the configuration, you need the following data:
● Tunnel interface names, tunnel interface IP addresses, destination addresses,
tunnel IDs, and tunnel signaling protocol (CR-Static) on LSRA and LSRC
● Next-hop address and outgoing label on the ingress
● Inbound interface name, next-hop address, and outgoing label on the transit
node
● Inbound interface name on the egress
Procedure
Step 1 Assign an IP address to each interface and configure a routing protocol.
# Assign an IP address and a mask to each interface and configure static routes so
that all LSRs can interconnect with each other.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls te
[*LSRA-GigabitEthernet1/0/0] quit
[*LSRA] interface gigabitethernet 1/0/1
[*LSRA-GigabitEthernet1/0/1] mpls
[*LSRA-GigabitEthernet1/0/1] mpls te
[*LSRA-GigabitEthernet1/0/1] commit
[~LSRA-GigabitEthernet1/0/1] quit
Repeat this step for LSRB, LSRC, and LSRD. For configuration details, see
Configuration Files in this section.
Step 3 Configure the ingress, transit nodes, and egress for each static bidirectional co-
routed CR-LSP.
# Configure LSRA as the ingress on both the working and protection static
bidirectional co-routed CR-LSPs.
[~LSRA] bidirectional static-cr-lsp ingress Tunnel10
[*LSRA-bi-static-ingress-Tunnel10] forward nexthop 10.21.1.2 out-label 20
[*LSRA-bi-static-ingress-Tunnel10] backward in-label 20
[*LSRA-bi-static-ingress-Tunnel10] quit
[*LSRA] bidirectional static-cr-lsp ingress Tunnel11
[*LSRA-bi-static-ingress-Tunnel11] forward nexthop 10.41.1.2 out-label 21
[*LSRA-bi-static-ingress-Tunnel11] backward in-label 21
[*LSRA-bi-static-ingress-Tunnel11] commit
[~LSRA-bi-static-ingress-Tunnel11] quit
# Configure LSRC as the egress on both the working and protection static
bidirectional co-routed CR-LSPs.
[~LSRC] bidirectional static-cr-lsp egress lsp1
[*LSRC-bi-static-egress-lsp1] forward in-label 40 lsrid 1.1.1.1 tunnel-id 100
[*LSRC-bi-static-egress-lsp1] backward nexthop 10.32.1.1 out-label 16
[*LSRC-bi-static-egress-lsp1] quit
[*LSRC] bidirectional static-cr-lsp egress lsp2
[*LSRC-bi-static-egress-lsp2] forward in-label 41 lsrid 1.1.1.1 tunnel-id 101
[*LSRC-bi-static-egress-lsp2] backward nexthop 10.34.1.1 out-label 17
[*LSRC-bi-static-egress-lsp2] commit
[~LSRC-bi-static-egress-lsp2] quit
Step 4 Configure MPLS TE tunnel interfaces for the working and protection tunnels and
bind a specific static bidirectional co-routed CR-LSP to each tunnel interface.
# On LSRA, configure MPLS TE tunnel interfaces named Tunnel 10 and Tunnel 11.
[~LSRA] interface Tunnel 10
[*LSRA-Tunnel10] ip address unnumbered interface loopback 1
[*LSRA-Tunnel10] tunnel-protocol mpls te
[*LSRA-Tunnel10] destination 3.3.3.3
[*LSRA-Tunnel10] mpls te tunnel-id 100
[*LSRA-Tunnel10] mpls te signal-protocol cr-static
[*LSRA-Tunnel10] mpls te bidirectional
[*LSRA-Tunnel10] quit
[*LSRA] interface Tunnel 11
[*LSRA-Tunnel11] ip address unnumbered interface loopback 1
[*LSRA-Tunnel11] tunnel-protocol mpls te
[*LSRA-Tunnel11] destination 3.3.3.3
[*LSRA-Tunnel11] mpls te tunnel-id 101
[*LSRA-Tunnel11] mpls te signal-protocol cr-static
[*LSRA-Tunnel11] mpls te bidirectional
[*LSRA-Tunnel11] commit
[~LSRA-Tunnel11] quit
# On LSRC, configure MPLS TE tunnel interfaces named Tunnel 20 and Tunnel 21.
[~LSRC] interface Tunnel 20
[*LSRC-Tunnel20] ip address unnumbered interface loopback 1
[*LSRC-Tunnel20] tunnel-protocol mpls te
[*LSRC-Tunnel20] destination 1.1.1.1
[*LSRC-Tunnel20] mpls te tunnel-id 200
[*LSRC-Tunnel20] mpls te signal-protocol cr-static
[*LSRC-Tunnel20] mpls te passive-tunnel
[*LSRC-Tunnel20] mpls te binding bidirectional static-cr-lsp egress lsp1
[*LSRC-Tunnel20] quit
[*LSRC] interface Tunnel 21
[*LSRC-Tunnel21] ip address unnumbered interface loopback 1
[*LSRC-Tunnel21] tunnel-protocol mpls te
[*LSRC-Tunnel21] destination 1.1.1.1
[*LSRC-Tunnel21] mpls te tunnel-id 201
[*LSRC-Tunnel21] mpls te signal-protocol cr-static
[*LSRC-Tunnel21] mpls te passive-tunnel
[*LSRC-Tunnel21] mpls te binding bidirectional static-cr-lsp egress lsp2
[*LSRC-Tunnel21] commit
[~LSRC-Tunnel21] quit
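Roadmap step 5 binds the working tunnel to the protection tunnel on LSRA. The following is a minimal sketch consistent with the mpls te protection tunnel command shown in the LSRA configuration file in this section:
# On LSRA, add Tunnel 11 (tunnel ID 101) as the protection tunnel of the working tunnel, in revertive mode with a WTR time of 0 seconds.
[~LSRA] interface Tunnel 10
[*LSRA-Tunnel10] mpls te protection tunnel 101 mode revertive wtr 0
[*LSRA-Tunnel10] commit
[~LSRA-Tunnel10] quit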
[*LSRA-mpls-tp-meg-abc] commit
[~LSRA-mpls-tp-meg-abc] quit
----------------------------------------------------------------
Verbose information about the No."1" protection-group
----------------------------------------------------------------
Work-tunnel id : 100
Protect-tunnel id : 101
Work-tunnel name : Tunnel10
Protect-tunnel name : Tunnel11
Work-tunnel reverse-lsp :-
Protect-tunnel reverse-lsp :-
Bridge type : 1:1
Switch type : bidirectional
Switch result : work-tunnel
Tunnel using Best-Effort : none
Tunnel using Ordinary : none
Work-tunnel frr in use : none
Work-tunnel defect state : non-defect
Protect-tunnel defect state : non-defect
Work-tunnel forward-lsp defect state : non-defect
Protect-tunnel forward-lsp defect state : non-defect
Work-tunnel reverse-lsp defect state : non-defect
Protect-tunnel reverse-lsp defect state : non-defect
HoldOff config time : 0ms
HoldOff remain time :-
WTR config time : 0s
WTR remain time :-
Mode : revertive
Using same path :-
Local state : no request
Far end request : no request
----End
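In a static bidirectional CR-LSP, the out-label configured at each node must equal the in-label configured at the next node in the same direction, in both the forward and backward directions. The following is a small consistency-check sketch for the working LSP (lsp1) in this example; LSRC's egress-side labels are not shown in this section, so the values marked as assumptions below are inferred from the adjacent hops.

```python
# Per-node label programming for the working LSP (lsp1) in this example.
# Each node maps to (in-label, out-label) for one direction; "None" marks
# the head or tail of that direction. LSRC's values are assumptions
# inferred from LSRB's configuration, since LSRC's full config is elided.
hops = ["LSRA", "LSRB", "LSRC"]
fwd = {
    "LSRA": (None, 20),   # ingress: forward out-label 20
    "LSRB": (20, 40),     # transit: forward in-label 20, out-label 40
    "LSRC": (40, None),   # assumed: egress in-label matches LSRB's out-label
}
bwd = {
    "LSRC": (None, 16),   # assumed: backward out-label matches LSRB's in-label
    "LSRB": (16, 20),     # transit: backward in-label 16, out-label 20
    "LSRA": (20, None),   # ingress: backward in-label 20
}

def continuous(path, labels):
    # The out-label at each node must equal the in-label at the next node.
    return all(labels[a][1] == labels[b][0] for a, b in zip(path, path[1:]))

assert continuous(hops, fwd)
assert continuous(list(reversed(hops)), bwd)
```

A mismatch anywhere in either direction (for example, an out-label of 21 against an in-label of 20 at the downstream node) would leave the bidirectional LSP unable to come up.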
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
#
bidirectional static-cr-lsp ingress Tunnel10
forward nexthop 10.21.1.2 out-label 20
backward in-label 20
#
bidirectional static-cr-lsp ingress Tunnel11
forward nexthop 10.41.1.2 out-label 21
backward in-label 21
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.21.1.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.41.1.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol cr-static
mpls te tunnel-id 100
mpls te bidirectional
mpls te protection tunnel 101 mode revertive wtr 0
#
interface Tunnel11
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol cr-static
mpls te tunnel-id 101
mpls te bidirectional
#
ip route-static 2.2.2.2 255.255.255.255 10.21.1.2
ip route-static 3.3.3.3 255.255.255.255 10.21.1.2
ip route-static 3.3.3.3 255.255.255.255 10.41.1.2
ip route-static 4.4.4.4 255.255.255.255 10.41.1.2
#
mpls-tp meg abc
me te interface Tunnel10 mep-id 1 remote-mep-id 2
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
#
bidirectional static-cr-lsp transit lsp1
forward in-label 20 nexthop 10.32.1.2 out-label 40
backward in-label 16 nexthop 10.21.1.1 out-label 20
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.21.1.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.32.1.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
#
sysname LSRD
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
#
bidirectional static-cr-lsp transit lsp2
forward in-label 21 nexthop 10.34.1.2 out-label 41
backward in-label 17 nexthop 10.41.1.1 out-label 21
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.41.1.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.34.1.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
ip route-static 1.1.1.1 255.255.255.255 10.41.1.1
ip route-static 3.3.3.3 255.255.255.255 10.34.1.2
#
return
Networking Requirements
Isolated primary and hot-standby LSPs are necessary to improve the LSP reliability
on IP radio access networks (IP RANs) that use Multiprotocol Label Switching
(MPLS) Traffic Engineering (TE). The constrained shortest path first (CSPF)
algorithm does not meet this reliability requirement because CSPF may compute
two LSPs that intersect at aggregation nodes. Specifying explicit paths for LSPs
can improve reliability, but this method does not adapt to topology changes. Each
time a node is added to or deleted from the IP RAN, operators must configure
new explicit paths, which is time-consuming and laborious. Isolated LSP
computation is another method to improve reliability. After this function is
configured, the device uses both the disjoint and CSPF algorithms to compute
isolated primary and hot-standby LSPs.
Two isolated LSPs exist on this topology: LSRA -> LSRC -> LSRE -> LSRF and LSRA
-> LSRB -> LSRD -> LSRF. However, if the disjoint algorithm is not enabled, CSPF
computes LSRA -> LSRC -> LSRD -> LSRF as the primary LSP and cannot compute a hot-standby LSP that is isolated from the primary LSP.
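The isolation property described above can be illustrated with a short sketch. This is not the device's actual disjoint algorithm; it is a brute-force enumeration over a hypothetical adjacency modeling this topology, returning the shortest pair of paths that share no transit node — the guarantee the disjoint algorithm provides.

```python
from itertools import product

# Hypothetical adjacency modeling the topology described above: two
# node-disjoint paths exist (LSRA-LSRC-LSRE-LSRF and LSRA-LSRB-LSRD-LSRF),
# while the LSRC-LSRD link lets a plain CSPF computation pick
# LSRA-LSRC-LSRD-LSRF, which leaves no isolated hot-standby path.
LINKS = {("LSRA", "LSRB"), ("LSRA", "LSRC"), ("LSRB", "LSRD"),
         ("LSRC", "LSRD"), ("LSRC", "LSRE"), ("LSRD", "LSRF"),
         ("LSRE", "LSRF")}
ADJ = {}
for a, b in LINKS:
    ADJ.setdefault(a, set()).add(b)
    ADJ.setdefault(b, set()).add(a)

def simple_paths(src, dst, seen=()):
    # Depth-first enumeration of loop-free paths from src to dst.
    if src == dst:
        yield seen + (dst,)
        return
    for nxt in ADJ[src]:
        if nxt not in seen:
            yield from simple_paths(nxt, dst, seen + (src,))

def disjoint_pair(src, dst):
    # Return the shortest pair of paths that share only src and dst.
    paths = sorted(simple_paths(src, dst), key=len)
    for p, q in product(paths, repeat=2):
        if p != q and not (set(p[1:-1]) & set(q[1:-1])):
            return p, q
    return None

primary, standby = disjoint_pair("LSRA", "LSRF")
```

Running the sketch on this graph yields the two isolated LSPs named above, whereas the plain shortest path LSRA -> LSRC -> LSRD -> LSRF is rejected because every alternative path crosses LSRC or LSRD.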
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign addresses to all physical and loopback interfaces listed in Table 1-18.
2. Globally enable OSPF on each device so that OSPF advertises routes to the network segments of each physical and loopback interface. Enable OSPF TE in the area where the devices reside.
3. Set MPLS label switching router (LSR) IDs for all devices and globally enable
MPLS, TE, RSVP-TE, and CSPF.
4. Enable MPLS, TE, and RSVP-TE on the outbound interfaces of all links along
the TE tunnel. Set a TE metric for each link according to Figure 1-28.
5. Create a tunnel interface on LSRA and specify the IP address, tunnel protocol,
destination address, tunnel ID, and signaling protocol RSVP-TE for the tunnel
interface.
6. Enable the CR-LSP hot standby function and the disjoint algorithm on the
tunnel interface.
Data Preparation
To complete the configuration, you need the following data:
● IP address for each interface (see Table 1-18)
● OSPF process ID (1) and area ID (0.0.0.0)
● TE metric for each link (see Figure 1-28)
● Loopback interface address for each MPLS LSR ID
● Tunnel interface number (Tunnel1), tunnel ID (1), loopback interface address
to be borrowed, destination address (6.6.6.6), and signaling protocol (RSVP-
TE)
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address to each interface and create a loopback interface on each
device, according to Table 1-18. For detailed configurations, see Configuration
Files in this section.
Step 2 Enable OSPF on each device.
Enable basic OSPF functions and MPLS TE on each device.
# Configure LSRA.
<LSRA> system-view
[~LSRA] ospf 1
[*LSRA-ospf-1] opaque-capability enable
[*LSRA-ospf-1] area 0.0.0.0
[*LSRA-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*LSRA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*LSRA-ospf-1-area-0.0.0.0] network 10.1.2.0 0.0.0.255
[*LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRA-ospf-1-area-0.0.0.0] commit
[~LSRA-ospf-1-area-0.0.0.0] quit
[~LSRA-ospf-1] quit
Repeat this step for LSRB, LSRC, LSRD, LSRE, and LSRF. For configuration details,
see Configuration Files in this section.
Step 3 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
Enable MPLS, MPLS TE, RSVP-TE, and CSPF on each device. Enable MPLS, TE, and
RSVP-TE on the outbound interface of each link. Set a TE metric for each link.
# Configure LSRA.
Repeat this step for LSRB, LSRC, LSRD, LSRE, and LSRF. For configuration details,
see Configuration Files in this section.
Step 4 Configure an MPLS TE tunnel interface.
# Configure LSRA.
[~LSRA] interface tunnel1
[*LSRA-Tunnel1] ip address unnumbered interface LoopBack1
[*LSRA-Tunnel1] tunnel-protocol mpls te
[*LSRA-Tunnel1] destination 6.6.6.6
[*LSRA-Tunnel1] mpls te tunnel-id 1
[*LSRA-Tunnel1] mpls te signal-protocol rsvp-te
[*LSRA-Tunnel1] commit
# Run the display mpls te tunnel-interface Tunnel1 and display mpls te tunnel
path Tunnel1 commands on LSRA to view information about the primary and
hot-standby LSPs.
[~LSRA] display mpls te tunnel-interface Tunnel1
Tunnel Name : Tunnel1
Signalled Tunnel Name: -
Tunnel State Desc : CR-LSP is Up
Tunnel Attributes :
Active LSP : Primary LSP
Traffic Switch :-
Session ID :1
Ingress LSR ID : 1.1.1.1 Egress LSR ID: 6.6.6.6
Admin State : UP Oper State : UP
Signaling Protocol : RSVP
FTid :1
Tie-Breaking Policy : None Metric Type : None
Bfd Cap : None
Reopt : Disabled Reopt Freq : -
Inter-area Reopt : Disabled
Auto BW : Disabled Threshold : 0 percent
Current Collected BW: 0 kbps Auto BW Freq : 0
Min BW : 0 kbps Max BW : 0 kbps
Offload : Disabled Offload Freq : -
Low Value :- High Value : -
Readjust Value :-
Offload Explicit Path Name:
Tunnel Group :-
Interfaces Protected: -
Excluded IP Address : -
Referred LSP Count : 0
Primary Tunnel :- Pri Tunn Sum : -
Backup Tunnel :-
Group Status : Up Oam Status : -
IPTN InLabel :- Tunnel BFD Status : -
BackUp LSP Type : Hot-Standby BestEffort : Enabled
Secondary HopLimit : -
BestEffort HopLimit : -
Secondary Explicit Path Name: -
Secondary Affinity Prop/Mask: 0x0/0x0
BestEffort Affinity Prop/Mask: 0x0/0x0
IsConfigLspConstraint: -
Hot-Standby Revertive Mode: Revertive
Hot-Standby Overlap-path: Disabled
Hot-Standby Switch State: CLEAR
Bit Error Detection: Disabled
Bit Error Detection Switch Threshold: -
Bit Error Detection Resume Threshold: -
Ip-Prefix Name : -
P2p-Template Name : -
PCE Delegate : No LSP Control Status : Local control
Path Verification : --
Entropy Label : None
Associated Tunnel Group ID: - Associated Tunnel Group Type: -
Auto BW Remain Time : 200 s Reopt Remain Time : 100 s
Metric Inherit IGP : None
Hop 1 10.1.1.2
Hop 2 3.3.3.3
Hop 3 10.1.4.1
Hop 4 10.1.4.2
Hop 5 5.5.5.5
Hop 6 10.1.5.1
Hop 7 10.1.5.2
Hop 8 6.6.6.6
The command outputs show that the computed primary and hot-standby LSPs are
the same as the actual primary and hot-standby LSPs, indicating that the device
has computed two isolated LSPs.
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls te metric 1
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.1 255.255.255.0
mpls
mpls te
mpls te metric 10
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 6.6.6.6
mpls te record-route
mpls te backup hot-standby
mpls te tunnel-id 1
mpls te cspf disjoint
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.2.0 0.0.0.255
mpls-te enable
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.2.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.6.1 255.255.255.0
mpls
mpls te
mpls te metric 10
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.2.0 0.0.0.255
network 10.1.6.0 0.0.0.255
mpls-te enable
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
Networking Requirements
Figure 1-29 illustrates CR-LSP hot standby. A TE tunnel between PE1 and PE2 is
established. The tunnel is enabled with hot standby and configured with the best-
effort path. The following requirements must be met:
● The primary CR-LSP is PE1 → P1 → PE2.
● The backup CR-LSP is PE1 → P2 → PE2.
If the primary CR-LSP fails, traffic switches to the backup CR-LSP. After the primary
CR-LSP recovers, traffic switches back to the primary CR-LSP after a 15-second
delay. If both the primary and backup CR-LSPs fail, traffic switches to the best-
effort path. Explicit paths can be configured for the primary and backup CR-LSPs.
A best-effort path can be generated automatically. In this example, the best-effort
path is PE1 -> P2 -> P1 -> PE2. The calculated best-effort path varies according to
the faulty node.
Two static BFD sessions are established to monitor the primary and backup CR-
LSPs. After the configuration, the following objectives are achieved:
● If the primary CR-LSP fails, traffic is rapidly switched to the backup CR-LSP.
● If the primary CR-LSP recovers and the backup CR-LSP fails during the
switchover time (15s), traffic switches back to the primary CR-LSP.
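The revertive behavior with a 15-second wait-to-restore (WTR) timer can be sketched as a small state machine. This is an illustration of the switching logic described above, not device code; the class and method names are hypothetical.

```python
# Minimal sketch (not device code) of revertive 1:1 switching with a
# wait-to-restore timer: traffic moves to the backup CR-LSP on failure
# and returns to the primary only after it has stayed up for the full
# WTR period, or immediately if the backup fails during that period.
WTR = 15  # seconds, matching "mpls te backup hot-standby wtr 15"

class HotStandby:
    def __init__(self, wtr=WTR):
        self.wtr = wtr
        self.active = "primary"
        self.primary_up = True
        self.backup_up = True
        self.wtr_left = None  # running WTR countdown, if any

    def primary_fault(self):
        self.primary_up = False
        self.wtr_left = None
        self.active = "backup" if self.backup_up else "best-effort"

    def primary_recover(self):
        self.primary_up = True
        self.wtr_left = self.wtr  # start WTR before reverting

    def backup_fault(self):
        self.backup_up = False
        if self.active == "backup":
            # Revert immediately if the primary is already up again.
            self.active = "primary" if self.primary_up else "best-effort"
            self.wtr_left = None

    def tick(self, seconds=1):
        if self.wtr_left is not None:
            self.wtr_left -= seconds
            if self.wtr_left <= 0:
                self.wtr_left = None
                self.active = "primary"

g = HotStandby()
g.primary_fault()      # traffic switches to the backup CR-LSP
g.primary_recover()    # WTR starts; traffic stays on the backup
g.tick(14); g.tick(1)  # after 15 s, traffic reverts to the primary
```

Note that the second bulleted objective maps to `backup_fault()`: if the backup CR-LSP fails while the WTR timer is running, the sketch switches back to the recovered primary without waiting out the remaining time.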
Precautions
None.
Configuration Roadmap
The configuration roadmap is as follows:
A reverse CR-LSP must be established for each of the primary and hot-standby CR-
LSPs.
3. On PE1, establish two BFD sessions and bind one to the primary CR-LSP and
the other to the hot-standby CR-LSP; on PE2, establish two BFD sessions and
bind both sessions to the IP link (PE2 → PE1).
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure CR-LSP hot standby.
NOTE
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[*PE2] bfd
[*PE2-bfd] quit
[*PE2] bfd mainlsptope2 bind mpls-te interface Tunnel2 te-lsp
[*PE2-bfd-lsp-session-mainlsptope2] discriminator local 314
[*PE2-bfd-lsp-session-mainlsptope2] discriminator remote 413
[*PE2-bfd-lsp-session-mainlsptope2] min-tx-interval 100
[*PE2-bfd-lsp-session-mainlsptope2] min-rx-interval 100
[*PE2-bfd-lsp-session-mainlsptope2] quit
[*PE2] bfd backuplsptope2 bind mpls-te interface Tunnel2 te-lsp backup
[*PE2-bfd-lsp-session-backuplsptope2] discriminator local 324
[*PE2-bfd-lsp-session-backuplsptope2] discriminator remote 423
[*PE2-bfd-lsp-session-backuplsptope2] min-tx-interval 100
[*PE2-bfd-lsp-session-backuplsptope2] min-rx-interval 100
[*PE2-bfd-lsp-session-backuplsptope2] commit
[*PE2-bfd-lsp-session-backuplsptope2] quit
# After completing the configuration, run the display bfd session discriminator
local-discriminator-value command on PE1 and PE2. The status of BFD sessions is
Up.
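For a static BFD session to come up, the discriminators configured on the two endpoints must mirror each other: one end's local discriminator is the other end's remote discriminator. A small sketch of that consistency check, using the session names and values from this example:

```python
# Static BFD discriminators from this example, as (local, remote) pairs.
# PE1's local value must equal PE2's remote value for the same session,
# and vice versa, or the session never reaches the Up state.
pe1 = {"mainlsptope2": (413, 314), "backuplsptope2": (423, 324)}
pe2 = {"mainlsptope2": (314, 413), "backuplsptope2": (324, 423)}

def mirrored(a, b):
    # Sessions match when (local, remote) on one end is (remote, local)
    # on the other end, for every session name.
    return all(b[name] == (rem, loc) for name, (loc, rem) in a.items())

assert mirrored(pe1, pe2)
```

This mirroring is why the display output below shows local 413 paired with remote 314 on PE1, while PE2's configuration carries the reversed pair.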
The following example uses the command output on PE1.
[~PE1] display bfd session discriminator 413
(w): State in WTR
(*): State is invalid
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
413 314 3.3.3.3 Up S_TE_LSP Tunnel1
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
[~PE1] display bfd session discriminator 423
(w): State in WTR
(*): State is invalid
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
423 324 3.3.3.3 Up S_TE_LSP Tunnel1
--------------------------------------------------------------------------------
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
bfd
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path backup
next hop 10.3.1.2
next hop 10.5.1.2
next hop 3.3.3.3
#
explicit-path main
next hop 10.4.1.2
next hop 10.2.1.2
next hop 3.3.3.3
#
isis 1
cost-style wide
network-entity 10.0000.0000.0004.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
ip address 10.3.1.1 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
ip address 10.4.1.1 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
interface Tunnel 1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 100
mpls te record-route
mpls te path explicit-path main
mpls te path explicit-path backup secondary
mpls te backup hot-standby wtr 15
mpls te backup ordinary best-effort
#
bfd mainlsptope2 bind mpls-te interface Tunnel1 te-lsp
discriminator local 413
discriminator remote 314
min-tx-interval 100
min-rx-interval 100
process-pst
#
bfd backuplsptope2 bind mpls-te interface Tunnel1 te-lsp backup
discriminator local 423
discriminator remote 324
min-tx-interval 100
min-rx-interval 100
process-pst
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
network-entity 10.0000.0000.0001.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
ip address 10.1.1.1 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
ip address 10.4.1.2 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet3/0/0
ip address 10.2.1.1 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
network-entity 10.0000.0000.0002.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
ip address 10.1.1.2 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
ip address 10.5.1.1 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet3/0/0
ip address 10.3.1.2 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
bfd
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
isis 1
cost-style wide
network-entity 10.0000.0000.0003.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
ip address 10.2.1.2 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
ip address 10.5.1.2 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te record-route
mpls te backup ordinary best-effort
mpls te backup hot-standby
mpls te tunnel-id 502
#
bfd mainlsptope2 bind mpls-te interface Tunnel2 te-lsp
discriminator local 314
discriminator remote 413
min-tx-interval 100
min-rx-interval 100
process-pst
#
bfd backuplsptope2 bind mpls-te interface Tunnel2 te-lsp backup
discriminator local 324
discriminator remote 423
min-tx-interval 100
min-rx-interval 100
process-pst
#
return
Networking Requirements
Figure 1-30 illustrates CR-LSP hot standby. A TE tunnel between PE1 and PE2 is
established. Hot standby and a best-effort LSP are configured for the TE tunnel. If
the primary CR-LSP fails, traffic switches to the backup CR-LSP. After the primary
CR-LSP recovers, traffic switches back to the primary CR-LSP after a 15-second
delay. If both the primary and backup CR-LSPs fail, traffic switches to the best-
effort path.
Dynamic BFD for TE CR-LSP is required to detect the primary and backup CR-LSPs.
After the configuration, the following objectives should be achieved:
● If the primary CR-LSP fails, traffic rapidly switches to the backup CR-LSP.
● If the primary CR-LSP recovers and the backup CR-LSP fails during the
switchover time (15s), traffic switches back to the primary CR-LSP.
NOTE
Dynamic BFD configuration is simpler than static BFD configuration. In addition, dynamic
BFD reduces the number of BFD sessions and uses fewer network resources because only a single BFD session needs to be created on a tunnel interface.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure CR-LSP hot standby according to Example for Configuring a Hot-standby CR-LSP.
2. Enable BFD on the ingress of the tunnel. Configure MPLS TE BFD. Set the
minimum intervals at which BFD packets are sent and received, and the local
BFD detection multiplier.
Data Preparation
To complete the configuration, you need the following data:
● Minimum intervals at which BFD packets are sent and received on the ingress
● Local BFD detection multiplier on the ingress
● For other data, see Example for Configuring a Hot-standby CR-LSP.
Procedure
Step 1 Configure CR-LSP hot standby.
Configure the primary CR-LSP, hot-standby CR-LSP, and best-effort LSP based on Example for Configuring a Hot-standby CR-LSP.
Step 2 Enable BFD on the ingress of the tunnel and configure MPLS TE BFD.
# Enable MPLS TE BFD on the tunnel interface of PE1. Set the minimum intervals
at which BFD packets are sent and received to 100 milliseconds and the local BFD
detection multiplier to 3.
<PE1> system-view
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] interface Tunnel 10
[*PE1-Tunnel10] mpls te bfd enable
[*PE1-Tunnel10] mpls te bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
[*PE1-Tunnel10] commit
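With these parameters, BFD declares the LSP down after no packet arrives for the detection time, which under standard BFD negotiation (RFC 5880) is the remote end's detect multiplier times the larger of the local minimum receive interval and the remote minimum transmit interval. A quick calculation with the values configured above:

```python
# BFD detection time at the local end, per RFC 5880 timer negotiation:
# remote detect multiplier x max(local min-rx-interval,
# remote min-tx-interval). The values below match this example
# (100 ms intervals, detect multiplier 3 on both ends).
def detection_time_ms(local_min_rx, remote_min_tx, remote_multiplier):
    return remote_multiplier * max(local_min_rx, remote_min_tx)

assert detection_time_ms(100, 100, 3) == 300  # fault detected within ~300 ms
```

If one end is configured with slower timers, the negotiated detection time grows accordingly; for example, a remote minimum transmit interval of 200 ms yields a 600 ms detection time with the same multiplier.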
Step 3 Enable the capability of passively creating BFD sessions on the egress of the
tunnel.
<PE2> system-view
[~PE2] bfd
[*PE2-bfd] mpls-passive
[*PE2-bfd] commit
[~PE2-bfd] quit
# Run the display bfd session mpls-te interface Tunnel 10 te-lsp command on PE1 and
PE2. The status of the BFD session is Up.
[~PE1] display bfd session mpls-te interface Tunnel 10 te-lsp
(w): State in WTR
(*): State is invalid
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
16385 16385 3.3.3.3 Up D_TE_LSP Tunnel10
--------------------------------------------------------------------------------
Total UP/DOWN Session Number : 1/0
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
bfd
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path backup
next hop 10.3.1.2
next hop 10.5.1.2
next hop 3.3.3.3
#
explicit-path main
next hop 10.4.1.2
next hop 10.2.1.2
next hop 3.3.3.3
#
isis 1
cost-style wide
network-entity 10.0000.0000.0004.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te record-route
mpls te backup ordinary best-effort
mpls te backup hot-standby
mpls te tunnel-id 502
mpls te path explicit-path main
mpls te path explicit-path backup secondary
mpls te bfd enable
mpls te bfd min-tx-interval 100 min-rx-interval 100
#
return
● P1 configuration file
#
sysname P1
#
mpls rsvp-te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
bfd
mpls-passive
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
isis 1
cost-style wide
network-entity 10.0000.0000.0003.00
traffic-eng level-1-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.5.1.2 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
Networking Requirements
Figure 1-31 illustrates an MPLS network. Layer 2 devices (switches) are deployed
between PE1 and PE2. PE1 is configured with VPN FRR and the MPLS TE tunnel.
The primary path of VPN FRR is PE1 → Switch → PE2; the backup path of VPN
FRR is PE1 → PE3. In a normal situation, VPN traffic is transmitted over the
primary path. If the primary path fails, VPN traffic is switched to the backup path.
BFD for TE tunnel is required to monitor the TE tunnel over the primary path so that the VPN can rapidly detect tunnel faults. Traffic then switches quickly between the primary and backup paths, accelerating fault recovery.
Figure 1-31 Static BFD for TE tunnel with automatically negotiated discriminators
NOTE
For simplicity, the IP addresses of the interfaces connecting the PEs and the CEs are not
shown in the diagram.
Precautions
None
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● An IGP and its parameters
● BGP AS number and interface names used by BGP sessions
● MPLS LSR ID
● Tunnel interface number and explicit paths
● VPN instance name, RD, and route target (RT)
● Name of the tunnel policy
● Name of a BFD session
● Local and remote discriminators of BFD sessions
Procedure
Step 1 Assign an IP address and a mask to each interface.
Assign an IP address to each interface according to Figure 1-31, create loopback
interfaces on routers, and configure the IP addresses of the loopback interfaces as
MPLS LSR IDs. For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP.
Configure OSPF or IS-IS on each router to ensure interworking between PE1 and
PE2, and between PE1 and PE3. OSPF is used in the example. For configuration
details, see Configuration Files in this section.
Step 3 Configure basic MPLS functions.
On each router, configure an LSR ID and enable MPLS in the system and interface
views. For configuration details, see Configuration Files in this section.
Step 4 Configure basic MPLS TE functions.
Enable MPLS TE and MPLS RSVP-TE in the MPLS and interface views on each LSR.
For configuration details, see Configuration Files in this section.
Step 5 Enable OSPF TE and configure the CSPF.
Enable OSPF TE on each router and configure CSPF on PE1. For configuration
details, see Configuration Files in this section.
Step 6 Configure tunnel interfaces.
Specify explicit paths between PE1 and PE2 and between PE1 and PE3. For PE1,
two explicit paths must be specified.
# Configure the explicit paths between PE1 and PE2 and between PE1 and PE3.
[~PE1] explicit-path tope2
[*PE1-explicit-path-tope2] next hop 10.2.1.2
[*PE1-explicit-path-tope2] next hop 3.3.3.3
[*PE1-explicit-path-tope2] quit
[*PE1] explicit-path tope3
[*PE1-explicit-path-tope3] next hop 10.1.1.2
[*PE1-explicit-path-tope3] next hop 2.2.2.2
[*PE1-explicit-path-tope3] commit
[*PE1-explicit-path-tope3] quit
Create tunnel interfaces and specify explicit paths on PE1, PE2, and PE3. Bind the
tunnel to the specified VPN. For PE1, two tunnel interfaces must be created.
# Configure PE1.
[~PE1] interface tunnel 2
[*PE1-Tunnel2] ip address unnumbered interface loopback 1
[*PE1-Tunnel2] tunnel-protocol mpls te
[*PE1-Tunnel2] destination 3.3.3.3
[*PE1-Tunnel2] mpls te tunnel-id 2
[*PE1-Tunnel2] mpls te path explicit-path tope2
[*PE1-Tunnel2] mpls te reserved-for-binding
[*PE1-Tunnel2] quit
[*PE1] interface tunnel 1
[*PE1-Tunnel1] ip address unnumbered interface loopback 1
[*PE1-Tunnel1] tunnel-protocol mpls te
[*PE1-Tunnel1] destination 2.2.2.2
[*PE1-Tunnel1] mpls te tunnel-id 1
[*PE1-Tunnel1] mpls te path explicit-path tope3
[*PE1-Tunnel1] mpls te reserved-for-binding
[*PE1-Tunnel1] commit
[~PE1-Tunnel1] quit
# Configure PE2.
[~PE2] interface tunnel 2
[*PE2-Tunnel2] ip address unnumbered interface loopback 1
[*PE2-Tunnel2] tunnel-protocol mpls te
[*PE2-Tunnel2] destination 1.1.1.1
[*PE2-Tunnel2] mpls te tunnel-id 3
[*PE2-Tunnel2] mpls te path explicit-path tope1
[*PE2-Tunnel2] mpls te reserved-for-binding
[*PE2-Tunnel2] commit
[~PE2-Tunnel2] quit
# Configure PE3.
[~PE3] interface tunnel 1
[*PE3-Tunnel1] ip address unnumbered interface loopback 1
[*PE3-Tunnel1] tunnel-protocol mpls te
[*PE3-Tunnel1] destination 1.1.1.1
[*PE3-Tunnel1] mpls te tunnel-id 4
[*PE3-Tunnel1] mpls te path explicit-path tope1
[*PE3-Tunnel1] mpls te reserved-for-binding
[*PE3-Tunnel1] commit
[~PE3-Tunnel1] quit
After completing the preceding configuration, run the display mpls te tunnel-
interface tunnel interface-number command on the PEs. The command output
shows that the status of tunnel 1 and tunnel 2 on PE1, tunnel 2 on PE2, and
tunnel 1 on PE3 is Up.
# Configure PE2.
[~PE2] tunnel-policy policy1
[*PE2-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te tunnel 2
[*PE2-tunnel-policy-policy1] quit
[*PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] tnl-policy policy1
[*PE2-vpn-instance-vpn1] commit
[~PE2-vpn-instance-vpn1] quit
# Configure PE3.
[~PE3] tunnel-policy policy1
[*PE3-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te tunnel 1
[*PE3-tunnel-policy-policy1] quit
[*PE3] ip vpn-instance vpn1
[*PE3-vpn-instance-vpn1] tnl-policy policy1
[*PE3-vpn-instance-vpn1] commit
[~PE3-vpn-instance-vpn1] quit
After the configuration is complete, CEs can communicate, and traffic flows
through PE1, the switch, and PE2. If the cable to any interface connecting PE1 to
PE2 is removed, or the switch fails, or PE2 fails, VPN traffic is switched to the
backup path between PE1 and PE3. The fault recovery time is close to the IGP convergence time.
Step 8 Configure BFD for TE tunnel.
# Configure a BFD session on PE1 to monitor the TE tunnel of the primary path.
Set the minimum intervals at which BFD packets are sent and received.
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] bfd pe1tope2 bind mpls-te interface tunnel2
[*PE1-bfd-lsp-session-pe1tope2] discriminator local 12
[*PE1-bfd-lsp-session-pe1tope2] discriminator remote 21
[*PE1-bfd-lsp-session-pe1tope2] min-tx-interval 100
[*PE1-bfd-lsp-session-pe1tope2] min-rx-interval 100
[*PE1-bfd-lsp-session-pe1tope2] process-pst
[*PE1-bfd-lsp-session-pe1tope2] commit
# Establish a BFD session on PE2 and specify the TE tunnel as the reverse tunnel.
Set the minimum intervals at which BFD packets are sent and received.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] bfd pe2tope1 bind mpls-te interface tunnel2
[*PE2-bfd-lsp-session-pe2tope1] discriminator local 21
[*PE2-bfd-lsp-session-pe2tope1] discriminator remote 12
[*PE2-bfd-lsp-session-pe2tope1] min-tx-interval 100
[*PE2-bfd-lsp-session-pe2tope1] min-rx-interval 100
[*PE2-bfd-lsp-session-pe2tope1] commit
# After completing the configuration, run the display bfd session { all |
discriminator discr-value | mpls-te interface interface-type interface-number }
[ verbose ] command on PE1 and PE2. The command output shows that the BFD
session is Up.
Step 9 Verify the configuration.
Connect the tester's Port 1 and Port 2 to CE1 and CE2, respectively. Inject traffic destined for Port 2 into Port 1. The test shows that a fault can be rectified within milliseconds.
----End
Configuration Files
NOTE
Configuration files of CE1, CE2, and the switch and the configuration of PE accessing CE are
not provided.
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
route-distinguisher 100:1
tnl-policy policy1
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
#
bfd
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path tope2
next hop 10.2.1.2
next hop 3.3.3.3
#
explicit-path tope3
next hop 10.1.1.2
min-tx-interval 100
min-rx-interval 100
process-pst
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpn1
route-distinguisher 100:2
tnl-policy policy1
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
#
bfd
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path tope1
next hop 10.2.1.1
next hop 1.1.1.1
#
interface gigabitethernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 3
mpls te path explicit-path tope1
mpls te reserved-for-binding
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
#
ipv4-family unicast
peer 1.1.1.1 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.1 enable
#
ipv4-family vpn-instance vpn1
import-route direct
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 10.2.1.0 0.0.0.255
network 3.3.3.3 0.0.0.0
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel2
#
bfd pe2tope1 bind mpls-te interface Tunnel2
discriminator local 21
discriminator remote 12
min-tx-interval 100
min-rx-interval 100
#
return
● PE3 configuration file
#
sysname PE3
#
ip vpn-instance vpn1
route-distinguisher 100:3
tnl-policy policy1
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path tope1
next hop 10.1.1.1
next hop 1.1.1.1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.252
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 4
mpls te path explicit-path tope1
mpls te reserved-for-binding
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
#
ipv4-family unicast
peer 1.1.1.1 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.1 enable
#
ipv4-family vpn-instance vpn1
import-route direct
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 10.1.1.0 0.0.0.3
network 2.2.2.2 0.0.0.0
mpls-te enable
#
tunnel-policy policy1
Networking Requirements
On the MPLS network shown in Figure 1-32, a Layer 2 device (switch) is deployed
on a link between P1 and P2. A primary MPLS TE tunnel between PE1 and PE2 is
established over a path PE1 -> P1 -> switch -> P2 -> PE2. A TE FRR bypass tunnel
between P1 and PE2 is established over the path P1 -> P3 -> PE2. P1 functions as
the point of local repair (PLR), and PE2 functions as the merge point (MP).
If the link between the switch and P2 fails, P1 continues to send the switch RSVP
messages (including Hello messages) destined for P2 and detects the fault only
after it fails to receive replies to the RSVP Hello messages it sends to P2.
The timeout period of an RSVP neighbor relationship is three times the interval
at which Hello messages are sent. Only after this timeout period elapses does P1
declare its neighbor Down, which is several seconds slower than fault detection
on a direct link without a Layer 2 device. This detection latency causes a large
number of packets to be dropped. To minimize traffic loss, BFD can be configured
to rapidly detect faults in the link between P2 and the switch. After a BFD
session detects a fault, it advertises the fault to trigger TE FRR switching.
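The speed difference can be quantified with simple timer arithmetic. The sketch below compares the two detection mechanisms; the 3-second RSVP Hello interval is a hypothetical value for illustration (the "three missed intervals" rule comes from the text above), and the BFD timers are the ones configured in this example (min-tx 100 ms, min-rx 100 ms, detect-multiplier 3):

```python
def rsvp_hello_timeout_ms(hello_interval_ms):
    # The RSVP neighbor relationship times out after three missed Hello intervals.
    return 3 * hello_interval_ms

def bfd_detection_time_ms(remote_min_tx_ms, local_min_rx_ms, detect_multiplier):
    # BFD declares the session Down after detect_multiplier consecutive packets
    # are missed; the effective receive interval is the larger of the peer's
    # min-tx and the local min-rx (standard BFD timer negotiation).
    return detect_multiplier * max(remote_min_tx_ms, local_min_rx_ms)

# Hypothetical 3-second RSVP Hello interval versus the BFD timers configured
# in this example (min-tx 100 ms, min-rx 100 ms, detect-multiplier 3).
print(rsvp_hello_timeout_ms(3000))         # 9000 (ms)
print(bfd_detection_time_ms(100, 100, 3))  # 300 (ms)
```

With these values, BFD detects the fault roughly 30 times faster than the RSVP Hello timeout, which is why BFD is used here to trigger TE FRR switching.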
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address for each interface and enable IGP on each LSR so that
LSRs can communicate. Enable IGP GR to support RSVP GR.
2. Configure the MPLS network and basic MPLS TE functions.
3. Configure explicit paths for the primary and bypass tunnels.
4. Create the primary tunnel interface and enable TE FRR on PE1. Configure the
bypass tunnel on P1.
5. Configure BFD for RSVP on P1 and P2.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address to each interface according to Figure 1-32, create loopback
interfaces on LSRs, and configure the loopback interface addresses as MPLS LSR
IDs. For configuration details, see Configuration Files in this section.
Step 2 Configure the switch.
Configure the switch so that P1 and P2 can communicate. For configuration
details, see Configuration Files in this section.
Step 3 Configure an IGP.
Configure OSPF or IS-IS on each LSR so that LSRs can communicate. In this
example, IS-IS is used. For configuration details, see Configuration Files in this
section.
Step 4 Configure basic MPLS functions.
Configure the LSR ID and enable MPLS in the system and interface views on each
LSR. For configuration details, see Configuration Files in this section.
Step 5 Configure basic MPLS TE functions.
Enable MPLS TE and MPLS RSVP-TE in the MPLS and interface views on each LSR.
For configuration details, see Configuration Files in this section.
Step 6 Configure IS-IS TE and CSPF.
Enable IS-IS TE on each node and configure CSPF on PE1 and P1. For
configuration details, see Configuration Files in this section.
Step 7 Configure the primary tunnel.
# Specify an explicit path for the primary tunnel on PE1.
<PE1> system-view
[~PE1] explicit-path tope2
[*PE1-explicit-path-tope2] next hop 10.1.1.2
[*PE1-explicit-path-tope2] next hop 10.2.1.2
[*PE1-explicit-path-tope2] next hop 10.4.1.2
[*PE1-explicit-path-tope2] next hop 5.5.5.5
[*PE1-explicit-path-tope2] commit
[~PE1-explicit-path-tope2] quit
# Create a tunnel interface on PE1, specify an explicit path, and enable TE FRR.
[~PE1] interface Tunnel 10
[*PE1-Tunnel10] ip address unnumbered interface loopback 1
[*PE1-Tunnel10] tunnel-protocol mpls te
[*PE1-Tunnel10] destination 5.5.5.5
[*PE1-Tunnel10] mpls te tunnel-id 100
[*PE1-Tunnel10] mpls te path explicit-path tope2
[*PE1-Tunnel10] mpls te fast-reroute
[*PE1-Tunnel10] commit
[~PE1-Tunnel10] quit
# Run the display mpls te tunnel-interface command on PE1. The command output
shows that the status of Tunnel 10 on PE1 is Up.
Step 8 Configure the bypass tunnel.
# Specify the explicit path for the bypass tunnel on P1.
<P1> system-view
[~P1] explicit-path tope2
[*P1-explicit-path-tope2] next hop 10.3.1.2
[*P1-explicit-path-tope2] next hop 10.5.1.2
[*P1-explicit-path-tope2] next hop 5.5.5.5
[*P1-explicit-path-tope2] commit
[~P1-explicit-path-tope2] quit
# Configure a bypass tunnel interface and specify an explicit path for the bypass
tunnel on P1. Specify the physical interface to be protected by the bypass tunnel.
[~P1] interface Tunnel 30
[*P1-Tunnel30] ip address unnumbered interface loopback 1
[*P1-Tunnel30] tunnel-protocol mpls te
[*P1-Tunnel30] destination 5.5.5.5
[*P1-Tunnel30] mpls te tunnel-id 300
[*P1-Tunnel30] mpls te path explicit-path tope2
[*P1-Tunnel30] mpls te bypass-tunnel
[*P1-Tunnel30] mpls te protected-interface gigabitethernet 2/0/0
[*P1-Tunnel30] commit
[~P1-Tunnel30] quit
# Configure P2.
[~P2] bfd
[*P2-bfd] quit
[*P2] interface gigabitethernet 2/0/0
[*P2-GigabitEthernet2/0/0] mpls rsvp-te bfd enable
[*P2-GigabitEthernet2/0/0] mpls rsvp-te bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
[*P2-GigabitEthernet2/0/0] commit
[~P2-GigabitEthernet2/0/0] quit
----End
Configuration Files
NOTE
mpls te cspf
#
isis 1
is-level level-2
cost-style wide
network-entity 86.4501.0030.0300.3003.00
traffic-eng level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
mpls rsvp-te hello
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
mpls rsvp-te bfd enable
mpls rsvp-te bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
mpls rsvp-te hello
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
● P3 configuration file
#
sysname P3
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
mpls rsvp-te hello
mpls te cspf
#
isis 1
is-level level-2
cost-style wide
network-entity 86.4501.0040.0400.4004.00
traffic-eng level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
mpls rsvp-te hello
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.5.1.1 255.255.255.252
isis enable 1
mpls
mpls te
mpls rsvp-te
mpls rsvp-te hello
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
return
Networking Requirements
On the network shown in Figure 1-33, an RSVP distribution instance is configured
on LSRB. Traffic on GE 1/0/0 and GE 1/0/1 is allocated to RSVP0 and the new
RSVP instance, respectively. A TE tunnel is established along the path LSRA ->
LSRB -> LSRC. Traffic on LSRB is transmitted through the inbound and outbound
interfaces based on the RSVP instances to which these interfaces belong.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and a loopback interface address to
each LSR.
2. Configure Open Shortest Path First (OSPF) to advertise the route to each
network segment to which each interface is connected and to advertise the
host route to each loopback interface address that is a label switching router
(LSR) ID.
3. Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF on
each LSR.
4. Enable the OSPF-TE capability to ensure that MPLS TE can advertise
information about link status.
5. Create an RSVP instance named RSVP_A on LSRB.
6. Allocate GE 1/0/0 to RSVP0 and allocate GE 1/0/1 to RSVP_A.
7. Configure an MPLS TE tunnel.
NOTE
Data Preparation
To complete the configuration, you need the following data:
● IP address of every interface on every LSR shown in Figure 1-33, OSPF
process ID (1), and OSPF area ID (0.0.0.0)
● MPLS LSR ID of each node using the corresponding loopback interface
address, as shown in Figure 1-33
● RSVP instance created on LSRB named RSVP_A
● Tunnel interface number (Tunnel 3), tunnel ID (1), and loopback interface
address used as the IP address of the tunnel interface
Procedure
Step 1 Assign an IP address and its mask to every interface.
Assign an IP address and its mask to every interface as shown in Figure 1-33. For
configuration details, see Configuration Files in this section.
Step 2 Configure OSPF.
Configure OSPF to advertise every network segment route and host route. For
configuration details, see Configuration Files in this section.
NOTE
Repeat this step for LSRB and LSRC. For configuration details, see Configuration Files in
this section.
# Configure LSRB.
[~LSRB] ospf 1
[~LSRB-ospf-1] opaque-capability enable
[*LSRB-ospf-1] area 0
[*LSRB-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRB-ospf-1-area-0.0.0.0] commit
[~LSRB-ospf-1-area-0.0.0.0] quit
# Configure LSRC.
[~LSRC] ospf 1
[~LSRC-ospf-1] opaque-capability enable
[*LSRC-ospf-1] area 0
[*LSRC-ospf-1-area-0.0.0.0] mpls-te enable
[*LSRC-ospf-1-area-0.0.0.0] commit
[~LSRC-ospf-1-area-0.0.0.0] quit
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface GigabitEthernet 1/0/1
[*LSRB-GigabitEthernet1/0/1] mpls rsvp-te distributed-instance RSVP_A
[*LSRB-GigabitEthernet1/0/1] commit
[~LSRB-GigabitEthernet1/0/1] quit
After completing the preceding configuration, run the display interface tunnel
command. The tunnel interface is UP.
[~LSRA] display interface Tunnel3
Tunnel3 current state : UP (ifindex: 33)
Line protocol current state : UP
Last line protocol up time : 2012-11-30 06:29:27
Description:
Route Port,The Maximum Transmit Unit is 1500, Current BW: 0Mbps
Internet Address is unnumbered, using address of LoopBack1(1.1.1.1/32)
Encapsulation is TUNNEL, loopback not set
Tunnel destination 3.3.3.3
Tunnel up/down statistics 1
Tunnel protocol/transport MPLS/MPLS, ILM is available,
primary tunnel id is 0xA1, secondary tunnel id is 0x0
Current system time: 2012-11-30 06:29:35
300 seconds output rate 0 bits/sec, 0 packets/sec
0 seconds output rate 0 bits/sec, 0 packets/sec
126 packets output, 34204 bytes
0 output error
18 output drop
Last 300 seconds input utility rate: 0.00%
Last 300 seconds output utility rate: 0.00%
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
interface GigabitEthernet 1/0/0
undo shutdown
ip address 172.16.12.1 255.255.255.0
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 1
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 172.16.12.0 0.0.0.255
mpls-te enable
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
mpls rsvp-te distributed-instance RSVP_A os-group OSG-MMB-BG1-1
#
interface GigabitEthernet 1/0/0
undo shutdown
ip address 172.16.12.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet 1/0/1
undo shutdown
ip address 172.16.23.2 255.255.255.0
mpls
mpls te
mpls rsvp-te distributed-instance RSVP_A
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 172.16.12.0 0.0.0.255
network 172.16.23.0 0.0.0.255
mpls-te enable
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
interface GigabitEthernet 1/0/0
undo shutdown
ip address 172.16.23.3 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 172.16.23.0 0.0.0.255
mpls-te enable
return
Networking Requirements
The IP multicast service bearer technology used on the current IP/MPLS backbone
network relies on IP unicast technology. Like IP unicast, IP multicast therefore
fails to provide sufficient bandwidth, QoS capabilities, reliability, and
real-time performance for multicast services such as IPTV and massively
multiplayer online role-playing games (MMORPGs). A P2MP TE tunnel solves this
problem: it can be configured on a live IP/MPLS backbone network and supports
the P2MP TE FRR function, meeting multicast service requirements.
A P2MP TE tunnel is established on the network shown in Figure 1-34. LSRA is the
tunnel ingress. LSRC, LSRE, and LSRF are leaf nodes, and the tunnel bandwidth is
1000 kbit/s.
Precautions
None.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● IS-IS (used as the IGP), IS-IS process ID (1), IS-IS system ID of each
node (obtained by translating the IP address of loopback1 of each node), and
IS-IS level (Level-2)
● MPLS LSR ID of each node using the corresponding loopback interface
address
● Maximum reservable bandwidth (10000 kbit/s) of the outbound interface
along the path and BC0 bandwidth (10000 kbit/s)
● Name of an explicit path used by each leaf node (toLSRB, toLSRE, and
toLSRF), name of the leaf list (iptv1), and addresses of each leaf node (MPLS
LSR ID of each leaf node)
● Tunnel interface number (Tunnel 10), tunnel ID (100), loopback interface
address used as the IP address of the tunnel interface, and tunnel bandwidth
(1000 kbit/s)
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address to each interface according to Figure 1-34 and create a
loopback interface on each node. For configuration details, see Configuration
Files in this section.
Step 2 Configure IS-IS to advertise the route to each network segment to which each
interface is connected and to advertise the host route to each LSR ID.
Configure IS-IS on each node to implement network layer connectivity. For
configuration details, see Configuration Files in this section.
Step 3 Enable MPLS, MPLS TE, P2MP TE, and MPLS RSVP-TE globally on each node and
CSPF on the ingress.
# Configure LSRA.
<LSRA> system-view
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls te
[*LSRA-mpls] mpls te p2mp-te
[*LSRA-mpls] mpls rsvp-te
[*LSRA-mpls] mpls te cspf
[*LSRA-mpls] commit
[~LSRA-mpls] quit
# Configure LSRB.
<LSRB> system-view
[~LSRB] mpls lsr-id 2.2.2.2
[*LSRB] mpls
[*LSRB-mpls] mpls te
[*LSRB-mpls] mpls te p2mp-te
[*LSRB-mpls] mpls rsvp-te
[*LSRB-mpls] commit
[~LSRB-mpls] quit
# Configure LSRC.
<LSRC> system-view
[~LSRC] mpls lsr-id 3.3.3.3
[*LSRC] mpls
[*LSRC-mpls] mpls te
[*LSRC-mpls] mpls te p2mp-te
# Configure LSRD.
<LSRD> system-view
[~LSRD] mpls lsr-id 4.4.4.4
[*LSRD] mpls
[*LSRD-mpls] mpls te
[*LSRD-mpls] mpls te p2mp-te
[*LSRD-mpls] mpls rsvp-te
[*LSRD-mpls] commit
[~LSRD-mpls] quit
# Configure LSRE.
<LSRE> system-view
[~LSRE] mpls lsr-id 5.5.5.5
[*LSRE] mpls
[*LSRE-mpls] mpls te
[*LSRE-mpls] mpls te p2mp-te
[*LSRE-mpls] mpls rsvp-te
[*LSRE-mpls] commit
[~LSRE-mpls] quit
# Configure LSRF.
<LSRF> system-view
[~LSRF] mpls lsr-id 6.6.6.6
[*LSRF] mpls
[*LSRF-mpls] mpls te
[*LSRF-mpls] mpls te p2mp-te
[*LSRF-mpls] mpls rsvp-te
[*LSRF-mpls] commit
[~LSRF-mpls] quit
# Configure LSRB.
[~LSRB] isis 1
[~LSRB-isis-1] cost-style wide
[*LSRB-isis-1] traffic-eng level-2
[*LSRB-isis-1] commit
[~LSRB-isis-1] quit
# Configure LSRC.
[~LSRC] isis 1
[~LSRC-isis-1] cost-style wide
[*LSRC-isis-1] traffic-eng level-2
[*LSRC-isis-1] commit
[~LSRC-isis-1] quit
# Configure LSRD.
[~LSRD] isis 1
[~LSRD-isis-1] cost-style wide
[*LSRD-isis-1] traffic-eng level-2
[*LSRD-isis-1] commit
[~LSRD-isis-1] quit
# Configure LSRE.
[~LSRE] isis 1
[~LSRE-isis-1] cost-style wide
[*LSRE-isis-1] traffic-eng level-2
[*LSRE-isis-1] commit
[~LSRE-isis-1] quit
# Configure LSRF.
[~LSRF] isis 1
[~LSRF-isis-1] cost-style wide
[*LSRF-isis-1] traffic-eng level-2
[*LSRF-isis-1] commit
[~LSRF-isis-1] quit
Step 5 Enable the MPLS TE capability on the interface of each node, and configure link
attributes for the interfaces.
# Configure LSRA.
<LSRA> system-view
[~LSRA] interface gigabitethernet 1/0/1
[~LSRA-GigabitEthernet1/0/1] mpls
[*LSRA-GigabitEthernet1/0/1] mpls te
[*LSRA-GigabitEthernet1/0/1] mpls rsvp-te
[*LSRA-GigabitEthernet1/0/1] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRA-GigabitEthernet1/0/1] mpls te bandwidth bc0 10000
[*LSRA-GigabitEthernet1/0/1] commit
[~LSRA-GigabitEthernet1/0/1] quit
# Configure LSRB.
<LSRB> system-view
[~LSRB] interface gigabitethernet 1/0/0
[~LSRB-GigabitEthernet1/0/0] mpls
[*LSRB-GigabitEthernet1/0/0] mpls te
[*LSRB-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRB-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 10000
[~LSRB-GigabitEthernet1/0/0] mpls te bandwidth bc0 10000
[~LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 1/0/2
[*LSRB-GigabitEthernet1/0/2] mpls
[*LSRB-GigabitEthernet1/0/2] mpls te
[*LSRB-GigabitEthernet1/0/2] mpls rsvp-te
[*LSRB-GigabitEthernet1/0/2] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRB-GigabitEthernet1/0/2] mpls te bandwidth bc0 10000
[*LSRB-GigabitEthernet1/0/2] quit
[*LSRB] interface gigabitethernet 1/0/1
[*LSRB-GigabitEthernet1/0/1] mpls
[*LSRB-GigabitEthernet1/0/1] mpls te
[*LSRB-GigabitEthernet1/0/1] mpls rsvp-te
[*LSRB-GigabitEthernet1/0/1] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRB-GigabitEthernet1/0/1] mpls te bandwidth bc0 10000
[*LSRB-GigabitEthernet1/0/1] commit
[~LSRB-GigabitEthernet1/0/1] quit
# Configure LSRC.
<LSRC> system-view
[~LSRC] interface gigabitethernet 1/0/2
[~LSRC-GigabitEthernet1/0/2] mpls
[*LSRC-GigabitEthernet1/0/2] mpls te
[*LSRC-GigabitEthernet1/0/2] mpls rsvp-te
[*LSRC-GigabitEthernet1/0/2] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRC-GigabitEthernet1/0/2] mpls te bandwidth bc0 10000
[*LSRC-GigabitEthernet1/0/2] commit
[~LSRC-GigabitEthernet1/0/2] quit
# Configure LSRD.
<LSRD> system-view
[~LSRD] interface gigabitethernet 1/0/0
[~LSRD-GigabitEthernet1/0/0] mpls
[*LSRD-GigabitEthernet1/0/0] mpls te
[*LSRD-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRD-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRD-GigabitEthernet1/0/0] mpls te bandwidth bc0 10000
[*LSRD-GigabitEthernet1/0/0] quit
[*LSRD] interface gigabitethernet 1/0/2
[*LSRD-GigabitEthernet1/0/2] mpls
[*LSRD-GigabitEthernet1/0/2] mpls te
[*LSRD-GigabitEthernet1/0/2] mpls rsvp-te
[*LSRD-GigabitEthernet1/0/2] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRD-GigabitEthernet1/0/2] mpls te bandwidth bc0 10000
[*LSRD-GigabitEthernet1/0/2] quit
[*LSRD] interface gigabitethernet 1/0/1
[*LSRD-GigabitEthernet1/0/1] mpls
[*LSRD-GigabitEthernet1/0/1] mpls te
[*LSRD-GigabitEthernet1/0/1] mpls rsvp-te
[*LSRD-GigabitEthernet1/0/1] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRD-GigabitEthernet1/0/1] mpls te bandwidth bc0 10000
[*LSRD-GigabitEthernet1/0/1] commit
[~LSRD-GigabitEthernet1/0/1] quit
# Configure LSRE.
<LSRE> system-view
[~LSRE] interface gigabitethernet 1/0/0
[~LSRE-GigabitEthernet1/0/0] mpls
[*LSRE-GigabitEthernet1/0/0] mpls te
[*LSRE-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRE-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRE-GigabitEthernet1/0/0] mpls te bandwidth bc0 10000
[*LSRE-GigabitEthernet1/0/0] commit
[~LSRE-GigabitEthernet1/0/0] quit
# Configure LSRF.
<LSRF> system-view
[~LSRF] interface gigabitethernet 1/0/1
[~LSRF-GigabitEthernet1/0/1] mpls
[*LSRF-GigabitEthernet1/0/1] mpls te
[*LSRF-GigabitEthernet1/0/1] mpls rsvp-te
[*LSRF-GigabitEthernet1/0/1] mpls te bandwidth max-reservable-bandwidth 10000
[*LSRF-GigabitEthernet1/0/1] mpls te bandwidth bc0 10000
[*LSRF-GigabitEthernet1/0/1] commit
[~LSRF-GigabitEthernet1/0/1] quit
Step 6 Configure explicit paths and a leaf list on the ingress LSRA.
# Configure a leaf list iptv1 on LSRA and add leaf node addresses to the leaf list.
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te p2mp-te
mpls rsvp-te
mpls te cspf
#
explicit-path tolsrc
next hop 10.1.1.2
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.3.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
return
#
sysname LSRD
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls te p2mp-te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0004.00
traffic-eng level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.4.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.5.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
return
● LSRE configuration file
#
sysname LSRE
#
mpls lsr-id 5.5.5.5
#
mpls
mpls te
mpls te p2mp-te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0005.00
traffic-eng level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 10000
mpls rsvp-te
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
isis enable 1
#
return
Networking Requirements
On the network shown in Figure 1-35, the PEs and P on the MPLS backbone
network run IS-IS to implement connectivity between one another. The P does not
support MPLS LDP. PE1 and PE2 access both VPN-A and VPN-B. LDP LSPs need to
be established between PE3 and PE4 along the path PE1 - P - PE2.
VPN-A transmits AF2 and AF1 traffic. VPN-B transmits AF2, AF1, and BE traffic.
The LDP LSPs transmit BE traffic. QoS requirements of each type of traffic are as
follows.
A DS-TE tunnel is established between PE1 and PE2 to transfer the preceding types
of traffic and satisfy various QoS requirements. The bandwidth constraints model
is set to RDM to allow CTi to preempt lower-priority CTj bandwidth (0 ≤ i < j ≤
7) to guarantee higher-priority CT bandwidth.
Figure 1-36 provides the configuration guidelines for IS-IS, RSVP-TE, OSPF, and
LDP in this example.
Figure 1-36 Configuration guidelines for IS-IS, RSVP-TE, OSPF, and LDP in this
example
Configuration Notes
During the configuration, note the following:
1. Since each tunnel can be configured with a single CT, establish a tunnel for
LDP LSPs to carry CT0. Establish two tunnels in VPN-A, with each of them
carrying a different CT, namely CT1 and CT2. Establish three tunnels in VPN-B,
with each of them carrying a different CT, namely CT0, CT1, and CT2.
2. Configure CT0, CT1, and CT2 to carry BE, AF1, and AF2 flows, respectively.
3. Since the tunnels pass through the same path, configure the BCi link
bandwidth value to be greater than or equal to the sum of CTi through CT7
bandwidth values of all TE tunnels, and configure the maximum link
reservable bandwidth to be greater than or equal to the BC0 bandwidth
value. Therefore, BC2 bandwidth ≥ Total AF2 bandwidth = 200 Mbit/s; BC1
bandwidth ≥ (BC2 bandwidth + Total AF1 bandwidth) = 300 Mbit/s;
reservable link bandwidth ≥ BC0 bandwidth ≥ (BC1 bandwidth + Total BE
bandwidth) = 400 Mbit/s.
4. Use a CT template to configure TE tunnels because the same type of service
in different tunnels has the same bandwidth requirement.
5. Configure IGP forwarding adjacencies on PE1 and PE2 because LDP LSPs
between PE3 and PE4 need to be implemented through LDP over TE.
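The bandwidth arithmetic in note 3 follows the nesting rule of the Russian Dolls Model (BC2 ≤ BC1 ≤ BC0 ≤ maximum reservable link bandwidth). A minimal sketch that checks this rule against the values in this example; the function name is illustrative, not a device command:

```python
def rdm_constraints_valid(max_reservable, bc):
    """Check the RDM nesting rule: BCn <= ... <= BC1 <= BC0 <= max reservable.

    Under the Russian Dolls Model, BCi bounds the total bandwidth of CTi
    through CT7, so the constraints must be nested inside one another.
    """
    if bc[0] > max_reservable:
        return False
    return all(bc[i] >= bc[i + 1] for i in range(len(bc) - 1))

# Values from this example, in kbit/s:
#   BC2 >= total AF2                 = 200000
#   BC1 >= BC2 + total AF1           = 300000
#   max reservable >= BC0 >= BC1 + total BE = 400000
print(rdm_constraints_valid(400000, [400000, 300000, 200000]))  # True
```

This matches the link configuration applied later in the procedure (`mpls te bandwidth max-reservable-bandwidth 400000` and `mpls te bandwidth bc0 400000 bc1 300000 bc2 200000`).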
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface on the PEs and P and configure IS-IS to
implement connectivity between the PEs and P.
2. Configure an LSR ID and enable MPLS on the PEs and P. Enable MPLS TE and
RSVP-TE on PE1, PE2, and the P.
3. Configure IS-IS TE and enable CSPF on PE1, PE2, and the P.
4. Configure a DS-TE mode and a BCM on PE1, PE2, and the P.
5. Configure link bandwidth values on PE1, PE2, and the P.
6. Configure a TE-class mapping table on PE1 and PE2.
7. Configure explicit paths on PE1 and PE2.
8. Create tunnel interfaces on PE1 and PE2 to carry services of different levels
using tunnels of different CTs.
Data Preparation
To complete the configuration, you need the following data:
● LSR IDs of PEs and the P
● Number of each MPLS TE tunnel interface
● TE-class mapping table
● Maximum reservable bandwidth value and each BC bandwidth value of each
link
● VPN-A's and VPN-B's VPN instance names, route distinguishers, VPN-Targets,
and tunnel policy name
Procedure
Step 1 Assign an IP address to each interface on the PEs and P and configure IS-IS to
implement connectivity between the PEs and P.
For configuration details, see Configuration Files in this section.
After the configuration, IS-IS neighbor relationships can be established between
PE1, P, and PE2. Run the display ip routing-table command. The command
output shows that the PEs have learned the routes to Loopback 1 of each other.
Step 2 Configure an LSR ID and enable MPLS, MPLS TE, and RSVP-TE on PE1, PE2, and
the P.
# Configure PE1.
<PE1> system-view
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] mpls rsvp-te
[*PE1-mpls] commit
[~PE1-mpls] quit
[*PE1] interface gigabitethernet 1/0/3
[*PE1-GigabitEthernet1/0/3] mpls
[*PE1-GigabitEthernet1/0/3] mpls te
[*PE1-GigabitEthernet1/0/3] mpls rsvp-te
[*PE1-GigabitEthernet1/0/3] quit
# Configure the P.
<P> system-view
[~P] mpls lsr-id 2.2.2.9
[*P] mpls
[*P-mpls] mpls te
[*P-mpls] mpls rsvp-te
[*P-mpls] commit
[~P-mpls] quit
[~P] interface gigabitethernet 1/0/1
[*P-GigabitEthernet1/0/1] mpls
[*P-GigabitEthernet1/0/1] mpls te
[*P-GigabitEthernet1/0/1] mpls rsvp-te
[*P-GigabitEthernet1/0/1] commit
[~P-GigabitEthernet1/0/1] quit
[~P] interface gigabitethernet 1/0/2
[*P-GigabitEthernet1/0/2] mpls
[*P-GigabitEthernet1/0/2] mpls te
[*P-GigabitEthernet1/0/2] mpls rsvp-te
[*P-GigabitEthernet1/0/2] commit
[~P-GigabitEthernet1/0/2] quit
# Configure PE2.
<PE2> system-view
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] mpls rsvp-te
[*PE2-mpls] commit
[~PE2-mpls] quit
[*PE2] interface gigabitethernet 1/0/3
[*PE2-GigabitEthernet1/0/3] mpls
[*PE2-GigabitEthernet1/0/3] mpls te
[*PE2-GigabitEthernet1/0/3] mpls rsvp-te
[*PE2-GigabitEthernet1/0/3] quit
After completing the configuration, run the display mpls rsvp-te interface
command on PE1, PE2, or the P to check RSVP interface information and RSVP
information.
Step 3 Configure IS-IS TE and enable CSPF on PE1, PE2, and the P.
# Enable IS-IS TE on all nodes and enable CSPF on the ingress of the TE tunnel.
# Configure PE1.
[~PE1] isis 1
[~PE1-isis-1] is-level level-1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] traffic-eng level-1
[*PE1-isis-1] commit
[~PE1-isis-1] quit
[~PE1] mpls
[~PE1-mpls] mpls te cspf
[*PE1-mpls] commit
# Configure the P.
[~P] isis 1
[~P-isis-1] is-level level-1
[*P-isis-1] cost-style wide
[*P-isis-1] traffic-eng level-1
[*P-isis-1] commit
[~P-isis-1] quit
# Configure PE2.
[~PE2] isis 1
[~PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] traffic-eng level-1
[*PE2-isis-1] commit
[~PE2-isis-1] quit
[~PE2] mpls
[~PE2-mpls] mpls te cspf
[*PE2-mpls] commit
[~PE2-mpls] quit
After completing the configuration, run the display isis lsdb command on a PE or
the P. The command output shows IS-IS link status information.
Step 4 Configure a DS-TE mode and a BCM on PE1, PE2, and the P.
# Configure PE1.
[~PE1] mpls
[~PE1-mpls] mpls te ds-te mode ietf
[*PE1-mpls] mpls te ds-te bcm rdm
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure the P.
[~P] mpls
[~P-mpls] mpls te ds-te mode ietf
[*P-mpls] mpls te ds-te bcm rdm
[*P-mpls] commit
[~P-mpls] quit
# Configure PE2.
[~PE2] mpls
[~PE2-mpls] mpls te ds-te mode ietf
[*PE2-mpls] mpls te ds-te bcm rdm
[*PE2-mpls] commit
[~PE2-mpls] quit
After completing the configuration, run the display mpls te ds-te summary
command on a PE or the P to check DS-TE configurations. The following example
uses the command output on PE1.
[~PE1] display mpls te ds-te summary
DS-TE IETF Supported :YES
DS-TE MODE :IETF
Bandwidth Constraint Model :RDM
TEClass Mapping (configured):
TE-Class ID Class Type Priority
TE-Class 0 0 0
TE-Class 1 1 0
TE-Class 2 2 0
TE-Class 3 3 0
TE-Class 4 0 7
TE-Class 5 1 7
TE-Class 6 2 7
TE-Class 7 3 7
# Configure the P.
[~P] interface gigabitethernet 1/0/1
[~P-GigabitEthernet1/0/1] mpls te bandwidth max-reservable-bandwidth 400000
[*P-GigabitEthernet1/0/1] mpls te bandwidth bc0 400000 bc1 300000 bc2 200000
[*P-GigabitEthernet1/0/1] commit
[~P-GigabitEthernet1/0/1] quit
[~P] interface gigabitethernet 1/0/2
[~P-GigabitEthernet1/0/2] mpls te bandwidth max-reservable-bandwidth 400000
[*P-GigabitEthernet1/0/2] mpls te bandwidth bc0 400000 bc1 300000 bc2 200000
[~P-GigabitEthernet1/0/2] quit
# Configure PE2.
[~PE2] interface gigabitethernet 1/0/3
[~PE2-GigabitEthernet1/0/3] mpls te bandwidth max-reservable-bandwidth 400000
[*PE2-GigabitEthernet1/0/3] mpls te bandwidth bc0 400000 bc1 300000 bc2 200000
[~PE2-GigabitEthernet1/0/3] quit
# Configure PE2.
[~PE2] te-class-mapping
[~PE2-te-class-mapping] te-class0 class-type ct0 priority 0 description For-BE
[*PE2-te-class-mapping] te-class1 class-type ct1 priority 0 description For-AF1
[*PE2-te-class-mapping] te-class2 class-type ct2 priority 0 description For-AF2
[*PE2-te-class-mapping] commit
[~PE2-te-class-mapping] quit
After completing the configuration, run the display mpls te ds-te te-class-
mapping command on a PE to check TE-class mapping table information. The
following example uses the command output on PE1.
[~PE1] display mpls te ds-te te-class-mapping
TE-Class ID Class Type Priority Description
TE-Class0 0 0 For-BE
TE-Class1 1 0 For-AF1
TE-Class2 2 0 For-AF2
TE-Class3 - - -
TE-Class4 - - -
TE-Class5 - - -
TE-Class6 - - -
TE-Class7 - - -
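The table above pairs each TE-class ID with a (class type, priority) tuple; an LSP of a given CT with a given setup/holding priority can be established only if that (CT, priority) pair appears in the mapping table. A small lookup sketch under that assumption, using the three entries configured in this example:

```python
# TE-class mapping from this example: TE-class ID -> (class type, priority).
TE_CLASS_MAPPING = {
    0: (0, 0),  # CT0, priority 0 - For-BE
    1: (1, 0),  # CT1, priority 0 - For-AF1
    2: (2, 0),  # CT2, priority 0 - For-AF2
}

def te_class_of(class_type, priority):
    """Return the TE-class ID for a (CT, priority) pair, or None if the pair
    is not in the mapping table (such an LSP cannot be set up)."""
    for te_class, pair in TE_CLASS_MAPPING.items():
        if pair == (class_type, priority):
            return te_class
    return None

print(te_class_of(1, 0))  # 1
print(te_class_of(3, 0))  # None (TE-Class3 is not configured in this example)
```

Because TE-Class3 through TE-Class7 are unconfigured here, a CT3 LSP at priority 0 would be rejected even if bandwidth were available.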
# Configure PE2.
[~PE2] explicit-path path1
[*PE2-explicit-path-path1] next hop 10.11.1.1
[*PE2-explicit-path-path1] next hop 10.10.1.1
[*PE2-explicit-path-path1] next hop 1.1.1.9
[*PE2-explicit-path-path1] commit
[~PE2-explicit-path-path1] quit
# Configure PE2.
[~PE2] interface tunnel10
[*PE2-Tunnel10] description For VPN-A & Non-VPN
After completing the configuration, run the display interface tunnel interface-
number command on a PE to check whether the tunnel interface is up. The
following example uses the command output for Tunnel 10 on PE1.
[~PE1] display interface tunnel10
Tunnel10 current state : UP(ifindex: 27)
Line protocol current state : UP
Description: For VPN-A & Non-VPN
Route Port,The Maximum Transmit Unit is 1500
Internet Address is unnumbered, using address of LoopBack0(1.1.1.9/32)
Encapsulation is TUNNEL, loopback not set
Tunnel destination 3.3.3.9
Tunnel up/down statistics 0
Tunnel ct0 bandwidth is 0 Kbit/sec
Tunnel protocol/transport MPLS/MPLS, ILM is disabled
primary tunnel id is 0x0, secondary tunnel id is 0x0
Current system time: 2017-07-19 06:46:59
0 seconds output rate 0 bits/sec, 0 packets/sec
0 seconds output rate 0 bits/sec, 0 packets/sec
0 packets output, 0 bytes
0 output error
0 output drop
Last 300 seconds input utility rate: --
Last 300 seconds output utility rate: --
Step 9 Configure forwarding adjacencies on the ingresses (PE1 and PE2) of TE tunnels
and establish remote LDP peer relationships between the ingresses and egresses
of the TE tunnels.
# Configure PE1.
[~PE1] interface tunnel10
[~PE1-Tunnel10] mpls te igp advertise
[*PE1-Tunnel10] mpls te igp metric absolute 1
[*PE1-Tunnel10] mpls
[*PE1-Tunnel10] quit
[*PE1] interface tunnel11
[*PE1-Tunnel11] mpls te igp advertise
[*PE1-Tunnel11] mpls te igp metric absolute 1
[*PE1-Tunnel11] mpls
[*PE1-Tunnel11] quit
[*PE1] interface tunnel12
[*PE1-Tunnel12] mpls te igp advertise
[*PE1-Tunnel12] mpls te igp metric absolute 1
[*PE1-Tunnel12] mpls
[*PE1-Tunnel12] quit
[*PE1] interface tunnel20
[*PE1-Tunnel20] mpls te igp advertise
[*PE1-Tunnel20] mpls te igp metric absolute 1
[*PE1-Tunnel20] mpls
[*PE1-Tunnel20] quit
[*PE1] interface tunnel21
[*PE1-Tunnel21] mpls te igp advertise
[*PE1-Tunnel21] mpls te igp metric absolute 1
[*PE1-Tunnel21] mpls
[*PE1-Tunnel21] quit
[*PE1] interface tunnel22
[*PE1-Tunnel22] mpls te igp advertise
[*PE1-Tunnel22] mpls te igp metric absolute 1
[*PE1-Tunnel22] mpls
[*PE1-Tunnel22] quit
[*PE1] ospf 1
[*PE1-ospf-1] opaque-capability enable
[*PE1-ospf-1] enable traffic-adjustment advertise
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] mpls-te enable
[*PE1-ospf-1-area-0.0.0.0] quit
[*PE1-ospf-1] quit
[*PE1] mpls ldp remote-peer pe1tope2
[*PE1-mpls-ldp-remote-pe1tope2] remote-ip 3.3.3.9
[*PE1-mpls-ldp-remote-pe1tope2] commit
[~PE1-mpls-ldp-remote-pe1tope2] quit
# Configure PE2.
[~PE2] interface tunnel10
[~PE2-Tunnel10] mpls te igp advertise
[*PE2-Tunnel10] mpls te igp metric absolute 1
[*PE2-Tunnel10] mpls
[*PE2-Tunnel10] quit
[*PE2] interface tunnel11
[*PE2-Tunnel11] mpls te igp advertise
[*PE2-Tunnel11] mpls te igp metric absolute 1
[*PE2-Tunnel11] mpls
[*PE2-Tunnel11] quit
[*PE2] interface tunnel12
[*PE2-Tunnel12] mpls te igp advertise
[*PE2-Tunnel12] mpls te igp metric absolute 1
[*PE2-Tunnel12] mpls
[*PE2-Tunnel12] quit
[*PE2] interface tunnel20
[*PE2-Tunnel20] mpls te igp advertise
[*PE2-Tunnel20] mpls te igp metric absolute 1
[*PE2-Tunnel20] mpls
[*PE2-Tunnel20] quit
[*PE2] interface tunnel21
[*PE2-Tunnel21] mpls te igp advertise
[*PE2-Tunnel21] mpls te igp metric absolute 1
[*PE2-Tunnel21] mpls
[*PE2-Tunnel21] quit
[*PE2] interface tunnel22
[*PE2-Tunnel22] mpls te igp advertise
[*PE2-Tunnel22] mpls te igp metric absolute 1
[*PE2-Tunnel22] mpls
[*PE2-Tunnel22] quit
[*PE2] ospf 1
[*PE2-ospf-1] opaque-capability enable
[*PE2-ospf-1] enable traffic-adjustment advertise
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] mpls-te enable
[*PE2-ospf-1-area-0.0.0.0] quit
[*PE2-ospf-1] quit
[*PE2] mpls ldp remote-peer pe2tope1
[*PE2-mpls-ldp-remote-pe2tope1] remote-ip 1.1.1.9
[*PE2-mpls-ldp-remote-pe2tope1] commit
[~PE2-mpls-ldp-remote-pe2tope1] quit
Step 10 Enable MPLS LDP on all PEs, establish an LDP peer relationship between PE1 and
PE3, and establish an LDP peer relationship between PE2 and PE4.
# Configure PE3.
<PE3> system-view
[~PE3] mpls lsr-id 4.4.4.9
[*PE3] mpls
[*PE3-mpls] commit
[~PE3-mpls] quit
[~PE3] mpls ldp
[*PE3-mpls-ldp] quit
[*PE3] interface gigabitethernet 1/0/1
[*PE3-GigabitEthernet1/0/1] mpls
[*PE3-GigabitEthernet1/0/1] mpls ldp
[*PE3-GigabitEthernet1/0/1] commit
[~PE3-GigabitEthernet1/0/1] quit
# Configure PE1.
<PE1> system-view
[~PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 1/0/4
[*PE1-GigabitEthernet1/0/4] mpls
[*PE1-GigabitEthernet1/0/4] mpls ldp
[*PE1-GigabitEthernet1/0/4] commit
[~PE1-GigabitEthernet1/0/4] quit
# Configure PE2.
<PE2> system-view
[~PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 1/0/4
[*PE2-GigabitEthernet1/0/4] mpls
[*PE2-GigabitEthernet1/0/4] mpls ldp
[*PE2-GigabitEthernet1/0/4] commit
[~PE2-GigabitEthernet1/0/4] quit
# Configure PE4.
<PE4> system-view
[~PE4] mpls lsr-id 5.5.5.9
[*PE4] mpls
[*PE4-mpls] commit
[~PE4-mpls] quit
[~PE4] mpls ldp
[*PE4-mpls-ldp] quit
[*PE4] interface gigabitethernet 1/0/1
[*PE4-GigabitEthernet1/0/1] mpls
[*PE4-GigabitEthernet1/0/1] mpls ldp
[*PE4-GigabitEthernet1/0/1] commit
[~PE4-GigabitEthernet1/0/1] quit
After completing the configuration, run the display mpls ldp lsp command on
PE1, PE2, PE3, or PE4. The command output shows that LDP LSPs have been
established between PE1 and PE3 and between PE2 and PE4.
Step 11 Establish an MP-IBGP peer relationship between PE1 and PE2, and establish EBGP
peer relationships between the PEs and CEs.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 3.3.3.9 as-number 100
[*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
[*PE1-bgp] ipv4-family vpnv4
NOTE
The procedure for configuring PE2 is similar to that of PE1. For configuration details, see
Configuration Files in this section.
# Configure CE1.
[~CE1] bgp 65410
[*CE1-bgp] peer 10.1.1.2 as-number 100
[*CE1-bgp] import-route direct
[*CE1-bgp] commit
NOTE
Repeat this step on CE2 to CE4. For configuration details, see Configuration Files in this
section.
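For reference, the configuration on CE2 (which attaches to PE1's vpnb interface) would be similar to the following sketch; the AS number (65420) and peer address (10.2.1.2) are taken from PE1's vpnb settings in this example:
[~CE2] bgp 65420
[*CE2-bgp] peer 10.2.1.2 as-number 100
[*CE2-bgp] import-route direct
[*CE2-bgp] commit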
After completing the configuration, run the display bgp vpnv4 all peer command
on each PE. The command output shows that BGP peer relationships have been
established between the PEs and are in the Established state.
[~PE1] display bgp vpnv4 all peer
BGP local router ID : 1.1.1.9
Local AS number : 100
Total number of peers : 3 Peers in established state : 3
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 12 18 0 00:09:38 Established 0
Peer of vpn instance:
VPN-Instance vpna, Router ID 1.1.1.9:
10.1.1.1 4 65410 25 25 0 00:17:57 Established 1
VPN-Instance vpnb, Router ID 1.1.1.9:
10.2.1.1 4 65420 21 22 0 00:17:10 Established 1
Step 12 Configure tunnel policies on PE1 and PE2.
# Configure PE1.
[~PE1] tunnel-policy policya
[*PE1-tunnel-policy-policya] tunnel binding destination 3.3.3.9 te tunnel 10 tunnel 11 tunnel 12
[*PE1-tunnel-policy-policya] commit
[~PE1-tunnel-policy-policya] quit
[~PE1] tunnel-policy policyb
[*PE1-tunnel-policy-policyb] tunnel binding destination 3.3.3.9 te tunnel 20 tunnel 21 tunnel 22
[*PE1-tunnel-policy-policyb] commit
[~PE1-tunnel-policy-policyb] quit
# Configure PE2.
[~PE2] tunnel-policy policya
[*PE2-tunnel-policy-policya] tunnel binding destination 1.1.1.9 te tunnel 10 tunnel 11 tunnel 12
[*PE2-tunnel-policy-policya] commit
[~PE2-tunnel-policy-policya] quit
[~PE2] tunnel-policy policyb
[*PE2-tunnel-policy-policyb] tunnel binding destination 1.1.1.9 te tunnel 20 tunnel 21 tunnel 22
[*PE2-tunnel-policy-policyb] commit
[~PE2-tunnel-policy-policyb] quit
Step 13 Configure VPN instances on PE1 and PE2 to enable a CE to access the
corresponding PE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] tnl-policy policya
[*PE1-vpn-instance-vpna-af-ipv4] commit
[~PE1-vpn-instance-vpna-af-ipv4] quit
[~PE1-vpn-instance-vpna] quit
[~PE1] ip vpn-instance vpnb
[*PE1-vpn-instance-vpnb] ipv4-family
[*PE1-vpn-instance-vpnb-af-ipv4] route-distinguisher 100:2
[*PE1-vpn-instance-vpnb-af-ipv4] vpn-target 222:2 both
[*PE1-vpn-instance-vpnb-af-ipv4] tnl-policy policyb
[*PE1-vpn-instance-vpnb-af-ipv4] commit
[~PE1-vpn-instance-vpnb-af-ipv4] quit
[~PE1-vpn-instance-vpnb] quit
[~PE1] interface gigabitethernet 1/0/1
[*PE1-GigabitEthernet1/0/1] ip binding vpn-instance vpna
[*PE1-GigabitEthernet1/0/1] ip address 10.1.1.2 24
[*PE1-GigabitEthernet1/0/1] commit
[~PE1-GigabitEthernet1/0/1] quit
[~PE1] interface gigabitethernet 1/0/2
[*PE1-GigabitEthernet1/0/2] ip binding vpn-instance vpnb
[*PE1-GigabitEthernet1/0/2] ip address 10.2.1.2 24
[*PE1-GigabitEthernet1/0/2] commit
[~PE1-GigabitEthernet1/0/2] quit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy policya
[*PE2-vpn-instance-vpna-af-ipv4] commit
[~PE2-vpn-instance-vpna-af-ipv4] quit
[~PE2-vpn-instance-vpna] quit
[~PE2] ip vpn-instance vpnb
[*PE2-vpn-instance-vpnb] ipv4-family
[*PE2-vpn-instance-vpnb-af-ipv4] route-distinguisher 200:2
[*PE2-vpn-instance-vpnb-af-ipv4] vpn-target 222:2 both
[*PE2-vpn-instance-vpnb-af-ipv4] tnl-policy policyb
[*PE2-vpn-instance-vpnb-af-ipv4] commit
[~PE2-vpn-instance-vpnb-af-ipv4] quit
[~PE2-vpn-instance-vpnb] quit
[~PE2] interface gigabitethernet 1/0/1
[*PE2-GigabitEthernet1/0/1] ip binding vpn-instance vpna
[*PE2-GigabitEthernet1/0/1] ip address 10.3.1.2 24
[*PE2-GigabitEthernet1/0/1] commit
[~PE2-GigabitEthernet1/0/1] quit
[~PE2] interface gigabitethernet 1/0/2
[*PE2-GigabitEthernet1/0/2] ip binding vpn-instance vpnb
[*PE2-GigabitEthernet1/0/2] ip address 10.4.1.2 24
[*PE2-GigabitEthernet1/0/2] commit
[~PE2-GigabitEthernet1/0/2] quit
# Assign IP addresses to the interfaces on each CE. For configuration details, see
Configuration Files in this section.
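As a sketch, CE1's interface facing PE1 would be assigned the EBGP peer address (10.1.1.1) used above; the other CEs follow the same pattern with their respective subnets:
[~CE1] interface gigabitethernet 1/0/1
[*CE1-GigabitEthernet1/0/1] ip address 10.1.1.1 255.255.255.0
[*CE1-GigabitEthernet1/0/1] commit
[~CE1-GigabitEthernet1/0/1] quit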
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy policya
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
ip vpn-instance vpnb
ipv4-family
route-distinguisher 100:2
tnl-policy policyb
apply-label per-instance
vpn-target 222:2 export-extcommunity
vpn-target 222:2 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
mpls te cspf
mpls te ds-te mode ietf
mpls rsvp-te
#
mpls ldp
#
mpls ldp remote-peer pe1tope2
remote-ip 3.3.3.9
#
explicit-path path1
next hop 10.10.1.2
next hop 10.11.1.2
next hop 3.3.3.9
#
te-class-mapping
te-class0 class-type ct0 priority 0 description For-BE
te-class1 class-type ct1 priority 0 description For-AF1
te-class2 class-type ct2 priority 0 description For-AF2
#
interface GigabitEthernet1/0/1
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet1/0/2
undo shutdown
ip binding vpn-instance vpnb
ip address 10.2.1.2 255.255.255.0
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.10.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Tunnel21
description For VPN-B
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 21
mpls te priority 0 0
mpls te bandwidth ct1 50000
mpls te reserved-for-binding
mpls te path explicit-path path1
#
interface Tunnel22
description For VPN-B
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 22
mpls te priority 0 0
mpls te bandwidth ct2 100000
mpls te reserved-for-binding
mpls te path explicit-path path1
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.9 enable
#
ipv4-family vpn-instance vpna
peer 10.1.1.1 as-number 65410
import-route direct
#
ipv4-family vpn-instance vpnb
peer 10.2.1.1 as-number 65420
import-route direct
#
isis 1
is-level level-1
cost-style wide
traffic-eng level-1
#
ospf 1
opaque-capability enable
enable traffic-adjustment advertise
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 10.1.5.0 0.0.0.255
mpls-te enable
#
tunnel-policy policya
tunnel binding destination 3.3.3.9 te Tunnel10 Tunnel11 Tunnel12
#
tunnel-policy policyb
tunnel binding destination 3.3.3.9 te Tunnel20 Tunnel21 Tunnel22
#
return
● P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.9
#
mpls
mpls te
mpls te ds-te mode ietf
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.10.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 400000
mpls te bandwidth bc0 400000 bc1 300000 bc2 200000
mpls rsvp-te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.11.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 400000
mpls te bandwidth bc0 400000 bc1 300000 bc2 200000
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
isis 1
is-level level-1
cost-style wide
traffic-eng level-1
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
tnl-policy policya
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
ip vpn-instance vpnb
ipv4-family
route-distinguisher 200:2
tnl-policy policyb
apply-label per-instance
vpn-target 222:2 export-extcommunity
vpn-target 222:2 import-extcommunity
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
mpls te cspf
mpls te ds-te mode ietf
mpls rsvp-te
#
mpls ldp
#
mpls ldp remote-peer pe2tope1
remote-ip 1.1.1.9
#
explicit-path path1
next hop 10.11.1.1
next hop 10.10.1.1
next hop 1.1.1.9
#
te-class-mapping
te-class0 class-type ct0 priority 0 description For-BE
te-class1 class-type ct1 priority 0 description For-AF1
te-class2 class-type ct2 priority 0 description For-AF2
#
tunnel-policy policya
tunnel binding destination 1.1.1.9 te Tunnel10 Tunnel11 Tunnel12
#
tunnel-policy policyb
tunnel binding destination 1.1.1.9 te Tunnel20 Tunnel21 Tunnel22
#
return
● PE3 configuration file
#
sysname PE3
#
mpls lsr-id 4.4.4.9
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.5.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
mpls
mpls ldp
#
ospf 1
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 10.1.5.0 0.0.0.255
#
return
● PE4 configuration file
#
sysname PE4
#
mpls lsr-id 5.5.5.9
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
mpls
mpls ldp
#
ospf 1
area 0.0.0.0
network 5.5.5.9 0.0.0.0
network 10.1.6.0 0.0.0.255
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/1
undo shutdown
Networking Requirements
In Figure 1-37, CE1 and CE2 belong to the same L3VPN. They access the public
network through PE1 and PE2 respectively. Various types of services are
transmitted between CE1 and CE2. Transmitting a large number of common
services deteriorates the efficiency of transmitting important services. To prevent
this problem, the CBTS function can be configured. A CBTS allows traffic of a
specific service class to be transmitted along a specified tunnel.
In this example, tunnel 1 and tunnel 2 on PE1 transmit important services, and
tunnel 3 transmits other packets.
NOTICE
If the CBTS function is configured, you are advised not to configure the following
services at the same time:
● Mixed load balancing
● Dynamic load balancing
Precautions
None.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address and its mask to every interface and configure a loopback
interface address as an LSR ID on every node.
2. Enable IS-IS globally, configure a network entity title (NET), specify the cost
type, and enable IS-IS TE on each involved node. Enable IS-IS on interfaces,
including loopback interfaces.
3. Set MPLS label switching router (LSR) IDs for all devices and globally enable
MPLS, MPLS TE, RSVP-TE, and CSPF.
4. Enable MPLS, MPLS TE, and RSVP-TE on each interface.
5. Configure the maximum reservable bandwidth and BC bandwidth for the link
on the outbound interface of each device along the tunnel.
6. Configure a tunnel interface on the ingress and configure the IP address,
tunnel protocol, destination IP address, and tunnel bandwidth.
7. Configure multi-field classification on PE1.
8. Configure a VPN instance and apply a tunnel policy on PE1.
Data Preparation
To complete the configuration, you need the following data:
● IS-IS area ID, originating system ID, and IS-IS level of each node
● Maximum available link bandwidth and maximum reservable link bandwidth
on each node
● Tunnel interface number, IP address, destination IP address, tunnel ID, and
tunnel bandwidth on the tunnel interface
● Traffic classifier name, traffic behavior name, and traffic policy name
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address and mask to each interface according to Figure 1-37. For
configuration details, see Configuration Files in this section.
Step 2 Configure IS-IS to advertise routes.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] network-entity 00.0005.0000.0000.0001.00
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] commit
[~PE1-LoopBack1] quit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] network-entity 00.0005.0000.0000.0002.00
[*P1-isis-1] is-level level-2
[*P1-isis-1] quit
[*P1] interface gigabitethernet 1/0/0
# Configure P2.
[~P2] isis 1
[*P2-isis-1] network-entity 00.0005.0000.0000.0003.00
[*P2-isis-1] is-level level-2
[*P2-isis-1] quit
[*P2] interface gigabitethernet 1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet 2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] commit
[~P2-LoopBack1] quit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] network-entity 00.0005.0000.0000.0004.00
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
Step 3 Configure an EBGP peer relationship between each pair of a PE and a CE and an
MP-IBGP peer relationship between two PEs. For configuration details, see
Configuration Files in this section.
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] mpls rsvp-te
[*P1-mpls] quit
[*P1] interface gigabitethernet 1/0/0
[*P1-GigabitEthernet1/0/0] mpls
[*P1-GigabitEthernet1/0/0] mpls te
[*P1-GigabitEthernet1/0/0] mpls rsvp-te
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet 2/0/0
[*P1-GigabitEthernet2/0/0] mpls
[*P1-GigabitEthernet2/0/0] mpls te
[*P1-GigabitEthernet2/0/0] mpls rsvp-te
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
# Configure P2.
[~P2] mpls lsr-id 3.3.3.9
[*P2] mpls
[*P2-mpls] mpls te
[*P2-mpls] mpls rsvp-te
[*P2-mpls] quit
[*P2] interface gigabitethernet 1/0/0
[*P2-GigabitEthernet1/0/0] mpls
[*P2-GigabitEthernet1/0/0] mpls te
[*P2-GigabitEthernet1/0/0] mpls rsvp-te
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet 2/0/0
[*P2-GigabitEthernet2/0/0] mpls
[*P2-GigabitEthernet2/0/0] mpls te
[*P2-GigabitEthernet2/0/0] mpls rsvp-te
[*P2-GigabitEthernet2/0/0] commit
[~P2-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 4.4.4.9
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] mpls rsvp-te
[*PE2-mpls] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] mpls
[*PE2-GigabitEthernet1/0/0] mpls te
[*PE2-GigabitEthernet1/0/0] mpls rsvp-te
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
# Configure P1.
[~P1] isis 1
[~P1-isis-1] cost-style wide
[*P1-isis-1] traffic-eng level-2
[*P1-isis-1] commit
[~P1-isis-1] quit
# Configure P2.
[~P2] isis 1
[~P2-isis-1] cost-style wide
[*P2-isis-1] traffic-eng level-2
[*P2-isis-1] commit
[~P2-isis-1] quit
# Configure PE2.
[~PE2] isis 1
[~PE2-isis-1] cost-style wide
[*PE2-isis-1] traffic-eng level-2
[*PE2-isis-1] commit
[~PE2-isis-1] quit
# Configure P1.
[~P1] interface gigabitethernet 1/0/0
[~P1-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*P1-GigabitEthernet1/0/0] mpls te bandwidth bc0 100000
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet 2/0/0
[*P1-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*P1-GigabitEthernet2/0/0] mpls te bandwidth bc0 100000
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
# Configure P2.
[~P2] interface gigabitethernet 1/0/0
# Configure PE2.
[~PE2] interface gigabitethernet 1/0/0
[~PE2-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*PE2-GigabitEthernet1/0/0] mpls te bandwidth bc0 100000
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
NOTE
Run the mpls te service-class { service-class & <1-8> | default } command to configure the
service class for packets transmitted along each tunnel.
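For example, to steer important AF1 and AF2 traffic to tunnel 1 and tunnel 2 and leave the remaining traffic to tunnel 3, the command could be applied on PE1's tunnel interfaces as sketched below; the exact class-to-tunnel mapping depends on your service plan:
[~PE1] interface tunnel1
[*PE1-Tunnel1] mpls te service-class af1
[*PE1-Tunnel1] quit
[*PE1] interface tunnel2
[*PE1-Tunnel2] mpls te service-class af2
[*PE1-Tunnel2] quit
[*PE1] interface tunnel3
[*PE1-Tunnel3] mpls te service-class default
[*PE1-Tunnel3] commit
[~PE1-Tunnel3] quit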
# Configure PE1.
[~PE1] interface tunnel1
[*PE1-Tunnel1] ip address unnumbered interface loopback 1
[*PE1-Tunnel1] tunnel-protocol mpls te
[*PE1-Tunnel1] destination 4.4.4.9
[*PE1-Tunnel1] mpls te tunnel-id 1
[*PE1-Tunnel1] mpls te bandwidth ct0 20000
# Configure PE2.
[~PE2] interface tunnel1
[*PE2-Tunnel1] ip address unnumbered interface loopback 1
[*PE2-Tunnel1] tunnel-protocol mpls te
[*PE2-Tunnel1] destination 1.1.1.9
[*PE2-Tunnel1] mpls te tunnel-id 1
[*PE2-Tunnel1] mpls te bandwidth ct0 20000
[*PE2-Tunnel1] commit
[~PE2-Tunnel1] quit
[~PE2] tunnel-policy policy1
[*PE2-tunnel-policy-policy1] tunnel select-seq cr-lsp load-balance-number 3
[*PE2-tunnel-policy-policy1] commit
[~PE2-tunnel-policy-policy1] quit
# Configure PE2.
[~PE2] ip vpn-instance vpn2
[*PE2-vpn-instance-vpn2] ipv4-family
[*PE2-vpn-instance-vpn2-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpn2-af-ipv4] tnl-policy policy1
[*PE2-vpn-instance-vpn2-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpn2-af-ipv4] commit
[~PE2-vpn-instance-vpn2-af-ipv4] quit
[~PE2-vpn-instance-vpn2] quit
[~PE2] interface gigabitethernet 2/0/0
[~PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpn2
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
tnl-policy policy1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0001.00
#
acl number 2001
rule 10 permit source 10.40.0.0 0.255.255.255
#
acl number 2002
rule 20 permit source 10.50.0.0 0.255.255.255
#
traffic classifier service1
if-match acl 2001
#
traffic classifier service2
if-match acl 2002
#
traffic behavior behavior1
service-class af1 color green
#
traffic behavior behavior2
service-class af2 color green
#
traffic policy policy1
classifier service1 behavior behavior1
classifier service2 behavior behavior2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 10.10.1.1 255.255.255.0
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0002.00
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0003.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 4.4.4.9
#
mpls
mpls te
mpls rsvp-te
#
ip vpn-instance vpn2
ipv4-family
route-distinguisher 200:1
tnl-policy policy1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0004.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn2
ip address 10.11.1.1 255.255.255.0
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.9
mpls te bandwidth ct0 20000
mpls te tunnel-id 1
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpn2
peer 10.11.1.2 as-number 65420
#
tunnel-policy policy1
tunnel select-seq cr-lsp load-balance-number 3
#
return
1.1.3.43.27 Example for Configuring CBTS in an L3VPN over LDP over TE Scenario
This section provides an example for configuring a CBTS in an L3VPN over LDP
over TE scenario.
Networking Requirements
In Figure 1-38, CE1 and CE2 belong to the same L3VPN. They access the public
network through PE1 and PE2 respectively. Various types of services are
transmitted between CE1 and CE2. Transmitting a large number of common
services deteriorates the efficiency of transmitting important services. To prevent
this problem, the CBTS function can be configured. A CBTS allows traffic of a
specific service class to be transmitted along a specified tunnel.
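In this scenario, the mpls te service-class command would be applied on the TE tunnel interfaces of P1 (the ingress of the TE tunnels) rather than on a PE. A sketch, assuming hypothetical tunnel interface names Tunnel10 and Tunnel11 on P1:
[~P1] interface Tunnel10
[*P1-Tunnel10] mpls te service-class af1 af2
[*P1-Tunnel10] quit
[*P1] interface Tunnel11
[*P1-Tunnel11] mpls te service-class default
[*P1-Tunnel11] commit
[~P1-Tunnel11] quit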
Precautions
When configuring a TE tunnel group in an L3VPN over LDP over TE scenario, note
that the destination IP address of a tunnel must be equal to the LSR ID of the
egress.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● OSPF process ID and OSPF area ID
● Policy for triggering the LSP establishment
● Name and IP address of each remote LDP peer of P1 and P2
● Link bandwidth attributes of the tunnel
● Tunnel interface number, IP address, destination address, tunnel ID, tunnel
signaling protocol (RSVP-TE is used by default and in this example), tunnel
bandwidth, TE metric value, and link cost on P1 and P2
● Multi-field classifier name and traffic policy name
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address to each interface, including the loopback interface,
according to Figure 1-38. For configuration details, see Configuration Files in
this section.
Step 2 Enable OSPF to advertise the route of the segment connected to each interface
and the host route destined for each LSR ID. For configuration details, see
Configuration Files in this section.
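For example, OSPF on PE1 could be enabled as sketched below. The process ID (1), area (0), and interface subnet (10.1.1.0/24) are placeholders for illustration; 1.1.1.1 is PE1's LSR ID, advertised as a host route:
[~PE1] ospf 1
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] commit
[~PE1-ospf-1-area-0.0.0.0] quit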
Step 3 Configure an EBGP peer relationship between each pair of a PE and a CE and an
MP-IBGP peer relationship between two PEs.
For configuration details, see Configuration Files in this section.
Step 4 Enable MPLS on each LSR. Enable LDP to establish an LDP session between PE1
and P1 and between P2 and PE2. Enable RSVP-TE to establish RSVP neighbor
relationships between P1 and P3 and between P3 and P2.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] lsp-trigger all
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] mpls
[*PE1-GigabitEthernet1/0/0] mpls ldp
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.2
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] lsp-trigger all
[*P1-mpls] mpls rsvp-te
[*P1-mpls] mpls te cspf
[*P1-mpls] quit
[*P1] mpls ldp
[*P1-mpls-ldp] quit
[*P1] interface gigabitethernet 1/0/0
[*P1-GigabitEthernet1/0/0] mpls
[*P1-GigabitEthernet1/0/0] mpls ldp
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet 2/0/0
[*P1-GigabitEthernet2/0/0] mpls
[*P1-GigabitEthernet2/0/0] mpls te
[*P1-GigabitEthernet2/0/0] mpls rsvp-te
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
# Configure P3.
[~P3] mpls lsr-id 3.3.3.3
[*P3] mpls
[*P3-mpls] mpls te
[*P3-mpls] mpls rsvp-te
[*P3-mpls] quit
[*P3] interface gigabitethernet 1/0/0
[*P3-GigabitEthernet1/0/0] mpls
[*P3-GigabitEthernet1/0/0] mpls te
[*P3-GigabitEthernet1/0/0] mpls rsvp-te
[*P3-GigabitEthernet1/0/0] quit
[*P3] interface gigabitethernet 2/0/0
[*P3-GigabitEthernet2/0/0] mpls
[*P3-GigabitEthernet2/0/0] mpls te
[*P3-GigabitEthernet2/0/0] mpls rsvp-te
[*P3-GigabitEthernet2/0/0] commit
[~P3-GigabitEthernet2/0/0] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.4
[*P2] mpls
[*P2-mpls] mpls te
[*P2-mpls] lsp-trigger all
[*P2-mpls] mpls rsvp-te
[*P2-mpls] mpls te cspf
[*P2-mpls] quit
[*P2] mpls ldp
[*P2-mpls-ldp] quit
[*P2] interface gigabitethernet 1/0/0
[*P2-GigabitEthernet1/0/0] mpls
[*P2-GigabitEthernet1/0/0] mpls te
[*P2-GigabitEthernet1/0/0] mpls rsvp-te
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet 2/0/0
[*P2-GigabitEthernet2/0/0] mpls
[*P2-GigabitEthernet2/0/0] mpls ldp
[*P2-GigabitEthernet2/0/0] commit
[~P2-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 5.5.5.5
[*PE2] mpls
[*PE2-mpls] lsp-trigger all
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] mpls
[*PE2-GigabitEthernet1/0/0] mpls ldp
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
After the preceding configurations are complete, local LDP sessions are
established between PE1 and P1 and between P2 and PE2.
# Run the display mpls ldp session command on PE1, P1, P2, or PE2 to view
information about the established LDP session.
[~PE1] display mpls ldp session
LDP Session(s) in Public Network
Codes: LAM(Label Advertisement Mode), SsnAge Unit(DDDD:HH:MM)
An asterisk (*) before a session means the session is being deleted.
--------------------------------------------------------------------------
PeerID Status LAM SsnRole SsnAge KASent/Rcv
--------------------------------------------------------------------------
2.2.2.2:0 Operational DU Passive 0000:00:05 23/23
--------------------------------------------------------------------------
TOTAL: 1 Session(s) Found.
# Run the display mpls ldp peer command to view information about the
established LDP peer.
[~PE1] display mpls ldp peer
LDP Peer Information in Public network
An asterisk (*) before a peer means the peer is being deleted.
-------------------------------------------------------------------------
PeerID TransportAddress DiscoverySource
-------------------------------------------------------------------------
2.2.2.2:0 2.2.2.2 GigabitEthernet1/0/0
-------------------------------------------------------------------------
TOTAL: 1 Peer(s) Found.
# Run the display mpls lsp command to view LDP LSP information. The command
output shows that no LSP has been established by RSVP. The following example
uses the command output on PE1.
[~PE1] display mpls lsp
----------------------------------------------------------------------
LSP Information: LDP LSP
----------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.1/32 3/NULL GE1/0/0/-
2.2.2.2/32 NULL/3 -/GE1/0/0
2.2.2.2/32 1024/3 -/GE1/0/0
10.1.1.0/24 3/NULL GE1/0/0/-
10.2.1.0/24 NULL/3 -/GE1/0/0
10.2.1.0/24 1025/3 -/GE1/0/0
Step 5 Establish a remote LDP peer relationship between P1 and P2.
# Configure P2.
[~P2] mpls ldp remote-peer lsrb
[*P2-mpls-ldp-remote-lsrb] remote-ip 2.2.2.2
[*P2-mpls-ldp-remote-lsrb] commit
[~P2-mpls-ldp-remote-lsrb] quit
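P1 needs a matching remote LDP peer pointing at P2's LSR ID (4.4.4.4). A sketch, with the peer name (lsrc) chosen arbitrarily for illustration:
[~P1] mpls ldp remote-peer lsrc
[*P1-mpls-ldp-remote-lsrc] remote-ip 4.4.4.4
[*P1-mpls-ldp-remote-lsrc] commit
[~P1-mpls-ldp-remote-lsrc] quit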
Step 6 Configure bandwidth attributes on each outbound interface along the link of the
TE tunnel.
# Configure P1.
[~P1] interface gigabitethernet 2/0/0
[~P1-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*P1-GigabitEthernet2/0/0] mpls te bandwidth bc0 20000
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
# Configure P3.
[~P3] interface gigabitethernet 1/0/0
[~P3-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*P3-GigabitEthernet1/0/0] mpls te bandwidth bc0 20000
[*P3-GigabitEthernet1/0/0] quit
[*P3] interface gigabitethernet 2/0/0
[*P3-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*P3-GigabitEthernet2/0/0] mpls te bandwidth bc0 20000
[*P3-GigabitEthernet2/0/0] commit
[~P3-GigabitEthernet2/0/0] quit
# Configure P2.
[~P2] interface gigabitethernet 1/0/0
[~P2-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*P2-GigabitEthernet1/0/0] mpls te bandwidth bc0 20000
[*P2-GigabitEthernet1/0/0] commit
[~P2-GigabitEthernet1/0/0] quit
Step 7 Configure L3VPN access on PE1 and PE2 and configure multi-field classification on
the inbound interface of PE1.
# Configure PE1.
[~PE1] ip vpn-instance VPNA
[*PE1-vpn-instance-VPNA] ipv4-family
[*PE1-vpn-instance-VPNA-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-VPNA-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-VPNA-af-ipv4] quit
[*PE1-vpn-instance-VPNA] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance VPNA
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] acl 2001
[*PE1-acl4-basic-2001] rule 10 permit source 10.40.0.0 0.255.255.255
[*PE1-acl4-basic-2001] quit
[*PE1] acl 2002
[*PE1-acl4-basic-2002] rule 20 permit source 10.50.0.0 0.255.255.255
[*PE1-acl4-basic-2002] quit
[*PE1] traffic classifier service1
[*PE1-classifier-service1] if-match acl 2001
[*PE1-classifier-service1] quit
[*PE1] traffic behavior behavior1
[*PE1-behavior-behavior1] service-class af1 color green
[*PE1-behavior-behavior1] quit
[*PE1] traffic classifier service2
[*PE1-classifier-service2] if-match acl 2002
[*PE1-classifier-service2] quit
[*PE1] traffic behavior behavior2
[*PE1-behavior-behavior2] service-class af2 color green
[*PE1-behavior-behavior2] quit
[*PE1] traffic policy test
[*PE1-trafficpolicy-test] classifier service1 behavior behavior1
[*PE1-trafficpolicy-test] classifier service2 behavior behavior2
[*PE1-trafficpolicy-test] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] traffic-policy test inbound
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] ip vpn-instance VPNB
[*PE2-vpn-instance-VPNB] ipv4-family
[*PE2-vpn-instance-VPNB-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-VPNB-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-VPNB-af-ipv4] quit
[*PE2-vpn-instance-VPNB] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance VPNB
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
# Configure PE1.
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] trust upstream default
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
# Configure P1.
[~P1] interface gigabitethernet 1/0/0
[~P1-GigabitEthernet1/0/0] trust upstream default
[*P1-GigabitEthernet1/0/0] commit
[~P1-GigabitEthernet1/0/0] quit
Step 9 Configure a TE tunnel that originates from P1 and is destined for P2 and set the
service class for each type of packets that can pass through the tunnel.
NOTE
Run the mpls te service-class { service-class & <1-8> | default } command to configure the
service class for packets transmitted along each tunnel.
# On P1, enable the IGP shortcut function on the tunnel interface and adjust the
metric value to ensure that traffic destined for P2 or PE2 passes through the
tunnel.
[~P1] interface tunnel1
[*P1-Tunnel1] ip address unnumbered interface LoopBack1
[*P1-Tunnel1] tunnel-protocol mpls te
[*P1-Tunnel1] destination 4.4.4.4
[*P1-Tunnel1] mpls te tunnel-id 100
[*P1-Tunnel1] mpls te bandwidth ct0 10000
[*P1-Tunnel1] mpls te igp shortcut
[*P1-Tunnel1] mpls te igp metric absolute 1
[*P1-Tunnel1] mpls te service-class af1 af2
[*P1-Tunnel1] quit
[*P1] interface tunnel2
[*P1-Tunnel2] ip address unnumbered interface LoopBack1
[*P1-Tunnel2] tunnel-protocol mpls te
[*P1-Tunnel2] destination 4.4.4.4
[*P1-Tunnel2] mpls te tunnel-id 200
[*P1-Tunnel2] mpls te bandwidth ct0 10000
Step 10 Configure a tunnel that originates from P2 and is destined for P1.
# On P2, enable the IGP shortcut function on the tunnel interface and adjust the
metric value to ensure that traffic destined for PE1 or P1 passes through the
tunnel.
[~P2] interface tunnel1
[*P2-Tunnel1] ip address unnumbered interface LoopBack1
[*P2-Tunnel1] tunnel-protocol mpls te
[*P2-Tunnel1] destination 2.2.2.2
[*P2-Tunnel1] mpls te tunnel-id 101
[*P2-Tunnel1] mpls te bandwidth ct0 10000
[*P2-Tunnel1] mpls te igp shortcut
[*P2-Tunnel1] mpls te igp metric absolute 1
[*P2-Tunnel1] quit
[*P2] ospf 1
[*P2-ospf-1] area 0
[*P2-ospf-1-area-0.0.0.0] network 4.4.4.4 0.0.0.0
[*P2-ospf-1-area-0.0.0.0] quit
[*P2-ospf-1] enable traffic-adjustment advertise
[*P2-ospf-1] quit
[*P2] commit
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance VPNA
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
lsp-trigger all
#
mpls ldp
#
acl number 2001
rule 10 permit source 10.40.0.0 0.255.255.255
#
acl number 2002
rule 20 permit source 10.50.0.0 0.255.255.255
#
traffic classifier service1
if-match acl 2001
#
traffic classifier service2
if-match acl 2002
#
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 20000
mpls te bandwidth bc0 20000
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.2.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
mpls-te enable
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
lsp-trigger all
#
mpls ldp
#
ipv4-family
#
mpls ldp remote-peer lsrb
remote-ip 2.2.2.2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 20000
mpls te bandwidth bc0 20000
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te tunnel-id 101
mpls te bandwidth ct0 10000
mpls te igp shortcut
mpls te igp metric absolute 1
#
ospf 1
opaque-capability enable
enable traffic-adjustment advertise
area 0.0.0.0
network 4.4.4.4 0.0.0.0
Networking Requirements
In Figure 1-39, CE1 and CE2 belong to the same VLL network. They access the
MPLS backbone network through PE1 and PE2, respectively. OSPF is used as an
IGP on the MPLS backbone network.
Configure an LDP VLL and use the dynamic signaling protocol RSVP-TE to
establish two MPLS TE tunnels between PE1 and PE2 to transmit VLL services.
Assign each TE tunnel a specific priority. Enable behavior aggregate classification
on the interfaces that receive VLL packets to trust 802.1p priority values so that
they can forward VLL packets with a specific priority to a specific tunnel.
Establish tunnel TE1 with tunnel ID 100 over the path PE1 –> P1 –> PE2 and
tunnel TE2 with tunnel ID 200 over the path PE1 –> P2 –> PE2. Configure the AF1
service class on the TE1 tunnel interface and the AF2 service class on the TE2
tunnel interface. This configuration allows PE1 to forward traffic with service
class AF1 along tunnel TE1 and traffic with service class AF2 along tunnel TE2. The
two tunnels can load-balance traffic based on priority values. The requirements of
PE2 are similar to those of PE1.
Note that if multiple tunnels with AF1 are established between PE1 and PE2,
packets mapped to AF1 are load-balanced among these tunnels.
NOTICE
If the CBTS function is configured, you are advised not to configure the following
services at the same time:
● Dynamic load balancing
Configuration Roadmap
The configuration roadmap is as follows:
● Enable a routing protocol on the MPLS backbone network devices (PEs and
Ps) for them to communicate with each other and enable MPLS.
● Establish MPLS TE tunnels and configure a tunnel policy.
● Enable MPLS Layer 2 virtual private network (L2VPN) on the PEs.
● Create a VLL, configure LDP as a signaling protocol, and bind the VLL to an
AC interface on each PE.
● Configure MPLS TE tunnels to transmit VLL packets.
Data Preparation
To complete the configuration, you need the following data:
● OSPF area enabled with TE
● VLL name and VLL ID
● IP addresses of peers and tunnel policy
● Names of AC interfaces bound to a VLL
● Interface number and IP address of each tunnel interface, as well as
destination IP address, tunnel ID, tunnel signaling protocol (RSVP-TE), and
tunnel bandwidth to be specified on each tunnel interface
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address and mask for each interface according to Figure 1-39.
Step 2 Enable MPLS, MPLS TE, MPLS RSVP-TE, and MPLS CSPF.
On the nodes along each MPLS TE tunnel, enable MPLS, MPLS TE, and MPLS
RSVP-TE both in the system view and the interface view. On the ingress node of
each tunnel, enable MPLS CSPF in the system view.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] mpls rsvp-te
[*PE1-mpls] mpls te cspf
[*PE1-mpls] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] mpls
[*PE1-GigabitEthernet1/0/0] mpls te
[*PE1-GigabitEthernet1/0/0] mpls rsvp-te
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] mpls
[*PE1-GigabitEthernet3/0/0] mpls te
[*PE1-GigabitEthernet3/0/0] mpls rsvp-te
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] mpls rsvp-te
[*P1-mpls] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] mpls
[*P1-GigabitEthernet1/0/0] mpls te
[*P1-GigabitEthernet1/0/0] mpls rsvp-te
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] mpls
[*P1-GigabitEthernet2/0/0] mpls te
[*P1-GigabitEthernet2/0/0] mpls rsvp-te
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure P2.
[~P2] mpls lsr-id 3.3.3.9
[*P2] mpls
[*P2-mpls] mpls te
[*P2-mpls] mpls rsvp-te
[*P2-mpls] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] mpls
[*P2-GigabitEthernet1/0/0] mpls te
[*P2-GigabitEthernet1/0/0] mpls rsvp-te
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] mpls
[*P2-GigabitEthernet2/0/0] mpls te
[*P2-GigabitEthernet2/0/0] mpls rsvp-te
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
# Configure PE2.
[~PE2] mpls lsr-id 4.4.4.9
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] mpls rsvp-te
[*PE2-mpls] mpls te cspf
[*PE2-mpls] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] mpls
[*PE2-GigabitEthernet1/0/0] mpls te
[*PE2-GigabitEthernet1/0/0] mpls rsvp-te
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] mpls
[*PE2-GigabitEthernet3/0/0] mpls te
[*PE2-GigabitEthernet3/0/0] mpls rsvp-te
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
# Configure P1.
[~P1] ospf
[*P1-ospf-1] opaque-capability enable
[*P1-ospf-1] area 0.0.0.0
[*P1-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[*P1-ospf-1-area-0.0.0.0] network 10.1.4.0 0.0.0.255
[*P1-ospf-1-area-0.0.0.0] network 10.1.2.0 0.0.0.255
[*P1-ospf-1-area-0.0.0.0] mpls-te enable
[*P1-ospf-1-area-0.0.0.0] quit
[*P1-ospf-1] quit
[*P1] commit
# Configure P2.
[~P2] ospf
[*P2-ospf-1] opaque-capability enable
[*P2-ospf-1] area 0.0.0.0
[*P2-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[*P2-ospf-1-area-0.0.0.0] network 10.1.3.0 0.0.0.255
[*P2-ospf-1-area-0.0.0.0] network 10.1.5.0 0.0.0.255
[*P2-ospf-1-area-0.0.0.0] mpls-te enable
[*P2-ospf-1-area-0.0.0.0] quit
[*P2-ospf-1] quit
[*P2] commit
# Configure PE2.
[~PE2] ospf
[*PE2-ospf-1] opaque-capability enable
[*PE2-ospf-1] area 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] network 4.4.4.9 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] network 10.1.4.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 10.1.5.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] mpls-te enable
[*PE2-ospf-1-area-0.0.0.0] quit
[*PE2-ospf-1] quit
[*PE2] commit
# Configure PE1.
[~PE1] interface Tunnel 10
[*PE1-Tunnel10] ip address unnumbered interface loopback1
[*PE1-Tunnel10] tunnel-protocol mpls te
[*PE1-Tunnel10] destination 4.4.4.9
[*PE1-Tunnel10] mpls te tunnel-id 100
[*PE1-Tunnel10] mpls te service-class af1
[*PE1-Tunnel10] quit
[*PE1] interface Tunnel 11
[*PE1-Tunnel11] ip address unnumbered interface loopback1
[*PE1-Tunnel11] tunnel-protocol mpls te
[*PE1-Tunnel11] destination 4.4.4.9
[*PE1-Tunnel11] mpls te tunnel-id 200
[*PE1-Tunnel11] mpls te service-class af2
[*PE1-Tunnel11] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface Tunnel 10
[*PE2-Tunnel10] ip address unnumbered interface loopback1
[*PE2-Tunnel10] tunnel-protocol mpls te
[*PE2-Tunnel10] destination 1.1.1.9
[*PE2-Tunnel10] mpls te tunnel-id 100
[*PE2-Tunnel10] mpls te service-class af1
[*PE2-Tunnel10] quit
[*PE2] interface Tunnel 11
[*PE2-Tunnel11] ip address unnumbered interface loopback1
[*PE2-Tunnel11] tunnel-protocol mpls te
[*PE2-Tunnel11] destination 1.1.1.9
[*PE2-Tunnel11] mpls te tunnel-id 200
[*PE2-Tunnel11] mpls te service-class af2
[*PE2-Tunnel11] quit
[*PE2] commit
After completing the preceding configurations, run the display this interface
command in the tunnel interface view. The command output shows that Line
protocol current state is UP, indicating that an MPLS TE tunnel has been
established.
Run the display tunnel-info all command on PE1. The command output shows
that two TE tunnels destined for PE2 with the LSR ID of 4.4.4.9 have been
established. The command output on PE2 is similar to that on PE1.
<PE1> display tunnel-info all
Tunnel ID Type Destination Status
----------------------------------------------------------------------
0xc2060404 te 4.4.4.9 UP
0xc2060405 te 4.4.4.9 UP
# Configure PE2. Specify a physical interface on the transit P node as the first
next hop and a physical interface on PE1 as the second next hop to ensure that
the two tunnels are built over different links.
[~PE2] explicit-path t1
[*PE2-explicit-path-t1] next hop 10.1.4.1
[*PE2-explicit-path-t1] next hop 10.1.2.1
[*PE2-explicit-path-t1] quit
[*PE2] explicit-path t2
[*PE2-explicit-path-t2] next hop 10.1.5.1
[*PE2-explicit-path-t2] next hop 10.1.3.1
[*PE2-explicit-path-t2] quit
[*PE2] interface Tunnel 10
[*PE2-Tunnel10] mpls te path explicit-path t1
[*PE2-Tunnel10] quit
[*PE2] interface Tunnel 11
[*PE2-Tunnel11] mpls te path explicit-path t2
[*PE2-Tunnel11] quit
[*PE2] commit
# Configure PE1.
[~PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] mpls ldp remote-peer DTB1
[*PE1-mpls-ldp-remote-DTB1] remote-ip 4.4.4.9
[*PE1-mpls-ldp-remote-DTB1] quit
[*PE1] commit
# Configure PE2.
[~PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] mpls ldp remote-peer DTB2
[*PE2-mpls-ldp-remote-DTB2] remote-ip 1.1.1.9
[*PE2-mpls-ldp-remote-DTB2] quit
[*PE2] commit
After completing this step, run the display mpls ldp peer command. The command
output shows that a remote LDP session has been established between the two PEs.
The following example uses the command output on PE1.
<PE1> display mpls ldp peer
LDP Peer Information in Public network
An asterisk (*) before a peer means the peer is being deleted.
------------------------------------------------------------------------------
PeerID TransportAddress DiscoverySource
------------------------------------------------------------------------------
4.4.4.9:0 4.4.4.9 Remote Peer : DTB1
------------------------------------------------------------------------------
TOTAL: 1 Peer(s) Found.
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq cr-lsp load-balance-number 2
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
# Configure PE2.
[~PE2] mpls l2vpn
[*PE2-l2vpn] quit
[*PE2] commit
[~PE1-GigabitEthernet2/0/0.1] quit
# Configure PE2.
[~PE2] interface gigabitethernet2/0/0.1
[*PE2-GigabitEthernet2/0/0.1] vlan-type dot1q 10
[*PE2-GigabitEthernet2/0/0.1] mpls l2vc 1.1.1.9 1 tunnel-policy p1
[*PE2-GigabitEthernet2/0/0.1] trust upstream default
[*PE2-GigabitEthernet2/0/0.1] trust 8021p
[*PE2-GigabitEthernet2/0/0.1] undo shutdown
[*PE2-GigabitEthernet2/0/0.1] commit
[~PE2-GigabitEthernet2/0/0.1] quit
# Configure CE1.
[~CE1] interface gigabitethernet1/0/0.1
[*CE1-GigabitEthernet1/0/0.1] shutdown
[*CE1-GigabitEthernet1/0/0.1] vlan-type dot1q 10
[*CE1-GigabitEthernet1/0/0.1] ip address 10.1.1.1 255.255.255.0
[*CE1-GigabitEthernet1/0/0.1] undo shutdown
[*CE1-GigabitEthernet1/0/0.1] commit
[~CE1-GigabitEthernet1/0/0.1] quit
# Configure CE2.
[~CE2] interface gigabitethernet1/0/0.1
[*CE2-GigabitEthernet1/0/0.1] shutdown
[*CE2-GigabitEthernet1/0/0.1] vlan-type dot1q 10
[*CE2-GigabitEthernet1/0/0.1] ip address 10.1.1.2 255.255.255.0
[*CE2-GigabitEthernet1/0/0.1] undo shutdown
[*CE2-GigabitEthernet1/0/0.1] commit
[~CE2-GigabitEthernet1/0/0.1] quit
OAM Protocol : --
OAM Status : --
OAM Fault Type : --
TTL Value : --
link state : up
local VC MTU : 1500 remote VC MTU : 1500
local VCCV : alert ttl lsp-ping bfd
remote VCCV : alert ttl lsp-ping bfd
local control word : disable remote control word : disable
tunnel policy name : p1
PW template name : --
primary or secondary : primary
load balance type : flow
Access-port : false
Switchover Flag : false
VC tunnel info : 2 tunnels
NO.0 TNL type : te , TNL ID : 0x00000000030000000a
NO.1 TNL type : te , TNL ID : 0x000000000300000003
create time : 0 days, 0 hours, 9 minutes, 58 seconds
up time : 0 days, 0 hours, 7 minutes, 41 seconds
last change time : 0 days, 0 hours, 7 minutes, 41 seconds
VC last up time : 2014/05/23 10:13:29
VC total up time : 0 days, 0 hours, 7 minutes, 41 seconds
CKey :1
NKey : 989855833
PW redundancy mode : frr
AdminPw interface : --
AdminPw link state : --
Diffserv Mode : uniform
Service Class : --
Color : --
DomainId : --
Domain Name : --
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path t1
next hop 10.1.2.2
next hop 10.1.4.2
#
explicit-path t2
next hop 10.1.3.2
next hop 10.1.5.2
#
mpls l2vpn
#
mpls ldp
#
ipv4-family
#
mpls ldp remote-peer DTB1
remote-ip 4.4.4.9
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.4.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.2.0 0.0.0.255
network 10.1.4.0 0.0.0.255
mpls-te enable
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.3.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.5.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.1.3.0 0.0.0.255
network 10.1.5.0 0.0.0.255
mpls-te enable
#
return
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 4.4.4.9
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path t1
next hop 10.1.4.1
#
sysname CE1
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
return
Networking Requirements
In Figure 1-40, CE1 and CE2 belong to the same VPLS network. They access the
MPLS backbone network through PE1 and PE2, respectively. OSPF is used as an
IGP on the MPLS backbone network.
Configure LDP VPLS and use the dynamic signaling protocol RSVP-TE to establish
two MPLS TE tunnels between PE1 and PE2 to transmit VPLS services. Each TE
tunnel is assigned a specific priority. Interfaces that receive VPLS
packets have behavior aggregate classification enabled and trust 802.1p priority
values so that they can forward VPLS packets with a specific priority to a specific
tunnel.
TE1 tunnel with ID 100 is established over the path PE1 –> P1 –> PE2, and TE2
tunnel with ID 200 is established over the path PE1 –> P2 –> PE2. AF1 is
configured for TE1 tunnel, and AF2 is configured for TE2 tunnel. This configuration
allows PE1 to forward traffic with service class AF1 along the TE1 tunnel and
traffic with service class AF2 along the TE2 tunnel. The two tunnels can load-
balance traffic based on priority values. The requirements of PE2 are similar to
those of PE1.
Note that if multiple tunnels with AF1 are established between PE1 and PE2,
packets mapped to AF1 are load-balanced among these tunnels.
NOTICE
If the CBTS function is configured, you are advised not to configure the following
services at the same time:
● Dynamic load balancing
Configuration Roadmap
The configuration roadmap is as follows:
● Enable a routing protocol on the MPLS backbone network devices (PEs and
Ps) for them to communicate with each other and enable MPLS.
● Establish MPLS TE tunnels and configure a tunnel policy.
● Enable MPLS Layer 2 virtual private network (L2VPN) on the PEs.
● Create a virtual switching instance (VSI), configure LDP as a signaling
protocol, and bind the VSI to an AC interface on each PE.
● Configure MPLS TE tunnels to transmit VSI packets.
Data Preparation
To complete the configuration, you need the following data:
● OSPF area enabled with TE
● VSI name and VSI ID
● IP addresses of peers and tunnel policy
● Names of AC interfaces bound to the VSI
● Interface number and IP address of each tunnel interface, as well as
destination IP address, tunnel ID, tunnel signaling protocol (RSVP-TE), and
tunnel bandwidth to be specified on each tunnel interface
Procedure
Step 1 Assign an IP address to each interface on the backbone network. For configuration
details, see Configuration Files in this section.
Step 2 Enable MPLS, MPLS TE, MPLS RSVP-TE, and MPLS CSPF.
On the nodes along each MPLS TE tunnel, enable MPLS, MPLS TE, and MPLS
RSVP-TE both in the system view and the interface view. On the ingress node of
each tunnel, enable MPLS CSPF in the system view.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] mpls rsvp-te
[*PE1-mpls] mpls te cspf
[*PE1-mpls] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] mpls
[*PE1-GigabitEthernet1/0/0] mpls te
[*PE1-GigabitEthernet1/0/0] mpls rsvp-te
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] mpls
[*PE1-GigabitEthernet3/0/0] mpls te
[*PE1-GigabitEthernet3/0/0] mpls rsvp-te
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] mpls rsvp-te
[*P1-mpls] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] mpls
[*P1-GigabitEthernet1/0/0] mpls te
[*P1-GigabitEthernet1/0/0] mpls rsvp-te
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] mpls
[*P1-GigabitEthernet2/0/0] mpls te
[*P1-GigabitEthernet2/0/0] mpls rsvp-te
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure P2.
[~P2] mpls lsr-id 3.3.3.9
[*P2] mpls
[*P2-mpls] mpls te
[*P2-mpls] mpls rsvp-te
[*P2-mpls] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] mpls
[*P2-GigabitEthernet1/0/0] mpls te
[*P2-GigabitEthernet1/0/0] mpls rsvp-te
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] mpls
[*P2-GigabitEthernet2/0/0] mpls te
[*P2-GigabitEthernet2/0/0] mpls rsvp-te
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
# Configure PE2.
[~PE2] mpls lsr-id 4.4.4.9
[*PE2] mpls
[*PE2-mpls] mpls te
# Configure PE1.
[~PE1] ospf
[*PE1-ospf-1] opaque-capability enable
[*PE1-ospf-1] area 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.2.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 10.1.3.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] mpls-te enable
[*PE1-ospf-1-area-0.0.0.0] quit
[*PE1-ospf-1] quit
[*PE1] commit
# Configure P1.
[~P1] ospf
[*P1-ospf-1] opaque-capability enable
[*P1-ospf-1] area 0.0.0.0
[*P1-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[*P1-ospf-1-area-0.0.0.0] network 10.1.4.0 0.0.0.255
[*P1-ospf-1-area-0.0.0.0] network 10.1.2.0 0.0.0.255
[*P1-ospf-1-area-0.0.0.0] mpls-te enable
[*P1-ospf-1-area-0.0.0.0] quit
[*P1-ospf-1] quit
[*P1] commit
# Configure P2.
[~P2] ospf
[*P2-ospf-1] opaque-capability enable
[*P2-ospf-1] area 0.0.0.0
[*P2-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[*P2-ospf-1-area-0.0.0.0] network 10.1.3.0 0.0.0.255
[*P2-ospf-1-area-0.0.0.0] network 10.1.5.0 0.0.0.255
[*P2-ospf-1-area-0.0.0.0] mpls-te enable
[*P2-ospf-1-area-0.0.0.0] quit
[*P2-ospf-1] quit
[*P2] commit
# Configure PE2.
[~PE2] ospf
[*PE2-ospf-1] opaque-capability enable
[*PE2-ospf-1] area 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] network 4.4.4.9 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] network 10.1.4.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 10.1.5.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] mpls-te enable
[*PE2-ospf-1-area-0.0.0.0] quit
[*PE2-ospf-1] quit
[*PE2] commit
# Configure PE2.
[~PE2] interface Tunnel 10
[*PE2-Tunnel10] ip address unnumbered interface loopback1
[*PE2-Tunnel10] tunnel-protocol mpls te
[*PE2-Tunnel10] destination 1.1.1.9
[*PE2-Tunnel10] mpls te tunnel-id 100
[*PE2-Tunnel10] mpls te service-class af1
[*PE2-Tunnel10] quit
[*PE2] interface Tunnel 20
[*PE2-Tunnel20] ip address unnumbered interface loopback1
[*PE2-Tunnel20] tunnel-protocol mpls te
[*PE2-Tunnel20] destination 1.1.1.9
[*PE2-Tunnel20] mpls te tunnel-id 200
[*PE2-Tunnel20] mpls te service-class af2
[*PE2-Tunnel20] quit
[*PE2] commit
After completing the preceding configurations, run the display this interface
command in the tunnel interface view. The command output shows that Line
protocol current state is UP, indicating that an MPLS TE tunnel has been
established.
Run the display tunnel-info all command on PE1. The command output shows
that two TE tunnels destined for PE2 with the LSR ID of 4.4.4.9 have been
established. The command output on PE2 is similar to that on PE1.
<PE1> display tunnel-info all
Tunnel ID Type Destination Status
----------------------------------------------------------------------
0xc2060404 te 4.4.4.9 UP
0xc2060405 te 4.4.4.9 UP
# Configure PE1. Specify a physical interface on the transit P node as the first
next hop and a physical interface on PE2 as the second next hop to ensure that
the two tunnels are built over different links.
[~PE1] explicit-path t1
[*PE1-explicit-path-t1] next hop 10.1.2.2
[*PE1-explicit-path-t1] next hop 10.1.4.2
[*PE1-explicit-path-t1] quit
[*PE1] explicit-path t2
[*PE1-explicit-path-t2] next hop 10.1.3.2
[*PE1-explicit-path-t2] next hop 10.1.5.2
[*PE1-explicit-path-t2] quit
[*PE1] interface Tunnel 10
[*PE1-Tunnel10] mpls te path explicit-path t1
[*PE1-Tunnel10] quit
[*PE1] interface Tunnel 20
[*PE1-Tunnel20] mpls te path explicit-path t2
[*PE1-Tunnel20] quit
[*PE1] commit
# Configure PE2. Specify a physical interface on the transit P node as the first
next hop and a physical interface on PE1 as the second next hop to ensure that
the two tunnels are built over different links.
[~PE2] explicit-path t1
[*PE2-explicit-path-t1] next hop 10.1.4.1
[*PE2-explicit-path-t1] next hop 10.1.2.1
[*PE2-explicit-path-t1] quit
[*PE2] explicit-path t2
[*PE2-explicit-path-t2] next hop 10.1.5.1
[*PE2-explicit-path-t2] next hop 10.1.3.1
[*PE2-explicit-path-t2] quit
[*PE2] interface Tunnel 10
[*PE2-Tunnel10] mpls te path explicit-path t1
[*PE2-Tunnel10] quit
[*PE2] interface Tunnel 20
[*PE2-Tunnel20] mpls te path explicit-path t2
[*PE2-Tunnel20] quit
[*PE2] commit
# Configure PE1.
[~PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] mpls ldp remote-peer DTB1
[*PE1-mpls-ldp-remote-DTB1] remote-ip 4.4.4.9
[*PE1-mpls-ldp-remote-DTB1] quit
[*PE1] commit
# Configure PE2.
[~PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] mpls ldp remote-peer DTB2
[*PE2-mpls-ldp-remote-DTB2] remote-ip 1.1.1.9
[*PE2-mpls-ldp-remote-DTB2] quit
[*PE2] commit
After completing this step, run the display mpls ldp peer command. The command
output shows that a remote LDP session has been established between the two PEs.
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq cr-lsp load-balance-number 2
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
# Configure PE2.
[~PE2] mpls l2vpn
[*PE2-l2vpn] quit
[*PE2] commit
# Configure PE2.
[~PE2] vsi a2 static
[*PE2-vsi-a2] pwsignal ldp
[*PE2-vsi-a2-ldp] vsi-id 2
[*PE2-vsi-a2-ldp] peer 1.1.1.9 tnl-policy p1
[*PE2-vsi-a2-ldp] quit
[*PE2-vsi-a2] quit
[*PE2] commit
# Configure PE2.
[~PE2] interface gigabitethernet2/0/0.1
[*PE2-GigabitEthernet2/0/0.1] vlan-type dot1q 10
# Configure CE1.
[~CE1] interface gigabitethernet1/0/0.1
[*CE1-GigabitEthernet1/0/0.1] shutdown
[*CE1-GigabitEthernet1/0/0.1] vlan-type dot1q 10
[*CE1-GigabitEthernet1/0/0.1] ip address 10.1.1.1 255.255.255.0
[*CE1-GigabitEthernet1/0/0.1] undo shutdown
[*CE1-GigabitEthernet1/0/0.1] commit
[~CE1-GigabitEthernet1/0/0.1] quit
# Configure CE2.
[~CE2] interface gigabitethernet1/0/0.1
[*CE2-GigabitEthernet1/0/0.1] shutdown
[*CE2-GigabitEthernet1/0/0.1] vlan-type dot1q 10
[*CE2-GigabitEthernet1/0/0.1] ip address 10.1.1.2 255.255.255.0
[*CE2-GigabitEthernet1/0/0.1] undo shutdown
[*CE2-GigabitEthernet1/0/0.1] commit
[~CE2-GigabitEthernet1/0/0.1] quit
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path t1
next hop 10.1.2.2
next hop 10.1.4.2
#
explicit-path t2
next hop 10.1.3.2
next hop 10.1.5.2
#
mpls l2vpn
#
mpls ldp
#
ipv4-family
#
mpls ldp remote-peer DTB1
remote-ip 4.4.4.9
#
vsi a2 static
pwsignal ldp
vsi-id 2
peer 4.4.4.9 tnl-policy p1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.2.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi a2
trust upstream default
trust 8021p
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.1.3.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.9
mpls te path explicit-path t1
mpls te tunnel-id 100
mpls te service-class af1
#
interface Tunnel20
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.9
mpls te path explicit-path t2
mpls te tunnel-id 200
mpls te service-class af2
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 10.1.2.0 0.0.0.255
network 10.1.3.0 0.0.0.255
mpls-te enable
#
tunnel-policy p1
tunnel select-seq cr-lsp load-balance-number 2
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.2.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.4.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.2.0 0.0.0.255
network 10.1.4.0 0.0.0.255
mpls-te enable
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface GigabitEthernet1/0/0
undo shutdown
trust 8021p
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.1.5.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.9
mpls te path explicit-path t1
mpls te tunnel-id 100
mpls te service-class af1
#
interface Tunnel20
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.9
mpls te path explicit-path t2
mpls te tunnel-id 200
mpls te service-class af2
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 10.1.4.0 0.0.0.255
network 10.1.5.0 0.0.0.255
mpls-te enable
#
tunnel-policy p1
tunnel select-seq cr-lsp load-balance-number 2
#
return
1.1.4.1 Overview
Multiprotocol Label Switching (MPLS) Label Distribution Protocol (LDP) is widely
used for transmitting virtual private network (VPN) services. MPLS LDP
networking and configurations are simple. MPLS LDP supports route-driven
establishment of a large number of label switched paths (LSPs).
On an MPLS network, LDP distributes label mappings and establishes LSPs. LDP
sends multicast Hello messages to discover local peers and set up local peer
relationships. Alternatively, LDP sends unicast Hello messages to discover
remote peers and set up remote peer relationships.
Two LDP peers establish a TCP connection, negotiate LDP parameters over the
connection, and set up an LDP session. They then exchange messages over the
session to establish LSPs.
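As a minimal sketch, the LDP enablement pattern used throughout this guide is as follows; the LSR ID, interface number, and addresses here are placeholders, not values from a specific example:

```
# Enable MPLS and LDP globally, then on each LDP-capable interface.
<HUAWEI> system-view
[~HUAWEI] mpls lsr-id 1.1.1.1
[*HUAWEI] mpls
[*HUAWEI-mpls] quit
[*HUAWEI] mpls ldp
[*HUAWEI-mpls-ldp] quit
[*HUAWEI] interface gigabitethernet 1/0/0
[*HUAWEI-GigabitEthernet1/0/0] mpls
[*HUAWEI-GigabitEthernet1/0/0] mpls ldp
[*HUAWEI-GigabitEthernet1/0/0] commit
```

A remote LDP peer, for example one used for L2VPN signaling, is configured separately with the mpls ldp remote-peer and remote-ip commands, as shown in the preceding examples.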
Context
The establishment of static LSPs does not require a label distribution protocol or
exchange of control packets. As such, static LSPs consume fewer resources and are
applicable to small networks with a simple and stable topology. Static LSPs cannot
dynamically adapt to network topology changes. Once the network topology
changes, an administrator must modify the configuration on every LSR along each
affected LSP so that the LSPs can continue to work properly.
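As a hedged sketch of the hop-by-hop configuration described above, a static LSP is built with the static-lsp ingress, static-lsp transit, and static-lsp egress commands on the respective nodes; the device names, addresses, labels, and interface numbers below are hypothetical:

```
# Ingress LSR: map the FEC 3.3.3.3/32 to outgoing label 100.
[~LSRA] static-lsp ingress lsp1 destination 3.3.3.3 32 nexthop 10.1.1.2 out-label 100
# Transit LSR: swap incoming label 100 for outgoing label 200.
[~LSRB] static-lsp transit lsp1 incoming-interface gigabitethernet 1/0/0 in-label 100 nexthop 10.2.1.2 out-label 200
# Egress LSR: terminate the LSP by popping incoming label 200.
[~LSRC] static-lsp egress lsp1 incoming-interface gigabitethernet 1/0/0 in-label 200
```

Because labels are assigned manually, the out-label configured on each LSR must match the in-label expected by the next LSR along the path.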
Pre-configuration Tasks
Before configuring a static LSP, configure a unicast static route or an IGP to
implement network connectivity between LSRs.
Context
Perform the following steps on each LSR in an MPLS domain:
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Procedure
Step 1 Run system-view
NOTE
You are advised to specify a next hop for a static LSP. Ensure that the local routing table
contains a routing entry that exactly matches the specified destination IP address and next-
hop IP address.
If an Ethernet interface is used as an outbound interface of an LSP, you must specify the
nexthop next-hop-address parameter to ensure normal traffic forwarding on the LSP.
----End
Procedure
Step 1 Run system-view
NOTE
You are advised to specify the next hop when configuring a static LSP, so that the local
routing table will contain the routing entry that exactly matches the specified next hop IP
address.
If an Ethernet interface is used as an outbound interface, the nexthop next-hop-address
parameter must be configured to ensure normal traffic forwarding on the LSP.
----End
Procedure
Step 1 Run system-view
----End
Prerequisites
The configurations of a static LSP are complete.
Procedure
● Run the display mpls static-lsp [ lsp-name ] [ { include | exclude } ip-
address mask-length ] [ verbose ] command to check information about local
static LSPs.
● Run the display mpls lsp protocol static [ { include | exclude } destaddr
masklen ] [ incoming-interface in-port-type in-port-num ] [ outgoing-
interface out-port-type out-port-num ] [ in-label in-label-value ] [ out-label
Usage Scenario
An LDP session is established over a TCP connection. After the TCP connection is
set up, LSRs negotiate parameters of the LDP session. If the negotiation is
successful, an LDP session can be established.
After the local LDP session is established, LSRs assign labels to establish an LDP
LSP.
When LDP LSPs carry Layer 2 virtual private network (L2VPN) and Layer 3 virtual
private network (L3VPN) services, you can specify an LSR ID for each local LDP
session on the current LSR to isolate VPN services.
Pre-configuration Tasks
Before configuring a local LDP session, complete the following task:
● Configure static routes or an IGP to ensure IP route reachability among nodes.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls lsr-id lsr-id
An LSR ID is set for the local node.
When configuring an LSR ID, note the following:
● LSR IDs must be set before you run other MPLS commands.
● LSR IDs can only be manually configured, and do not have default values.
● Using the address of a loopback interface as the LSR ID is recommended.
NOTICE
Running the undo mpls command deletes all MPLS configurations, including
established LDP sessions and LSPs.
An MPLS LSR ID is usually used as the LSR ID of an LDP instance. When VPN
instances are used, such as a BGP/MPLS VPN, if the VPN address space and public
network address space overlap, set LSR IDs for LDP instances so that TCP
connections for LDP sessions can be properly established.
----End
Procedure
Step 1 Run system-view
NOTE
Disabling MPLS LDP from an interface leads to interruptions of all LDP sessions on the
interface and deletions of all LSPs established over these LDP sessions.
----End
Context
By default, all LDP sessions of an LSR, including local LDP sessions and remote
LDP sessions, use the LSR ID of the LDP instance configured on the LSR. However,
if LDP LSPs carry L2VPN and L3VPN services, sharing one LSR ID may cause LDP
LSPs to fail to isolate VPN services. To address this problem, you can configure an
LSR ID for each LDP session.
This section describes how to configure an LSR ID for a local LDP session.
Procedure
Step 1 Run system-view
The primary IP address of a specified interface is used as the LSR ID for the
current LDP session.
When specifying a per-session LSR ID, note the following:
If multiple links directly connect an LSR pair, the LSR ID configured on the
interface of each link must be the same. Otherwise, the LDP session uses the LSR
ID of the link that first finds the adjacency, while other links with different LSR IDs
cannot be bound to the LDP session. As a result, LDP LSPs fail to be established on
these links.
If both a local session and a remote LDP session are to be established between an
LSR pair, LSR IDs configured for the two sessions must be the same. Otherwise,
only the LDP session that finds the adjacency first can be established.
NOTE
To establish an LDP session between two devices, a TCP link must be established between
them. This link is called an adjacency. After the adjacency is established, the two devices
can exchange LDP control messages to establish an LDP session. If there is only one
adjacency in an LDP session, the LDP session is called a single-link session. If there are
multiple adjacencies in an LDP session, the LDP session is called a multi-link session.
Running this command causes a single-link LDP session to reset or causes the current
adjacency of a multi-link LDP session to reset.
----End
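Combining the steps above, a sketch of binding a per-session LSR ID to an LDP-
enabled interface might look as follows (the interface names and addresses are
assumptions; the mpls ldp local-lsr-id command in the interface view mirrors the
remote-peer-view form shown later in this document):

```
#
interface LoopBack1
 ip address 2.2.2.9 255.255.255.255
#
interface GigabitEthernet0/1/0
 ip address 10.1.1.1 255.255.255.0
 mpls
 mpls ldp
 mpls ldp local-lsr-id LoopBack1
#
```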
Context
An LDP transport address is used to set up the TCP connection between LDP
peers, and each peer must have a reachable route to the other peer's transport
address. By default, the LSR ID, which is usually a loopback interface address,
serves as the LDP transport address.
NOTE
● The LDP sessions over multiple links between two LSRs can be established using the
same pair of transport addresses.
● A change in an LDP transport address will terminate the associated LDP session. Exercise
caution when configuring an LDP transport address.
● The default LDP transport address is recommended.
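As a hedged sketch, a non-default transport address can be specified in the
interface view (the mpls ldp transport-address command form and the interface
names are assumptions to verify):

```
#
interface GigabitEthernet0/1/0
 mpls
 mpls ldp
 mpls ldp transport-address LoopBack1
#
```

Both ends must have reachable routes to each other's transport addresses;
otherwise, the TCP connection for the session cannot be established.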
Procedure
Step 1 Run system-view
----End
Context
The following timers are used in a local LDP session:
Procedure
● Configure a link Hello send timer.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The view of the interface on which an LDP session is to be established is
displayed.
c. Run mpls ldp timer hello-send interval
A link Hello send timer is configured.
Effective link Hello send timer value = Min{Configured link Hello send
timer value, 1/3 of the link Hello hold timer value}
d. Run commit
The configuration is committed.
● Configure a link Hello hold timer.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The view of the interface on which an LDP session is to be established is
displayed.
c. Run mpls ldp timer hello-hold interval
A link Hello hold timer is configured.
If a link Hello hold timer is configured on each end of a local LDP session,
the smaller value takes effect.
NOTE
The timer must be longer than the time a device takes to perform a master/slave
main control board switchover. If the timer is set to less than the switchover time,
a protocol intermittent interruption occurs during a switchover. The default timer
value is recommended.
d. Run commit
NOTE
The timer must be longer than the time a device takes to perform a master/slave
main control board switchover. If the timer is set to less than the switchover time,
a protocol intermittent interruption occurs during a switchover. The default timer
value is recommended.
d. Run commit
a. Run system-view
----End
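For example, the two link Hello timers can be set together in the interface view
(the interface name and timer values are illustrative):

```
#
interface GigabitEthernet0/1/0
 mpls
 mpls ldp
 mpls ldp timer hello-send 5
 mpls ldp timer hello-hold 15
#
```

With these values, the effective send interval is Min{5, 15/3} = 5 seconds, and the
hold time that takes effect is the smaller of the values configured on the two
ends of the session.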
Usage Scenario
The NE9000 does not support LDP loop detection. To establish an LDP session
with a device enabled with LDP loop detection, the NE9000 needs to be enabled
with the capability of negotiating LDP loop detection.
Procedure
Step 1 Run system-view
LDP loop detection negotiation is enabled. This allows the device to negotiate LDP
parameters during the initialization phase and establish an LDP session with a
peer device that is enabled with LDP loop detection.
NOTE
After the loop-detect command is run, the NE9000 obtains the capability of negotiating
LDP loop detection but still does not support LDP loop detection.
----End
Prerequisites
The local MPLS LDP session has been established.
Procedure
● Run the display mpls ldp [ all ] [ verbose ] command to check LDP
information.
● Run the display mpls ldp interface [ interface-type interface-number |
verbose | all ] command to check information about LDP-enabled interfaces.
● Run the display mpls ldp session [ verbose | peer-id | all ] command to
check the status of an LDP session.
● Run the display mpls ldp adjacency [ interface interface-type interface-
number | remote ] [ peer peer-id ] [ verbose ] command to check
information about LDP adjacencies.
● Run the display mpls ldp peer [ verbose | peer-id | all ] command to check
the peers of an LDP session.
● Run the display mpls interface [ interface-type interface-number ]
[ verbose ] command to check information about an MPLS-enabled interface.
----End
Usage Scenario
Remote LDP sessions are used in LDP over TE and L2VPN scenarios:
● LDP over TE: If the core area on an MPLS network supports TE and the edge
devices run LDP, two LSRs on the edge establish a remote LDP session. LDP
over TE allows a TE tunnel to function as a hop on an LDP LSP.
● L2VPN: Devices exchange protocol packets over an LDP session. If the devices
are indirectly connected, a remote LDP session must be configured. However,
no remote LDP session needs to be configured for a static PW.
Pre-configuration Tasks
Before configuring a remote LDP session, complete the following task:
● Configure static routes or an IGP to ensure IP route reachability among nodes.
Procedure
Step 1 Run system-view
NOTICE
Running the undo mpls command deletes all MPLS configurations, including
established LDP sessions and LSPs.
An MPLS LSR ID is usually used as the LSR ID of an LDP instance. When VPN
instances are used, such as a BGP/MPLS VPN, if the VPN address space and public
network address space overlap, set LSR IDs for LDP instances so that TCP
connections for LDP sessions can be properly established.
----End
Context
A remote LDP session can be established between nonadjacent LSRs or between
adjacent LSRs.
A local LDP session and a remote LDP session can be configured together between
the same two LSRs.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp remote-peer remote-peer-name
A remote MPLS LDP peer is created, and the remote MPLS LDP peer view is
displayed.
Step 3 (Optional) Run description description-value
The description of a remote peer is configured.
Step 4 Run remote-ip ip-address
The IP address is assigned to the remote MPLS LDP peer.
The IP address must be the LSR ID of the remote MPLS LDP peer.
NOTE
● The IP address of a remote LDP peer must be the LSR ID of the remote LDP peer. If an
LDP LSR ID is different from an MPLS LSR ID, the LDP LSR ID is used.
● Modifying or deleting a configured IP address of a remote peer also deletes the remote
LDP session.
Step 5 (Optional) Perform either of the following operations to prevent label distribution
to remote LDP peers:
● Run remote-ip ip-address pwe3
The device is disabled from distributing labels to a specified remote MPLS LDP
peer.
● Run the following commands to prevent labels from being distributed to all
remote MPLS LDP peers.
a. (Optional) Run clear remote-ip pwe3
The explicit configuration that enables or disables label distribution to a
specified remote LDP peer is deleted.
Perform this step if the device has been explicitly configured to distribute
labels to a specified remote LDP peer and you want to delete that
configuration.
b. Run quit
Return to the system view.
c. Run mpls ldp
The MPLS-LDP view is displayed.
d. Run remote-peer pwe3
The device is disabled from distributing labels to all remote MPLS LDP
peers.
NOTE
When a remote LDP session provides VPN services, run the preceding commands to prohibit
labels from being distributed to the remote MPLS LDP peers, which helps efficiently use
system resources. When TE services are transmitted over a backbone network in the LDP
over TE scenario, do not perform this configuration.
----End
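Putting the steps together, a minimal remote peer definition might look as
follows (the peer name and LSR ID are illustrative assumptions):

```
#
mpls ldp remote-peer to-pe3
 description remote session to PE3
 remote-ip 3.3.3.9
#
```

To stop distributing labels to this peer in an L2VPN scenario, remote-ip 3.3.3.9
pwe3 could be configured instead of remote-ip 3.3.3.9.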
Context
A remote LDP session can be established between two indirectly connected LSRs
or between two adjacent LSRs, and a local LDP session and a remote LDP session
can coexist between the same two LSRs. In this case, both sessions use the same
LSR ID by default, so the L2VPN/L3VPN services that pass through the LSPs
between the two LSRs cannot be isolated from each other. To address this
problem, you can specify a local LSR ID for each LDP session.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp remote-peer remote-peer-name
The remote MPLS-LDP peer view is displayed.
Step 3 Run mpls ldp local-lsr-id interface-type interface-number
The primary IP address of a specified interface is used as a local LSR ID for the
current LDP session.
If both a local and a remote LDP session are to be established between an LSR
pair, the LSR IDs configured for the two sessions must be the same. Otherwise,
only the LDP session that finds the adjacency first can be established.
NOTE
Execution of this command resets the current remote LDP session. The reset
remote LDP session uses the new LSR ID.
----End
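For example, assuming a loopback interface LoopBack2 is reserved for this
session (the interface, addresses, and peer name are illustrative), the
configuration might be:

```
#
interface LoopBack2
 ip address 2.2.2.10 255.255.255.255
#
mpls ldp remote-peer to-pe3
 remote-ip 3.3.3.9
 mpls ldp local-lsr-id LoopBack2
#
```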
Context
The following timers are used in a remote LDP session:
● Target Hello send timer: An LSR sends Hello messages to a peer LSR at an
interval specified by the Hello send timer. The LSR can advertise its existence
and establish a Hello adjacency with the peer LSR.
● Target Hello hold timer: LDP peers that establish a Hello adjacency
periodically exchange Hello messages indicating that they expect to maintain
the adjacency. If the Hello hold timer expires and no Hello messages are
received, the Hello adjacency is torn down.
● Keepalive send timer: LSRs on both ends of an established LDP session start
Keepalive send timers and periodically exchange Keepalive messages to
maintain the LDP session.
● Keepalive hold timer: LDP peers start Keepalive hold timers and periodically
send LDP PDUs over an LDP session connection to maintain the LDP session.
If the Keepalive hold timers expire and no LDP PDUs are received, the
connection is closed, and the LDP session is torn down.
● Exponential backoff timer: An active LSR starts this timer after it fails to
process an LDP Initialization message or after it receives the notification that
the passive LSR to which the active LSR sends the LDP Initialization message
has rejected the parameters carried in the message. The active LSR
periodically resends an LDP Initialization message to initiate an LDP session
before the Exponential backoff timer expires.
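As an illustrative sketch, the targeted Hello and Keepalive timers might be tuned
in the remote MPLS-LDP peer view (the peer name, address, values, and exact
mpls ldp timer keyword forms are assumptions to verify against the command
reference):

```
#
mpls ldp remote-peer to-pe3
 remote-ip 3.3.3.9
 mpls ldp timer hello-send 15
 mpls ldp timer hello-hold 45
 mpls ldp timer keepalive-hold 45
#
```

As with local sessions, the Hello and Keepalive hold values that take effect are
the smaller of the values configured on the two ends.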
Procedure
● Configure a target Hello send timer.
a. Run system-view
Effective target Hello send timer value = Min {Configured target Hello
send timer value, 1/3 of the target Hello hold timer value}
d. Run commit
The value of the Hello hold timer configured on the local LSR may not be
the actual effective value. The actual effective value is the smaller of the
two values configured on the two ends of a remote LDP session.
NOTE
The configured timer value must be greater than or equal to the time required
for an active/standby switchover. Otherwise, protocol flapping may occur during
an active/standby switchover. The default value is recommended.
d. Run commit
The Keepalive send timer value is set for a remote LDP session.
The Keepalive hold timer value is set for the remote LDP session.
The value of the Keepalive hold timer configured on the local LSR may
not be the actual effective value. The actual effective value is the smaller
of the two values configured on the two ends of a remote LDP session.
NOTE
The configured timer value must be greater than or equal to the time required
for an active/standby switchover. Otherwise, protocol flapping may occur during
an active/standby switchover. The default value is recommended.
d. Run commit
The global Keepalive hold timer value is set for the remote LDP session.
The value of the Keepalive hold timer configured on the local LSR may
not be the actual effective value. The actual effective value is the smaller
of the two values configured on the two ends of a remote LDP session.
NOTE
The configured timer value must be greater than or equal to the time required
for an active/standby switchover. Otherwise, protocol flapping may occur during
an active/standby switchover. The default value is recommended.
The value of the timer that takes effect is the smaller of the values of the two
Keepalive hold timers configured on both ends of a remote LDP session. The
Keepalive hold timer configured in the remote MPLS-LDP peer view takes
precedence over the global Keepalive hold timer. If the Keepalive hold timer is
configured both globally and in the remote MPLS-LDP peer view, the Keepalive
hold timer configured in the remote MPLS-LDP peer view takes effect.
d. Run commit
e. Run commit
----End
Usage Scenario
The NE9000 does not support LDP loop detection. To establish an LDP session
with a device enabled with LDP loop detection, the NE9000 needs to be enabled
with the capability of negotiating LDP loop detection.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 Run loop-detect
LDP loop detection negotiation is enabled. This allows the device to negotiate LDP
parameters during the initialization phase and establish an LDP session with a
peer device that is enabled with LDP loop detection.
NOTE
After the loop-detect command is run, the NE9000 obtains the capability of negotiating
LDP loop detection but still does not support LDP loop detection.
----End
Prerequisites
A remote MPLS LDP session has been established.
Procedure
● Run the display mpls ldp [ all ] [ verbose ] command to check LDP
information.
● Run one of the following commands to check the LDP session status:
– display mpls ldp session [ verbose | peer-id ]
– display mpls ldp session [ all ] [ verbose ]
● Run the display mpls ldp adjacency [ interface interface-type interface-
number | remote ] [ peer peer-id ] [ verbose ] command to check
information about LDP adjacencies.
● Run one of the following commands to check information about the peer of
an LDP session:
– display mpls ldp peer [ verbose | peer-id ]
– display mpls ldp peer [ all ] [ verbose ]
● Run the display mpls ldp remote-peer [ remote-peer-name | peer-id peer-
id ] command to check information about the remote peer of an LDP session.
----End
Usage Scenario
On a device on which dynamic LDP capability advertisement is disabled, enabling
an extended LDP function after an LDP session is created interrupts the session
so that the function can be negotiated, affecting LSP stability. After dynamic
LDP capability advertisement is enabled, LDP features that support this
capability can be dynamically enabled or disabled without interrupting sessions,
improving LSP stability.
NOTE
The dynamic LDP advertisement capability does not affect existing LDP functions. You are
advised to enable this function immediately after LDP is enabled globally, as it facilitates
dynamic advertisement of new extended functions.
Before enabling the dynamic LDP advertisement capability, enable MPLS and MPLS LDP
globally.
Pre-configuration Tasks
Before configuring the dynamic LDP advertisement capability, complete the
following task:
Procedure
Step 1 Run system-view
NOTE
Enabling dynamic LDP advertisement after an LDP session is established will result in
reestablishment of the LDP session.
----End
Usage Scenario
LDP can dynamically establish LSPs. An LDP LSP can be set up without specifying
the nodes along the path or deploying traffic engineering (TE) on the MPLS
network.
The maximum number of LSPs varies with the capacity and performance of a
device. If too many LSPs are configured on a device, the device may operate
unstably.
An LSP can be established only when eligible routes exist on LSRs and match the
LSP setup policy. LDP can only use routes that match a specified policy to set up
LSPs, which helps control the number of LSPs.
The NE9000 provides the following policies for controlling the number of LSPs:
● Policies for establishing ingress or egress LSPs are as follows:
– LDP uses all IGP routes to establish LSPs.
– LDP uses host routes to establish LSPs.
– LDP uses an IP prefix list to establish LSPs.
– LDP does not establish LSPs.
● To control the number of transit LSPs on a transit LSR, an IP prefix list can be
used to filter routes, and only the routes matching the filtering policy can be
used to establish transit LSPs.
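For illustration, the following sketch restricts LSP establishment to routes
matching an IP prefix list (the prefix-list name and address are assumptions; the
lsp-trigger and propagate mapping command forms should be verified against the
command reference):

```
#
ip ip-prefix ldp-fec index 10 permit 3.3.3.9 32
#
mpls
 lsp-trigger ip-prefix ldp-fec
#
mpls ldp
 propagate mapping for ip-prefix ldp-fec
#
```

Here lsp-trigger controls ingress and egress LSPs, while propagate mapping
filters the routes used to establish transit LSPs.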
To correctly select a path maximum transmission unit (MTU), an LSR must obtain
the MTU of each link connected to it using LDP MTU signaling.
Pre-configuration Tasks
Before configuring an LDP LSP, configure a local LDP session.
Configuration Procedures
Context
If local LDP sessions have been established among the neighboring LSRs on an
LSP to be established, an LDP LSP can be established automatically.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run label advertise { explicit-null | implicit-null | non-null }
The label assigned to the penultimate node is specified.
----End
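For example, to have the egress assign an explicit null label rather than an
implicit null label, so that the penultimate hop keeps an MPLS header whose EXP
field can carry QoS information, the configuration might be:

```
#
mpls
 label advertise explicit-null
#
```

With implicit-null, the penultimate node pops the label (penultimate hop
popping); with non-null, the egress assigns a normal, non-reserved label.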
Context
By default, a downstream device sends Label Mapping messages to an upstream
device without waiting for label requests. As a result, if a fault occurs on the
network, services can be rapidly switched to the backup path, improving network
reliability. However, digital subscriber line access multiplexers (DSLAMs)
deployed on an MPLS network for user access have low performance. On a large-
scale network, a DSLAM can therefore be configured to send Label Mapping
messages to upstream LSRs only after receiving requests for labels. This
minimizes the number of unwanted MPLS forwarding entries that the DSLAM
maintains.
Procedure
● Configure a label advertisement mode for the local LDP session.
a. Run system-view
NOTE
● When multiple links exist between neighbors, all interfaces must use the
same label advertisement mode.
● Modifying a configured label advertisement mode leads to the
reestablishment of an LDP session, resulting in service interruptions.
d. Run commit
The configuration is committed.
● Configure a label advertisement mode for the remote LDP session.
a. Run system-view
The system view is displayed.
b. Run mpls ldp remote-peer remote-peer-name
A remote MPLS LDP peer is created, and the remote MPLS-LDP peer view
is displayed.
c. Run mpls ldp advertisement { dod | du }
A label advertisement mode is configured.
NOTE
When the local and remote LDP sessions coexist, they must have the same label
advertisement mode.
d. Run commit
The configuration is committed.
----End
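Combining the local-session branch above, DoD mode might be enabled on an LDP
interface as follows (the interface name is an assumption; remember that all
parallel links between the same neighbors must use the same mode):

```
#
interface GigabitEthernet0/1/0
 mpls
 mpls ldp
 mpls ldp advertisement dod
#
```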
Context
A label distribution control mode defines how an LSR distributes labels during the
establishment of an LSP.
There are two label distribution control modes:
● Label distribution control in independent mode
In independent label distribution control mode, a local LSR independently
distributes and binds a label to a FEC and notifies the upstream LSR of the
label without waiting for a label from the downstream LSR.
– If the label advertisement mode is DU and the label distribution control
mode is independent, an LSR directly distributes a label to its upstream
LSR, without waiting for a label from the downstream LSR.
– If the label advertisement mode is DoD and the distribution control mode
is independent, an LSR distributes a label to its upstream LSR after
receiving a label request from the upstream LSR, without waiting for a
label from the downstream LSR.
Procedure
Step 1 Run system-view
----End
Context
To improve the stability of a large network on which a large number of remote
LDP peers and low-end DSLAMs are deployed at the network edge, you need to
minimize resource consumption. To achieve this, run the remote-ip auto-dod-request or
remote-peer auto-dod-request command to configure the function of triggering
a request to a downstream node for Label Mapping messages associated with a
specified or all remote LDP peers in DoD mode.
NOTE
● A remote LDP session must have been configured before the remote-peer auto-dod-
request or remote-ip auto-dod-request command is run.
● Inter-area LDP extension must have been configured by using the longest-match
command before the remote-peer auto-dod-request or remote-ip auto-dod-request
command is run.
● A DoD session must have been established with the downstream node by using the
mpls ldp advertisement dod command before the remote-peer auto-dod-request or
remote-ip auto-dod-request command is run.
Procedure
Step 1 Run system-view
Step 3 Perform either or both of the following operations to configure the automatic
triggering of a request to a downstream node for Label Mapping messages of a
specified or all remote LDP peers in DoD mode.
● To enable the device to automatically send DoD requests for Label Mapping
messages to all downstream remote LDP peers, run the remote-peer auto-
dod-request command.
● To enable the device to automatically send DoD requests for Label Mapping
messages to a specified downstream remote LDP peer, perform the following
procedures:
a. Run the quit command to return to the system view.
b. Run the mpls ldp remote-peer remote-peer-name command to create a
remote MPLS LDP peer and enter the remote MPLS LDP peer view.
c. Run the remote-ip ip-address command to specify the IP address of the
remote MPLS LDP peer.
NOTE
▪ This IP address must be the LSR ID that the remote LDP peer uses to establish
the current remote session.
----End
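As a sketch, the two variants might be configured as follows (the peer name and
address are assumptions; the prerequisite longest-match and mpls ldp
advertisement dod configurations described above are omitted for brevity):

```
#
mpls ldp
 remote-peer auto-dod-request
#
mpls ldp remote-peer dslam1
 remote-ip 10.9.9.9
 remote-ip auto-dod-request
#
```

The first block applies to all downstream remote LDP peers; the second applies
only to the peer dslam1.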
Context
LDP selects the minimum value among MTUs on all outbound interfaces of an
LSP. On the ingress, MPLS uses the minimum MTU to determine the maximum
size of each MPLS packet that can be forwarded without being fragmented. The
MPLS MTU helps prevent forwarding failures on transit nodes.
The relationships between the MPLS MTU and interface MTU are as follows:
● If the MPLS MTU is not configured on an interface, the interface MTU is used.
● If both an MPLS MTU and an interface MTU are set on an interface, the
smaller value between them is used.
NOTE
The MPLS MTU of an interface can take effect only after the MTU signaling function is
enabled.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The view of an MPLS-enabled interface is displayed.
Step 3 Run mpls mtu mtu
An MPLS MTU is set for an interface.
Step 4 (Optional) Run interface-mtu check-mode { ip | label-contained-length } slot slot-id
An interface MTU check mode is configured.
Choose an MTU check mode based on scenarios:
● ip: applies to IP forwarding scenarios.
● label-contained-length: applies to MPLS forwarding scenarios.
The device checks a packet's size based on the configured check mode and
fragments a packet if its size exceeds the interface MTU.
In VS mode, this command is supported only by the admin VS.
Step 5 Run commit
The configuration is committed.
----End
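For example, with the following illustrative configuration, the effective MPLS
MTU on the interface is Min{1400, 1500} = 1400 bytes:

```
#
interface GigabitEthernet0/1/0
 mtu 1500
 mpls
 mpls mtu 1400
#
```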
Context
If the MTU value of a packet exceeds the maximum size supported by a receiver
or transit device, the packet is fragmented during transmission, increasing the
workload of the network. The packet may even be discarded during transmission,
affecting services. If MTU values are correctly negotiated before packet
transmission, packets can successfully reach the receiver without packet
fragmentation and reassembly.
Procedure
Step 1 Run system-view
The node is enabled to send Label Mapping messages carrying MTU TLVs.
----End
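On Huawei devices, MTU signaling is typically enabled in the MPLS-LDP view with
the mtu-signalling command, where the apply-tlv keyword sends the MTU TLV
defined in RFC 3988; treat the exact keywords as assumptions to verify:

```
#
mpls ldp
 mtu-signalling apply-tlv
#
```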
Context
By default, an LSR distributes labels to both upstream and downstream LDP peers,
speeding up LDP LSP convergence. If low-performance digital subscriber line
access multiplexers (DSLAMs) are deployed as access devices on an MPLS
network, you are advised to configure an LDP split horizon policy on an LSR to
allow the LSR to distribute labels only to its upstream LDP peers.
Procedure
Step 1 Run system-view
----End
Context
Generally, an LSR receives Label Mapping messages from all LDP peers. This
results in the establishment of numerous LSPs, wasting resources and leading to
unstable device running status, especially on low-performance devices. To address
these issues, an LDP inbound policy can be configured to limit Label Mapping
messages to be received, thereby reducing the number of LDP LSPs to be
established and memory resource consumption.
An LDP inbound policy restricts the receiving of LDP Label Mapping messages
based on the selected parameter:
● none: filters out all FECs. If this parameter is set, the specified peer does not
receive Label Mapping messages on any IGP route.
● host: allows only the FECs on host routes to pass. If this parameter is set, the
specified peer receives Label Mapping messages on host routes.
● ip-prefix: allows only the FECs on routes in a specified IP prefix list. If this
parameter is set, the specified peer receives Label Mapping messages on IGP
routes in the specified IP prefix list.
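For example, the following sketch accepts Label Mapping messages from peer
2.2.2.2 only for host routes covered by an IP prefix list (the list name is an
assumption; a /32-only list is built with the greater-equal/less-equal options):

```
#
ip ip-prefix host-fec index 10 permit 0.0.0.0 0 greater-equal 32 less-equal 32
#
mpls ldp
 ipv4-family
  inbound peer 2.2.2.2 fec ip-prefix host-fec
#
```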
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 Run ipv4-family
The MPLS-LDP-IPv4 view is displayed.
Step 4 Run inbound peer { peer-id | peer-group peer-group-name | all } fec { none |
host | ip-prefix prefix-name }
An inbound policy is applied to specified IGP routes to specified peers.
To apply a policy associated with the same FEC range to an LDP peer group or all
LDP peers receiving Label Mapping messages, specify either peer-group peer-
group-name or all in the command.
NOTE
If multiple inbound policies are configured for a specified peer, the earliest configuration
takes effect. For example, the following configurations are performed in this sequence:
inbound peer 2.2.2.2 fec host
inbound peer peer-group group1 fec none
As group1 also contains an LDP peer with peer-id of 2.2.2.2, the following inbound policy
takes effect:
inbound peer 2.2.2.2 fec host
If two inbound policies are configured one after the other and the peer parameter settings
in the two commands are the same, the latter configuration overwrites the former. For
example, the following configurations are performed in this sequence:
inbound peer 2.2.2.2 fec host
inbound peer 2.2.2.2 fec none
The second configuration overwrites the first one. This means that the following inbound
policy takes effect for the LDP peer with peer-id of 2.2.2.2:
inbound peer 2.2.2.2 fec none
If an inbound policy for all peers is configured and another inbound policy for a specified
peer or peer group is configured, the former policy has a higher priority, and the latter
policy does not take effect. For example:
inbound peer all fec none
inbound peer 2.2.2.2 fec host
The following inbound policy takes effect:
inbound peer all fec none
MPLS and MPLS LDP must be enabled globally before an inbound policy is configured.
To delete all inbound policies simultaneously, run the undo inbound peer all command.
----End
Context
Generally, an LSR sends Label Mapping messages to all its LDP peers. This results
in the establishment of numerous LSPs, wasting resources and leading to unstable
device running status, especially on low-performance devices. To address these
issues, an LDP outbound policy can be configured to limit Label Mapping
messages to be sent, thereby reducing the number of LDP LSPs to be established
and memory resource consumption.
The following parameters can be specified in an LDP outbound policy to limit
Label Mapping messages to be sent:
● none: filters out all FECs. If this parameter is specified, the device does not
send Label Mapping messages for IGP routes to specified peers.
● host: allows only the FECs on host routes to pass. If this parameter is
specified, the device sends Label Mapping messages only for host routes to
specified peers.
● ip-prefix: allows only the FECs on routes in a specified IP prefix list. If this
parameter is specified, the device sends Label Mapping messages only for IGP
routes in the specified IP prefix list to specified peers.
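For example, the following sketch sends Label Mapping messages to peer 2.2.2.2
only for host routes and suppresses them for BGP labeled routes (the peer
address is illustrative):

```
#
mpls ldp
 ipv4-family
  outbound peer 2.2.2.2 fec host
  outbound peer 2.2.2.2 bgp-label-route none
#
```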
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 (Optional) Run ipv4-family
The MPLS-LDP-IPv4 view is displayed.
Step 4 Perform either of the following steps to apply the outbound policy that allows
Label Mapping messages for specified IGP routes or BGP labeled routes to be sent
to a specified LDP peer:
● Run the outbound peer { peer-id | peer-group peer-group-name | all } fec
{ none | host | ip-prefix prefix-name } command to apply an outbound policy
to specified IGP routes to specified peers.
● Run the outbound peer { peer-id | peer-group peer-group-name | all } bgp-
label-route { none | ip-prefix prefix-name } command to apply an outbound
policy to specified BGP labeled routes to specified peers.
If FECs in the Label Mapping messages to be sent to an LDP peer group or all LDP
peers are in the same range, specify either peer-group peer-group-name or all in
the command.
NOTE
If multiple outbound policies are configured for a specified LDP peer, the earliest
configuration takes effect. For example, the following configurations are performed in
sequence:
outbound peer 2.2.2.2 fec host
outbound peer peer-group group1 fec none
As group1 also contains an LDP peer with peer-id of 2.2.2.2, the following outbound policy
takes effect for the peer:
outbound peer 2.2.2.2 fec host
If two outbound policies are configured in sequence and the peer parameters in the two
commands are the same, the latter configuration overwrites the former. For example, the
following configurations are performed in sequence:
outbound peer 2.2.2.2 fec host
outbound peer 2.2.2.2 fec none
The second configuration overwrites the first one. This means that the following outbound
policy takes effect for the LDP peer with peer-id of 2.2.2.2:
outbound peer 2.2.2.2 fec none
MPLS and MPLS LDP must be enabled globally before an outbound policy is configured.
To delete all outbound policies simultaneously, run the undo outbound peer all command.
----End
Context
A policy can be configured to allow LDP to use eligible static and IGP routes to
trigger the establishment of public-network ingress and egress LSPs.
NOTE
A policy for triggering LSP establishment can be configured in either the MPLS or MPLS-
LDP-IPv4 view. If such a policy is configured in both views, the configuration in the MPLS-
LDP-IPv4 view takes effect.
The LSR must have route entries that exactly match the FECs for the LSPs to be established.
Procedure
● Configure a policy for triggering LSP establishment in the MPLS view.
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run lsp-trigger { all | host | ip-prefix ip-prefix-name | none }
The policy of triggering LSP establishment using static and IGP routes is
configured.
d. (Optional) Run proxy-egress disable
The device is disabled from establishing proxy egress LSPs.
If a policy allows LDP to establish LSPs for static and IGP routes or for
routes within a specified IP prefix list, the policy also allows LDP to
establish proxy egress LSPs. However, these proxy egress LSPs may be
useless and unnecessarily consume system resources. To prevent such an
issue, run this command to disable the device from establishing proxy
egress LSPs.
e. Run commit
The configuration is committed.
● Configure a policy for triggering LSP establishment in the MPLS-LDP-IPv4
view.
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. (Optional) Run proxy-egress disable
The device is disabled from establishing proxy egress LSPs.
If a policy allows LDP to establish LSPs for static and IGP routes or for
routes within a specified IP prefix list, the policy also allows LDP to
establish proxy egress LSPs. However, these proxy egress LSPs may be
useless and unnecessarily consume system resources. To prevent such an
issue, run this command to disable the device from establishing proxy
egress LSPs.
d. Run quit
Return to the system view.
e. Run mpls ldp
The MPLS-LDP view is displayed.
f. Run ipv4-family
The MPLS-LDP-IPv4 view is displayed.
g. Run lsp-trigger { all | host | ip-prefix prefix-name | none }
A policy for triggering LSP establishment is configured.
If the triggering policy is changed from all to host, LSPs that have been
established using host routes are not reestablished.
h. Run commit
The configuration is committed.
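The steps above can be sketched as follows; the prefix list name host-routes is hypothetical and matches all routes with a 32-bit mask:
system-view
ip ip-prefix host-routes index 10 permit 0.0.0.0 0 greater-equal 32 less-equal 32
mpls ldp
ipv4-family
lsp-trigger ip-prefix host-routes
commit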
----End
Context
A policy can be configured to enable LDP to use eligible routes to trigger the
establishment of public-network ingress and egress LSPs.
Both the lsp-trigger bgp-label-route and lsp-trigger commands can be used to
configure policies to trigger the establishment of LDP LSPs. The former applies
only to labeled BGP routes of the public network, and the latter applies to static
and IGP routes.
NOTE
During LDP GR, changing the policy for triggering LSP establishment does not take effect.
Procedure
Step 1 Configure a policy for triggering LSP establishment in the MPLS view.
1. Run system-view
The system view is displayed.
2. Run mpls
The MPLS view is displayed.
3. Run lsp-trigger bgp-label-route [ ip-prefix ip-prefix-name ] not-only-host
A policy of triggering LSP establishment using labeled BGP routes of the
public network is configured.
– If the ip-prefix parameter is specified, LDP can only use labeled BGP
routes of the public network that match the IP prefix list to trigger LSP
establishment.
– If the not-only-host parameter is specified, LDP can use all labeled BGP
routes of the public network, including non-host BGP routes, to trigger
LSP establishment.
4. Run commit
The configuration is committed.
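As a minimal sketch, the following configuration triggers LSP establishment using all labeled BGP routes of the public network, including non-host routes:
system-view
mpls
lsp-trigger bgp-label-route not-only-host
commit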
----End
Context
After MPLS LDP is enabled, LDP LSPs are automatically established, including a
large number of unnecessary transit LSPs, which wastes resources. A policy for
triggering transit LSP establishment can be configured, allowing LDP to establish
transit LSPs only for eligible routes. The local node does not send Label Mapping
messages upstream for the routes that are filtered out. This limits the number of
LSPs to be established, thereby reducing network resource consumption.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 (Optional) Run ipv4-family
The MPLS-LDP-IPv4 view is displayed.
Step 4 Run propagate mapping for ip-prefix ip-prefix-name
A policy for triggering transit LSP establishment is configured.
The command takes effect in both the MPLS-LDP and MPLS-LDP-IPv4 views. If the
command is configured in both views, only the latter configuration takes effect.
Step 5 Run commit
The configuration is committed.
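For example, assuming a hypothetical prefix list named transit-fecs that permits 10.1.0.0/16, transit LSPs would be established only for matching routes:
system-view
ip ip-prefix transit-fecs index 10 permit 10.1.0.0 16
mpls ldp
propagate mapping for ip-prefix transit-fecs
commit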
----End
Context
After an LDP LSP goes up, it may go down due to a protocol or interface failure, after which the device attempts to reestablish it. If a downstream node frequently advertises labels to an upstream node or withdraws labels, the LDP LSP alternates between up and down, and CPU usage increases. To prevent this problem, configure LDP LSP flapping suppression.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
MPLS LDP is enabled globally, and the MPLS-LDP view is displayed.
Step 3 Run propagate mapping unknown-tlv disable
The device is disabled from forwarding unknown TLVs.
If an upstream device cannot process unknown TLVs, network problems may
occur. In this case, you can run this command to disable the local device from
forwarding unknown TLVs.
Step 4 Run commit
The configuration is committed.
----End
Context
When an LDP network is connected with an SR network, it is required that LDP
LSPs interwork with SR LSPs, so that traffic on LDP LSPs can be further forwarded
on SR LSPs when the traffic enters the SR network. To meet this requirement,
configure the policy for triggering interworking between LDP LSPs and SR LSPs,
allowing SR LSPs to interwork with proxy egress LSPs and transit LSPs that are
established over non-local host routes with a 32-bit mask. If they interwork
successfully, traffic on such LDP LSPs can be further forwarded on SR LSPs.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run lsp-trigger segment-routing-interworking best-effort host
The policy that triggers interworking between SR LSPs and proxy egress LSPs and
transit LSPs that are established over non-local host routes with a 32-bit mask is
configured.
Step 4 Run commit
The configuration is committed.
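The procedure maps to the following command sequence:
system-view
mpls
lsp-trigger segment-routing-interworking best-effort host
commit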
----End
Prerequisites
All LDP LSP configurations have been completed.
Procedure
● Run the display mpls ldp [ all | all verbose ] command to check LDP
information.
● Run the display mpls ldp lsp [ destination-address mask-length | all ]
command to check information about LDP LSPs.
● Run the display mpls ldp lsp inbound-policy command to check information
about the liberal LSPs that have passed an inbound policy.
● Run the display mpls lsp [ verbose ] command to check LSP information.
● Run the display mpls ldp lsp fault-analysis ip-address mask command to
check the cause for an LDP LSP establishment failure.
----End
Usage Scenario
On a large-scale network, multiple IGP areas need to be configured for flexible
deployment and fast convergence. To prevent excessive resource consumption
caused by a large number of routes, an area border router (ABR) needs to
summarize the routes in an area and advertise the summary routes to neighboring
IGP areas. By default, when establishing an LSP, LDP searches the routing table for
the route that exactly matches the FEC carried in a received Label Mapping
message. For summary routes, LDP can establish only liberal LSPs, but cannot
establish LDP LSPs across IGP areas.
In this case, you can run the longest-match command to enable LDP to search for
routes based on the longest match rule and establish inter-area LDP LSPs.
Pre-configuration Tasks
Before configuring LDP extension for inter-area LSPs, complete the following task:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 Run longest-match
LDP is configured to search for routes based on the longest match rule to
establish LSPs.
Step 4 Run commit
The configuration is committed.
----End
Prerequisites
LDP extension for inter-area LSPs has been configured.
Procedure
● Run the display mpls lsp command to check the establishment of inter-area
LSPs after LDP is configured to search for routes based on the longest match
rule to establish LSPs.
----End
Usage Scenario
LDP multi-instance is mainly used in MPLS L3VPN scenarios of carrier networks.
To configure LDP multi-instance on a BGP/MPLS IP VPN network, bind LDP to a
created VPN instance.
Pre-configuration Tasks
Before configuring LDP multi-instance, complete the following tasks:
● Enable MPLS.
● Enable MPLS LDP.
● Configure an IP VPN instance.
Context
To configure the transport address for an LDP instance, you must use the IP
address of an interface that is bound to the same VPN instance.
NOTE
In LDP multi-instance scenarios, you can use the interface address to establish a session.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp vpn-instance vpn-instance-name
LDP is enabled for the specified VPN instance, and the MPLS-LDP-VPN instance
view is displayed.
For LDP-enabled interfaces, note the following:
● Configurations in the MPLS-LDP-VPN instance view only take effect on LDP-
enabled interfaces that are bound to the same VPN instance.
NOTE
In most applications, use the default LDP LSR ID. When VPN instances are used, such as a
BGP/MPLS VPN, if the VPN address space and public network address space overlap, set
LSR IDs for LDP instances so that TCP connections for LDP sessions can be properly
established.
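A minimal sketch (the VPN instance name vpna and the LSR ID 10.1.1.1 are hypothetical) that binds LDP to a VPN instance and sets a per-instance LSR ID to avoid address space overlap issues:
system-view
mpls ldp vpn-instance vpna
lsr-id 10.1.1.1
commit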
----End
1.1.4.9.2 (Optional) Enabling the Function to Trigger Trap Messages Only for
Public Network LDP Sessions
In an LDP multi-instance scenario, a device can be enabled to trigger trap
messages only for public network LDP sessions, which prevents a failure to
distinguish trap messages for both the private and public network sessions with
the same ID.
Context
In an LDP multi-instance scenario, multiple LDP instances may contain sessions of
the same ID. Since trap messages do not contain VPN instance information, these
trap messages carrying the same session ID cannot be differentiated based on
VPN instances. To distinguish trap messages for public and private network
sessions with the same ID, run the session-state-trap public-only command to
enable a device to generate trap messages only for public network LDP sessions.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 Run session-state-trap public-only
The device is enabled to generate trap messages only for public network LDP
sessions.
Step 4 Run commit
The configuration is committed.
----End
Prerequisites
The LDP multi-instance function has been configured.
Procedure
● Run the display mpls ldp vpn-instance vpn-instance-name command to
check information about LDP of a specified VPN instance.
----End
Usage Scenario
To configure IGP-based MPLS LDP, you need to enable MPLS LDP globally and
then enable MPLS LDP on all interfaces that require the function. If a large
number of interfaces require the function, this configuration method is time-
consuming and prone to configuration errors.
To address this issue, configure IGP-based automatic LDP configuration, allowing
MPLS LDP to be enabled automatically on IGP-capable interfaces after MPLS LDP
is enabled globally.
Pre-configuration Tasks
Before configuring IGP-based automatic LDP configuration, complete the
following tasks:
● Configure basic IGP functions.
● Enable MPLS and MPLS LDP globally.
Procedure
● Configure IS-IS-based automatic LDP configuration.
a. Run system-view
The system view is displayed.
b. Run isis [ process-id ]
The IS-IS view is displayed.
c. Run mpls ldp auto-config
Automatic LDP configuration is enabled on IS-IS interfaces.
After the command is run, MPLS LDP is enabled automatically on all
interfaces which can establish IS-IS neighbor relationships in the IS-IS
process. If you want to disable MPLS LDP on an interface, run the isis
mpls ldp auto-config disable command in the interface view.
d. Run commit
The configuration is committed.
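For example (the IS-IS process ID 1 and the interface name are hypothetical), automatic LDP configuration can be enabled for an IS-IS process and then disabled on a single interface if needed:
system-view
isis 1
mpls ldp auto-config
quit
interface GigabitEthernet0/1/0
isis mpls ldp auto-config disable
commit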
----End
Usage Scenario
When LDP LSPs transmit application traffic, for example, VPN traffic, LDP FRR
and an LDP upper-layer protection mechanism, such as VPN FRR or VPN
equal-cost multipath (ECMP), are used to improve network reliability. BFD for
LDP LSP detects only primary LSP faults and switches traffic to an FRR LSP. If the
primary and FRR LSPs fail simultaneously, the BFD mechanism does not take
effect. In this situation, LDP can instruct its upper-layer application to perform a
protection switchover only after LDP detects the FRR LSP failure. As a result, a
great number of packets are dropped.
NOTE
For applications, for example, VPN, which are transmitted over LDP LSPs, the primary and
backup LDP LSPs are collectively called LDP tunnels.
To minimize packet loss, dynamic BFD can be configured to establish dynamic BFD
sessions to monitor both the primary and FRR LSPs. If both primary and FRR LSPs
fail, BFD rapidly detects the failures and instructs a specific LDP upper-layer
application to perform a protection switchover.
Pre-configuration Tasks
Before configuring dynamic BFD to monitor an LDP tunnel, complete the following
tasks:
● Configure basic MPLS functions.
● Configure MPLS LDP.
● (Optional) Configure an IP address prefix list if it is used to trigger LDP LSP
establishment.
● (Optional) Configure a FEC list if it is used to trigger LDP LSP establishment.
Procedure
● Perform the following steps on the ingress:
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is globally enabled.
c. Run quit
Return to the system view.
d. Run mpls
The MPLS view is displayed.
e. Run mpls bfd enable
The capability of dynamically establishing a BFD session is configured.
The command does not create a BFD session.
f. Run commit
The configuration is committed.
● Perform the following steps on the egress:
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is globally enabled, and the BFD view is displayed.
c. Run mpls-passive
The capability of passively creating a BFD session is configured.
After this command is run, a BFD session will be established only after
the egress receives an LSP ping request packet that carries a BFD TLV
from the ingress.
d. Run commit
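In summary, the ingress and egress configurations can be sketched as follows.
On the ingress:
system-view
bfd
quit
mpls
mpls bfd enable
commit
On the egress:
system-view
bfd
mpls-passive
commit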
----End
1.1.4.11.2 Configuring a Policy for Triggering Dynamic BFD for LDP Tunnel
Either the host address-based policy or FEC list-based policy can be used to
dynamically establish BFD sessions to monitor LDP tunnels.
Context
One of the following trigger policies can be used to establish BFD sessions to
monitor LDP tunnels:
● Host address-based policy: used when all host addresses are available to
trigger the creation of BFD sessions.
● IP address prefix-based policy: used when only FEC entries that match a
specified IP address prefix can be used to trigger the creation of BFD sessions.
● FEC list-based policy: used when only some host addresses are available to
establish BFD sessions. The FEC list contains specified host addresses.
Procedure
Step 1 Run system-view
The policy for establishing a session of dynamic BFD for LDP LSP is configured.
----End
Context
Perform the following steps on the ingress.
Procedure
Step 1 Run system-view
Effective local interval at which BFD packets are sent = MAX { Locally configured
interval at which BFD packets are sent, Remotely configured interval at which BFD
packets are received}
Effective local interval at which BFD packets are received = MAX { Remotely
configured interval at which BFD packets are sent, Locally configured interval at
which BFD packets are received }
Local BFD detection period = Actual local interval at which BFD packets are
received x Remotely configured BFD detection multiplier
Therefore, you can adjust the minimum interval at which BFD packets are sent,
the minimum interval at which BFD packets are received, and the detection
multiplier only on the ingress to update BFD detection time parameters on both
the ingress and egress.
If both the mpls bfd-tunnel and mpls bfd commands are run, the parameters
configured using the mpls bfd-tunnel command take precedence over those
configured using the mpls bfd command.
Step 8 Run commit
The configuration is committed.
----End
Prerequisites
The dynamic BFD for LDP tunnel function has been configured.
Procedure
● Run the display mpls bfd session protocol ldp [ fec ip-address ] [ bfd-type
ldp-tunnel ] [ verbose ] command to check information about all BFD
sessions that monitor LDP tunnels on the ingress.
----End
Usage Scenario
When LDP LSPs are established to transmit services with high quality
requirements, bit errors on LSPs may cause service interruptions. To detect bit
errors, run the corresponding command for LDP LSPs. If a node on an LSP detects
bit errors, LDP notifies the VPN services of the bit error rate and triggers a service
switchover, which guarantees service quality.
Pre-configuration Tasks
Before configuring LDP bit error detection, complete the following task:
● Configure LDP LSPs.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bit-error-detection level level-number threshold switch switch-coe switch-
pow resume resume-coe resume-pow
The thresholds for triggering and clearing a bit-error-induced switchover are
configured.
----End
Usage Scenario
On an MPLS network with a backup link, if a link fault occurs, Interior Gateway
Protocol (IGP) routes converge and routes related to the backup link become
available. Traffic on an LDP LSP can be switched to a backup path only after IGP
routes are converged successfully. Before the switchover is complete, traffic is
interrupted. To prevent traffic interruptions, LDP FRR can be configured.
LDP FRR uses the liberal label retention mode, obtains a liberal label, and applies
for a forwarding entry associated with the label. It then forwards the forwarding
entry to the forwarding plane as a backup forwarding entry used by the primary
LSP. On a network enabled with LDP FRR, an interface can detect faults on itself,
and a BFD session associated with the interface can also detect a failure of the
interface or of the primary LSP established over the interface. If a fault
occurs, LDP FRR is notified of the failure and rapidly forwards traffic to a backup
LSP, protecting traffic on the primary LSP. The traffic switchover is performed
within 50 milliseconds, which minimizes the traffic interruption time.
LDP Auto FRR depends on IGP FRR. When IGP FRR is enabled, LDP Auto FRR will
be automatically enabled, and a backup LSP will be established based on a
specific policy.
LFA Auto FRR cannot be used to calculate alternate links on large-scale networks,
especially on ring networks. To address this problem, enable Remote LFA Auto
FRR.
Pre-configuration Tasks
Before configuring LDP Auto FRR, complete the following tasks:
Context
LDP auto FRR depends on IGP auto FRR. LDP auto FRR will be automatically
enabled after IGP auto FRR is enabled. To change a policy for triggering LDP LSP
establishment, you can run the auto-frr lsp-trigger command.
NOTE
Before you enable remote LFA FRR, configure the remote LFA algorithm when you
configure IGP auto FRR.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 (Optional) Run ipv4-family
The MPLS-LDP-IPv4 view is displayed.
Step 4 (Optional) Run auto-frr lsp-trigger { all | host | ip-prefix ip-prefix-name | none }
A policy for triggering LDP to establish backup LSPs is configured.
NOTE
If both the auto-frr lsp-trigger and lsp-trigger commands are run, the established backup
LSPs satisfy both the policy for triggering LDP LSP establishment and the policy for
triggering backup LDP LSP establishment.
This command can be run in both the MPLS-LDP and MPLS-LDP-IPv4 views. If it is
run in both views, only the later configuration takes effect.
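For example, the following configuration restricts backup LSP establishment to host routes:
system-view
mpls ldp
ipv4-family
auto-frr lsp-trigger host
commit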
Step 5 To enable remote LFA FRR, perform the following steps in the MPLS-LDP view of a
PQ node:
1. Run quit
Return to the MPLS-LDP view.
2. Run accept target-hello { all | peer-group ip-prefix ip-prefix-name }
Automatic remote LDP session establishment upon the receiving of a Targeted
Hello message is enabled.
In a Remote LFA FRR scenario, after an ingress uses the Remote LFA
algorithm to calculate a PQ node, LDP automatically establishes a remote
LDP session between the ingress and the PQ node. To enable the PQ node to
implement this function, run the accept target-hello command on the PQ
node.
NOTE
After the accept target-hello command is run, the PQ node is prone to Targeted
Hello packet-based attacks. If the PQ node receives a large number of Targeted Hello
packets, it establishes many remote LDP sessions. To prevent such an issue, perform
either of the following operations:
– Specify the peer-group ip-prefix ip-prefix-name parameter to limit the LDP peers
with which a PQ node can establish remote LDP sessions.
– Configure LDP security authentication for LDP peers in batches. For details, see
1.1.4.21 Configuring LDP Security Features.
3. Run send-message address all-loopback
In a remote LFA FRR scenario, LDP uses the PQ node's address calculated
using an IGP to establish a remote LDP session between a node and the PQ
node. Then the two nodes establish a remote LFA FRR LSP over the session.
The PQ node's IP address can be any loopback interface's IP address or an LSR
ID. To advertise the loopback addresses to LDP peers, run this command on a
PQ node so that a remote LFA FRR LSP can be established.
1. Run interface interface-type interface-number
The interface view is displayed. You can enter the Eth-Trunk interface view,
Eth-Trunk sub-interface view, POS interface view, IP-Trunk interface view, GE
interface view, or GE sub-interface view.
2. Run mpls poison-reverse enable
Poison reverse is enabled on the interface. This command also applies to the
scenario where two ECMP paths are formed on an LDP/SR-MPLS BE ring
network, to resolve similar issues.
3. Run commit
The configuration is committed.
----End
Context
LDP graceful deletion can be configured in the LDP-IGP synchronization or LDP
FRR scenario to speed up traffic switching. It helps implement uninterrupted traffic
transmission during traffic switching, which improves reliability of the entire
network.
If the primary link fails and the LDP session on that link also goes down, LDP
immediately instructs the upstream device to withdraw labels and triggers LDP
Auto FRR. LSP convergence on the backup link requires LDP to distribute labels to
the upstream device again, which prolongs convergence and FRR traffic switching.
As a result, packet loss occurs.
If LDP graceful deletion is configured and the LDP session goes down, LDP delays
deleting the LDP session and keeps the relevant labels and LSP. The LSP on the
backup link does not require LDP to distribute labels to the upstream device again,
which shortens FRR traffic switching and reduces packet loss.
Perform the following configuration on the LDP FRR-enabled LSR.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 Run graceful-delete
LDP graceful deletion is enabled.
Step 4 (Optional) Run graceful-delete timer timer
The graceful deletion timer value is set.
After the LDP session goes down, forwarding entries on the LSR remain before the
graceful deletion timer expires.
NOTE
If the value of the graceful deletion timer is too large, invalid LSPs are kept for a long
time, consuming system resources.
Step 5 Run commit
The configuration is committed.
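A hedged sketch of the procedure (the 60-second timer value is an example, not a recommendation):
system-view
mpls ldp
graceful-delete
graceful-delete timer 60
commit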
----End
Context
On a network with ECMP enabled, the same types of devices reside on both ends
of ECMP links. If an optical fiber between the two devices is disconnected,
network-wide protection fails because backup path calculation is not supported.
To prevent traffic loss from such a disconnection, enable the coexistence of ECMP
and FRR so that protection paths can be established for ECMP paths.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 Run ecmp-frr-coexist enable
The coexistence of ECMP and FRR is enabled.
Step 4 Run commit
The configuration is committed.
----End
Context
After an ingress uses the remote LFA algorithm to calculate a PQ node, the ingress
establishes a remote LDP session with the PQ node. The remote LDP session goes
down when the RLFA route is deleted. Such session down issues occur frequently
during remote LFA FRR convergence and do not harm service deployment.
Therefore, you can configure the device not to report a trap when a remote LDP
session goes down.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
----End
Prerequisites
LDP Auto FRR has been configured.
Procedure
● Run the display mpls lsp command to check information about LSPs
generated after LDP Auto FRR is enabled.
● Run the display mpls ldp event session-down verbose command to check
LDP session down causes. The cause value IGP delete the RLFA IID indicates
that an LDP session is down because the RLFA route is deleted.
● Run the display mpls ldp event adjacency-down verbose command to
check adjacency down causes. The cause value IGP delete the RLFA IID
indicates that the adjacency is down because the RLFA route is deleted.
----End
Usage Scenario
BFD implements fast detection at the millisecond level. To enable a device to
rapidly monitor whether LDP LSPs are faulty, establish BFD sessions.
When configuring static BFD to monitor an LDP LSP, note the following:
● You can bind a BFD session to an LDP LSP only on the ingress.
● An LDP LSP to be monitored by static BFD can be established only using host
routes.
Pre-configuration Tasks
Before configuring static BFD to monitor an LDP LSP, complete the following tasks:
● Configure network layer parameters to implement network layer connectivity.
● Enable MPLS LDP on each node and set up an LDP session.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bfd
BFD is enabled globally on the local node, and the BFD view is displayed.
Step 3 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bfd session-name bind ldp-lsp peer-ip ip-address nexthop ip-address
[ interface interface-type interface-number ]
A BFD session is bound to a dynamic LSP.
Step 3 Run discriminator local discr-value
A local discriminator is configured for the BFD session.
Step 4 Run discriminator remote discr-value
A remote discriminator is configured for the BFD session.
NOTE
The local discriminator of the local device and the remote discriminator of the remote
device are the same. The remote discriminator of the local device and the local
discriminator of the remote device are the same. A discriminator inconsistency causes the
BFD session to fail to be established.
If a trunk main interface is bound with a BFD session, you must configure a wait
to restore (WTR) time for the BFD session bound to the main interface. This
prevents the BFD session bound to the main interface from flapping when a
member interface joins or leaves the trunk.
The minimum interval at which the local device sends BFD packets is changed.
Effective local interval at which BFD packets are sent = MAX { Locally configured
interval at which BFD packets are sent, Remotely configured interval at which BFD
packets are received }
Effective local interval at which BFD packets are received = MAX { Remotely
configured interval at which BFD packets are sent, Locally configured interval at
which BFD packets are received }
Local BFD detection period = Actual local interval at which BFD packets are
received x Remotely configured BFD detection multiplier
For example, on the local device, the intervals at which BFD packets are sent
and received are 200 ms and 300 ms, respectively, and the detection multiplier is
4. On the remote device, the intervals at which BFD packets are sent and received
are 100 ms and 600 ms, respectively, and the detection multiplier is 5. Then:
● On the local device, the actual interval for sending BFD packets is 600 ms
calculated using the formula MAX { 200 ms, 600 ms }, the interval for
receiving BFD packets is 300 ms calculated using the formula MAX { 100 ms,
300 ms }, and the detection period is 1500 ms (300 ms × 5).
● On the remote device, the actual interval for sending BFD packets is 300 ms
calculated using the formula MAX { 100 ms, 300 ms }, the interval for
receiving BFD packets is 600 ms calculated using the formula MAX { 200 ms,
600 ms }, and the detection period is 2400 ms (600 ms × 4).
The minimum interval at which the local device receives BFD packets is changed.
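Putting the steps together, a static BFD session bound to an LDP LSP could be configured as follows (the session name ldplsp1, peer IP 4.4.4.4, next hop 10.1.1.2, and discriminator values are hypothetical; the peer device's local and remote discriminators must mirror these):
system-view
bfd ldplsp1 bind ldp-lsp peer-ip 4.4.4.4 nexthop 10.1.1.2
discriminator local 100
discriminator remote 200
commit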
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 The IP link, LSP, or TE tunnel can be used as the reverse tunnel to inform the
ingress of a fault. If there is an LSP or a TE tunnel, use an LSP or the TE tunnel. If
no LSP or TE tunnel is available, use an IP link. If the configured reverse tunnel
requires BFD, configure a pair of BFD sessions for it. Perform one of the following
configurations as required:
● For an IP link, run the bfd session-name bind peer-ip ip-address [ vpn-
instance vpn-name ] [ source-ip ip-address ] command.
● For an LDP LSP, run the bfd session-name bind ldp-lsp peer-ip ip-address
nexthop ip-address [ interface interface-type interface-number ] command.
● For an MPLS TE tunnel, run the bfd session-name bind mpls-te interface
tunnel interface-number [ te-lsp ] command.
The peer-ip ip-address value is the LSR ID of the remote device.
Step 3 Run discriminator local discr-value
A local discriminator is configured for the BFD session.
Step 4 Run discriminator remote discr-value
A remote discriminator is configured for the BFD session.
NOTE
The local discriminator of the local device must be the same as the remote discriminator of
the remote device, and the remote discriminator of the local device must be the same as
the local discriminator of the remote device. A discriminator inconsistency prevents the
BFD session from being established.
Step 5 (Optional) Run process-pst
The BFD session is allowed to modify the port or link state table upon detection of
a fault.
If an LSP is used as a reverse tunnel to notify the ingress of a fault, you can run
this command to allow the reverse tunnel to switch traffic if the BFD session goes
Down. If a single-hop IP link is used as a reverse tunnel, this command can also be
configured, because the process-pst command can be configured only for BFD
single-link detection.
----End
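For example, to use an LDP LSP as the reverse tunnel and pair the discriminators, the configuration might look as follows (a sketch; the session names, addresses, and discriminator values are assumptions, with 1.1.1.1 standing for the ingress LSR ID):

```
# On the egress (source of the reverse tunnel)
<Egress> system-view
[~Egress] bfd rev2ingress bind ldp-lsp peer-ip 1.1.1.1 nexthop 10.1.1.1
[~Egress-bfd-lsp-session-rev2ingress] discriminator local 200
[~Egress-bfd-lsp-session-rev2ingress] discriminator remote 100
[~Egress-bfd-lsp-session-rev2ingress] commit
# On the ingress, the paired session uses the crossed values
[~Ingress-bfd-lsp-session-fwd2egress] discriminator local 100
[~Ingress-bfd-lsp-session-fwd2egress] discriminator remote 200
```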
Prerequisites
Static BFD used to monitor an LDP LSP has been configured.
Procedure
● Run the display bfd session { all | static | dynamic | discriminator discr-
value } [ verbose ] command to check information about BFD sessions.
● Run the display bfd statistics session { all | static | dynamic | discriminator
discr-value | peer-ip peer-ip } command to check statistics about BFD
sessions.
----End
Usage Scenario
Dynamic BFD for LDP LSPs detects link faults rapidly and reduces configuration
workloads. It can be used together with LDP FRR to reduce the impact of link
faults on services.
Note that dynamic BFD can monitor only LDP LSPs that are established using host
routes.
Pre-configuration Tasks
Before configuring dynamic BFD for LDP LSPs, complete the following tasks:
● Configure network layer parameters to implement network layer connectivity.
● Enable MPLS LDP on each node and establish an LDP session.
● Configure an LDP LSP.
Context
Perform the following steps on the ingress and egress.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bfd
BFD is enabled globally, and the BFD view is displayed.
Step 3 Run commit
The configuration is committed.
----End
Procedure
● Perform the following steps on the ingress:
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run mpls bfd enable
The capability of dynamically establishing BFD sessions for LDP LSPs is
enabled.
NOTE
After this step is complete, a BFD session can be established when the egress is
configured and the ingress sends LSP Ping Request packets carrying BFD TLV
objects.
d. Run commit
The configuration is committed.
● Perform the following steps on the egress:
a. Run system-view
The system view is displayed.
b. Run bfd
The BFD view is displayed.
c. Run mpls-passive
The capability of passively creating a BFD session is enabled.
NOTE
After this command is run, the egress does not create a BFD session immediately.
Instead, the egress waits for an LSP ping request carrying the BFD TLV before
creating a BFD session.
d. Run commit
The configuration is committed.
----End
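The ingress and egress steps above can be sketched as follows (a sketch; device names are assumptions, and mpls bfd enable is assumed to be the MPLS-view command that enables dynamic BFD for LDP LSPs on the ingress):

```
# Ingress: enable dynamic BFD for LDP LSPs
<Ingress> system-view
[~Ingress] bfd
[~Ingress-bfd] quit
[~Ingress] mpls
[~Ingress-mpls] mpls bfd enable
[~Ingress-mpls] commit
# Egress: passively create BFD sessions on receiving LSP ping requests with BFD TLV
<Egress> system-view
[~Egress] bfd
[~Egress-bfd] mpls-passive
[~Egress-bfd] commit
```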
1.1.4.15.3 Configuring a Policy for Triggering Dynamic BFD for LDP LSPs
Configure a policy for dynamically establishing a BFD session to monitor LDP LSPs
and create a BFD session.
Context
A policy can be enforced to establish a dynamic BFD session for LDP LSPs in
either of the following modes:
● Host mode: applies when all host addresses can be used to establish a BFD
session. You can specify nexthop and outgoing-interface to define LSPs that
support a BFD session.
● FEC list mode: applies when only some host addresses can be used to
establish a BFD session.
You can use the fec-list command to specify host addresses. Perform the following
steps on the ingress of an LSP to be monitored:
Procedure
Step 1 Run system-view
----End
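The two trigger modes can be sketched as follows (a hedged sketch; the mpls bfd-trigger and fec-list command forms, the list name, and the addresses are assumptions based on this platform's LDP BFD commands):

```
# Host mode: all host addresses may trigger BFD sessions
[~Ingress] mpls
[~Ingress-mpls] mpls bfd-trigger host
[~Ingress-mpls] quit
# FEC list mode: only listed FECs trigger BFD sessions
[~Ingress] fec-list fl1
[~Ingress-fec-list-fl1] fec-node 3.3.3.9
[~Ingress-fec-list-fl1] quit
[~Ingress] mpls
[~Ingress-mpls] mpls bfd-trigger fec-list fl1
[~Ingress-mpls] commit
```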
Context
Perform the following steps on the ingress.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bfd
The BFD view is displayed.
Step 3 Run mpls ping interval interval
----End
Prerequisites
Dynamic BFD for LDP LSP has been configured.
Procedure
● Run the display mpls bfd session [ fec ip-address | nexthop ip-address |
outgoing-interface interface-type interface-number | protocol { rsvp-te |
ldp } ] [ verbose ] command to check BFD session information.
● Run the display bfd session all verbose command on the ingress to check
BFD session information.
● Run the display bfd session passive-dynamic verbose command on the
egress to check BFD session information.
----End
Usage Scenario
If the direct link of a local LDP session between two devices fails, an LDP
adjacency for the LDP session is torn down. The LDP session and related labels are
also deleted. After the direct link recovers, the LDP session can be reestablished
and distribute labels so that an LDP LSP over the session can converge. During this
process, LDP LSP traffic is dropped.
With LDP session protection configured, LDP establishes a remote adjacency when
establishing local adjacencies and uses both adjacencies to maintain LDP sessions.
If the direct link of an LDP session is faulty and other paths and routes are
available, the remote adjacency can be used to maintain the LDP session without
interruption. After the direct link recovers, the local outgoing label can still be
used, without being distributed by the downstream node again. The LDP session
does not need to be reestablished. This speeds up LDP LSP convergence and
reduces traffic loss.
Pre-configuration Tasks
Before configuring LDP session protection, complete the following task:
● Configure a local LDP session.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
----End
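A minimal sketch of enabling LDP session protection in the MPLS-LDP view (the session protection command form and the duration value are assumptions, since the enabling step is not shown above):

```
[~HUAWEI] mpls ldp
[~HUAWEI-mpls-ldp] session protection duration 300
[~HUAWEI-mpls-ldp] commit
```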
Usage Scenario
You can configure LDP-IGP synchronization to prevent traffic loss after the primary
LSP fails on the network where primary and backup links exist. The details are as
follows:
● After the faulty primary link recovers, the IGP route converges faster than the
LDP LSP. As a result, LSP traffic is discarded because IGP traffic is switched
back to the primary link before the LSP along the primary link is
reestablished.
● When the primary link is working properly but an LDP session or an LDP
adjacency between nodes along the primary link fails, LSP traffic is discarded
because LSP traffic is switched from the primary link to the backup link,
whereas IGP traffic is still transmitted through the primary link.
Pre-configuration Tasks
Before configuring LDP-IGP synchronization, complete the following tasks:
● Configure basic IGP (OSPF or IS-IS) functions.
● Enable MPLS.
● Enable MPLS LDP globally and on each interface.
Context
LDP-IGP synchronization can be enabled in either of the following modes:
● Enable LDP-IGP synchronization in the interface view.
LDP-IGP synchronization can be enabled on specified interfaces if a few
interfaces need to support this function.
● Enable LDP-IGP synchronization in an IGP process.
After LDP-IGP synchronization is enabled in an IGP process, it is automatically
enabled on all interfaces in the process. If LDP is enabled on all IGP-enabled
interfaces of a node, this configuration mode is recommended.
Procedure
● Enable LDP-IGP synchronization in the interface view.
If OSPF is used, perform the following steps on the interfaces on both ends of
the link between the node where the primary LSP and the backup LSP diverge
from each other and its LDP peer on the primary LSP.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The interface view is displayed.
c. Run ospf ldp-sync
LDP-OSPF synchronization is enabled for the interface.
e. Run commit
The configuration is committed.
NOTE
When LDP-IGP synchronization and LDP GTSM are configured on an interface, an LDP
session needs to be established over a non-direct link. Therefore, the number of GTSM
hops must be set based on the actual hop count and cannot be set to 1. If the number of
GTSM hops is set to 1, the LDP session cannot be established. As a result, the route cannot
be switched back or LDP-IGP synchronization fails.
● Enable LDP-IGP synchronization in an IGP process.
If OSPF is used, perform the following steps on the node on which the
primary LSP and the backup LSP diverge from each other and its LDP peer on
the primary LSP:
a. Run system-view
The system view is displayed.
b. Run ospf [ process-id ]
The OSPF process is started, and the OSPF view is displayed.
process-id specifies an OSPF process. If the process-id parameter is not
specified, the default process ID 1 is used. To associate an OSPF process
with a VPN instance and run OSPF in the VPN instance, run the ospf
[ process-id | vpn-instance vpn-instance-name ] * command. If a VPN
instance is specified, the OSPF process belongs to the specified instance.
Otherwise, the OSPF process belongs to the global instance.
c. Run area area-id
The OSPF area view is displayed.
d. Run ldp-sync enable
LDP-OSPF synchronization is enabled.
e. Run commit
The configuration is committed.
If IS-IS is used, perform the following steps on the node on which the primary
LSP and the backup LSP diverge from each other and its LDP peer on the
primary LSP:
a. Run system-view
The system view is displayed.
b. Run isis [ process-id ]
The specified IS-IS process is started, and the IS-IS view is displayed.
process-id specifies an IS-IS process ID. If the process-id parameter is not
specified, the default process ID 1 is used. To associate an IS-IS process
with a VPN instance, run the isis [ process-id ] [ vpn-instance vpn-
instance-name ] command.
c. Run ldp-sync enable
LDP-IS-IS synchronization is enabled.
d. Run commit
The configuration is committed.
----End
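The two enabling modes above can be sketched as follows (interface names and process IDs are assumptions):

```
# Interface mode (OSPF)
[~HUAWEI] interface GigabitEthernet0/1/0
[~HUAWEI-GigabitEthernet0/1/0] ospf ldp-sync
[~HUAWEI-GigabitEthernet0/1/0] quit
# Process mode (OSPF area)
[~HUAWEI] ospf 1
[~HUAWEI-ospf-1] area 0
[~HUAWEI-ospf-1-area-0.0.0.0] ldp-sync enable
[~HUAWEI-ospf-1-area-0.0.0.0] quit
[~HUAWEI-ospf-1] quit
# Process mode (IS-IS)
[~HUAWEI] isis 1
[~HUAWEI-isis-1] ldp-sync enable
[~HUAWEI-isis-1] commit
```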
Context
After LDP-IGP synchronization is enabled in an IGP process using the ldp-sync
enable command, it is enabled on all interfaces whose neighbor status is up on a
P2P network, enabled on all interfaces whose neighbor status is up between a DR
and a non-DR/BDR on an OSPF-enabled broadcast network, and enabled on all
interfaces whose neighbor status is up between a DIS and a non-DIS on an IS-IS-
enabled broadcast network.
If the interfaces on a device carry key services, ensure that the backup path does
not pass through this device. The NE9000 allows you to block LDP-IGP
synchronization on a specified interface.
Procedure
● If OSPF is used, perform the following configuration on the interfaces on both
ends of the link between the node where the primary LSP and the backup LSP
diverge from each other and its LDP peer on the primary LSP.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The view of an OSPF interface is displayed.
c. Run ospf ldp-sync block
LDP-OSPF synchronization is blocked on the interface.
----End
Context
On a device that has LDP-IGP synchronization enabled, if the active physical link
recovers, the IGP enters the Hold-down state, and a Hold-down timer starts.
Before the Hold-down timer expires, the IGP delays establishing an IGP neighbor
relationship until an LDP session and an LDP adjacency are reestablished over the
active link, so that the LDP session and the IGP route for the active link become
available simultaneously.
NOTE
A Hold-down timer can be set on either an OSPF or IS-IS interface. At the process level, it
can be set only in an IS-IS process, not in an OSPF process.
If different Hold-down timer values are set on an interface and in an IS-IS process, the
setting on the interface takes effect.
Procedure
● Set a value for the Hold-down timer on a specified OSPF interface.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The interface view is displayed.
c. Run ospf timer ldp-sync hold-down value
A value is set for the Hold-down timer, which enables an OSPF interface
to delay establishing an OSPF neighbor relationship until the
reestablishment of an LDP session and an LDP adjacency.
d. Run commit
The configuration is committed.
● Set a value for the Hold-down timer on a specified IS-IS interface.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The interface view is displayed.
c. Run isis timer ldp-sync hold-down value
A value is set for the Hold-down timer, which enables an IS-IS interface
to delay establishing an IS-IS neighbor relationship until the
reestablishment of an LDP session and an LDP adjacency.
d. Run commit
The configuration is committed.
● Set a value for the Hold-down timer in an IS-IS process.
a. Run system-view
The system view is displayed.
b. Run isis [ process-id ]
The IS-IS view is displayed.
c. Run timer ldp-sync hold-down value
A value is set for the Hold-down timer, which enables all IS-IS interfaces
in an IS-IS process to delay establishing IS-IS neighbor relationships
before the establishment of LDP sessions and LDP adjacencies.
d. Run commit
The configuration is committed.
----End
Context
Select parameters based on networking requirements:
● If IGP routes carry only LDP services, specify the infinite parameter to ensure
that the behavior for IGP routes is always consistent with that for an LDP LSP.
● If IGP routes carry multiple types of services, including LDP services, set a
specific time value to ensure that an LDP session or adjacency teardown does
not affect IGP route selection or other services.
Procedure
● If OSPF is used, perform the following configuration on the interfaces on both
ends of the link between the node where the primary LSP and the backup LSP
diverge from each other and its LDP peer on the primary LSP:
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The interface view is displayed.
c. Run ospf timer ldp-sync hold-max-cost { value | infinite }
The period during which the interface advertises the maximum link cost
in local LSAs is set.
The hold-max-cost timer value determines the period in which the local
node advertises the maximum link cost in local LSAs.
● If IS-IS is used, perform the following configuration on the interfaces on both
ends of the link between the node where the primary LSP and the backup LSP
diverge from each other and its LDP peer on the primary LSP:
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The interface view is displayed.
c. Run isis timer ldp-sync hold-max-cost { value | infinite }
The period in which the IS-IS interface advertises the maximum link cost
in local LSPs is set.
The hold-max-cost timer value determines the period in which the local
node advertises the maximum link cost in local LSPs.
d. Run commit
The configuration is committed.
----End
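For example, to keep advertising the maximum link cost until LDP convergence is complete on a link that carries only LDP services (the interface name is an assumption; infinite matches the parameter-selection advice above):

```
[~HUAWEI] interface GigabitEthernet0/1/0
[~HUAWEI-GigabitEthernet0/1/0] ospf timer ldp-sync hold-max-cost infinite
[~HUAWEI-GigabitEthernet0/1/0] commit
```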
Procedure
● In the MPLS-LDP view:
a. Run system-view
The system view is displayed.
b. Run mpls ldp
The MPLS-LDP view is displayed.
c. Run igp-sync-delay timer value
The Delay timer value is set. This value determines the period during
which the device waits for LSP establishment after an LDP session is
established.
d. Run commit
The configuration is committed.
● In the interface view:
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The interface view is displayed.
The Delay timer value is set. This value determines the period during
which the device waits for LSP establishment after an LDP session is
established.
d. Run commit
----End
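A minimal sketch of the MPLS-LDP view mode (the timer value is an assumption):

```
[~HUAWEI] mpls ldp
[~HUAWEI-mpls-ldp] igp-sync-delay timer 15
[~HUAWEI-mpls-ldp] commit
```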
Context
LDP graceful deletion can be configured in the LDP-IGP synchronization or LDP
FRR scenario to speed up traffic switching. It helps implement uninterrupted traffic
transmission during traffic switching, which improves reliability of the entire
network.
If the physical and protocol status of the primary link is normal but the LDP
session on the primary link goes down, LDP-IGP synchronization enables LDP to
inform the IGP of the fault, and the IGP advertises the maximum cost of the
primary link. Without graceful deletion, LDP immediately instructs the upstream
device to withdraw labels, and traffic can be forwarded only after a new LSP is
established over the backup link, which prolongs LSP convergence. As a result,
packet loss occurs.
With LDP graceful deletion configured, after the LDP session on the faulty link
goes down, LDP does not immediately instruct the upstream device to withdraw
labels; instead, it keeps the labels and the LSP and allows traffic to be transmitted
on the primary link until LSP convergence is complete on the backup link. This
ensures uninterrupted traffic transmission and speeds up LDP-IGP
synchronization.
Procedure
Step 1 Run system-view
Step 4 (Optional) Run graceful-delete timer timer
The graceful deletion timer value is set.
After the LDP session goes down, LDP does not instruct the upstream device to
withdraw labels until the graceful delete timer expires.
NOTE
If the value of the graceful delete timer is too large, the invalid LSP will be kept for a long time,
consuming system resources.
----End
Prerequisites
LDP-IGP synchronization has been configured.
Procedure
● Run the display mpls ldp command to check the global LDP configuration.
● Run the display isis [ process-id ] ldp-sync interface command to check the
synchronization states of interfaces on which LDP-IS-IS synchronization has
been enabled.
● Run the display ospf ldp-sync interface { all | interface-type interface-
number } command to check the synchronization states of interfaces on
which LDP-OSPF synchronization has been enabled.
----End
Usage Scenario
In LDP GR, a Restarter, with the help of the Helper, ensures uninterrupted
forwarding during an active main board (AMB)/standby main board (SMB)
switchover or when a protocol is restarted.
By default, NSR is used on a device with double main control boards installed.
Pre-configuration Tasks
Before configuring LDP GR, complete the following tasks:
● Configure IGP GR.
● Configure a local LDP session.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
LDP is enabled on the local LSR, and the MPLS-LDP view is displayed.
Step 3 Run graceful-restart
LDP GR is enabled.
----End
Context
Timers associated with LDP GR are as follows:
● Reconnect timer: After the GR restarter performs an active/standby
switchover, the GR Helper detects that the LDP session with the GR Restarter
fails, starts the Reconnect timer, and waits for the reestablishment of the LDP
session.
– If the Reconnect timer expires before the LDP session between the GR
Helper and Restarter is established, the GR Helper immediately deletes
MPLS forwarding entries associated with the GR Restarter and exits from
the GR Helper process.
– If the LDP session between the GR Helper and the GR Restarter is
established before the Reconnect timer times out, the GR Helper deletes
the timer and starts the Recovery timer.
● Recovery timer: After an LDP session is reestablished, the GR Helper starts the
Recovery timer and waits for the LSP to recover.
– If the Recovery timer expires, the GR Helper considers that the GR
process on the neighbor is complete and deletes non-restored LSPs.
– If all LSPs are restored before the Recovery timer expires, the GR Helper
considers that the GR process is complete on the neighbor after the
Recovery timer expires.
● Neighbor-liveness timer: indicates the LDP GR time.
NOTE
Changing the value of an LDP GR timer also causes an LDP session to be reestablished.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 Run graceful-restart timer reconnect time
The Reconnect timer value is set.
The Reconnect timer value that takes effect is the smaller value between the
Neighbor-liveness timer value configured on the GR Helper and the Reconnect
timer value configured on the GR Restarter.
Step 4 Run graceful-restart timer recovery time
The Recovery timer value is set.
The Recovery timer value that takes effect is the smaller value between the
Recovery timer value configured on the GR Helper and the Recovery timer value
configured on the GR Restarter.
Step 5 Run graceful-restart timer neighbor-liveness time
The Neighbor-liveness timer value is set.
When negotiating the reconnection time of an LDP session during LDP GR, the
device uses the smaller value between the Neighbor-liveness timer value
configured on the GR helper and the Reconnect timer value configured on the GR
restarter.
Step 6 Run commit
The configuration is committed.
----End
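The GR timers above might be tuned as follows (the values are assumptions; as noted above, changing a GR timer causes the LDP session to be reestablished):

```
[~HUAWEI] mpls ldp
[~HUAWEI-mpls-ldp] graceful-restart
[~HUAWEI-mpls-ldp] graceful-restart timer reconnect 300
[~HUAWEI-mpls-ldp] graceful-restart timer recovery 300
[~HUAWEI-mpls-ldp] graceful-restart timer neighbor-liveness 600
[~HUAWEI-mpls-ldp] commit
```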
Procedure
● Run the display mpls ldp command to check information about LDP.
● Run the display mpls ldp session [ all ] [ verbose ] command to check
information about LDP sessions.
----End
Usage Scenario
As user networks and the scope of network services continue to expand, load-
balancing techniques are usually used to improve bandwidth between nodes. A
great amount of traffic results in load imbalance on transit nodes. To address this
problem, the entropy label capability can be configured to improve load balancing.
The entropy label feature applies to public network LDP tunnels in service
scenarios such as IPv4/IPv6 over LDP, L3VPNv4/v6 over LDP, VPLS/VPWS over LDP,
and EVPN over LDP.
Context
After the entropy label function is enabled on the LSR, the LSR uses IP header
information to generate an entropy label and adds the label to the packets. The
entropy label is used as a key value by a transit node to load-balance traffic. If the
length of a data frame carried in a packet exceeds the parsing capability, the LSR
fails to parse the IP header or generate an entropy label. Perform the following
operations on the LSR:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run load-balance identify entropy-label
The LSR is enabled to deeply parse IP packets.
Step 3 Run commit
The configuration is committed.
----End
Context
The growth of user networks worsens the load imbalance on transit nodes. To
address this problem, the entropy label capability can be configured on the ingress
of an LSP. After an LDP tunnel with the entropy label capability is negotiated using
LDP, forwarding entries can carry the flag that supports the entropy label
capability to implement load balancing.
Procedure
Step 1 Run system-view
Step 4 (Optional) To configure LDP to negotiate the entropy label capability only based
on the primary LSP, perform the following steps:
1. Run the ipv4-family command to enter the MPLS-LDP-IPv4 view.
2. Run the entropy-label negotiate primary-lsp-only [ ip-prefix ip-prefix-
name ] command to configure LDP to negotiate the entropy label capability
only based on the primary LSP.
If there are primary and backup paths, you can perform this step on the
ingress or transit node of an LSP to prevent an LDP tunnel entropy label
negotiation failure.
----End
Context
The growth of user networks worsens the load imbalance on transit nodes. To
address this problem, the entropy label capability can be configured. When the
entropy label capability is configured, it must also be enabled on the egress.
Procedure
Step 1 Run system-view
----End
Prerequisites
The LDP entropy label capability has been configured.
Procedure
● Run the display mpls lsp protocol ldp verbose command to check the
entropy label information of tunnels.
----End
Usage Scenario
LDP over TE is a technique used to establish LDP LSPs across an RSVP TE domain
and provide services for a VPN. When deploying MPLS TE on a network
transmitting VPN services, a carrier may have difficulty deploying TE across the
entire network. Instead, the carrier can plan a core area in which TE is deployed
and run LDP on the PEs at the edge of the TE area.
NOTE
If the IGP route used by LDP is switched from a TE interface to a non-TE tunnel interface,
ensure that IGP and LDP are configured on the non-TE tunnel interface. Otherwise, the LDP
LSP may fail to be established after the switchover, causing service interruptions.
Pre-configuration Tasks
Before configuring LDP over TE, complete the following tasks:
Context
During path calculation in a scenario where IGP shortcut is configured, the device
calculates an SPF tree based on the paths in the IGP physical topology, and then
finds the SPF nodes on which shortcut tunnels are configured. If the metric of a TE
tunnel is smaller than that of an SPF node, the device replaces the outbound
interfaces of the routes to this SPF node and those of the other routes passing
through the SPF node with the TE tunnel interface.
Procedure
Step 1 Run system-view
Step 3 Run mpls te igp shortcut [ isis | ospf ] or mpls te igp shortcut isis hold-time
interval
IGP shortcut is configured.
hold-time interval specifies the period after which IS-IS responds to the Down
status of the TE tunnel.
If a TE tunnel goes Down and this parameter is not specified, IS-IS recalculates
routes immediately. If this parameter is specified, IS-IS responds to the Down
status of the TE tunnel only after the specified interval elapses. Whether IS-IS
then recalculates routes depends on the TE tunnel status:
● If the TE tunnel has gone Up again, IS-IS does not recalculate routes.
● If the TE tunnel is still Down, IS-IS recalculates routes.
----End
Follow-up Procedure
If a network fault occurs, IGP convergence is triggered. In this case, a transient
forwarding status inconsistency may occur among nodes because of their different
convergence rates, which poses the risk of microloops. To prevent microloops,
perform the following steps:
NOTE
Before you enable the OSPF TE tunnel anti-microloop function, configure CR-LSP backup
parameters.
● For IS-IS, run the following commands in sequence.
a. Run system-view
The system view is displayed.
b. Run isis [ process-id ]
An IS-IS process is created, and the IS-IS process view is displayed.
c. Run avoid-microloop te-tunnel
The IS-IS TE tunnel anti-microloop function is enabled.
d. (Optional) Run avoid-microloop te-tunnel rib-update-delay rib-update-
delay
The delay in delivering the IS-IS routes whose outbound interface is a TE
tunnel interface is set.
e. Run commit
The configuration is committed.
● For OSPF, run the following commands in sequence.
a. Run system-view
The system view is displayed.
b. Run ospf [ process-id ]
The OSPF view is displayed.
c. Run avoid-microloop te-tunnel
The OSPF TE tunnel anti-microloop function is enabled.
d. (Optional) Run avoid-microloop te-tunnel rib-update-delay rib-update-
delay
The delay in delivering the OSPF routes whose outbound interface is a TE
tunnel interface is set.
e. Run commit
The configuration is committed.
Context
A routing protocol performs bidirectional detection on a link. The forwarding
adjacency needs to be enabled on both ends of a tunnel. The forwarding
adjacency allows a node to advertise a CR-LSP route to other nodes. Another
tunnel for transferring data packets in the reverse direction must be configured.
Procedure
Step 1 Run system-view
NOTE
Set proper IGP metrics for TE tunnels to ensure that LSP routes are correctly advertised and
used. The metric of a TE tunnel should be smaller than that of an IGP route that is not
expected for use.
Step 5 You can select either of the following modes to enable the forwarding adjacency.
● For IS-IS, run the isis enable [ process-id ] command to enable the IS-IS
process of the tunnel interface.
● For OSPF, run the following commands in sequence.
a. Run the ospf enable [ process-id ] area { area-id | areaidipv4 } command
to enable OSPF on the tunnel interface.
b. Run the quit command to return to the system view.
c. Run the ospf [ process-id ] command to enter the OSPF view.
d. Run the enable traffic-adjustment advertise command to enable the
forwarding adjacency.
----End
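An IS-IS sketch of the forwarding adjacency (the tunnel number, metric, and process ID are assumptions; mpls te igp advertise is assumed to be the tunnel-interface command that advertises the CR-LSP as a link, and the metric is kept smaller than the IGP routes that should not be preferred, per the note above):

```
[~HUAWEI] interface Tunnel10
[~HUAWEI-Tunnel10] mpls te igp advertise
[~HUAWEI-Tunnel10] mpls te igp metric absolute 5
[~HUAWEI-Tunnel10] isis enable 1
[~HUAWEI-Tunnel10] commit
```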
Context
If the destination address of the TE tunnel is not the LSR ID of the egress, the
interface with the destination address must be enabled with LDP.
Procedure
Step 1 Run system-view
----End
Context
A policy can be configured to enable LDP to use eligible routes to trigger the
establishment of public-network ingress and egress LSPs.
NOTE
Each LSR must have route entries that exactly match FECs for the LSPs to be established.
Procedure
Step 1 Run system-view
● If the triggering policy is all, all static routes and IGP routes are used to
trigger LDP to establish LSPs. The device does not use public network BGP
routes to trigger LDP LSP establishment.
● If the ip-prefix parameter is specified, only FECs matching a specified IP
address prefix list can trigger LDP to establish LSPs.
● If the none parameter is specified, LDP is not triggered to establish LSPs.
NOTE
● By default, 32-bit addresses are used to trigger LDP to establish LSPs. The default
configuration is recommended. Running the lsp-trigger all command is not
recommended, as this command enables LDP LSPs to be established for all static routes
and IGP routes. As a result, a large number of LSPs are established, consuming excessive
label resources and slowing down LSP convergence on the entire network. You are
advised to run the lsp-trigger ip-prefix command instead.
● If the triggering policy is changed from all to host, LSPs that have been established
using host routes are not reestablished.
----End
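For example, to trigger LSP establishment only for FECs matching an IP prefix list, as recommended in the note above (the prefix list name and address are assumptions):

```
[~HUAWEI] ip ip-prefix ldp-fec permit 3.3.3.9 32
[~HUAWEI] mpls
[~HUAWEI-mpls] lsp-trigger ip-prefix ldp-fec
[~HUAWEI-mpls] commit
```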
Prerequisites
LDP over TE has been configured.
Procedure
● Run the display mpls ldp lsp [ destination-address mask-length ] command
to check information about the LDP LSP on the ingress.
----End
Usage Scenario
MD5 authentication, LDP GTSM, or keychain authentication can be configured on
an MPLS network to meet network security requirements:
NOTE
For security purposes, you are advised not to use weak security algorithms in this feature. If
you need to use such an algorithm, run the undo crypto weak-algorithm disable
command to enable the weak security algorithm function.
● LDP MD5 authentication
A typical MD5 application is to calculate a message digest to prevent
message spoofing. The MD5 message digest is a unique result calculated
using an irreversible character string conversion. If a message is modified
during transmission, a different digest is generated. After the message arrives
at the receiving end, the receiving end can detect the modification after
comparing the received digest with a pre-computed digest.
When configuring MD5 authentication, you can configure different
authentication modes (plaintext or ciphertext) for the two peers of an LDP
session. The passwords on the two peers, however, must be the same.
NOTE
As MD5 is insecure, you are advised to use a more secure authentication mode.
● LDP keychain authentication
Keychain, an enhanced encryption algorithm similar to MD5, calculates a
message digest for an LDP message to prevent the message from being
modified.
Keychain allows users to define a group of passwords to form a password
string. Each password is assigned encryption and decryption algorithms, such
as MD5 and secure hash algorithm-1 (SHA-1), and a validity period. The
system selects a valid password before sending or receiving a packet. Within
the validity period of the password, the system uses the encryption algorithm
matching the password to encrypt the packet before sending it. The system
also uses the decryption algorithm matching the password to decrypt the
packet before accepting the packet. In addition, the system automatically uses
a new password after the previous password expires, which minimizes
password decryption risks.
Pre-configuration Tasks
Before configuring LDP security features, complete the following tasks:
Context
MD5 authentication can be configured for a TCP connection over which an LDP
session is established to improve security. Two peers of an LDP session can be
configured with different authentication modes but must be configured with the
same password.
You can configure either LDP MD5 authentication or LDP keychain authentication
to match your scenario:
● The MD5 algorithm is easy to configure and generates a single password,
which can only be changed manually. MD5 authentication applies to
networks requiring short-period encryption.
● Keychain authentication involves a set of passwords, which can be
automatically switched based on the configuration. However, keychain
authentication is complex to configure and applies to networks requiring high
security.
NOTE
LDP authentication configurations are prioritized in descending order: for a single peer, for
a specified peer group, and for all peers. Both keychain and MD5 authentication can be
configured. However, configurations with a higher priority override those with a lower
priority, and those with the same priority are mutually exclusive. For example, if MD5
authentication is configured for Peer 1 and keychain authentication is configured for all LDP
peers, MD5 authentication takes effect on Peer 1, and keychain authentication takes effect
on the other peers.
As MD5 is insecure, you are advised to use a more secure authentication mode.
Procedure
● Configure LDP MD5 authentication for a single LDP peer.
a. Run system-view
The system view is displayed.
For security purposes, you are advised not to use weak security
algorithms in this feature. If you need to use such an algorithm, run the
undo crypto weak-algorithm disable command to enable the weak
security algorithm function.
NOTE
● The new password is at least eight characters long and contains at least two
of the following types: upper-case letters, lower-case letters, digits, and
special characters, except the question mark (?) and space.
● For security purposes, you are advised to configure a password in ciphertext
mode. To further improve device security, periodically change the password.
d. (Optional) Run authentication exclude peer peer-id
The device is disabled from authenticating a specified LDP peer.
e. Run commit
The configuration is committed.
----End
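As a minimal sketch, the MD5 authentication steps above can be combined as follows. The peer LSR IDs (2.2.2.2 and 3.3.3.3) and the password are illustrative, and the md5-password command syntax is taken from typical VRP releases; verify it against the command reference for your software version.

```
<HUAWEI> system-view
[~HUAWEI] mpls ldp
[*HUAWEI-mpls-ldp] md5-password cipher 2.2.2.2 YsHsjx_2023
[*HUAWEI-mpls-ldp] authentication exclude peer 3.3.3.3
[*HUAWEI-mpls-ldp] commit
```

The cipher keyword stores the password in ciphertext, which is the recommended mode. The authentication exclude peer command is optional and disables authentication only for the specified peer.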
Context
To help improve LDP session security, keychain authentication can be configured
for a TCP connection over which an LDP session has been established.
During keychain authentication, a group of passwords are defined in the format of
a password string, and each password is associated with a specified encryption
and decryption algorithm, such as MD5 or secure hash algorithm-1 (SHA-1), and
is assigned with a validity period. The system selects a valid password based on
the user configuration before sending or receiving a packet. Based on the validity
period of the password, the system uses the encryption algorithm matching the
password to encrypt the packet before sending it, and uses the decryption
algorithm matching the password to decrypt the packet before accepting it. In
addition, the system automatically switches to a new valid password based on the
password validity period, which minimizes password decryption risks if the
password is not changed for a long time.
You can configure either LDP MD5 authentication or LDP keychain authentication
as required:
● The MD5 algorithm is easy to configure and generates a single password,
which can only be changed manually. MD5 authentication applies to
networks requiring short-period encryption.
● Keychain authentication involves a set of passwords, which can be
automatically switched based on the configuration. However, keychain
authentication is complex to configure and applies to networks requiring high
security.
NOTE
LDP authentication configurations are prioritized in descending order: for a single peer, for
a specified peer group, and for all peers. Keychain authentication and MD5 authentication
are mutually exclusive for configurations with the same priority. Keychain authentication
and MD5 authentication can be configured simultaneously for LDP peers with different
priorities, but only the configuration with a higher priority takes effect for a specified LDP
peer. For example, if MD5 authentication is configured for Peer 1 and then keychain
authentication is configured for all peers, MD5 authentication remains effective on Peer 1.
Keychain authentication takes effect on other peers.
As MD5 is insecure, you are advised to use a more secure authentication mode.
Procedure
● Configure LDP keychain authentication for a single peer.
a. Run system-view
d. Run commit
Pre-configuration Tasks
A remote LDP peer relationship can be established across multiple devices. To
secure the sending and receiving of Hello messages and prevent relationship
establishment with unauthorized peers, you can configure LDP keychain
authentication for Targeted Hello messages.
Before configuring LDP keychain authentication for a UDP connection, complete
the following task:
● Configure a global keychain.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
LDP keychain authentication is enabled for Targeted Hello, and the configured
keychain name is referenced.
NOTE
This command supports only the keychain authentication using a strong encryption
algorithm (SHA-256, HMAC-SHA-256, or SM3) but not a weak encryption algorithm.
----End
Pre-configuration Tasks
A TCP Authentication Option (TCP-AO) is used to authenticate received and to-be-sent packets during TCP
session establishment and data exchange. It supports packet integrity check to
prevent TCP replay attacks. After creating a TCP-AO, specify the peer that needs to
reference the TCP-AO and the name of the TCP-AO in the MPLS LDP view. This
enables the TCP-AO to be referenced, and the LDP session to be encrypted. You
can specify multiple peers to reference the same TCP-AO.
A TCP-AO uses the passwords configured in the bound keychain, and these
passwords can be automatically switched based on the configuration. However,
the configuration process is complex and applies to networks with high security
requirements.
Procedure
Step 1 Run system-view
A key ID is created for the TCP-AO, and the TCP-AO key ID view is displayed.
NOTE
The value of tcpaoname must be the same as that of the TCP-AO created in Step 2.
NOTE
For the same peer, the authentication modes TCP-AO, MD5, and keychain are mutually
exclusive.
Configuring LDP TCP-AO authentication may cause the reestablishment of LDP sessions.
----End
Context
The GTSM checks TTL values to verify packets and defends devices against attacks.
LDP peers with the GTSM and a valid TTL range configured check TTLs in LDP
messages exchanged between them. If the TTL in an LDP message is out of the
valid range, the message is considered invalid and discarded. The GTSM
defends against CPU overload attacks initiated using a large number of forged
packets and protects upper-layer protocols.
Procedure
Step 1 Run system-view
If the value of hops is set to the maximum number of valid hops permitted by the
GTSM, packets sent by an LDP peer are accepted only when the TTL values they
carry are within the range [255 – hops + 1, 255]; otherwise, the packets are
discarded. For example, if hops is 2, only packets whose TTL is 254 or 255 are
accepted.
----End
Context
When traffic bursts occur in the LDP service, bandwidth may be preempted among
LDP sessions. To resolve this problem, you can configure whitelist session-CAR for
LDP to isolate bandwidth resources by session. If the default parameters of
whitelist session-CAR for LDP do not meet service requirements, you can adjust
them as required.
Procedure
Step 1 Run system-view
By default, whitelist session-CAR for LDP is enabled. You are advised to keep this
function enabled unless it does not work properly.
----End
Context
The micro-isolation CAR function is enabled for LDP by default to isolate CPCAR
channels between LDP peers and implement micro-isolation protection for LDP
connection establishment packets. When traffic bursts occur in the LDP service,
the packets of LDP peers may preempt the bandwidth. To prevent this issue, you
are advised to keep this function enabled.
Procedure
Step 1 Run system-view
In normal cases, you are advised to use the default values of these parameters.
pir-value must be greater than or equal to cir-value, and pbs-value must be
greater than or equal to cbs-value.
By default, this function is enabled. You can run the micro-isolation protocol-car
ldp disable command to disable micro-isolation protection for LDP packets. In
normal cases, you are advised to keep micro-isolation CAR enabled for LDP.
----End
Prerequisites
LDP security features have been configured.
Procedure
● Run the display mpls ldp session verbose command to check the
configurations of LDP MD5 authentication and LDP keychain authentication.
● Run the display gtsm statistics { slot-id | all } command to check GTSM
statistics.
● Run the display cpu-defend whitelist session-car { ldp-tcp | ldp-udp-local |
ldp-udp-remote } statistics slot slot-id command to check the statistics
about whitelist session-CAR for LDP on a specified interface board.
To facilitate the query of statistics in a new period, run the reset cpu-defend
whitelist session-car { ldp-tcp | ldp-udp-local | ldp-udp-remote } statistics
slot slot-id command to clear the existing statistics about whitelist session-
CAR for LDP on a specified interface board. Then, check the statistics after a
certain period.
----End
Usage Scenario
Traditional core networks and backbone networks generally use IP/MPLS to
transmit service packets. For unicast packets, this deployment is highly flexible and
provides sufficient reliability and traffic engineering capabilities. The proliferation
of applications, such as IPTV, video conference, and massively multiplayer online
role-playing games (MMORPGs), amplifies demands on multicast transmission
over IP/MPLS networks. The existing P2P MPLS technology requires a transmit end
to deliver the same data packet to each receive end, which wastes bandwidth
resources.
To address this problem, deploy mLDP P2MP tunnels on IP/MPLS networks. P2MP
LDP establishes a tree-shaped tunnel from a root node to multiple leaf nodes and
directs multicast traffic from the root node to the tunnel for forwarding. In actual
forwarding, only one copy of the packet is sent on the root node, and the packet is
replicated on the branch node. This ensures that the bandwidth is not repeatedly
occupied.
Table 1-20 Comparison between manual and automatic mLDP P2MP tunnels
Pre-configuration Tasks
Before configuring an mLDP P2MP tunnel, complete the following tasks:
● Configure network layer parameters to implement connectivity.
● Configure MPLS LDP on each node to establish LDP sessions.
Context
Manually configure the root and leaf nodes to trigger the establishment of a
manual mLDP P2MP tunnel.
Procedure
● Enable mLDP P2MP globally.
a. Run system-view
The system view is displayed.
b. Run mpls ldp
The MPLS-LDP view is displayed.
c. Run mldp p2mp
mLDP P2MP is enabled globally.
d. (Optional) Run mldp make-before-break
The mLDP make-before-break (MBB) capability is enabled.
If the optimal route between a non-root node and a root node on an
mLDP P2MP network changes, the non-root node re-selects an upstream
node and by default tears down the current P2MP LSP. As a result, traffic
is dropped before a new P2MP LSP is established. To prevent traffic loss,
the mLDP MBB capability can be enabled. If the optimal route to the root
node changes, the node does not delete the original P2MP LSP until a
new P2MP LSP is established. This minimizes traffic loss.
e. Run commit
The configuration is committed.
● (Optional) Prevent mLDP from using the default route to establish tunnels.
In a scenario where mLDP P2MP uses intra-AS routes or inter-AS BGP routes
to reach tunnel root nodes, you can configure mLDP not to use the default
route 0.0.0.0/0 to establish an mLDP tunnel if such a tunnel is not expected.
a. Run system-view
The system view is displayed.
b. Run mpls
MPLS is enabled, and the MPLS view is displayed.
c. Run quit
Return to the system view.
d. Run mpls ldp
The MPLS-LDP view is displayed.
e. Run mldp p2mp
mLDP P2MP is enabled globally.
f. Run mldp default-route-match ignore
mLDP is disabled from using the default route to establish tunnels.
g. Run commit
The configuration is committed.
----End
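Putting the steps above together, a minimal configuration sketch for a node that enables mLDP P2MP with MBB and ignores the default route is as follows (prompts are illustrative):

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] quit
[*HUAWEI] mpls ldp
[*HUAWEI-mpls-ldp] mldp p2mp
[*HUAWEI-mpls-ldp] mldp make-before-break
[*HUAWEI-mpls-ldp] mldp default-route-match ignore
[*HUAWEI-mpls-ldp] commit
```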
Follow-up Procedure
Statically add the tunnel interface to an IGMP multicast group to allow multicast
traffic to be steered into the mLDP tunnel.
Context
There is no need to manually specify leaf nodes before automatic mLDP P2MP
tunnels are triggered.
NOTE
After the configuration is complete, the device is ready to set up automatic mLDP tunnels.
Automatic mLDP P2MP tunnels are established automatically when NG MVPN or
multicast VPLS is deployed.
Procedure
● Enable mLDP P2MP globally.
a. Run system-view
The system view is displayed.
b. Run mpls ldp
The MPLS-LDP view is displayed.
c. Run mldp p2mp
mLDP P2MP is enabled globally.
d. (Optional) Run mldp make-before-break
The mLDP make-before-break (MBB) capability is enabled.
If the optimal route between a non-root node and a root node on an
mLDP P2MP network changes, the non-root node re-selects an upstream
node and by default tears down the current P2MP LSP. As a result, traffic
is dropped before a new P2MP LSP is established. To prevent traffic loss,
the mLDP MBB capability can be enabled. If the optimal route to the root
node changes, the node does not delete the original P2MP LSP until a
new P2MP LSP is established. This minimizes traffic loss.
e. Run commit
The configuration is committed.
● (Optional) Disable mLDP P2MP on an interface.
To flexibly control the path of a P2MP LSP, you can disable mLDP P2MP on a
specified interface.
Disabling mLDP P2MP on an interface helps you plan a network. For example,
if links balance traffic on a network, to enable P2MP traffic to travel along a
specific link, disable mLDP P2MP on the interfaces connected to other links.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The interface view is displayed.
c. Run mpls mldp p2mp disable
mLDP P2MP is disabled on the interface.
Disabling mLDP P2MP on an interface affects the establishment of P2MP
LSPs, but does not cause the reestablishment of other P2P LDP sessions.
d. Run commit
The configuration is committed.
● (Optional) Configure a tunnel establishment policy that allows mLDP to
select an upstream node based on peer IDs.
By default, a leaf or transit node selects the next hop on the optimal route to
the root node as its upstream node during P2MP tunnel establishment. If
routes work in load balancing mode, more than one such upstream node
exists, and a leaf or transit node randomly selects an upstream node among
the candidates. If you want a leaf or transit node to select a specific upstream
node, configure a tunnel establishment policy that allows mLDP to select an
upstream node based on peer IDs — an upstream node with the largest or
smallest peer ID.
Both NG MVPN over mLDP P2MP and VPLS over mLDP P2MP have the dual-
root 1+1 protection mechanism. If the routes to the primary and backup roots
work in load balancing mode and share some links, an upstream node may be
selected by mLDP for both the primary and backup mLDP tunnels. In this
case, if the shared link where the selected upstream node resides becomes
faulty, dual-root 1+1 protection fails to take effect. To prevent such a
protection failure in the scenario with co-routed primary and backup mLDP
tunnels, run the mldp upstream-lsr-select highest command for one tunnel
and the mldp upstream-lsr-select lowest command for the other tunnel.
a. Run system-view
The system view is displayed.
b. Run ip ip-prefix ip-prefix-name { permit | deny } ip-address
A policy for selecting tunnels to specified root nodes is configured.
c. Run mpls
MPLS is enabled, and the MPLS view is displayed.
d. Run mpls ldp
The MPLS-LDP view is displayed.
e. Run mldp p2mp
mLDP P2MP is enabled globally.
f. Configure a tunnel establishment policy that allows mLDP to select an
upstream node based on peer IDs.
NOTE
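For the dual-root 1+1 scenario described above, the upstream node selection policy can be sketched as follows on a leaf or transit node. The root address 1.1.1.1 is hypothetical, and the command (step f) that associates the IP prefix list with the selection policy varies by software version; check your command reference. The tunnel toward the backup root would use mldp upstream-lsr-select lowest instead.

```
<HUAWEI> system-view
[~HUAWEI] ip ip-prefix root-primary permit 1.1.1.1 32
[*HUAWEI] mpls
[*HUAWEI-mpls] quit
[*HUAWEI] mpls ldp
[*HUAWEI-mpls-ldp] mldp p2mp
[*HUAWEI-mpls-ldp] mldp upstream-lsr-select highest
[*HUAWEI-mpls-ldp] commit
```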
Context
To improve reliability of traffic transmitted along a P2MP tunnel, configure the
following reliability enhancement functions as needed:
● Rapid MPLS P2MP switching
With this function, if a device detects a fault in the active link, the device
rapidly switches services to the standby link over which an MPLS P2MP tunnel
is established, which improves service reliability.
● Multicast load balancing on a trunk interface
Without this function, a device randomly selects a trunk member interface to
forward multicast traffic. If this member interface fails, multicast traffic is
interrupted. With this function, multicast traffic along a P2MP tunnel is
balanced among all trunk member interfaces. This function helps improve
service reliability and increase available bandwidth for multicast traffic.
● MPLS P2MP load balancing
To enable P2MP load balancing globally, run the mpls p2mp force-
loadbalance enable command. In a multicast scenario where load balancing
is configured in the Eth-Trunk interface view, if a leaf node connected to the
Eth-Trunk interface joins or leaves the multicast group, packet loss occurs on
the other leaf nodes connected to non-Eth-Trunk interfaces due to the
model change. After the mpls p2mp force-loadbalance enable command is
run, load balancing is forcibly enabled in the system view, thereby
preventing packet loss.
● WTR time for traffic to be switched from the MPLS P2MP FRR path to the
primary path.
If the primary MPLS P2MP path fails, traffic on the forwarding plane is rapidly
switched to the backup path. After the primary path recovers, traffic is switched
back to it only after the configured WTR time elapses, which prevents traffic
flapping when the primary path is unstable.
Procedure
● Configure rapid MPLS P2MP switching.
a. Run system-view
The system view is displayed.
b. Run mpls p2mp fast-switch enable
Rapid MPLS P2MP switching is enabled.
c. Run commit
The configuration is committed.
● Configure multicast load balancing on a trunk interface.
a. Run system-view
The system view is displayed.
b. Run interface eth-trunk trunk-id or interface ip-trunk trunk-id
The Eth-Trunk interface view is displayed.
c. Run multicast p2mp load-balance enable
Multicast traffic load balancing among trunk member interfaces is
enabled on the trunk interface that functions as an outbound interface of
a P2MP tunnel.
d. Run commit
The configuration is committed.
● Configure MPLS P2MP load balancing.
a. Run system-view
The system view is displayed.
b. Run mpls p2mp force-loadbalance enable
MPLS P2MP load balancing is enabled globally.
c. (Optional) Run multicast p2mp load-balance number load-
balance_number
The number of trunk member interfaces that balance multicast traffic on
a P2MP tunnel is set.
d. Run commit
The configuration is committed.
● Set the WTR time for traffic to be switched from the MPLS P2MP FRR path to
the primary path.
a. Run system-view
The system view is displayed.
b. Run mpls p2mp frr-wtr time-value
The WTR time is set for traffic to be switched from the MPLS P2MP FRR
path to the primary path.
c. Run commit
The configuration is committed.
----End
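Assuming Eth-Trunk 1 is the outbound interface of the P2MP tunnel and a WTR time of 60 (both illustrative; check the valid range and unit of mpls p2mp frr-wtr in your command reference), the reliability functions above can be combined as follows:

```
<HUAWEI> system-view
[~HUAWEI] mpls p2mp fast-switch enable
[*HUAWEI] interface eth-trunk 1
[*HUAWEI-Eth-Trunk1] multicast p2mp load-balance enable
[*HUAWEI-Eth-Trunk1] quit
[*HUAWEI] mpls p2mp force-loadbalance enable
[*HUAWEI] mpls p2mp frr-wtr 60
[*HUAWEI] commit
```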
Context
On a network with the mLDP MBB capability enabled, if mLDP FRR is disabled and
an outbound interface fails, you can enable the capability of establishing a best-effort
path for an mLDP P2MP tunnel so that traffic can still be forwarded.
Procedure
Step 1 Run system-view
----End
Context
If the network topology changes, to prevent the paths of an mLDP P2MP tunnel
from converging on a few links and causing traffic congestion, perform mLDP
P2MP re-optimization as appropriate. If better upstream or downstream nodes are
available, the mLDP P2MP tunnel is reestablished over the updated path. To set
the interval at which re-optimization is performed after a topology change,
perform the following steps.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 Run mldp p2mp
mLDP P2MP is enabled globally.
Step 4 Run mldp reoptimize timer reoptimize-time-value
An mLDP re-optimization timer value is set.
After this step is performed, reoptimize-time-value becomes the interval between
the network topology change and the actual mLDP re-optimization.
Step 5 Run commit
The configuration is committed.
----End
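For example, to make mLDP wait a given time after a topology change before re-optimizing (the timer value 120 is illustrative; check the valid range and unit for your release):

```
<HUAWEI> system-view
[~HUAWEI] mpls ldp
[*HUAWEI-mpls-ldp] mldp p2mp
[*HUAWEI-mpls-ldp] mldp reoptimize timer 120
[*HUAWEI-mpls-ldp] commit
```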
Prerequisites
An mLDP P2MP LSP has been configured.
Procedure
● Run the ping multicast-lsp mldp p2mp root-ip root-ip-address { lsp-id lsp-id
| opaque-value opaque-value } command to check mLDP P2MP LSP
connectivity on the root node.
● Run the display mpls mldp lsp p2mp [ root-ip root-ip-address { lsp-id lsp-id
| opaque-value opaque-value } ] command to check P2MP LSP signaling
information on the local node.
● Run the display mpls multicast-lsp protocol mldp p2mp [ root-ip root-ip-
address { lsp-id lsp-id | opaque-value opaque-value } ] [ lsr-role { bud |
ingress | transit | egress } ] command to check P2MP LSP forwarding
information on the local node.
----End
Usage Scenario
As user services continue to grow, the demand for using mLDP LSPs to carry
multicast traffic is increasing, and so is the impact of link faults on mLDP LSPs.
mLDP P2MP FRR link protection can be configured to prevent packet loss caused
by link faults.
NOTE
mLDP P2MP FRR link protection does not support backup links on a TE tunnel.
Pre-configuration Tasks
Before configuring mLDP P2MP FRR link protection, configure an automatic
P2MP TE tunnel.
Context
mLDP P2MP FRR link protection configured in the MPLS-LDP view speeds up
convergence when link faults are detected, which minimizes traffic loss.
Procedure
Step 1 Run system-view
NOTE
After mLDP FRR link protection is enabled and a link fault is rectified, traffic is switched
back only after a delay, and the backup path must remain unchanged during this delay.
When the link fault is rectified, IGP routes converge quickly, causing mLDP to recalculate a
new backup path; during the calculation, the old backup path is deleted. To prevent packet
loss caused by the deletion of the old backup path, run the mpls p2mp frr-wtr command to
set the hold-off time for the maximum IGP cost to a value greater than the mLDP FRR
switchback delay.
----End
1.1.4.23.2 Enabling the Detection of Traffic with New mLDP MBB Incoming Labels
After the detection of traffic with new mLDP MBB incoming labels is enabled, an
MBB switchover can be performed as soon as possible after a fault occurs,
reducing traffic loss.
Context
On a network with mLDP MBB enabled, if a network path fails, transient packet
loss occurs because of the delayed switching timer for MBB LSPs on a downstream
node. To prevent packet loss, enable the local device to monitor traffic with new
mLDP MBB incoming labels.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls ldp
The MPLS-LDP view is displayed.
Step 3 Run mldp p2mp
mLDP P2MP is enabled globally.
Step 4 Run mldp make-before-break
The mLDP make-before-break (MBB) capability is enabled.
Step 5 Run mldp make-before-break p2mp traffic-detect
The detection of traffic with new mLDP MBB incoming labels is enabled.
Step 6 Run commit
The configuration is committed.
----End
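The complete procedure can be sketched as follows (prompts are illustrative):

```
<HUAWEI> system-view
[~HUAWEI] mpls ldp
[*HUAWEI-mpls-ldp] mldp p2mp
[*HUAWEI-mpls-ldp] mldp make-before-break
[*HUAWEI-mpls-ldp] mldp make-before-break p2mp traffic-detect
[*HUAWEI-mpls-ldp] commit
```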
1.1.4.23.3 (Optional) Setting Timers for mLDP P2MP FRR Link Protection
Timers can be set for mLDP P2MP FRR link protection, which helps properly
perform a traffic switchover.
Context
The following timers can be set for mLDP P2MP FRR link protection:
● Delay timer for deleting mLDP P2MP LSP labels: This timer delays the deletion
of labels for a faulty mLDP P2MP LSP, preventing local link flapping from
spreading globally.
● Timer for a downstream node to wait for an MBB Notification message sent
by an upstream node: Within this timer period, a downstream node confirms
that a branch LSP is successfully established only after receiving an MBB
Notification message replied by an upstream node during MBB LSP
establishment. This timer sets the period of time for the downstream node to
wait for an MBB Notification message.
● Timer for delaying an MBB LSP switchover: After an MBB LSP is established,
LSP switching is delayed to ensure the proper traffic switching on the
forwarding and control planes.
Default timer values are recommended.
Procedure
● Configure a delay timer for deleting mLDP P2MP LSP labels.
a. Run system-view
The system view is displayed.
b. Run mpls ldp
The MPLS-LDP view is displayed.
c. Run mldp p2mp
mLDP P2MP is enabled globally.
d. Run mldp label-withdraw-delay delay-time-value
A delay timer is set for deleting mLDP P2MP LSP labels.
e. Run commit
The configuration is committed.
● Configure a timer for a downstream node to wait for an MBB Notification
message sent by an upstream node.
a. Run system-view
The system view is displayed.
b. Run mpls ldp
The MPLS-LDP view is displayed.
c. Run mldp p2mp
mLDP P2MP is enabled globally.
d. Run mldp make-before-break
The MBB capability is configured.
e. Run mldp make-before-break timer wait-ack wait-ack-time-value
A timer is set for a downstream node to wait for an MBB Notification
message sent by an upstream node.
f. Run commit
The configuration is committed.
● Configure a timer for delaying an MBB LSP switchover.
a. Run system-view
The system view is displayed.
b. Run mpls ldp
The MPLS-LDP view is displayed.
c. Run mldp p2mp
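As a sketch, the label-withdraw delay and wait-ack timers described above can be set together as follows; the values 30 and 10 are illustrative only, and the default timer values are recommended:

```
<HUAWEI> system-view
[~HUAWEI] mpls ldp
[*HUAWEI-mpls-ldp] mldp p2mp
[*HUAWEI-mpls-ldp] mldp label-withdraw-delay 30
[*HUAWEI-mpls-ldp] mldp make-before-break
[*HUAWEI-mpls-ldp] mldp make-before-break timer wait-ack 10
[*HUAWEI-mpls-ldp] commit
```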
Prerequisites
mLDP P2MP FRR has been configured.
Procedure
● Run the display mpls multicast-lsp protocol mldp p2mp command to check
P2MP multicast LSP information, including FRR LSP information.
----End
Usage Scenario
No tunnel protection is provided for mLDP P2MP tunnels. If an LSP fails, traffic
can be switched only through route change-induced hard convergence, which is
slow. BFD for mLDP P2MP tunnel applies to NG-MVPNs and VPLS
networks on which mLDP P2MP trees with primary and backup roots are
configured. If a P2MP tunnel fails, BFD for mLDP P2MP tunnel rapidly detects the
fault and switches traffic to the backup tunnel, which reduces traffic loss and
improves fault convergence performance in an NG-MVPN over mLDP P2MP
scenario or a VPLS over mLDP P2MP scenario.
Pre-configuration Tasks
Before configuring dynamic BFD to monitor an mLDP P2MP tunnel, complete the
following tasks:
Context
Perform the following steps on each root and leaf node:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bfd
BFD is enabled globally.
Step 3 Run quit
Return to the system view.
Step 4 Run mpls
The MPLS view is displayed.
Step 5 Run mpls mldp bfd enable
Dynamic BFD is enabled to monitor an mLDP P2MP tunnel.
Step 6 Run mpls mldp p2mp bfd-trigger-tunnel all
A policy is configured to dynamically establish a BFD session to monitor an mLDP
P2MP tunnel.
Step 7 Run commit
The configuration is committed.
----End
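On each root and leaf node, the steps above amount to the following sketch (prompts are illustrative):

```
<HUAWEI> system-view
[~HUAWEI] bfd
[*HUAWEI-bfd] quit
[*HUAWEI] mpls
[*HUAWEI-mpls] mpls mldp bfd enable
[*HUAWEI-mpls] mpls mldp p2mp bfd-trigger-tunnel all
[*HUAWEI-mpls] commit
```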
Context
Perform the following steps on each root and leaf node:
Procedure
Step 1 Run system-view
The system view is displayed.
The minimum interval at which BFD packets are sent, the minimum interval at
which BFD packets are received, and the BFD detection multiplier are set.
----End
Prerequisites
Dynamic BFD has been configured to monitor an mLDP P2MP tunnel.
Procedure
● Run the display mpls bfd session protocol mldp p2mp [ root-ip root-ip
{ lsp-id lsp-id | opaque-value opaque-value } ] [ bfd-type ldp-tunnel ]
command to check BFD session information.
----End
Usage Scenario
To obtain LDP LSP traffic statistics, configure LDP traffic statistics collection.
LDP traffic statistics contain only forwarded traffic data. Therefore, statistics
collection can be configured only on the ingress and transit nodes.
NOTE
LDP traffic statistics collection enables the ingress or a transit node to collect statistics only
about outgoing LDP LSP traffic with the destination IP address mask of 32 bits.
Pre-configuration Tasks
Before configuring LDP statistics collection, configure an LDP LSP.
Procedure
Step 1 Run system-view
MPLS traffic statistics collection is enabled globally, and the traffic statistics
collection view is displayed.
----End
Prerequisites
LDP statistics collection has been configured.
Procedure
● Run the display mpls ldp lsp traffic-statistics [ ipv4-address mask-length ]
[ verbose ] command to check LDP traffic statistics.
● Run the reset mpls traffic-statistics ldp [ ipv4-address mask-length ]
command to delete LDP traffic statistics.
----End
Usage Scenario
To obtain mLDP P2MP LSP traffic statistics, configure mLDP P2MP traffic statistics
collection.
Only statistics about the traffic forwarded on mLDP P2MP LSPs are collected.
Therefore, statistics collection can be configured only on the ingress, transit, or
bud nodes.
Pre-configuration Tasks
Before configuring mLDP P2MP traffic statistics collection, configure an mLDP
P2MP tunnel.
Procedure
Step 1 Run system-view
MPLS traffic statistics collection is enabled globally, and the traffic statistics
collection view is displayed.
----End
● Run the display mpls mldp lsp p2mp traffic-statistics [ root-ip root-ip { lsp-
id lsp-id | opaque-value opaque-value } | in-label in-label-value ] [ verbose ]
command to check statistics about traffic forwarded on a specified mLDP
P2MP LSP.
Follow-up Procedure
Before collecting statistics, run the reset mpls traffic-statistics mldp p2mp
[ root-ip root-ip { lsp-id lsp-id | opaque-value opaque-value } | in-label in-label-
value ] command to delete existing statistics.
1.1.4.27 Configuring the Uniform or Pipe Mode for the MPLS Penultimate
Hop
This section describes how to configure the uniform or pipe mode for the
MPLS penultimate hop.
Pre-configuration Tasks
Before configuring the uniform or pipe mode for the MPLS penultimate hop,
complete the following tasks:
Procedure
Step 1 Run system-view
NOTE
● The mode configured using this command takes effect only on new LSPs. To have the mode
take effect on existing LSPs, you need to run the reset mpls ldp command to reestablish the
LSPs.
● The command is run only on the penultimate hop to determine whether to copy the EXP
value of an outer label to the EXP value of an inner label.
----End
Context
If an LDP session goes down due to a protocol or interface fault, LDP immediately
attempts to reestablish the LDP session to ensure the fastest LDP hard
convergence. For an LDP session that alternates between up and down multiple
times within a period of time, the involved upstream and downstream LSPs are
frequently created and deleted, wasting resources. To prevent this problem, LDP
session flapping suppression is enabled by default. For a stable LDP network, you
can disable LDP session flapping suppression.
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Run the mpls command to enable MPLS globally and enter the MPLS view.
Step 3 Run the mpls ldp command to enable LDP globally and enter the MPLS-LDP view.
Step 4 Run the session suppress disable command to disable LDP session flapping
suppression.
Step 5 Run the commit command to commit the configuration.
----End
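For a stable LDP network, the suppression can be disabled as follows (keep it enabled otherwise; prompts are illustrative):

```
<HUAWEI> system-view
[~HUAWEI] mpls
[*HUAWEI-mpls] quit
[*HUAWEI] mpls ldp
[*HUAWEI-mpls-ldp] session suppress disable
[*HUAWEI-mpls-ldp] commit
```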
Context
If LDP interface flapping suppression is disabled and an interface frequently flaps,
LDP frequently sends Address and Address Withdraw messages to all LDP sessions.
If there are a large number of sessions, the CPU usage of the device increases,
causing protocol flapping. To prevent this problem, LDP interface flapping
suppression is enabled by default. For a stable LDP network, you can disable LDP
interface flapping suppression.
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Run the mpls command to enable MPLS globally and enter the MPLS view.
Step 3 Run the mpls ldp command to enable LDP globally and enter the MPLS-LDP view.
Step 4 Run the suppress-flapping interface disable command to disable LDP interface
flapping suppression.
Step 5 Run the commit command to commit the configuration.
----End
Context
Procedure
● To reset all LDP peers in global LDP instances, run the reset mpls ldp
command in the user view to make new configurations take effect.
● To reset a specified LDP peer, run the reset mpls ldp peer peer-id command
in the user view to make new configurations take effect.
● To reset all GR-capable LDP peers in global LDP instances, run the reset mpls
ldp graceful command in the user view to make new configurations take
effect, which implements uninterrupted service transmission during a restart.
● To reset a specified GR-capable LDP peer, run the reset mpls ldp peer peer-id
graceful command in the user view to make new configurations take effect,
which implements uninterrupted service transmission during a restart.
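For example, the reset commands can be used as follows (a usage sketch; the peer address 2.2.2.9 is an example value, not taken from this section):
<HUAWEI> reset mpls ldp peer 2.2.2.9
<HUAWEI> reset mpls ldp peer 2.2.2.9 graceful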
----End
Context
Run either of the following commands to perform MPLS ping or MPLS tracert
detection.
Procedure
● Run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m
interval | -r reply-mode | -s packet-size | -t time-out | -v ] * ip destination-
iphost mask-length [ ip-address ] command in any view to execute an MPLS
ping.
● Run the tracert lsp [ -a source-ip | -exp exp-value | -h ttl-value | -r reply-
mode | -t time-out | -s size ] * ip destination-iphost mask-length [ ip-address ]
[ detail ] command in any view to execute an MPLS tracert.
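For example, to test an LSP destined for 3.3.3.9/32 (example values; such an LDP LSP must already exist):
<LSRA> ping lsp -c 5 ip 3.3.3.9 32
<LSRA> tracert lsp ip 3.3.3.9 32 detail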
----End
Networking Requirements
All nodes support MPLS and run OSPF on the MPLS backbone network shown in
Figure 1-43. A static LSP needs to be established between LSRA and LSRC so that
the LSP functions as a public network tunnel to carry L2VPN and L3VPN services.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure OSPF to advertise
routes.
2. Enable MPLS globally and on interfaces on each node.
3. Configure a static LSP from LSRA to LSRC on the ingress, transit, and egress
nodes.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface (as shown in Figure 1-43), OSPF process ID, and
OSPF area ID
● Name of the static LSP
● Outgoing label value on each interface
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address and mask to each physical interface, and configure a
loopback interface address as the LSR ID on each node. For details, see the
configuration files in this section.
Step 2 Configure OSPF to advertise the route to the network segment to which each
interface is connected and the host route to each LSR ID.
# Configure LSRA.
[~LSRA] ospf 1
[*LSRA-ospf-1] area 0
[*LSRA-ospf-1-area-0.0.0.0] network 192.168.1.9 0.0.0.0
[*LSRA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*LSRA-ospf-1-area-0.0.0.0] quit
[*LSRA-ospf-1] quit
[*LSRA] commit
# Configure LSRB.
[~LSRB] ospf 1
[*LSRB-ospf-1] area 0
[*LSRB-ospf-1-area-0.0.0.0] network 192.168.2.9 0.0.0.0
[*LSRB-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*LSRB-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[*LSRB-ospf-1-area-0.0.0.0] quit
[*LSRB-ospf-1] quit
[*LSRB] commit
# Configure LSRC.
[~LSRC] ospf 1
[*LSRC-ospf-1] area 0
[*LSRC-ospf-1-area-0.0.0.0] network 192.168.3.9 0.0.0.0
[*LSRC-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[*LSRC-ospf-1-area-0.0.0.0] quit
[*LSRC-ospf-1] quit
[*LSRC] commit
The next-hop IP address and outbound interface of the LSRA-to-LSRC static
LSP destined for 192.168.3.9/32 are determined by the routing table. In this
example, the next-hop IP address is 10.1.1.2.
Step 3 Enable MPLS globally on each node.
# Configure LSRA.
[~LSRA] mpls lsr-id 192.168.1.9
[*LSRA] mpls
[*LSRA-mpls] quit
[*LSRA] commit
# Configure LSRB.
[~LSRB] mpls lsr-id 192.168.2.9
[*LSRB] mpls
[*LSRB-mpls] quit
[*LSRB] commit
# Configure LSRC.
[~LSRC] mpls lsr-id 192.168.3.9
[*LSRC] mpls
[*LSRC-mpls] quit
[*LSRC] commit
# Configure LSRA.
[~LSRA] interface gigabitethernet 1/0/0
[~LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] quit
[*LSRA] commit
# Configure LSRB.
[~LSRB] interface gigabitethernet 1/0/0
[~LSRB-GigabitEthernet1/0/0] mpls
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 2/0/0
[*LSRB-GigabitEthernet2/0/0] mpls
[*LSRB-GigabitEthernet2/0/0] quit
[*LSRB] commit
# Configure LSRC.
[~LSRC] interface gigabitethernet 1/0/0
[~LSRC-GigabitEthernet1/0/0] mpls
[*LSRC-GigabitEthernet1/0/0] quit
[*LSRC] commit
# Configure the static LSP AtoC on each node. On the ingress node LSRA:
[~LSRA] static-lsp ingress AtoC destination 192.168.3.9 32 nexthop 10.1.1.2 out-label 20
[*LSRA] commit
# On the transit node LSRB:
[~LSRB] static-lsp transit AtoC in-label 20 outgoing-interface gigabitethernet 2/0/0 nexthop 10.2.1.2 out-label 40
[*LSRB] commit
# On the egress node LSRC:
[~LSRC] static-lsp egress AtoC incoming-interface gigabitethernet 1/0/0 in-label 40
[*LSRC] commit
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 192.168.1.9
#
mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
#
interface LoopBack1
ip address 192.168.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 192.168.1.9 0.0.0.0
#
static-lsp ingress AtoC destination 192.168.3.9 32 nexthop 10.1.1.2 out-label 20
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 192.168.2.9
#
mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
#
interface LoopBack1
ip address 192.168.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
network 192.168.2.9 0.0.0.0
#
static-lsp transit AtoC in-label 20 outgoing-interface GigabitEthernet2/0/0 nexthop 10.2.1.2 out-label 40
#
return
Networking Requirements
In Figure 1-44, LSRA, LSRB, and LSRC function as core or edge devices on a
backbone network. Configure local LDP sessions for MPLS LDP services. The LSRs
can then exchange labels to establish LDP LSPs.
Configuration Notes
During the configuration, note the following:
● An LSR ID must be configured before you run other MPLS commands.
● LSR IDs can only be manually configured, and do not have default values.
● Using the IP address of a reachable loopback interface on an LSR as the LSR
ID is recommended.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure OSPF to advertise the
route to the network segment to which each interface is connected and the
host route to each LSR ID.
2. Enable MPLS and MPLS LDP globally on each LSR.
3. Enable MPLS on the interfaces of each LSR.
4. Enable MPLS LDP on the interfaces of both ends of each local LDP session.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface on each LSR (as shown in Figure 1-44), OSPF
process ID, and area ID
● LSR ID of each node
Procedure
Step 1 Assign an IP address to each interface and configure OSPF to advertise the route
to the network segment to which each interface is connected and the host route
to each LSR ID.
Assign an IP address to each interface (as shown in Figure 1-44), including the
loopback interfaces. Configure OSPF to advertise the route to the network
segment to which each interface is connected and the host route to each LSR ID.
Step 2 Enable MPLS and MPLS LDP globally on each LSR.
# Configure LSRA.
<LSRA> system-view
[~LSRA] mpls lsr-id 1.1.1.9
[*LSRA] mpls
[*LSRA-mpls] quit
[*LSRA] mpls ldp
[*LSRA-mpls-ldp] commit
[~LSRA-mpls-ldp] quit
# Configure LSRB.
<LSRB> system-view
[~LSRB] mpls lsr-id 2.2.2.9
[*LSRB] mpls
[*LSRB-mpls] quit
[*LSRB] mpls ldp
[*LSRB-mpls-ldp] commit
[~LSRB-mpls-ldp] quit
# Configure LSRC.
<LSRC> system-view
[~LSRC] mpls lsr-id 3.3.3.9
[*LSRC] mpls
[*LSRC-mpls] quit
[*LSRC] mpls ldp
[*LSRC-mpls-ldp] commit
[~LSRC-mpls-ldp] quit
Step 3 Enable MPLS and MPLS LDP on the interfaces of each LSR.
# Configure LSRA.
[~LSRA] interface gigabitethernet 1/0/0
[~LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls ldp
[*LSRA-GigabitEthernet1/0/0] commit
[~LSRA-GigabitEthernet1/0/0] quit
# Configure LSRB.
[~LSRB] interface gigabitethernet 1/0/0
[~LSRB-GigabitEthernet1/0/0] mpls
[*LSRB-GigabitEthernet1/0/0] mpls ldp
[*LSRB-GigabitEthernet1/0/0] commit
[~LSRB-GigabitEthernet1/0/0] quit
[~LSRB] interface gigabitethernet 2/0/0
[~LSRB-GigabitEthernet2/0/0] mpls
[*LSRB-GigabitEthernet2/0/0] mpls ldp
[*LSRB-GigabitEthernet2/0/0] commit
[~LSRB-GigabitEthernet2/0/0] quit
# Configure LSRC.
[~LSRC] interface gigabitethernet 1/0/0
[*LSRC-GigabitEthernet1/0/0] mpls
[*LSRC-GigabitEthernet1/0/0] mpls ldp
[*LSRC-GigabitEthernet1/0/0] commit
[~LSRC-GigabitEthernet1/0/0] quit
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.9
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 10.1.1.0 0.0.0.3
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.9
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.252
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.3
network 10.2.1.0 0.0.0.3
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.9
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.2.1.0 0.0.0.3
#
return
Networking Requirements
In Figure 1-45, LSRA and LSRC are on the edge of a backbone network. To deploy
VPN services over the backbone network, establish a remote LDP session between
LSRA and LSRC so that an LSP can be set up between them.
Configuration Notes
During the configuration, note the following:
● An LSR ID must be configured before you run other MPLS commands.
● LSR IDs can only be manually configured, and do not have default values.
● Using the IP address of a reachable loopback interface on an LSR as the LSR
ID is recommended.
● The IP address of a remote LDP peer must be the LSR ID of the remote LDP
peer. When an LDP LSR ID is different from an MPLS LSR ID, the LDP LSR ID
must be used.
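If the LDP LSR ID needs to differ from the MPLS LSR ID, it can be set in the MPLS-LDP view, for example (a sketch; the address 10.10.10.10 is hypothetical):
[~LSRA] mpls ldp
[~LSRA-mpls-ldp] lsr-id 10.10.10.10
[*LSRA-mpls-ldp] commit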
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure OSPF.
2. Enable MPLS and MPLS LDP globally on each node.
3. Specify the name and IP address of the remote peer on both ends of the
remote LDP session.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface (as shown in Figure 1-45), OSPF process ID, and
OSPF area ID
● LSR ID of each node
● Name and IP address of each remote peer of a remote LDP session
Procedure
Step 1 Assign an IP address to each interface.
According to Figure 1-45, assign an IP address to each interface, configure the
loopback interface addresses as LSR IDs, and configure OSPF to advertise the
route to the network segment to which each interface is connected and the host
route to each LSR ID. For configuration details, see the configuration files in this
section.
Step 2 Enable MPLS and MPLS LDP globally on each LSR.
# Configure LSRA.
<LSRA> system-view
[~LSRA] mpls lsr-id 1.1.1.9
[*LSRA] mpls
[*LSRA-mpls] quit
[*LSRA] mpls ldp
[*LSRA-mpls-ldp] commit
[~LSRA-mpls-ldp] quit
# Configure LSRC.
<LSRC> system-view
[~LSRC] mpls lsr-id 3.3.3.9
[*LSRC] mpls
[*LSRC-mpls] quit
[*LSRC] mpls ldp
[*LSRC-mpls-ldp] commit
[~LSRC-mpls-ldp] quit
Step 3 Specify the name and IP address of the remote peer on LSRs of both ends of a
remote LDP session.
# Configure LSRA.
[~LSRA] mpls ldp remote-peer LSRC
[*LSRA-mpls-ldp-remote-LSRC] remote-ip 3.3.3.9
[*LSRA-mpls-ldp-remote-LSRC] commit
[~LSRA-mpls-ldp-remote-LSRC] quit
# Configure LSRC.
[~LSRC] mpls ldp remote-peer LSRA
[*LSRC-mpls-ldp-remote-LSRA] remote-ip 1.1.1.9
[*LSRC-mpls-ldp-remote-LSRA] commit
[~LSRC-mpls-ldp-remote-LSRA] quit
# Run the display mpls ldp remote-peer command on either LSR of the remote
LDP session to view information about the remote peer of that LSR. The
following example shows the command output on LSRA.
<LSRA> display mpls ldp remote-peer
LDP Remote Entity Information
------------------------------------------------------------------------------
Remote Peer Name : LSRC
Description : ----
Remote Peer IP : 3.3.3.9 LDP ID : 1.1.1.9:0
Transport Address : 1.1.1.9 Entity Status : Active
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.9
#
mpls
#
mpls ldp
#
ipv4-family
#
Networking Requirements
On the network shown in Figure 1-46, LSRA, LSRB, and LSRC all function as core
devices or edge devices on a backbone network. On this network, after local LDP
sessions are set up between LSRA and LSRB, and between LSRB and LSRC, each
pair of LSRs can distribute labels to each other and establish LDP LSPs. MPLS
services can be transmitted along the LSPs.
Configuration Notes
During the configuration, note the following:
● Each LSR must have route entries that exactly match FECs for the LSPs to be
established.
● By default, the triggering policy is host, allowing a device to use host IP
routes with 32-bit addresses to trigger LDP LSP establishment.
● If the triggering policy is all, all IGP routes are used to trigger LDP LSP
establishment. The device does not use public network BGP routes to trigger
LDP LSP establishment.
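The triggering policy is set with the lsp-trigger command in the MPLS view. A prefix-based policy can be sketched as follows (the IP prefix list name hostRoutes is hypothetical and must be configured beforehand):
[~LSRA] mpls
[~LSRA-mpls] lsp-trigger ip-prefix hostRoutes
[*LSRA-mpls] commit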
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure local LDP sessions.
2. Change the policy for triggering LDP LSP establishment on each LSR.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface on each LSR (as shown in Figure 1-46), OSPF
process ID, and area ID
● Policy for triggering LDP LSP establishment
Procedure
Step 1 Configure an LDP LSP.
After you complete the task described in 1.1.4.29.2 Example for Configuring
Local LDP Sessions, each LSR uses the default LDP LSP triggering policy. That is,
each LSR uses host IP routes with 32-bit addresses to trigger LDP LSP
establishment.
# Run the display mpls ldp lsp command on each LSR. The command output
shows the LSR has successfully established LDP LSPs for all host routes.
The following example uses the command output on LSRA.
[~LSRA] display mpls ldp lsp
LDP LSP Information
-------------------------------------------------------------------------------
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
-------------------------------------------------------------------------------
DestAddress/Mask In/OutLabel UpstreamPeer NextHop OutInterface
-------------------------------------------------------------------------------
1.1.1.9/32 3/NULL 2.2.2.9 127.0.0.1 LoopBack1
*1.1.1.9/32 Liberal/3 DS/2.2.2.9
2.2.2.9/32 NULL/3 - 10.1.1.2 GE1/0/0
2.2.2.9/32 1024/3 2.2.2.9 10.1.1.2 GE1/0/0
3.3.3.9/32 NULL/1025 - 10.1.1.2 GE1/0/0
3.3.3.9/32 1025/1025 3.3.3.9 10.1.1.2 GE1/0/0
------------------------------------------------------------------------------
TOTAL: 5 Normal LSP(s) Found.
TOTAL: 1 Liberal LSP(s) Found.
TOTAL: 0 Frr LSP(s) Found.
An asterisk (*) before an LSP means the LSP is not established
An asterisk (*) before a Label means the USCB or DSCB is stale
An asterisk (*) before an UpstreamPeer means the session is stale
An asterisk (*) before a DS means the session is stale
An asterisk (*) before a NextHop means the LSP is FRR LSP
NOTE
The default triggering policy is recommended, as this allows a device to use host IP routes
with 32-bit addresses to trigger LDP LSP establishment. You can also perform the following
steps to change the policy for triggering LDP LSP establishment as required.
Step 2 Change the policy for triggering the establishment of LDP LSPs.
Change the policy for triggering LDP LSP establishment to all on each LSR so that
the LSR uses all static routes and IGP routes in the routing table to trigger LDP
LSP establishment.
# Configure LSRA.
[~LSRA] mpls
[~LSRA-mpls] lsp-trigger all
[*LSRA-mpls] commit
[~LSRA-mpls] quit
# Configure LSRB.
[~LSRB] mpls
[~LSRB-mpls] lsp-trigger all
[*LSRB-mpls] commit
[~LSRB-mpls] quit
# Configure LSRC.
[~LSRC] mpls
[~LSRC-mpls] lsp-trigger all
[*LSRC-mpls] commit
[~LSRC-mpls] quit
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.9
#
mpls
lsp-trigger all
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 10.1.1.0 0.0.0.3
#
return
Networking Requirements
After MPLS LDP is enabled on each interface, LDP LSPs can be automatically
established, including a great number of unnecessary transit LSPs, which wastes
resources. On the network shown in Figure 1-47, after a policy for triggering the
establishment of transit LSPs is configured, LSRB only uses the routes to 4.4.4.4/32
to establish a transit LSP.
Configuration Notes
During the configuration, note the following:
By default, LDP establishes transit LSPs for all routes, without filtering them.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure OSPF to advertise the
route to the network segment to which each interface is connected and the
host route to each LSR ID.
2. Configure an IP prefix list to limit the routes for which transit LSPs can be
established.
3. Enable MPLS and MPLS LDP globally on each LSR and configure a policy of
triggering LSP establishment.
4. Configure LSRB (transit node) to use the IP prefix list to limit the routes for
which transit LSPs can be established.
5. Enable MPLS and MPLS LDP on each interface.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface on each node (as shown in Figure 1-47), OSPF
process ID, and area ID
● Policy for triggering LSP establishment
● IP prefix list name and the routes to be filtered on the transit node
Procedure
Step 1 Assign an IP address to each interface and configure OSPF to advertise the route
to the network segment to which each interface is connected and the host route
to each LSR ID.
# Assign an IP address to each interface (as shown in Figure 1-47), including the
loopback interfaces. Configure OSPF to advertise the route to the network
segment to which each interface is connected and the host route to each LSR ID.
Step 2 Configure an IP prefix list on the transit node LSRB.
# Configure an IP prefix list on LSRB to allow LSRB to establish a transit LSP only
for the route 4.4.4.4/32 to LSRD.
[~LSRB] ip ip-prefix FilterOnTransit permit 4.4.4.4 32
[*LSRB] commit
Step 3 Configure basic MPLS and MPLS LDP functions on each node and interface, and
configure a policy for triggering LSP establishment.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] quit
[*LSRA] mpls ldp
[*LSRA-mpls-ldp] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls ldp
[*LSRA-GigabitEthernet1/0/0] commit
[~LSRA-GigabitEthernet1/0/0] quit
# Configure LSRB.
[~LSRB] mpls lsr-id 2.2.2.2
[*LSRB] mpls
[*LSRB-mpls] quit
[*LSRB] mpls ldp
[*LSRB-mpls-ldp] propagate mapping for ip-prefix FilterOnTransit
[*LSRB-mpls-ldp] quit
[*LSRB] interface gigabitethernet 1/0/0
[*LSRB-GigabitEthernet1/0/0] mpls
[*LSRB-GigabitEthernet1/0/0] mpls ldp
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 2/0/0
[*LSRB-GigabitEthernet2/0/0] mpls
[*LSRB-GigabitEthernet2/0/0] mpls ldp
[*LSRB-GigabitEthernet2/0/0] commit
[~LSRB-GigabitEthernet2/0/0] quit
The configurations of LSRC and LSRD are similar to the configuration of LSRA.
Step 4 Verify the configuration.
Run the display mpls ldp lsp command to check LSP information.
# Display LDP LSPs established on LSRA.
[~LSRA] display mpls ldp lsp
LDP LSP Information
-------------------------------------------------------------------------------
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
-------------------------------------------------------------------------------
DestAddress/Mask In/OutLabel UpstreamPeer NextHop OutInterface
-------------------------------------------------------------------------------
1.1.1.1/32 3/NULL 2.2.2.2 127.0.0.1 LoopBack1
2.2.2.2/32 NULL/3 - 192.168.1.2 GE1/0/0
2.2.2.2/32 1025/3 2.2.2.2 192.168.1.2 GE1/0/0
4.4.4.4/32 NULL/1025 - 192.168.1.2 GE1/0/0
4.4.4.4/32 1026/1026 4.4.4.4 192.168.1.2 GE1/0/0
192.168.1.0/24 3/NULL 2.2.2.2 192.168.1.1 GE1/0/0
*192.168.1.0/24 Liberal/26 DS/2.2.2.2
192.168.2.0/24 NULL/3 - 192.168.1.2 GE1/0/0
192.168.2.0/24 1027/3 3.3.3.3 192.168.1.2 GE1/0/0
--------------------------------------------------------------------------
TOTAL: 8 Normal LSP(s) Found.
TOTAL: 1 Liberal LSP(s) Found.
TOTAL: 0 Frr LSP(s) Found.
An asterisk (*) before an LSP means the LSP is not established
An asterisk (*) before a Label means the USCB or DSCB is stale
An asterisk (*) before an UpstreamPeer means the session is stale
An asterisk (*) before a DS means the session is stale
An asterisk (*) before a NextHop means the LSP is FRR LSP
The command output on each node shows that an LDP LSP with LSRB as the
transit node is established only for the route 4.4.4.4/32, whereas LDP LSPs that
do not use LSRB as a transit node are established normally.
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 192.168.1.0 0.0.0.255
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls ldp
#
ipv4-family
propagate mapping for ip-prefix FilterOnTransit
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.2.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 192.168.1.0 0.0.0.255
network 192.168.2.0 0.0.0.255
#
ip ip-prefix FilterOnTransit index 10 permit 4.4.4.4 32
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.2.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.3.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 192.168.2.0 0.0.0.255
network 192.168.3.0 0.0.0.255
#
return
Networking Requirements
MPLS LDP services are deployed on the network shown in Figure 1-48. LSRD is a
low-performance DSLAM for user access. By default, LSRD receives Label Mapping
messages from all peers and uses the routing information in these messages to
establish a large number of LSPs. As a result, memory on LSRD is overused and
LSRD is overburdened. Configure an LDP inbound policy to allow LSRD to receive
only Label Mapping messages destined for LSRC. This ensures that LSRD
establishes LSPs only to LSRC, reducing resource consumption.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface, including the loopback interface on
each node.
2. Configure OSPF to advertise the route to the network segment of each
interface and to advertise the host route to each LSR ID.
3. Enable MPLS and MPLS LDP on each node and interfaces.
4. Configure an LDP inbound policy.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface on each LSR (as shown in Figure 1-48), OSPF
process ID, and area ID
● LSR ID of each node
Procedure
Step 1 Assign an IP address to each interface and configure an IGP.
Assign an IP address and mask to each interface (as shown in Figure 1-48),
including the loopback interfaces. Configure OSPF to advertise the route to the
network segment to which each interface is connected and the host route to each
LSR ID.
Step 2 Enable MPLS and MPLS LDP globally and on the interfaces of each node.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] quit
[*LSRA] mpls ldp
[*LSRA-mpls-ldp] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls ldp
[*LSRA-GigabitEthernet1/0/0] quit
[*LSRA] commit
# Configure LSRB.
[~LSRB] mpls lsr-id 2.2.2.2
[*LSRB] mpls
[*LSRB-mpls] quit
[*LSRB] mpls ldp
[*LSRB-mpls-ldp] quit
[*LSRB] interface gigabitethernet 1/0/0
[*LSRB-GigabitEthernet1/0/0] mpls
[*LSRB-GigabitEthernet1/0/0] mpls ldp
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 1/0/1
[*LSRB-GigabitEthernet1/0/1] mpls
[*LSRB-GigabitEthernet1/0/1] mpls ldp
[*LSRB-GigabitEthernet1/0/1] quit
[*LSRB] interface gigabitethernet 1/0/2
[*LSRB-GigabitEthernet1/0/2] mpls
[*LSRB-GigabitEthernet1/0/2] mpls ldp
[*LSRB-GigabitEthernet1/0/2] quit
[*LSRB] commit
# Configure LSRC.
[~LSRC] mpls lsr-id 3.3.3.3
[*LSRC] mpls
[*LSRC-mpls] quit
[*LSRC] mpls ldp
[*LSRC-mpls-ldp] quit
[*LSRC] interface gigabitethernet 1/0/0
[*LSRC-GigabitEthernet1/0/0] mpls
[*LSRC-GigabitEthernet1/0/0] mpls ldp
[*LSRC-GigabitEthernet1/0/0] quit
[*LSRC] commit
# Configure LSRD.
[~LSRD] mpls lsr-id 4.4.4.4
[*LSRD] mpls
[*LSRD-mpls] quit
[*LSRD] mpls ldp
[*LSRD-mpls-ldp] quit
[*LSRD] interface gigabitethernet 1/0/0
[*LSRD-GigabitEthernet1/0/0] mpls
[*LSRD-GigabitEthernet1/0/0] mpls ldp
[*LSRD-GigabitEthernet1/0/0] quit
[*LSRD] commit
# After completing the preceding configuration, run the display mpls lsp
command on LSRD to check information about established LSPs.
[~LSRD] display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.1/32 NULL/32829 -/GE1/0/0
1.1.1.1/32 32828/32829 -/GE1/0/0
2.2.2.2/32 NULL/3 -/GE1/0/0
2.2.2.2/32 32829/3 -/GE1/0/0
3.3.3.3/32 NULL/32830 -/GE1/0/0
The command output shows that LSPs to LSRA, LSRB, and LSRC have been
established on LSRD.
Step 3 Configure an LDP inbound policy.
# Configure an IP prefix list on LSRD to permit only the routes to LSRC.
[~LSRD] ip ip-prefix prefix1 permit 3.3.3.3 32
[*LSRD] commit
# Apply the IP prefix list in an inbound LDP policy so that LSRD accepts, from
the peer LSRB (2.2.2.2), only the Label Mapping messages for routes permitted
by prefix1.
[~LSRD] mpls ldp
[*LSRD-mpls-ldp] ipv4-family
[*LSRD-mpls-ldp-ipv4] inbound peer 2.2.2.2 fec ip-prefix prefix1
[*LSRD-mpls-ldp-ipv4] commit
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
● LSRD configuration file
#
sysname LSRD
#
mpls lsr-id 4.4.4.4
#
mpls
#
mpls ldp
ipv4-family
inbound peer 2.2.2.2 fec ip-prefix prefix1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.3.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
ospf 1
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.1.3.0 0.0.0.255
#
ip ip-prefix prefix1 index 10 permit 3.3.3.3 32
#
return
Networking Requirements
An IP metro or bearer network uses L2VPN or L3VPN to transmit high-speed
Internet (HSI) or voice over IP (VoIP) services over end-to-end public network LDP
LSPs. Generally, user-side DSLAMs have low performance and are easily
overloaded if a large number of LDP LSPs are established. To prevent this issue,
configure an outbound LDP policy to minimize the number of LDP LSPs to be
established, reducing DSLAM memory consumption and load.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure an IGP.
2. Enable MPLS and MPLS LDP globally on each node and on interfaces.
3. Configure an IP prefix list and an outbound LDP policy on LSRA so that LSRA
sends the DSLAM only the Label Mapping messages destined for LSRC.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface on each LSR (as shown in Figure 1-49), OSPF
process ID (1), and area ID (0.0.0.0)
● LSR ID (loopback interface IP address as shown in Figure 1-49) of each node
● Name of an IP prefix list (prefix1) to be specified in an outbound LDP policy
on LSRA
Procedure
Step 1 Assign an IP address to each interface and configure an IGP.
Assign an IP address and mask to each interface (as shown in Figure 1-49),
including the loopback interfaces. Configure OSPF to advertise the route to the
network segment to which each interface is connected and the host route to each
LSR ID.
Step 2 Enable MPLS and MPLS LDP globally on each node.
# Configure LSRA.
<LSRA> system-view
[~LSRA] mpls lsr-id 3.3.3.9
[*LSRA] mpls
[*LSRA-mpls] quit
[*LSRA] mpls ldp
[*LSRA-mpls-ldp] commit
[~LSRA-mpls-ldp] quit
# Configure LSRB.
<LSRB> system-view
[~LSRB] mpls lsr-id 2.2.2.9
[*LSRB] mpls
[*LSRB-mpls] quit
[*LSRB] mpls ldp
[*LSRB-mpls-ldp] commit
[~LSRB-mpls-ldp] quit
# Configure LSRC.
<LSRC> system-view
[~LSRC] mpls lsr-id 1.1.1.9
[*LSRC] mpls
[*LSRC-mpls] quit
[*LSRC] mpls ldp
[*LSRC-mpls-ldp] commit
[~LSRC-mpls-ldp] quit
# Configure an outbound policy on LSRA to send the DSLAM the Label Mapping
messages destined for LSRC only.
[~LSRA] ip ip-prefix prefix1 permit 1.1.1.9 32
[*LSRA] mpls ldp
[*LSRA-mpls-ldp] ipv4-family
[*LSRA-mpls-ldp-ipv4] outbound peer 4.4.4.9 fec ip-prefix prefix1
[*LSRA-mpls-ldp-ipv4] commit
# Configure LSRB.
<LSRB> system-view
[~LSRB] interface gigabitethernet1/0/1
[~LSRB-GigabitEthernet1/0/1] mpls
[*LSRB-GigabitEthernet1/0/1] mpls ldp
[*LSRB-GigabitEthernet1/0/1] quit
[*LSRB] interface gigabitethernet1/0/3
[*LSRB-GigabitEthernet1/0/3] mpls
[*LSRB-GigabitEthernet1/0/3] mpls ldp
[*LSRB-GigabitEthernet1/0/3] commit
[~LSRB-GigabitEthernet1/0/3] quit
# Configure LSRC.
<LSRC> system-view
[~LSRC] interface gigabitethernet1/0/1
[~LSRC-GigabitEthernet1/0/1] mpls
[*LSRC-GigabitEthernet1/0/1] mpls ldp
[*LSRC-GigabitEthernet1/0/1] commit
[~LSRC-GigabitEthernet1/0/1] quit
If no outbound LDP policy is configured on LSRA, the LDP LSPs established on the
DSLAM are as follows:
LDP LSP Information
-------------------------------------------------------------------------------
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
-------------------------------------------------------------------------------
DestAddress/Mask In/OutLabel UpstreamPeer NextHop OutInterface
-------------------------------------------------------------------------------
1.1.1.9/32 NULL/1025 - 10.1.3.1 GE1/0/1
1.1.1.9/32 1024/1025 3.3.3.9 10.1.3.1 GE1/0/1
2.2.2.9/32 NULL/1024 - 10.1.3.1 GE1/0/1
2.2.2.9/32 1027/1024 3.3.3.9 10.1.3.1 GE1/0/1
3.3.3.9/32 NULL/3 - 10.1.3.1 GE1/0/1
3.3.3.9/32 1028/3 3.3.3.9 10.1.3.1 GE1/0/1
4.4.4.9/32 3/NULL 3.3.3.9 127.0.0.1 LoopBack0
*4.4.4.9/32 Liberal/1026 DS/3.3.3.9
-------------------------------------------------------------------------------
TOTAL: 7 Normal LSP(s) Found.
TOTAL: 1 Liberal LSP(s) Found.
TOTAL: 0 Frr LSP(s) Found.
An asterisk (*) before an LSP means the LSP is not established
An asterisk (*) before a Label means the USCB or DSCB is stale
An asterisk (*) before an UpstreamPeer means the session is stale
An asterisk (*) before a DS means the session is stale
An asterisk (*) before a NextHop means the LSP is FRR LSP
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 3.3.3.9
#
mpls
#
mpls ldp
#
ipv4-family
outbound peer 4.4.4.9 fec ip-prefix prefix1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.1.3.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
Networking Requirements
The network shown in Figure 1-50 has two IGP areas: Area 10 and Area 20. Inter-
area LSPs need to be established from LSRA to LSRB and from LSRA to LSRC. LDP
extension for inter-area LSPs needs to be configured on LSRA so that LSRA can
search for routes based on the longest match rule to establish LSPs.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign IP addresses to interfaces on each node and configure the loopback
addresses to be used as LSR IDs.
2. Configure basic IS-IS functions.
3. Configure a policy for summarizing routes.
4. Enable MPLS and MPLS LDP on each node and interfaces.
5. Configure LDP extension for inter-area LSPs.
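The last roadmap item can be sketched as follows (a sketch using the longest-match command in the MPLS-LDP view, which makes LDP search for routes based on the longest match rule; verify the command against your software version):
[~LSRA] mpls ldp
[~LSRA-mpls-ldp] longest-match
[*LSRA-mpls-ldp] commit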
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface on each node, according to Figure 1-50
● IS-IS area ID of each node and level of each node and interface
Procedure
Step 1 Assign IP addresses to interfaces on each node and configure the loopback
addresses to be used as LSR IDs.
Assign an IP address and a mask to each interface (including loopback interfaces)
according to Figure 1-50.
Step 2 Configure basic IS-IS functions.
# Configure LSRA.
<LSRA> system-view
[~LSRA] isis 1
[*LSRA-isis-1] is-level level-2
[*LSRA-isis-1] network-entity 20.0010.0100.0001.00
[*LSRA-isis-1] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] isis enable 1
[*LSRA-GigabitEthernet1/0/0] quit
[*LSRA] interface loopback 0
[*LSRA-LoopBack0] isis enable 1
[*LSRA-LoopBack0] commit
[~LSRA-LoopBack0] quit
# Configure LSRD.
<LSRD> system-view
[~LSRD] isis 1
[*LSRD-isis-1] network-entity 10.0010.0200.0001.00
[*LSRD-isis-1] quit
[*LSRD] interface gigabitethernet 1/0/0
[*LSRD-GigabitEthernet1/0/0] isis enable 1
[*LSRD-GigabitEthernet1/0/0] isis circuit-level level-2
[*LSRD-GigabitEthernet1/0/0] quit
[*LSRD] interface gigabitethernet 1/0/1
[*LSRD-GigabitEthernet1/0/1] isis enable 1
[*LSRD-GigabitEthernet1/0/1] isis circuit-level level-1
[*LSRD-GigabitEthernet1/0/1] quit
[*LSRD] interface gigabitethernet 1/0/2
[*LSRD-GigabitEthernet1/0/2] isis enable 1
[*LSRD-GigabitEthernet1/0/2] isis circuit-level level-1
[*LSRD-GigabitEthernet1/0/2] quit
[*LSRD] interface loopback 0
[*LSRD-LoopBack0] isis enable 1
[*LSRD-LoopBack0] commit
[~LSRD-LoopBack0] quit
# Configure LSRB.
<LSRB> system-view
[~LSRB] isis 1
[*LSRB-isis-1] is-level level-1
[*LSRB-isis-1] network-entity 10.0010.0300.0001.00
[*LSRB-isis-1] quit
[*LSRB] interface gigabitethernet 1/0/0
[*LSRB-Gigabitethernet1/0/0] isis enable 1
[*LSRB-Gigabitethernet1/0/0] quit
[*LSRB] interface loopback 0
[*LSRB-LoopBack0] isis enable 1
[*LSRB-LoopBack0] commit
[~LSRB-LoopBack0] quit
# Configure LSRC.
<LSRC> system-view
[~LSRC] isis 1
[*LSRC-isis-1] is-level level-1
[*LSRC-isis-1] network-entity 10.0010.0300.0002.00
[*LSRC-isis-1] quit
[*LSRC] interface gigabitethernet 1/0/0
[*LSRC-Gigabitethernet1/0/0] isis enable 1
[*LSRC-Gigabitethernet1/0/0] quit
[*LSRC] interface loopback 0
[*LSRC-LoopBack0] isis enable 1
[*LSRC-LoopBack0] commit
[~LSRC-LoopBack0] quit
Step 3 Configure a policy for summarizing routes on LSRD. For configuration details, see
Configuration Files in this section.
After the configuration is complete, the host routes to LSRB and LSRC are
summarized.
Step 4 Configure MPLS and MPLS LDP globally and on interfaces on each node so that
the network can forward MPLS traffic. Then, check information about established
LSPs.
# Configure LSRA.
[~LSRA] mpls lsr-id 10.10.1.1
[~LSRA] mpls
[*LSRA-mpls] quit
[*LSRA] mpls ldp
[*LSRA-mpls-ldp] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-Gigabitethernet1/0/0] mpls
[*LSRA-Gigabitethernet1/0/0] mpls ldp
[*LSRA-Gigabitethernet1/0/0] commit
[~LSRA-Gigabitethernet1/0/0] quit
# Configure LSRD.
[~LSRD] mpls lsr-id 10.10.2.2
[~LSRD] mpls
[*LSRD-mpls] quit
[*LSRD] mpls ldp
[*LSRD-mpls-ldp] quit
[*LSRD] interface gigabitethernet 1/0/0
[*LSRD-Gigabitethernet1/0/0] mpls
[*LSRD-Gigabitethernet1/0/0] mpls ldp
[*LSRD-Gigabitethernet1/0/0] quit
[*LSRD] interface gigabitethernet 1/0/1
[*LSRD-Gigabitethernet1/0/1] mpls
[*LSRD-Gigabitethernet1/0/1] mpls ldp
[*LSRD-Gigabitethernet1/0/1] quit
[*LSRD] interface gigabitethernet 1/0/2
[*LSRD-Gigabitethernet1/0/2] mpls
[*LSRD-Gigabitethernet1/0/2] mpls ldp
[*LSRD-Gigabitethernet1/0/2] commit
[~LSRD-Gigabitethernet1/0/2] quit
# Configure LSRB.
[~LSRB] mpls lsr-id 10.10.3.1
[~LSRB] mpls
[*LSRB-mpls] quit
[*LSRB] mpls ldp
[*LSRB-mpls-ldp] quit
[*LSRB] interface gigabitethernet 1/0/0
[*LSRB-Gigabitethernet1/0/0] mpls
[*LSRB-Gigabitethernet1/0/0] mpls ldp
[*LSRB-Gigabitethernet1/0/0] commit
[~LSRB-Gigabitethernet1/0/0] quit
# Configure LSRC.
[~LSRC] mpls lsr-id 10.10.3.2
[~LSRC] mpls
[*LSRC-mpls] quit
[*LSRC] mpls ldp
[*LSRC-mpls-ldp] quit
[*LSRC] interface gigabitethernet 1/0/0
[*LSRC-Gigabitethernet1/0/0] mpls
[*LSRC-Gigabitethernet1/0/0] mpls ldp
[*LSRC-Gigabitethernet1/0/0] commit
[~LSRC-Gigabitethernet1/0/0] quit
# After completing the configuration, run the display mpls lsp command on LSRA
to check information about the established LSP.
[~LSRA] display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
10.10.2.2/32 NULL/3 -/Gigabitethernet1/0/0
10.10.2.2/32 1024/3 -/Gigabitethernet1/0/0
The preceding command output shows that by default, LDP does not establish
inter-area LSPs from LSRA to LSRB or from LSRA to LSRC.
Step 5 Configure LDP extension for inter-area LSPs.
# Run the longest-match command on LSRA to enable LDP to use the longest
match rule to search for routes to establish LSPs.
[~LSRA] mpls ldp
[*LSRA-mpls-ldp] longest-match
[*LSRA-mpls-ldp] commit
[~LSRA-mpls-ldp] quit
Run the display mpls lsp command on LSRA again. The command output shows
that LDP has established inter-area LSPs from LSRA to LSRB and from LSRA to
LSRC.
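The longest match rule that the longest-match command enables can be sketched in a few lines of Python (the route table below is illustrative, not taken from the device output): even though LSRA has no exact /32 routes to LSRB or LSRC, the summarized route covers their LSR IDs, so LSPs can be established.

```python
import ipaddress

def longest_match(dest, table):
    """Return the most specific route covering dest, as LDP's
    longest-match rule does when mapping a /32 FEC to a route."""
    dest = ipaddress.ip_address(dest)
    candidates = [ipaddress.ip_network(r) for r in table
                  if dest in ipaddress.ip_network(r)]
    return max(candidates, key=lambda n: n.prefixlen, default=None)

# Hypothetical routing table on LSRA after summarization.
routes = ["10.10.2.2/32", "10.10.3.0/24"]

# FEC 10.10.3.1/32 (LSRB's LSR ID) has no exact route on LSRA, but the
# summary 10.10.3.0/24 matches, so an inter-area LSP can be set up.
print(longest_match("10.10.3.1", routes))  # 10.10.3.0/24
```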
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 10.10.1.1
mpls
#
mpls ldp
longest-match
#
isis 1
is-level level-2
network-entity 20.0010.0100.0001.00
#
interface gigabitethernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 10.10.1.1 255.255.255.255
isis enable 1
#
return
● LSRD configuration file
#
sysname LSRD
#
mpls lsr-id 10.10.2.2
mpls
#
mpls ldp
#
isis 1
network-entity 10.0010.0200.0001.00
#
interface gigabitethernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
isis enable 1
isis circuit-level level-2
mpls
mpls ldp
#
interface gigabitethernet1/0/1
undo shutdown
ip address 10.1.2.1 255.255.255.0
isis enable 1
isis circuit-level level-1
mpls
mpls ldp
#
interface gigabitethernet1/0/2
undo shutdown
ip address 10.1.3.1 255.255.255.0
isis enable 1
isis circuit-level level-1
mpls
mpls ldp
#
interface LoopBack0
ip address 10.10.2.2 255.255.255.255
isis enable 1
#
ip ip-prefix permit-host index 10 permit 0.0.0.0 32
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 10.10.3.1
mpls
#
mpls ldp
#
isis 1
is-level level-1
network-entity 10.0010.0300.0001.00
#
interface gigabitethernet1/0/0
undo shutdown
ip address 10.1.2.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 10.10.3.1 255.255.255.255
isis enable 1
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 10.10.3.2
mpls
#
mpls ldp
#
isis 1
is-level level-1
network-entity 10.0010.0300.0002.00
#
interface gigabitethernet1/0/0
undo shutdown
ip address 10.1.3.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 10.10.3.2 255.255.255.255
isis enable 1
#
return
Networking Requirements
On the network shown in Figure 1-51, establish an LDP LSP along the path PE1 ->
P1 -> PE2, and an IP link along the path PE2 -> P2 -> PE1. Configure static BFD to
monitor the LDP LSP.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure OSPF.
2. Establish an LDP LSP along the path PE1 -> P1 -> PE2.
3. Enable BFD globally on PE1 and PE2.
4. On the ingress, configure a BFD session and bind it to the LDP LSP.
5. On the egress, configure a BFD session and bind it to the IP link so that the
egress can notify the ingress of LDP LSP faults.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces
● OSPF process ID
● BFD configuration name and local and remote discriminators
Procedure
Step 1 Assign an IP address to each interface and configure OSPF.
Assign an IP address and a mask to each interface (including loopback interfaces)
according to Figure 1-51.
Configure OSPF on all nodes to advertise the host route of each loopback
interface. For configuration details, see Configuration Files.
After completing the configuration, ping the LSR ID of each peer to check that the
LSRs interwork successfully. Run the display ip routing-table command on each
LSR to view the routes to the other LSRs.
<PE1> display ip routing-table
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : _public_
Destinations : 16 Routes : 16
Step 2 Establish an LDP LSP along the path PE1 -> P1 -> PE2.
# Configure PE1.
<PE1> system-view
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] mpls
[*PE1-GigabitEthernet1/0/0] mpls ldp
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure P1.
<P1> system-view
[~P1] mpls lsr-id 2.2.2.2
[*P1] mpls
[*P1-mpls] quit
[*P1] mpls ldp
[*P1-mpls-ldp] quit
[*P1] interface gigabitethernet 1/0/0
[*P1-GigabitEthernet1/0/0] mpls
[*P1-GigabitEthernet1/0/0] mpls ldp
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet 1/0/1
[*P1-GigabitEthernet1/0/1] mpls
[*P1-GigabitEthernet1/0/1] mpls ldp
[*P1-GigabitEthernet1/0/1] quit
[*P1] commit
# Configure PE2.
<PE2> system-view
[~PE2] mpls lsr-id 4.4.4.4
[*PE2] mpls
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] mpls
[*PE2-GigabitEthernet1/0/0] mpls ldp
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Run the display mpls ldp lsp command to check whether an LDP LSP destined
for 4.4.4.4/32 has been established on PE1.
<PE1> display mpls ldp lsp
LDP LSP Information
-------------------------------------------------------------------------------
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
-------------------------------------------------------------------------------
DestAddress/Mask In/OutLabel UpstreamPeer NextHop OutInterface
-------------------------------------------------------------------------------
1.1.1.1/32 3/NULL 2.2.2.2 127.0.0.1 LoopBack1
*1.1.1.1/32 Liberal/21 DS/2.2.2.2
2.2.2.2/32 NULL/3 - 10.1.1.2 GE1/0/0
2.2.2.2/32 16/3 2.2.2.2 10.1.1.2 GE1/0/0
4.4.4.4/32 NULL/22 - 10.1.1.2 GE1/0/0
4.4.4.4/32 17/22 2.2.2.2 10.1.1.2 GE1/0/0
-------------------------------------------------------------------------------
TOTAL: 5 Normal LSP(s) Found.
TOTAL: 1 Liberal LSP(s) Found.
TOTAL: 0 Frr LSP(s) Found.
An asterisk (*) before an LSP means the LSP is not established
An asterisk (*) before a Label means the USCB or DSCB is stale
An asterisk (*) before an UpstreamPeer means the session is stale
An asterisk (*) before a DS means the session is stale
An asterisk (*) before a NextHop means the LSP is FRR LSP
Step 3 Enable BFD globally.
# Configure PE1.
<PE1> system-view
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] commit
# Configure PE2.
<PE2> system-view
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] commit
Step 4 On the ingress, configure a BFD session and bind it to the LDP LSP.
# Configure PE1.
<PE1> system-view
[~PE1] bfd 1to4 bind ldp-lsp peer-ip 4.4.4.4 nexthop 10.1.1.2 interface gigabitethernet 1/0/0
[*PE1-bfd-lsp-session-1to4] discriminator local 1
[*PE1-bfd-lsp-session-1to4] discriminator remote 2
[*PE1-bfd-lsp-session-1to4] process-pst
[*PE1-bfd-lsp-session-1to4] commit
[~PE1-bfd-lsp-session-1to4] quit
Step 5 On the egress, configure a BFD session and bind it to the IP link, enabling the
egress to notify the ingress of LDP LSP faults.
# Configure PE2.
<PE2> system-view
[~PE2] bfd 4to1 bind peer-ip 1.1.1.1
[*PE2-bfd-session-4to1] discriminator local 2
[*PE2-bfd-session-4to1] discriminator remote 1
[*PE2-bfd-session-4to1] commit
[~PE2-bfd-session-4to1] quit
Run the display bfd session all verbose command on the egress. The (Multi
Hop) State field displays Up, and the BFD Bind Type field displays Peer IP
Address.
<PE2> display bfd session all verbose
(w): State in WTR
(*): State is invalid
--------------------------------------------------------------------------------
(Multi Hop) State : Up Name : 4to1
--------------------------------------------------------------------------------
Local Discriminator : 2 Remote Discriminator : 1
Session Detect Mode : Asynchronous Mode Without Echo Function
BFD Bind Type : Peer IP Address
Bind Session Type : Static
Bind Peer IP Address : 1.1.1.1
Bind Interface :-
Track Interface :-
Bind Source IP Address : 4.4.4.4
FSM Board Id :3 TOS-EXP :7
Min Tx Interval (ms) : 10 Min Rx Interval (ms) : 10
Actual Tx Interval (ms): 10 Actual Rx Interval (ms): 10
Local Detect Multi :3 Detect Interval (ms) : 30
Echo Passive : Disable Acl Number :-
Destination Port : 3784 TTL : 254
Proc Interface Status : Disable Process PST : Disable
WTR Interval (ms) :- Config PST : Disable
Active Multi :3
Last Local Diagnostic : No Diagnostic
Bind Application : No Application Bind
Session TX TmrID :- Session Detect TmrID : -
Session Init TmrID :- Session WTR TmrID :-
Session Echo Tx TmrID : -
Session Description : -
--------------------------------------------------------------------------------
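The Detect Interval shown above follows the standard BFD detection-time rule (RFC 5880): the peer's detect multiplier times the negotiated receive interval. A quick sketch, assuming both ends use the 10 ms timers and multiplier 3 displayed above:

```python
def bfd_detect_interval(local_min_rx, remote_min_tx, remote_detect_multi):
    """Detection time per RFC 5880: the peer's detect multiplier times
    the negotiated receive interval (the slower of what this end can
    receive and what the peer is willing to send)."""
    return remote_detect_multi * max(local_min_rx, remote_min_tx)

# Values from the 4to1 session above (10 ms timers, multiplier 3):
print(bfd_detect_interval(10, 10, 3))  # 30, matching Detect Interval (ms): 30
```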
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
bfd
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.1 255.255.255.0
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.2.0 0.0.0.255
#
bfd 1to4 bind ldp-lsp peer-ip 4.4.4.4 nexthop 10.1.1.2 interface GigabitEthernet1/0/0
discriminator local 1
discriminator remote 2
process-pst
#
return
● PE2 configuration file
#
sysname PE2
#
bfd
#
mpls lsr-id 4.4.4.4
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.5.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.4.1 255.255.255.0
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
ospf 1
area 0.0.0.0
network 10.1.5.0 0.0.0.255
network 10.1.4.0 0.0.0.255
network 4.4.4.4 0.0.0.0
#
bfd 4to1 bind peer-ip 1.1.1.1
discriminator local 2
discriminator remote 1
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.5.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.5.0 0.0.0.255
#
return
● P2 configuration file
#
sysname P2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.2.2 255.255.255.0
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.4.2 255.255.255.0
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.1.4.0 0.0.0.255
network 10.1.2.0 0.0.0.255
#
return
Networking Requirements
The proliferation of MPLS LDP applications drives the increasing demand for
network reliability. To meet the reliability requirement, BFD for LDP can be used.
BFD for LDP is a detection mechanism that can rapidly detect faults and trigger a
primary/backup LSP switchover. The BFD for LDP function and LDP FRR function
are used together on an MPLS LDP network.
On the network shown in Figure 1-52, PE1, P1, P2, and PE2 are in the same MPLS
domain. PE1 and PE2 establish primary and backup LDP LSPs. To monitor the LDP
LSPs, configure dynamic BFD.
Figure 1-52 Networking diagram for configuring dynamic BFD for LDP LSP
Configuration Notes
During the configuration, note the following:
● Each LSR must have route entries that exactly match FECs for the LSPs to be
established.
● By default, the triggering policy is host, allowing a device to use host IP
routes with 32-bit addresses to trigger LDP LSP establishment.
● If the triggering policy is all, a device is allowed to use all IGP routes to
trigger LDP LSP establishment. The device does not use public network BGP
routes to trigger LDP LSP establishment.
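The triggering policies described above can be modeled as a simple filter (an illustrative Python sketch, not device behavior; the route list is hypothetical):

```python
import ipaddress

def ldp_trigger(routes, policy="host"):
    """Sketch of LDP's LSP-triggering policy. 'host' (the default)
    uses only 32-bit host IGP routes; 'all' uses all IGP routes.
    Public-network BGP routes never trigger LSP establishment."""
    eligible = []
    for prefix, proto in routes:
        if proto == "BGP":  # BGP routes are always excluded
            continue
        if policy == "all" or ipaddress.ip_network(prefix).prefixlen == 32:
            eligible.append(prefix)
    return eligible

routes = [("4.4.4.4/32", "OSPF"), ("10.1.1.0/24", "OSPF"), ("8.8.8.0/24", "BGP")]
print(ldp_trigger(routes, "host"))  # ['4.4.4.4/32']
print(ldp_trigger(routes, "all"))   # ['4.4.4.4/32', '10.1.1.0/24']
```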
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface.
2. Configure OSPF.
3. Configure LDP LSPs.
4. Configure OSPF FRR and LDP Auto FRR on each node.
5. Enable BFD, specify a FEC list used to establish BFD sessions, and set BFD
parameters on PE1 and PE2.
Data Preparation
To complete the configuration, you need the following data:
● LSR ID and interface IP addresses of each node, as shown in Figure 1-52
● OSPF process ID (1) and area ID (0)
● Policy for triggering LSP establishment (all routes); FEC list names (l1 and l2)
● Minimum interval (100 ms) at which BFD packets are sent, minimum interval
(600 ms) at which BFD packets are received, and detection multiplier (4)
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address to each interface according to Figure 1-52 and create a
loopback interface on each node. For details, see the configuration files.
Step 2 Configure OSPF.
Configure OSPF on each node to implement network layer connectivity. For
details, see the configuration files.
Step 3 Configure LDP LSPs.
Configure MPLS LDP on each node and enable the nodes to use all IGP routes to
establish LDP LSPs. For details, see the configuration files.
Step 4 Configure OSPF FRR and LDP Auto FRR on each node.
# Configure PE1.
[~PE1] ospf 1
[~PE1-ospf-1] frr
[*PE1-ospf-1-frr] loop-free-alternate
[*PE1-ospf-1-frr] commit
[~PE1-ospf-1-frr] quit
[~PE1-ospf-1] quit
[~PE1] mpls ldp
[~PE1-mpls-ldp] auto-frr lsp-trigger all
[*PE1-mpls-ldp] commit
[~PE1-mpls-ldp] quit
# Configure PE2.
[~PE2] ospf 1
[~PE2-ospf-1] frr
[*PE2-ospf-1-frr] loop-free-alternate
[*PE2-ospf-1-frr] commit
[~PE2-ospf-1-frr] quit
[~PE2-ospf-1] quit
[~PE2] mpls ldp
[~PE2-mpls-ldp] auto-frr lsp-trigger all
[*PE2-mpls-ldp] commit
[~PE2-mpls-ldp] quit
Step 5 Enable BFD, specify a FEC list used to establish a BFD session, and set BFD
parameters.
# Enable BFD, specify a FEC list used to establish a BFD session, and set BFD
parameters on PE1.
[~PE1] bfd
[*PE1-bfd] mpls-passive
[*PE1-bfd] commit
[~PE1-bfd] quit
[~PE1] fec-list l1
[*PE1-fec-list-l1] fec-node 4.4.4.4
[*PE1-fec-list-l1] commit
[~PE1-fec-list-l1] quit
[~PE1] mpls
[~PE1-mpls] mpls bfd enable
[*PE1-mpls] mpls bfd-trigger fec-list l1
[*PE1-mpls] mpls bfd min-tx-interval 100 min-rx-interval 600 detect-multiplier 4
[*PE1-mpls] commit
[~PE1-mpls] quit
# Enable BFD, specify a FEC list used to establish a BFD session, and set BFD
parameters on PE2.
[~PE2] bfd
[*PE2-bfd] mpls-passive
[*PE2-bfd] commit
[~PE2-bfd] quit
[~PE2] fec-list l2
[*PE2-fec-list-l2] fec-node 1.1.1.1
[*PE2-fec-list-l2] commit
[~PE2-fec-list-l2] quit
[~PE2] mpls
[~PE2-mpls] mpls bfd enable
[*PE2-mpls] mpls bfd-trigger fec-list l2
[*PE2-mpls] mpls bfd min-tx-interval 100 min-rx-interval 600 detect-multiplier 4
[*PE2-mpls] commit
[~PE2-mpls] quit
# Run the display bfd session all verbose command to view the dynamic BFD
session status on PE1. The BFD session status is Up.
[~PE1] display bfd session all verbose
(w): State in WTR (*): State is invalid
--------------------------------------------------------------------------------
State : Up Name : dyn_16388
--------------------------------------------------------------------------------
Local Discriminator : 16388 Remote Discriminator : 16386
Session Detect Mode : Asynchronous Mode Without Echo Function
BFD Bind Type : LDP_LSP
Bind Session Type : Dynamic
Bind Peer IP Address : 4.4.4.4
NextHop Ip Address : 10.1.1.2
Bind Interface : GigabitEthernet1/0/0
Tunnel ID :-
FSM Board Id :3 TOS-EXP :7
Min Tx Interval (ms) : 600 Min Rx Interval (ms) : 100
Actual Tx Interval (ms): 600 Actual Rx Interval (ms): 100
Local Detect Multi :4 Detect Interval (ms) : 300
Echo Passive : Disable Acl Number :-
Destination Port : 3784 TTL :1
Proc Interface Status : Disable Process PST : Enable
WTR Interval (ms) :- Config PST : Enable
Active Multi :3
Last Local Diagnostic : No Diagnostic
Bind Application : LDP
Session TX TmrID :- Session Detect TmrID : -
Session Init TmrID :- Session WTR TmrID :-
Session Echo Tx TmrID : -
Session Description : -
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
(Multi Hop) State : Up Name : dyn_16390
--------------------------------------------------------------------------------
Local Discriminator : 16390 Remote Discriminator : 16387
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
bfd
mpls-passive
#
mpls lsr-id 1.1.1.1
#
mpls
lsp-trigger all
mpls bfd enable
mpls bfd-trigger fec-list l1
mpls bfd min-tx-interval 100 min-rx-interval 600 detect-multiplier 4
#
fec-list l1
fec-node 4.4.4.4
#
mpls ldp
#
ipv4-family
auto-frr lsp-trigger all
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.1 255.255.255.0
ospf cost 2
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 1
frr
loop-free-alternate
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.2.0 0.0.0.255
#
return
● PE2 configuration file
#
sysname PE2
#
bfd
mpls-passive
#
mpls lsr-id 4.4.4.4
#
mpls
lsp-trigger all
mpls bfd enable
mpls bfd-trigger fec-list l2
mpls bfd min-tx-interval 100 min-rx-interval 600 detect-multiplier 4
#
fec-list l2
fec-node 1.1.1.1
#
mpls ldp
#
ipv4-family
auto-frr lsp-trigger all
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.5.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.4.1 255.255.255.0
ospf cost 2
mpls
mpls ldp
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
ospf 1
frr
loop-free-alternate
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.1.4.0 0.0.0.255
network 10.1.5.0 0.0.0.255
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.2
#
mpls
lsp-trigger all
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.5.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.5.0 0.0.0.255
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 3.3.3.3
#
mpls
lsp-trigger all
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.2.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.4.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.1.2.0 0.0.0.255
network 10.1.4.0 0.0.0.255
#
return
Networking Requirements
When LDP LSPs carry application traffic such as VPN traffic, LDP FRR is used
together with an LDP upper-layer protection mechanism, such as VPN FRR or
VPN equal-cost multipath (ECMP), to improve network reliability. BFD for LDP
LSP detects only primary LSP faults and switches traffic to an FRR LSP. If the
primary and FRR LSPs fail simultaneously, the BFD mechanism does not take
effect. In this situation, LDP can instruct its upper-layer application to perform a
protection switchover only after LDP itself detects the FRR LSP failure. As a
result, a large number of packets are dropped.
To prevent the packet loss that occurs when BFD for LDP LSP cannot detect
faults of both the primary and backup LSPs, configure dynamic BFD for LDP
tunnel to create a dynamic BFD session that monitors both the primary LSP and
the FRR LSP. When both LSPs fail, BFD quickly detects the faults and triggers the
upper-layer LDP application to perform a protection switchover, reducing traffic
loss.
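The difference between the two BFD modes can be summarized in a small sketch (illustrative logic only): BFD for LDP LSP tracks just the primary LSP, whereas BFD for LDP tunnel stays up as long as either the primary or the FRR LSP is available, so it reports a fault exactly when the upper-layer application must switch over.

```python
def ldp_lsp_bfd_up(primary_up: bool, frr_up: bool) -> bool:
    """BFD for LDP LSP monitors only the primary LSP."""
    return primary_up

def ldp_tunnel_bfd_up(primary_up: bool, frr_up: bool) -> bool:
    """BFD for LDP tunnel monitors the primary and FRR LSPs together."""
    return primary_up or frr_up

# Primary LSP fails but the FRR LSP still works: the tunnel-mode session
# stays up, so traffic keeps flowing over the LDP tunnel via the FRR LSP.
print(ldp_tunnel_bfd_up(False, True))   # True
# Both LSPs fail: only now does the tunnel-mode session go down,
# triggering the upper-layer protection switchover with minimal delay.
print(ldp_tunnel_bfd_up(False, False))  # False
```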
On the network shown in Figure 1-53, an LDP LSP originates from LSRA and is
destined for LSRD. LDP Auto FRR is configured to protect LSP traffic. LSRA
establishes the primary LSP over the path LSRA -> LSRC -> LSRD and the FRR LSP
over the path LSRA -> LSRB -> LSRC -> LSRD. Dynamic BFD for LDP tunnel can be
configured to dynamically create a BFD session to monitor both the primary and
FRR LSPs.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface.
2. Configure basic IS-IS functions.
3. Configure LDP LSPs.
4. Configure LDP Auto FRR.
5. Enable BFD globally.
6. Enable the function to dynamically create BFD sessions in the MPLS scenario.
7. Configure a policy for triggering dynamic BFD for LDP tunnel.
Data Preparation
To complete the configuration, you need the following data:
● LSR ID and interface IP addresses of each node, as shown in Figure 1-53
● FEC list name (list1)
Procedure
Step 1 Assign an IP address to each interface. For configuration details, see
Configuration Files in this section.
Step 2 Configure basic IS-IS functions. For configuration details, see Configuration Files
in this section.
Step 3 Configure LDP LSPs.
# Configure LSRA.
<LSRA> system-view
[~LSRA] mpls lsr-id 1.1.1.9
[*LSRA] mpls
[*LSRA-mpls] quit
[*LSRA] mpls ldp
[*LSRA-mpls-ldp] quit
[*LSRA] interface GigabitEthernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls ldp
[*LSRA-GigabitEthernet1/0/0] quit
[*LSRA] interface GigabitEthernet 1/0/1
[*LSRA-GigabitEthernet1/0/1] mpls
[*LSRA-GigabitEthernet1/0/1] mpls ldp
[*LSRA-GigabitEthernet1/0/1] quit
# Repeat this step for LSRB, LSRC, and LSRD. For configuration details, see
Configuration Files in this section.
Step 4 Configure LDP Auto FRR.
# Enable IS-IS Auto FRR on LSRA.
[~LSRA] isis 1
[*LSRA-isis-1] frr
[*LSRA-isis-1-frr] loop-free-alternate level-2
[*LSRA-isis-1-frr] quit
[*LSRA-isis-1] commit
[~LSRA-isis-1] quit
After IS-IS Auto FRR is enabled, LDP Auto FRR automatically takes effect. Then,
run the display mpls lsp command on LSRA to view information about the
primary and FRR LSPs.
[~LSRA] display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.9/32 3/NULL -/-
2.2.2.9/32 NULL/3 -/GE1/0/1
2.2.2.9/32 32833/3 -/GE1/0/1
**LDP FRR** NULL/32835 -/GE1/0/0
**LDP FRR** 32833/32835 -/GE1/0/0
3.3.3.9/32 NULL/3 -/GE1/0/1
3.3.3.9/32 32837/3 -/GE1/0/1
**LDP FRR** NULL/32837 -/GE1/0/0
**LDP FRR** 32837/32837 -/GE1/0/0
4.4.4.9/32 NULL/32832 -/GE1/0/1
4.4.4.9/32 32836/32832 -/GE1/0/1
**LDP FRR** NULL/32836 -/GE1/0/0
**LDP FRR** 32836/32836 -/GE1/0/0
10.1.3.0/24 32834/3 -/GE1/0/0
10.1.3.0/24 32834/3 -/GE1/0/1
10.1.4.0/24 32835/3 -/GE1/0/1
Step 5 Enable BFD globally.
# Configure LSRA.
[~LSRA] bfd
[*LSRA-bfd] commit
[~LSRA-bfd] quit
# Configure LSRD.
[~LSRD] bfd
[*LSRD-bfd] commit
[~LSRD-bfd] quit
Step 6 Enable the function to dynamically create BFD sessions in the MPLS scenario.
# Configure LSRA.
[~LSRA] mpls
[~LSRA-mpls] mpls bfd enable
[*LSRA-mpls] commit
[~LSRA-mpls] quit
# Configure LSRD.
[~LSRD] bfd
[~LSRD-bfd] mpls-passive
[*LSRD-bfd] commit
[~LSRD-bfd] quit
Step 7 Configure a policy for triggering dynamic BFD for LDP tunnel.
# On LSRA, create a FEC list and add a node with IP address 4.4.4.9 to the list so
that the FEC list is used to establish a BFD session only to monitor the LDP tunnel
from LSRA to LSRD.
[~LSRA] fec-list list1
[*LSRA-fec-list-list1] fec-node 4.4.4.9
[*LSRA-fec-list-list1] commit
[~LSRA-fec-list-list1] quit
# Specify the FEC list on LSRA so that LSRA uses it to establish a BFD session.
[~LSRA] mpls
[~LSRA-mpls] mpls bfd-trigger-tunnel fec-list list1
[*LSRA-mpls] commit
[~LSRA-mpls] quit
# Run the display mpls bfd session protocol ldp bfd-type ldp-tunnel verbose
command on LSRA. The command output shows that a dynamic BFD session is
Up.
[~LSRA] display mpls bfd session protocol ldp bfd-type ldp-tunnel verbose
--------------------------------------------------------------------------------
BFD Information: LDP Tunnel
--------------------------------------------------------------------------------
No :1
LspIndex :0
Protocol : LDP
Fec : 4.4.4.9
Bfd-Discriminator : 16389
ActTx : 10
ActRx : 10
ActMulti :3
Bfd-State : Up
Time : 800 sec
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
bfd
#
mpls lsr-id 1.1.1.9
#
mpls
mpls bfd enable
mpls bfd-trigger-tunnel fec-list list1
#
fec-list list1
fec-node 4.4.4.9
#
mpls ldp
#
ipv4-family
#
isis 1
is-level level-2
network-entity 10.0000.0000.0001.00
frr
loop-free-alternate level-1
loop-free-alternate level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.9
#
mpls
lsp-trigger all
#
mpls ldp
#
ipv4-family
#
isis 1
is-level level-2
network-entity 10.0000.0000.0002.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.3.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.9
#
mpls
lsp-trigger all
#
mpls ldp
#
ipv4-family
#
isis 1
is-level level-2
network-entity 10.0000.0000.0003.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.4.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.1.3.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
return
● LSRD configuration file
#
sysname LSRD
#
bfd
mpls-passive
#
mpls lsr-id 4.4.4.9
#
mpls
lsp-trigger all
#
mpls ldp
#
ipv4-family
#
isis 1
is-level level-2
network-entity 10.0000.0000.0004.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.4.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
return
Networking Requirements
Modern network services, such as VoIP, online games, and online video services,
have higher requirements on real-time performance. Many services are based on
VPNs, and VPN services usually use LDP tunnels. Data loss due to link faults
adversely affects these services.
To minimize the adverse impact, LDP manual FRR can be configured. If a fault
occurs on the public network, LDP manual FRR switches VPN services to a
backup LSP before the routes of the primary LSP re-converge and the primary
LSP is reestablished. Traffic loss during fault detection and the switchover to the
backup LSP lasts less than 50 ms. After route re-convergence is complete,
however, the time a VPN service takes to switch to the new primary LSP depends
on the VPN implementation. To keep the VPN service interruption time within 50
ms, the VPN service must be switched to the new primary LSP more quickly. LDP
Auto FRR addresses this need.
On the network shown in Figure 1-54, primary and backup LSPs are established
from LSRA to LSRC. The LSP over the path LSRA -> LSRC is the primary one, and
the LSP over the path LSRA -> LSRB -> LSRC is the backup one. To allow traffic to
rapidly switch to the backup LSP if the primary LSP fails, configure LDP Auto FRR
on LSRA to enable LSRA to automatically establish a backup LSP. Traffic can then
be rapidly switched to the backup LSP if a fault occurs in the primary LSP,
minimizing traffic loss.
Figure 1-54 Networking diagram for configuring LDP Auto FRR
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign IP addresses to interfaces on each node and configure the loopback
addresses to be used as LSR IDs.
2. Configure IS-IS to advertise the route to each network segment to which each
interface is connected and to advertise the host route to each LSR ID.
3. Enable MPLS and MPLS LDP globally and on interfaces on each node.
4. Enable IS-IS Auto FRR on the ingress LSR to protect traffic.
5. Configure a policy for triggering LDP LSP establishment based on all routes.
6. Configure a policy for triggering backup LDP LSP establishment on the
ingress LSR.
Data Preparation
To complete the configuration, you need the following data:
● IP address of every interface on every node shown in Figure 1-54, IS-IS
process ID, and level of each router
● Policy for triggering backup LDP LSP establishment
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address and a mask to each interface (including loopback interfaces)
according to Figure 1-54.
Step 2 Configure IS-IS to advertise the route to each network segment to which each
interface is connected and to advertise the host route to each LSR ID.
# Configure LSRA.
<LSRA> system-view
[~LSRA] isis 1
[*LSRA-isis-1] network-entity 10.0000.0000.0001.00
[*LSRA-isis-1] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] isis enable 1
[*LSRA-GigabitEthernet1/0/0] quit
[*LSRA] interface gigabitethernet 1/0/1
[*LSRA-GigabitEthernet1/0/1] isis enable 1
[*LSRA-GigabitEthernet1/0/1] quit
[*LSRA] interface loopback 0
[*LSRA-LoopBack0] isis enable 1
[*LSRA-LoopBack0] quit
[*LSRA] commit
# Configure LSRB.
<LSRB> system-view
[~LSRB] isis 1
[*LSRB-isis-1] network-entity 10.0000.0000.0002.00
[*LSRB-isis-1] quit
[*LSRB] interface gigabitethernet 1/0/0
[*LSRB-GigabitEthernet1/0/0] isis enable 1
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 1/0/1
[*LSRB-GigabitEthernet1/0/1] isis enable 1
[*LSRB-GigabitEthernet1/0/1] quit
[*LSRB] interface loopback 0
[*LSRB-LoopBack0] isis enable 1
[*LSRB-LoopBack0] quit
[*LSRB] commit
# Configure LSRC.
<LSRC> system-view
[~LSRC] isis 1
[*LSRC-isis-1] network-entity 10.0000.0000.0003.00
[*LSRC-isis-1] quit
[*LSRC] interface gigabitethernet 1/0/0
[*LSRC-GigabitEthernet1/0/0] isis enable 1
[*LSRC-GigabitEthernet1/0/0] quit
[*LSRC] interface gigabitethernet 1/0/1
[*LSRC-GigabitEthernet1/0/1] isis enable 1
[*LSRC-GigabitEthernet1/0/1] quit
[*LSRC] interface gigabitethernet 1/0/2
[*LSRC-GigabitEthernet1/0/2] isis enable 1
[*LSRC-GigabitEthernet1/0/2] quit
[*LSRC] interface loopback 0
[*LSRC-LoopBack0] isis enable 1
[*LSRC-LoopBack0] quit
[*LSRC] commit
# Configure LSRD.
<LSRD> system-view
[~LSRD] isis 1
[*LSRD-isis-1] network-entity 10.0000.0000.0004.00
[*LSRD-isis-1] quit
[*LSRD] interface gigabitethernet 1/0/0
[*LSRD-GigabitEthernet1/0/0] isis enable 1
[*LSRD-GigabitEthernet1/0/0] quit
[*LSRD] interface loopBack 0
[*LSRD-LoopBack0] isis enable 1
[*LSRD-LoopBack0] quit
[*LSRD] commit
Step 3 Configure MPLS and MPLS LDP globally and on interfaces on each node so that
the network can forward MPLS traffic. Then, check information about established
LSPs.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.9
[*LSRA] mpls
[*LSRA-mpls] quit
[*LSRA] mpls ldp
[*LSRA-mpls-ldp] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls ldp
[*LSRA-GigabitEthernet1/0/0] quit
[*LSRA] interface gigabitethernet 1/0/1
[*LSRA-GigabitEthernet1/0/1] mpls
[*LSRA-GigabitEthernet1/0/1] mpls ldp
[*LSRA-GigabitEthernet1/0/1] quit
[*LSRA] commit
# Configure LSRB.
[~LSRB] mpls lsr-id 2.2.2.9
[*LSRB] mpls
[*LSRB-mpls] quit
[*LSRB] mpls ldp
[*LSRB-mpls-ldp] quit
[*LSRB] interface gigabitethernet 1/0/0
[*LSRB-GigabitEthernet1/0/0] mpls
[*LSRB-GigabitEthernet1/0/0] mpls ldp
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 1/0/1
[*LSRB-GigabitEthernet1/0/1] mpls
[*LSRB-GigabitEthernet1/0/1] mpls ldp
[*LSRB-GigabitEthernet1/0/1] quit
[*LSRB] commit
# Configure LSRC.
[~LSRC] mpls lsr-id 3.3.3.9
[*LSRC] mpls
[*LSRC-mpls] quit
[*LSRC] mpls ldp
[*LSRC-mpls-ldp] quit
[*LSRC] interface gigabitethernet 1/0/0
[*LSRC-GigabitEthernet1/0/0] mpls
[*LSRC-GigabitEthernet1/0/0] mpls ldp
[*LSRC-GigabitEthernet1/0/0] quit
[*LSRC] interface gigabitethernet 1/0/1
[*LSRC-GigabitEthernet1/0/1] mpls
[*LSRC-GigabitEthernet1/0/1] mpls ldp
[*LSRC-GigabitEthernet1/0/1] quit
[*LSRC] interface gigabitethernet 1/0/2
[*LSRC-GigabitEthernet1/0/2] mpls
[*LSRC-GigabitEthernet1/0/2] mpls ldp
[*LSRC-GigabitEthernet1/0/2] quit
[*LSRC] commit
# Configure LSRD.
[~LSRD] mpls lsr-id 4.4.4.9
[*LSRD] mpls
[*LSRD-mpls] quit
[*LSRD] mpls ldp
[*LSRD-mpls-ldp] quit
[*LSRD] interface gigabitethernet 1/0/0
[*LSRD-GigabitEthernet1/0/0] mpls
[*LSRD-GigabitEthernet1/0/0] mpls ldp
[*LSRD-GigabitEthernet1/0/0] quit
[*LSRD] commit
# After completing the configuration, run the display mpls lsp command on LSRA
to check information about the established LSP.
[~LSRA] display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.9/32 3/NULL -/-
2.2.2.9/32 NULL/3 -/GE1/0/0
2.2.2.9/32 1024/3 -/GE1/0/0
3.3.3.9/32 NULL/3 -/GE1/0/1
3.3.3.9/32 1025/3 -/GE1/0/1
4.4.4.9/32 NULL/1026 -/GE1/0/1
4.4.4.9/32 1026/1026 -/GE1/0/1
The command output shows that host routes with 32-bit masks are used to
trigger LDP LSP establishment. This is the default triggering policy.
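The default policy admits only host routes with 32-bit masks; the lsp-trigger all command used later in this example widens the filter to every route. The filtering logic can be sketched as follows (illustrative Python, not device software; route data and the policy names are assumptions for the sketch):

```python
# Sketch of LDP's LSP-triggering policy: by default only host routes
# (32-bit masks) trigger LSP establishment; "all" admits every route.

def triggers_lsp(prefix_len: int, policy: str) -> bool:
    """Return True if a route with this mask length triggers an LDP LSP."""
    if policy == "host":      # default policy: /32 host routes only
        return prefix_len == 32
    if policy == "all":       # lsp-trigger all: every route qualifies
        return True
    return False

routes = [("1.1.1.9", 32), ("10.1.3.0", 24)]

# Default policy: only the host route becomes an LDP FEC.
host_fecs = [p for p, l in routes if triggers_lsp(l, "host")]

# After "lsp-trigger all": both routes become FECs.
all_fecs = [p for p, l in routes if triggers_lsp(l, "all")]
```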
Step 4 Enable IS-IS Auto FRR on LSRA and check routing information and backup LSP
information.
# Enable IS-IS Auto FRR on LSRA.
[~LSRA] isis
[~LSRA-isis-1] frr
[*LSRA-isis-1-frr] loop-free-alternate
[*LSRA-isis-1-frr] quit
[*LSRA-isis-1] quit
[*LSRA] commit
# Check routing information about the direct links between LSRA and LSRC and
between LSRC and LSRD.
[~LSRA] display ip routing-table 10.1.4.0 verbose
Destination: 10.1.4.0/24
Protocol: ISIS Process ID: 1
Preference: 15 Cost: 20
NextHop: 10.1.2.2 Neighbour: 0.0.0.0
State: Active Adv Age: 00h05m38s
Tag: 0 Priority: low
Label: NULL QoSInfo: 0x0
IndirectID: 0x0
RelayNextHop: 0.0.0.0 Interface: GigabitEthernet1/0/1
TunnelID: 0x0 Flags: D
BkNextHop: 10.1.1.2 BkInterface: GigabitEthernet1/0/0
BkLabel: NULL SecTunnelID: 0x0
BkPETunnelID: 0x0 BkPESecTunnelID: 0x0
BkIndirectID: 0x0
The command output shows that a backup IS-IS route is generated after IS-IS
Auto FRR is enabled.
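IS-IS Auto FRR computes the backup next hop using the loop-free alternate (LFA) condition of RFC 5286: a neighbor N of node S can protect traffic to destination D only if Distance(N, D) < Distance(N, S) + Distance(S, D), which guarantees that traffic handed to N does not loop back through S. A minimal sketch (illustrative Python, not device software; costs assume the default IS-IS metric of 10 per hop on this topology):

```python
# Loop-free alternate (LFA) check per RFC 5286: traffic handed to
# neighbor N must reach destination D without looping back through
# the protecting node S.

def is_lfa(dist_n_d: int, dist_n_s: int, dist_s_d: int) -> bool:
    """True if N is a loop-free alternate for S's route to D."""
    return dist_n_d < dist_n_s + dist_s_d

# LSRA (S) reaches 10.1.4.0/24 (D) at cost 20 via LSRC. Neighbor LSRB (N)
# reaches D at cost 20 and reaches S at cost 10, so LSRB qualifies as
# the LFA backup next hop: 20 < 10 + 20.
backup_ok = is_lfa(dist_n_d=20, dist_n_s=10, dist_s_d=20)

# If N's only path to D ran back through S (cost 30 = 10 + 20), the
# condition would fail and no backup route would be installed.
looping = is_lfa(dist_n_d=30, dist_n_s=10, dist_s_d=20)
```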
# Run the display mpls lsp command on LSRA to check LSP information.
[~LSRA] display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.9/32 3/NULL -/-
2.2.2.9/32 NULL/3 -/GE1/0/0
2.2.2.9/32 23/3 -/GE1/0/0
**LDP FRR** NULL/17 -/GE1/0/1
**LDP FRR** 23/17 -/GE1/0/1
3.3.3.9/32 NULL/18 -/GE1/0/1
3.3.3.9/32 24/18 -/GE1/0/1
**LDP FRR** NULL/18 -/GE1/0/0
**LDP FRR** 24/18 -/GE1/0/0
4.4.4.9/32 NULL/3 -/GE1/0/1
4.4.4.9/32 25/3 -/GE1/0/1
**LDP FRR** NULL/19 -/GE1/0/0
**LDP FRR** 25/19 -/GE1/0/0
The command output shows that backup routes with 32-bit masks are used to
trigger backup LDP LSP establishment. This is the default triggering policy.
Step 5 Configure a policy to allow all routes to be used to trigger LDP LSP establishment
and check LSP information.
# Run the lsp-trigger command on LSRA to allow all routes to be used to trigger
LDP LSP establishment and check LSP information.
[~LSRA] mpls
[~LSRA-mpls] lsp-trigger all
[*LSRA-mpls] quit
[*LSRA] commit
# Run the lsp-trigger command on LSRB to allow all routes to be used to trigger
LDP LSP establishment and check LSP information.
[~LSRB] mpls
[~LSRB-mpls] lsp-trigger all
[*LSRB-mpls] quit
[*LSRB] commit
# Run the lsp-trigger command on LSRC to allow all routes to be used to trigger
LDP LSP establishment and check LSP information.
[~LSRC] mpls
[~LSRC-mpls] lsp-trigger all
[*LSRC-mpls] quit
[*LSRC] commit
# Run the lsp-trigger command on LSRD to allow all routes to be used to trigger
LDP LSP establishment and check LSP information.
[~LSRD] mpls
[~LSRD-mpls] lsp-trigger all
[*LSRD-mpls] quit
[*LSRD] commit
# Run the display mpls lsp command on LSRA to check LSP information.
[~LSRA] display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.9/32 3/NULL -/-
2.2.2.9/32 NULL/3 -/GE1/0/0
2.2.2.9/32 23/3 -/GE1/0/0
**LDP FRR** NULL/17 -/GE1/0/1
**LDP FRR** 23/17 -/GE1/0/1
3.3.3.9/32 NULL/18 -/GE1/0/1
3.3.3.9/32 24/18 -/GE1/0/1
**LDP FRR** NULL/18 -/GE1/0/0
**LDP FRR** 24/18 -/GE1/0/0
4.4.4.9/32 NULL/3 -/GE1/0/1
4.4.4.9/32 25/3 -/GE1/0/1
**LDP FRR** NULL/19 -/GE1/0/0
**LDP FRR** 25/19 -/GE1/0/0
10.1.1.0/24 3/NULL -/-
10.1.2.0/24 3/NULL -/-
10.1.3.0/24 NULL/3 -/GE1/0/0
10.1.3.0/24 28/3 -/GE1/0/0
10.1.3.0/24 NULL/3 -/GE1/0/1
10.1.3.0/24 28/3 -/GE1/0/1
10.1.4.0/24 NULL/3 -/GE1/0/1
10.1.4.0/24 29/3 -/GE1/0/1
The command output shows that routes to addresses with 24-bit masks are used
to trigger LSP establishment.
Step 6 Configure a policy for triggering backup LDP LSP establishment based on all routes.
# Run the auto-frr lsp-trigger command on LSRA to allow LDP to use all backup
routes to establish backup LSPs.
[~LSRA] mpls ldp
[~LSRA-mpls-ldp] auto-frr lsp-trigger all
[*LSRA-mpls-ldp] quit
[*LSRA] commit
# Run the display mpls lsp command on LSRA to check LSP information.
[~LSRA] display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.9/32 3/NULL -/-
2.2.2.9/32 NULL/3 -/GE1/0/0
2.2.2.9/32 23/3 -/GE1/0/0
**LDP FRR** NULL/17 -/GE1/0/1
**LDP FRR** 23/17 -/GE1/0/1
3.3.3.9/32 NULL/18 -/GE1/0/1
3.3.3.9/32 24/18 -/GE1/0/1
**LDP FRR** NULL/18 -/GE1/0/0
**LDP FRR** 24/18 -/GE1/0/0
4.4.4.9/32 NULL/3 -/GE1/0/1
4.4.4.9/32 25/3 -/GE1/0/1
**LDP FRR** NULL/19 -/GE1/0/0
**LDP FRR** 25/19 -/GE1/0/0
10.1.1.0/24 3/NULL -/-
10.1.2.0/24 3/NULL -/-
10.1.3.0/24 NULL/3 -/GE1/0/0
10.1.3.0/24 28/3 -/GE1/0/0
10.1.3.0/24 NULL/3 -/GE1/0/1
10.1.3.0/24 28/3 -/GE1/0/1
10.1.4.0/24 NULL/3 -/GE1/0/1
10.1.4.0/24 29/3 -/GE1/0/1
**LDP FRR** NULL/26 -/GE1/0/0
**LDP FRR** 29/26 -/GE1/0/0
The command output shows that a backup LSP has now also been established for
the primary LSP along the path LSRA -> LSRC -> LSRD (FEC 10.1.4.0/24).
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.9
#
mpls
lsp-trigger all
#
mpls ldp
#
ipv4-family
auto-frr lsp-trigger all
#
isis 1
frr
loop-free-alternate level-1
loop-free-alternate level-2
network-entity 10.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.9
#
mpls
lsp-trigger all
#
mpls ldp
#
ipv4-family
#
isis 1
network-entity 10.0000.0000.0002.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.3.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.9
#
mpls
lsp-trigger all
#
mpls ldp
#
ipv4-family
#
isis 1
network-entity 10.0000.0000.0003.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.4.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.1.3.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
return
Networking Requirements
On the network shown in Figure 1-55, PE1 and PE2 are directly connected, and a
redundant link is deployed between them. The customer requires that the LDP
session and peer relationship between PE1 and PE2 remain up if the direct link
between the PEs fails. To meet this requirement, you can configure LDP session
protection.
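Session protection works by maintaining a targeted hello adjacency with the peer, so that when the direct-link adjacency goes down, the session is held for the protection duration (infinite in this example) instead of being torn down and re-established. A simplified sketch of this behavior (illustrative Python; the states and names are not device internals):

```python
# Sketch of LDP session protection: when the direct link fails, a
# targeted hello adjacency keeps the session alive for the protection
# duration instead of tearing it down. "infinite" holds it indefinitely.

class LdpSession:
    def __init__(self, protection_duration=None):
        self.protection = protection_duration  # None = no protection
        self.state = "Operational"

    def link_adjacency_down(self):
        if self.protection is None:
            self.state = "Down"          # session torn down with the link
        else:
            self.state = "Protected"     # held via the targeted adjacency

    def link_adjacency_up(self):
        if self.state == "Protected":
            self.state = "Operational"   # no re-establishment needed

s = LdpSession(protection_duration="infinite")
s.link_adjacency_down()   # direct link fails: session is protected
s.link_adjacency_up()     # link recovers: session immediately usable
```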
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure an IGP.
2. Configure a local LDP session.
3. Configure LDP session protection.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address to each interface and configure an IGP. For configuration
details, see Configuration Files in this section.
Step 2 Configure a local LDP session.
# Configure PE1.
<PE1> system-view
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 1/0/1
[*PE1-GigabitEthernet1/0/1] mpls
[*PE1-GigabitEthernet1/0/1] mpls ldp
[*PE1-GigabitEthernet1/0/1] quit
[*PE1] commit
# Repeat this step for PE2. For configuration details, see Configuration Files in
this section.
Step 3 Configure LDP session protection.
# Configure PE1.
[~PE1] mpls ldp
[*PE1-mpls-ldp] session protection duration infinite
[*PE1-mpls-ldp] commit
[~PE1-mpls-ldp] quit
# Configure PE2.
[~PE2] mpls ldp
[*PE2-mpls-ldp] session protection duration infinite
[*PE2-mpls-ldp] commit
[~PE2-mpls-ldp] quit
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
session protection duration infinite
#
ipv4-family
#
isis 1
is-level level-2
network-entity 10.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.252
isis enable 1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.1 255.255.255.252
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
return
● PE3 configuration file
#
sysname PE3
#
isis 1
is-level level-2
network-entity 10.0000.0000.0003.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.252
isis enable 1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.3.1 255.255.255.252
isis enable 1
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls ldp
session protection duration infinite
#
ipv4-family
#
isis 1
is-level level-2
network-entity 10.0000.0000.0002.00
#
interface GigabitEthernet1/0/0
undo shutdown
Networking Requirements
On the network shown in Figure 1-56, three paths are established between PE1
and PE2. The path PE1 -> P1 -> P2 -> PE2 functions as the primary path. P3
functions as the backup device because P4 carries important services. Therefore,
the path PE1 -> P1 -> P3 -> PE2 functions as the backup path.
Enable LDP-IGP synchronization on the interfaces on both ends of the link
between P1 (intersection node of the active and backup links) and P2 (LDP
neighbor node on the active link). On a network with both a primary LSP and a
backup LSP, after the primary LSP recovers, LDP-IGP synchronization minimizes the
traffic interruption period to milliseconds.
Setting a delay time for deleting upstream labels prevents traffic interruptions if
LDP traffic is switched to a backup path.
P1 Loopback1 1.1.1.9/32
GigabitEthernet1/0/0 10.1.1.1/30
GigabitEthernet1/0/1 10.5.1.1/30
GigabitEthernet2/0/0 10.3.1.1/30
P2 Loopback1 2.2.2.9/32
GigabitEthernet1/0/0 10.1.1.2/30
GigabitEthernet2/0/0 10.2.1.1/30
P3 Loopback1 3.3.3.9/32
GigabitEthernet1/0/0 10.3.1.2/30
GigabitEthernet2/0/0 10.4.1.1/30
P4 Loopback1 4.4.4.9/32
GigabitEthernet1/0/0 10.5.1.2/30
GigabitEthernet2/0/0 10.6.1.1/30
PE2 Loopback1 5.5.5.9/32
GigabitEthernet1/0/0 10.2.1.2/30
GigabitEthernet1/0/1 10.6.1.2/30
GigabitEthernet2/0/0 10.4.1.2/30
Configuration Notes
During the configuration, note the following:
To prevent repeated failures in LDP session reestablishment, you can set the Hold-
max-cost timer to adjust the interval at which OSPF sends LSAs to advertise the
maximum cost on the local device. Ensure that traffic is transmitted along the
backup link before the LDP session is reestablished on the active link.
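The Hold-max-cost behavior described above can be sketched as follows: after the active link recovers, OSPF keeps advertising the maximum link cost until the LDP session is reestablished (or the timer expires), so traffic stays on the backup link until labels are ready (illustrative Python, not device software; the 9-second value matches the timer set later in this example):

```python
# Sketch of OSPF-LDP synchronization on a recovered link: OSPF keeps
# advertising the maximum cost until the LDP session is back up (or the
# Hold-max-cost timer expires), keeping traffic on the backup path
# until label bindings are ready.

MAX_COST = 65535

def advertised_cost(normal_cost, ldp_session_up, elapsed, hold_max_cost):
    """Cost OSPF advertises for the link while synchronizing with LDP."""
    if ldp_session_up or elapsed >= hold_max_cost:
        return normal_cost        # sync achieved (or timer expired)
    return MAX_COST               # keep traffic on the backup path

# Link just came up, LDP not yet reconverged: advertise the maximum cost.
early = advertised_cost(10, ldp_session_up=False, elapsed=2, hold_max_cost=9)

# LDP session re-established: revert to the normal cost.
synced = advertised_cost(10, ldp_session_up=True, elapsed=4, hold_max_cost=9)
```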
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure basic OSPF functions on P1, P2, P3, and P4 to allow them to
interwork.
2. Establish LDP sessions between neighboring nodes and between P1 and PE2.
3. Set the priorities of equal-cost routes on P1 to ensure that the link PE1 ->
P1 -> P2 -> PE2 functions as the primary link.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of the interfaces on each node (as shown in Figure 1-56), OSPF
process ID, and OSPF area ID
● Priorities of equal-cost routes on P1
● Values of the Hold-max-cost and igp-sync-delay timers
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address to each interface (as shown in Figure 1-56), including the
loopback interfaces. Configure OSPF to advertise the route to the network
segment to which each interface is connected and the host route to each LSR ID.
For details, see Example for Configuring Basic OSPF Functions.
After completing the configuration, run the display ip routing-table command on
each node. The command outputs show that the nodes have learned routes from
each other. P1 has three equal-cost routes to 5.5.5.9/32. The following example
uses the command output on P1.
[~P1] display ip routing-table
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : _public_
Destinations : 20 Routes : 20
Step 2 Set the priorities of equal-cost routes on P1 to ensure that the link
PE1 -> P1 -> P2 -> PE2 functions as the primary link.
[~P1] ospf
[~P1-ospf-1] nexthop 10.1.1.2 weight 1
[*P1-ospf-1] nexthop 10.3.1.2 weight 2
[*P1-ospf-1] nexthop 10.5.1.2 weight 2
[*P1-ospf-1] commit
[~P1-ospf-1] quit
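Among the three equal-cost next hops, the one configured with the smallest weight is preferred, which is what makes PE1 -> P1 -> P2 -> PE2 the primary link. The selection can be sketched as follows (illustrative Python, not device software):

```python
# Sketch of OSPF next-hop weighting: among equal-cost next hops, the
# next hop with the smallest configured weight is preferred.

def select_primary(next_hops):
    """next_hops: list of (address, weight); smallest weight wins."""
    return min(next_hops, key=lambda nh: nh[1])[0]

# The weights configured on P1 in Step 2 above.
primary = select_primary([("10.1.1.2", 1),    # toward P2: primary
                          ("10.3.1.2", 2),    # toward P3: backup
                          ("10.5.1.2", 2)])   # toward P4: backup
```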
Step 3 Enable MPLS and MPLS LDP globally on each node and on each interface.
# Configure P1.
[~P1] mpls lsr-id 1.1.1.9
[*P1] mpls
[*P1-mpls] quit
[*P1] mpls ldp
[*P1-mpls-ldp] quit
[*P1] interface gigabitethernet 1/0/0
[*P1-GigabitEthernet1/0/0] mpls
[*P1-GigabitEthernet1/0/0] mpls ldp
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet 2/0/0
[*P1-GigabitEthernet2/0/0] mpls
[*P1-GigabitEthernet2/0/0] mpls ldp
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
# Configure P2.
[~P2] mpls lsr-id 2.2.2.9
[*P2] mpls
[*P2-mpls] quit
[*P2] mpls ldp
[*P2-mpls-ldp] quit
[*P2] interface gigabitethernet 1/0/0
[*P2-GigabitEthernet1/0/0] mpls
[*P2-GigabitEthernet1/0/0] mpls ldp
[*P2-GigabitEthernet1/0/0] commit
[~P2-GigabitEthernet1/0/0] quit
# Configure P3.
[~P3] mpls lsr-id 3.3.3.9
[*P3] mpls
[*P3-mpls] quit
[*P3] mpls ldp
[*P3-mpls-ldp] quit
[*P3] interface gigabitethernet 1/0/0
[*P3-GigabitEthernet1/0/0] mpls
[*P3-GigabitEthernet1/0/0] mpls ldp
[*P3-GigabitEthernet1/0/0] quit
[*P3] interface gigabitethernet 2/0/0
[*P3-GigabitEthernet2/0/0] mpls
[*P3-GigabitEthernet2/0/0] mpls ldp
[*P3-GigabitEthernet2/0/0] commit
[~P3-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 5.5.5.9
[*PE2] mpls
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] mpls
[*PE2-GigabitEthernet1/0/0] mpls ldp
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] mpls
[*PE2-GigabitEthernet2/0/0] mpls ldp
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
After completing the preceding configuration, check that LDP sessions have been
established between neighboring nodes. Run the display mpls ldp session
command on each node. The command output shows that Status is Operational.
The following example uses the command output on P1.
[~P1] display mpls ldp session
LDP Session(s) in Public Network
Codes: LAM(Label Advertisement Mode), SsnAge Unit(DDDD:HH:MM)
An asterisk (*) before a session means the session is being deleted.
--------------------------------------------------------------------------
PeerID Status LAM SsnRole SsnAge KASent/Rcv
--------------------------------------------------------------------------
2.2.2.9:0 Operational DU Passive 0000:00:56 227/227
3.3.3.9:0 Operational DU Passive 0000:00:56 227/227
5.5.5.9:0 Operational DU Passive 0000:00:56 227/227
--------------------------------------------------------------------------
TOTAL: 3 Session(s) Found.
Step 5 Enable LDP-IGP synchronization on the interfaces on both ends of the link
between P1 and P2.
# Configure P1.
[~P1] ospf 1
[~P1-ospf-1] area 0
[~P1-ospf-1-area-0.0.0.0] ldp-sync enable
[*P1-ospf-1-area-0.0.0.0] commit
[~P1-ospf-1-area-0.0.0.0] quit
[~P1-ospf-1] quit
# Configure P2.
[~P2] ospf 1
[~P2-ospf-1] area 0
[~P2-ospf-1-area-0.0.0.0] ldp-sync enable
[*P2-ospf-1-area-0.0.0.0] commit
[~P2-ospf-1-area-0.0.0.0] quit
[~P2-ospf-1] quit
# Configure P1.
[~P1] interface gigabitethernet 1/0/1
[~P1-GigabitEthernet1/0/1] ospf ldp-sync block
[*P1-GigabitEthernet1/0/1] commit
[~P1-GigabitEthernet1/0/1] quit
Step 6 Set the value of the Hold-max-cost timer on the interfaces on both ends of the
link between P1 and P2.
# Configure P1.
[~P1] interface gigabitethernet 1/0/0
[~P1-GigabitEthernet1/0/0] ospf timer ldp-sync hold-max-cost 9
[*P1-GigabitEthernet1/0/0] commit
[~P1-GigabitEthernet1/0/0] quit
# Configure P2.
[~P2] interface gigabitethernet 1/0/0
[~P2-GigabitEthernet1/0/0] ospf timer ldp-sync hold-max-cost 9
[*P2-GigabitEthernet1/0/0] commit
[~P2-GigabitEthernet1/0/0] quit
Step 7 Set the value of the delay timer on the interfaces on both ends of the link
between P1 and P2.
# Configure P1.
[~P1] interface gigabitethernet 1/0/0
[~P1-GigabitEthernet1/0/0] mpls ldp timer igp-sync-delay 6
[*P1-GigabitEthernet1/0/0] commit
[~P1-GigabitEthernet1/0/0] quit
# Configure P2.
[~P2] interface gigabitethernet 1/0/0
[~P2-GigabitEthernet1/0/0] mpls ldp timer igp-sync-delay 6
[*P2-GigabitEthernet1/0/0] commit
[~P2-GigabitEthernet1/0/0] quit
After completing the preceding configuration, run the display ospf ldp-sync
command on P1. The interface status is Sync-Achieved.
[~P1] display ospf ldp-sync interface gigabitethernet 1/0/0
Interface GE1/0/0
HoldDown Timer: 10 HoldMaxCost Timer: 9
LDP State: Up OSPF Sync State: Sync-Achieved
----End
Configuration Files
● P1 configuration file
#
sysname P1
#
mpls lsr-id 1.1.1.9
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.252
ospf ldp-sync
ospf timer ldp-sync hold-max-cost 9
mpls
mpls ldp
mpls ldp timer igp-sync-delay 6
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.252
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.5.1.1 255.255.255.252
ospf ldp-sync block
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
nexthop 10.1.1.2 weight 1
nexthop 10.3.1.2 weight 2
nexthop 10.5.1.2 weight 2
area 0.0.0.0
ldp-sync enable
network 1.1.1.9 0.0.0.0
network 10.1.1.0 0.0.0.3
network 10.3.1.0 0.0.0.3
network 10.5.1.0 0.0.0.3
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 2.2.2.9
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.252
ospf ldp-sync
ospf timer ldp-sync hold-max-cost 9
mpls
mpls ldp
mpls ldp timer igp-sync-delay 6
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.252
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
ldp-sync enable
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.3
network 10.2.1.0 0.0.0.3
#
return
● P3 configuration file
#
sysname P3
#
mpls lsr-id 3.3.3.9
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.252
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.3.1.0 0.0.0.3
network 10.4.1.0 0.0.0.3
#
return
● P4 configuration file
#
sysname P4
#
mpls lsr-id 4.4.4.9
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.5.1.2 255.255.255.252
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.6.1.1 255.255.255.252
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 10.5.1.0 0.0.0.3
Networking Requirements
On the network shown in Figure 1-57, LSRA, LSRB, and LSRC are devices with a
single main control board. Without GR, during a master/backup switchover or a
system upgrade, LSPs are torn down because the neighbor goes down, which
briefly interrupts traffic. LDP GR can be configured to retain labels across
the master/backup switchover or protocol restart. This allows LDP sessions and
LSPs to be successfully reestablished after the switchover or system upgrade,
so that MPLS forwarding is uninterrupted and traffic is unaffected.
Configuration Notes
When configuring LDP GR, note the following:
● Enabling or disabling LDP GR causes an LDP session to be reestablished.
● Changing the value of an LDP GR timer also causes an LDP session to be
reestablished.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface, configure loopback addresses as LSR
IDs, and configure OSPF to advertise the route to the network segment to
which each interface is connected and the host route to each LSR ID.
2. Enable MPLS and MPLS LDP globally on each LSR.
3. Enable MPLS and MPLS LDP on each interface.
4. Configure LDP GR.
5. Set LDP GR parameters on a GR Restarter.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface on each LSR as shown in Figure 1-57, OSPF
process ID, and area ID
● OSPF GR interval
● LDP reconnecting time
● LDP neighbor-liveness time
● LDP recovery time
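The timers listed above interact on a GR helper roughly as described in RFC 3478: the helper waits for the restarting peer for at most the smaller of its own neighbor-liveness timer and the peer's advertised reconnect time, then retains stale label bindings for at most the negotiated recovery time. A simplified sketch (illustrative Python; the 300-second values match the defaults shown in this example's command outputs):

```python
# Sketch of the two wait phases an LDP GR helper applies (per RFC 3478):
# first wait for the restarting peer to reconnect, then wait for stale
# label bindings to be refreshed.

def gr_wait_times(local_neighbor_liveness, peer_reconnect,
                  local_recovery, peer_recovery):
    """Return (reconnect_wait, recovery_wait) in seconds."""
    reconnect_wait = min(local_neighbor_liveness, peer_reconnect)
    recovery_wait = min(local_recovery, peer_recovery)
    return reconnect_wait, recovery_wait

# With the 300 s reconnect/recovery values shown in the display outputs
# and a helper neighbor-liveness of 600 s, the helper waits up to 300 s
# for the peer to reconnect, then up to 300 s for binding recovery.
waits = gr_wait_times(600, 300, 300, 300)
```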
Procedure
Step 1 Assign an IP address to each interface and configure OSPF to advertise the route
to the network segment to which each interface is connected and the host route
to each LSR ID. For configuration details, see Configuration Files in this section.
Step 2 Enable MPLS and MPLS LDP globally on each LSR.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.9
[*LSRA] mpls
[*LSRA-mpls] quit
[*LSRA] mpls ldp
[*LSRA-mpls-ldp] commit
[~LSRA-mpls-ldp] quit
# Configure LSRB.
[~LSRB] mpls lsr-id 2.2.2.9
[*LSRB] mpls
[*LSRB-mpls] quit
[*LSRB] mpls ldp
[*LSRB-mpls-ldp] commit
[~LSRB-mpls-ldp] quit
# Configure LSRC.
[~LSRC] mpls lsr-id 3.3.3.9
[*LSRC] mpls
[*LSRC-mpls] quit
[*LSRC] mpls ldp
[*LSRC-mpls-ldp] commit
[~LSRC-mpls-ldp] quit
Step 3 Enable MPLS and MPLS LDP on each interface.
# Configure LSRA.
[~LSRA] interface gigabitethernet 1/0/0
[~LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls ldp
[*LSRA-GigabitEthernet1/0/0] commit
[~LSRA-GigabitEthernet1/0/0] quit
# Configure LSRB.
[~LSRB] interface gigabitethernet 1/0/0
[~LSRB-GigabitEthernet1/0/0] mpls
[*LSRB-GigabitEthernet1/0/0] mpls ldp
[*LSRB-GigabitEthernet1/0/0] commit
[~LSRB-GigabitEthernet1/0/0] quit
[~LSRB] interface gigabitethernet 2/0/0
[~LSRB-GigabitEthernet2/0/0] mpls
[*LSRB-GigabitEthernet2/0/0] mpls ldp
[*LSRB-GigabitEthernet2/0/0] commit
[~LSRB-GigabitEthernet2/0/0] quit
# Configure LSRC.
[~LSRC] interface gigabitethernet 1/0/0
[~LSRC-GigabitEthernet1/0/0] mpls
[*LSRC-GigabitEthernet1/0/0] mpls ldp
[*LSRC-GigabitEthernet1/0/0] commit
[~LSRC-GigabitEthernet1/0/0] quit
After the preceding configurations are complete, local LDP sessions are
successfully established between LSRA and LSRB, and between LSRB and LSRC.
# Run the display mpls ldp session command on an LSR to view information
about the established LDP session. The following example uses the command
output on LSRA.
[~LSRA] display mpls ldp session
LDP Session(s) in Public Network
Codes: LAM(Label Advertisement Mode), SsnAge Unit(DDD:HH:MM)
An asterisk (*) before a session means the session is being deleted.
--------------------------------------------------------------------------
PeerID Status LAM SsnRole SsnAge KASent/Rcv
--------------------------------------------------------------------------
2.2.2.9:0 Operational DU Passive 000:00:02 9/9
--------------------------------------------------------------------------
TOTAL: 1 Session(s) Found.
Step 4 Configure LDP GR.
# Configure LSRA.
[~LSRA] mpls ldp
[~LSRA-mpls-ldp] graceful-restart
Warning: All the related sessions will be deleted if the operation is performed!
Continue? [Y/N]:y
[*LSRA-mpls-ldp] commit
[~LSRA-mpls-ldp] quit
# Configure LSRB.
[~LSRB] mpls ldp
[~LSRB-mpls-ldp] graceful-restart
Warning: All the related sessions will be deleted if the operation is performed!
Continue? [Y/N]:y
[*LSRB-mpls-ldp] commit
[~LSRB-mpls-ldp] quit
# Configure LSRC.
[~LSRC] mpls ldp
[~LSRC-mpls-ldp] graceful-restart
Warning: All the related sessions will be deleted if the operation is performed!
Continue? [Y/N]:y
[*LSRC-mpls-ldp] commit
[~LSRC-mpls-ldp] quit
# After completing the preceding configuration, run the display mpls ldp session
verbose command on an LSR. The command output shows that the Session FT
Flag field indicates On. The following example uses the command output on
LSRA.
[~LSRA] display mpls ldp session verbose
LDP Session(s) in Public Network
------------------------------------------------------------------------
Peer LDP ID : 2.2.2.9:0 Local LDP ID : 1.1.1.9:0
TCP Connection : 1.1.1.9 <- 2.2.2.9
Session State : Operational Session Role : Passive
Session FT Flag : On MD5 Flag : Off
Reconnect Timer : 300 Sec Recovery Timer : 300 Sec
Keychain Name : kc1
Tcpao Name : ---
Authentication applied : ---
Capability:
Capability-Announcement : Off
mLDP P2MP Capability : Off
mLDP MBB Capability : Off
mLDP MP2MP Capability : Off
# Alternatively, run the display mpls ldp peer verbose command on an LSR. The
command output shows that the Peer FT Flag field indicates On. The following
example uses the command output on LSRA.
[~LSRA] display mpls ldp peer verbose
LDP Peer Information in Public network
-------------------------------------------------------------------------------
Peer LDP ID : 2.2.2.9:0
Peer Max PDU Length : 4096 Peer Transport Address : 2.2.2.9
Peer Loop Detection : Off Peer Path Vector Limit : --
Peer FT Flag : On Peer Keepalive Timer : 45 Sec
Recovery Timer : 300 Sec Reconnect Timer : 300 Sec
Peer Type : Local
Peer Label Advertisement Mode : Downstream Unsolicited
Distributed ID :0
Peer Discovery Source : GigabitEthernet1/0/0
Capability-Announcement : On
-------------------------------------------------------------------------------
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.9
#
mpls
#
mpls ldp
graceful-restart
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 10.1.1.0 0.0.0.3
#
return
Networking Requirements
On the network shown in Figure 1-58, LSRB and LSRD are on the edge of a
backbone network. LDP over TE is to be deployed on this network to allow an LDP
LSP to cross an RSVP-TE area. LDP services are transmitted between LSRA and
LSRB, and between LSRD and LSRE. In addition, TE services are transmitted
between LSRB and LSRC, and between LSRC and LSRD. A TE tunnel destined for
LSRD is established on LSRB, and an RSVP-TE tunnel destined for LSRB is
established on LSRD. Traffic between LSRA and LSRE needs to be transmitted
through these tunnels. LDP over TE can also transmit VPN services.
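In LDP over TE, the TE tunnel acts as a single hop of the LDP LSP: at the tunnel ingress (LSRB here), the packet's LDP label is swapped and the RSVP-TE tunnel label is pushed on top, so the core node (LSRC) forwards on the TE label only and never needs to learn the LDP FECs. A sketch of the resulting label stack (illustrative Python, not device software; the label values are made up):

```python
# Sketch of LDP over TE forwarding at the tunnel ingress: the TE tunnel
# label is pushed on top of the (already swapped) LDP label, so the
# RSVP-TE core forwards on the outer label only.

def ldp_over_te_stack(ldp_out_label, te_tunnel_label):
    """Label stack leaving the tunnel ingress, outermost label first."""
    return [te_tunnel_label, ldp_out_label]

# Hypothetical labels: the TE core sees only the outer label (32); the
# inner LDP label (1030) is exposed again at the tunnel egress.
stack = ldp_over_te_stack(ldp_out_label=1030, te_tunnel_label=32)
```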
LSRA GigabitEthernet1/0/0 10.1.1.1/24
LSRB GigabitEthernet1/0/0 10.1.1.2/24
GigabitEthernet2/0/0 10.2.1.1/24
LSRC GigabitEthernet1/0/0 10.2.1.2/24
GigabitEthernet2/0/0 10.3.1.1/24
LSRD GigabitEthernet1/0/0 10.3.1.2/24
GigabitEthernet2/0/0 10.4.1.2/24
LSRE GigabitEthernet1/0/0 10.4.1.1/24
Configuration Notes
When configuring LDP over TE, note that the tunnel destination address must be
the LSR ID of the egress.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● OSPF process ID and OSPF area ID
● Policy for triggering the LSP establishment
● Name and IP address of each remote LDP peer of LSRB and LSRD
● Link bandwidth attributes of the tunnel
● Tunnel interface number, IP address, destination address, tunnel ID, RSVP-TE
tunnel signaling protocol, tunnel bandwidth, TE metric value, link cost on
LSRB and LSRD
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address to each interface, including the loopback interface according
to Figure 1-58. For configuration details, see the configuration files.
Step 2 Configure OSPF to advertise the route to the network segment to which each
interface is connected and the host route to each LSR ID. For configuration details,
see Configuration Files in this section.
Step 3 Enable MPLS on each LSR. Enable LDP to set up LDP sessions between LSRA and
LSRB, and between LSRD and LSRE. Enable RSVP to establish RSVP neighbor
relationships between LSRB and LSRC, and between LSRC and LSRD.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] quit
[*LSRA] mpls ldp
[*LSRA-mpls-ldp] quit
[*LSRA] interface gigabitethernet 1/0/0
[*LSRA-GigabitEthernet1/0/0] mpls
[*LSRA-GigabitEthernet1/0/0] mpls ldp
[*LSRA-GigabitEthernet1/0/0] commit
[~LSRA-GigabitEthernet1/0/0] quit
# Configure LSRB.
[~LSRB] mpls lsr-id 2.2.2.2
[*LSRB] mpls
[*LSRB-mpls] mpls te
[*LSRB-mpls] mpls rsvp-te
[*LSRB-mpls] mpls te cspf
[*LSRB-mpls] quit
[*LSRB] mpls ldp
[*LSRB-mpls-ldp] quit
[*LSRB] interface gigabitethernet 1/0/0
[*LSRB-GigabitEthernet1/0/0] mpls
[*LSRB-GigabitEthernet1/0/0] mpls ldp
[*LSRB-GigabitEthernet1/0/0] quit
[*LSRB] interface gigabitethernet 2/0/0
[*LSRB-GigabitEthernet2/0/0] mpls
[*LSRB-GigabitEthernet2/0/0] mpls te
[*LSRB-GigabitEthernet2/0/0] mpls rsvp-te
[*LSRB-GigabitEthernet2/0/0] commit
[~LSRB-GigabitEthernet2/0/0] quit
# Configure LSRC.
[~LSRC] mpls lsr-id 3.3.3.3
[*LSRC] mpls
[*LSRC-mpls] mpls te
[*LSRC-mpls] mpls rsvp-te
[*LSRC-mpls] quit
[*LSRC] interface gigabitethernet 1/0/0
[*LSRC-GigabitEthernet1/0/0] mpls
[*LSRC-GigabitEthernet1/0/0] mpls te
[*LSRC-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRC-GigabitEthernet1/0/0] quit
[*LSRC] interface gigabitethernet 2/0/0
[*LSRC-GigabitEthernet2/0/0] mpls
[*LSRC-GigabitEthernet2/0/0] mpls te
[*LSRC-GigabitEthernet2/0/0] mpls rsvp-te
[*LSRC-GigabitEthernet2/0/0] commit
[~LSRC-GigabitEthernet2/0/0] quit
# Configure LSRD.
[~LSRD] mpls lsr-id 4.4.4.4
[*LSRD] mpls
[*LSRD-mpls] mpls te
[*LSRD-mpls] mpls rsvp-te
[*LSRD-mpls] mpls te cspf
[*LSRD-mpls] quit
[*LSRD] mpls ldp
[*LSRD-mpls-ldp] quit
[*LSRD] interface gigabitethernet 1/0/0
[*LSRD-GigabitEthernet1/0/0] mpls
[*LSRD-GigabitEthernet1/0/0] mpls te
[*LSRD-GigabitEthernet1/0/0] mpls rsvp-te
[*LSRD-GigabitEthernet1/0/0] quit
[*LSRD] interface gigabitethernet 2/0/0
[*LSRD-GigabitEthernet2/0/0] mpls
[*LSRD-GigabitEthernet2/0/0] mpls ldp
[*LSRD-GigabitEthernet2/0/0] commit
[~LSRD-GigabitEthernet2/0/0] quit
# Configure LSRE.
[~LSRE] mpls lsr-id 5.5.5.5
[*LSRE] mpls
[*LSRE-mpls] quit
[*LSRE] mpls ldp
[*LSRE-mpls-ldp] quit
[*LSRE] interface gigabitethernet 1/0/0
[*LSRE-GigabitEthernet1/0/0] mpls
[*LSRE-GigabitEthernet1/0/0] mpls ldp
[*LSRE-GigabitEthernet1/0/0] commit
[~LSRE-GigabitEthernet1/0/0] quit
After the preceding configurations are complete, the local LDP sessions are
successfully set up between LSRA and LSRB, and between LSRD and LSRE.
# Run the display mpls ldp session command on LSRA, LSRB, LSRD, or LSRE to
view information about the established LDP session. The following example uses
the command output on LSRA.
# Run the display mpls ldp peer command on an LSR to view information about
the established LDP peer. The following example uses the command output on
LSRA.
[~LSRA] display mpls ldp peer
LDP Peer Information in Public network
An asterisk (*) before a peer means the peer is being deleted.
-------------------------------------------------------------------------
PeerID TransportAddress DiscoverySource
-------------------------------------------------------------------------
2.2.2.2:0 2.2.2.2 GigabitEthernet1/0/0
-------------------------------------------------------------------------
TOTAL: 1 Peer(s) Found.
# Run the display mpls lsp command on an LSR to view LSP information. The
command output shows that LDP LSPs have been established but RSVP CR-LSPs
have not been set up. The following example uses the command output on LSRA.
[~LSRA] display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.1/32 3/NULL -/-
2.2.2.2/32 NULL/3 -/GE1/0/0
2.2.2.2/32 32841/3 -/GE1/0/0
# Configure LSRD.
[~LSRD] mpls ldp remote-peer lsrb
[*LSRD-mpls-ldp-remote-lsrb] remote-ip 2.2.2.2
[*LSRD-mpls-ldp-remote-lsrb] commit
[~LSRD-mpls-ldp-remote-lsrb] quit
# After completing the preceding configurations, run the display mpls ldp
remote-peer command on LSRB or LSRD. The command output shows that a
remote LDP session has been established between LSRB and LSRD. The following
example uses the command output on LSRB.
[~LSRB] display mpls ldp remote-peer lsrd
LDP Remote Entity Information
------------------------------------------------------------------------------
Remote Peer Name : lsrd
Description : ----
Remote Peer IP : 4.4.4.4 LDP ID : 2.2.2.2:0
Transport Address : 2.2.2.2 Entity Status : Active
Step 5 Configure bandwidth attributes on each outbound interface along the link of the
TE tunnel.
# Configure LSRB.
[~LSRB] interface gigabitethernet 2/0/0
[~LSRB-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*LSRB-GigabitEthernet2/0/0] mpls te bandwidth bc0 20000
[*LSRB-GigabitEthernet2/0/0] commit
[~LSRB-GigabitEthernet2/0/0] quit
# Configure LSRC.
[~LSRC] interface gigabitethernet 1/0/0
[~LSRC-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*LSRC-GigabitEthernet1/0/0] mpls te bandwidth bc0 20000
[*LSRC-GigabitEthernet1/0/0] quit
[*LSRC] interface gigabitethernet 2/0/0
[*LSRC-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*LSRC-GigabitEthernet2/0/0] mpls te bandwidth bc0 20000
[*LSRC-GigabitEthernet2/0/0] commit
[~LSRC-GigabitEthernet2/0/0] quit
# Configure LSRD.
[~LSRD] interface gigabitethernet 1/0/0
[~LSRD-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*LSRD-GigabitEthernet1/0/0] mpls te bandwidth bc0 20000
[*LSRD-GigabitEthernet1/0/0] commit
[~LSRD-GigabitEthernet1/0/0] quit
# Run the display mpls lsp command on LSRB, LSRC, or LSRD to view information
about LSPs. You can view information about RSVP LSPs. The following example
uses the command output on LSRB.
[~LSRB] display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: RSVP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
4.4.4.4/32 NULL/32832 -/GE2/0/0
2.2.2.2/32 3/NULL GE2/0/0/-
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.1/32 NULL/3 -/GE1/0/0
1.1.1.1/32 32834/3 -/GE1/0/0
2.2.2.2/32 3/NULL -/-
4.4.4.4/32 NULL/3 -/Tun1
4.4.4.4/32 32844/3 -/Tun1
5.5.5.5/32 NULL/32837 -/Tun1
5.5.5.5/32 32845/32837 -/Tun1
# Run the display ip routing-table command to view the routing table on LSRA.
The command output shows that the cost values changed after the forwarding
adjacency was configured.
[~LSRA] display ip routing-table
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : _public_
Destinations : 16 Routes : 16
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
destination 4.4.4.4
mpls te igp advertise
mpls te igp metric absolute 1
mpls te bandwidth ct0 10000
mpls te tunnel-id 100
#
ospf 1
opaque-capability enable
enable traffic-adjustment advertise
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
mpls-te enable
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 20000
mpls te bandwidth bc0 20000
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 20000
mpls te bandwidth bc0 20000
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.2.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
mpls-te enable
#
return
● LSRD configuration file
#
sysname LSRD
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
mpls ldp
#
ipv4-family
#
mpls ldp remote-peer lsrb
remote-ip 2.2.2.2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 20000
mpls te bandwidth bc0 20000
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te igp advertise
mpls te igp metric absolute 1
mpls te bandwidth ct0 10000
mpls te tunnel-id 101
#
ospf 1
opaque-capability enable
enable traffic-adjustment advertise
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.3.1.0 0.0.0.255
network 10.4.1.0 0.0.0.255
mpls-te enable
#
return
● LSRE configuration file
#
sysname LSRE
#
mpls lsr-id 5.5.5.5
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
#
ospf 1
area 0.0.0.0
network 5.5.5.5 0.0.0.0
network 10.4.1.0 0.0.0.255
#
return
Networking Requirements
The mLDP P2MP technique is driven by the increasing demand to support the
growing scale of multicast services on IP/MPLS backbone networks. A P2P LDP-
enabled transmit end must replicate a packet and send it to multiple receive ends.
Each replicated packet is sent along a separate LSP to its receive end, which
wastes bandwidth resources. To address this problem, enable mLDP P2MP to
establish P2MP LSPs, without the need to deploy Protocol Independent Multicast
(PIM). A tree-shaped mLDP P2MP LSP consists of sub-LSPs originating from the
root node (ingress) and destined for leaf nodes. The root node directs multicast
traffic to the P2MP LSP and sends packets to a branch node for replication. The
branch node replicates the packets and forwards them to each leaf node
connected to the branch node.
On the network shown in Figure 1-59, an mLDP P2MP LSP originates from root
node LSRA and is destined for leaf nodes LSRC, LSRE, and LSRF.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign IP addresses to all physical and loopback interfaces listed in Table
1-25.
2. Configure Intermediate System to Intermediate System (IS-IS) to advertise the
route to each network segment to which each interface is connected and
advertise the host route to each LSR ID.
3. Set an MPLS LSR ID and globally enable MPLS, MPLS LDP, and mLDP P2MP
on each node.
4. Configure MPLS LDP to establish a local LDP session on each interface along
a P2MP LSP to be established.
5. Configure leaf nodes LSRC, LSRE, and LSRF to trigger P2MP LSP
establishment.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface on every node listed in Table 1-25
● IS-IS process ID (1) and IS-IS level (Level-2) on each node
● Root node address (1.1.1.1), mLDP P2MP LSP name (lsp1), and LSP ID (1)
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address to each interface according to Table 1-25 and create a
loopback interface on each node. For configuration details, see Configuration
Files in this section.
Step 2 Configure IS-IS to advertise the route to each network segment to which each
interface is connected and to advertise the host route to each LSR ID.
Configure IS-IS on each node to implement network layer connectivity. For
configuration details, see the configuration files.
Step 3 Configure mLDP P2MP globally on each node.
Set an MPLS LSR ID and globally enable MPLS, MPLS LDP, and mLDP P2MP on
each node.
# Configure LSRA.
[~LSRA] mpls lsr-id 1.1.1.1
[*LSRA] mpls
[*LSRA-mpls] mpls ldp
[*LSRA-mpls-ldp] mldp p2mp
[*LSRA-mpls-ldp] commit
[~LSRA-mpls-ldp] quit
Repeat this step for LSRB, LSRC, LSRD, LSRE, and LSRF. For configuration details,
see Configuration Files in this section.
Step 4 Establish local LDP sessions between nodes.
Configure MPLS and MPLS LDP on each directly connected interface to establish a
local LDP session.
# Configure LSRA.
[~LSRA] interface gigabitethernet 1/0/1
[*LSRA-GigabitEthernet1/0/1] mpls
[*LSRA-GigabitEthernet1/0/1] mpls ldp
[*LSRA-GigabitEthernet1/0/1] commit
[~LSRA-GigabitEthernet1/0/1] quit
Repeat this step for LSRB, LSRC, LSRD, LSRE, and LSRF. For configuration details,
see Configuration Files in this section.
Step 5 Configure leaf nodes to trigger mLDP P2MP LSP establishment.
# Configure LSRC.
<LSRC> system-view
[~LSRC] mpls ldp
[*LSRC-mpls-ldp] mldp p2mp-lsp name lsp1 root-ip 1.1.1.1 lsp-id 1
[*LSRC-mpls-ldp] commit
[~LSRC-mpls-ldp] quit
# Configure LSRE.
<LSRE> system-view
[~LSRE] mpls ldp
[*LSRE-mpls-ldp] mldp p2mp-lsp name lsp1 root-ip 1.1.1.1 lsp-id 1
[*LSRE-mpls-ldp] commit
[~LSRE-mpls-ldp] quit
# Configure LSRF.
<LSRF> system-view
[~LSRF] mpls ldp
[*LSRF-mpls-ldp] mldp p2mp-lsp name lsp1 root-ip 1.1.1.1 lsp-id 1
[*LSRF-mpls-ldp] commit
[~LSRF-mpls-ldp] quit
# Run the display mpls mldp lsp p2mp command on LSRB. The command output
shows that mLDP P2MP LSP information is consistent with the configuration.
<LSRB> display mpls mldp lsp p2mp
An asterisk (*) before a Label means the USCB or DSCB is stale
An asterisk (*) before a Peer means the session is stale
-------------------------------------------------------------------------------
LSP Information: mLDP P2MP-LSP
-------------------------------------------------------------------------------
Root IP : 1.1.1.1 Instance : --
Opaque decoded : LSP-ID 1
Opaque value : 01 0004 00000001
Lsr Type : Transit
Trigger Type : --
Upstream Count : 1 Downstream Count : 2
Upstream:
In Label Peer MBB State
4101 1.1.1.1 --
Downstream:
Out Label Peer MBB State Next Hop Out Interface
4101 4.4.4.4 -- 10.2.1.2 GigabitEthernet1/0/0
4101 3.3.3.3 -- 10.3.1.2 GigabitEthernet1/0/2
----End
Configuration Files
● LSRA configuration file
#
sysname LSRA
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
mldp p2mp
#
ipv4-family
#
isis 1
is-level level-2
network-entity 00.0005.0000.0000.0001.00
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
return
● LSRB configuration file
#
sysname LSRB
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls ldp
mldp p2mp
#
ipv4-family
#
isis 1
is-level level-2
network-entity 00.0005.0000.0000.0002.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.3.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
return
● LSRC configuration file
#
sysname LSRC
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls ldp
mldp p2mp
#
ipv4-family
mldp p2mp-lsp name lsp1 root-ip 1.1.1.1 lsp-id 1
#
isis 1
is-level level-2
network-entity 00.0005.0000.0000.0003.00
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.3.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
#
sysname LSRE
#
mpls lsr-id 5.5.5.5
#
mpls
#
mpls ldp
mldp p2mp
#
ipv4-family
mldp p2mp-lsp name lsp1 root-ip 1.1.1.1 lsp-id 1
#
isis 1
is-level level-2
network-entity 00.0005.0000.0000.0005.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
isis enable 1
#
return
Usage Scenario
As shown in Figure 1-60, the access, aggregation, and core layers belong to the
same AS. Intra-AS seamless MPLS can be configured to transmit services between
gNodeBs or eNodeBs and a Mobility Management Entity (MME) or Serving
Gateway (SGW). Intra-AS seamless MPLS applies to mobile bearer networks.
NOTE
When intra-AS seamless MPLS is configured for an L3VPN, BGP LSPs can recurse to
load-balanced LDP, TE, or LDP over TE tunnels, with ECMP/UCMP supported.
Pre-configuration Tasks
Before configuring intra-AS seamless MPLS, complete the following tasks:
If MPLS TE tunnels are used across the three layers, a tunnel policy or tunnel selector must
be configured. For configuration details, see VPN Tunnel Management Configuration.
Procedure
Step 1 Run system-view
The AGG's clients are its connected CSG and core ABR. The core ABR's clients are
its connected AGG and MASG.
The device is configured to use its own IP address as the next-hop address of
routes when advertising these routes.
To enable the AGG or core ABR to advertise routes with the next-hop address set
to its own IP address, run the peer next-hop-local command on the AGG or core
ABR.
----End
Procedure
Step 1 Run system-view
The ability to exchange labeled IPv4 routes with a BGP peer is enabled.
----End
Prerequisites
Before configuring a BGP LSP, configure an IGP on each device to implement
interworking at the network layer, configure basic MPLS functions on each device,
and establish MPLS tunnels.
Procedure
● Perform the following steps on each CSG and MASG:
a. Run system-view
----End
Context
Traffic statistics collection for BGP LSPs allows you to query and monitor the
traffic statistics of BGP LSPs in real time. To enable this function, run the bgp host
command.
NOTE
Traffic statistics collection for BGP LSPs takes effect only for BGP LSPs of which the FEC
mask length is 32 bits.
Procedure
Step 1 Run system-view
MPLS traffic statistics collection is enabled globally, and the traffic statistics
collection view is displayed.
If the ip-prefix parameter needs to be set to limit the range of BGP LSPs for which
traffic statistics collection is to be enabled, run the ip ip-prefix command to
create an IP prefix list first.
----End
Context
On an intra-AS seamless MPLS network that has protection switching enabled, if a
link or node fails, traffic switches to a backup path, which implements
uninterrupted traffic transmission.
The protection options are summarized in a table with the columns Tunnel Type,
Protected Object, Nodes to Be Configured, Detection Method, and Protection Function.
Procedure
● Configure BFD for interface.
a. Run system-view
The system view is displayed.
b. Run bfd session-name bind peer-ip peer-ip [ vpn-instance vpn-name ]
interface interface-type interface-number [ source-ip source-ip ]
A BFD session for IPv4 is bound to an interface.
c. Run discriminator local discr-value
The local discriminator of the BFD session is created.
d. Run discriminator remote discr-value
The remote discriminator of the BFD session is configured.
NOTE
The local and remote discriminators on the two ends of a BFD session must be
correctly associated. That is, the local discriminator of the local device must be
the same as the remote discriminator of the remote device, and the remote
discriminator of the local device must be the same as the local discriminator of
the remote device. If the association is incorrect, a BFD session cannot be set up.
e. Run commit
The configuration is committed.
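Taken together, the BFD-for-interface steps above can be sketched as follows. The device name, session name bfd1, peer address 10.1.1.2, interface, and discriminator values are hypothetical examples; replace them with values planned for the live network.
<HUAWEI> system-view
[~HUAWEI] bfd bfd1 bind peer-ip 10.1.1.2 interface GigabitEthernet1/0/0
[*HUAWEI-bfd-session-bfd1] discriminator local 100
[*HUAWEI-bfd-session-bfd1] discriminator remote 200
[*HUAWEI-bfd-session-bfd1] commit
On the peer device, configure a matching session with the discriminators mirrored (local 200, remote 100).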
NOTE
Physical links of a bypass tunnel cannot overlap protected physical links of the
primary tunnel.
g. (Optional) Run mpls te bandwidth ct0 bandwidth
NOTE
● If the mpls te auto-frr default command is run, the interface Auto FRR
capability status is the same as the global Auto FRR capability status.
g. Run mpls te fast-reroute [ bandwidth ]
The TE FRR function is enabled.
The bandwidth parameter can be configured to enable FRR bandwidth
protection for the primary tunnel.
h. (Optional) Run mpls te bypass-attributes bandwidth bandwidth
[ priority setup-priority [ hold-priority ] ]
Attributes for the Auto FRR bypass tunnel are set.
NOTE
● These attributes for the Auto FRR bypass tunnel can be set only after the
mpls te fast-reroute bandwidth command is run for the primary tunnel.
● The Auto FRR bypass tunnel bandwidth cannot exceed the primary tunnel
bandwidth.
● If no attributes are configured for an Auto FRR bypass tunnel, the Auto FRR
bypass tunnel by default uses the same bandwidth as that of the primary
tunnel.
● The setup priority of the bypass tunnel cannot be higher than the holding
priority. Each priority of the bypass tunnel cannot be higher than that of the
primary tunnel.
● If the primary tunnel FRR is disabled, the bypass tunnel attributes are
automatically deleted.
● On one TE tunnel interface, the bypass tunnel bandwidth and the multi-CT
are mutually exclusive.
i. Run commit
The configuration is committed.
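As a minimal sketch, the TE FRR steps above might be applied on the tunnel interface of the primary tunnel as follows. The tunnel number, bypass bandwidth, and priority values are hypothetical examples.
[~HUAWEI] interface Tunnel1
[*HUAWEI-Tunnel1] mpls te fast-reroute bandwidth
[*HUAWEI-Tunnel1] mpls te bypass-attributes bandwidth 5000 priority 1 1
[*HUAWEI-Tunnel1] commit
Per the notes above, the bypass bandwidth (5000 kbit/s here) must not exceed the primary tunnel bandwidth, and the bypass setup priority must not be higher than its holding priority.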
● Configure static BFD for CR-LSP.
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally on the local node, and the BFD view is displayed.
c. Run quit
Return to the system view.
d. Run bfd session-name bind mpls-te interface tunnel interface-number
te-lsp [ backup ]
The BFD session is bound to the primary or backup CR-LSP of the
specified tunnel.
If the backup parameter is specified, the BFD session is bound to the
backup CR-LSP.
e. Run discriminator local discr-value
The local discriminator of the BFD session is configured.
NOTE
The local discriminator of the local device and the remote discriminator of the
remote device are the same, and the remote discriminator of the local device and
the local discriminator of the remote device are the same. A discriminator
inconsistency causes the BFD session to fail to be established.
g. Run process-pst
BFD is enabled to modify the port status table or link status table.
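The static BFD for CR-LSP steps above can be sketched as follows on the ingress. The session name, tunnel number, and discriminator values are hypothetical; the remote discriminator must mirror the discriminator configured on the egress.
[~HUAWEI] bfd
[*HUAWEI-bfd] quit
[*HUAWEI] bfd telsp bind mpls-te interface Tunnel1 te-lsp
[*HUAWEI-bfd-lsp-session-telsp] discriminator local 10
[*HUAWEI-bfd-lsp-session-telsp] discriminator remote 20
[*HUAWEI-bfd-lsp-session-telsp] process-pst
[*HUAWEI-bfd-lsp-session-telsp] commit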
BFD is enabled globally on the local node, and the BFD view is displayed.
c. Run interface tunnel interface-number
The command configured in the tunnel interface view takes effect only
on the current tunnel interface.
e. Run commit
a. Run system-view
The system view is displayed.
b. Run interface tunnel tunnel-number
The MPLS TE tunnel interface view is displayed.
c. Run mpls te backup hot-standby [ mode { revertive [ wtr interval ] |
non-revertive } | overlap-path | wtr [ interval ] | dynamic-bandwidth ]
CR-LSP hot standby is configured.
Select the following parameters as needed to enable sub-functions:
NOTE
The local discriminator of the local device and the remote discriminator of the
remote device are the same, and the remote discriminator of the local device and
the local discriminator of the remote device are the same. A discriminator
inconsistency causes the BFD session to fail to be established.
h. Run process-pst
BFD is enabled to modify the port status table or link status table.
BFD is enabled globally on the local node, and the BFD view is displayed.
c. Run quit
NOTE
The local discriminator of the local device and the remote discriminator of the
remote device are the same, and the remote discriminator of the local device and
the local discriminator of the remote device are the same. A discriminator
inconsistency causes the BFD session to fail to be established.
g. Run process-pst
BFD is enabled to modify the port status table or link status table.
If BFD sessions are configured on both a trunk or VLAN interface and its member
interfaces, and BFD is allowed to modify the port status table or link status
table, you must configure the WTR time for the BFD session that monitors the
trunk or VLAN interface. This prevents that BFD session from flapping when a
member interface joins or leaves the interface.
h. (Optional) Run min-tx-interval tx-interval
The minimum interval at which BFD packets are sent is configured.
i. (Optional) Run min-rx-interval rx-interval
The local minimum interval at which BFD packets are received is
configured.
j. (Optional) Run detect-multiplier multiplier
The local BFD detection multiplier is configured.
k. Run commit
The configuration is committed.
● Configure dynamic BFD for LDP LSPs.
Perform the following steps on the ingress:
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally.
c. Run quit
Return to the system view.
d. Run mpls
The MPLS view is displayed.
e. Run mpls bfd enable
The capability of dynamically establishing a BFD session is configured on
the ingress.
f. Run mpls bfd-trigger { host | fec-list list-name }
A policy for establishing an LDP BFD session is configured.
g. Run commit
The configuration is committed.
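The ingress-side steps above correspond to the following sketch, assuming the host policy is used to trigger BFD sessions; the fec-list alternative would instead name a pre-configured FEC list.
[~HUAWEI] bfd
[*HUAWEI-bfd] quit
[~HUAWEI] mpls
[*HUAWEI-mpls] mpls bfd enable
[*HUAWEI-mpls] mpls bfd-trigger host
[*HUAWEI-mpls] commit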
In a seamless MPLS scenario, BGP LSP FRR must be configured on both the ingress
and a transit node. Before you configure BGP LSP FRR in this scenario, run the
ingress-lsp trigger route-policy command on a transit node to filter the ingress
role, and then run the auto-frr command on the transit node for BGP LSP FRR to
take effect.
NOTE
Perform this step on each CSG and MASG to enable the protection switching
function for the whole BGP LSP.
g. (Optional) Run route-select delay delay-value
A delay for selecting a route is configured. After the primary path
recovers, the device on the primary path performs route selection only
after the corresponding forwarding entries on the device are stable. This
prevents traffic loss during traffic switchback.
h. Run commit
The configuration is committed.
Perform the following steps on the transit node:
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run ipv4-family unicast
The BGP-IPv4 unicast address family view is displayed.
d. Run auto-frr
BGP Auto FRR for unicast routes is enabled.
e. Run bestroute nexthop-resolved tunnel [ inherit-ip-cost ]
Labeled BGP IPv4 unicast routes can participate in route selection only
when their next hops recurse to tunnels.
f. (Optional) Run route-select delay delay-value
A delay for selecting a route to the intermediate device on the primary
path is configured. After the primary path recovers, an appropriate delay
ensures that traffic switches back to the primary path after the
intermediate device completes refreshing forwarding entries.
g. Run commit
The configuration is committed.
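On the transit node, the steps above can be sketched as follows. The AS number 100 and the route-selection delay value 30 are hypothetical examples.
[~HUAWEI] bgp 100
[*HUAWEI-bgp] ipv4-family unicast
[*HUAWEI-bgp-af-ipv4] auto-frr
[*HUAWEI-bgp-af-ipv4] bestroute nexthop-resolved tunnel
[*HUAWEI-bgp-af-ipv4] route-select delay 30
[*HUAWEI-bgp-af-ipv4] commit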
● Configure BFD for BGP tunnel.
Perform the following steps on the ingress of an E2E BGP tunnel:
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally.
c. Run quit
Return to the system view.
d. Run mpls
The MPLS view is displayed.
e. Run mpls bgp bfd enable
The ability to dynamically establish BGP BFD sessions is enabled.
f. Run mpls bgp bfd-trigger-tunnel { host | ip-prefix ip-prefix-name }
The policy for dynamically establishing BGP BFD sessions is configured.
g. Run commit
The configuration is committed.
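On the ingress of the E2E BGP tunnel, the steps above can be sketched as follows, assuming the host policy is used; the ip-prefix alternative would instead reference a pre-configured IP prefix list.
[~HUAWEI] bfd
[*HUAWEI-bfd] quit
[~HUAWEI] mpls
[*HUAWEI-mpls] mpls bgp bfd enable
[*HUAWEI-mpls] mpls bgp bfd-trigger-tunnel host
[*HUAWEI-mpls] commit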
● Enable VPN FRR in the BGP-VPN instance IPv4 address family view.
a. Run system-view
----End
Context
In seamless MPLS scenarios, when an egress MASG fails, E2E BFD for BGP tunnel
is used to instruct a CSG to perform VPN FRR switching. In this protection solution,
large numbers of BGP LSPs and BFD sessions are required, which consumes
considerable bandwidth resources and burdens devices. To optimize the solution, the egress
protection function can be configured on the master and backup MASGs. With this
function enabled, both the master and backup MASGs assign the same private
network label value to a core ASBR. If the master MASG fails, BFD for LDP LSP or
BFD for TE can instruct a core ASBR to perform BGP FRR protection switching.
After traffic is switched to the backup MASG, the backup MASG removes the BGP
public network label and uses the same private network label as the faulty
master MASG to search for a matching VPN instance. Traffic can then be properly
forwarded.
The egress protection function is configured on both the master and backup
MASGs.
NOTE
If the egress protection function is configured on egress MASGs between which a tunnel
exists and a route imported by BGP on one of the MASGs recurses to the tunnel, this MASG
then recurses the route to another tunnel of a different type. In this case, traffic is directed
to the other MASG, which slows down traffic switchover. As a result, the egress protection
function does not take effect. To address this problem, specify non-relay-tunnel when
running the import-route or network command to prevent the routes imported by BGP
from recursing to tunnels.
Prerequisites
Before configuring the egress protection function, complete the following tasks:
● Configure a loopback interface on each of the master and backup MASGs.
The IP address of each loopback interface on an MASG is used to establish a
remote BGP peer relationship with a remote device.
● Host routes to the loopback interfaces are imported into the BGP routing
table, and both the master and backup MASGs advertise BGP labeled routes to
a core ASBR. Therefore, the core ASBR has two BGP labeled routes destined
for the same loopback interface. A routing policy is configured to enable the
core ASBR to select one route to implement BGP FRR.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
Step 3 Run ipv4-family
The VPN instance IPv4 address family view is displayed.
Step 4 Run route-distinguisher route-distinguisher
An RD is configured for the VPN instance IPv4 address family.
Step 5 Run apply-label per-instance static static-label-value
A device is enabled to assign the same static label to all routes destined for a
remote PE in a VPN instance IPv4 address family.
The same static label value must be set on both the master and backup MASGs.
NOTE
A change in the label allocation mode leads to re-advertising of IPv4 address family routes
in a VPN instance. This step causes a temporary service interruption. Exercise caution when
using this command.
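For example, on each of the master and backup MASGs, the procedure above might be applied as follows. The instance name vpna, RD 100:1, and static label 1000 are hypothetical; the static label value must be identical on both MASGs.
[~MASG] ip vpn-instance vpna
[*MASG-vpn-instance-vpna] ipv4-family
[*MASG-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*MASG-vpn-instance-vpna-af-ipv4] apply-label per-instance static 1000
[*MASG-vpn-instance-vpna-af-ipv4] commit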
----End
Prerequisites
Intra-AS seamless MPLS has been configured.
Procedure
● Run the display ip routing-table command on a CSG or an MASG to check
the routes to the peer end.
● Run the display mpls lsp command to check LSP information.
● Run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m
interval | -r reply-mode | -s packet-size | -t time-out | -v ] * bgp destination-
iphost mask-length [ ip-address ] command on a CSG or an MASG to check
the BGP LSP connectivity.
● Run the display mpls lsp protocol bgp traffic-statistics inbound command
to check the incoming traffic statistics of BGP LSPs.
● Run the display mpls lsp protocol bgp traffic-statistics outbound [ ipv4-
address mask-length ] verbose command to check the outgoing traffic
statistics of BGP LSPs.
● Run the display mpls lsp protocol bgp traffic-statistics outbound
aggregated command to check the traffic statistics of BGP LSPs aggregated
by FEC.
----End
Usage Scenario
As shown in Figure 1-61, the access and aggregation layers belong to one AS, and
the core layer belongs to another AS. Inter-AS seamless MPLS can be configured
to transmit services between gNodeBs or eNodeBs and a Mobility Management
Entity (MME) or Serving Gateway (SGW).
NOTE
When inter-AS seamless MPLS is configured for an L3VPN, BGP LSPs can recurse to
load-balanced LDP, TE, or LDP over TE tunnels, with ECMP/UCMP supported.
Pre-configuration Tasks
Before configuring inter-AS seamless MPLS, complete the following tasks:
If MPLS TE tunnels are used across the three layers, a tunnel policy or tunnel selector must
be configured. For configuration details, see VPN Tunnel Management Configuration.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp as-number
The BGP view is displayed.
Step 3 Run ipv4-family unicast
The IPv4 unicast address family view is displayed.
Step 4 Run peer { ipv4-address | group-name } reflect-client
An RR is configured, and the CSG and AGG ASBR are specified as its clients.
Step 5 Run peer { ipv4-address | group-name } next-hop-local
The device is configured to use its own IP address as the next-hop address of
routes when advertising these routes.
To enable the AGG to advertise routes with the next-hop address set to its own
address, run the peer next-hop-local command on the AGG.
Step 6 Run commit
The configuration is committed.
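For example, on the AGG the steps above might look as follows. The AS number 100 and the peer address 1.1.1.9 (a CSG client) are hypothetical examples.
[~AGG] bgp 100
[*AGG-bgp] ipv4-family unicast
[*AGG-bgp-af-ipv4] peer 1.1.1.9 reflect-client
[*AGG-bgp-af-ipv4] peer 1.1.1.9 next-hop-local
[*AGG-bgp-af-ipv4] commit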
----End
Procedure
● Perform the following steps on each CSG, AGG, and MASG:
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run peer { ipv4-address | group-name } label-route-capability [ check-
tunnel-reachable ]
The ability to exchange labeled IPv4 routes between devices in the local
AS is enabled.
MPLS is enabled.
e. Run quit
The ability to exchange labeled IPv4 routes between BGP peers, including
the peer ASBR and the devices in the local AS, is enabled.
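As a sketch, enabling the labeled-route exchange capability on a CSG might look as follows. The AS number and peer address are hypothetical examples.
[~CSG] bgp 100
[*CSG-bgp] peer 2.2.2.9 label-route-capability check-tunnel-reachable
[*CSG-bgp] commit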
----End
a labeled IPv4 route from downstream, the downstream node must re-assign an
MPLS label to the transit node and advertise the label upstream.
Procedure
● Perform the following steps on each CSG and MASG:
a. Run system-view
The system view is displayed.
b. Run route-policy route-policy-name matchMode node node
A Route-Policy node is created.
c. Run apply mpls-label
The local device is enabled to assign a label to an IPv4 route.
d. Run quit
Return to the system view.
e. Run bgp as-number
The BGP view is displayed.
f. Run peer { ipv4-address | group-name } route-policy route-policy-name
export
A routing policy for advertising routes matching Route-Policy conditions
to a BGP peer or a BGP peer group is configured.
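On a CSG or MASG, the steps above can be sketched as follows. The policy name policy1, node number, AS number, and peer address are hypothetical examples.
[~CSG] route-policy policy1 permit node 10
[*CSG-route-policy] apply mpls-label
[*CSG-route-policy] quit
[*CSG] bgp 100
[*CSG-bgp] peer 3.3.3.9 route-policy policy1 export
[*CSG-bgp] commit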
----End
Context
Traffic statistics collection for BGP LSPs allows you to query and monitor the
traffic statistics of BGP LSPs in real time. To enable this function, run the bgp host
command.
NOTE
Traffic statistics collection for BGP LSPs takes effect only for BGP LSPs of which the FEC
mask length is 32 bits.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
MPLS is enabled globally, and the MPLS view is displayed.
Step 3 Run quit
Return to the system view.
Step 4 Run mpls traffic-statistics
MPLS traffic statistics collection is enabled globally, and the traffic statistics
collection view is displayed.
Step 5 Run bgp host [ ip-prefix ip-prefix-name ]
Traffic statistics collection is enabled for BGP LSPs.
If the ip-prefix parameter needs to be set to limit the range of BGP LSPs for which
traffic statistics collection is to be enabled, run the ip ip-prefix command to
create an IP prefix list first.
Step 6 Run commit
The configuration is committed.
----End
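A minimal sketch of this procedure, assuming an example IP prefix list named bgp-lsp-list matching the host route 10.9.9.9/32; the list name, address, and prompts are illustrative assumptions.

```
[~MASG] ip ip-prefix bgp-lsp-list index 10 permit 10.9.9.9 32
[*MASG] mpls
[*MASG-mpls] quit
[*MASG] mpls traffic-statistics
[*MASG-mpls-traffic-statistics] bgp host ip-prefix bgp-lsp-list
[*MASG-mpls-traffic-statistics] commit
```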
1.1.5.4.5 (Optional) Configuring the Mode in Which a BGP Label Inherits the QoS
Priority in an Outer Tunnel Label
When data packets are transmitted from a core ASBR to an AGG ASBR, you can
determine whether a BGP label inherits the QoS priority carried in an outer tunnel
label.
Context
In the inter-AS seamless MPLS or inter-AS seamless MPLS+HVPN networking, each
packet arriving at a core ASBR or AGG ASBR carries an inner private label, a BGP
LSP label, and an outer MPLS tunnel label. The core ASBR and AGG ASBR remove
outer MPLS tunnel labels from packets before sending the packets to each other. If
a BGP LSP label in a packet carries a QoS priority different from that in the outer
MPLS tunnel label in the packet, you can configure the core ASBR or AGG ASBR to
determine whether the BGP LSP label inherits the QoS priority carried in the outer
MPLS tunnel label to be removed.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp as-number
The BGP view is displayed.
Step 3 (Optional) Run ipv4-family unicast
The BGP-IPv4 unicast address family view is displayed.
Step 4 Run peer { group-name | ipv4-address } exp-mode { pipe | uniform }
The mode in which a BGP label inherits the QoS priority in the outer tunnel label
is specified.
You can configure either of the following parameters:
● uniform: The BGP label inherits the QoS priority carried in the outer MPLS
tunnel label.
● pipe: The QoS priority carried in the BGP label does not change, and the BGP
label does not inherit the QoS priority carried in the outer MPLS tunnel label.
The default QoS priority inheriting mode varies according to the outer MPLS
tunnel type:
● LDP: By default, the BGP label inherits the QoS priority carried in the outer
MPLS tunnel label.
● TE: By default, the BGP label does not inherit the QoS priority carried in the
outer MPLS tunnel label.
----End
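For example, to keep the QoS priority of the BGP label unchanged (pipe mode) for a specific peer; the AS number and peer address are assumptions.

```
[~ASBR] bgp 100
[*ASBR-bgp] ipv4-family unicast
[*ASBR-bgp-af-ipv4] peer 10.2.2.2 exp-mode pipe
[*ASBR-bgp-af-ipv4] commit
```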
Context
On an inter-AS seamless MPLS network that has protection switching enabled, if a
link or node fails, traffic switches to a backup path, which implements
uninterrupted traffic transmission.
(Table columns: Tunnel Type, Protected Object, Nodes to Be Configured, Detection
Method, Protection Function)
Procedure
● Configure BFD for interface.
a. Run system-view
The system view is displayed.
b. Run bfd session-name bind peer-ip peer-ip [ vpn-instance vpn-name ]
interface interface-type interface-number [ source-ip source-ip ]
A BFD session for IPv4 is bound to an interface.
c. Run discriminator local discr-value
The local discriminator of the BFD session is created.
d. Run discriminator remote discr-value
The remote discriminator of the BFD session is configured.
NOTE
The local and remote discriminators on the two ends of a BFD session must be
correctly associated. That is, the local discriminator of the local device must be
the same as the remote discriminator of the remote device, and the remote
discriminator of the local device must be the same as the local discriminator of
the remote device. If the association is incorrect, a BFD session cannot be set up.
e. Run commit
The configuration is committed.
● Configure TE Auto FRR.
a. Run system-view
The system view is displayed.
NOTE
Physical links of a bypass tunnel cannot overlap protected physical links of the
primary tunnel.
g. (Optional) Run mpls te bandwidth ct0 bandwidth
NOTE
● If the mpls te auto-frr default command is run, the interface Auto FRR
capability status is the same as the global Auto FRR capability status.
g. Run mpls te fast-reroute [ bandwidth ]
The TE FRR function is enabled.
The bandwidth parameter can be configured to enable FRR bandwidth
protection for the primary tunnel.
h. (Optional) Run mpls te bypass-attributes bandwidth bandwidth
[ priority setup-priority [ hold-priority ] ]
Attributes for the Auto FRR bypass tunnel are set.
NOTE
● These attributes for the Auto FRR bypass tunnel can be set only after the
mpls te fast-reroute bandwidth command is run for the primary tunnel.
● The Auto FRR bypass tunnel bandwidth cannot exceed the primary tunnel
bandwidth.
● If no attributes are configured for an Auto FRR bypass tunnel, the Auto FRR
bypass tunnel by default uses the same bandwidth as that of the primary
tunnel.
● The setup priority of the bypass tunnel cannot be higher than the holding
priority. Each priority of the bypass tunnel cannot be higher than that of the
primary tunnel.
● If the primary tunnel bandwidth is changed or FRR is disabled, the bypass
tunnel attributes are automatically deleted.
● On one TE tunnel interface, the bypass tunnel bandwidth and the multi-CT
are mutually exclusive.
i. Run commit
The configuration is committed.
● Configure static BFD for CR-LSP.
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally on the local node, and the BFD view is displayed.
c. Run quit
Return to the system view.
d. Run bfd session-name bind mpls-te interface tunnel interface-number
te-lsp [ backup ]
The BFD session is bound to the primary or backup CR-LSP of the
specified tunnel.
e. Run discriminator local discr-value
The local discriminator of the BFD session is configured.
f. Run discriminator remote discr-value
The remote discriminator of the BFD session is configured.
NOTE
The local discriminator of the local device must be the same as the remote
discriminator of the remote device, and the remote discriminator of the local
device must be the same as the local discriminator of the remote device. A
discriminator inconsistency prevents the BFD session from being established.
g. Run process-pst
BFD is enabled to modify the port status table or link status table.
If a BFD session that is allowed to modify the port status table or link
status table is configured on a trunk or VLAN member interface, you must
configure a WTR time for the BFD session that monitors the trunk or VLAN
interface. This prevents the BFD session on the trunk or VLAN interface
from flapping when a member interface joins or leaves the interface.
h. (Optional) Run min-tx-interval tx-interval
The minimum interval at which BFD packets are sent is configured.
i. (Optional) Run min-rx-interval rx-interval
The local minimum interval at which BFD packets are received is
configured.
j. (Optional) Run detect-multiplier multiplier
The local BFD detection multiplier is configured.
k. Run commit
The configuration is committed.
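The static BFD for CR-LSP steps can be sketched as follows; the session name (csg2masg), tunnel number, discriminator values, and intervals are assumptions, and the discriminators must mirror each other on the two ends.

```
[~CSG] bfd
[*CSG-bfd] quit
[*CSG] bfd csg2masg bind mpls-te interface Tunnel1 te-lsp
[*CSG-bfd-session-csg2masg] discriminator local 100
[*CSG-bfd-session-csg2masg] discriminator remote 200
[*CSG-bfd-session-csg2masg] process-pst
[*CSG-bfd-session-csg2masg] min-tx-interval 100
[*CSG-bfd-session-csg2masg] min-rx-interval 100
[*CSG-bfd-session-csg2masg] commit
```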
● Configure dynamic BFD for CR-LSP.
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally on the local node, and the BFD view is displayed.
c. Run interface tunnel interface-number
The tunnel interface view is displayed.
d. Run mpls te bfd enable
The capability of dynamically creating BFD sessions is enabled on the TE
tunnel.
The command configured in the tunnel interface view takes effect only
on the current tunnel interface.
e. Run commit
The configuration is committed.
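A minimal sketch of dynamic BFD for CR-LSP, assuming the tunnel interface is Tunnel1.

```
[~CSG] bfd
[*CSG-bfd] quit
[*CSG] interface Tunnel1
[*CSG-Tunnel1] mpls te bfd enable
[*CSG-Tunnel1] commit
```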
● Configure CR-LSP hot standby.
a. Run system-view
The system view is displayed.
b. Run interface tunnel tunnel-number
The MPLS TE tunnel interface view is displayed.
c. Run mpls te backup hot-standby [ mode { revertive [ wtr interval ] |
non-revertive } | overlap-path | wtr [ interval ] | dynamic-bandwidth ]
CR-LSP hot standby is configured.
Select the following parameters as needed to enable sub-functions:
d. Run commit
The configuration is committed.
● Configure static BFD for LDP LSPs.
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally on the local node, and the BFD view is displayed.
c. Run quit
Return to the system view.
d. Run bfd session-name bind ldp-lsp peer-ip peer-ip nexthop nexthop-
address [ interface interface-type interface-number ]
The BFD session is bound to an LDP LSP.
e. Run discriminator local discr-value
The local discriminator of the BFD session is configured.
f. Run discriminator remote discr-value
The remote discriminator of the BFD session is configured.
NOTE
The local discriminator of the local device must be the same as the remote
discriminator of the remote device, and the remote discriminator of the local
device must be the same as the local discriminator of the remote device. A
discriminator inconsistency prevents the BFD session from being established.
g. Run process-pst
BFD is enabled to modify the port status table or link status table.
If a BFD session that is allowed to modify the port status table or link
status table is configured on a trunk or VLAN member interface, you must
configure a WTR time for the BFD session that monitors the trunk or VLAN
interface. This prevents the BFD session on the trunk or VLAN interface
from flapping when a member interface joins or leaves the interface.
h. (Optional) Run min-tx-interval tx-interval
The minimum interval at which BFD packets are sent is configured.
i. (Optional) Run min-rx-interval rx-interval
The local minimum interval at which BFD packets are received is
configured.
j. (Optional) Run detect-multiplier multiplier
The local BFD detection multiplier is configured.
k. Run commit
The configuration is committed.
● Configure dynamic BFD for LDP LSPs.
Perform the following steps on the ingress:
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally.
c. Run quit
Return to the system view.
d. Run mpls
The MPLS view is displayed.
e. Run mpls bfd enable
The capability of dynamically establishing a BFD session is configured on
the ingress.
f. Run mpls bfd-trigger { host | fec-list list-name }
A policy for establishing an LDP BFD session is configured.
g. Run commit
The configuration is committed.
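The ingress-side steps can be sketched as follows, using the host trigger policy so that BFD sessions are established for host-address FECs; the choice of the host policy is an assumption.

```
[~CSG] bfd
[*CSG-bfd] quit
[*CSG] mpls
[*CSG-mpls] mpls bfd enable
[*CSG-mpls] mpls bfd-trigger host
[*CSG-mpls] commit
```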
● Configure BGP LSP FRR.
NOTE
In a seamless MPLS scenario, BGP LSP FRR must be configured on both the ingress
and a transit node.
In a seamless MPLS scenario, before you configure BGP LSP FRR, run the ingress-lsp
trigger route-policy command on a transit node to filter the routes for which the
node functions as the ingress, and then run the auto-frr command on the transit
node so that BGP LSP FRR takes effect.
Perform the following steps on the ingress:
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run ipv4-family unicast
The BGP-IPv4 unicast address family view is displayed.
d. Run auto-frr
BGP Auto FRR for unicast routes is enabled.
e. Run bestroute nexthop-resolved tunnel [ inherit-ip-cost ]
Labeled BGP IPv4 unicast routes can participate in route selection only
when their next hops recurse to tunnels.
f. Run ingress-lsp protect-mode bgp-frr
The protection mode of the ingress LSP is set to BGP FRR.
NOTE
Perform this step on each CSG and MASG to enable the protection switching
function for the whole BGP LSP.
g. (Optional) Run route-select delay delay-value
A delay for selecting a route to the intermediate device on the primary
path is configured. After the primary path recovers, an appropriate delay
ensures that traffic switches back to the primary path after the
intermediate device completes refreshing forwarding entries.
h. Run commit
The configuration is committed.
Perform the following steps on the transit node:
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run ipv4-family unicast
The BGP-IPv4 unicast address family view is displayed.
d. Run auto-frr
BGP Auto FRR for unicast routes is enabled.
e. Run bestroute nexthop-resolved tunnel [ inherit-ip-cost ]
Labeled BGP IPv4 unicast routes can participate in route selection only
when their next hops recurse to tunnels.
f. (Optional) Run route-select delay delay-value
A delay for selecting a route to the intermediate device on the primary
path is configured. After the primary path recovers, an appropriate delay
ensures that traffic switches back to the primary path after the
intermediate device completes refreshing forwarding entries.
g. Run commit
The configuration is committed.
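The transit-node steps can be sketched as follows; the AS number and delay value are assumptions for illustration.

```
[~AGG] bgp 100
[*AGG-bgp] ipv4-family unicast
[*AGG-bgp-af-ipv4] auto-frr
[*AGG-bgp-af-ipv4] bestroute nexthop-resolved tunnel
[*AGG-bgp-af-ipv4] route-select delay 100
[*AGG-bgp-af-ipv4] commit
```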
● Configure BFD for BGP tunnel.
Perform the following steps on the ingress of an E2E BGP tunnel:
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally.
c. Run quit
Return to the system view.
d. Run mpls
The MPLS view is displayed.
e. Run mpls bgp bfd enable
The capability of dynamically establishing BGP BFD sessions is enabled on
the ingress.
f. Run mpls bgp bfd-trigger-tunnel { host | ip-prefix ip-prefix-name }
A policy for dynamically establishing a BGP BFD session is configured.
g. Run commit
The configuration is committed.
NOTE
If a fault occurs on the tunnel between the CSG and MASG, traffic
recurses to the MPLS local IFNET tunnel, not a backup tunnel or an FRR
bypass tunnel. Because the MPLS local IFNET tunnel cannot forward traffic,
traffic is interrupted. To prevent this traffic interruption, run the peer
mpls-local-ifnet disable command to disable the establishment of an
MPLS local IFNET tunnel between the CSG and MASG.
----End
Context
In seamless MPLS scenarios, when an egress MASG fails, E2E BFD for BGP tunnel
is used to instruct a CSG to perform VPN FRR switching. In this protection solution,
both BGP LSPs and BFD sessions are in great numbers, which consumes a lot of
bandwidth resources and burdens the device. To optimize the solution, the egress
protection function can be configured on the master and backup MASGs. With this
function enabled, both the master and backup MASGs assign the same private
network label value to a core ASBR. If the master MASG fails, BFD for LDP LSP or
BFD for TE can instruct a core ASBR to perform BGP FRR protection switching.
After traffic is switched to the backup MASG, the MASG removes the BGP public
network label and uses the private network label the same as that on the faulty
master MASG to search for a matching VPN instance. Traffic can then be properly
forwarded.
The egress protection function is configured on both the master and backup
MASGs.
NOTE
If the egress protection function is configured on egress MASGs between which a tunnel
exists and a route imported by BGP on one of the MASGs recurses to the tunnel, this MASG
then recurses the route to another tunnel of a different type. In this case, traffic is directed
to the other MASG, which slows down traffic switchover. As a result, the egress protection
function does not take effect. To address this problem, specify non-relay-tunnel when
running the import-route or network command to prevent the routes imported by BGP
from recursing to tunnels.
Prerequisites
Before configuring the egress protection function, complete the following tasks:
● Import host routes destined for the loopback interfaces into the BGP routing
table so that both the master and backup MASGs assign BGP labeled routes to
a core ASBR. The core ASBR then has two BGP labeled routes destined for the
same loopback interface. Configure a routing policy to enable the core ASBR
to select one route to implement BGP FRR.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
Step 3 Run ipv4-family
The VPN instance IPv4 address family view is displayed.
Step 4 Run route-distinguisher route-distinguisher
An RD is configured for the VPN instance IPv4 address family.
Step 5 Run apply-label per-instance static static-label-value
A device is enabled to assign the same static label to all routes destined for a
remote PE in a VPN instance IPv4 address family.
The same static label value must be set on both the master and backup MASGs.
NOTE
A change in the label allocation mode leads to re-advertising of IPv4 address family routes
in a VPN instance. This step causes a temporary service interruption. Exercise caution when
using this command.
----End
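A minimal sketch of the procedure on one MASG; the VPN instance name (vpna), RD (100:1), and static label value (1000) are assumptions. The same static label value must also be configured on the other MASG.

```
[~MASG1] ip vpn-instance vpna
[*MASG1-vpn-instance-vpna] ipv4-family
[*MASG1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*MASG1-vpn-instance-vpna-af-ipv4] apply-label per-instance static 1000
[*MASG1-vpn-instance-vpna-af-ipv4] commit
```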
Prerequisites
Inter-AS seamless MPLS has been configured.
Procedure
● Run the display ip routing-table command on a CSG or an MASG to check
the routes to the peer end.
● Run the display mpls lsp command to check LSP information.
● Run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m
interval | -r reply-mode | -s packet-size | -t time-out | -v ] * bgp destination-
----End
Usage Scenario
Figure 1-62 illustrates the inter-AS seamless MPLS+HVPN networking. A Cell Site
Gateway (CSG) and an Aggregation (AGG) establish an HVPN connection, and the
AGG and a Mobile Aggregate Service Gateway (MASG) establish a seamless MPLS
LSP. The AGG provides hierarchical L3VPN access services and routing
management services. Seamless MPLS+HVPN combines the advantages of both
seamless MPLS and HVPN. Seamless MPLS allows any two nodes to be
interconnected through an LSP in scenarios where the access, aggregation, and
core layers involve different domains, providing high service scalability. HVPN
enables carriers to cut down network deployment costs by deploying devices with
layer-specific capacities to meet service requirements.
Pre-configuration Tasks
Before configuring inter-AS seamless MPLS+HVPN, complete the following tasks:
If MPLS TE tunnels are used across the three layers, a tunnel policy or tunnel selector must
be configured. For configuration details, see VPN Tunnel Management Configuration.
1.1.5.5.1 Establishing an MP-EBGP Peer Relationship Between Each AGG and MASG
MP-EBGP supports BGP extended community attributes that are used to advertise
VPNv4 routes between each pair of the AGG and MASG.
Procedure
Step 1 Run system-view
NOTE
The AGG and MASG must use loopback interface addresses with 32-bit masks to establish
an MP-EBGP peer relationship so that the MP-EBGP connection can recurse to a tunnel.
----End
Procedure
● Perform the following steps on each AGG and MASG:
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run peer { ipv4-address | group-name } label-route-capability
The ability to exchange labeled IPv4 routes between devices in the local
AS is enabled.
d. Run commit
The configuration is committed.
● Perform the following steps on each AGG ASBR and core ASBR:
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The view of the interface connected to the peer ASBR is displayed.
c. Run ip address ip-address { mask | mask-length }
An IP address is assigned to the interface.
d. Run mpls
MPLS is enabled.
e. Run quit
Return to the system view.
f. Run bgp as-number
The BGP view is displayed.
g. Run peer { ipv4-address | group-name } label-route-capability [ check-
tunnel-reachable ]
The ability to exchange labeled IPv4 routes between BGP peers, including
the peer ASBR and the devices in the local AS, is enabled.
h. Run commit
The configuration is committed.
----End
Procedure
● Perform the following steps on each AGG and MASG:
a. Run system-view
The system view is displayed.
b. Run route-policy route-policy-name { permit | deny } node node
A Route-Policy node is created.
c. Run apply mpls-label
The local device is enabled to assign a label to an IPv4 route.
d. Run quit
Return to the system view.
e. Run bgp as-number
The BGP view is displayed.
f. Run peer { ipv4-address | group-name } route-policy route-policy-name
export
A routing policy for advertising routes matching Route-Policy conditions
to a BGP peer or a BGP peer group is configured.
g. Run commit
The configuration is committed.
----End
Context
Traffic statistics collection for BGP LSPs allows you to query and monitor the
traffic statistics of BGP LSPs in real time. To enable this function, run the bgp host
command.
NOTE
Traffic statistics collection for BGP LSPs takes effect only for BGP LSPs of which the FEC
mask length is 32 bits.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
MPLS is enabled globally, and the MPLS view is displayed.
Step 3 Run quit
Return to the system view.
Step 4 Run mpls traffic-statistics
MPLS traffic statistics collection is enabled globally, and the traffic statistics
collection view is displayed.
Step 5 Run bgp host [ ip-prefix ip-prefix-name ]
Traffic statistics collection is enabled for BGP LSPs.
If the ip-prefix parameter needs to be set to limit the range of BGP LSPs for which
traffic statistics collection is to be enabled, run the ip ip-prefix command to
create an IP prefix list first.
Step 6 Run commit
The configuration is committed.
----End
1.1.5.5.5 (Optional) Configuring the Mode in Which a BGP Label Inherits the QoS
Priority in an Outer Tunnel Label
When data packets are transmitted from a core ASBR to an AGG ASBR, you can
determine whether a BGP label inherits the QoS priority carried in an outer tunnel
label.
Context
In the inter-AS seamless MPLS or inter-AS seamless MPLS+HVPN networking, each
packet arriving at a core ASBR or AGG ASBR carries an inner private label, a BGP
LSP label, and an outer MPLS tunnel label. The core ASBR and AGG ASBR remove
outer MPLS tunnel labels from packets before sending the packets to each other. If
a BGP LSP label in a packet carries a QoS priority different from that in the outer
MPLS tunnel label in the packet, you can configure the core ASBR or AGG ASBR to
determine whether the BGP LSP label inherits the QoS priority carried in the outer
MPLS tunnel label to be removed.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp as-number
The BGP view is displayed.
Step 3 (Optional) Run ipv4-family unicast
The BGP-IPv4 unicast address family view is displayed.
Step 4 Run peer { group-name | ipv4-address } exp-mode { pipe | uniform }
The mode in which a BGP label inherits the QoS priority in the outer tunnel label
is specified.
You can configure either of the following parameters:
● uniform: The BGP label inherits the QoS priority carried in the outer MPLS
tunnel label.
● pipe: The QoS priority carried in the BGP label does not change, and the BGP
label does not inherit the QoS priority carried in the outer MPLS tunnel label.
The default QoS priority inheriting mode varies according to the outer MPLS
tunnel type:
● LDP: By default, the BGP label inherits the QoS priority carried in the outer
MPLS tunnel label.
● TE: By default, the BGP label does not inherit the QoS priority carried in the
outer MPLS tunnel label.
----End
Context
On an inter-AS seamless MPLS+HVPN network that has protection switching
enabled, if a link or node fails, traffic switches to a backup path, which
implements uninterrupted traffic transmission.
NOTE
If both LDP FRR and BGP Auto FRR functions are configured, only BGP Auto FRR takes
effect.
Procedure
● Configure BFD for interface.
a. Run system-view
The system view is displayed.
b. Run bfd session-name bind peer-ip peer-ip [ vpn-instance vpn-name ]
interface interface-type interface-number [ source-ip source-ip ]
A BFD session for IPv4 is bound to an interface.
c. Run discriminator local discr-value
The local discriminator of the BFD session is created.
d. Run discriminator remote discr-value
The remote discriminator of the BFD session is configured.
NOTE
The local and remote discriminators on the two ends of a BFD session must be
correctly associated. That is, the local discriminator of the local device must be
the same as the remote discriminator of the remote device, and the remote
discriminator of the local device must be the same as the local discriminator of
the remote device. If the association is incorrect, a BFD session cannot be set up.
e. Run commit
The configuration is committed.
● Configure TE FRR.
a. Run system-view
The system view is displayed.
b. Run interface tunnel tunnel-number
The tunnel interface view is displayed.
c. Run mpls te fast-reroute [ bandwidth ]
TE FRR is enabled.
d. Run commit
The configuration is committed.
● Configure TE Auto FRR.
a. Run system-view
The system view is displayed.
NOTE
Physical links of a bypass tunnel cannot overlap protected physical links of the
primary tunnel.
g. (Optional) Run mpls te bandwidth ct0 bandwidth
NOTE
● If the mpls te auto-frr default command is run, the interface Auto FRR
capability status is the same as the global Auto FRR capability status.
g. Run mpls te fast-reroute [ bandwidth ]
The TE FRR function is enabled.
The bandwidth parameter can be configured to enable FRR bandwidth
protection for the primary tunnel.
h. (Optional) Run mpls te bypass-attributes bandwidth bandwidth
[ priority setup-priority [ hold-priority ] ]
Attributes for the Auto FRR bypass tunnel are set.
NOTE
● These attributes for the Auto FRR bypass tunnel can be set only after the
mpls te fast-reroute bandwidth command is run for the primary tunnel.
● The Auto FRR bypass tunnel bandwidth cannot exceed the primary tunnel
bandwidth.
● If no attributes are configured for an Auto FRR bypass tunnel, the Auto FRR
bypass tunnel by default uses the same bandwidth as that of the primary
tunnel.
● The setup priority of the bypass tunnel cannot be higher than the holding
priority. Each priority of the bypass tunnel cannot be higher than that of the
primary tunnel.
● If the primary tunnel bandwidth is changed or FRR is disabled, the bypass
tunnel attributes are automatically deleted.
● On one TE tunnel interface, the bypass tunnel bandwidth and the multi-CT
are mutually exclusive.
i. Run commit
The configuration is committed.
● Configure static BFD for CR-LSP.
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally on the local node, and the BFD view is displayed.
c. Run quit
Return to the system view.
d. Run bfd session-name bind mpls-te interface tunnel interface-number
te-lsp [ backup ]
The BFD session is bound to the primary or backup CR-LSP of the
specified tunnel.
e. Run discriminator local discr-value
The local discriminator of the BFD session is configured.
f. Run discriminator remote discr-value
The remote discriminator of the BFD session is configured.
NOTE
The local discriminator of the local device must be the same as the remote
discriminator of the remote device, and the remote discriminator of the local
device must be the same as the local discriminator of the remote device. A
discriminator inconsistency prevents the BFD session from being established.
g. Run process-pst
BFD is enabled to modify the port status table or link status table.
If a BFD session that is allowed to modify the port status table or link
status table is configured on a trunk or VLAN member interface, you must
configure a WTR time for the BFD session that monitors the trunk or VLAN
interface. This prevents the BFD session on the trunk or VLAN interface
from flapping when a member interface joins or leaves the interface.
h. (Optional) Run min-tx-interval tx-interval
The minimum interval at which BFD packets are sent is configured.
i. (Optional) Run min-rx-interval rx-interval
The local minimum interval at which BFD packets are received is
configured.
j. (Optional) Run detect-multiplier multiplier
The local BFD detection multiplier is configured.
k. Run commit
The configuration is committed.
● Configure dynamic BFD for CR-LSP.
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally on the local node, and the BFD view is displayed.
c. Run interface tunnel interface-number
The tunnel interface view is displayed.
d. Run mpls te bfd enable
The capability of dynamically creating BFD sessions is enabled on the TE
tunnel.
The command configured in the tunnel interface view takes effect only
on the current tunnel interface.
e. Run commit
The configuration is committed.
● Configure CR-LSP hot standby.
a. Run system-view
The system view is displayed.
b. Run interface tunnel tunnel-number
The MPLS TE tunnel interface view is displayed.
c. Run mpls te backup hot-standby [ mode { revertive [ wtr interval ] |
non-revertive } | overlap-path | wtr [ interval ] | dynamic-bandwidth ]
CR-LSP hot standby is configured.
Select the following parameters as needed to enable sub-functions:
d. Run commit
The configuration is committed.
● Configure static BFD for LDP LSPs.
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally on the local node, and the BFD view is displayed.
c. Run quit
Return to the system view.
d. Run bfd session-name bind ldp-lsp peer-ip peer-ip nexthop nexthop-
address [ interface interface-type interface-number ]
The BFD session is bound to an LDP LSP.
e. Run discriminator local discr-value
The local discriminator of the BFD session is configured.
f. Run discriminator remote discr-value
The remote discriminator of the BFD session is configured.
NOTE
The local discriminator of the local device must be the same as the remote
discriminator of the remote device, and the remote discriminator of the local
device must be the same as the local discriminator of the remote device. A
discriminator inconsistency prevents the BFD session from being established.
g. Run process-pst
BFD is enabled to modify the port status table or link status table.
If a BFD session that is allowed to modify the port status table or link
status table is configured on a trunk or VLAN member interface, you must
configure a WTR time for the BFD session that monitors the trunk or VLAN
interface. This prevents the BFD session on the trunk or VLAN interface
from flapping when a member interface joins or leaves the interface.
h. (Optional) Run min-tx-interval tx-interval
The minimum interval at which BFD packets are sent is configured.
i. (Optional) Run min-rx-interval rx-interval
The local minimum interval at which BFD packets are received is
configured.
j. (Optional) Run detect-multiplier multiplier
The local BFD detection multiplier is configured.
k. Run commit
The configuration is committed.
● Configure dynamic BFD for LDP LSPs.
Perform the following steps on the ingress:
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally.
c. Run quit
Return to the system view.
d. Run mpls
The MPLS view is displayed.
e. Run mpls bfd enable
The capability of dynamically establishing a BFD session is configured on
the ingress.
f. Run mpls bfd-trigger { host | fec-list list-name }
A policy for establishing an LDP BFD session is configured.
g. Run commit
The configuration is committed.
● Configure BGP LSP FRR.
NOTE
In a seamless MPLS scenario, BGP LSP FRR must be configured on both the ingress
and a transit node.
In a seamless MPLS scenario, before you configure BGP LSP FRR, run the ingress-lsp
trigger route-policy command on a transit node to filter the routes for which the
node functions as the ingress, and then run the auto-frr command on the transit
node so that BGP LSP FRR takes effect.
Perform the following steps on the ingress:
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run ipv4-family unicast
The BGP-IPv4 unicast address family view is displayed.
d. Run auto-frr
BGP Auto FRR for unicast routes is enabled.
e. Run bestroute nexthop-resolved tunnel [ inherit-ip-cost ]
Labeled BGP IPv4 unicast routes can participate in route selection only
when their next hops recurse to tunnels.
f. Run ingress-lsp protect-mode bgp-frr
The protection mode of the ingress LSP is set to BGP FRR.
NOTE
Perform this step on each CSG and MASG to enable the protection switching
function for the whole BGP LSP.
g. (Optional) Run route-select delay delay-value
A delay for selecting a route to the intermediate device on the primary
path is configured. After the primary path recovers, an appropriate delay
ensures that traffic switches back to the primary path after the
intermediate device completes refreshing forwarding entries.
h. Run commit
The configuration is committed.
Perform the following steps on the transit node:
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run ipv4-family unicast
The BGP-IPv4 unicast address family view is displayed.
d. Run auto-frr
BGP Auto FRR for unicast routes is enabled.
e. Run bestroute nexthop-resolved tunnel [ inherit-ip-cost ]
Labeled BGP IPv4 unicast routes can participate in route selection only
when their next hops recurse to tunnels.
f. (Optional) Run route-select delay delay-value
A delay for selecting a route to the intermediate device on the primary
path is configured. After the primary path recovers, an appropriate delay
ensures that traffic switches back to the primary path after the
intermediate device completes refreshing forwarding entries.
g. Run commit
The configuration is committed.
----End
Context
In seamless MPLS scenarios, when an egress MASG fails, E2E BFD for BGP tunnel
is used to instruct a CSG to perform VPN FRR switching. In this protection solution,
both BGP LSPs and BFD sessions are in great numbers, which consumes a lot of
bandwidth resources and burdens the device. To optimize the solution, the egress
protection function can be configured on the master and backup MASGs. With this
function enabled, both the master and backup MASGs assign the same private
network label value to a core ASBR. If the master MASG fails, BFD for LDP LSP or
BFD for TE can instruct a core ASBR to perform BGP FRR protection switching.
After traffic is switched to the backup MASG, the MASG removes the BGP public
network label and uses the private network label the same as that on the faulty
master MASG to search for a matching VPN instance. Traffic can then be properly
forwarded.
The egress protection function is configured on both the master and backup
MASGs.
NOTE
If the egress protection function is configured on egress MASGs between which a tunnel
exists and a route imported by BGP on one of the MASGs recurses to the tunnel, this MASG
then recurses the route to another tunnel of a different type. In this case, traffic is directed
to the other MASG, which slows down traffic switchover. As a result, the egress protection
function does not take effect. To address this problem, specify non-relay-tunnel when
running the import-route or network command to prevent the routes imported by BGP
from recursing to tunnels.
Prerequisites
Before configuring the egress protection function, complete the following tasks:
● Configure a loopback interface on each of the master and backup MASGs.
The IP address of each loopback interface on an MASG is used to establish a
remote BGP peer relationship with a remote device.
● Import host routes destined for the loopback interfaces into the BGP routing
table so that both the master and backup MASGs assign BGP labeled routes to
a core ASBR. The core ASBR then has two BGP labeled routes destined for the
same loopback interface. Configure a routing policy to enable the core ASBR
to select one route to implement BGP FRR.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
Step 3 Run ipv4-family
The VPN instance IPv4 address family view is displayed.
Step 4 Run route-distinguisher route-distinguisher
An RD is configured for the VPN instance IPv4 address family.
Step 5 Run apply-label per-instance static static-label-value
A device is enabled to assign the same static label to all routes destined for a
remote PE in a VPN instance IPv4 address family.
The same static label value must be set on both the master and backup MASGs.
NOTE
A change in the label allocation mode leads to re-advertising of IPv4 address family routes
in a VPN instance. This step causes a temporary service interruption. Exercise caution when
using this command.
----End
Prerequisites
Inter-AS seamless MPLS+HVPN has been configured.
Procedure
● Run the display bgp vpnv4 all peer command on an AGG or MASG to check
BGP peer relationship information.
● Run the display bgp vpnv4 all routing-table command to check the VPNv4
routing table on an AGG or MASG.
● Run the display bgp routing-table label command on an AGG, AGG ASBR,
core ASBR, or MASG to check label information of IPv4 routes.
● Run the display ip routing-table vpn-instance vpn-instance-name command
to check the VRF table on an AGG or MASG.
● Run the display mpls lsp protocol bgp traffic-statistics inbound command
to check the incoming traffic statistics of BGP LSPs.
● Run the display mpls lsp protocol bgp traffic-statistics outbound [ ipv4-
address mask-length ] verbose command to check the outgoing traffic
statistics of BGP LSPs.
● Run the display mpls lsp protocol bgp traffic-statistics outbound
aggregated command to check the traffic statistics of BGP LSPs aggregated
by FEC.
----End
Usage Scenario
On an IP/MPLS network transmitting VPN services, PEs establish multi-segment
MPLS tunnels between each other, and VPN services are carried across multiple
PEs. In this case, VPN service provisioning on PEs becomes complex, and VPN
service scalability decreases. Because PEs establish BGP peer relationships, a
routing policy can be used to assign MPLS labels to BGP routes so that an
end-to-end (E2E) BGP tunnel can be established. The BGP tunnel consists of a
primary BGP LSP and a backup BGP LSP. VPN services can travel along the E2E
BGP tunnel, which simplifies service provisioning and improves VPN service
scalability.
To rapidly detect faults in an E2E BGP tunnel, BFD for BGP tunnel is used. BFD for
BGP tunnel establishes a dynamic BFD session, also called a BGP BFD session,
which is bound to both the primary and backup BGP LSPs. If both BGP LSPs fail,
the BGP BFD session detects the faults and triggers VPN FRR switching.
Pre-configuration Tasks
Before configuring dynamic BFD to monitor a BGP tunnel, configure basic MPLS
functions.
Procedure
● Perform the following steps on the ingress of an E2E BGP tunnel:
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally.
c. Run quit
Return to the system view.
d. Run mpls
The MPLS view is displayed.
e. Run mpls bgp bfd enable
The ability to dynamically establish BGP BFD sessions is enabled on the
ingress.
The mpls bgp bfd enable command does not create a BFD session. A
BGP BFD session can be dynamically established only after a policy
for dynamically establishing BGP BFD sessions is configured.
f. Run commit
The configuration is committed.
● Perform the following steps on the egress of an E2E BGP tunnel:
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally, and the BFD view is displayed.
c. Run mpls-passive
The capability of passively creating a BFD session is configured on the
egress.
The mpls-passive command does not create a BFD session. The egress
has to receive an LSP ping request carrying a BFD TLV before creating a
BFD session with the ingress.
d. Run commit
The configuration is committed.
----End
Context
The policies for dynamically establishing BGP BFD sessions are as follows:
● Host address-based policy: used when all host addresses are available to
trigger the creation of BGP BFD sessions.
● IP address prefix list-based policy: used when only some host addresses can
be used to establish BFD sessions.
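For illustration only, the choice between the two trigger policies can be modeled with a short Python sketch. The helper names and the simplified prefix-list semantics (first match wins, implicit deny) are assumptions for this sketch, not device behavior: a host-address-based policy triggers a session for every host FEC, while a prefix-list-based policy triggers sessions only for FECs that the list permits.

```python
import ipaddress

def prefix_list_permits(entries, host):
    """Evaluate a simplified IPv4 prefix list against a host FEC.
    entries: ordered list of (action, network) tuples; the first
    matching entry decides, and an unmatched address is denied."""
    addr = ipaddress.ip_address(host)
    for action, network in entries:
        if addr in ipaddress.ip_network(network):
            return action == "permit"
    return False  # implicit deny at the end of the list

def should_trigger_bfd(policy, fec, prefix_list=None):
    """Decide whether a BGP BFD session is triggered for a FEC.
    policy "host" models the host-address-based policy; policy
    "ip-prefix" models the IP address prefix list-based policy."""
    if policy == "host":
        return True  # every host FEC triggers a session
    if policy == "ip-prefix":
        return prefix_list_permits(prefix_list, fec)
    return False

# With this list, only FECs under 4.4.4.0/24 trigger sessions.
plist = [("permit", "4.4.4.0/24")]
print(should_trigger_bfd("host", "5.5.5.5"))              # True
print(should_trigger_bfd("ip-prefix", "4.4.4.4", plist))  # True
print(should_trigger_bfd("ip-prefix", "5.5.5.5", plist))  # False
```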
Perform the following steps on the ingress of an E2E BGP tunnel:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 (Optional) Run ip ip-prefix ip-prefix-name [ index index-number ] { permit |
deny } ipv4-address mask-length [ match-network ] [ greater-equal greater-
equal-value ] [ less-equal less-equal-value ]
An IPv4 address prefix list is configured, and list entries are configured.
You can perform this step when you want to use an IP address prefix list to
dynamically establish BGP BFD sessions. For configuration details about how to
configure an IP address prefix list, see Configuring an IPv4 Address Prefix List.
Step 3 Run mpls
The MPLS view is displayed.
Step 4 Run mpls bgp bfd-trigger-tunnel { host | ip-prefix ip-prefix-name }
A policy for dynamically establishing a BGP BFD session is configured.
After a policy is configured, the device starts to dynamically establish a BFD
session.
Step 5 Run commit
The configuration is committed.
----End
Context
Perform the following steps on the ingress of an E2E BGP tunnel:
Procedure
Step 1 Run system-view
● Effective interval at which BFD packets are sent from the local device = Max
(min-tx-interval configured on the local device, min-rx-interval configured
on the remote device)
● Effective interval at which BFD packets are received by the local device = Max
(min-tx-interval configured on the remote device, min-rx-interval
configured on the local device)
● Detection interval of the local device = Effective interval at which BFD packets
are received by the local device x BFD detection multiplier configured on the
remote device
The egress uses a fixed minimum interval at which BGP BFD packets are sent, a
fixed minimum interval at which BGP BFD packets are received, and a fixed
detection multiplier of 3. Therefore, you can change the time parameters only
on the ingress; the effective BFD time parameters are then updated on both the
ingress and egress.
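As a minimal arithmetic sketch of the negotiation rules above (Python; the function name is illustrative, intervals in milliseconds):

```python
def bfd_effective_timers(local_tx, local_rx,
                         remote_tx, remote_rx, remote_mult):
    """Apply the three formulas quoted above.
    Returns (effective local send interval,
             effective local receive interval,
             local detection interval)."""
    eff_tx = max(local_tx, remote_rx)  # local send interval
    eff_rx = max(remote_tx, local_rx)  # local receive interval
    detect = eff_rx * remote_mult      # local detection interval
    return eff_tx, eff_rx, detect

# Example: ingress configured with 100 ms send/receive intervals,
# egress fixed at 10 ms intervals with a detection multiplier of 3.
print(bfd_effective_timers(100, 100, 10, 10, 3))  # (100, 100, 300)
```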
----End
Prerequisites
The dynamic BFD for BGP tunnel function has been configured.
Procedure
● Run the display mpls bfd session protocol bgp [ fec fec-address
[ verbose ] ] command to check information about a BFD session with the
protocol type of BGP on the ingress on an E2E BGP tunnel.
----End
Context
Run the following commands in any view of a BGP LSP endpoint node to check
the connectivity and reachability of a BGP LSP.
Procedure
● Run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m
interval | -r reply-mode | -s packet-size | -t time-out | -v ] * bgp destination-
iphost mask-length [ ip-address ] command to check BGP LSP connectivity.
----End
Context
By default, the reset mpls traffic-statistics bgp command clears the traffic
statistics of all BGP LSPs. If an IPv4 address and a mask length are specified, the
command clears only the traffic statistics of the BGP LSPs whose FEC matches the
specified IPv4 address and mask length.
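A minimal sketch of this FEC-matched clearing behavior (Python; the counter structure is hypothetical):

```python
def clear_bgp_lsp_stats(stats, fec=None):
    """Model 'reset mpls traffic-statistics bgp [ ipv4-address
    mask-length ]': zero all counters, or only the counter whose FEC
    matches the given "address/mask-length" key.
    stats maps FEC strings to traffic byte counts."""
    for key in stats:
        if fec is None or key == fec:
            stats[key] = 0
    return stats

counters = {"4.4.4.4/32": 1200, "5.5.5.5/32": 800}
clear_bgp_lsp_stats(counters, "4.4.4.4/32")  # clear one FEC only
print(counters)  # {'4.4.4.4/32': 0, '5.5.5.5/32': 800}
```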
Procedure
● Run the reset mpls traffic-statistics bgp [ ipv4-address mask-length ]
command to clear the traffic statistics of BGP LSPs.
----End
Networking Requirements
In Figure 1-63, the access, aggregation, and core layers belong to the same AS.
Base stations need to communicate with an MME or SGW through a VPN. To meet
this requirement, intra-AS seamless MPLS can be configured.
Addresses of interfaces are planned for CSGs, AGGs, core ABRs, and MASGs shown
in Figure 1-64.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure IGP protocols at the access, aggregation, and core layers to
implement network connectivity at each layer.
2. Configure MPLS and MPLS LDP and establish MPLS LSPs on devices.
3. Establish IBGP peer relationships at each layer and enable devices to
exchange labeled routes.
4. Configure each AGG and core ABR as RRs to help a CSG and MASG obtain the
route destined for each other's loopback interface.
5. Configure a routing policy to control label distribution for a BGP LSP to be
established on each device. The egress of the BGP LSP to be established needs
to assign an MPLS label to the route advertised to an upstream node. If a
transit node receives a labeled IPv4 route from a downstream node, the transit
node must re-assign an MPLS label to the route before advertising it upstream.
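The label distribution rule in point 5 can be illustrated with a simplified Python sketch (the data structures and label values are hypothetical): the egress originates a labeled route, and every node that receives the labeled route assigns a new local label before re-advertising the route upstream, which builds the BGP LSP hop by hop.

```python
import itertools

label_pool = itertools.count(32828)  # hypothetical local label values

def advertise(path, fec):
    """Propagate a labeled BGP route for `fec` from the egress (first
    node in `path`) toward the ingress (last node). Returns per-node
    label bindings: "in" is the label the node assigned and advertised
    upstream, "out" is the label received from downstream (None on the
    egress, which originates the labeled route)."""
    bindings = {}
    downstream_label = None
    for node in path:
        local = next(label_pool)  # re-assign a local label at each hop
        bindings[node] = {"in": local, "out": downstream_label}
        downstream_label = local  # advertised to the next node upstream
    return bindings

lfib = advertise(["MASG", "core ABR", "AGG", "CSG"], "4.4.4.4/32")
print(lfib["MASG"]["out"])  # None: the egress assigns but never swaps
# Each upstream node's outgoing label is its downstream node's incoming label.
print(lfib["AGG"]["out"] == lfib["core ABR"]["in"])  # True
```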
Data Preparation
To complete the configuration, you need the following data:
● OSPF process ID (1) at the access layer, IS-IS process ID (1) at the aggregation
layer, and OSPF process ID (2) at the core layer
● MPLS LSR IDs: 1.1.1.1 for the CSG, 2.2.2.2 for the AGG, 3.3.3.3 for the core
ABR, and 4.4.4.4 for the MASG
● Name of a routing policy (policy1)
Procedure
Step 1 Assign an IP address to each interface.
Configure interface IP addresses and masks; configure a loopback interface
address as an LSR ID on every device shown in Figure 1-64; configure OSPF and
IS-IS to advertise the route to the network segment of each interface and a host
route to each loopback interface address (LSR ID). For configuration details, see
Configuration Files in this section.
Step 2 Enable MPLS and LDP globally on each device.
# Configure the CSG.
[~CSG] mpls lsr-id 1.1.1.1
[*CSG] mpls
[*CSG-mpls] quit
[*CSG] mpls ldp
[*CSG-mpls-ldp] quit
[*CSG] interface GigabitEthernet 1/0/0
[*CSG-GigabitEthernet1/0/0] mpls
[*CSG-GigabitEthernet1/0/0] mpls ldp
[*CSG-GigabitEthernet1/0/0] quit
[*CSG] commit
Step 3 Establish IBGP peer relationships at each layer and enable devices to exchange
labeled routes.
# Configure the CSG.
[~CSG] bgp 100
[*CSG-bgp] peer 2.2.2.2 as-number 100
[*CSG-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*CSG-bgp] peer 2.2.2.2 label-route-capability
[*CSG-bgp] network 1.1.1.1 32
[*CSG-bgp] quit
[*CSG] commit
Step 4 Configure each AGG and core ABR as RRs to help a CSG and MASG obtain the
route destined for each other's loopback interface.
# Configure the AGG.
[~AGG] bgp 100
[~AGG-bgp] peer 1.1.1.1 reflect-client
[*AGG-bgp] peer 1.1.1.1 next-hop-local
[*AGG-bgp] peer 3.3.3.3 reflect-client
[*AGG-bgp] peer 3.3.3.3 next-hop-local
[*AGG-bgp] quit
[*AGG] commit
Repeat this step for the MASG. For configuration details, see Configuration Files in
this section.
# Configure a routing policy for advertising routes matching Route-Policy
conditions to the AGG's BGP peer.
[~AGG] route-policy policy1 permit node 1
[*AGG-route-policy] if-match mpls-label
[*AGG-route-policy] apply mpls-label
[*AGG-route-policy] quit
[*AGG] bgp 100
[*AGG-bgp] peer 1.1.1.1 route-policy policy1 export
Repeat this step for the core ABR. For configuration details, see Configuration Files
in this section.
Step 6 Verify the configuration.
After completing the configuration, run the display ip routing-table command on
a CSG or MASG to view information about a route to the BGP peer's loopback
interface.
The following example uses the command output on the CSG.
<CSG> display ip routing-table
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : _public_
Destinations : 10 Routes : 10
Run the display mpls lsp command on the CSG or MASG to view LSP information.
The following example uses the command output on the CSG.
<CSG> display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.1/32 3/NULL -/-
2.2.2.2/32 NULL/3 -/GE1/0/0
2.2.2.2/32 32828/3 -/GE1/0/0
-------------------------------------------------------------------------------
LSP Information: BGP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.1/32 32829/NULL -/-
4.4.4.4/32 NULL/32831 -/-
----End
Configuration Files
● CSG configuration file
#
sysname CSG
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
network 1.1.1.1 255.255.255.255
peer 2.2.2.2 enable
peer 2.2.2.2 route-policy policy1 export
peer 2.2.2.2 label-route-capability
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
route-policy policy1 permit node 1
apply mpls-label
#
return
● AGG configuration file
#
sysname AGG
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0000.0010.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
peer 1.1.1.1 enable
peer 1.1.1.1 route-policy policy1 export
Networking Requirements
In Figure 1-65, the access and aggregation layers belong to AS 100, and the core
layer belongs to AS 200. Base stations and an MME or SGW can communicate
with each other through a VPN. To meet this requirement, inter-AS seamless MPLS
can be configured.
Addresses of interfaces are planned for the CSGs, AGGs, AGG ASBRs, core ASBRs,
and MASGs shown in Figure 1-66.
Configuration Roadmap
The configuration roadmap is as follows:
5. Configure each AGG as an RR to help the CSG and MASG obtain the route
destined for each other's loopback interface.
6. Configure a routing policy to control label distribution for a BGP LSP to be
established on each device. The egress of the BGP LSP to be established needs
to assign an MPLS label to the route advertised to an upstream node. If a
transit node receives a labeled IPv4 route from a downstream node, the transit
node must re-assign an MPLS label to the route before advertising it upstream.
Data Preparation
To complete the configuration, you need the following data:
● OSPF process ID (1) at the access layer, IS-IS process ID (1) at the aggregation
layer, and OSPF process ID (2) at the core layer
● MPLS LSR IDs: 1.1.1.1 for the CSG, 2.2.2.2 for the AGG, 3.3.3.3 for the AGG
ASBR, 4.4.4.4 for the core ASBR, and 5.5.5.5 for the MASG.
● Name of a routing policy (policy1)
Procedure
Step 1 Assign an IP address to each interface.
Step 3 Establish IBGP peer relationships at each layer and enable devices to exchange
labeled routes.
# Configure the CSG.
[~CSG] bgp 100
[*CSG-bgp] peer 2.2.2.2 as-number 100
[*CSG-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*CSG-bgp] peer 2.2.2.2 label-route-capability
[*CSG-bgp] network 1.1.1.1 32
[*CSG-bgp] quit
[*CSG] commit
Step 4 Establish an EBGP peer relationship for each AGG ASBR-and-core ASBR pair and
enable these devices to exchange labeled routes.
# Configure the AGG ASBR.
[~AGG ASBR] interface GigabitEthernet 2/0/0
[~AGG ASBR-GigabitEthernet2/0/0] ip address 10.3.1.1 24
[*AGG ASBR-GigabitEthernet2/0/0] mpls
[*AGG ASBR-GigabitEthernet2/0/0] quit
[*AGG ASBR] bgp 100
[*AGG ASBR-bgp] peer 10.3.1.2 as-number 200
[*AGG ASBR-bgp] peer 10.3.1.2 label-route-capability check-tunnel-reachable
[*AGG ASBR-bgp] quit
[*AGG ASBR] commit
Step 5 Configure each AGG as an RR to help the CSG and MASG obtain the route
destined for each other's loopback interface.
# Configure the AGG.
[~AGG] bgp 100
[~AGG-bgp] peer 1.1.1.1 reflect-client
[*AGG-bgp] peer 1.1.1.1 next-hop-local
[*AGG-bgp] peer 3.3.3.3 reflect-client
[*AGG-bgp] peer 3.3.3.3 next-hop-local
[*AGG-bgp] quit
[*AGG] commit
Repeat this step for the MASG. For configuration details, see Configuration Files in
this section.
# Configure a routing policy for advertising routes matching Route-Policy
conditions to the AGG's BGP peer.
[~AGG] route-policy policy1 permit node 1
[*AGG-route-policy] if-match mpls-label
[*AGG-route-policy] apply mpls-label
[*AGG-route-policy] quit
[*AGG] bgp 100
[*AGG-bgp] peer 1.1.1.1 route-policy policy1 export
[*AGG-bgp] peer 3.3.3.3 route-policy policy1 export
[*AGG-bgp] quit
[*AGG] commit
Repeat this step for the AGG ASBR and core ASBR. For configuration details, see
Configuration Files in this section.
Step 7 Verify the configuration.
After completing the configuration, run the display ip routing-table command on
a CSG or MASG to view information about a route to the BGP peer's loopback
interface.
The following example uses the command output on the CSG.
<CSG> display ip routing-table
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : _public_
Destinations : 10 Routes : 10
Run the display mpls lsp command on the CSG or MASG to view LSP information.
The following example uses the command output on the CSG.
<CSG> display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.1/32 3/NULL -/-
2.2.2.2/32 NULL/3 -/GE1/0/0
----End
Configuration Files
● CSG configuration file
#
sysname CSG
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
network 1.1.1.1 255.255.255.255
peer 2.2.2.2 enable
peer 2.2.2.2 route-policy policy1 export
peer 2.2.2.2 label-route-capability
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
route-policy policy1 permit node 1
apply mpls-label
#
return
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
peer 1.1.1.1 enable
peer 1.1.1.1 route-policy policy1 export
peer 1.1.1.1 reflect-client
peer 1.1.1.1 next-hop-local
peer 1.1.1.1 label-route-capability
peer 3.3.3.3 enable
peer 3.3.3.3 route-policy policy1 export
peer 3.3.3.3 reflect-client
peer 3.3.3.3 next-hop-local
peer 3.3.3.3 label-route-capability
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
#
route-policy policy1 permit node 1
if-match mpls-label
apply mpls-label
#
return
● AGG ASBR configuration file
#
sysname AGG ASBR
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0000.0020.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 10.3.1.2 as-number 200
#
ipv4-family unicast
peer 2.2.2.2 enable
peer 2.2.2.2 route-policy policy1 export
peer 2.2.2.2 label-route-capability
peer 10.3.1.2 enable
peer 10.3.1.2 route-policy policy1 export
peer 10.3.1.2 label-route-capability check-tunnel-reachable
#
route-policy policy1 permit node 1
if-match mpls-label
apply mpls-label
#
return
#
sysname MASG
#
mpls lsr-id 5.5.5.5
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
#
bgp 200
peer 4.4.4.4 as-number 200
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
network 5.5.5.5 255.255.255.255
peer 4.4.4.4 enable
peer 4.4.4.4 route-policy policy1 export
peer 4.4.4.4 label-route-capability
#
ospf 2
area 0.0.0.0
network 5.5.5.5 0.0.0.0
network 10.4.1.0 0.0.0.255
#
route-policy policy1 permit node 1
apply mpls-label
#
return
Networking Requirements
In Figure 1-67, the access and aggregation layers belong to AS 100, and the core
layer belongs to AS 200. To provision VPN services, the inter-AS seamless
MPLS+HVPN networking can be deployed. This networking allows base stations and the
MME/SGW to communicate with each other and cuts networking construction
costs with the use of HVPN. An HVPN connection for each CSG-and-AGG pair is
established, and an inter-AS seamless MPLS LSP for each AGG-and-MASG pair is
established.
Addresses of interfaces are planned for the CSGs, AGGs, AGG ASBRs, core ASBRs,
and MASGs shown in Figure 1-68.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● OSPF process ID (1) at the access layer, IS-IS process ID (1) at the aggregation
layer, and OSPF process ID (2) at the core layer
● MPLS LSR IDs: 1.1.1.1 for the CSG, 2.2.2.2 for the AGG, 3.3.3.3 for the AGG
ASBR, 4.4.4.4 for the core ASBR, and 5.5.5.5 for the MASG.
● Name of a routing policy (policy1)
Procedure
Step 1 Assign an IP address to each interface.
Configure interface IP addresses and masks; configure a loopback interface
address as an LSR ID on every device shown in Figure 1-68; configure OSPF and
IS-IS to advertise the route to the network segment of each interface and a host
route to each loopback interface address (LSR ID). For configuration details, see
Configuration Files in this section.
Step 2 Enable MPLS and LDP globally on each device.
# Configure the CSG.
[~CSG] mpls lsr-id 1.1.1.1
[*CSG] mpls
[*CSG-mpls] quit
[*CSG] mpls ldp
[*CSG-mpls-ldp] quit
[*CSG] interface GigabitEthernet 1/0/0
[*CSG-GigabitEthernet1/0/0] mpls
[*CSG-GigabitEthernet1/0/0] mpls ldp
[*CSG-GigabitEthernet1/0/0] quit
[*CSG] commit
Step 3 Establish IBGP peer relationships at the aggregation and core layers and enable
devices to exchange labeled routes.
# Configure the AGG.
[~AGG] bgp 100
[*AGG-bgp] peer 3.3.3.3 as-number 100
[*AGG-bgp] peer 3.3.3.3 connect-interface LoopBack 1
[*AGG-bgp] peer 3.3.3.3 label-route-capability
[*AGG-bgp] network 2.2.2.2 32
[*AGG-bgp] quit
[*AGG] commit
Step 4 Establish an EBGP peer relationship for each AGG ASBR-and-core ASBR pair and
enable these devices to exchange labeled routes.
# Configure the AGG ASBR.
[~AGG ASBR] interface GigabitEthernet 2/0/0
[~AGG ASBR-GigabitEthernet2/0/0] ip address 10.3.1.1 24
[*AGG ASBR-GigabitEthernet2/0/0] mpls
[*AGG ASBR-GigabitEthernet2/0/0] quit
[*AGG ASBR] bgp 100
[*AGG ASBR-bgp] peer 10.3.1.2 as-number 200
[*AGG ASBR-bgp] peer 10.3.1.2 label-route-capability check-tunnel-reachable
[*AGG ASBR-bgp] quit
[*AGG ASBR] commit
Repeat this step for the MASG. For configuration details, see Configuration Files in
this section.
# Configure a routing policy for advertising routes matching Route-Policy
conditions to the AGG ASBR's BGP peer.
[~AGG ASBR] route-policy policy1 permit node 1
[*AGG ASBR-route-policy] if-match mpls-label
[*AGG ASBR-route-policy] apply mpls-label
[*AGG ASBR-route-policy] quit
[*AGG ASBR] bgp 100
[*AGG ASBR-bgp] peer 2.2.2.2 route-policy policy1 export
[*AGG ASBR-bgp] peer 10.3.1.2 route-policy policy1 export
[*AGG ASBR-bgp] quit
[*AGG ASBR] commit
Repeat this step for the core ASBR. For configuration details, see Configuration
Files in this section.
Step 7 Configure an MP-IBGP peer relationship for each CSG-and-AGG pair.
# Configure the CSG.
[~CSG] bgp 100
[~CSG-bgp] peer 2.2.2.2 as-number 100
[*CSG-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*CSG-bgp] network 1.1.1.1 32
[*CSG-bgp] ipv4-family vpnv4
[*CSG-bgp-af-vpnv4] peer 2.2.2.2 enable
[*CSG-bgp-af-vpnv4] quit
[*CSG-bgp] quit
[*CSG] commit
Step 8 Configure a VPN instance and bind an interface of each device to the VPN
instance.
# Configure the CSG.
[~CSG] ip vpn-instance vpn1
[*CSG-vpn-instance-vpn1] ipv4-family
[*CSG-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*CSG-vpn-instance-vpn1-af-ipv4] vpn-target 1:1
[*CSG-vpn-instance-vpn1-af-ipv4] quit
[*CSG-vpn-instance-vpn1] quit
[*CSG] interface GigabitEthernet 2/0/0
[*CSG-GigabitEthernet2/0/0] ip binding vpn-instance vpn1
[*CSG-GigabitEthernet2/0/0] ip address 10.5.1.1 255.255.255.0
[*CSG-GigabitEthernet2/0/0] quit
[*CSG] bgp 100
[*CSG-bgp] ipv4-family vpn-instance vpn1
[*CSG-bgp-vpn1] import-route direct
[*CSG-bgp-vpn1] quit
[*CSG-bgp] quit
[*CSG] commit
[~CSG] quit
Repeat this step for the MASG. For configuration details, see Configuration Files in
this section.
# Configure the AGG.
[~AGG] ip vpn-instance vpn1
[*AGG-vpn-instance-vpn1] ipv4-family
[*AGG-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*AGG-vpn-instance-vpn1-af-ipv4] vpn-target 1:1
[*AGG-vpn-instance-vpn1-af-ipv4] quit
[*AGG-vpn-instance-vpn1] quit
[*AGG] commit
Step 9 Configure a default route and an IP address prefix list on each AGG so that the
AGG advertises only the default route to its directly connected CSG.
[~AGG] ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0
[*AGG] ip ip-prefix default index 10 permit 0.0.0.0 0
[*AGG] bgp 100
[*AGG-bgp] ipv4-family vpnv4
[*AGG-bgp-af-vpnv4] peer 1.1.1.1 ip-prefix default export
[*AGG-bgp-af-vpnv4] quit
[*AGG-bgp] ipv4-family vpn-instance vpn1
[*AGG-bgp-vpn1] network 0.0.0.0 0
[*AGG-bgp-vpn1] quit
[*AGG-bgp] quit
[*AGG] commit
----End
Configuration Files
● CSG configuration file
#
sysname CSG
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 10.5.1.1 255.255.255.0
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
peer 2.2.2.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
#
ipv4-family vpn-instance vpn1
import-route direct
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
● AGG configuration file
#
sysname AGG
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0000.0010.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 5.5.5.5 as-number 200
peer 5.5.5.5 ebgp-max-hop 10
peer 5.5.5.5 connect-interface LoopBack1
#
ipv4-family unicast
network 2.2.2.2 255.255.255.255
peer 1.1.1.1 enable
peer 3.3.3.3 enable
peer 3.3.3.3 route-policy policy1 export
peer 3.3.3.3 label-route-capability
peer 5.5.5.5 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 ip-prefix default export
peer 5.5.5.5 enable
#
ipv4-family vpn-instance vpn1
network 0.0.0.0
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
#
route-policy policy1 permit node 1
apply mpls-label
#
ip ip-prefix default index 10 permit 0.0.0.0 0
#
ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0
#
return
● AGG ASBR configuration file
#
sysname AGG ASBR
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0000.0020.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 10.3.1.2 as-number 200
#
ipv4-family unicast
peer 2.2.2.2 enable
peer 2.2.2.2 route-policy policy1 export
peer 2.2.2.2 label-route-capability
peer 10.3.1.2 enable
peer 10.3.1.2 route-policy policy1 export
peer 10.3.1.2 label-route-capability check-tunnel-reachable
#
route-policy policy1 permit node 1
if-match mpls-label
apply mpls-label
#
return
● Core ASBR configuration file
#
sysname Core ASBR
#
mpls lsr-id 4.4.4.4
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
bgp 200
peer 5.5.5.5 as-number 200
peer 5.5.5.5 connect-interface LoopBack1
peer 10.3.1.1 as-number 100
#
ipv4-family unicast
peer 5.5.5.5 enable
peer 5.5.5.5 route-policy policy1 export
peer 5.5.5.5 label-route-capability
peer 10.3.1.1 enable
peer 10.3.1.1 route-policy policy1 export
peer 10.3.1.1 label-route-capability check-tunnel-reachable
#
ospf 2
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.4.1.0 0.0.0.255
#
route-policy policy1 permit node 1
if-match mpls-label
apply mpls-label
#
return
● MASG configuration file
#
sysname MASG
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 5.5.5.5
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 10.6.1.1 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
#
bgp 200
peer 2.2.2.2 as-number 100
peer 2.2.2.2 ebgp-max-hop 10
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 200
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
network 5.5.5.5 255.255.255.255
peer 2.2.2.2 enable
peer 4.4.4.4 enable
peer 4.4.4.4 route-policy policy1 export
peer 4.4.4.4 label-route-capability
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
#
ipv4-family vpn-instance vpn1
import-route direct
#
ospf 2
area 0.0.0.0
network 5.5.5.5 0.0.0.0
network 10.4.1.0 0.0.0.255
#
route-policy policy1 permit node 1
apply mpls-label
#
return
Networking Requirements
Seamless MPLS integrates the access, aggregation, and core layers on the same
MPLS network to transmit VPN services. Seamless MPLS establishes an E2E BGP
tunnel to provide E2E access services. To rapidly detect faults in BGP tunnels, BFD
for BGP tunnel needs to be configured.
In Figure 1, the access and aggregation layers belong to one AS, and the core
layer belongs to another AS. The base station needs to communicate with an
MME or SGW over a VPN. To meet this requirement, inter-AS seamless MPLS can
be configured to form a BGP tunnel between the CSG and MASG. To monitor the
connectivity of the BGP tunnel, BFD for BGP tunnel needs to be configured.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● OSPF process ID (1) at the access layer, IS-IS process ID (1) at the aggregation
layer, and OSPF process ID (2) at the core layer
● IS-IS area number (10.0001) and IS-IS system IDs (which are obtained based
on loopback addresses)
● MPLS LSR IDs: 1.1.1.1 for the CSG, 2.2.2.2 for the AGG, 3.3.3.3 for the AGG
ASBR, 4.4.4.4 for the core ASBR, and 5.5.5.5 for the MASG
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address and mask to each interface, including each loopback
interface, according to Figure 1. For configuration details, see Configuration Files
in this section.
Step 2 Configure an IGP.
Configure OSPF with process ID 1 at the access layer, IS-IS with process ID 1 at the
aggregation layer, and OSPF with process ID 2 at the core layer. Configure IGP
protocols to advertise the route to each network segment to which each interface
is connected and to advertise the host route to each loopback address which is
used as an LSR ID. For configuration details, see Configuration Files in this
section.
Step 3 Configure basic MPLS and MPLS LDP functions.
Enable MPLS and MPLS LDP globally on each device and on interfaces in each AS.
For configuration details, see Configuration Files in this section.
Step 4 Establish IBGP peer relationships at each layer and enable devices to exchange
labeled routes.
# Configure the CSG.
[~CSG] bgp 100
[*CSG-bgp] peer 2.2.2.2 as-number 100
[*CSG-bgp] peer 2.2.2.2 connect-interface LoopBack 0
[*CSG-bgp] peer 2.2.2.2 label-route-capability
[*CSG-bgp] network 1.1.1.1 32
[*CSG-bgp] quit
[*CSG] commit
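In addition to the commands above, an export route-policy that applies MPLS
labels to the exchanged routes is required, as shown in Configuration Files in this
section. A minimal sketch on the CSG (the policy name policy1 follows the
configuration files):
[*CSG] route-policy policy1 permit node 1
[*CSG-route-policy] apply mpls-label
[*CSG-route-policy] quit
[*CSG] bgp 100
[*CSG-bgp] peer 2.2.2.2 route-policy policy1 export
[*CSG-bgp] quit
[*CSG] commit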
Step 5 Establish an EBGP peer relationship between each AGG ASBR and its connected
core ASBR and enable these devices to exchange labeled routes.
# Configure the AGG ASBR.
[~AGG ASBR] interface GigabitEthernet 2/0/0
[~AGG ASBR-GigabitEthernet2/0/0] ip address 10.3.1.1 24
[*AGG ASBR-GigabitEthernet2/0/0] mpls
[*AGG ASBR-GigabitEthernet2/0/0] quit
[*AGG ASBR] bgp 100
[*AGG ASBR-bgp] peer 10.3.1.2 as-number 200
[*AGG ASBR-bgp] peer 10.3.1.2 label-route-capability check-tunnel-reachable
[*AGG ASBR-bgp] quit
[*AGG ASBR] commit
Step 6 Configure each AGG as an RR to help the CSG and MASG obtain the route
destined for each other's loopback interface.
# Configure the AGG.
[~AGG] bgp 100
[~AGG-bgp] peer 1.1.1.1 reflect-client
[*AGG-bgp] peer 1.1.1.1 next-hop-local
[*AGG-bgp] peer 3.3.3.3 reflect-client
[*AGG-bgp] peer 3.3.3.3 next-hop-local
[*AGG-bgp] quit
[*AGG] commit
Repeat this step for the MASG. For configuration details, see Configuration Files
in this section.
Repeat this step for the AGG ASBR and core ASBR. For configuration details, see
Configuration Files in this section.
Step 8 Configure BFD for BGP tunnel.
# On the CSG, enable the MPLS capability to dynamically establish BGP BFD
sessions based on host addresses.
[~CSG] bfd
[*CSG-bfd] quit
[*CSG] mpls
[*CSG-mpls] mpls bgp bfd enable
[*CSG-mpls] mpls bgp bfd-trigger-tunnel host
[*CSG-mpls] quit
[*CSG] commit
# On the MASG, enable the MPLS capability of passively creating a BFD session.
[~MASG] bfd
[*MASG-bfd] mpls-passive
[*MASG-bfd] quit
[*MASG] commit
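# After the configuration is committed, you can verify that a BFD session has
been established for the BGP tunnel. A hedged verification sketch on the CSG
(the display bfd session all command is a standard VRP display command; the
exact output varies by version and is not shown here):
[~CSG] display bfd session all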
----End
Configuration Files
● CSG configuration file
#
sysname CSG
#
bfd
#
mpls lsr-id 1.1.1.1
#
mpls
mpls bgp bfd enable
mpls bgp bfd-trigger-tunnel host
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack0
#
ipv4-family unicast
network 1.1.1.1 255.255.255.255
peer 2.2.2.2 enable
peer 2.2.2.2 route-policy policy1 export
peer 2.2.2.2 label-route-capability
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
route-policy policy1 permit node 1
apply mpls-label
#
return
● AGG configuration file
#
sysname AGG
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls ldp
#
isis 1
network-entity 10.0001.0020.0200.2002.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack0
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack0
#
ipv4-family unicast
peer 1.1.1.1 enable
peer 1.1.1.1 route-policy policy1 export
#
● Core ASBR configuration file
#
sysname Core ASBR
#
mpls lsr-id 4.4.4.4
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 4.4.4.4 255.255.255.255
#
bgp 200
peer 5.5.5.5 as-number 200
peer 5.5.5.5 connect-interface LoopBack0
peer 10.3.1.1 as-number 100
#
ipv4-family unicast
peer 5.5.5.5 enable
peer 5.5.5.5 route-policy policy1 export
peer 5.5.5.5 label-route-capability
peer 10.3.1.1 enable
peer 10.3.1.1 route-policy policy1 export
peer 10.3.1.1 label-route-capability check-tunnel-reachable
#
ospf 2
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.4.1.0 0.0.0.255
#
route-policy policy1 permit node 1
if-match mpls-label
apply mpls-label
#
return
● MASG configuration file
#
sysname MASG
#
bfd
mpls-passive
#
mpls lsr-id 5.5.5.5
#
mpls
#
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 5.5.5.5 255.255.255.255
#
bgp 200
GMPLS UNI
GMPLS evolved from MPLS and therefore inherits nearly all MPLS features and
protocols. GMPLS also extends the definition of MPLS labels and can be
considered an extension of MPLS to transport networks. GMPLS provides a
unified control plane for the IP and transport layers, which simplifies the network
architecture, reduces network management costs, and optimizes network
performance.
Usage Scenario
GMPLS UNI tunneling technology is applicable to the following scenarios, as
shown in Table 1-27.
NOTE
Some GMPLS UNI tunnel configurations need to be performed on edge devices on the
transport network. This document mainly describes tunnel configurations on IP network
devices. For tunnel configurations of transport network edge devices, see the related
configuration guide.
Pre-configuration Tasks
Before configuring a GMPLS UNI tunnel, complete the following tasks:
● Enable MPLS-TE and RSVP-TE globally on the ingress EN and egress EN.
● (Optional) Configure static routes to ensure that the out-of-band control
channel between an IP network and a transport network is reachable at the
network layer if the out-of-band mode is used to separate the data channel
from the control channel.
● (Optional) Enable EFM globally if the in-band mode is used to separate the
data channel from the control channel.
Configuration Procedures
Either of the following methods can be used to calculate paths for GMPLS UNI
tunnels:
● Independent path calculation at IP and optical layers
● PCE path calculation for a path crossing the IP and optical layers
Figure 1-70 illustrates path calculation processes using the preceding two
methods.
1.1.6.3.1 (Optional) Configuring PCE to Calculate a Path Crossing Both the IP and
Optical Layers
A specific path calculation mode must be planned for a GMPLS UNI tunnel.
Context
The NE9000 calculates a path for a GMPLS UNI tunnel in either of the following
modes:
● Independent path calculation at IP and optical layers
● PCE path calculation for a path crossing the IP and optical layers
The independent path calculation mode is enabled by default. PCE path
calculation can be configured to calculate a path crossing both the IP and optical
layers.
Procedure
Step 1 Configure the ingress EN to send a request to a PCE server to calculate a path
crossing both the IP and optical layers.
1. Run system-view
----End
Context
Using logical GMPLS UNIs as service interfaces facilitates redundancy protection
for interfaces connecting the IP and optical layer devices. If a GMPLS UNI tunnel
bound to a logical GMPLS UNI fails, the logical GMPLS UNI automatically searches
for another available GMPLS UNI tunnel and switches traffic to the new tunnel,
which implements redundancy protection.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface gmpls-uni interface-number
A service interface is created, and the logical GMPLS UNI view is displayed.
Step 3 Run ip address ip-address { mask | mask-length }
An IP address is assigned to the interface.
Step 4 (Optional) Configure upper layer service applications, such as a static route, an
IGP, or MPLS.
The configuration procedure is similar to that on a physical interface. The
configuration details are not provided.
Step 5 Run commit
The configuration is committed.
----End
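The steps above can be sketched as follows (the interface number and IP address
are illustrative and follow the examples in this section):
<PE1> system-view
[~PE1] interface Gmpls-Uni 1
[*PE1-Gmpls-Uni1] ip address 10.2.1.1 255.255.255.252
[*PE1-Gmpls-Uni1] quit
[*PE1] commit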
Context
GMPLS uses LMP to manage links of the control and data channels. LMP is
classified into the following types:
● Static LMP: LMP neighbors, control channels, and data channels are manually
configured, without exchanging LMP packets.
● Dynamic LMP: LMP neighbors, control channels, and data channels are all
automatically discovered, which minimizes configurations and speeds up
network deployment.
Currently, the NE9000 supports only static LMP. Perform the following steps to
configure LMP and a neighbor on an edge node:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run lmp
The LMP view is displayed.
Step 3 Run peer name
An LMP neighbor is created, and its view is displayed.
NOTE
The neighbor of an IP network edge node is a transport network edge node that directly
connects to the IP network edge node. This means that LMP neighbor relationships are
created between the ingress EN and ingress CN, and between the egress EN and egress CN.
----End
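Combined with the TE-link and data-link commands shown in the configuration
examples in this section, a static LMP sketch on an ingress EN looks as follows
(the peer name, node ID, and interface IDs are illustrative):
[~PE1] lmp
[*PE1-lmp] peer ne1
[*PE1-lmp-peer-ne1] lmp static
[*PE1-lmp-peer-ne1] node-id 7.7.7.7
[*PE1-lmp-peer-ne1] te-link 1
[*PE1-lmp-peer-ne1-te-link-1] link-id local ip 192.168.1.1
[*PE1-lmp-peer-ne1-te-link-1] link-id remote ip 192.168.1.2
[*PE1-lmp-peer-ne1-te-link-1] data-link interface GigabitEthernet3/0/0 local interface-id 192.168.1.1
remote interface-id 192.168.1.2
[*PE1-lmp-peer-ne1-te-link-1] commit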
Context
A GMPLS UNI control channel carries control packets such as RSVP-TE signaling
packets. GMPLS, as an enhancement to MPLS, separates control and data
channels physically and uses LMP to manage and maintain control and data
channels. A fault in the control channel does not affect the data channel, which
implements uninterrupted service forwarding and improves the reliability of the
entire network.
The data and control channels are separated in either out-of-band or in-band
mode. Table 1-28 describes the comparison between the two modes.
The two separation modes have advantages of their own. Select one mode as
needed.
Procedure
● In-band mode
NOTE
In-band control channel configurations depend on the EFM OAM function. Enable
EFM OAM globally before performing the following steps.
a. Run system-view
● Out-of-band mode
a. Run system-view
----End
Context
Determine whether you need to configure an explicit path based on a tunnel
calculation mode:
● If paths are calculated separately at the IP and optical layers, configure an
explicit path.
● If PCE is used to calculate a path across both the IP and optical layers, you
do not need to configure an explicit path. This is because PCE automatically
calculates a path for a tunnel.
A GMPLS UNI tunnel originates from the ingress EN. An explicit path must be
configured on the ingress EN for a GMPLS UNI tunnel.
NOTE
An explicit path for a GMPLS UNI tunnel, unlike that for an MPLS TE tunnel, must
pass through exactly four data channel interfaces: those on the ingress EN, ingress
CN, egress CN, and egress EN, in that order.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run explicit-path path-name
An explicit path is created and the explicit path view is displayed.
Step 3 Run next hop ip-address
A next-hop address is specified for the explicit path.
NOTE
Perform Step 3 four times, once for each of the four data channel interfaces, to
complete the explicit path for a GMPLS UNI tunnel.
Step 4 (Optional) Perform the following steps to modify the configured explicit path:
1. Run add hop ip-address1 { after | before } ip-address2
A node is added to the explicit path.
2. Run modify hop ip-address1 ip-address2
The address of a node is changed to the address of another existing node.
3. Run delete hop ip-address
A node is deleted from the explicit path.
Step 5 (Optional) Run list hop [ ip-address ]
Information about nodes on the explicit path is displayed.
Step 6 Run commit
The configuration is committed.
----End
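Following the preceding NOTE, an explicit path sketch with the four data channel
interface addresses (the path name and addresses are illustrative and follow the
examples in this section) is:
[~PE1] explicit-path unipath
[*PE1-explicit-path-unipath] next hop 192.168.1.1
[*PE1-explicit-path-unipath] next hop 192.168.1.2
[*PE1-explicit-path-unipath] next hop 192.168.2.1
[*PE1-explicit-path-unipath] next hop 192.168.2.2
[*PE1-explicit-path-unipath] commit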
Context
Forward and reverse UNI-LSPs are established for bidirectional GMPLS UNI tunnels
and have the same requirements on traffic engineering. A GMPLS UNI tunnel is
established using extended RSVP-TE. The ingress EN initiates tunnel establishment
requests containing tunnel attributes by sending Path messages. Therefore, tunnel
attributes and functions need to be configured on the ingress EN and do not need
to be configured on the egress EN for a reverse GMPLS UNI tunnel.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run gmpls-tunnel gmpls-tunnel-name
A GMPLS UNI tunnel is established, and the tunnel view is displayed.
Step 3 Run tunnel-id tunnel-id
The tunnel ID is configured.
Step 4 Run destination ip-address
A destination IP address is set for the GMPLS UNI tunnel. Generally, the
destination IP address is set to the LSR ID of the egress EN.
Step 5 Run bandwidth bw-value
The bandwidth is configured for the GMPLS UNI tunnel.
Step 6 Run explicit-path path-name
Explicit path constraints are configured.
Step 7 Run switch-type { dcsc | evpl }
A data switching type is set for a GMPLS UNI tunnel.
Step 8 Run bind interface interface-type interface-number
A GMPLS UNI tunnel interface is bound to a service interface.
Only a local GMPLS UNI can function as a service interface.
Step 9 (Optional) Run link protection-type unprotected
The link protection function is configured for the GMPLS UNI tunnel.
If PCE path calculation is configured, this command does not need to be run. This
is because the device forcibly sets the protection type to rerouting for PCE path
calculation.
----End
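Steps 1 through 9 can be consolidated into a single sketch on the ingress EN (the
tunnel name, IDs, and values are illustrative and follow the examples in this
section):
[~PE1] gmpls-tunnel toPE2
[*PE1-gmpls-tunnel-toPE2] tunnel-id 1
[*PE1-gmpls-tunnel-toPE2] destination 2.2.2.2
[*PE1-gmpls-tunnel-toPE2] bandwidth 10000
[*PE1-gmpls-tunnel-toPE2] explicit-path unipath
[*PE1-gmpls-tunnel-toPE2] switch-type dcsc
[*PE1-gmpls-tunnel-toPE2] bind interface Gmpls-Uni1
[*PE1-gmpls-tunnel-toPE2] commit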
Context
A GMPLS UNI tunnel is bound to the physical interface of the data channel to
carry data services on the ingress EN and egress EN. A GMPLS UNI tunnel is
established using extended RSVP-TE. The ingress EN initiates tunnel establishment
requests containing tunnel attributes by sending Path messages. Therefore, tunnel
attributes and functions need to be configured on the ingress EN and do not need
to be configured on the egress EN for a reverse GMPLS UNI tunnel. A reverse
GMPLS UNI tunnel only needs to be matched with its corresponding forward
GMPLS UNI tunnel and bound to the service bearer interface.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run gmpls-tunnel gmpls-tunnel-name
A GMPLS UNI tunnel is established, and the tunnel view is displayed.
Step 3 Run passive
The established tunnel is configured as a reverse GMPLS UNI tunnel.
Step 4 Run bind interface interface-type interface-number
A GMPLS UNI tunnel interface is bound to a service interface.
Only a local GMPLS UNI can function as a service interface.
Step 5 Run match-tunnel ingress-lsr-id ingress-lsr-id tunnel-id tunnel-id
The name and node ID of the ingress are configured for the forward GMPLS UNI
tunnel.
The ingress-lsr-id parameter is set to the LSR ID of the ingress EN. The tunnel-id
parameter is set to the forward tunnel ID.
Step 6 Run commit
The configuration is committed.
----End
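On the egress EN, the reverse-tunnel steps above can be sketched as follows (the
tunnel name, LSR ID, and tunnel ID are illustrative and match the examples in
this section):
[~PE2] gmpls-tunnel toPE1
[*PE2-gmpls-tunnel-toPE1] passive
[*PE2-gmpls-tunnel-toPE1] bind interface Gmpls-Uni1
[*PE2-gmpls-tunnel-toPE1] match-tunnel ingress-lsr-id 1.1.1.1 tunnel-id 1
[*PE2-gmpls-tunnel-toPE1] commit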
Follow-up Procedure
After the preceding configurations are complete, a GMPLS UNI tunnel is
established, and the service bearer interface bound to the tunnel goes Up. IP and
MPLS services can be configured on the interface. Configurations are the same as
those of other interfaces that are not bound to the GMPLS UNI tunnel. Detailed
configuration procedures are not mentioned here.
Prerequisites
A GMPLS UNI tunnel has been configured.
Procedure
● Run the display lmp peer command to check information about LMP
neighbors.
● Run the display explicit-path [ path-name ] [ verbose ] command to check
information about the configured explicit path.
● Run the display mpls te gmpls tunnel path [ path-name ] [ verbose ]
command to check GMPLS UNI tunnel path information.
● Run the display mpls te gmpls tunnel c-hop [ tunnel-name tunnel-name ]
[ lsp-id ingress-lsr-id egress-lsr-id tunnel-id lsp-id ] command to check
information about the calculated path for a GMPLS UNI tunnel.
● Run the display mpls gmpls lsp [ in-label in-label | incoming-interface
interface-type interface-number | lsr-role { egress | ingress } | out-label out-
label | outgoing-interface interface-type interface-number ] * [ verbose ]
command to check GMPLS UNI LSP information.
● Run the display mpls te gmpls tunnel [ name gmpls-tunnel-name ]
[ verbose ] command to check information about a GMPLS UNI tunnel.
● Run the display mpls te gmpls tunnel-interface [ name gmpls-tunnel-
name ] command to check information about GMPLS UNI tunnel interfaces
on the ingress EN and egress EN.
----End
Context
To shut down an existing GMPLS UNI tunnel, run the shutdown command on the
ingress EN to release label and bandwidth resources assigned to the tunnel.
GMPLS UNI tunnel configurations, however, are kept.
To start this tunnel again, run the undo shutdown command to re-establish the
tunnel based on the original configuration file. The path of the re-established UNI
LSP within a transport network may be different from the original one because
topology or bandwidth within a transport network may change.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run gmpls-tunnel gmpls-tunnel-name
The view of the established GMPLS UNI tunnel is displayed.
Step 3 Run shutdown
A GMPLS UNI tunnel is disabled.
Step 4 Run commit
The configuration is committed.
----End
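For example, to disable the tunnel toPE2 configured in the examples in this
section and later re-establish it:
[~PE1] gmpls-tunnel toPE2
[*PE1-gmpls-tunnel-toPE2] shutdown
[*PE1-gmpls-tunnel-toPE2] commit
To start the tunnel again, run undo shutdown in the same view and commit the
configuration.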
Context
If the path within a transport network is re-planned, and configurations on the
ingress EN do not change, you can run the following command to reset a GMPLS
UNI tunnel. UNI LSPs are re-established according to the new path.
Procedure
● Run the reset mpls te gmpls tunnel gmpls-tunnel-name command to reset a
GMPLS UNI tunnel.
----End
Networking Requirements
In Figure 1-71, PE1 and PE2 are IP devices, and NE1 and NE2 are optical transport
devices. A customer wants to establish a GMPLS UNI tunnel to connect the IP
network to the optical network. In this example, an in-band control channel is
used to establish the GMPLS UNI tunnel.
Interface addresses in Figure 1-71: on each PE, GE 3/0/0 is a link interface of the
TE-link and does not need to be assigned an IP address; GMPLS-UNI1 is
10.2.1.1/30 on PE1 and 10.2.1.2/30 on PE2.
Configuration Notes
● Configurations on the ingress and the egress of the GMPLS UNI tunnel are
different.
● In this example, configurations only of IP devices (PE1 and PE2) are described.
For configuration details about optical devices, see the configuration guide for
a specific optical device.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure loopback interfaces and a GMPLS UNI service interface and assign
IP addresses to the interfaces.
2. Enable MPLS, MPLS TE, and MPLS RSVP-TE globally.
3. Configure LMP, a TE-link, and a data-link.
4. Configure an in-band control channel.
5. Configure an explicit path.
6. Configure a GMPLS UNI tunnel along the path PE1 -> PE2 to connect the IP
network to the transport network.
Data Preparation
To complete the configuration, you need the following data:
Tunnel ID 1
TE-link Number: 1
Local interface ID: 192.168.1.1
Remote interface ID: 192.168.1.2
TE-link Number: 1
Local interface ID: 192.168.2.2
Remote interface ID: 192.168.2.1
Procedure
Step 1 Configure loopback interfaces and a GMPLS UNI service interface and assign IP
addresses to the interfaces.
# Configure PE1.
<PE1> system-view
[~PE1] interface LoopBack 0
[*PE1-LoopBack0] ip address 1.1.1.1 32
[*PE1-LoopBack0] quit
[*PE1] interface Gmpls-Uni 1
[*PE1-Gmpls-Uni1] ip address 10.2.1.1 255.255.255.252
[*PE1-Gmpls-Uni1] quit
[*PE1] commit
# Configure PE2.
<PE2> system-view
[~PE2] interface LoopBack 0
[*PE2-LoopBack0] ip address 2.2.2.2 32
[*PE2-LoopBack0] quit
[*PE2] interface Gmpls-Uni 1
[*PE2-Gmpls-Uni1] ip address 10.2.1.2 255.255.255.252
[*PE2-Gmpls-Uni1] quit
[*PE2] commit
# Configure PE2.
[~PE2] mpls lsr-id 2.2.2.2
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] mpls rsvp-te
[*PE2-mpls] commit
[~PE2-mpls] quit
[~PE1] lmp
[*PE1-lmp] peer ne1
[*PE1-lmp-peer-ne1] lmp static
[*PE1-lmp-peer-ne1] node-id 7.7.7.7
[*PE1-lmp-peer-ne1] te-link 1
[*PE1-lmp-peer-ne1-te-link-1] link-id local ip 192.168.1.1
[*PE1-lmp-peer-ne1-te-link-1] link-id remote ip 192.168.1.2
[*PE1-lmp-peer-ne1-te-link-1] data-link interface GigabitEthernet3/0/0 local interface-id 192.168.1.1
remote interface-id 192.168.1.2
[*PE1-lmp-peer-ne1-te-link-1] commit
[~PE1-lmp-peer-ne1-te-link-1] quit
[~PE1-lmp-peer-ne1] quit
[~PE1-lmp] quit
# Configure PE2.
[~PE2] lmp
[*PE2-lmp] peer ne2
[*PE2-lmp-peer-ne2] lmp static
[*PE2-lmp-peer-ne2] node-id 8.8.8.8
[*PE2-lmp-peer-ne2] te-link 1
[*PE2-lmp-peer-ne2-te-link-1] link-id local ip 192.168.2.2
[*PE2-lmp-peer-ne2-te-link-1] link-id remote ip 192.168.2.1
[*PE2-lmp-peer-ne2-te-link-1] data-link interface GigabitEthernet3/0/0 local interface-id 192.168.2.2
remote interface-id 192.168.2.1
[*PE2-lmp-peer-ne2-te-link-1] commit
[~PE2-lmp-peer-ne2-te-link-1] quit
[~PE2-lmp-peer-ne2] quit
[~PE2-lmp] quit
# Configure PE2.
[~PE2] efm enable
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] efm enable
[*PE2-GigabitEthernet3/0/0] efm packet max-size 1518
[*PE2-GigabitEthernet3/0/0] lmp interface enable
[*PE2-GigabitEthernet3/0/0] quit
# Configure PE2.
<PE2> system-view
[~PE2] gmpls-tunnel toPE1
[*PE2-gmpls-tunnel-toPE1] passive
[*PE2-gmpls-tunnel-toPE1] match-tunnel ingress-lsr-id 1.1.1.1 tunnel-id 1
[*PE2-gmpls-tunnel-toPE1] bind interface Gmpls-Uni1
[*PE2-gmpls-tunnel-toPE1] commit
[~PE2-gmpls-tunnel-toPE1] quit
# After the tunnel goes Up, initiate a ping to the IP address of the service interface
bound to the tunnel. The ping is successful, which indicates that the IP and optical
networks have been successfully connected.
[~PE1] ping 10.2.1.2
PING 10.2.1.2: 56 data bytes, press CTRL_C to break
Reply from 10.2.1.2: bytes=56 Sequence=1 ttl=255 time=6 ms
Reply from 10.2.1.2: bytes=56 Sequence=2 ttl=255 time=2 ms
Reply from 10.2.1.2: bytes=56 Sequence=3 ttl=255 time=1 ms
Reply from 10.2.1.2: bytes=56 Sequence=4 ttl=255 time=3 ms
Reply from 10.2.1.2: bytes=56 Sequence=5 ttl=255 time=2 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
efm enable
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path unipath
next hop 192.168.1.1
next hop 192.168.1.2
next hop 192.168.2.1
next hop 192.168.2.2
#
interface GigabitEthernet3/0/0
undo shutdown
lmp interface enable
efm enable
efm packet max-size 1518
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
interface Gmpls-Uni1
undo shutdown
ip address 10.2.1.1 255.255.255.252
#
lmp
peer ne1
lmp static
node-id 7.7.7.7
te-link 1
link-id local ip 192.168.1.1
link-id remote ip 192.168.1.2
data-link interface GigabitEthernet3/0/0 local interface-id 192.168.1.1 remote interface-id
192.168.1.2
#
gmpls-tunnel toPE2
destination 2.2.2.2
bind interface Gmpls-Uni1
switch-type dcsc
bandwidth 10000
explicit-path unipath
tunnel-id 1
#
return
● PE2 configuration file
#
sysname PE2
#
efm enable
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface GigabitEthernet3/0/0
undo shutdown
lmp interface enable
efm enable
efm packet max-size 1518
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
#
interface Gmpls-Uni1
undo shutdown
ip address 10.2.1.2 255.255.255.252
#
lmp
peer ne2
lmp static
node-id 8.8.8.8
te-link 1
link-id local ip 192.168.2.2
link-id remote ip 192.168.2.1
data-link interface GigabitEthernet3/0/0 local interface-id 192.168.2.2 remote interface-id
192.168.2.1
#
gmpls-tunnel toPE1
passive
bind interface Gmpls-Uni1
match-tunnel ingress-lsr-id 1.1.1.1 tunnel-id 1
#
return
Networking Requirements
In Figure 1-72, PE1 and PE2 are IP devices, and NE1 and NE2 are optical transport
devices. A customer wants to establish a GMPLS UNI tunnel to connect the IP
network to the optical network. Since the devices have sufficient interfaces, an
out-of-band control channel can be used to establish a GMPLS UNI tunnel.
Interface addresses in Figure 1-72: on each PE, GE 3/0/0 is a link interface of the
TE-link and does not need to be assigned an IP address. On PE1, GMPLS-UNI1 is
10.2.1.1/30 and GE 3/0/1 is 10.1.1.1/30, connecting to Port 0 (10.1.1.2/30) on
NE1. On PE2, GMPLS-UNI1 is 10.2.1.2/30 and GE 3/0/1 is 10.1.2.2/30,
connecting to Port 0 (10.1.2.1/30) on NE2.
Configuration Notes
● Configurations on the ingress and the egress of the GMPLS UNI tunnel are
different.
● In this example, configurations only of IP devices (PE1 and PE2) are described.
For configuration details about optical devices, see the configuration guide for
a specific optical device.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure loopback interfaces and a GMPLS UNI service interface and assign
IP addresses to the interfaces.
2. Enable MPLS, MPLS TE, and MPLS RSVP-TE globally.
3. Configure LMP, a TE-link, and a data-link.
4. Configure static routes for the out-of-band control channel.
5. Configure an explicit path.
6. Configure a GMPLS UNI tunnel along the path PE1 -> PE2 to connect the IP
network to the transport network.
Data Preparation
To complete the configuration, you need the following data:
Tunnel ID 1
TE-link Number: 1
Local interface ID: 192.168.1.1
Remote interface ID: 192.168.1.2
TE-link Number: 1
Local interface ID: 192.168.2.2
Remote interface ID: 192.168.2.1
Procedure
Step 1 Configure loopback interfaces and a GMPLS UNI service interface and assign IP
addresses to the interfaces.
# Configure PE1.
<PE1> system-view
[~PE1] interface LoopBack 0
[*PE1-LoopBack0] ip address 1.1.1.1 32
[*PE1-LoopBack0] quit
[*PE1] interface Gmpls-Uni 1
[*PE1-Gmpls-Uni1] ip address 10.2.1.1 255.255.255.252
[*PE1-Gmpls-Uni1] quit
[*PE1] commit
# Configure PE2.
<PE2> system-view
[~PE2] interface LoopBack 0
[*PE2-LoopBack0] ip address 2.2.2.2 32
[*PE2-LoopBack0] quit
[*PE2] interface Gmpls-Uni 1
[*PE2-Gmpls-Uni1] ip address 10.2.1.2 255.255.255.252
[*PE2-Gmpls-Uni1] quit
[*PE2] commit
# Configure PE2.
[~PE2] mpls lsr-id 2.2.2.2
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] mpls rsvp-te
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure PE2.
[~PE2] lmp
[*PE2-lmp] peer ne2
# Configure PE2.
[~PE2] ip route-static 8.8.8.8 32 10.1.2.1
[*PE2] commit
# Configure PE2.
<PE2> system-view
[~PE2] gmpls-tunnel toPE1
[*PE2-gmpls-tunnel-toPE1] passive
[*PE2-gmpls-tunnel-toPE1] match-tunnel ingress-lsr-id 1.1.1.1 tunnel-id 1
[*PE2-gmpls-tunnel-toPE1] bind interface Gmpls-Uni1
[*PE2-gmpls-tunnel-toPE1] commit
[~PE2-gmpls-tunnel-toPE1] quit
# After the tunnel goes Up, initiate a ping to the IP address of the service interface
bound to the tunnel. The ping is successful, which indicates that the IP and optical
networks have been successfully connected.
[~PE1] ping 10.2.1.2
PING 10.2.1.2: 56 data bytes, press CTRL_C to break
Reply from 10.2.1.2: bytes=56 Sequence=1 ttl=255 time=6 ms
Reply from 10.2.1.2: bytes=56 Sequence=2 ttl=255 time=2 ms
Reply from 10.2.1.2: bytes=56 Sequence=3 ttl=255 time=1 ms
Reply from 10.2.1.2: bytes=56 Sequence=4 ttl=255 time=3 ms
Reply from 10.2.1.2: bytes=56 Sequence=5 ttl=255 time=2 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path unipath
next hop 192.168.1.1
next hop 192.168.1.2
next hop 192.168.2.1
next hop 192.168.2.2
#
interface GigabitEthernet3/0/0
undo shutdown
#
interface GigabitEthernet3/0/1
undo shutdown
ip address 10.1.1.1 255.255.255.252
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
interface Gmpls-Uni1
undo shutdown
ip address 10.2.1.1 255.255.255.252
#
lmp
peer ne1
lmp static
node-id 7.7.7.7
te-link 1
link-id local ip 192.168.1.1
link-id remote ip 192.168.1.2
data-link interface GigabitEthernet3/0/0 local interface-id 192.168.1.1 remote interface-id
192.168.1.2
#
ip route-static 7.7.7.7 255.255.255.255 10.1.1.2
#
gmpls-tunnel toPE2
destination 2.2.2.2
bind interface Gmpls-Uni1
switch-type dcsc
bandwidth 1000
explicit-path unipath
tunnel-id 1
#
return