MPLS GUIDE Label Distribution Protocol

RELEASE 15.0.R1

5 Label Distribution Protocol

5.1 In This Chapter


This chapter provides information to enable Label Distribution Protocol (LDP).

5.2 Label Distribution Protocol


Label Distribution Protocol (LDP) is a protocol used to distribute labels in non-traffic-
engineered applications. LDP allows routers to establish label switched paths (LSPs)
through a network by mapping network-layer routing information directly to data link
layer-switched paths.

An LSP is defined by the set of labels from the ingress Label Switching Router (LSR)
to the egress LSR. LDP associates a Forwarding Equivalence Class (FEC) with each
LSP it creates. A FEC is a collection of common actions associated with a class of
packets. When an LSR assigns a label to a FEC, it must let other LSRs in the path
know about the label. LDP helps to establish the LSP by providing a set of
procedures that LSRs can use to distribute labels.

The FEC associated with an LSP specifies which packets are mapped to that LSP.
LSPs are extended through a network as each LSR splices incoming labels for a
FEC to the outgoing label assigned to the next hop for the given FEC. The next-hop
for a FEC prefix is resolved in the routing table. LDP can only resolve FECs for IGP
and static prefixes. LDP does not support resolving FECs of a BGP prefix.

LDP allows an LSR to request a label from a downstream LSR so it can bind the label
to a specific FEC. The downstream LSR responds to the request from the upstream
LSR by sending the requested label.

LSRs can distribute a FEC label binding in response to an explicit request from
another LSR. This is known as Downstream On Demand (DOD) label distribution.
LSRs can also distribute label bindings to LSRs that have not explicitly requested
them. This is called Downstream Unsolicited (DU).

SR OS supports IPv4 and IPv6 in the LDP control and data planes. Because IPv6 support was added later, CLI commands have been updated to accept both IPv4 and IPv6 parameters. Refer to the Release 13.0.R1 SR OS Software Release Notes for more information.


5.2.1 LDP and MPLS


LDP performs the label distribution only in MPLS environments. The LDP operation
begins with a hello discovery process to find LDP peers in the network. LDP peers
are two LSRs that use LDP to exchange label/FEC mapping information. An LDP
session is created between LDP peers. A single LDP session allows each peer to
learn the other's label mappings (LDP is bi-directional) and to exchange label binding
information.

LDP signaling works with the MPLS label manager to manage the relationships
between labels and the corresponding FEC. For service-based FECs, LDP works in
tandem with the Service Manager to identify the virtual leased lines (VLLs) and
Virtual Private LAN Services (VPLSs) to signal.

An MPLS label identifies a set of actions that the forwarding plane performs on an incoming packet before forwarding it. The FEC is identified through the signaling
protocol (in this case, LDP) and allocated a label. The mapping between the label
and the FEC is communicated to the forwarding plane. In order for this processing
on the packet to occur at high speeds, optimized tables are maintained in the
forwarding plane that enable fast access and packet identification.

When an unlabeled packet ingresses the router, classification policies associate it with a FEC. The appropriate label is imposed on the packet, and the packet is forwarded. Other actions that can take place before a packet is forwarded are imposing additional labels, other encapsulations, learning actions, etc. When all actions associated with the packet are completed, the packet is forwarded.

When a labeled packet ingresses the router, the label or stack of labels indicates the
set of actions associated with the FEC for that label or label stack. The actions are
performed on the packet and then the packet is forwarded.

The LDP implementation provides support for DOD, DU, ordered control, and the liberal label retention mode.

5.2.2 LDP Architecture


LDP comprises a few processes that handle the protocol PDU transmission, timer-
related issues, and protocol state machine. The number of processes is kept to a
minimum to simplify the architecture and to allow for scalability. Scheduling within
each process prevents starvation of any particular LDP session, while buffering
alleviates TCP-related congestion issues.


The LDP subsystems and their relationships to other subsystems are illustrated in
Figure 59. This illustration shows the interaction of the LDP subsystem with other
subsystems, including memory management, label management, service
management, SNMP, interface management, and RTM. In addition, debugging
capabilities are provided through the logger.

Communication among LDP tasks is typically done through inter-process communication via the event queue, as well as through updates to the various data structures. The primary data structures that LDP maintains are:

• FEC/label database — Contains all FEC-to-label mappings, both sent and received. It also contains both address FECs (prefixes and host addresses) and service FECs (L2 VLLs and VPLS)
• Timer database — Contains all timers for maintaining sessions and adjacencies
• Session database — Contains all session and adjacency records, and serves as
a repository for the LDP MIB objects

5.2.3 Subsystem Interrelationships


The sections below describe how LDP and the other subsystems work to provide
services. Figure 59 shows the interrelationships among the subsystems.


Figure 59 Subsystem Interrelationships

[Figure 59 shows the LDP protocol engine (session database, timer database, FEC/label database, protocol send/receive, timers, and event queues) interacting with the Memory Manager, Label Manager, Config (CLI/SNMP), Logger, Interface Manager, RTM, and Service Manager.]

5.2.3.1 Memory Manager and LDP

LDP does not use any memory until it is instantiated. It pre-allocates some amount
of fixed memory so that initial startup actions can be performed. Memory allocation
for LDP comes out of a pool reserved for LDP that can grow dynamically as needed.
Fragmentation is minimized by allocating memory in larger chunks and managing the
memory internally to LDP. When LDP is shut down, it releases all memory allocated
to it.


5.2.3.2 Label Manager

LDP assumes that the label manager is up and running. LDP will abort initialization
if the label manager is not running. The label manager is initialized at system boot-
up; hence, anything that causes it to fail will likely imply that the system is not
functional. The router uses a dynamic label range from values 18,432 through
262,143 to allocate all dynamic labels, including RSVP and BGP allocated labels and
VC labels.

5.2.3.3 LDP Configuration

The router uses a single consistent interface to configure all protocols and services.
CLI commands are translated to SNMP requests and are handled through an agent-
LDP interface. LDP can be instantiated or deleted through SNMP. Also, LDP
targeted sessions can be set up to specific endpoints. Targeted-session parameters
are configurable.

5.2.3.4 Logger

LDP uses the logger interface to generate debug information relating to session
setup and teardown, LDP events, label exchanges, and packet dumps. Per-session
tracing can be performed.

5.2.3.5 Service Manager

All interaction occurs between LDP and the service manager, since LDP is used primarily to exchange labels for Layer 2 services. In this context, the service manager informs LDP when an LDP session is to be set up or torn down, and when labels are to be exchanged or withdrawn. In turn, LDP informs the service manager of relevant LDP events, such as connection setups and failures, timeouts, and labels signaled or withdrawn.

5.2.4 Execution Flow


LDP activity in the operating system is limited to service-related signaling. Therefore,
the configurable parameters are restricted to system-wide parameters, such as hello
and keepalive timeouts.


5.2.4.1 Initialization

LDP makes sure that the various prerequisites are met; for example, the system IP interface is operational, the label manager is operational, and there is memory available. It then allocates itself a pool of memory and initializes its databases.

5.2.4.2 Session Lifetime

In order for a targeted LDP (T-LDP) session to be established, an adjacency must be created. The LDP extended discovery mechanism requires hello messages to be exchanged between two peers for session establishment. After the adjacency establishment, session setup is attempted.

5.2.4.2.1 Adjacency Establishment

In the router, the adjacency management is done through the establishment of a Service Distribution Path (SDP) object, which is a service entity in the Nokia service model.

The Nokia service model uses logical entities that interact to provide a service. The
service model requires the service provider to create configurations for four main
entities:

• Customers
• Services
• Service Access Paths (SAPs) on the local routers
• Service Distribution Points (SDPs) that connect to one or more remote routers.

An SDP is the network-side termination point for a tunnel to a remote router. An SDP
defines a local entity that includes the system IP address of the remote routers and
a path type. Each SDP comprises:

• The SDP ID
• The transport encapsulation type, either MPLS or GRE
• The far-end system IP address

If the SDP is identified as using LDP signaling, then an LDP extended hello
adjacency is attempted.


If another SDP is created to the same remote destination, and if LDP signaling is
enabled, no further action is taken, since only one adjacency and one LDP session
exists between the pair of nodes.

An SDP is a uni-directional object, so a pair of SDPs pointing at each other must be configured in order for an LDP adjacency to be established. Once an adjacency is established, it is maintained through periodic hello messages.

5.2.4.2.2 Session Establishment

When the LDP adjacency is established, the session setup follows as per the LDP
specification. Initialization and keepalive messages complete the session setup,
followed by address messages to exchange all interface IP addresses. Periodic
keepalives or other session messages maintain the session liveliness.

Since TCP is back-pressured by the receiver, it is necessary to be able to push that back-pressure all the way into the protocol. Packets that cannot be sent are buffered on the session object and re-attempted as the back-pressure eases.

5.2.5 Label Exchange


Label exchange is initiated by the service manager. When an SDP is attached to a
service (for example, the service gets a transport tunnel), a message is sent from the
service manager to LDP. This causes a label mapping message to be sent.
Additionally, when the SDP binding is removed from the service, the VC label is
withdrawn. The peer must send a label release to confirm that the label is not in use.

5.2.5.1 Other Reasons for Label Actions

Other reasons for label actions include:

• MTU changes: LDP withdraws the previously assigned label, and re-signals the
FEC with the new MTU in the interface parameter.
• Clear labels: When a service manager command is issued to clear the labels,
the labels are withdrawn, and new label mappings are issued.
• SDP down: When an SDP goes administratively down, the VC label associated
with that SDP for each service is withdrawn.
• Memory allocation failure: If there is no memory to store a received label, it is
released.


• VC type unsupported: When an unsupported VC type is received, the received label is released.

5.2.5.2 Cleanup

LDP closes all sockets, frees all memory, and shuts down all its tasks when it is
deleted, so its memory usage is 0 when it is not running.

5.2.5.3 Configuring Implicit Null Label

The implicit null label option allows an egress LER to receive MPLS packets from the previous hop without the outer LSP label; this operation of the previous hop is referred to as penultimate hop popping (PHP). The option is signaled by the egress LER to the previous hop during the FEC signaling by the LDP control protocol.

Enable the use of the implicit null option, for all LDP FECs for which this node is the
egress LER, using the following command:

config>router>ldp>implicit-null-label

When the user changes the implicit null configuration option, LDP withdraws all the
FECs and re-advertises them using the new label value.
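As an illustration, the following minimal classic-CLI sketch enables the option using the command path shown above; the node prompt is illustrative only, and the resulting FEC withdrawal and re-advertisement happen automatically:

*A:PE-1# configure router ldp
*A:PE-1>config>router>ldp# implicit-null-label
*A:PE-1>config>router>ldp# exit all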

5.2.6 Global LDP Filters


Both inbound and outbound LDP label binding filtering are supported.

Inbound filtering is performed by way of the configuration of an import policy to control the label bindings an LSR accepts from its peers. Label bindings can be filtered based on:

• Prefix-list: Match on bindings with the specified prefix/prefixes.


• Neighbor: Match on bindings received from the specified peer.

The default import policy is to accept all FECs received from peers.


Outbound filtering is performed by way of the configuration of an export policy. The global LDP export policy can be used to explicitly originate label bindings for local interfaces. The global LDP export policy does not filter out or stop propagation of any FEC received from neighbors; use the LDP peer export prefix policy for this purpose. The system IP address and static FECs cannot be blocked using an export policy.

Export policy enables configuration of a policy to advertise label bindings based on:

• Direct: All local subnets.


• Prefix-list: Match on bindings with the specified prefix or prefixes.

The default export policy is to originate label bindings for the system address only and to propagate all FECs received from other LDP peers.

Finally, the 'neighbor interface' statement inside a global import policy is not
considered by LDP.
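As a hedged illustration only (exact policy syntax may vary by release, and the prefix-list name, policy name, prefix value, and default-action keyword shown here are assumptions rather than commands taken from this section), a route policy matching on a prefix list could be defined under config>router>policy-options and then referenced as the global LDP import policy to accept only bindings within a given range:

    prefix-list "ldp-fecs"
        prefix 10.20.0.0/16 longer
    exit
    policy-statement "ldp-import-filter"
        entry 10
            from
                prefix-list "ldp-fecs"
            exit
            action accept
            exit
        exit
        default-action reject
    exit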

5.2.6.1 Per LDP Peer FEC Import and Export Policies

The FEC prefix export policy provides a way to control which FEC prefixes, received from other LDP and T-LDP peers, are re-distributed to this LDP peer.

The user configures the FEC prefix export policy using the following command:

config>router>ldp>session-params>peer>export-prefixes policy-name

By default, all FEC prefixes are exported to this peer.

The FEC prefix import policy provides a means of controlling which FEC prefixes received from this LDP peer are imported and installed by LDP on this node. If resolved, these FEC prefixes are then re-distributed to other LDP and T-LDP peers.

The user configures the FEC prefix import policy using the following command:

config>router>ldp>session-params>peer>import-prefixes policy-name

By default, all FEC prefixes are imported from this peer.
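A minimal sketch of applying both per-peer policies is shown below; the peer address and policy names are hypothetical, the referenced policies are assumed to already exist under config>router>policy-options, and session-params is assumed to expand to session-parameters in the classic CLI:

*A:LSR-1# configure router ldp session-parameters peer 10.0.0.2
*A:LSR-1>config>router>ldp>session-params>peer# export-prefixes "fecs-to-peer"
*A:LSR-1>config>router>ldp>session-params>peer# import-prefixes "fecs-from-peer"
*A:LSR-1>config>router>ldp>session-params>peer# exit all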


5.2.7 Configuring Multiple LDP LSR ID


The multiple LDP LSR-ID feature provides the ability to configure and initiate multiple Targeted LDP (T-LDP) sessions on the same system using different LDP LSR-IDs. Without this feature, all T-LDP sessions must have an LSR-ID that matches the system interface address. This feature continues to use the system interface by default, but it also allows the address of any other network interface, including a loopback, to be used as the LSR-ID on a per-T-LDP-session basis. The LDP control plane will not allow more than a single T-LDP session with different local LSR-ID values to the same LSR-ID in a remote node.

An SDP of type LDP can use a provisioned targeted session with the local LSR-ID
set to any network IP for the T-LDP session to the peer matching the SDP far-end
address. If, however, no targeted session has been explicitly pre-provisioned to the
far-end node under LDP, then the SDP will auto-establish one but will use the system
interface address as the local LSR-ID.

An SDP of type RSVP must use an RSVP LSP with the destination address matching
the remote node LDP LSR-ID. An SDP of type GRE can only use a T-LDP session
with a local LSR-ID set to the system interface.

The multiple LDP LSR-ID feature also provides the ability to use the address of the
local LDP interface, or any other network IP interface configured on the system, as
the LSR-ID to establish link LDP Hello adjacency and LDP session with directly
connected LDP peers. The network interface can be a loopback or not.

Link LDP sessions to all peers discovered over a given LDP interface share the same
local LSR-ID. However, LDP sessions on different LDP interfaces can use different
network interface addresses as their local LSR-ID.

By default, the link and targeted LDP sessions to a peer use the system interface
address as the LSR-ID unless explicitly configured using this feature. The system
interface must always be configured on the router or else the LDP protocol will not
come up on the node. There is no requirement to include it in any routing protocol.

When an interface other than system is used as the LSR-ID, the transport connection
(TCP) for the link or targeted LDP session will also use the address of that interface
as the transport address.


5.2.8 T-LDP Hello Reduction


This feature implements a new mechanism to suppress the transmission of the Hello
messages following the establishment of a Targeted LDP session between two LDP
peers. The Hello adjacency of the targeted session does not require periodic
transmission of Hello messages as in the case of a link LDP session. In link LDP, one
or more peers can be discovered over a given network IP interface and as such, the
periodic transmission of Hello messages is required to discover new peers in addition
to the periodic Keep-Alive message transmission to maintain the existing LDP
sessions. A Targeted LDP session is established to a single peer. Thus, once the
Hello Adjacency is established and the LDP session is brought up over a TCP
connection, Keep-Alive messages are sufficient to maintain the LDP session.

When this feature is enabled, the targeted Hello adjacency is brought up by advertising the Hold-Time value the user configured in the Hello timeout parameter for the targeted session. The LSR node will then start advertising an exponentially
increasing Hold-Time value in the Hello message as soon as the targeted LDP
session to the peer is up. Each new incremented Hold-Time value is sent in a number
of Hello messages equal to the value of the Hello reduction factor before the next
exponential value is advertised. This provides time for the two peers to settle on the
new value. When the Hold-Time reaches the maximum value of 0xffff (binary 65535),
the two peers will send Hello messages at a frequency of every [(65535-1)/local
helloFactor] seconds for the lifetime of the targeted-LDP session (for example, if the
local Hello Factor is three (3), then Hello messages will be sent every 21844
seconds).

Both LDP peers must be configured with this feature to gradually bring their advertised Hold-Time up to the maximum value. If one of the LDP peers is not, the frequency of the Hello messages of the targeted Hello adjacency will continue to be governed by the smaller of the two Hold-Time values. This feature complies with draft-pdutta-mpls-tldp-hello-reduce.

5.2.9 Tracking a T-LDP Peer with BFD


BFD tracking of an LDP session associated with a T-LDP adjacency allows for faster detection of the liveliness of the session by registering the peer transport address of an LDP session with a BFD session. The source or destination address of the BFD session is the local or remote transport address of the targeted or link (if the peers are directly connected) Hello adjacency which triggered the LDP session.


By enabling BFD for a selected targeted session, the state of that session is tied to the state of the underlying BFD session between the two nodes. The parameters used for the BFD session are set with the BFD command under the IP interface which has the source address of the TCP connection.

5.2.10 Link LDP Hello Adjacency Tracking with BFD


LDP can only track an LDP peer using the Hello and Keep-Alive timers. If an IGP protocol registers with BFD on an IP interface to track a neighbor, and the BFD session times out, the next-hops for prefixes advertised by the neighbor are no longer resolved. This, however, does not bring down the link LDP session to the peer, since the LDP peer is not directly tracked by BFD.

In order to properly track the link LDP peer, LDP needs to track the Hello adjacency
to its peer by registering with BFD.

The user effects Hello adjacency tracking with BFD by enabling BFD on an LDP
interface:

config>router>ldp>if-params>if>enable-bfd [ipv4][ipv6]

The parameters used for the BFD session, i.e., transmit-interval, receive-interval,
and multiplier, are those configured under the IP interface:

config>router>if>bfd
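A minimal sketch combining both commands above follows; the interface name and timer values (in milliseconds) are hypothetical, and if-params and if are assumed to expand to interface-parameters and interface in the classic CLI:

*A:LSR-1# configure router interface "to-peer-1" bfd 300 receive 300 multiplier 3
*A:LSR-1# configure router ldp interface-parameters interface "to-peer-1" enable-bfd ipv4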

The source or destination address of the BFD session is the local or remote address
of link Hello adjacency. When multiple links exist to the same LDP peer, a Hello
adjacency is established over each link. However, a single LDP session will exist to
the peer and will use a TCP connection over one of the link interfaces. Also, a
separate BFD session should be enabled on each LDP interface. If a BFD session
times out on a specific link, LDP will immediately bring down the Hello adjacency on
that link. In addition, if there are FECs that have their primary NHLFE over this
link, LDP triggers the LDP FRR procedures by sending to IOM and line cards the
neighbor/next-hop down message. This will result in moving the traffic of the
impacted FECs to an LFA next-hop on a different link to the same LDP peer or to an
LFA backup next-hop on a different LDP peer depending on the lowest backup cost
path selected by the IGP SPF.

As soon as the last Hello adjacency goes down as a result of the BFD timing out, the
LDP session goes down and the LDP FRR procedures will be triggered. This will
result in moving the traffic to an LFA backup next-hop on a different LDP peer.


5.2.11 LDP LSP Statistics


The RSVP-TE LSP statistics feature is extended to LDP to provide the following counters:

• Per-forwarding-class forwarded in-profile packet count


• Per-forwarding-class forwarded in-profile byte count
• Per-forwarding-class forwarded out-of-profile packet count
• Per-forwarding-class forwarded out-of-profile byte count

The counters are available for the egress data path of an LDP FEC at ingress LER
and at LSR. Because an ingress LER is also potentially an LSR for an LDP FEC,
combined egress data path statistics will be provided whenever applicable.

5.2.12 MPLS Entropy Label


The router supports the MPLS entropy label (RFC 6790) on LDP LSPs used for IGP
and BGP shortcuts. This allows LSR nodes in a network to load-balance labeled
packets in a much more granular fashion than allowed by simply hashing on the
standard label stack.

5.3 TTL Security for BGP and LDP


The BGP TTL Security Hack (BTSH) was originally designed to protect the BGP
infrastructure from CPU utilization-based attacks. It is derived from the fact that the
vast majority of ISP eBGP peerings are established between adjacent routers. Since
TTL spoofing is considered nearly impossible, a mechanism based on an expected
TTL value can provide a simple and reasonably robust defense from infrastructure
attacks based on forged BGP packets.

While TTL Security Hack (TSH) is most effective in protecting directly connected
peers, it can also provide a lower level of protection to multi-hop sessions. When a
multi-hop BGP session is required, the expected TTL value can be set to 255 minus
the configured range-of-hops. This approach can provide a qualitatively lower degree of security for BGP (for example, a DoS attack could, theoretically, be launched by compromising a box in the path). However, BTSH will catch a vast majority of
observed distributed DoS (DDoS) attacks against eBGP.

TSH can be used to protect LDP peering sessions as well. For details, see draft-
chen-ldp-ttl-xx.txt, TTL-Based Security Option for LDP Hello Message.


The TSH implementation supports the ability to configure TTL security per BGP/LDP
peer and evaluate (in hardware) the incoming TTL value against the configured TTL
value. If the incoming TTL value is less than the configured TTL value, the packets
are discarded and a log is generated.

5.4 ECMP Support for LDP


ECMP support for LDP performs load balancing for LDP-based LSPs by having multiple outgoing next-hops for a given IP prefix on ingress and transit LSRs.

An LSR that has multiple equal cost paths to a given IP prefix can receive an LDP
label mapping for this prefix from each of the downstream next-hop peers. As the
LDP implementation uses the liberal label retention mode, it retains all the labels for
an IP prefix received from multiple next-hop peers.

Without ECMP support for LDP, only one of these next-hop peers will be selected
and installed in the forwarding plane. The algorithm used to determine the next-hop
peer to be selected involves looking up the route information obtained from the RTM
for this prefix and finding the first valid LDP next-hop peer (for example, the first
neighbor in the RTM entry from which a label mapping was received). If, for some
reason, the outgoing label to the installed next-hop is no longer valid, say the session
to the peer is lost or the peer withdraws the label, a new valid LDP next-hop peer will
be selected out of the existing next-hop peers and LDP will reprogram the forwarding
plane to use the label sent by this peer.

With ECMP support, all the valid LDP next-hop peers, those that sent a label
mapping for a given IP prefix, will be installed in the forwarding plane. In both cases,
ingress LER and transit LSR, an ingress label will be mapped to the next-hops that
are in the RTM and from which a valid mapping label has been received. The
forwarding plane will then use an internal hashing algorithm to determine how the
traffic will be distributed amongst these multiple next-hops, assigning each “flow” to
a particular next-hop.

The hash algorithm at LER and transit LSR are described in the LAG and ECMP
Hashing section of the SR OS Interface Guide.


5.4.1 Label Operations


If an LSR is the ingress for a given IP prefix, LDP programs a push operation for the
prefix in the forwarding engine. This creates an LSP ID to the Next Hop Label
Forwarding Entry (NHLFE) (LTN) mapping and an LDP tunnel entry in the forwarding
plane. LDP will also inform the Tunnel Table Manager (TTM) of this tunnel. Both the
LTN entry and the tunnel entry will have a NHLFE for the label mapping that the LSR
received from each of its next-hop peers.

If the LSR is to behave as a transit for a given IP prefix, LDP will program a swap
operation for the prefix in the forwarding engine. This involves creating an Incoming
Label Map (ILM) entry in the forwarding plane. The ILM entry will have to map an
incoming label to possibly multiple NHLFEs. If an LSR is an egress for a given IP
prefix, LDP will program a POP entry in the forwarding engine. This too will result in
an ILM entry being created in the forwarding plane but with no NHLFEs.

When unlabeled packets arrive at the ingress LER, the forwarding plane will consult
the LTN entry and will use a hashing algorithm to map the packet to one of the
NHLFEs (push label) and forward the packet to the corresponding next-hop peer. For
labeled packets arriving at a transit or egress LSR, the forwarding plane will consult
the ILM entry and either use a hashing algorithm to map it to one of the NHLFEs if
they exist (swap label) or simply route the packet if there are no NHLFEs (pop label).

Static FEC swap will not be activated unless there is a matching route in the system route table that also matches the user-configured static FEC next-hop.

5.5 Unnumbered Interface Support in LDP


This feature allows LDP to establish Hello adjacency and to resolve unicast and
multicast FECs over unnumbered LDP interfaces.

This feature also extends the support of lsp-ping, p2mp-lsp-ping, and ldp-treetrace
to test an LDP unicast or multicast FEC which is resolved over an unnumbered LDP
interface.


5.5.1 Feature Configuration


This feature does not introduce a new CLI command for adding an unnumbered
interface into LDP. Rather, the fec-originate command is extended to specify the
interface name because an unnumbered interface does not have an IP address of its
own. The user can, however, specify the interface name for numbered interfaces.

See the CLI section for the changes to the fec-originate command.

5.5.2 Operation of LDP over an Unnumbered IP Interface


Consider the setup shown in Figure 60.

Figure 60 LDP Adjacency and Session over Unnumbered Interface

[Figure 60 shows LSR-A:0 and LSR-B:0 connected by two unnumbered point-to-point interfaces, I/F 1 and I/F 2.]

LSR A and LSR B have the following LDP identifiers respectively:

<LSR Id=A> : <label space id=0>

<LSR Id=B> : <label space id=0>

There are two P2P unnumbered interfaces between LSR A and LSR B. These
interfaces are identified on each system with their unique local link identifier. In other
words, the combination of {Router-ID, Local Link Identifier} uniquely identifies the
interface in OSPF or IS-IS throughout the network.

A borrowed IP address is also assigned to the interface to be used as the source address of IP packets which need to be originated from the interface. The borrowed IP address defaults to the system loopback interface address, A and B respectively in this setup. The user can change the borrowed IP interface to any configured IP interface, loopback or not, by applying the following command:

config>router>if>unnumbered [<ip-int-name | ip-address>]
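For example, the following sketch points the borrowed address of an unnumbered interface at the system interface; the interface name is hypothetical, and "system" is already the default, so this is shown for illustration only:

*A:LSR-A# configure router interface "to-LSR-B-1" unnumbered "system"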

When the unnumbered interface is added into LDP, it will have the following
behavior.


5.5.2.1 Link LDP

Hello adjacency will be brought up using a link Hello packet with the source IP address set to the interface borrowed IP address and the destination IP address set to 224.0.0.2.

As a consequence, Hello packets with the same source IP address should be accepted when received over parallel unnumbered interfaces from the same peer LSR-ID. The corresponding Hello adjacencies would be associated with a single LDP session.

The transport address for the TCP connection, which is encoded in the Hello packet, will always be set to the LSR-ID of the node regardless of whether the user enabled the interface option under config>router>ldp>if-params>if>ipv4>transport-address.

The user can configure the local-lsr-id option on the interface and change the value
of the LSR-ID to either the local interface or to some other interface name, loopback
or not, numbered or not. If the local interface is selected or the provided interface
name corresponds to an unnumbered IP interface, the unnumbered interface
borrowed IP address will be used as the LSR-ID. In all cases, the transport address
for the LDP session will be updated to the new LSR-ID value but the link Hello
packets will continue to use the interface borrowed IP address as the source IP
address.

The LSR with the highest transport address, i.e., LSR-ID in this case, will bootstrap
the TCP connection and LDP session.

Source and destination IP addresses of LDP packets are the transport addresses,
i.e., LDP LSR-IDs of systems A and B in this case.

5.5.2.2 Targeted LDP

Source and destination addresses of targeted Hello packet are the LDP LSR-IDs of
systems A and B.

The user can configure the local-lsr-id option on the targeted session and change the
value of the LSR-ID to either the local interface or to some other interface name,
loopback or not, numbered or not. If the local interface is selected or the provided
interface name corresponds to an unnumbered IP interface, the unnumbered
interface borrowed IP address will be used as the LSR-ID. In all cases, the transport
address for the LDP session and the source IP address of targeted Hello message
will be updated to the new LSR-ID value.

The LSR with the highest transport address, i.e., LSR-ID in this case, will bootstrap
the TCP connection and LDP session.


Source and destination IP addresses of LDP messages are the transport addresses,
i.e., LDP LSR-IDs of systems A and B in this case.

5.5.2.3 FEC Resolution

LDP will advertise/withdraw unnumbered interfaces using the Address/Address-Withdraw message. The borrowed IP address of the interface is used.

A FEC can be resolved to an unnumbered interface in the same way as it is resolved to a numbered interface. The outgoing interface and next-hop are looked up in RTM cache. The next-hop consists of the router-id and link identifier of the interface at the peer LSR.

LDP FEC ECMP next-hops over a mix of unnumbered and numbered interfaces are supported.

All LDP FEC types are supported.

The fec-originate command is supported when the next-hop is over an unnumbered interface.

All LDP features are supported except for the following:

• BFD cannot be enabled on an unnumbered LDP interface. This is a consequence of the fact that BFD is not supported on an unnumbered IP interface on the system.
• As a consequence of the previous restriction, LDP FRR procedures will not be triggered via a BFD session timeout but only by physical failures and local interface down events.
• Unnumbered IP interfaces cannot be added into LDP global and peer prefix
policies.

5.6 LDP over RSVP Tunnels


LDP over RSVP-TE provides end-to-end tunnels that have two important properties, fast reroute and traffic engineering, which are not available in LDP. LDP over RSVP-TE is aimed at large networks (over 100 nodes). Simply using end-to-end RSVP-TE tunnels will not scale: while an LER may not have that many tunnels, any transit node will potentially have thousands of LSPs, and if each transit node also has to deal with detours or bypass tunnels, this number can overly burden the LSR.


LDP over RSVP-TE allows tunneling of user packets using an LDP LSP inside an RSVP LSP. The main application of this feature is for deployment of MPLS based
services, for example, VPRN, VLL, and VPLS services, in large scale networks
across multiple IGP areas without requiring full mesh of RSVP LSPs between PE
routers.

Figure 61 LDP over RSVP Application

[Figure 61 shows PE1 in IP/MPLS metro network Area 1 connected through ABR1, the IP/MPLS core network Area 3, and ABR2 to PE2 in metro network Area 2, using RSVP-TE LSPs LSP1, LSP2, and LSP3; backup LSPs LSP1a and LSP2a run through ABR3 and ABR4.]

The network displayed in Figure 61 consists of two metro areas, Area 1 and 2
respectively, and a core area, Area 3. Each area makes use of TE LSPs to provide
connectivity between the edge routers. In order to enable services between PE1 and
PE2 across the three areas, LSP1, LSP2, and LSP3 are set up using RSVP-TE.
There are in fact 6 LSPs required for bidirectional operation but we will refer to each
bi-directional LSP with a single name, for example, LSP1. A targeted LDP (T-LDP)
session is associated with each of these bidirectional LSP tunnels. That is, a T-LDP
adjacency is created between PE1 and ABR1 and is associated with LSP1 at each
end. The same is done for the LSP tunnel between ABR1 and ABR2, and finally
between ABR2 and PE2. The loopback address of each of these routers is
advertised using T-LDP. Similarly, backup bidirectional LDP over RSVP tunnels,
LSP1a and LSP2a, are configured by way of ABR3.

This setup effectively creates an end-to-end LDP connectivity which can be used by
all PEs to provision services. The RSVP LSPs are used as a transport vehicle to
carry the LDP packets from one area to another. Only the user packets are tunneled
over the RSVP LSPs. The T-LDP control messages are still sent unlabeled using the
IGP shortest path.

In this application, the bi-directional RSVP LSP tunnels are not treated as IP
interfaces and are not advertised back into the IGP. A PE must always rely on the
IGP to look up the next hop for a service packet. LDP-over-RSVP introduces a new
tunnel type, tunnel-in-tunnel, in addition to the existing LDP tunnel and RSVP tunnel
types. If multiple tunnel types match the destination PE FEC lookup, LDP will prefer
an LDP tunnel over an LDP-over-RSVP tunnel by default.


The design in Figure 61 allows a service provider to build and expand each area
independently without requiring a full mesh of RSVP LSPs between PEs across the
three areas.

To participate in a VPRN service, PE1 and PE2 perform the auto-bind to LDP. The LDP label which represents the target PE loopback address is used below the RSVP LSP label. Therefore, a three-label stack is required.

In order to provide a VLL service, PE1 and PE2 are still required to set up a targeted LDP session directly between them. Again, a three-label stack is required: the RSVP LSP label, followed by the LDP label for the loopback address of the destination PE, and finally the pseudowire label (VC label).

This implementation supports a variation of the application in Figure 61, in which Area 1 is an LDP area. In that case, PE1 will push a two-label stack, while ABR1 will swap the LDP label and push the RSVP label as illustrated in Figure 62. LDP-over-RSVP tunnels can also be used as IGP shortcuts.

Figure 62 LDP over RSVP Application Variant

[Figure 62 shows the same topology as Figure 61, except that metro network Area 1 runs LDP (LDP LSP1 from PE1 to ABR1) while Areas 2 and 3 run RSVP (LSP2 and LSP3); backup paths LDP LSP1a and LSP2a run through ABR3 and ABR4.]

5.6.1 Signaling and Operation


• LDP Label Distribution and FEC Resolution
• Default FEC Resolution Procedure
• FEC Resolution Procedure When prefer-tunnel-in-tunnel is Enabled


5.6.1.1 LDP Label Distribution and FEC Resolution

The user creates a targeted LDP (T-LDP) session to an ABR or the destination PE.
This results in LDP hellos being sent between the two routers. These messages are
sent unlabeled over the IGP path. Next, the user enables LDP tunneling on this T-
LDP session and optionally specifies a list of LSP names to associate with this T-LDP
session. By default, all RSVP LSPs which terminate on the T-LDP peer are
candidates for LDP-over-RSVP tunnels. At this point in time, the LDP FECs resolving
to RSVP LSPs are added into the Tunnel Table Manager as tunnel-in-tunnel type.
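As a hedged illustration of these two steps (the peer address and LSP name are hypothetical, and targ-session is assumed to expand to targeted-session in the classic CLI):

*A:PE-1# configure router ldp targeted-session peer 10.20.1.3
*A:PE-1>config>router>ldp>targ-session>peer# tunneling
*A:PE-1>config>router>ldp>targ-session>peer>tunneling# lsp "to-ABR1"
*A:PE-1>config>router>ldp>targ-session>peer>tunneling# exit all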

If LDP is running on regular interfaces also, the prefixes LDP learns are going to be
distributed over both the T-LDP session as well as regular IGP interfaces. LDP FEC
prefixes with a subnet mask of 32 or lower will be resolved over RSVP LSPs.
The policy controls which prefixes go over the T-LDP session, for example, only /32
prefixes, or a particular prefix range.

LDP-over-RSVP works with both OSPF and ISIS. These protocols include the
advertising router when adding an entry to the RTM. LDP-over-RSVP tunnels can be
used as shortcuts for BGP next-hop resolution.

5.6.1.2 Default FEC Resolution Procedure

When LDP tries to resolve a prefix received over a T-LDP session, it performs a
lookup in the Routing Table Manager (RTM). This lookup returns the next hop to the
destination PE and the advertising router (ABR or destination PE itself). If the next-
hop router advertised the same FEC over link-level LDP, LDP will prefer the LDP
tunnel by default unless the user explicitly changed the default preference using the
system wide prefer-tunnel-in-tunnel command. If the LDP tunnel becomes
unavailable, LDP will select an LDP-over-RSVP tunnel if available.

When searching for an LDP-over-RSVP tunnel, LDP selects the advertising router(s)
with best route. If the advertising router matches the T-LDP peer, LDP then performs
a second lookup for the advertising router in the Tunnel Table Manager (TTM) which
returns the user-configured RSVP LSP with the best metric. If there is more than one configured LSP with the best metric, LDP selects the first available LSP.

If all user configured RSVP LSPs are down, no more action is taken. If the user did
not configure any LSPs under the T-LDP session, the lookup in TTM will return the
first available RSVP LSP which terminates on the advertising router with the lowest
metric.


5.6.1.3 FEC Resolution Procedure When prefer-tunnel-in-tunnel is Enabled

When LDP tries to resolve a prefix received over a T-LDP session, it performs a
lookup in the Routing Table Manager (RTM). This lookup returns the next hop to the
destination PE and the advertising router (ABR or destination PE itself).

When searching for an LDP-over-RSVP tunnel, LDP selects the advertising router(s)
with best route. If the advertising router matches the targeted LDP peer, LDP then
performs a second lookup for the advertising router in the Tunnel Table Manager
(TTM) which returns the user-configured RSVP LSP with the best metric. If there is more than one configured LSP with the best metric, LDP selects the first available LSP.

If all user configured RSVP LSPs are down, then an LDP tunnel will be selected if
available.

If the user did not configure any LSPs under the T-LDP session, a lookup in TTM will
return the first available RSVP LSP which terminates on the advertising router. If
none are available, then an LDP tunnel will be selected if available.

5.6.2 Rerouting Around Failures


Every failure in the network can be protected against, except for the ingress and
egress PEs. All other constructs have protection available. These constructs are
LDP-over-RSVP tunnel and ABR.

• LDP-over-RSVP Tunnel Protection


• ABR Protection

5.6.2.1 LDP-over-RSVP Tunnel Protection

An RSVP LSP can deal with a failure in two ways:

• If the LSP is a loosely routed LSP, then RSVP will find a new IGP path around
the failure, and traffic will follow this new path. This may involve some churn in
the network if the LSP comes down and then gets re-routed. The tunnel damping
feature was implemented on the LSP so that all the dependent protocols and
applications do not flap unnecessarily.


• If the LSP is a CSPF-computed LSP with the fast reroute option enabled, then
RSVP will switch to the detour path very quickly. From that point, a new LSP will
be attempted from the head-end (global revertive). When the new LSP is in
place, the traffic switches over to the new LSP with make-before-break.

5.6.2.2 ABR Protection

If an ABR fails, then routing around the ABR requires that a new next-hop LDP-over-RSVP tunnel be found to a backup ABR. When an ABR fails, the T-LDP adjacency to it also fails. Eventually, the backup ABR becomes the new next hop (after SPF converges), and LDP learns of the new next-hop and can reprogram the new path.

5.7 LDP over RSVP Without Area Boundary


The LDP over RSVP capability set includes the ability to stitch LDP-over-RSVP
tunnels at internal (non-ABR) OSPF and IS-IS routers.

Figure 63 LDP over RSVP Without ABR Stitching Point


[Figure 63 shows router A reaching destination “Dest” behind router X. LSP1 follows the path A>B>D>F>G, LSP2 follows A>C>E, and LSP3 follows A>C; the link IGP costs make A>C>E>G>X the SPF path from A to “Dest”.]


In Figure 63, assume that the user wants to use LDP over RSVP between router A
and destination “Dest”. The first thing that happens is that either OSPF or IS-IS will
perform an SPF calculation resulting in an SPF tree. This tree specifies the lowest
possible cost to the destination. In the example shown, the destination “Dest” is
reachable at the lowest cost through router X. The SPF tree will have the following
path: A>C>E>G>X.

Using this SPF tree, router A will search for the endpoint that is closest (farthest/
highest cost from the origin) to “Dest” that is eligible. Assuming that all LSPs in the
above diagram are eligible, LSP endpoint G will be selected as it terminates on router
G while other LSPs only reach routers C and E, respectively.

IGP and LSP metrics associated with the various LSPs are ignored; only the tunnel endpoint matters to the IGP. The endpoint that terminates closest to “Dest” (highest IGP
path cost) will be selected for further selection of the LDP over RSVP tunnels to that
endpoint. The explicit path the tunnel takes may not match the IGP path that the SPF
computes.

If routers A and G have an additional LSP terminating on router G, there would now be two tunnels, both terminating on the same router closest to the final destination. For the IGP, the number of LSPs to G does not make any difference; all that matters is that there is at least one LSP to G. In this case, the LSP metric will be considered by LDP when deciding which LSP to stitch for the LDP over RSVP connection.

The IGP only passes endpoint information to LDP. LDP looks up the tunnel table for
all tunnels to that endpoint and picks up the one with the least tunnel metric. There
may be many tunnels with the same least cost. LDP FEC prefixes with a subnet mask of 32 or lower will be resolved over RSVP LSPs within an area.

5.7.1 LDP over RSVP and ECMP


ECMP for LDP over RSVP is supported (also see ECMP Support for LDP). If ECMP
applies, all LSP endpoints found over the ECMP IGP path will be installed in the
routing table by the IGP for consideration by LDP. IGP costs to each endpoint may
differ because IGP selects the farthest endpoint per ECMP path.

LDP will choose the endpoint that has the highest cost in the route entry and will do further
tunnel selection over those endpoints. If there are multiple endpoints with equal
highest cost, then LDP will consider all of them.


5.8 Class-Based Forwarding of LDP Prefix Packets over IGP Shortcuts


Within large ISP networks, services are typically required from any PE to any PE and
can traverse multiple domains. Also, within a service, different traffic classes can co-
exist, each with specific requirements on latency and jitter.

The class-based forwarding feature enables service providers to control which LSPs, of the set of ECMP tunnel next-hops that resolve an LDP FEC prefix, are used to forward packets that were classified to specific forwarding classes, as opposed to normal ECMP spraying, where packets are sprayed over the whole set of LSPs.

5.8.1 Configuration and Operation


To achieve the behavior described above, the user must first enable the following:

• IGP shortcuts or forwarding adjacencies in the routing instance


• ECMP
• the advertisement of unicast prefix FECs on the Targeted LDP session to the
peer
• class-based forwarding in the LDP context

Enabling these options is achieved by using the following commands:

Either one of:

• config>router>isis>igp-shortcut
• config>router>ospf>igp-shortcut

Or one of:

• config>router>isis>advertise-tunnel-link
• config>router>ospf>advertise-tunnel-link

All of:

• config>router>ecmp max-ecmp-routes
• config>router>ldp>targ-session>peer>tunneling
• config>router>ldp>class-forwarding


If the user specifies LSP names under the tunneling option, these LSPs are not
directly used by LDP when the igp-shortcut option is enabled. With IGP shortcuts,
the set of tunnel next-hops is always provided by IGP in RTM. Consequently, the
class-based forwarding rules described below do not apply to this set of named LSPs
unless they were populated by IGP in RTM as next-hops for a prefix.

The prefer-tunnel-in-tunnel option must be disabled for class-based forwarding to apply to LDP prefixes which are the endpoint of the tunnels.

The user must also bind traffic classes to designated LSPs. This is performed using
the following commands:

config>router>mpls>lsp>class-forwarding>fc {be | l2 | af | l1 | h2 | ef | h1 | nc}

The user can also designate a given LSP as a Default LSP using the following
command:

config>router>mpls>lsp>class-forwarding>default-lsp

These two commands can also be passed in the lsp-template context such that
LSPs created from that template will have the assigned Class-Based Forwarding
(CBF) configurations.
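A hedged end-to-end sketch of the commands listed in this section follows; the LSP names, ECMP value, and peer address are hypothetical, and targ-session is assumed to expand to targeted-session in the classic CLI:

*A:PE-1# configure router isis igp-shortcut
*A:PE-1# configure router ecmp 4
*A:PE-1# configure router ldp targeted-session peer 10.20.1.3 tunneling
*A:PE-1# configure router ldp class-forwarding
*A:PE-1# configure router mpls lsp "lsp-ef-to-PE2" class-forwarding fc ef
*A:PE-1# configure router mpls lsp "lsp-be-to-PE2" class-forwarding default-lsp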

When an LDP prefix is resolved to a set of ECMP tunnel next hops, the selection
process by which the set is returned does not take into account any CBF
configuration. As such, even if the user has assigned CBF configurations to one or
more LSPs, those may not be selected as part of the set of ECMP tunnel next hops.
The assignments of CBF configurations are done on a per-LSP (or LSP template)
basis and, as such, are independent one from another. The evaluation of the
consistency of the assignments is performed by LDP at the time the FEC is resolved
to a set of ECMP tunnel next hops, and the following rules are applied.

• If no single LSP of the set has a CBF configuration assigned (either a forwarding
class or the default-lsp option), then normal ECMP spraying will occur over the
whole set of LSPs.
• If at least one LSP has a CBF configuration assigned, then class-based
forwarding will occur. If the default-lsp option has not been assigned to an LSP,
one will be automatically selected for that assignment by LDP. That LSP is the
one with the lowest tunnel-id amongst the set of LSPs with one (or more)
forwarding classes assigned to them.
• Multiple LSPs can have the same forwarding class assigned. However, for each
of these forwarding classes, only a single LSP will be used to forward packets
classified into this forwarding class. That LSP is the one with the lowest tunnel-
id amongst those sharing a given forwarding class.


• Similarly, multiple LSPs can have the default-lsp configuration assigned. Only
a single one will be designated to be the Default LSP. That LSP is the one with
the lowest tunnel-id amongst those with the default-lsp option assigned.

Therefore, under normal conditions, LDP prefix packets will be sprayed over a set of ECMP tunnel next-hops by selecting either the LSP to which the forwarding class of the packets is assigned, if one exists, or the Default LSP, if one does not exist. However, CBF is suspended until LDP downloads a new consistent set of tunnel next-hops for the FEC. For example, if the IOM detects that the LSP to which a forwarding class is assigned is not usable, it will switch the forwarding of packets classified to that forwarding class onto the Default LSP, and if the IOM detects that the Default LSP is not usable, then it will revert to regular ECMP spraying across all tunnels in the set of ECMP tunnel next-hops.

In case a user changes (adds, modifies, or deletes) the CBF configuration associated
to an LSP which has previously been selected as part of a set of ECMP tunnel next
hops, this change will automatically lead to an updated FEC resolution and CBF
consistency check and may lead to an update of the forwarding configuration.

This functionality only applies to LSR forwarding LDP FEC prefix packets over a set
of MPLS LSPs using IGP shortcuts. It does not apply to LER forwarding of shortcut
packets over LDP FEC which is resolved to a set of MPLS LSPs using IGP shortcuts,
nor does it apply to LER forwarding of packets of VPRN and Layer-2 services, which
use auto-binding to LDP when the LDP FEC is resolved to a set of MPLS LSPs using
IGP shortcuts.

5.8.2 Support of a Class Forwarding Policy with LDP-over-RSVP


An alternative configuration of the Class-Based Forwarding feature is supported
within the CLI using the concept of a class forwarding policy. A class forwarding
policy enables the mapping of FCs to up to four forwarding sets for the class-based
forwarding (CBF) of an LDP FEC over IGP shortcuts.

The following commands can be used to perform the configuration:

config>router>mpls>class-forwarding-policy policy-name

config>router>mpls>lsp>class-forwarding>forwarding-set policy policy-name set set-id

config>router>mpls>lsp-template>class-forwarding>forwarding-set policy policy-name set set-id
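A hedged sketch of this alternative follows; the policy name, LSP name, and set number are hypothetical, and the commands that map individual FCs to a forwarding set inside the policy are not shown in this section and are therefore omitted here:

*A:PE-1# configure router mpls class-forwarding-policy "cbf-policy-1"
*A:PE-1# configure router mpls lsp "lsp-to-PE2" class-forwarding forwarding-set policy "cbf-policy-1" set 1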


A default forwarding set forwards packets of an FC when all LSPs of the forwarding
set that the FC maps to become operationally down. The router uses the user-
configured default set as the initial default set; otherwise, the lowest numbered set is
elected as the default forwarding set in a class forwarding policy. When the last LSP
in a default forwarding set goes into an operationally down state, the router
designates the next lowest numbered set as the new default forwarding set.

The configuration of CBF parameters is mutually exclusive on a per-LSP basis. Only one of the following CLI commands can be used:

• CLI to directly map one or more FCs to the LSP as described in Configuration
and Operation.
• CLI to map a class-forwarding policy ID and a set ID to the LSP

MPLS populates the LSP in TTM. When the router resolves an LDP prefix FEC, the
subset of tunnel next-hops is selected from the full ECMP set based on the priority
set out below.

1. Select the subset of LSPs with the CBF configuration that uses the direct FC-to-
LSP mapping.
2. If no LSPs are found, select the subset of LSPs with the CBF configuration that
uses the class-forwarding policy.
3. If no LSPs are found with the appropriate configuration, use plain ECMP spraying on the full set of LSPs as per the existing behavior.

Class-based forwarding in LDP-over-RSVP using the forwarding class follows the same rules as in the CBF with direct mapping of FC-to-LSP to select at most one LSP per FC. A maximum of four LSPs, one per forwarding set, can be used by all eight FCs of an LDP FEC with the class-based forwarding CLI.

5.9 LDP ECMP Uniform Failover


LDP ECMP uniform failover allows the fast re-distribution by the ingress data path of
packets forwarded over an LDP FEC next-hop to other next-hops of the same FEC
when the currently used next-hop fails. The switchover is performed within a
bounded time, which does not depend on the number of impacted LDP ILMs (LSR
role) or service records (ingress LER role). The uniform failover time is only
supported for a single LDP interface or LDP next-hop failure event.

This feature complements the coverage provided by the LDP Fast-ReRoute (FRR)
feature, which provides a Loop-Free Alternate (LFA) backup next-hop with uniform
failover time. Prefixes that have ECMP next-hop protection are not programmed with an LFA backup next-hop, and vice-versa.


The LDP ECMP uniform failover feature builds on the concept of Protect Group ID
(PG-ID) introduced in LDP FRR. LDP assigns a unique PG-ID to all FECs that have
their primary Next-Hop Label Forwarding Entry (NHLFE) resolved to the same
outgoing interface and next-hop.

When an ILM record (LSR role) or LSPid-to-NHLFE (LTN) record (LER role) is
created on the IOM, it has the PG-ID of each ECMP NHLFE the FEC is using.

When a packet is received on this ILM/LTN, the hash routine selects one of the up to
32, or the ECMP value configured on the system, whichever is less, ECMP NHLFEs
for the FEC based on a hash of the packet’s header. If the selected NHLFE has its
PG-ID in DOWN state, the hash routine re-computes the hash to select a backup
NHLFE among the first 16, or the ECMP value configured on the system, whichever
is less, NHLFEs of the FEC, excluding the one that is in DOWN state. Packets of the
subset of flows that resolved to the failed NHLFE are thus sprayed among a
maximum of 16 NHLFEs.

LDP then re-computes the new ECMP set to exclude the failed path and downloads
it into the IOM. At that point, the hash routine will update the computation and begin
spraying over the updated set of NHLFEs.

LDP sends the DOWN state update of the PG-ID to the IOM when the outgoing
interface or a specific LDP next-hop goes down. This can be the result of any of the
following events:

• Interface failure detected directly.


• Failure of the LDP session detected via T-LDP BFD or LDP Keep-Alive.
• Failure of LDP Hello adjacency detected via link LDP BFD or LDP Hello.

In addition, PIP will send an interface down event to the IOM if the interface failure is
detected by other means than the LDP control plane or BFD. In that case, all PG-IDs
associated with this interface will have their state updated by the IOM.

When tunneling LDP packets over an RSVP LSP, it is the detection of the T-LDP
session going down, via BFD or Keep-Alive, which triggers the LDP ECMP uniform
failover procedures. If the RSVP LSP alone fails and the latter is not protected by
RSVP FRR, the failure event will trigger the re-resolution of the impacted FECs in the
slow path.

When a multicast LDP (mLDP) FEC is resolved over ECMP links to the same
downstream LDP LSR, the PG-ID DOWN state will cause packets of the FEC
resolved to the failed link to be switched to another link using the linear FRR
switchover procedures.

The LDP ECMP uniform failover is not supported in the following forwarding
contexts:


• VPLS BUM packets.


• Packets forwarded to an IES/VPRN spoke-interface.
• Packets forwarded towards VPLS spoke in routed VPLS.

Finally, the LDP ECMP uniform failover is only supported for a single LDP interface,
LDP next-hop, or peer failure event.

5.10 LDP Fast-Reroute for IS-IS and OSPF Prefixes


LDP Fast Re-Route (FRR) is a feature which allows the user to provide local
protection for an LDP FEC by pre-computing and downloading to the IOM or XCM
both a primary and a backup NHLFE for this FEC.

The primary NHLFE corresponds to the label of the FEC received from the primary
next-hop as per standard LDP resolution of the FEC prefix in RTM. The backup
NHLFE corresponds to the label received for the same FEC from a Loop-Free
Alternate (LFA) next-hop.

The LFA next-hop pre-computation by IGP is described in RFC 5286, “Basic Specification for IP Fast Reroute: Loop-Free Alternates”. LDP FRR relies on using
the label-FEC binding received from the LFA next-hop to forward traffic for a given
prefix as soon as the primary next-hop is not available. This means that a node
resumes forwarding LDP packets to a destination prefix without waiting for the
routing convergence. The label-FEC binding is received from the loop-free alternate
next-hop ahead of time and is stored in the Label Information Base since LDP on the
router operates in the liberal retention mode.

This feature requires that IGP performs the Shortest Path First (SPF) computation of
an LFA next-hop, in addition to the primary next-hop, for all prefixes used by LDP to
resolve FECs. IGP also populates both routes in the Routing Table Manager (RTM).

5.10.1 LDP FRR Configuration


The user enables Loop-Free Alternate (LFA) computation by SPF under the IS-IS or
OSPF routing protocol level:

config>router>isis>loopfree-alternate
config>router>ospf>loopfree-alternate.


The above commands instruct the IGP SPF to attempt to pre-compute both a primary
next-hop and an LFA next-hop for every learned prefix. When found, the LFA next-
hop is populated into the RTM along with the primary next-hop for the prefix.

Next, the user enables LDP to use the LFA next-hop by configuring the following option:

config>router>ldp>fast-reroute

When this command is enabled, LDP will use both the primary next-hop and LFA
next-hop, when available, for resolving the next-hop of an LDP FEC against the
corresponding prefix in the RTM. This will result in LDP programming a primary
NHLFE and a backup NHLFE into the IOM or XCM for each next-hop of a FEC prefix
for the purpose of forwarding packets over the LDP FEC.

Because LDP can detect the loss of a neighbor/next-hop independently, it is possible that it switches to the LFA next-hop while IGP is still using the primary next-hop. In order to avoid this situation, it is recommended to enable IGP-LDP synchronization on the LDP interface:

config>router>if>ldp-sync-timer seconds
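
As a simple illustration, the three commands above can be combined as follows; the interface name and the timer value are hypothetical.

    configure router isis loopfree-alternate
    configure router ldp fast-reroute
    configure router interface "to-P1" ldp-sync-timer 10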

5.10.1.1 Reducing the Scope of the LFA Calculation by SPF

The user can instruct IGP to exclude all interfaces participating in a specific IS-IS level or OSPF area from the SPF LFA computation. This provides a way of reducing the LFA SPF calculation where it is not needed.

config>router>isis>level>loopfree-alternate-exclude
config>router>ospf>area>loopfree-alternate-exclude

If IGP shortcuts are also enabled in LFA SPF, the LSPs with a destination address in that IS-IS level or OSPF area are also not included in the LFA SPF calculation.

The user can also exclude a specific IP interface from being included in the LFA SPF
computation by IS-IS or OSPF:

config>router>isis>interface>loopfree-alternate-exclude
config>router>ospf>area>interface>loopfree-alternate-exclude


When an interface is excluded from the LFA SPF in IS-IS, it is excluded in both level
1 and level 2. When the user excludes an interface from the LFA SPF in OSPF, it is
excluded in all areas. However, the above OSPF command can only be executed
under the area in which the specified interface is primary and once enabled, the
interface is excluded in that area and in all other areas where the interface is
secondary. If the user attempts to apply it to an area where the interface is
secondary, the command will fail.

Finally, the user can apply the same above commands for an OSPF instance within
a VPRN service:

config>service>vprn>ospf>area>loopfree-alternate-exclude
config>service>vprn>ospf>area>interface>loopfree-alternate-exclude
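
For example, an IS-IS level and a specific OSPF interface could be excluded from the LFA SPF as follows; the level number, area ID, and interface name are hypothetical.

    configure router isis level 2 loopfree-alternate-exclude
    configure router ospf area 0.0.0.0 interface "to-PE3" loopfree-alternate-exclude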

5.10.2 LDP FRR Procedures


The LDP FEC resolution when LDP FRR is not enabled operates as follows. When LDP receives a FEC-label binding for a prefix, it will resolve it by checking if the exact
prefix, or a longest match prefix when the aggregate-prefix-match option is
enabled in LDP, exists in the routing table and is resolved against a next-hop which
is an address belonging to the LDP peer which advertised the binding, as identified
by its LSR-id. When the next-hop is no longer available, LDP de-activates the FEC
and de-programs the NHLFE in the data path. LDP also immediately withdraws the labels it advertised for this FEC and deletes the ILM in the data path, unless the user configured the label-withdrawal-delay option to delay this operation. Traffic that is received while the ILM is still in the data path is dropped. When routing computes and populates the routing table with a new next-hop for the prefix, LDP resolves the FEC again and programs the data path accordingly.

When LDP FRR is enabled and an LFA backup next-hop exists for the FEC prefix in
RTM, or for the longest prefix the FEC prefix matches to when aggregate-prefix-
match option is enabled in LDP, LDP will resolve the FEC as above but will program
the data path with both a primary NHLFE and a backup NHLFE for each next-hop of
the FEC.

In order to perform a switchover to the backup NHLFE in the fast path, LDP follows the uniform FRR failover procedures, which are also supported with RSVP FRR.

When any of the following events occurs, LDP instructs in the fast path the IOM on
the line cards to enable the backup NHLFE for each FEC next-hop impacted by this
event. The IOM line cards do that by simply flipping a single state bit associated with
the failed interface or neighbor/next-hop:


1. An LDP interface goes operationally down, or is admin shutdown. In this case, LDP sends a neighbor/next-hop down message to the IOM line cards for each LDP peer it has adjacency with over this interface.
2. An LDP session to a peer went down as the result of the Hello or Keep-Alive
timer expiring over a specific interface. In this case, LDP sends a neighbor/next-
hop down message to the IOM line cards for this LDP peer only.
3. The TCP connection used by a link LDP session to a peer went down, for example, due to next-hop tracking of the LDP transport address in RTM, which brings down the LDP session. In this case, LDP sends a neighbor/next-hop down message to the IOM line cards for this LDP peer only.
4. A BFD session, enabled on a T-LDP session to a peer, times out and, as a result, the link LDP session to the same peer, which uses the same TCP connection as the T-LDP session, also goes down. In this case, LDP sends a neighbor/next-hop down message to the IOM line cards for this LDP peer only.
5. A BFD session enabled on the LDP interface to a directly connected peer times out and brings down the link LDP session to this peer. In this case, LDP sends a neighbor/next-hop down message to the IOM line cards for this LDP peer only.
BFD support on LDP interfaces is a new feature introduced for faster tracking of
link LDP peers.

The tunnel-down-damp-time option or the label-withdrawal-delay option, when enabled, does not cause the corresponding timer to be activated for a FEC as long as a backup NHLFE is still available.
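
As a sketch, both options are configured under the LDP context; the timer values shown are hypothetical and should be checked against the supported ranges in the command reference.

    configure router ldp label-withdrawal-delay 60
    configure router ldp tunnel-down-damp-time 20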

5.10.2.1 ECMP Considerations

Whenever the SPF computation determines that there is more than one primary next-hop for a prefix, it will not program any LFA next-hop in RTM. In this case, the LDP FEC will resolve to the multiple primary next-hops, which provide the required protection.

Also, when the system ECMP value is set to ecmp 1 or to no ecmp, which are equivalent and correspond to the default value, SPF can use the overflow ECMP links as LFA next-hops in these two cases.

5.10.2.2 LDP FRR and LDP Shortcut

When LDP FRR is enabled in LDP and the ldp-shortcut option is enabled at the router level, in-transit IPv4 packets and specific CPM-generated IPv4 control plane packets with a prefix resolving to the LDP shortcut are protected by the backup LDP NHLFE.
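
A minimal sketch of this combination, with the router-level ldp-shortcut option enabled alongside LDP FRR, might look as follows.

    configure router ldp fast-reroute
    configure router ldp-shortcut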


5.10.2.3 LDP FRR and LDP-over-RSVP

When LDP-over-RSVP is enabled, the RSVP LSP is modeled as an endpoint, i.e., the destination node of the LSP, and not as a link in the IGP SPF. Thus, it is not possible for IGP to compute a primary or alternate next-hop for a prefix whose FEC path is tunneled over the RSVP LSP. Only LDP is aware of the FEC tunneling, but it cannot determine on its own a loop-free backup path when it resolves the FEC to an RSVP LSP.

As a result, LDP does not activate the LFA next-hop it learned from RTM for a FEC prefix when the FEC is resolved to an RSVP LSP. LDP will activate the LFA next-hop as soon as the FEC is resolved to a direct primary next-hop.

LDP FEC tunneled over an RSVP LSP due to enabling the LDP-over-RSVP feature
will thus not support the LDP FRR procedures and will follow the slow path procedure
of prior implementation.

When the user enables the lfa-only option for an RSVP LSP, as described in Loop-
Free Alternate Calculation in the Presence of IGP shortcuts, the LSP will not be used
by LDP to tunnel an LDP FEC even when IGP shortcut is disabled but LDP-over-
RSVP is enabled in IGP.

5.10.2.4 LDP FRR and RSVP Shortcut (IGP Shortcut)

When an RSVP LSP is used as a shortcut by IGP, it is included by SPF as a P2P link and can also be optionally advertised into the rest of the network by IGP. Thus, the SPF is able to use a tunneled next-hop as the primary next-hop for a given prefix. LDP is also able to resolve a FEC to a tunneled next-hop when the IGP shortcut feature is enabled.

When both IGP shortcut and LFA are enabled in IS-IS or OSPF, and LDP FRR is also
enabled, then the following additional LDP FRR capabilities are supported:

1. A FEC which is resolved to a direct primary next-hop can be backed up by an LFA tunneled next-hop.
2. A FEC which is resolved to a tunneled primary next-hop will not have an LFA next-hop. It will rely on RSVP FRR for protection.

The LFA SPF is extended to use IGP shortcuts as LFA next-hops as explained in
Loop-Free Alternate Calculation in the Presence of IGP shortcuts.


5.10.3 IS-IS and OSPF Support for Loop-Free Alternate Calculation

SPF computation in IS-IS and OSPF is enhanced to compute LFA alternate routes for each learned prefix and populate them in RTM.

Figure 64 illustrates a simple network topology with point-to-point (P2P) interfaces and highlights three routes to reach router R5 from router R1.

Figure 64 Topology with Primary and LFA Routes

[Figure not reproduced: routers R1 to R5 connected with link metrics of 5 (the R2-R3 metric optionally set to 8), showing the primary route, the LFA link-protect route, and the LFA node-protect route from R1 to R5.]

The primary route is by way of R3. The LFA route by way of R2 has two equal cost
paths to reach R5. The path by way of R3 protects against failure of link R1-R3. This
route is computed by R1 by checking that the cost for R2 to reach R5 by way of R3
is lower than the cost by way of routers R1 and R3. This condition is referred to as the
loop-free criterion. R2 must be loop-free with respect to source node R1.

The path by way of R2 and R4 can be used to protect against the failure of router R3.
However, with the link R2-R3 metric set to 5, R2 sees the same cost to forward a
packet to R5 by way of R3 and R4. Thus R1 cannot guarantee that enabling the LFA
next-hop R2 will protect against R3 node failure. This means that the LFA next-hop
R2 provides link-protection only for prefix R5. If the metric of link R2-R3 is changed
to 8, then the LFA next-hop R2 provides node protection since a packet to R5 will
always go over R4. In other words it is required that R2 becomes loop-free with
respect to both the source node R1 and the protected node R3.

Consider the case where the primary next-hop uses a broadcast interface, as illustrated in Figure 65.


Figure 65 Example Topology with Broadcast Interfaces

[Figure not reproduced: the topology of Figure 64 with the R1-R3 link replaced by a broadcast segment represented by a pseudo-node (PN); link metrics are 5 (the R2-PN metric optionally set to 8), showing the primary route and the LFA link-protect route from R1 to R5.]

In order for next-hop R2 to be a link-protect LFA for route R5 from R1, it must be loop-
free with respect to the R1-R3 link’s Pseudo-Node (PN). However, since R2 also has a link to that PN, its cost to reach R5 by way of the PN or by way of router R4 is the same.
Thus R1 cannot guarantee that enabling the LFA next-hop R2 will protect against a
failure impacting link R1-PN since this may cause the entire subnet represented by
the PN to go down. If the metric of link R2-PN is changed to 8, then R2 next-hop will
be an LFA providing link protection.

The following are the detailed rules for this criterion as provided in RFC 5286:

• Rule 1: Link-protect LFA backup next-hop (primary next-hop R1-R3 is a P2P interface):
Distance_opt(R2, R5) < Distance_opt(R2, R1) + Distance_opt(R1, R5)
and,
Distance_opt(R2, R5) >= Distance_opt(R2, R3) + Distance_opt(R3, R5)
• Rule 2: Node-protect LFA backup next-hop (primary next-hop R1-R3 is a P2P
interface):
Distance_opt(R2, R5) < Distance_opt(R2, R1) + Distance_opt(R1, R5)
and,
Distance_opt(R2, R5) < Distance_opt(R2, R3) + Distance_opt(R3, R5)
• Rule 3: Link-protect LFA backup next-hop (primary next-hop R1-R3 is a
broadcast interface):
Distance_opt(R2, R5) < Distance_opt(R2, R1) + Distance_opt(R1, R5)
and,
Distance_opt(R2, R5) < Distance_opt(R2, PN) + Distance_opt(PN, R5)
where PN stands for the R1-R3 link Pseudo-Node.


For the case of a P2P interface, if SPF finds multiple LFA next-hops for a given primary next-hop, it follows this selection algorithm:

1. It will pick the node-protect type in favor of the link-protect type.
2. If there is more than one LFA next-hop within the selected type, then it will pick one based on the least cost.
3. If more than one LFA next-hop with the same cost results from Step 2, then SPF will select the first one. This is not a deterministic selection and will vary following each SPF calculation.

For the case of a broadcast interface, a node-protect LFA is not necessarily a link
protect LFA if the path to the LFA next-hop goes over the same PN as the primary
next-hop. Similarly, a link protect LFA may not guarantee link protection if it goes
over the same PN as the primary next-hop.

The selection algorithm when SPF finds multiple LFA next-hops for a given primary
next-hop is modified as follows:

1. The algorithm splits the LFA next-hops into two sets:


− The first set consists of LFA next-hops which do not go over the PN used
by primary next-hop.
− The second set consists of LFA next-hops which do go over the PN used by
the primary next-hop.
2. If there is more than one LFA next-hop in the first set, it will pick the node-protect
type in favor of the link-protect type.
3. If there is more than one LFA next-hop within the selected type, then it will pick
one based on the least cost.
4. If more than one LFA next-hop with equal cost results from Step 3, SPF will select the first one from the remaining set. This is not a deterministic selection and will vary following each SPF calculation.
5. If no LFA next-hop results from Step 4, SPF will rerun Steps 2 to 4 using the second set.

This algorithm is more flexible than strictly applying Rule 3 above, the link-protect rule in the presence of a PN specified in RFC 5286. A node-protect LFA which does not avoid the PN, and thus does not guarantee link protection, can still be selected as a last resort. Similarly, a link-protect LFA which does not avoid the PN may still be selected as a last resort.

Both the computed primary next-hop and LFA next-hop for a given prefix are programmed into RTM.


5.10.3.1 Loop-Free Alternate Calculation in the Presence of IGP shortcuts

In order to expand the coverage of the LFA backup protection in a network, RSVP
LSP based IGP shortcuts can be placed selectively in parts of the network and be
used as an LFA backup next-hop.

When IGP shortcut is enabled in IS-IS or OSPF on a given node, all RSVP LSPs originating on this node with a destination address matching the router-id of any other node in the network are included in the main SPF by default.

In order to limit the time it takes to compute the LFA SPF, the user must explicitly enable the use of an IGP shortcut as an LFA backup next-hop using one of the optional arguments of the existing LSP-level IGP shortcut command:

config>router>mpls>lsp>igp-shortcut [lfa-protect | lfa-only]

The lfa-protect option allows an LSP to be included in both the main SPF and the
LFA SPFs. For a given prefix, the LSP can be used either as a primary next-hop or
as an LFA next-hop but not both. If the main SPF computation selected a tunneled
primary next-hop for a prefix, the LFA SPF will not select an LFA next-hop for this
prefix and the protection of this prefix will rely on the RSVP LSP FRR protection. If
the main SPF computation selected a direct primary next-hop, then the LFA SPF will
select an LFA next-hop for this prefix but will prefer a direct LFA next-hop over a
tunneled LFA next-hop.

The lfa-only option allows an LSP to be included in the LFA SPFs only such that the
introduction of IGP shortcuts does not impact the main SPF decision. For a given
prefix, the main SPF always selects a direct primary next-hop. The LFA SPF will select an LFA next-hop for this prefix but will prefer a direct LFA next-hop over a tunneled LFA next-hop.
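
For example, an RSVP LSP can be made available to the LFA SPF only, as follows; the LSP name is hypothetical.

    configure router mpls lsp "to-R5-lfa" igp-shortcut lfa-only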

Thus the selection algorithm when SPF finds multiple LFA next-hops for a given
primary next-hop is modified as follows:

1. The algorithm splits the LFA next-hops into two sets:
− The first set consists of direct LFA next-hops.
− The second set consists of tunneled LFA next-hops, after excluding the LSPs which use the same outgoing interface as the primary next-hop.
2. The algorithm continues with the first set if it is not empty; otherwise, it continues with the second set.
3. If the second set is used, the algorithm selects the tunneled LFA next-hop whose endpoint corresponds to the node advertising the prefix.


− If more than one tunneled next-hop exists, it selects the one with the lowest
LSP metric.
− If still more than one tunneled next-hop exists, it selects the one with the
lowest tunnel-id.
− If none is available, it continues with the rest of the tunneled LFAs in the second set.
4. Within the selected set, the algorithm splits the LFA next-hops into two sets:
− The first set consists of LFA next-hops which do not go over the PN used
by primary next-hop.
− The second set consists of LFA next-hops which go over the PN used by
the primary next-hop.
5. If there is more than one LFA next-hop in the selected set, it will pick the node-
protect type in favor of the link-protect type.
6. If there is more than one LFA next-hop within the selected type, then it will pick
one based on the least total cost for the prefix. For a tunneled next-hop, it means
the LSP metric plus the cost of the LSP endpoint to the destination of the prefix.
7. If there is more than one LFA next-hop within the selected type (ecmp-case) in
the first set, it will select the first direct next-hop from the remaining set. This is
not a deterministic selection and will vary following each SPF calculation.
8. If there is more than one LFA next-hop within the selected type (ecmp-case) in
the second set, it will pick the tunneled next-hop with the lowest cost from the
endpoint of the LSP to the destination prefix. If there remains more than one, it
will pick the tunneled next-hop with the lowest tunnel-id.

5.10.3.2 Loop-Free Alternate Calculation for Inter-Area/Inter-Level Prefixes

When SPF resolves OSPF inter-area prefixes or IS-IS inter-level prefixes, it will
compute an LFA backup next-hop to the same exit area/border router as used by the
primary next-hop.

5.10.3.3 Loop-Free Alternate Shortest Path First (LFA SPF) Policies

An LFA SPF policy allows the user to apply specific criteria, such as admin group and
SRLG constraints, to the selection of a LFA backup next-hop for a subset of prefixes
that resolve to a specific primary next-hop. See more details in the section titled
“Loop-Free Alternate Shortest Path First (LFA SPF) Policies” in the Routing
Protocols Guide.


5.11 LDP FEC to BGP Label Route Stitching


The stitching of an LDP FEC to a BGP labeled route allows the LDP capable PE
devices to offer services to PE routers in other areas or domains without the need to
support BGP labeled routes.

This feature is used in a large network to provide services across multiple areas or
autonomous systems. Figure 66 shows a network with a core area and regional
areas.

Figure 66 Application of LDP to BGP FEC Stitching

[Figure not reproduced: a DSLAM connects through PE21 and PE22 in a regional area towards ABR3 and ABR4, which redistribute the DSLAM prefix as iBGP labeled routes across the core. LDP label request/response runs within the regional area, while the service label is carried end to end over the stitched BGP and LDP labels.]

Specific /32 routes in a regional area are not redistributed into the core area.
Therefore, only nodes within a regional area and the ABR nodes in the same area
exchange LDP FECs. A PE router, for example, PE21, in a regional area learns the
reachability of PE routers in other regional areas by way of RFC 3107 BGP labeled
routes redistributed by the remote ABR nodes by way of the core area. The remote
ABR then sets the next-hop self on the labeled routes before re-distributing them into
the core area. The local ABR for PE21, for example, ABR3, may or may not set next-
hop self when it re-distributes these labeled BGP routes from the core area to the
local regional area.

When forwarding a service packet to the remote PE, PE21 inserts a VC label, the
BGP route label to reach the remote PE, and an LDP label to reach either ABR3, if
ABR3 sets next-hop self, or ABR1.


In the same network, an MPLS-capable DSLAM also acts as a PE router for VLL services and needs to establish a PW to a PE in a different regional area by way of router PE21, now acting as an LSR. To achieve this, PE21 is required to perform the following operations:

• Translate the LDP FEC it learned from the DSLAM into a BGP labeled route and
re-distribute it by way of iBGP within its area. This is in addition to redistributing
the FEC to its LDP neighbors in the same area.
• Translate the BGP labeled routes it learns through iBGP into an LDP FEC and
re-distribute it to its LDP neighbors in the same area. In the application in
Figure 66, the DSLAM requests the LDP FEC of the remote PE router using LDP
Downstream on Demand (DoD).
• When a packet is received from the DSLAM, PE21 swaps the LDP label into a
BGP label and pushes the LDP label to reach ABR3 or ABR1. When a packet is
received from ABR3, the top label is removed and the BGP label is swapped for
the LDP label corresponding to the DSLAM FEC.

5.11.1 Configuration
The user enables the stitching of routes between LDP and BGP by separately configuring tunnel table route export policies in both protocols and by enabling the advertising of RFC 3107 formatted labeled routes for prefixes learned from LDP FECs.

The route export policy in BGP instructs BGP to listen to LDP route entries in the
CPM tunnel table. If a /32 LDP FEC prefix matches an entry in the export policy, BGP
originates a BGP labeled route, stitches it to the LDP FEC, and re-distributes the
BGP labeled route to its iBGP neighbors.

The user adds LDP FEC prefixes with the statement ‘from protocol ldp’ in the
configuration of the existing BGP export policy at the global level, the peer-group
level, or at the peer level using the commands:

• config>router>bgp>export policy-name
• config>router>bgp>group>export policy-name
• config>router>bgp>group>neighbor>export policy-name

To indicate to BGP to evaluate the entries with the ‘from protocol ldp’ statement in
the export policy when applied to a specific BGP neighbor, use commands:

• config>router>bgp>group>neighbor>family label-ipv4
• config>router>bgp>group>neighbor>advertise-ldp-prefix


Without this, only core IPv4 routes learned from RTM are advertised as BGP labeled routes to this neighbor, and the stitching of an LDP FEC to the BGP labeled route is not performed for this neighbor even if the same prefix was learned from LDP.

The tunnel table route export policy in LDP instructs LDP to listen to BGP route entries in the CPM Tunnel Table. If a /32 BGP labeled route matches a prefix entry in the export policy, LDP originates an LDP FEC for the prefix, stitches it to the BGP labeled route, and re-distributes the LDP FEC to its LDP neighbors.

The user adds BGP labeled route prefixes with the statement ‘from protocol bgp’ in
the configuration of a new LDP tunnel table export policy using the command:

config>router>ldp>export-tunnel-table policy-name.

The ‘from protocol’ statement has an effect only when the protocol value is bgp. Policy entries with protocol values of ldp, rsvp, or any value other than bgp are ignored at the time the policy is applied to LDP.
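
The following sketch shows one possible end-to-end configuration of both directions; all policy names, the prefix list, the group name, and the neighbor address are hypothetical, and the route policy syntax assumes the standard policy-statement configuration of the global policy manager.

    configure router policy-options
        begin
        prefix-list "regional-pe-loopbacks"
            prefix 10.20.1.0/24 longer
        exit
        policy-statement "export-ldp-to-bgp"
            entry 10
                from
                    protocol ldp
                    prefix-list "regional-pe-loopbacks"
                exit
                action accept
                exit
            exit
        exit
        policy-statement "export-bgp-to-ldp"
            entry 10
                from
                    protocol bgp
                exit
                action accept
                exit
            exit
        exit
        commit

    configure router bgp group "ibgp" neighbor 10.0.0.1 family label-ipv4
    configure router bgp group "ibgp" neighbor 10.0.0.1 advertise-ldp-prefix
    configure router bgp group "ibgp" neighbor 10.0.0.1 export "export-ldp-to-bgp"
    configure router ldp export-tunnel-table "export-bgp-to-ldp"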

5.11.2 Detailed LDP FEC Resolution


When an LSR receives a FEC-label binding from an LDP neighbor for a given
specific FEC1 element, the following procedures are performed.

1. LDP installs the FEC if:


− It was able to perform a successful exact match or a longest match, if
aggregate-prefix-match option is enabled in LDP, of the FEC /32 prefix with
a prefix entry in the routing table.
− The advertising LDP neighbor is the next-hop to reach the FEC prefix.
2. When such a FEC-label binding has been installed in the LDP FIB, LDP will
perform the following:
− Program push and swap NHLFE entries in the egress data path to forward packets to FEC1.
− Program the CPM tunnel table with a tunnel entry for the NHLFE.
− Advertise a new FEC-label binding for FEC1 to all its LDP neighbors
according to the global and per-peer LDP prefix export policies.
− Install the ILM entry pointing to the swap NHLFE.
3. When BGP learns the LDP FEC by way of the CPM tunnel table and the FEC
prefix exists in the BGP route export policy, it will perform the following:
− Originate a labeled BGP route for the same prefix with this node as the next-
hop and advertise it by way of iBGP to its BGP neighbors, for example, the
local ABR/ASBR nodes, which have the advertise-ldp-prefix enabled.


− Install the ILM entry pointing to the swap NHLFE programmed by LDP.

5.11.3 Detailed BGP Labeled Route Resolution


When an LSR receives a BGP labeled route by way of iBGP for a given specific /32
prefix, the following procedures are performed.

1. BGP resolves and installs the route in BGP if:


− There exists an LDP LSP to the BGP neighbor, for example, the ABR or
ASBR, which advertised it and which is the next-hop of the BGP labeled
route.
2. Once the BGP route is installed, BGP programs the following:
− Push NHLFE in the egress data path to forward packets to this BGP labeled
route.
− The CPM tunnel table with a tunnel entry for the NHLFE.
3. When LDP learns the BGP labeled route by way of the CPM tunnel table and the
prefix exists in the new LDP tunnel table route export policy, it performs the
following:
− Advertise a new LDP FEC-label binding for the same prefix to its LDP
neighbors according to the global and per-peer LDP export prefix policies. If
LDP already advertised a FEC for the same /32 prefix after receiving it from
an LDP neighbor then no action is required. For LDP neighbors that
negotiated LDP Downstream on Demand (DoD), the FEC is advertised only
when this node receives a Label Request message for this FEC from its
neighbor.
− Install the ILM entry pointing to the BGP NHLFE if a new LDP FEC-label binding is advertised. If an ILM entry exists and points to an LDP NHLFE for the same prefix, then no update to the ILM entry is performed. The LDP route always has preference over the BGP labeled route.

5.11.4 Data Plane Forwarding


When a packet is received from an LDP neighbor, the LSR swaps the LDP label into
a BGP label and pushes the LDP label to reach the BGP neighbor, for example, ABR/
ASBR, which advertised the BGP labeled route with itself as the next-hop.

When a packet is received from a BGP neighbor such as an ABR/ASBR, the top label
is removed and the BGP label is swapped for the LDP label to reach the next-hop for
the prefix.


5.12 LDP-SR Stitching for IPv4 prefixes (IS-IS)


This feature enables stitching between an LDP FEC and an SR node-SID route for
the same IPv4 /32 prefix.

5.12.1 LDP-SR Stitching Configuration


The user enables the stitching between an LDP FEC and an SR node-SID route for
the same prefix by configuring the export of SR (LDP) tunnels from the CPM Tunnel
Table Manager (TTM) into LDP (IGP).

In the LDP-to-SR data path direction, the existing tunnel table route export policy in
LDP, which was introduced for LDP-BGP stitching, is enhanced to include support
for exporting SR tunnels from the TTM to LDP. The user adds the new statement
from protocol isis [instance instance-id] to the LDP tunnel table export policy:

CLI Syntax: config>router>ldp>export-tunnel-table policy-name

The user can restrict the export to LDP of SR tunnels from a specific prefix list. The
user can also restrict the export to a specific IGP instance by optionally specifying
the instance ID in the from statement.

The from protocol statement has an effect only when the protocol value is isis or
bgp.

Policy entries with any other protocol value are ignored at the time the policy is
applied. If the user configures multiple from statements in the same policy or does
not include the from statement but adds a default action of accept, then LDP will
follow the TTM selection rules as described in the Segment Routing Tunnel
Management section of the Unicast Routing Protocol Guide to select a tunnel to
stitch the LDP ILM to:

• LDP selects the tunnel from the lowest TTM preference protocol.
• If IS-IS and BGP protocols have the same preference, then LDP selects the
protocol using the default TTM protocol preference.
• Within the same IGP protocol, LDP selects the lowest instance ID.


When this policy is enabled in LDP, LDP listens to SR tunnel entries in the TTM. Whenever an LDP FEC primary next-hop cannot be resolved using an RTM route, and an SR tunnel of type SR-ISIS to the same destination IPv4 /32 prefix matches an entry in the export policy, LDP programs an LDP ILM and stitches it to the SR node-SID tunnel endpoint. LDP also originates an FEC for the prefix and re-distributes it to its LDP and T-LDP peers. The latter allows an LDP FEC that is tunneled over an RSVP-TE LSP to have its ILM stitched to an SR tunnel endpoint. When an LDP FEC is stitched to an SR tunnel, forwarded packets benefit from the protection of the LFA/remote LFA backup next-hop of the SR tunnel.

When resolving a FEC, LDP will prefer resolution in RTM over that in TTM when both
resolutions are possible. In other words, the swapping of the LDP ILM to a LDP
NHLFE is preferred over stitching it to an SR tunnel endpoint.

In the SR-to-LDP data path direction, the SR mapping server provides a global policy
for the prefixes corresponding to the LDP FECs the SR needs to stitch to. Refer to
the Segment Routing Mapping Server section of the Unicast Routing Protocols
Guide for more information. Thus, a tunnel table export policy is not required.
Instead, the user enables exporting to an IGP instance the LDP tunnels for FEC
prefixes advertised by the mapping server using the following command:

CLI Syntax: config>router>isis>segment-routing>export-tunnel-table ldp

When this command is enabled in the segment-routing context of an IGP instance, IGP listens to LDP tunnel entries in the TTM. Whenever a /32 LDP tunnel destination
matches a prefix for which IGP received a prefix-SID sub-TLV from a mapping
server, it instructs the SR module to program the SR ILM and to stitch it to the LDP
tunnel endpoint. The SR ILM can stitch to an LDP FEC resolved over either link LDP
or T-LDP. In the latter, the stitching is performed to an LDP-over-RSVP tunnel and
only supported when the ldp-over-rsvp option is enabled in IGP. It is not supported
when the igp-shortcut option is enabled. When an SR tunnel is stitched to an LDP
FEC, packets forwarded will benefit from the FRR protection of the LFA backup next-
hop of the LDP FEC.

When resolving a node SID, IGP will prefer resolution of prefix SID received in an IP
Reach TLV over a prefix SID received via the mapping server. In other words, the
swapping of the SR ILM to a SR NHLFE is preferred over stitching it to a LDP tunnel
endpoint. Refer to the Segment Routing Mapping Server Prefix SID Resolution
section of the Unicast Routing Protocols Guide for more information about prefix SID
resolution.

It is recommended to enable the bfd-enable option on the interfaces in both LDP and
IGP instance contexts to speed up the failure detection and the activation of the LFA/
remote-LFA backup next-hop in either direction. This is particularly true if the injected
failure is a remote failure.


This feature is limited to IPv4 /32 prefixes in both LDP and SR.
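
A combined sketch of both directions follows, assuming IS-IS instance 0 and a hypothetical policy name; the mapping server itself is configured elsewhere.

    configure router policy-options
        begin
        policy-statement "export-sr-to-ldp"
            entry 10
                from
                    protocol isis instance 0
                exit
                action accept
                exit
            exit
        exit
        commit

    configure router ldp export-tunnel-table "export-sr-to-ldp"
    configure router isis segment-routing export-tunnel-table ldp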

5.12.2 Stitching in the LDP-to-SR Direction


The stitching in the data plane in the LDP-to-SR direction is based on the LDP module monitoring the TTM for an SR tunnel of a prefix matching an entry in the LDP TTM export policy.

Figure 67 Stitching in the LDP-to-SR Direction

[Figure not reproduced: boundary router R1 sits between an SR domain (containing R3, Ry, and the mapping server R5) and an LDP domain (containing R2, R4, and Rx). Ry owns prefix Y with node SID 20; Rx owns prefix X, reachable via LDP.]

With reference to Figure 67, the following procedure is performed by the boundary
router R1 to effect stitching:

Step 1. Router R1 is at the boundary between an SR domain and an LDP domain and is configured to stitch between SR and LDP.
Step 2. Link R1-R2 is LDP-enabled, but router R2 does not support SR (or SR is disabled).
Step 3. Router R1 receives a prefix-SID sub-TLV in an IS-IS IP reachability TLV
originated by router Ry for prefix Y.
Step 4. R1 resolves it and programs an NHLFE on the link towards the next-hop in
the SR domain. It programs an SR ILM and points it to this NHLFE.
Step 5. Because R1 is programmed to stitch LDP to SR, the LDP in R1 discovers
in TTM the SR tunnel to Y. LDP programs a LDP ILM and points it to the
SR tunnel. As a result, both the SR ILM and LDP ILM are now pointing to
the SR tunnel, one via the SR NHLFE and the other via the SR tunnel
endpoint.


Step 6. R1 advertises the LDP FEC for the prefix Y to all its LDP peers. R2 is now
able to install a LDP tunnel towards Ry.
Step 7. If R1 finds multiple SR tunnels to destination prefix Y, it uses the TTM tunnel selection rules to pick the SR tunnel, as follows:
i. R1 selects the tunnel from the lowest preference IGP protocol.
ii. Select the protocol using the default TTM protocol preference.
iii. Within the same IGP protocol, R1 uses the lowest instance ID to select
the tunnel.
Step 8. If the user configured in the same LDP tunnel table export policy
concurrently from protocol isis and from protocol bgp, or did not include
the from statement but added a default action of accept, R1 will select the
tunnel to destination prefix Y to stitch the LDP ILM to using the TTM tunnel
selection rules:
i. R1 selects the tunnel from the lowest preference protocol.
ii. If IS-IS and BGP protocols have the same preference, then R1 selects
the protocol using the default TTM protocol preference.
iii. Within the same IGP protocol, R1 uses the lowest instance ID to select
the tunnel.

Note: If R1 has already resolved an LDP FEC for prefix Y, it would have had an ILM for it, but this ILM is not updated to point towards the SR tunnel. This is because LDP resolves in RTM first before going to TTM and thus prefers the LDP tunnel over the SR tunnel.
Similarly, if an LDP FEC is received subsequently to programming the stitching, the LDP
ILM will be updated to point to the LDP NHLFE because LDP will be able to resolve the LDP
FEC in RTM.

Step 9. The user enables SR in R2. R2 resolves the prefix SID for Y and installs the SR ILM and the SR NHLFE. R2 is now able to forward packets over the SR tunnel to router Ry. Nothing happens in R1 because the SR ILM is already programmed.
Step 10. The user disables LDP on the interface R1-R2 (both directions) and the
LDP FEC ILM and NHLFE are removed in R1. The same occurs in R2
which can then only forward using the SR tunnel towards Ry.


5.12.3 Stitching in the SR-to-LDP Direction


The stitching in the data plane in the SR-to-LDP direction is based on the IGP monitoring the TTM for an LDP tunnel of a prefix matching an entry in the SR TTM export policy.

With reference to Figure 67, the following procedure is performed by the boundary
router R1 to effect stitching:

Step 1. Router R1 is at the boundary between a SR domain and a LDP domain and
is configured to stitch between SR and LDP.
Link R1-R2 is LDP enabled but router R2 does not support SR (or SR is
disabled).
Step 2. R1 receives an LDP FEC for prefix X owned by router Rx further down in
the LDP domain.
RTM in R1 shows that the interface to R2 is the next-hop for prefix X.
Step 3. LDP in R1 resolves this FEC in RTM and creates an LDP ILM for it with, for
example, ingress label L1, and points it to an LDP NHLFE towards R2 with
egress label L2.
Step 4. Later on, R1 receives a prefix-SID sub-TLV from the mapping server R5 for
prefix X.
Step 5. IGP in R1 is resolving in its routing table the next-hop of prefix X to the
interface to R2. R1 knows that R2 did not advertise support of Segment
Routing and, thus, SID resolution for prefix X in routing table fails.
Step 6. IGP in R1 attempts to resolve prefix SID of X in TTM because it is
configured to stitch SR-to-LDP. R1 finds a LDP tunnel to X in TTM,
instructs the SR module to program a SR ILM with ingress label L3, and
points it to the LDP tunnel endpoint, thus stitching ingress L3 label to
egress L2 label.

Note:

• Here, two ILMs, the LDP ILM and the SR ILM, point to the same LDP tunnel, one via the NHLFE and one via the tunnel endpoint.
• No SR tunnel to destination X should be programmed in TTM following this resolution step.
• A trap is generated for prefix SID resolution failure only after IGP fails to complete both Step 5 and Step 6. The existing trap for prefix SID resolution failure is enhanced to state whether the prefix SID which failed resolution was part of a mapping server TLV or an IP reachability TLV (IS-IS).

Step 7. The user enables segment routing on R2.


Step 8. IGP in R1 discovers that R2 supports SR via the SR capability.


Because R1 still has a prefix-SID for X from the mapping server R5, it
maintains the stitching of the SR ILM for X to the LDP FEC unchanged.
Step 9. The operator disables the LDP interface between R1 and R2 (both
directions) and the LDP FEC ILM and NHLFE for prefix X are removed in
R1.
Step 10. This triggers the re-evaluation of the SIDs. R1 first attempts the resolution
in routing table and since the next-hop for X now supports SR, IGP
instructs the SR module to program a NHLFE for prefix-SID of X with
egress label L4 and outgoing interface to R2. R1 installs a SR tunnel in
TTM for destination X. R1 also changes the SR ILM with ingress label L3
to point to the SR NHLFE with egress label L4.
Router R2 now becomes the SR-LDP stitching router.
Step 11. Much later on, the router that owns prefix X, Rx, is upgraded to support SR. R1 now receives a prefix-SID sub-TLV in an IS-IS IP reachability TLV originated by Rx for prefix X. The SID information may or may not be the same as the one received from the mapping server R5. In this case, IGP in R1 will prefer the prefix-SID originated by Rx and will thus update the SR ILM and NHLFE with the appropriate labels.
Step 12. Finally, the operator cleans up the mapping server and removes the
mapping entry for prefix X, which then gets withdrawn by IS-IS.

5.13 LDP FRR Remote LFA Backup using SR Tunnel for IPv4 Prefixes (IS-IS)

The user enables the use of an SR tunnel as a remote LFA backup tunnel next-hop by an LDP FEC via the following CLI command:

CLI Syntax: config>router>ldp>fast-reroute [backup-sr-tunnel]

As a pre-requisite, the user must enable the stitching of LDP and SR in the LDP-to-
SR direction as explained in 5.12.1. That is because the LSR must perform the
stitching of the LDP ILM to SR tunnel when the primary LDP next-hop of the FEC
fails. Thus, LDP must listen to SR tunnels programmed by the IGP in TTM, but the
mapping server feature is not required.
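
A minimal enabling sequence might therefore look as follows; the export policy name is hypothetical and is assumed to match SR-ISIS tunnels, as shown in 5.12.1.

    configure router isis loopfree-alternate remote-lfa
    configure router ldp export-tunnel-table "export-sr-to-ldp"
    configure router ldp fast-reroute backup-sr-tunnel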


Assume the backup-sr-tunnel option is enabled in LDP and the loopfree-alternate remote-lfa option is enabled in the IGP instance, and that LDP was able to resolve the primary next-hop of the LDP FEC in RTM. If the IGP LFA SPF does not find a regular LFA backup next-hop for a prefix of an LDP FEC, it will run the remote LFA algorithm. If IGP finds a remote LFA tunnel next-hop, LDP programs the primary next-hop of the FEC using an LDP NHLFE and programs the LFA backup next-hop using an LDP NHLFE pointing to the SR tunnel endpoint.

Note: The LDP packet is not “tunneled” over the SR tunnel. The LDP label is actually
stitched to the segment routing label stack. LDP points both the LDP ILM and the LTN to
the backup LDP NHLFE which itself uses the SR tunnel endpoint.

The behavior of the feature is similar to the LDP-to-SR stitching feature described in 5.12, LDP-SR Stitching for IPv4 prefixes (IS-IS), except that the behavior is augmented to also allow the stitching of an LDP ILM/LTN to an SR tunnel when the primary LDP next-hop of the FEC fails.

The following is the behavior of this feature:

• When LDP resolves a primary next-hop in RTM and a remote LFA backup next-
hop using SR tunnel in TTM, LDP programs a primary LDP NHLFE as usual and
a backup LDP NHLFE pointing to the SR tunnel, which has the remote LFA
backup for the same prefix.
• If the LDP FEC primary next-hop failed and LDP has pre-programmed a remote
LFA next-hop with an LDP backup NHLFE pointing to the SR tunnel, the LDP
ILM/LTN switches to it.

Note: If, for some reason, the failure impacted only the LDP tunnel primary next-hop but not
the SR tunnel primary next-hop, the LDP backup NHLFE will effectively point to the primary
next-hop of the SR tunnel and traffic of the LDP ILM/LTN will follow this path instead of the
remote LFA next-hop of the SR tunnel until the latter is activated.

• If the LDP FEC primary next-hop becomes unresolved in RTM, LDP switches
the resolution to a SR tunnel in TTM, if one exists, as per the LDP-to-SR stitching
procedures described in 5.12.2.
• If both the LDP primary next-hop and a regular LFA next-hop become resolved
in RTM, the LDP FEC programs the primary and backup NHLFEs as usual.
• It is recommended to enable the bfd-enable option on the interfaces in both LDP
and IGP instance contexts to speed up the failure detection and the activation of
the LFA/remote-LFA backup next-hop in either direction.


5.14 Automatic Creation of a Targeted Hello Adjacency and LDP Session

This feature enables the automatic creation of a targeted Hello adjacency and LDP session to a discovered peer.

5.14.1 Feature Configuration


The user first creates a targeted LDP session peer parameter template:

config>router>ldp>targ-session>peer-template template-name

Inside the template the user configures the common T-LDP session parameters or
options shared by all peers using this template. These are the following:

bfd-enable, hello, hello-reduction, keepalive, local-lsr-id, and tunneling.

The tunneling option does not support adding explicit RSVP LSP names. LDP will
select RSVP LSP for an endpoint in LDP-over-RSVP directly from the Tunnel Table
Manager (TTM).

Then the user references the peer prefix list which is defined inside a policy
statement defined in the global policy manager.

config>router>ldp>targ-session>peer-template-map peer-template template-name policy peer-prefix-policy
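
The following sketch shows one possible sequence; the template name, the policy and prefix-list names, and the prefix itself are hypothetical, and the prefix policy is assumed to be defined in the global policy manager. The traffic-engineering option is also enabled in the IGP so that matching router IDs appear in the TE database, as required below.

    configure router isis traffic-engineering

    configure router policy-options
        begin
        prefix-list "tldp-peers"
            prefix 10.20.1.0/24 longer
        exit
        policy-statement "tldp-peer-policy"
            entry 10
                from
                    prefix-list "tldp-peers"
                exit
                action accept
                exit
            exit
        exit
        commit

    configure router ldp targ-session peer-template "auto-tldp" tunneling
    configure router ldp targ-session peer-template-map peer-template "auto-tldp" policy "tldp-peer-policy"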

Each application of a targeted session template to a given prefix in the prefix list will
result in the establishment of a targeted Hello adjacency to an LDP peer using the
template parameters as long as the prefix corresponds to a router-id for a node in the
TE database. The targeted Hello adjacency will either trigger a new LDP session or
will be associated with an existing LDP session to that peer.

Up to five (5) peer prefix policies can be associated with a single peer template at all
times. Also, the user can associate multiple templates with the same or different peer
prefix policies. Thus multiple templates can match with a given peer prefix. In all
cases, the targeted session parameters applied to a given peer prefix are taken from
the first created template by the user. This provides a more deterministic behavior
regardless of the order in which the templates are associated with the prefix policies.


Each time the user executes the above command, with the same or different prefix
policy associations, or the user changes a prefix policy associated with a targeted
peer template, the system re-evaluates the prefix policy. The outcome of the re-
evaluation will tell LDP if an existing targeted Hello adjacency needs to be torn down
or if an existing targeted Hello adjacency needs to have its parameters updated on
the fly.

If a /32 prefix is added to (removed from) or if a prefix range is expanded (shrunk) in a prefix list associated with a targeted peer template, the same prefix policy re-evaluation described above is performed.

The template comes up in the no shutdown state and as such takes effect immediately. Once a template is in use, the user can change any of the parameters on the fly without shutting down the template. In this case, all targeted Hello adjacencies using the template are updated with the new parameter values.

5.14.2 Feature Behavior


Whether the prefix list contains one or more specific /32 addresses or a range of
addresses, an external trigger is required to indicate to LDP to instantiate a targeted
Hello adjacency to a node whose address matches an entry in the prefix list. The
objective of the feature is to provide an automatic creation of a T-LDP session to the
same destination as an auto-created RSVP LSP to achieve automatic tunneling of
LDP-over-RSVP. The external trigger is when the router with the matching address
appears in the Traffic Engineering database. In the latter case, an external module
monitoring the TE database for the peer prefixes provides the trigger to LDP. As a
result of this, the user must enable the traffic-engineering option in ISIS or OSPF.

Each mapping of a targeted session peer parameter template to a policy prefix which
exists in the TE database will result in LDP establishing a targeted Hello adjacency
to this peer address using the targeted session parameters configured in the
template. This Hello adjacency will then either get associated with an LDP session
to the peer if one exists or it will trigger the establishment of a new targeted LDP
session to the peer.

The SR OS supports multiple ways of establishing a targeted Hello adjacency to a peer LSR:


• User configuration of the peer with the targeted session parameters inherited
from the config>router>ldp>targ-session>ipv4 in the top level context or
explicitly configured for this peer in the config>router>ldp>targ-session>peer
context and which overrides the top level parameters shared by all targeted
peers. Let us refer to the top level configuration context as the global context.
Some parameters only exist in the global context; their value will always be
inherited by all targeted peers regardless of which event triggered it.
• User configuration of an SDP of any type to a peer with the signaling tldp option
enabled (default configuration). In this case the targeted session parameter
values are taken from the global context.
• User configuration of a (FEC 129) PW template binding in a BGP-VPLS service.
In this case the targeted session parameter values are taken from the global
context.
• User configuration of a (FEC 129 type II) PW template binding in a VLL service
(dynamic multi-segment PW). In this case the target session parameter values
are taken from the global context
• User configuration, introduced in Release 11.0.R4, of a mapping of a targeted session peer parameter template to a prefix policy when the peer address exists in the TE database. In this case, the targeted session parameter values are taken from the template.
• Features using an LDP LSP, which itself is tunneled over an RSVP LSP (LDP-
over-RSVP), as a shortcut do not trigger automatically the creation of the
targeted Hello adjacency and LDP session to the destination of the RSVP LSP.
The user must configure manually the peer parameters or configure a mapping
of a targeted session peer parameter template to a prefix policy. These features
are:
− BGP shortcut (next-hop-resolution shortcut-tunnel option in BGP),
− IGP shortcut (igp-shortcut option in IGP),
− LDP shortcut for IGP routes (ldp-shortcut option in router level),
− static route LDP shortcut (ldp option in a static route), and
− VPRN service (auto-bind-tunnel ldp option).

Since the above triggering events can occur simultaneously or in any arbitrary order,
the LDP code implements a priority handling mechanism in order to decide which
event overrides the active targeted session parameters. The overriding trigger will
become the owner of the targeted adjacency to a given peer and will be shown in
show router ldp targ-peer.

Table 40 summarizes the triggering events and the associated priority.


Table 40 Targeted LDP Adjacency Triggering Events and Priority

Triggering Event                                            Automatic Creation of       Active Targeted Adjacency
                                                            Targeted Hello Adjacency    Parameter Override Priority

Manual configuration of peer parameters                     Yes                         1
(creator=manual)
Mapping of targeted session template to prefix policy       Yes                         2
(creator=template)
Manual configuration of SDP with signaling tldp option      Yes                         3
enabled (creator=service manager)
PW template binding in BGP-AD VPLS                          Yes                         3
(creator=service manager)
PW template binding in FEC 129 VLL                          Yes                         3
(creator=service manager)
LDP-over-RSVP as a BGP/IGP/LDP/Static shortcut              No                          N/A
LDP-over-RSVP in VPRN auto-bind                             No                          N/A
LDP-over-RSVP in BGP Label Route resolution                 No                          N/A

Any parameter value change to an active targeted Hello adjacency caused by any of
the above triggering events is performed by having LDP immediately send a Hello
message with the new parameters to the peer without waiting for the next scheduled
time for the Hello message. This allows the peer to adjust its local state machine
immediately and maintains both the Hello adjacency and the LDP session in UP
state. The only exceptions are the following:

• The triggering event caused a change to the local-lsr-id parameter value. In this
case, the Hello adjacency is brought down which will also cause the LDP
session to be brought down if this is the last Hello adjacency associated with the
session. A new Hello adjacency and LDP session will then get established to the
peer using the new value of the local LSR ID.
• The triggering event caused the targeted peer shutdown option to be enabled.
In this case, the Hello adjacency is brought down which will also cause the LDP
session to be brought down if this is the last Hello adjacency associated with the
session.


Finally, the value of any LDP parameter which is specific to the LDP/TCP session to
a peer is inherited from the config>router>ldp>session-params>peer context.
This includes MD5 authentication, LDP prefix per-peer policies, label distribution
mode (DU or DOD), etc.

5.15 Multicast P2MP LDP for GRT


The P2MP LDP LSP setup is initiated by each leaf node of the multicast tree. A leaf PE node learns to initiate a multicast tree setup from the client application and sends a label map upstream towards the root node of the multicast tree. On propagation of the label map, intermediate nodes that are common on the path for multiple leaf nodes become branch nodes of the tree.

Figure 68 illustrates wholesale video distribution over a P2MP LDP LSP. Static IGMP entries on the edge are bound to the P2MP LDP LSP tunnel-interface for multicast video traffic distribution.


Figure 68 Video Distribution using P2MP LDP

[Figure not reproduced: source CE nodes feed root PE edge nodes (PE1 and PE2); transit and branch nodes in the provider network replicate traffic over the P2MP LDP LSP towards leaf PE nodes, which forward the multicast video to receiver CE nodes that issued IP multicast joins.]

5.16 LDP P2MP Support

5.16.1 LDP P2MP Configuration


A node running LDP also supports P2MP LSP setup using LDP. By default, it would
advertise the capability to a peer node using P2MP capability TLV in LDP
initialization message.


A per-interface configuration option is provided to restrict or allow the use of an interface for LDP multicast traffic forwarding toward a downstream node. The interface configuration option does not restrict or allow the exchange of P2MP FECs over the established session to the peer on an interface; it only restricts or allows the use of next-hops over the interface.

5.16.2 LDP P2MP Protocol


Only a single generic identifier range is defined for signaling a multipoint tree for all client applications. The implementation on the 7750 SR or 7950 XRS reserves the range (1..8292) of generic LSP P2MP-IDs on the root node for static P2MP LSPs.

5.16.3 Make Before Break (MBB)


When a transit or leaf node detects that the upstream node toward the root node of the multicast tree has changed, it follows a graceful procedure that allows a make-before-break transition to the new upstream node. Make-before-break support is optional. If the new upstream node does not support MBB procedures, the downstream node waits for the configured timer to expire before switching over to the new upstream node.

5.16.4 ECMP Support


If multiple ECMP paths exist between two adjacent nodes, the upstream node of the multicast receiver programs all entries in the forwarding plane. Only one entry is active, based on the ECMP hashing algorithm.


5.16.5 Inter-AS Non-segmented mLDP


This feature allows multicast services to use segmented protocols and span them over multiple autonomous systems (ASs), as in unicast services. As IP VPN or GRT services span multiple IGP areas or multiple ASs, either due to a network designed to deal with scale or as a result of commercial acquisitions, operators may require inter-AS VPN (unicast) connectivity. For example, an inter-AS VPN can break the IGP, MPLS, and BGP protocols into access segments and core segments, allowing higher scaling of protocols by segmenting them into their own islands. SR OS allows for a similar provision of multicast services and for spanning these services over multiple IGP areas or multiple ASs.

mLDP supports non-segmented mLDP trees for inter-AS solutions, applicable for
multicast services in the GRT (Global Routing Table) where they need to traverse
mLDP point-to-multipoint tunnels as well as NG-MVPN services.

5.16.5.1 In-band Signaling with Non-segmented mLDP Trees in GRT

mLDP can be used to transport multicast in GRT. For mLDP LSPs to be generated,
a multicast request from the leaf node is required to force mLDP to generate a
downstream unsolicited (DU) FEC toward the root to build the P2MP LSPs.

For inter-AS solutions, the root might not be in the leaf’s RTM or, if it is present, it is
installed using BGP with ASBRs acting as the leaf’s local AS root. Therefore, the
leaf’s local AS intermediate routers might not know the path to the root.

Control protocols used for constructing P2MP LSPs contain a field that identifies the
address of a root node. Intermediate nodes are expected to be able to look up that
address in their routing tables; however, this is not possible if the route to the root
node is a BGP route and the intermediate nodes are part of a BGP-free core (for
example, if they use IGP).

To enable an mLDP LSP to be constructed through a BGP-free segment, the root node address is temporarily replaced by an address that is known to the intermediate nodes and is on the path to the true root node. For example, Figure 69 shows the procedure when PE-2 (the leaf) receives the route for the root through ASBR-3. This route has ASBR-3 as the next hop for the root. The leaf, in this case, generates an LDP FEC with an opaque value and with the root address set to ASBR-3. The opaque value carries the additional information needed to reach the root from ASBR-3. As a result, the SR core in AS3 only needs to be able to resolve the local AS root ASBR-3 for the LDP FEC. ASBR-3 then uses the LDP FEC opaque value to find the path to the root.


Figure 69 Inter-AS Option C
[Figure not reproduced. It shows PE-2 (LEAF, 100.0.0.8) reaching PE-1 (ROOT-1, 100.0.0.14) through ASBR-3 (100.0.0.21) and ASBR-1 (100.0.0.2); iBGP and eBGP label routes and mLDP run between the segments, and the intermediate nodes in AS3 do not have the root in their routing table.]

Because non-segmented d-mLDP requires end-to-end mLDP signaling, the ASBRs support both mLDP and BGP signaling between them.

5.16.5.2 LDP Recursive FEC Process

For inter-AS networks where the leaf node does not have the root in the RTM or
where the leaf node has the root in the RTM using BGP, and the leaf’s local AS
intermediate nodes do not have the root in their RTM because they are not BGP-
enabled, RFC 6512 defines a recursive opaque value and procedure for LDP to build
an LSP through multiple ASs.

For mLDP to be able to signal through a multiple-AS network where the intermediate
nodes do not have a routing path to the root, a recursive opaque value is needed.
The LDP FEC root resolves the local ASBR, and the recursive opaque value contains
the P2MP FEC element, encoded as specified in RFC 6513, with a type field, a
length field, and a value field of its own.

RFC 6826 section 3 defines the Transit IPv4 opaque for P2MP LDP FEC, where the
leaf in the local AS wants to establish an LSP to the root for P2MP LSP. Figure 70
shows this FEC representation.


Figure 70 mLDP FEC for Single AS with Transit IPv4 Opaque
[Figure not reproduced. It shows PE-2 (LEAF, 100.0.0.8) and PE-1 (ROOT-1, 100.0.0.14) in a single AS; the PIM join (S1, G1) triggers an LDP FEC with Root 100.0.0.14 and Opaque <S1, G1>.]

Figure 71 shows an inter-AS FEC with recursive opaque based on RFC 6512.

Figure 71 mLDP FEC for Inter-AS with Recursive Opaque Value
[Figure not reproduced. It shows the leaf signaling LDP FEC Root 100.0.0.21, Opaque <Root 100.0.0.14, Opaque <S1, G1>>; ASBR-3 signaling Root 100.0.0.2 with the same recursive opaque; and ASBR-1 signaling Root 100.0.0.14, Opaque <S1, G1> toward the root.]

As shown in Figure 71, the root “100.0.0.21” is an ASBR and the opaque value contains the original mLDP FEC. As such, in the leaf’s AS, where the actual root “100.0.0.14” is not known, the LDP FEC can be routed using the local ASBR root. When the FEC arrives at an ASBR that is located in the same AS as the actual root, an LDP FEC with the transit IPv4 opaque is generated. The end-to-end picture for inter-AS mLDP for non-VPN multicast is shown in Figure 72.


Figure 72 Non-VPN mLDP with Recursive Opaque for Inter-AS
[Figure not reproduced. It shows the same topology as Figure 71 together with the BGP updates for the source prefix (next hops ASBR-1, ASBR-3, and Root-1 with labels X, Y, and Z) and the recursive LDP FECs signaled hop by hop toward ROOT-1.]

As shown in Figure 72, the leaf is in AS3, where the AS3 intermediate nodes do not have ROOT-1 in their RTM. The leaf has S1 installed in the RTM via BGP. All ASBRs act as next-hop-self in the BGP domain. The leaf, resolving S1 via BGP, generates an mLDP FEC with a recursive opaque, represented as:

Leaf FEC: <Root=ASBR-3, opaque-value=<Root=Root-1, <opaque-value = S1,G1>>>

This FEC will be routed through the AS3 Core to ASBR-3.

Note: AS3 intermediate nodes do not have ROOT-1 in their RTM; that is, they are not BGP-capable.

At ASBR-3 the FEC will be changed to:

ASBR-3 FEC: <Root=ASBR-1, opaque-value=<Root=Root-1, <opaque-value = S1,G1>>>

This FEC will be routed from ASBR-3 to ASBR-1. ASBR-1 is co-located in the same AS as ROOT-1; therefore, ASBR-1 does not need a FEC with a recursive opaque value.

ASBR-1 FEC: <Root=Root-1, <opaque-value = S1,G1>>
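
The following Python sketch summarizes this sequence of rewrites. It is an illustration only, using simple dictionary representations of the FECs; the helper names (make_recursive_fec, asbr_process) are hypothetical and are not SR OS functions.

def make_recursive_fec(local_root, true_root, inner_opaque):
    # FEC routed toward local_root; the true root and original opaque ride inside (RFC 6512 style).
    return {"root": local_root, "opaque": {"root": true_root, "opaque": inner_opaque}}

def asbr_process(fec, next_local_root=None, root_is_local=False):
    # Re-root a recursive FEC at the next ASBR, or unwrap it once the true root is reachable by IGP.
    inner = fec["opaque"]
    if root_is_local:
        return {"root": inner["root"], "opaque": inner["opaque"]}   # ASBR-1 case
    return {"root": next_local_root, "opaque": inner}               # ASBR-3 case

leaf_fec = make_recursive_fec("ASBR-3", "ROOT-1", ("S1", "G1"))
asbr3_fec = asbr_process(leaf_fec, next_local_root="ASBR-1")
asbr1_fec = asbr_process(asbr3_fec, root_is_local=True)
print(leaf_fec, asbr3_fec, asbr1_fec, sep="\n")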


This process allows all multicast services to work over inter-AS networks. All d-mLDP
opaque types can be used in a FEC with a recursive opaque value.

5.16.5.3 Supported Recursive Opaque Values

A recursive FEC is built using the Recursive Opaque Value and VPN-Recursive
Opaque Value types (opaque values 7 and 8 respectively). All SR non-recursive
opaque values can be recursively embedded into a recursive opaque.

Table 41 displays all supported opaque values in SR OS.

Table 41 Opaque Types Supported By SR OS

Opaque Type | Opaque Name | RFC | SR OS Use | FEC Representation
1 | Generic LSP Identifier | RFC 6388 | VPRN Local AS | <Root, Opaque<P2MPID>>
3 | Transit IPv4 Source TLV Type | RFC 6826 | IPv4 multicast over mLDP in GRT | <Root, Opaque<SourceIPv4, GroupIPv4>>
4 | Transit IPv6 Source TLV Type | RFC 6826 | IPv6 multicast over mLDP in GRT | <Root, Opaque<SourceIPv6, GroupIPv6>>
7 | Recursive Opaque Value | RFC 6512 | Inter-AS IPv4 multicast over mLDP in GRT | <ASBR, Opaque<Root, Opaque<SourceIPv4, GroupIPv4>>>
  |  |  | Inter-AS IPv6 multicast over mLDP in GRT | <ASBR, Opaque<Root, Opaque<SourceIPv6, GroupIPv6>>>
  |  |  | Inter-AS Option C MVPN over mLDP | <ASBR, Opaque<Root, Opaque<P2MPID>>>
8 | VPN-Recursive Opaque Value | RFC 6512 | Inter-AS Option B MVPN over mLDP | <ASBR, Opaque<RD, Root, P2MPID>>
250 | Transit VPNv4 Source TLV Type | RFC 7246 | In-band signaling for VPRN | <Root, Opaque<SourceIPv4 or RPA, GroupIPv4, RD>>
251 | Transit VPNv6 Source TLV Type | RFC 7246 | In-band signaling for VPRN | <Root, Opaque<SourceIPv6 or RPA, GroupIPv6, RD>>


5.16.5.4 Optimized Option C and Basic FEC Generation for Inter-AS

Not all leaf nodes can support label route or recursive opaque, so recursive opaque
functionality can be transferred from the leaf to the ASBR, as shown in Figure 73.

Figure 73 Optimized Option C — Leaf Router Not Responsible for Recursive FEC
[Figure not reproduced. It shows the MP-BGP MVPN-IPv4 Intra-AS I-PMSI A-D route with an mLDP PMSI tunnel (root 100.0.0.14, opaque 8193), label routes redistributed between BGP and IGP at the ASBRs, and the LDP FECs along the path: Root 100.0.0.14, Opaque <P2MP-ID 8193> within each AS, and the recursive Root 100.0.0.2, Opaque <100.0.0.14, P2MP-ID 8193> between the ASBRs.]

In Figure 73, the root advertises its unicast routes to ASBR-3 using IGP, and the
ASBR-3 advertises these routes to ASBR-1 using label-BGP. ASBR-1 can
redistribute these routes to IGP with next-hop ASBR-1. The leaf will resolve the
actual root 100.0.0.14 using IGP and will create a type 1 opaque value <Root
100.0.0.14, Opaque <8193>> to ASBR-1. In addition, all P routers in AS 2 will know
how to resolve the actual root because of BGP-to-IGP redistribution within AS 2.

ASBR-1 will attempt to resolve the 100.0.0.14 actual route via BGP, and will create
a recursive type 7 opaque value <Root 100.0.0.2, Opaque <100.0.0.14, 8193>>.

5.16.5.5 Basic Opaque Generation When Root PE is Resolved Using BGP

For inter-AS or intra-AS MVPN, the root PE (the PE on which the source resides)
loopback IP address is usually not advertised into each AS or area. As such, the P
routers in the ASs or areas that the root PE is not part of are not able to resolve the
root PE loopback IP address. To resolve this issue, the leaf PE, which has visibility
of the root PE loopback IP address using BGP, creates a recursive opaque with an
outer root address of the local ASBR or ABR and an inner recursive opaque of the
actual root PE.


Some non-Nokia routers do not support recursive opaque FEC when the root node
loopback IP address is resolved using iBGP or eBGP. These routers will accept and
generate a basic opaque type. In such cases, there should not be any P routers
between a leaf PE and ASBR or ABR, or any P routers between ASBR or ABR and
the upstream ASBR or ABR. Figure 74 shows an example of this situation.

Figure 74 Example AS
[Figure not reproduced. It shows a single AS with nodes HL1 through HL6 spanning clients, Zone 1, the IP core, Zone 2, and the source, running IS-IS plus labeled BGP with next-hop-self at each BGP speaker; an IGMP join at HL1 triggers mLDP label mappings hop by hop toward HL6.]

In Figure 74, the leaf HL1 is directly attached to ABR HL2, and ABR HL2 is directly attached to ABR HL3. In this case, it is possible to generate a non-recursive opaque because there is no P router in between that is unable to resolve the root PE loopback IP address. All elements are BGP-speaking and have received the root PE loopback IP address via iBGP or eBGP.

In such a topology, SR OS does not have to generate a recursive FEC. The global generate-basic-fec-only command disables recursive opaque FEC generation when basic opaque FEC generation is desired on the node. In Figure 74, the basic non-recursive FEC is generated even if the root node HL6 is resolved via BGP (iBGP or eBGP).

Currently, when the root node HL6 systemIP is resolved via BGP, a recursive FEC
is generated by the leaf node HL1:

HL1 FEC = <HL2, <HL6, OPAQUE>>

When the generate-basic-fec-only command is enabled on the leaf node or on any ABR, a basic non-recursive FEC is generated instead:

HL1 FEC = <HL6, OPAQUE>

When this FEC arrives at HL2, and the generate-basic-fec-only command is enabled there, HL2 generates the following FEC:

HL2 FEC = <HL6, OPAQUE>

If there are any P routers between the leaf node and an ASBR or ABR, or any P
routers between ASBRs or ABRs that do not have the root node (HL6) in their RTM,
then this type 1 opaque FEC will not be resolved and forwarded upstream, and the
solution will fail.

Note: The generate-basic-fec-only command is only available for intra-AS solutions and ABRs with iBGP. It is not supported for inter-AS solutions and ASBRs with eBGP.

5.16.5.5.1 Leaf and ABR Behavior

When generate-basic-fec-only is enabled on a leaf node, LDP generates a basic opaque type 1 FEC.

When generate-basic-fec-only is enabled on the ABR, LDP will accept a lower FEC
of basic opaque type 1 and generate a basic opaque type 1 upper FEC. LDP then
stitches the lower and upper FECs together to create a cross connect.

When generate-basic-fec-only is enabled and the ABR receives a lower FEC, the ABR behaves as follows (see the sketch after this list):

a. Recursive FEC with type 7 opaque — the ABR stitches the lower FEC to an upper FEC with basic opaque type 1.
b. Any FEC type other than a recursive FEC with type 7 opaque or a non-recursive FEC with type 1 basic opaque — the ABR processes the FEC in the same manner as when generate-basic-fec-only is disabled.
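
A minimal Python sketch of this decision is shown below. It uses illustrative dictionary fields (opaque_type, inner_type, true_root, inner_opaque) rather than SR OS data structures; opaque type numbers follow Table 41.

def abr_upper_fec(lower_fec, generate_basic_fec_only=True):
    # Decide the upper FEC an ABR signals for a received lower FEC (sketch only).
    if not generate_basic_fec_only:
        return lower_fec                                    # normal recursive processing
    if lower_fec["opaque_type"] == 7 and lower_fec.get("inner_type") == 1:
        # Case (a): stitch the recursive lower FEC to a basic type 1 upper FEC.
        return {"root": lower_fec["true_root"], "opaque_type": 1,
                "opaque": lower_fec["inner_opaque"]}
    if lower_fec["opaque_type"] == 1:
        # Basic lower FEC: regenerate a basic type 1 upper FEC toward the same root.
        return dict(lower_fec)
    # Case (b): any other opaque type is handled as if the command were disabled.
    return lower_fec

example = {"opaque_type": 7, "inner_type": 1, "true_root": "HL6", "inner_opaque": 8193}
print(abr_upper_fec(example))   # {'root': 'HL6', 'opaque_type': 1, 'opaque': 8193}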

5.16.5.5.2 Intra-AS Support

ABR uses iBGP and peers between systemIP or loopback IP addresses, as shown
in Figure 75.


Figure 75 ABR and iBGP
[Figure not reproduced. It shows a single AS (AS1) with clients, Zone 1, IP core, Zone 2, and source segments running IS-IS plus labeled BGP; iBGP and LDP sessions are established between system IP or loopback interfaces at the ABRs.]

The generate-basic-fec-only command is supported on leaf PE and ABR nodes. It only interoperates with intra-AS option C, that is, opaque type 7 with an inner opaque type 1. No other opaque type is supported.

5.16.5.5.3 Opaque Type Behavior with Basic FEC Generation

Table 42 describes the behavior of the different opaque types when the generate-basic-fec-only command is enabled or disabled.

Table 42 Opaque Type Behavior with Basic FEC Generation

FEC Opaque Type | generate-basic-fec-only Enabled
1 | Generate type 1 basic opaque when the FEC is resolved using a BGP route
3 | Same behavior as when generate-basic-fec-only is disabled
4 | Same behavior as when generate-basic-fec-only is disabled
7 with inner type 1 | Generate type 1 basic opaque
7 with inner type 3 or 4 | Same behavior as when generate-basic-fec-only is disabled
8 with inner type 1 | Same behavior as when generate-basic-fec-only is disabled


5.16.5.6 Redundancy and Resiliency

For mLDP, MoFRR is supported within the IGP domain; for example, between ASBRs that are not directly connected. MoFRR is not supported between directly connected ASBRs, such as ASBRs that use eBGP without IGP.

Figure 76 ASBRs Using eBGP Without IGP
[Figure not reproduced. It shows a leaf in AS3 and a root in AS1 connected through AS2; MoFRR operates inside each SR core AS, while no MoFRR is available on the directly connected eBGP links between the ASBRs.]

5.16.5.7 ASBR Physical Connection

Non-segmented mLDP functions with ASBRs directly connected or connected via an IGP domain, as shown in Figure 76.

5.16.5.8 OAM

LSPs are unidirectional tunnels. When an LSP ping is sent, the echo request is transmitted via the tunnel and the echo response is transmitted via vanilla IP to the source. Similarly, for a p2mp-lsp-ping on the root, the echo request is transmitted via the mLDP P2MP tunnel to all leafs, and the leafs use vanilla IP to respond to the root.

The echo request for mLDP is generated carrying a root Target FEC Stack TLV,
which is used to identify the multicast LDP LSP under test at the leaf. The Target FEC
Stack TLV must carry an mLDP P2MP FEC Stack Sub-TLV from RFC 6388 or
RFC 6512.


Figure 77 ECHO Request Target FEC Stack TLV
[Figure not reproduced. It shows the sub-TLV fields: Address Family, Addr Length, Root LSR Address, Opaque Length, and Opaque Value.]
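
The following Python sketch packs the fields listed in Figure 77 into a byte string. It is illustrative only and is not guaranteed to be byte-exact against the RFC 6388/6425 wire encodings; the sample opaque value (a generic LSP identifier carrying P2MP-ID 8193) is likewise shown purely as an example.

import socket
import struct

def p2mp_fec_stack_subtlv(root_ipv4, opaque_value):
    # Address Family 1 (IPv4), Addr Length 4, Root LSR Address, Opaque Length, Opaque Value.
    body = struct.pack("!HB", 1, 4)
    body += socket.inet_aton(root_ipv4)
    body += struct.pack("!H", len(opaque_value))
    body += opaque_value
    return body

# Example opaque: generic LSP identifier (type 1, length 4, P2MP-ID 8193).
opaque = struct.pack("!BH", 1, 4) + struct.pack("!I", 8193)
print(p2mp_fec_stack_subtlv("100.0.0.14", opaque).hex())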

The same concept applies to inter-AS and non-segmented mLDP. The leafs in the remote AS should be able to resolve the root via GRT routing. This is possible for inter-AS Option C, where the root is usually in the leaf RTM with an ASBR as the next hop.

For inter-AS Option B where the root is not present in the leaf RTM, the echo reply
cannot be forwarded via the GRT to the root. To solve this problem, for inter-AS
Option B, the SR OS uses VPRN unicast routing to transmit the echo reply from the
leaf to the root via VPRN.

Figure 78 MVPN Inter-AS Option B OAM
[Figure not reproduced. It shows the root (7750-4, 100.0.0.14) issuing oam p2mp-lsp-ping vpn-recursive-fec ldp <8193> with a FEC Stack sub-TLV <RD 60:60, root 100.0.0.14, P2MP-ID 8193>; the leaf (7750-1, 100.0.0.8) returns the echo response via VPRN unicast routing, using the root system IP configured as a VPRN loopback.]

As shown in Figure 78, the echo request for VPN recursive FEC is generated from
the root node by executing the p2mp-lsp-ping with the vpn-recursive-fec option.
When the echo request reaches the leaf, the leaf uses the sub-TLV within the echo
request to identify the corresponding VPN via the FEC which includes the RD, the
root, and the P2MP-ID.


After identifying the VPRN, the echo response is sent back via the VPRN and unicast
routes. There should be a unicast route (for example, root 100.0.0.14, as shown in
Figure 78) present in the leaf VPRN to allow the unicast routing of the echo reply
back to the root via VPRN. To distribute this root from the root VPRN to all VPRN
leafs, a loopback interface should be configured in the root VPRN and distributed to
all leafs via MP-BGP unicast routes.

Notes:

1. For SR OS, all P2MP mLDP FEC types will respond to the vpn-recursive-fec echo request. Leafs in the local AS and in inter-AS Option C will respond to the recursive-FEC TLV echo request, in addition to the leafs in inter-AS Option B.
   a. For non-inter-AS Option B cases where the root system IP is visible through the GRT, the echo reply is sent via the GRT, that is, not via the VPRN.
2. The vpn-recursive-fec is a Nokia proprietary implementation; therefore, third-party routers will not recognize the recursive FEC and will not generate an echo response.
   a. The user can generate the p2mp-lsp-ping without the vpn-recursive-fec option to discover non-Nokia routers in the local AS and inter-AS Option C, but not the inter-AS Option B leafs.

Table 43 OAM Functionality for Options B and C

OAM Command (for mLDP) | Leaf and Root in Same AS | Leaf and Root in Different AS (Option B) | Leaf and Root in Different AS (Option C)
p2mp-lsp-ping ldp | ✓ | X | ✓
p2mp-lsp-ping ldp-ssm | ✓ | X | ✓
p2mp-lsp-ping ldp vpn-recursive-fec | ✓ | ✓ | ✓
p2mp-lsp-trace | X | X | X

5.16.5.9 ECMP Support

In Figure 79, the leaf discovers the ROOT-1 from all three ASBRs (ASBR-3, ASBR-
4 and ASBR-5).


Figure 79 ECMP Support
[Figure not reproduced. It shows PE-2 (LEAF, 100.0.0.8) in AS3 learning ROOT-1 (PE-1, 100.0.0.14) through three ASBRs in its AS (ASBR-3 100.0.0.21, ASBR-4 100.0.0.22, ASBR-5 100.0.0.23), which in turn connect to ASBR-1 (100.0.0.2) and ASBR-2 (100.0.0.20) in AS1.]

The leaf chooses which ASBR will be used for the multicast stream using the
following process.

1. The leaf determines the number of ASBRs that should be part of the hash
calculation.
The number of ASBRs that are part of the hash calculation comes from the
configured ECMP (config>router>ecmp). For example, if the ECMP value is
set to 2, only two of the ASBRs will be part of the hash algorithm selection.
2. After deciding the upstream ASBR, the leaf determines whether there are
multiple equal cost paths between it and the chosen ASBR.
− If there are multiple ECMP paths between the leaf and the ASBR, the leaf
performs another ECMP selection based on the configured value in
config>router>ecmp. This is a recursive ECMP lookup.
− The first lookup chooses the ASBR and the second lookup chooses the path
to that ASBR.
For example, if ASBR-5 is chosen in Figure 79, there are three paths between the leaf and ASBR-5, so a second ECMP decision is made to choose the path.
3. At ASBR-5, the process is repeated. For example, in Figure 79, ASBR-5 goes through steps 1 and 2 to choose between ASBR-1 and ASBR-2, and then performs a second recursive ECMP lookup to choose the path to that ASBR.


When there are several candidate upstream LSRs, the LSR must select one
upstream LSR. The algorithm used for the LSR selection is a local matter. If the LSR
selection is done over a LAN interface and the Section 6 procedures are applied, the
procedure described in ECMP Hash Algorithm should be applied to ensure that the
same upstream LSR is elected among a set of candidate receivers on that LAN.

The ECMP hash algorithm ensures that there is a single forwarder over the LAN for
a particular LSP.

5.16.5.9.1 ECMP Hash Algorithm

The ECMP hash algorithm requires the opaque value of the FEC (see Table 41) and is based on RFC 6388, section 2.4.1.1; a sketch of the computation follows the list below.

• The candidate upstream LSRs are numbered from lower to higher IP addresses.
• The following hash is performed: H = (CRC32 (Opaque Value)) modulo N,
where N is the number of upstream LSRs. The “Opaque Value” is the field
identified in the FEC element after “Opaque Length”. The “Opaque Length”
indicates the size of the opaque value used in this calculation.
• The selected upstream LSR U is the LSR that has the number H above.
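
The following Python sketch performs the selection described above. It is only an illustration of the hash: the candidate addresses are hypothetical, and the opaque bytes passed in are whatever follows the Opaque Length field of the FEC element.

import ipaddress
import zlib

def select_upstream_lsr(candidates, opaque_value):
    # Number candidates from lower to higher IP address, then H = CRC32(opaque) mod N.
    ordered = sorted(candidates, key=ipaddress.ip_address)
    h = zlib.crc32(opaque_value) % len(ordered)
    return ordered[h]

print(select_upstream_lsr(["100.0.0.23", "100.0.0.21", "100.0.0.22"],
                          b"\x00\x00\x20\x01"))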

5.16.5.10 Dynamic mLDP and Static mLDP Co-existing on the Same Node

When creating a static mLDP tunnel, the user must configure the P2MP tunnel ID.

Example: *A:SwSim2>config>router# tunnel-interface
  no tunnel-interface ldp-p2mp <p2mp-id> sender <sender-address>
  tunnel-interface ldp-p2mp <p2mp-id> sender <sender-address> [root-node]

This p2mp-id can coincide with a dynamic mLDP p2mp-id (the dynamic mLDP LSP is created by PIM automatically, without any manual configuration). If the node has a static mLDP and a dynamic mLDP LSP with the same label and p2mp-id, there will be collisions and OAM errors.

Do not use a static mLDP and a dynamic mLDP LSP on the same node. If it is necessary to do so, ensure that the p2mp-id is not the same between the two tunnel types.


Static mLDP FECs originate at the leaf node. If the FEC is resolved using BGP, it will
not be forwarded downstream. A static mLDP FEC will only be created and
forwarded if it is resolved using IGP. For optimized Option C, the static mLDP can
originate at the leaf node because the root is exported from BGP to IGP at the ASBR;
therefore the leaf node resolves the root using IGP.

In the optimized Option C scenario, it is possible to have a static mLDP FEC originate
from a leaf node as follows:

static-mLDP <Root: ROOT-1, Opaque: <p2mp-id-1>>

A dynamic mLDP FEC can also originate from a separate leaf node with the same
FEC:

dynamic-mLDP <Root: ROOT-1, Opaque: <p2mp-id-1>>

In this case, the tree and the up-FEC will merge the static mLDP and dynamic mLDP
traffic at the ASBR. The user must ensure that the static mLDP p2mp-id is not used
by any dynamic mLDP LSPs on the path to the root.

Figure 80 illustrates the scenario where one leaf (LEAF-1) is using dynamic mLDP
for NG-MVPN and a separate leaf (LEAF-2) is using static mLDP for a tunnel
interface.

Figure 80 Static and Dynamic mLDP Interaction
[Figure not reproduced. It shows PE-2 (LEAF-1) in AS3 signaling a dynamic mLDP FEC <Root:100.0.0.14, Opaque<8010>> and PE-3 (LEAF-2) in AS4 signaling an identical static mLDP FEC; ASBR-3 merges them into a single upstream FEC toward PE-1 (ROOT-1, 100.0.0.14).]

In Figure 80, both FECs generated by LEAF-1 and LEAF-2 are identical, and the
ASBR-3 will merge the FECs into a single upper FEC. Any traffic arriving from
ROOT-1 to ASBR-3 over VPRN-1 will be forked to LEAF-1 and LEAF-2, even if the
tunnels were signaled for different services.


5.16.6 ASBR MoFRR


ASBR MoFRR in the inter-AS environment allows the leaf PE to signal a primary path
to the remote root through the first ASBR and a backup path through the second
ASBR, so that there is an active LSP signaled from the leaf node to the first local root
(ASBR-1 in Figure 81) and a backup LSP signaled from the leaf node to the second
local root (ASBR-2 in Figure 81) through the best IGP path in the AS.

Using Figure 81 as an example, ASBR-1 and ASBR-2 are local roots for the leaf
node, and ASBR-3 and ASBR-4 are local roots for ASBR-1 or ASBR-2. The actual
root node (ROOT-1) is also a local root for ASBR-3 and ASBR-4.

Figure 81 BGP Neighboring for MoFRR
[Figure not reproduced. It shows the leaf with BGP neighbors ASBR-1 (100.0.0.21) and ASBR-2 (100.0.0.20), which peer with ASBR-3 (100.0.0.2) and ASBR-4 (100.0.0.8), which in turn peer with ROOT-1; the source S1, G1 (10.60.3.2, 230.0.0.60) sits behind the root.]

In Figure 81, ASBR-2 is a disjointed ASBR. Within the AS spanning from the leaf to the local root (the ASBR selected in that AS), traditional IGP MoFRR is used. ASBR MoFRR is used from the leaf node to the local root, and IGP MoFRR is used for any P router that connects the leaf node to the local root.

5.16.6.1 IGP MoFRR Versus BGP (ASBR) MoFRR

The local leaf can be the actual leaf node that is connected to the host, or an ASBR
node that acts as the local leaf for the LSP in that AS, as illustrated in Figure 82.


Figure 82 ASBR Node Acting as Local Leaf
[Figure not reproduced. It shows the same topology as Figure 81; ASBR-1 and ASBR-2 act as local leafs for ASBR-3 and ASBR-4, and ASBR-3 and ASBR-4 act as local leafs for ROOT-1.]

Two types of MoFRR can exist in a unique AS:

• IGP MoFRR — When the mcast-upstream-frr command is enabled for LDP, the local leaf selects a single local root, either an ASBR or the actual root, and creates a FEC toward two different upstream LSRs using LFA/ECMP for the ASBR route. If there are multiple ASBRs directed toward the actual root, the local leaf only selects a single ASBR; for example, ASBR-1 in Figure 83. In this example, LSPs are not set up for ASBR-2. The local root ASBR-1 is selected by the local leaf, and the primary path is set up to ASBR-1 while the backup path is set up through ASBR-2.
For more information, see Multicast LDP Fast Upstream Switchover.

Figure 83 IGP MoFRR
[Figure not reproduced. It shows the leaf in AS2 with a primary path toward ASBR-1 (100.0.0.21) and a backup path toward ASBR-2 (100.0.0.20).]


• ASBR MoFRR — When the mcast-upstream-asbr-frr command is enabled for LDP, and the mcast-upstream-frr command is not enabled, the local leaf selects a single ASBR as the primary ASBR and another ASBR as the backup ASBR. The primary and backup LSPs are set up to these two ASBRs, as shown in Figure 84. Because the mcast-upstream-frr command is not configured, IGP MoFRR is not enabled in AS2, and therefore none of the P routers perform local IGP MoFRR.
BGP neighboring and sessions can be used to detect BGP peer failure from the local leaf to the ASBR, and can cause a MoFRR switch from the primary LSP to the backup LSP. Multihop BFD can be used between BGP neighbors to detect failure more quickly and remove the primary BGP peer (ASBR-1 in Figure 84) and its routes from the routing table, so that the leaf can switch to the backup LSP and backup ASBR.

Figure 84 ASBR MoFRR
[Figure not reproduced. It shows the leaf in AS2 with a primary LSP to ASBR-1 (100.0.0.21) and a backup LSP to ASBR-2 (100.0.0.20, acting as a P router), with plain P routers in between.]

The mcast-upstream-frr and mcast-upstream-asbr-frr commands can be configured together on the local leaf of each AS to create a high-resilience MoFRR solution. When both commands are enabled, the local leaf sets up ASBR MoFRR first, with a primary LSP to one ASBR (ASBR-1 in Figure 85) and a backup LSP to another ASBR (ASBR-2 in Figure 85). In addition, the local leaf protects each LSP using IGP MoFRR through the P routers in that AS.


Figure 85 ASBR MoFRR and IGP MoFRR
[Figure not reproduced. It shows the leaf in AS2 using mcast-upstream-asbr-frr toward ASBR-1 and ASBR-2, which use mcast-upstream-frr toward ASBR-3 and ASBR-4; the primary (P), backup (B), and their IGP MoFRR variants (P-P, P-B, B-P, B-B) merge where they intersect, and in the root AS regular MoFRR, not ASBR MoFRR, is used.]

Note: Enabling both the mcast-upstream-frr and mcast-upstream-asbr-frr commands can cause extra multicast traffic to be created. Ensure that the network is designed, and the appropriate commands are enabled, to meet network resiliency needs.

At each AS, either command can be configured; for example, in Figure 85, the leaf
is configured with mcast-upstream-asbr-frr enabled and will set up a primary LSP
to ASBR-1 and a backup LSP to ASBR-2. ASBR-1 and ASBR-2 are configured with
mcast-upstream-frr enabled, and will both perform IGP MoFRR to ASBR-3 only.
ASBR-2 can select ASBR-3 or ASBR-4 as its local root for IGP MoFRR; in this
example, ASBR-2 has selected ASBR-3 as its local root.

There are no ASBRs in the root AS (AS-1), so IGP MoFRR will be performed if
mcast-upstream-frr is enabled on ASBR-3.

The mcast-upstream-frr and mcast-upstream-asbr-frr commands work separately, depending on the desired behavior. If there is more than one local root, then mcast-upstream-asbr-frr can provide extra resiliency between the local ASBRs, and mcast-upstream-frr can provide extra redundancy between the local leaf and the local root by creating a disjointed LSP for each ASBR.

If the mcast-upstream-asbr-frr command is disabled and mcast-upstream-frr is enabled, and there is more than one local root, only a single local root will be selected and IGP MoFRR can provide local AS resiliency.

In the actual root AS, only the mcast-upstream-frr command needs to be configured.


5.16.6.2 ASBR MoFRR Leaf Behavior

With inter-AS MoFRR at the leaf, the leaf will select a primary ASBR and a backup
ASBR. These ASBRs are disjointed ASBRs.

The primary and backup LSPs will be set up using the primary and backup ASBRs,
as illustrated in Figure 86.

Figure 86 ASBR MoFRR Leaf Behavior
[Figure not reproduced. It shows the leaf in AS2 signaling the primary LSP with P:FEC:<ASBR1, opaque<root-1, pmpid>> toward ASBR-1 (100.0.0.21) and the backup LSP with B:FEC:<ASBR2, opaque<root-1, pmpid>> toward ASBR-2 (100.0.0.20).]

Note: Using Figure 86 as a reference, ensure that the paths to ASBR-1 and ASBR-2 are disjointed from the leaf. mLDP does not support TE and cannot create two disjointed LSPs from the leaf to ASBR-1 and ASBR-2; the operator and IGP architect must define the disjointed paths.

5.16.6.3 ASBR MoFRR ASBR Behavior

Each LSP at the ASBR will create its own primary and backup LSPs.

As shown in Figure 87, the primary LSP from the leaf to ASBR-1 generates a primary LSP to ASBR-3 (P-P) and a backup LSP to ASBR-4 (P-B). The backup LSP from the leaf also generates a backup-primary LSP to ASBR-4 (B-P) and a backup-backup LSP to ASBR-3 (B-B). When two similar FECs of an LSP intersect, the LSPs merge.


Figure 87 ASBR MoFRR ASBR Behavior
[Figure not reproduced. It shows the FECs signaled by each ASBR: from ASBR-1, P-P:FEC:<ASBR3, opaque<root-1, pmpid>> and P-B:FEC:<ASBR4, opaque<root-1, pmpid>>; from ASBR-2, B-P:FEC:<ASBR4, opaque<root-1, pmpid>> and B-B:FEC:<ASBR3, opaque<root-1, pmpid>>.]

5.16.6.4 MoFRR Root AS Behavior

In the root AS, MoFRR is based on regular IGP MoFRR. At the root, there are primary
and backup LSPs for each of the primary and backup LSPs that arrive from the
neighboring AS, as shown in Figure 88.

Figure 88 MoFRR Root AS Behavior
[Figure not reproduced. It shows ASBR-3 and ASBR-4 in the root AS signaling P-P, P-B, B-P, and B-B FECs of the form <root-1, opaque<pmpid>> toward ROOT-1; B-B merges into P-P and P-B merges into B-P, and regular MoFRR, not ASBR MoFRR, is used in the root AS.]


5.16.6.5 Traffic Flow

Figure 89 illustrates traffic flow based on the LSP setup. The backup LSPs of the
primary and backup LSPs (B-B, P-B) will be blocked in the non-leaf AS.

Figure 89 Traffic Flow
[Figure not reproduced. It shows traffic flowing from the source behind ROOT-1 over the P-P and B-P LSPs toward the leaf, while the P-B and B-B LSPs are blocked in the non-leaf ASs.]

5.16.6.6 Failure Detection and Handling

Failure detection can be achieved by using either of the following:

• IGP failure detection


− Enabling BFD is recommended for IGP protocols or static route (if static
route is used for IGP forwarding). This enables faster IGP failure detection.
− IGP can detect P router failures for IGP MoFRR (single AS).
− If the ASBR fails, IGP can detect the failure and converge the route table to
the local leaf. The local leaf in an AS can be either the ASBR or the actual
leaf.
− IGP routes to the ASBR address must be deleted for IGP failure to be
handled.
• BGP failure detection
− BGP neighboring must be established between the local leaf and each
ASBR. Using multi-hop BFD for ASBR failure is recommended.
− Each local leaf will attempt to calculate a primary ASBR or backup ASBR.
The local leaf will set up a primary LSP to the primary ASBR and a backup
LSP to the backup ASBR. If the primary ASBR has failed, the local leaf will
remove the primary ASBR from the next-hop list and will allow traffic to be
processed from the backup LSP and the backup ASBR.
− BGP MoFRR can offer faster ASBR failure detection than IGP MoFRR.


− BGP MoFRR can also be activated via IGP changes, such as if the node
detects a direct link failure, or if IGP removes the BGP neighbor system IP
address from the routing table. These events can cause a switch from the
primary ASBR to a backup ASBR. It is recommended to deploy IGP and
BFD in tandem for fast failure detection.

5.16.6.7 Failure Scenario

As shown in Figure 90, when ASBR-3 fails, ASBR-1 will detect the failure using
ASBR MoFRR and will enable the primary backup path (P-B). This is the case for
every LSP that has been set up for ASBR MoFRR in any AS.

Figure 90 Failure Scenario 1
[Figure not reproduced. It shows the failure of ASBR-3; ASBR-1 activates the P-B path toward ASBR-4 while the remaining LSPs continue toward ROOT-1.]

In another example, as shown in Figure 91, a failure of ASBR-1 causes the attached P router to generate a route update to the leaf, removing ASBR-1 from the routing table and triggering an ASBR MoFRR switchover on the leaf node.

Figure 91 Failure Scenario 2
[Figure not reproduced. It shows the failure of ASBR-1; an IGP update removes ASBR-1 from the leaf routing table, causing ASBR MoFRR on the leaf, which switches to the backup LSP through ASBR-2.]


5.16.6.8 ASBR MoFRR Consideration

As illustrated in Figure 92, it is possible for the ASBR-1 primary-primary (P-P) LSP
to be resolved using ASBR-3, and for the ASBR-2 backup-primary (B-P) LSP to be
resolved using the same ASBR-3.

Figure 92 Resolution via ASBR-3
[Figure not reproduced. It shows both the primary-primary LSP from ASBR-1 (P-P:FEC:<ASBR3, opaque<root-1, pmpid>>) and the backup-primary LSP from ASBR-2 (B-P:FEC:<ASBR3, opaque<root-1, pmpid>>) resolving through the same ASBR-3 toward ROOT-1.]

In this case, both the backup-primary LSP and primary-primary LSP will be affected
when a failure occurs on ASBR-3, as illustrated in Figure 93.

Figure 93 ASBR-3 Failure
[Figure not reproduced. It shows a failure of ASBR-3 affecting both the primary-primary and backup-primary LSPs; the primary-backup LSP through ASBR-4 takes over.]

In Figure 93, the MoFRR can switch to the primary-backup LSP between ASBR-4
and ASBR-1 by detecting BGP MoFRR failure on ASBR-3.


It is strongly recommended that LDP signaling be enabled on all links between the local leaf and the local roots, and that all P routers enable ASBR MoFRR and IGP MoFRR. If LDP signaling is not enabled on all links, the routing table may resolve a next hop for the LDP FEC over a link with no LDP signaling, and the primary or backup MoFRR LSPs may not be set up.

ASBR MoFRR guarantees that the ASBRs are disjointed, but does not guarantee that the path from the local leaf to the local ASBR is disjointed. The primary and backup LSPs take the best paths as calculated by IGP, and if IGP selects the same path for the primary ASBR and the backup ASBR, the two LSPs will not be disjointed. Ensure that two disjointed paths are created to the primary and backup ASBRs.

5.16.6.9 ASBR MoFRR Opaque Support

Table 44 lists the FEC opaque types that are supported by ASBR MoFRR.

Table 44 ASBR MoFRR Opaque Support

FEC Opaque Type | Supported for ASBR MoFRR
Type 1 | Y
Type 3 | N
Type 4 | N
Type 7, inner type 1 | Y
Type 7, inner type 3 or 4 | N
Type 8, inner type 1 | Y
Type 250 | N
Type 251 | N

5.16.7 MBB for MoFRR


Any optimization of the MoFRR primary LSP should be performed by the Make
Before Break (MBB) mechanism. For example, if the primary LSP fails, a switch to
the backup LSP will occur and the primary LSP will be signaled. After the primary
LSP is successfully re-established, MoFRR will switch from the backup LSP to the
primary LSP.


MBB is performed from the leaf node to the root node, and as such it is not performed
per autonomous system (AS); the MBB signaling must be successful from the leaf
PE to the root PE, including all ASBRs and P routers in between.

The conditions of MBB for mLDP LSPs are:

• re-calculation of the SPF
• failure of the primary ASBR

If the primary ASBR fails and a switch is made to the backup ASBR, and the backup ASBR is the only other ASBR available, the MBB mechanism does not signal any new LSP and uses this backup LSP as the primary.

5.16.8 Add-path for Route Reflectors


If the ASBRs and the local leaf are connected by a route reflector, the BGP add-path command must be enabled on the route reflector for mcast-vpn-ipv4 and mcast-vpn-ipv6, or for label-ipv4 if Option C is used. The add-path command forces the route reflector to advertise all ASBRs to the local leaf as the next hop for the actual root.

If the add-path command is not enabled on the route reflector, only a single ASBR is advertised to the local leaf, and ASBR MoFRR will not be available.

5.17 Multicast LDP Fast Upstream Switchover


This feature allows a downstream LSR of a multicast LDP (mLDP) FEC to perform a fast switchover and source the traffic from another upstream LSR while IGP and LDP are converging due to a failure of the upstream LSR that is the primary next-hop of the root LSR for the P2MP FEC. In essence, it provides an upstream Fast-Reroute (FRR) node-protection capability for the mLDP FEC packets. It does so at the expense of traffic duplication from two different upstream nodes into the node that performs the fast upstream switchover.

The detailed procedures for this feature are described in draft-pdutta-mpls-mldp-up-redundancy.


5.17.1 Feature Configuration


The user enables the mLDP fast upstream switchover feature by configuring the
following option in CLI:

config>router>ldp>mcast-upstream-frr

When this command is enabled and LDP is resolving an mLDP FEC received from a downstream LSR, it checks whether an ECMP next-hop or an LFA next-hop to the root LSR node exists. If LDP finds one, it programs a primary ILM on the interface corresponding to the primary next-hop and a backup ILM on the interface corresponding to the ECMP or LFA next-hop. LDP then sends the corresponding labels to both upstream LSR nodes. In normal operation, the primary ILM accepts packets while the backup ILM drops them. If the interface or the upstream LSR of the primary ILM goes down, causing the LDP session to go down, the backup ILM then starts accepting packets.

In order to make use of the ECMP next-hop, the user must configure the ecmp value
in the system to at least two (2) using the following command:

config>router>ecmp

In order to make use of the LFA next-hop, the user must enable LFA using the
following commands:

config>router>isis>loopfree-alternate

config>router>ospf>loopfree-alternate

Enabling IP FRR or LDP FRR using the following commands is not strictly required, because LDP only needs to know where the alternate next-hop to the root LSR is in order to send the Label Mapping message that programs the backup ILM at the initial signaling of the tree. Thus, enabling the LFA option is sufficient. If, however, unicast IP and LDP prefixes need to be protected, then these features and the mLDP fast upstream switchover can be enabled concurrently:

config>router>ip-fast-reroute

config>router>ldp>fast-reroute


Caution: The mLDP FRR fast switchover relies on the fast detection of loss of the LDP session to the upstream peer to which the primary ILM label had been advertised. We strongly recommend that you perform the following:

1. Enable BFD on all LDP interfaces to upstream LSR nodes. When BFD detects the loss of the last adjacency to the upstream LSR, it immediately brings down the LDP session, which causes the IOM to activate the backup ILM.
2. If there is a concurrent T-LDP adjacency to the same upstream LSR node, enable BFD on the T-LDP peer in addition to enabling it on the interface.
3. Enable the ldp-sync-timer option on all interfaces to the upstream LSR nodes. If an LDP session to the upstream LSR to which the primary ILM is resolved goes down for any reason other than a failure of the interface or of the upstream LSR, routing and LDP go out of sync. This means the backup ILM remains activated until the next time SPF is rerun by IGP. By enabling the IGP-LDP synchronization feature, the advertised link metric is changed to the maximum value as soon as the LDP session goes down. This in turn triggers an SPF, and LDP will likely download a new set of primary and backup ILMs.

5.17.2 Feature Behavior


This feature allows a downstream LSR to send a label binding to two upstream LSR nodes, but to accept traffic only from the ILM on the interface to the primary next-hop of the root LSR for the P2MP FEC in normal operation, and to accept traffic from the ILM on the interface to the backup next-hop under failure. A candidate upstream LSR node must be either an ECMP next-hop or a Loop-Free Alternate (LFA) next-hop. This allows the downstream LSR to perform a fast switchover and source the traffic from another upstream LSR while IGP is converging due to a failure of the LDP session of the upstream peer that is the primary next-hop of the root LSR for the P2MP FEC. In a sense, it provides an upstream Fast-Reroute (FRR) node-protection capability for the mLDP FEC packets.


Figure 94 mLDP LSP with Backup Upstream LSR Nodes
[Figure not reproduced. It shows root LSR R at the top, primary upstream LSR U and backup upstream LSR U' below it (link metrics 4, 5, and 10), and the downstream leaf nodes.]

Upstream LSR U in Figure 94 is the primary next-hop for the root LSR R of the P2MP
FEC. This is also referred to as primary upstream LSR. Upstream LSR U’ is an
ECMP or LFA backup next-hop for the root LSR R of the same P2MP FEC. This is
referred to as backup upstream LSR. Downstream LSR Z sends a label mapping
message to both upstream LSR nodes and programs the primary ILM on the
interface to LSR U and a backup ILM on the interface to LSR U’. The labels for the
primary and backup ILMs must be different. LSR Z thus will attract traffic from both
of them. However, LSR Z will block the ILM on the interface to LSR U’ and will only
accept traffic from the ILM on the interface to LSR U.

In case of a failure of the link to LSR U or of the LSR U itself causing the LDP session
to LSR U to go down, LSR Z will detect it and reverse the ILM blocking state and will
immediately start receiving traffic from LSR U’ until IGP converges and provides a
new primary next-hop, and ECMP or LFA backup next-hop, which may or may not
be on the interface to LSR U’. At that point LSR Z will update the primary and backup
ILMs in the data path.

LDP uses the interface of either an ECMP next-hop or an LFA next-hop to the root LSR prefix, whichever is available, to program the backup ILM. ECMP next-hop and LFA next-hop are, however, mutually exclusive for a given prefix; IGP installs the ECMP next-hop in preference to an LFA next-hop for a prefix in the Routing Table Manager (RTM).

If one or more ECMP next-hops for the root LSR prefix exist, LDP picks the interface for the primary ILM based on the rules of mLDP FEC resolution specified in RFC 6388:


1. The candidate upstream LSRs are numbered from lower to higher IP address.
2. The following hash is performed: H = (CRC32(Opaque Value)) modulo N,
where N is the number of upstream LSRs. The Opaque Value is the field
identified in the P2MP FEC Element right after 'Opaque Length' field. The
'Opaque Length' indicates the size of the opaque value used in this calculation.
3. The selected upstream LSR U is the LSR that has the number H.

LDP then picks the interface for the backup ILM using the following new rules:

if (H + 1 < NUM_ECMP) {
    // If the hashed entry is not the last in the next-hops, then pick the next one as backup.
    backup = H + 1;
} else {
    // Wrap around and pick the first.
    backup = 1;
}

In some topologies, it is possible that neither an ECMP nor an LFA next-hop is found. In this case, LDP programs the primary ILM only.
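
A runnable Python sketch of the combined primary and backup selection is shown below. It is not SR OS code; the candidate addresses are illustrative and the opaque value is a plain byte string.

import ipaddress
import zlib

def pick_primary_and_backup(candidates, opaque_value):
    # Primary: RFC 6388 hash over candidates ordered from lower to higher address.
    ordered = sorted(candidates, key=ipaddress.ip_address)
    n = len(ordered)
    h = zlib.crc32(opaque_value) % n
    primary = ordered[h]
    # Backup: the next candidate, wrapping around to the first one.
    backup = ordered[(h + 1) % n] if n > 1 else None
    return primary, backup

print(pick_primary_and_backup(["10.0.0.1", "10.0.0.2"], b"\x00\x00\x20\x01"))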

5.17.3 Uniform Failover from Primary to Backup ILM


When LDP programs the primary ILM record in the data path, it provides the IOM with the Protect-Group Identifier (PG-ID) associated with this ILM, which identifies the upstream LSR being protected.

In order for the system to perform a fast switchover to the backup ILM in the fast path, LDP applies to the primary ILM uniform FRR failover procedures similar in concept to the ones applied to an NHLFE in the existing implementation of LDP FRR for unicast FECs. There are, however, important differences to note. LDP associates a unique Protect Group ID (PG-ID) with all mLDP FECs that have their primary ILM on any LDP interface pointing to the same upstream LSR. This PG-ID is assigned per upstream LSR, regardless of the number of LDP interfaces configured to this LSR. As such, this PG-ID is different from the one associated with unicast FECs, which is assigned to each downstream LDP interface and next-hop. If, however, a failure causes an interface to go down and also causes the LDP session to the upstream peer to go down, both PG-IDs have their state updated in the IOM, and thus the uniform FRR procedures are triggered for both the unicast LDP FECs forwarding packets toward the upstream LSR and the mLDP FECs receiving packets from the same upstream LSR.

When the mLDP FEC is programmed in the data path, the primary and backup ILM records thus contain the PG-ID that the FEC is associated with. The IOM also maintains a list of PG-IDs and a state bit that indicates whether each PG-ID is UP or DOWN. When the PG-ID state is UP, the primary ILM for each mLDP FEC is open and accepts mLDP packets, while the backup ILM is blocked and drops mLDP packets. LDP sends a PG-ID DOWN notification to the IOM when it detects that the LDP session to the peer has gone down. This notification causes the backup ILMs associated with this PG-ID to open and accept mLDP packets immediately. When IGP re-converges, an updated pair of primary and backup ILMs is downloaded for each mLDP FEC by LDP into the IOM, with the corresponding PG-IDs.
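
The following Python sketch models this PG-ID driven behavior. The class and attribute names are hypothetical and are only meant to show how a single PG-ID state flip opens every backup ILM that references it.

class PgId:
    def __init__(self):
        self.up = True            # one state bit per protected upstream LSR

class MldpIlmPair:
    def __init__(self, pg_id):
        self.pg_id = pg_id        # both primary and backup ILM records reference the PG-ID

    def accepts(self, ilm):
        # Primary ILM accepts while the PG-ID is UP; backup ILM accepts when it is DOWN.
        return self.pg_id.up if ilm == "primary" else not self.pg_id.up

pg = PgId()
fecs = [MldpIlmPair(pg) for _ in range(3)]      # several mLDP FECs share the same PG-ID
assert all(f.accepts("primary") and not f.accepts("backup") for f in fecs)
pg.up = False                                    # PG-ID DOWN: LDP session to the peer lost
assert all(f.accepts("backup") and not f.accepts("primary") for f in fecs)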

If multiple LDP interfaces exist to the upstream LSR, a failure of one interface brings down the link Hello adjacency on that interface but not the LDP session, which is still associated with the remaining link Hello adjacencies. In this case, the upstream LSR updates the NHLFE for the mLDP FEC in the IOM to use one of the remaining links. The switchover time in this case is not managed by the uniform failover procedures.

5.18 Multi-Area and Multi-Instance Extensions to LDP

In order to extend LDP across multiple areas of an IGP instance or across multiple IGP instances, the current standard LDP implementation based on RFC 3036 requires that all /32 prefixes of PEs be leaked between the areas or instances. This is because an exact match of the prefix in the routing table is required to install the prefix binding in the LDP Forwarding Information Base (FIB). Although a router does this by default when configured as an Area Border Router (ABR), it increases the IGP convergence time on routers when the number of PE nodes scales to thousands.

Multi-area and multi-instance extensions to LDP provide an optional behavior by which LDP installs a prefix binding in the LDP FIB by simply performing a longest prefix match with an aggregate prefix in the routing table (RIB). That way, the ABR can be configured to summarize the /32 prefixes of the PE routers. This method is compliant with RFC 5283, LDP Extension for Inter-Area Label Switched Paths (LSPs).
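
A minimal Python sketch of the difference between the two resolution modes, using illustrative prefixes, is shown below; it is not SR OS code, only a model of exact-match versus RFC 5283 longest-prefix-match FEC activation.

import ipaddress

def fec_resolves(rib, fec_prefix, aggregate_prefix_match=False):
    # Exact match is the default behavior; the aggregate option accepts any covering RIB prefix.
    fec = ipaddress.ip_network(fec_prefix)
    if not aggregate_prefix_match:
        return fec in rib
    return any(fec.subnet_of(p) for p in rib)

rib = [ipaddress.ip_network("10.20.0.0/16")]      # ABR summarizes the PE /32 prefixes
print(fec_resolves(rib, "10.20.1.1/32"))                                   # False
print(fec_resolves(rib, "10.20.1.1/32", aggregate_prefix_match=True))      # True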


5.18.1 LDP Shortcut for BGP Next-Hop Resolution


LDP shortcuts for BGP next-hop resolution allow for the deployment of a ‘route-less core’ infrastructure on the 7750 SR and 7950 XRS. Many service providers either have removed or intend to remove the IBGP mesh from their network core, retaining only the mesh between routers connected to areas of the network that require routing to external routes.

Shortcuts are implemented by utilizing Layer 2 tunnels (i.e., MPLS LSPs) as next
hops for prefixes that are associated with the far end termination of the tunnel. By
tunneling through the network core, the core routers forwarding the tunnel have no
need to obtain external routing information and are immune to attack from external
sources.

The tunnel table contains all available tunnels indexed by remote destination IP
address. LSPs derived from received LDP /32 route FECs will automatically be
installed in the table associated with the advertising router-ID when IGP shortcuts are
enabled.

Evaluating tunnel preference is based on the following order in descending priority:

1. LDP /32 route FEC shortcut
2. Actual IGP next-hop

If a higher priority shortcut is not available or is not configured, a lower priority
shortcut is evaluated. When no shortcuts are configured or available, the IGP next-
hop is always used. Shortcut and next-hop determination is event driven, based on
dynamic changes in the tunneling mechanisms and routing states.

Refer to the OS Routing Protocols Guide for details on the use of LDP FEC and
RSVP LSP for BGP Next-Hop Resolution.

5.18.2 LDP Shortcut for IGP Routes


The LDP shortcut for IGP route resolution feature allows forwarding of packets to IGP
learned routes using an LDP LSP. When LDP shortcut is enabled globally, IP
packets forwarded over a network IP interface will be labeled with the label received
from the next-hop for the route and corresponding to the FEC-prefix matching the
destination address of the IP packet. In such a case, the routing table will have the
shortcut next-hop as the best route. If such an LDP FEC does not exist, then the
routing table will have the regular IP next-hop and regular IP forwarding will be
performed on the packet.


An egress LER advertises and maintains a FEC/label binding for each IGP learned
route. This is performed by the existing LDP fec-originate capability.

5.18.2.1 LDP Shortcut Configuration

The user enables the use of LDP shortcut for resolving IGP routes by entering the
global command config>router>ldp-shortcut.

This command enables forwarding of user IP packets and specified control IP
packets using LDP shortcuts over all network interfaces in the system which
participate in the IS-IS and OSPF routing protocols. The default is to disable the LDP
shortcut across all interfaces in the system.
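
For illustration, the feature can be toggled with the following commands; this is a
sketch based only on the global command named above:

Example:
    config
        router
            ldp-shortcut       (enable LDP shortcut for IGP route resolution)
            no ldp-shortcut    (return to the default, shortcut disabled)
        exit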

5.18.2.2 IGP Route Resolution

When LDP shortcut is enabled, LDP populates the RTM with next-hop entries
corresponding to all prefixes for which it activated an LDP FEC. For a given prefix,
two route entries are populated in RTM. One corresponds to the LDP shortcut next-
hop and has an owner of LDP. The other one is the regular IP next-hop. The LDP
shortcut next-hop always has preference over the regular IP next-hop for forwarding
user packets and specified control packets over a given outgoing interface to the
route next-hop.

The prior activation of the FEC by LDP is done by performing an exact match with an
IGP route prefix in RTM. It can also be done by performing a longest prefix-match
with an IGP route in RTM if the aggregate-prefix-match option is enabled globally in
LDP.

This feature is not restricted to /32 FEC prefixes. However, only /32 FEC prefixes will
be populated in the CPM Tunnel Table for use as a tunnel by services.

All user packets and specified control packets for which the longest prefix match in
RTM yields the FEC prefix will be forwarded over the LDP LSP. Currently, the control
packets that can be forwarded over the LDP LSP are ICMP ping and UDP
traceroute. The following is an example of the resolution process.

Assume the egress LER advertised a FEC for some /24 prefix using the fec-originate
command. At the ingress LER, LDP resolves the FEC by checking in RTM that an
exact match exists for this prefix. Once LDP has activated the FEC, it programs the
NHLFE in the egress data path and the LDP tunnel information in the ingress data
path tunnel table.


Next, LDP provides the shortcut route to RTM, which will associate it with the same
/24 prefix. There will be two entries for this /24 prefix: the LDP shortcut next-hop and
the regular IP next-hop. The latter was used by LDP to validate and activate the FEC.
RTM then resolves all user prefixes which succeed a longest prefix match against
the /24 route entry to use the LDP LSP.

Assume now that the aggregate-prefix-match option was enabled and that LDP found a /16
prefix in RTM to activate the FEC for the /24 FEC prefix. In this case, RTM adds a
new, more specific /24 route entry with the LDP LSP as the next-hop, but it still does
not have a specific /24 IP route entry. RTM then resolves all user prefixes which
succeed a longest prefix match against the /24 route entry to use the LDP LSP, while
all other prefixes which succeed a longest prefix match against the /16 route entry
will use the IP next-hop.
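
As a sketch of the egress LER side of this example, the /24 FEC could be originated
as follows; the prefix value is illustrative and only the pop form of fec-originate, the
form referenced elsewhere in this chapter, is shown:

Example:
    config
        router
            ldp
                fec-originate 192.0.2.0/24 pop
            exit
        exit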

5.18.2.3 LDP Shortcut Forwarding Plane

Once LDP has activated a FEC for a given prefix and programmed RTM, it also programs
the ingress Tunnel Table in the forwarding engine with the LDP tunnel information.

When an IPv4 packet is received on an ingress network interface, or a subscriber IES
interface, or a regular IES interface, the lookup of the packet by the ingress
forwarding engine will result in the packet being sent labeled with the label stack
corresponding to the NHLFE of the LDP LSP when the preferred RTM entry
corresponds to an LDP shortcut.

If the preferred RTM entry corresponds to an IP next-hop, the IPv4 packet is
forwarded unlabeled.

5.18.3 ECMP Considerations


When ECMP is enabled and multiple equal-cost next-hops exist for the IGP route, the
ingress forwarding engine sprays the packets for this route based on the hashing routine
currently supported for IPv4 packets.

When the preferred RTM entry corresponds to an LDP shortcut route, spraying will
be performed across the multiple next-hops for the LDP FEC. The FEC next-hops
can either be direct link LDP neighbors or T-LDP neighbors reachable over RSVP
LSPs in the case of LDP-over-RSVP, but not both. This is consistent with the existing
ECMP implementation for LDP.

When the preferred RTM entry corresponds to a regular IP route, spraying will be
performed across regular IP next-hops for the prefix.


5.18.4 Disabling TTL Propagation in an LSP Shortcut


This feature provides the option for disabling TTL propagation from a transit or a
locally generated IP packet header into the LSP label stack when an LDP LSP is
used as a shortcut for BGP next-hop resolution, a static-route next-hop resolution, or
for an IGP route resolution.

A transit packet is a packet received from an IP interface and forwarded over the LSP
shortcut at ingress LER.

A locally-generated IP packet is any control plane packet generated from the CPM
and forwarded over the LSP shortcut at ingress LER.

TTL handling can be configured for all LDP LSP shortcuts originating on an ingress
LER using the following global commands:

config>router>ldp>[no] shortcut-transit-ttl-propagate

config>router>ldp>[no] shortcut-local-ttl-propagate

These commands apply to all LDP LSPs which are used to resolve static routes,
BGP routes, and IGP routes.

When the no form of the above command is enabled for local packets, TTL
propagation is disabled on all locally generated IP packets, including ICMP Ping,
traceroute, and OAM packets that are destined to a route that is resolved to the LSP
shortcut. In this case, a TTL of 255 is programmed onto the pushed label stack. This
is referred to as pipe mode.

Similarly, when the no form is enabled for transit packets, TTL propagation is
disabled on all IP packets received on any IES interface and destined to a route that
is resolved to the LSP shortcut. In this case, a TTL of 255 is programmed onto the
pushed label stack.
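
For example, to operate all LDP shortcuts in pipe mode for both transit and locally
generated packets, the no forms of the two commands above would be used:

Example:
    config
        router
            ldp
                no shortcut-transit-ttl-propagate
                no shortcut-local-ttl-propagate
            exit
        exit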

5.19 LDP Graceful Handling of Resource Exhaustion
This feature enhances the behavior of LDP when a data path or a CPM resource
required for the resolution of a FEC is exhausted. In prior releases, the LDP module
shut down. The user was required to fix the issue causing the FEC scaling to be
exceeded and to restart the LDP module by executing the unshut command.


5.19.1 LDP Base Graceful Handling of Resources


This feature implements a base graceful handling capability by which the LDP
interface to the peer, or the targeted peer in the case of a Targeted LDP (T-LDP)
session, is shut down. If LDP tries to resolve a FEC over a link or a targeted LDP
session and it runs out of data path or CPM resources, it brings down that interface
or targeted peer, which brings down the Hello adjacency over that interface to the
resolved link LDP peer or to the targeted peer. The interface is brought down in the LDP
context only and is still available to other applications such as IP forwarding and
RSVP LSP forwarding.

Depending on what type of resource was exhausted, the scope of the action taken by
LDP differs. Some resources, such as NHLFE, have interface-local impact,
meaning that only the interface to the downstream LSR which advertised the label is
shut down. Some resources, such as ILM, have global impact, meaning that they
impact every downstream peer or targeted peer which advertised the FEC to the
node. The following examples illustrate this.

• For NHLFE exhaustion, one or more interfaces or targeted peers, if the FEC is
ECMP, will be shut down. The ILM is maintained as long as there is at least one
downstream for the FEC for which the NHLFE has been successfully
programmed.
• For an exhaustion of an ILM for a unicast LDP FEC, all interfaces to peers or all
targeted peers which sent the FEC will be shut down. No deprogramming of the data
path is required since the FEC is not programmed.
• An exhaustion of ILM for an mLDP FEC can happen during primary ILM
programming, MBB ILM programming, or multicast upstream FRR backup ILM
programming. In all cases, the P2MP index for the mLDP tree is deprogrammed
and the interfaces to each downstream peer which sent a Label Mapping
message associated with this ILM are shut down.

After the user has taken action to free up resources, they must manually unshut the
interface or the targeted peer to bring it back into operation. This re-establishes
the Hello adjacency and resumes the resolution of FECs over the
interface or to the targeted peer.

Detailed guidelines for using the feature and for troubleshooting a system which
activated this feature are provided in the following sections.

This behavior became the default behavior in Release 11.0.R4 and interoperates
with the SR OS based LDP implementation and any third-party LDP
implementation.

The following data path resources can trigger this mechanism:


• NHLFE
• ILM
• Label-to-NHLFE (LTN)
• Tunnel Index
• P2MP Index

The following CPM resources can trigger this mechanism:

• Label allocation

5.20 LDP Enhanced Graceful Handling of Resources
This feature is an enhanced graceful handling capability which is supported only
among SR OS based implementations. If LDP tries to resolve a FEC over a link or a
targeted session and it runs out of data path or CPM resources, it puts the LDP/
T-LDP session into overload state. As a result, it releases to its LDP peer the
labels of the FECs which it could not resolve and also sends an LDP notification
message to all LDP peers with the new overload status for the FEC type
which caused the overload. The notification of overload is per FEC type, that is, unicast
IPv4, P2MP mLDP, and so on, and not per individual FEC. The peer which caused the
overload and all other peers stop sending any new FECs of that type until this
node updates the notification stating that it is no longer in overload state for that FEC
type. Previously resolved FECs of this type, and FECs of other types, continue to
forward traffic normally to this peer and to all other peers.

After the user has taken action to free up resources, they must manually
clear the overload state of the LDP/T-LDP sessions towards their peers.

The enhanced mechanism will be enabled instead of the base mechanism only if
both LSR nodes advertise this new LDP capability at the time the LDP session is
initialized. Otherwise, they will continue to use the base mechanism.

This feature operates among SR OS LSR nodes using two private vendor
LDP capabilities:

• The first one is the LSR Overload Status TLV to signal or clear the overload
condition.
• The second one is the Overload Protection Capability Parameter which allows
LDP peers to negotiate the use or not of the overload notification feature and
hence the enhanced graceful handling mechanism.


When interoperating with an LDP peer which does not support the enhanced
resource handling mechanism, the router reverts automatically to the default base
resource handling mechanism.

The following are the details of the mechanism.

5.20.1 LSR Overload Notification


When an upstream LSR is overloaded for a FEC type, it notifies one or more
downstream peer LSRs that it is overloaded for the FEC type.

When a downstream LSR receives overload status ON notification from an upstream
LSR, it does not send further label mappings for the specified FEC type. When a
downstream LSR receives overload OFF notification from an upstream LSR, it sends
pending label mappings to the upstream LSR for the specified FEC type.

This feature introduces a new TLV referred to as LSR Overload Status TLV, shown
below. This TLV is encoded using vendor proprietary TLV encoding as per RFC
5036. It uses a TLV type value of 0x3E02 and the Timetra OUI value of 0003FA.

0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|U|F| Overload Status TLV Type | Length |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Timetra OUI = 0003FA |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|S| Reserved |

where:
U-bit: Unknown TLV bit, as described in RFC 5036. The value MUST
be 1, which means that if the TLV is unknown to the receiver, the
receiver should ignore it.

F-bit: Forward unknown TLV bit, as described in RFC 5036. The value
of this bit MUST be 1 since an LSR Overload TLV is sent only between
two immediate LDP peers and is not forwarded.

S-bit: The State Bit. It indicates whether the sender is setting the
LSR Overload Status ON or OFF. The State Bit value is used as
follows:

1 - The TLV is indicating LSR overload status as ON.

0 - The TLV is indicating LSR overload status as OFF.


When an LSR that implements the procedures defined in this document generates
LSR overload status, it MUST send the LSR Overload Status TLV in an LDP Notification
Message accompanied by a FEC TLV. The FEC TLV must contain one Typed
Wildcard FEC TLV that specifies the FEC type to which the overload status
notification applies.

This feature re-uses the Typed Wildcard FEC Element which is
defined in RFC 5918.

5.20.2 LSR Overload Protection Capability


To ensure backward compatibility with the procedures in RFC 5036, an LSR supporting
Overload Protection needs a means to determine whether a peering LSR supports
overload protection or not.

An LDP speaker that supports the LSR Overload Protection procedures as defined
in this document MUST inform its peers of the support by including an LSR Overload
Protection Capability Parameter in its Initialization message. The Capability
Parameter follows the guidelines and all Capability Negotiation Procedures as
defined in RFC 5561. This TLV is encoded using vendor proprietary TLV encoding
as per RFC 5036. It uses a TLV type value of 0x3E03 and the Timetra OUI value of
0003FA.

0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|U|F| LSR Overload Cap TLV Type | Length |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Timetra OUI = 0003FA |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|S| Reserved |
+-+-+-+-+-+-+-+-+
Where:

U and F bits: MUST be 1 and 0 respectively, as per section 3 of LDP
Capabilities [RFC5561].

S-bit: MUST be 1 (indicates that capability is being advertised).

5.20.3 Procedures for LSR Overload Protection

The procedures defined in this document apply only to LSRs that support
Downstream Unsolicited (DU) label advertisement mode and Liberal Label Retention
Mode. An LSR that implements LSR overload protection follows these
procedures:


1. An LSR must not use LSR overload notification procedures with a peer LSR that
has not specified LSR Overload Protection Capability in Initialization Message
received from the peer LSR.
2. When an upstream LSR detects that it is overloaded with a FEC type then it
MUST initiate an LDP notification message with the S-bit ON in LSR Overload
Status TLV and a FEC TLV containing the Typed Wildcard FEC Element for the
specified FEC type. This message may be sent to one or more peers.
3. After it has notified peers of its overload status ON for a FEC type, the
overloaded upstream LSR can send a Label Release for a set of FEC elements to
the respective downstream LSRs to offload its LIB to below a certain watermark.
4. When an upstream LSR that was previously overloaded for a FEC type detects
that it is no longer overloaded, it must send an LDP notification message with
the S-bit OFF in LSR Overload Status TLV and FEC TLV containing the Typed
Wildcard FEC Element for the specified FEC type.
5. When an upstream LSR has notified its peers that it is overloaded for a FEC
type, then a downstream LSR must not send new label mappings for the
specified FEC type to the upstream LSR.
6. When a downstream LSR receives an LSR overload notification from a peering
LSR with status OFF for a FEC type, then the receiving LSR must send any label
mappings for the FEC type which were pending and are now eligible to be sent
to the upstream LSR.
7. When an upstream LSR is overloaded for a FEC type and it receives Label
Mapping for that FEC type from a downstream LSR then it can send Label
Release to the downstream peer for the received Label Mapping with LDP
Status Code as No_Label_Resources as defined in RFC 5036.

5.21 Bidirectional Forwarding Detection for LDP LSPs
Bidirectional forwarding detection (BFD) for MPLS LSPs monitors the LSP between
its LERs, irrespective of how many LSRs the LSP may traverse. This enables the
detection of faults that are local to individual LSPs, whether or not they also affect
forwarding for other LSPs or IP packet flows. BFD is ideal for monitoring LSPs that
carry high-value services, where detection of forwarding failures in a minimal amount
of time is critical. The system will raise an SNMP trap, as well as indicate the BFD
session state in show and tools dump commands if an LSP BFD session goes
down.


SR OS supports LSP BFD on RSVP and LDP LSPs. See MPLS and RSVP for
information on using LSP BFD on RSVP LSPs. BFD packets are encapsulated in an
MPLS label stack corresponding to the FEC that the BFD session is associated with,
as described in RFC 5884, Section 7. SR OS does not support monitoring
multiple ECMP paths associated with the same LDP FEC using
multiple simultaneous LSP BFD sessions. However, LSP BFD still provides
continuity checking for paths associated with a target FEC. LDP provides a single
path to LSP BFD, corresponding to the first resolved lower ifindex next-hop, and
the first resolved lower tid index for LDP-over-RSVP cases. The path may potentially
change over the lifetime of the FEC, based on resolution changes. The system tracks
the changing path and maintains the LSP BFD session.

Since LDP LSPs are unidirectional, a routed return path is used for the BFD control
packets traveling from the egress LER to the ingress LER.

5.21.1 Bootstrapping and Maintaining LSP BFD Sessions


A BFD session on an LSP is bootstrapped using LSP ping. LSP ping is used to
exchange the local and remote discriminator values to use for the BFD session for a
particular MPLS LSP or FEC.

The process for bootstrapping an LSP BFD session for LDP is the same as for RSVP,
as described in Bidirectional Forwarding Detection for MPLS LSPs.

SR OS supports the sending of periodic LSP ping messages on an LSP for which
LSP BFD has been configured, as specified in RFC 5884. The ping messages are
sent, along with the bootstrap TLV, at a configurable interval for LSPs on which bfd-
enable has been configured. The default interval is 60 s, with a maximum interval of
300 s. The LSP ping echo request message uses the system IP address as the
default source address. An alternative source address consisting of any routable
address that is local to the node may be configured, and will be used if the local
system IP address is not routable from the far-end node.

Note: SR OS does not take any action if a remote system fails to respond to a periodic LSP
ping message. However, when the show>test-oam>lsp-bfd command is executed, it will
display a return code of zero and a replying node address of 0.0.0.0 if the periodic LSP ping
times out.

The periodic LSP ping interval is configured using the config>router>ldp>lsp-bfd
prefix-list>lsp-ping-interval seconds command.


Configuring an LSP ping interval of 0 disables periodic LSP ping for LDP FECs
matching the specified prefix list. The no lsp-ping-interval command reverts to the
default of 60 s.

LSP BFD sessions are recreated after a high availability switchover between active
and standby CPMs. However, some disruption may occur to LSP ping due to LSP
BFD.

At the head end of an LSP, sessions are bootstrapped if the local and remote
discriminators are not known. The sessions will experience jitter at 0 to 25% of a retry
time of 5 seconds. A side effect is that the following current information will be lost
from an active show test-oam lsp-bfd display:

• Replying Node
• Latest Return Code
• Latest Return SubCode
• Bootstrap Retry Count
• Tx Lsp Ping Requests
• Rx Lsp Ping Replies

If the local and remote discriminators are known, the system immediately begins
generating periodic LSP pings. The pings will experience jitter at 0 to 25% of the lsp-
ping-interval time of 60 to 300 seconds. The lsp-ping-interval time is synchronized
by LSP BFD. A side effect is that the following current information will be lost
from an active show test-oam lsp-bfd display:

• Replying Node
• Latest Return Code
• Latest Return SubCode
• Bootstrap Retry Count
• Tx Lsp Ping Requests
• Rx Lsp Ping Replies

At the tail end of an LSP, sessions are recreated on the standby CPM following a
switchover. A side effect is that the following current information will be lost from an
active tools dump test-oam lsp-bfd tail display:

• handle
• seqNum
• rc
• rsc


Any new, incoming bootstrap requests will be dropped until LSP BFD has become
active. When LSP BFD has finished becoming active, new bootstrap requests will be
considered.

5.21.2 BFD Configuration on LDP LSPs


LSP BFD is configured for LDP using the following CLI commands:

CLI Syntax: config
    router
        ldp
            [no] lsp-bfd prefix-list-name
                priority priority-level
                no priority
                bfd-template bfd-template-name
                no bfd-template
                source-address ip-address
                no source-address
                [no] bfd-enable
                lsp-ping-interval seconds
                no lsp-ping-interval
            exit

The lsp-bfd command creates the context for LSP BFD configuration for a set of
LDP LSPs with a FEC matching the one defined by the prefix-list-name parameter.
The default is no lsp-bfd. Configuring no lsp-bfd for a specified prefix list will
remove LSP BFD for all matching LDP FECs except those that also match another
LSP BFD prefix list. The prefix-list-name parameter refers to a named prefix list
configured in the configure>router>policy-options context.

Up to 16 instances of LSP BFD can be configured under LDP in the base router
instance.

The optional priority command configures a priority value that is used to order the
processing if multiple prefix lists are configured. The default value is 1.

If more than one prefix in a prefix list, or more than one prefix list, contains a prefix
that corresponds to the same LDP FEC, then the system will test the prefix against
the configured prefix lists in the following order:

1. numerically by priority-level
2. alphabetically by prefix-list-name

The system will use the first matching configuration, if one exists.


If an LSP BFD is removed for a prefix list, but there remains another LSP BFD
configuration with a prefix list match, then any FECs matched against that prefix will
be rematched against the remaining prefix list configurations in the same manner as
described above.

A non-existent prefix list is equivalent to an empty prefix list. When a prefix list is
created and populated with prefixes, LDP will match its FECs against that prefix list.
It is not necessary to configure a named prefix list in the config>router>policy-
options context before specifying a prefix list using the config>router>ldp>lsp-bfd
command.

If a prefix list contains a longest match corresponding to one or more LDP FECs, the
BFD configuration is applied to all of the matching LDP LSPs.

Only /32 IPv4 and /128 IPv6 host prefix FECs will be considered for BFD. BFD on
PW FECs uses VCCV BFD.

The source-address command is used to configure the source address of periodic
LSP ping packets and BFD control packets for LSP BFD sessions associated with
LDP prefixes in the prefix list. The default value is the system IP address. If the
system IP address is not routable from the far-end node of the BFD session, then an
alternative routable IP address local to the source node should be used.

The system will not initialize an LSP BFD session if there is a mismatch between the
address family of the source address and the address family of the prefix in the prefix
list.

If the system has both IPv4 and IPv6 system IP addresses, and the source-address
command is not configured, then the system will use a source address of the
matching address family for IPv4 and IPv6 prefixes in the prefix list.

The bfd-template command applies the specified BFD template to the BFD sessions
for LDP LSPs with FECs that match the prefix list. The default is no bfd-template.
The named BFD template must first be configured using the
config>router>bfd>bfd-template command before it can be referenced by LSP
BFD; otherwise, a CLI error is generated. The minimum receive interval and transmit
interval supported for LSP BFD are 1 second.

The bfd-enable command enables BFD on the LDP LSPs with FECs that match the
prefix list.
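
Putting these commands together, a minimal LSP BFD configuration sketch follows.
The prefix list name, BFD template name, prefix value, and interval values are
illustrative assumptions; the bfd-template timers are expressed here in milliseconds,
which is assumed to satisfy the 1 second minimum noted above:

Example:
    config
        router
            policy-options
                begin
                prefix-list "ldp-bfd-prefixes"
                    prefix 10.0.0.1/32 exact
                exit
                commit
            exit
            bfd
                bfd-template "ldp-lsp-bfd"
                    transmit-interval 1000
                    receive-interval 1000
                exit
            exit
            ldp
                lsp-bfd "ldp-bfd-prefixes"
                    bfd-template "ldp-lsp-bfd"
                    lsp-ping-interval 60
                    bfd-enable
                exit
            exit
        exit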


5.22 User Guidelines and Troubleshooting Procedures

5.22.1 Common Procedures


When troubleshooting an LDP resource exhaustion situation on an LSR, the user must
first determine whether the LSR and its peers support the enhanced handling of
resources. This is done by checking if the local LSR or its peers advertised the LSR
Overload Protection Capability:

show router ldp status


===============================================================================
LDP Status for LSR ID 110.20.1.110
===============================================================================
Admin State : Up Oper State : Up
Created at : 07/17/13 21:27:41 Up Time : 0d 01:00:41
Oper Down Reason : n/a Oper Down Events : 1
Last Change : 07/17/13 21:27:41 Tunn Down Damp Time : 20 sec
Label Withdraw Del*: 0 sec Implicit Null Label : Enabled
Short. TTL Prop Lo*: Enabled Short. TTL Prop Tran*: Enabled
Import Policies : Export Policies :
Import-LDP Import-LDP
External External
Tunl Exp Policies :
from-proto-bgp
Aggregate Prefix : False Agg Prefix Policies : None
FRR : Enabled Mcast Upstream FRR : Disabled
Dynamic Capability : False P2MP Capability : True
MP MBB Capability : True MP MBB Time : 10
Overload Capability: True <---- //Local Overload Capability
Active Adjacencies : 0 Active Sessions : 0
Active Interfaces : 2 Inactive Interfaces : 4
Active Peers : 62 Inactive Peers : 10
Addr FECs Sent : 0 Addr FECs Recv : 0
Serv FECs Sent : 0 Serv FECs Recv : 0
P2MP FECs Sent : 0 P2MP FECs Recv : 0
Attempted Sessions : 458
No Hello Err : 0 Param Adv Err : 0
Max PDU Err : 0 Label Range Err : 0
Bad LDP Id Err : 0 Bad PDU Len Err : 0
Bad Mesg Len Err : 0 Bad TLV Len Err : 0
Unknown TLV Err : 0
Malformed TLV Err : 0 Keepalive Expired Err: 4
Shutdown Notif Sent: 12 Shutdown Notif Recv : 5
===============================================================================

show router ldp session detail


===============================================================================
LDP Sessions (Detail)
===============================================================================
-------------------------------------------------------------------------------
Session with Peer 10.8.100.15:0, Local 110.20.1.110:0


-------------------------------------------------------------------------------
Adjacency Type : Targeted State : Nonexistent
Up Time : 0d 00:00:00
Max PDU Length : 4096 KA/Hold Time Remaining : 0
Link Adjacencies : 0 Targeted Adjacencies : 1
Local Address : 110.20.1.110 Peer Address : 10.8.100.15
Local TCP Port : 0 Peer TCP Port : 0
Local KA Timeout : 40 Peer KA Timeout : 40
Mesg Sent : 0 Mesg Recv : 1
FECs Sent : 0 FECs Recv : 0
Addrs Sent : 0 Addrs Recv : 0
GR State : Capable Label Distribution : DU
Nbr Liveness Time : 0 Max Recovery Time : 0
Number of Restart : 0 Last Restart Time : Never
P2MP : Not Capable MP MBB : Not Capable
Dynamic Capability : Not Capable LSR Overload : Not Capable <----
//Peer OverLoad Capab.
Advertise : Address/Servi*
Addr FEC OverLoad Sent : No Addr FEC OverLoad Recv : No
Mcast FEC Overload Sent: No Mcast FEC Overload Recv: No
Serv FEC Overload Sent : No Serv FEC Overload Recv : No
-------------------------------------------------------------------------------

5.22.2 Base Resource Handling Procedures


Step 1

If the peer OR the local LSR does not support the Overload Protection Capability, the
associated adjacency [interface/peer] will be brought down as part of
the base resource handling mechanism.

The user can determine which interface or targeted peer was shut down, by applying
the following commands:

- [show router ldp interface resource-failures]

- [show router ldp targ-peer resource-failures]

show router ldp interface resource-failures


===============================================================================
LDP Interface Resource Failures
===============================================================================
srl srr
sru4 sr4-1-5-1
===============================================================================

show router ldp targ-peer resource-failures


===============================================================================
LDP Peers Resource Failures
===============================================================================
10.20.1.22 110.20.1.3


===============================================================================

A trap is also generated for each interface or targeted peer:


16 2013/07/17 14:21:38.06 PST MINOR: LDP #2003 Base LDP Interface Admin State
"Interface instance state changed - vRtrID: 1, Interface sr4-1-5-1, administrati
ve state: inService, operational state: outOfService"

13 2013/07/17 14:15:24.64 PST MINOR: LDP #2003 Base LDP Interface Admin State
"Interface instance state changed - vRtrID: 1, Peer 10.20.1.22, administrative s
tate: inService, operational state: outOfService"

The user can then check that the base resource handling mechanism has been
applied to a specific interface or peer by running the following show commands:

- [show router ldp interface detail]

- [show router ldp targ-peer detail]

show router ldp interface detail


===============================================================================
LDP Interfaces (Detail)
===============================================================================
-------------------------------------------------------------------------------
Interface "sr4-1-5-1"
-------------------------------------------------------------------------------
Admin State : Up Oper State : Down
Oper Down Reason : noResources <----- //link LDP resource exhaustion handled
Hold Time : 45 Hello Factor : 3
Oper Hold Time : 45
Hello Reduction : Disabled Hello Reduction *: 3
Keepalive Timeout : 30 Keepalive Factor : 3
Transport Addr : System Last Modified : 07/17/13 14:21:38
Active Adjacencies : 0
Tunneling : Disabled
Lsp Name : None
Local LSR Type : System
Local LSR : None
BFD Status : Disabled
Multicast Traffic : Enabled
-------------------------------------------------------------------------------

show router ldp discovery interface "sr4-1-5-1" detail


===============================================================================
LDP Hello Adjacencies (Detail)
===============================================================================
-------------------------------------------------------------------------------
Interface "sr4-1-5-1"
-------------------------------------------------------------------------------
Local Address : 223.0.2.110 Peer Address : 224.0.0.2
Adjacency Type : Link State : Down
===============================================================================

show router ldp targ-peer detail


===============================================================================
LDP Peers (Detail)
===============================================================================
-------------------------------------------------------------------------------
Peer 10.20.1.22
-------------------------------------------------------------------------------
Admin State : Up Oper State : Down
Oper Down Reason : noResources <----- // T-LDP resource exhaustion handled
Hold Time : 45 Hello Factor : 3
Oper Hold Time : 45
Hello Reduction : Disabled Hello Reduction Fact*: 3
Keepalive Timeout : 40 Keepalive Factor : 4
Passive Mode : Disabled Last Modified : 07/17/13 14:15:24
Active Adjacencies : 0 Auto Created : No
Tunneling : Enabled
Lsp Name : None
Local LSR : None
BFD Status : Disabled
Multicast Traffic : Disabled
-------------------------------------------------------------------------------

show router ldp discovery peer 10.20.1.22 detail


===============================================================================
LDP Hello Adjacencies (Detail)
===============================================================================
-------------------------------------------------------------------------------
Peer 10.20.1.22
-------------------------------------------------------------------------------
Local Address : 110.20.1.110 Peer Address : 10.20.1.22
Adjacency Type : Targeted State : Down <-----
//T-LDP resource exhaustion handled
===============================================================================

Step 2

Besides interfaces and targeted peers, locally originated FECs may also be put into
overload. These are the following:

- unicast fec-originate pop

- multicast local static p2mp-fec type=1 [on leaf LSR]

- multicast local Dynamic p2mp-fec type=3 [on leaf LSR]

The user can check if only remote and/or local FECs have been set in overload by
the base resource exhaustion mechanism using the following command:

- [tools dump router ldp instance]

The relevant part of the output is described below:

{...... snip......}
Num OLoad Interfaces: 4 <----- //#LDP interfaces resource in exhaustion
Num Targ Sessions: 72 Num Active Targ Sess: 62
Num OLoad Targ Sessions: 7 <----- //#T-LDP peers in resource exhaustion


Num Addr FECs Rcvd: 0 Num Addr FECs Sent: 0


Num Addr Fecs OLoad: 1 <----- //# of local/remote unicast FECs in Overload
Num Svc FECs Rcvd: 0 Num Svc FECs Sent: 0
Num Svc FECs OLoad: 0 <----- // # of local/
remote service Fecs in Overload
Num mcast FECs Rcvd: 0 Num Mcast FECs Sent: 0
Num mcast FECs OLoad: 0 <----- // # of local/
remote multicast Fecs in Overload
{...... snip......}

When at least one local FEC has been set in overload the following trap will occur:

23 2013/07/
17 15:35:47.84 PST MINOR: LDP #2002 Base LDP Resources Exhausted "Instance
state changed - vRtrID: 1, administrative state: inService, operationa l state:
inService"

Step 3

After the user has detected that at least one link LDP or T-LDP adjacency has been
brought down by the resource exhaustion mechanism, they must protect the
router by applying one or more of the following to free up resources:

• Identify the source for the [unicast/multicast/service] FEC flooding.


• Configure the appropriate [import/export] policies and/or delete the excess
[unicast/multicast/service] FECs not currently handled.

Step 4

Next, the user has to manually attempt to clear the overload (no resource) state and
allow the router to attempt to restore the link and targeted sessions to its peer.

Note: Because of the dynamic nature of FEC distribution and resolution by LSR nodes, one
cannot predict exactly which FECs and which interfaces or targeted peers will be restored
after performing the following commands if the LSR activates resource exhaustion again.

One of the following commands can be used:

- [clear router ldp resource-failures]

• Clears the overload state and attempts to restore the adjacency and session for LDP
interfaces and peers.
• Clears the overload state for the local FECs.

- [clear router ldp interface ifName]

- [clear router ldp peer peerAddress]


• Clears the overload state and attempts to restore the adjacency and session for LDP
interfaces and peers.
• These two commands DO NOT clear the overload state for the local FECs.

5.22.3 Enhanced Resource Handling Procedures


Step 1

If the peer AND the local LSR both support the Overload Protection Capability, the
LSR signals the overload state for the FEC type which caused the
resource exhaustion as part of the enhanced resource handling mechanism.

In order to verify if the local router has received or sent the overload status TLV,
perform the following:

-[show router ldp session detail]


show router ldp session 110.20.1.1 detail
-------------------------------------------------------------------------------
Session with Peer 110.20.1.1:0, Local 110.20.1.110:0
-------------------------------------------------------------------------------
Adjacency Type : Both State : Established
Up Time : 0d 00:05:48
Max PDU Length : 4096 KA/Hold Time Remaining : 24
Link Adjacencies : 1 Targeted Adjacencies : 1
Local Address : 110.20.1.110 Peer Address : 110.20.1.1
Local TCP Port : 51063 Peer TCP Port : 646
Local KA Timeout : 30 Peer KA Timeout : 45
Mesg Sent : 442 Mesg Recv : 2984
FECs Sent : 16 FECs Recv : 2559
Addrs Sent : 17 Addrs Recv : 1054
GR State : Capable Label Distribution : DU
Nbr Liveness Time : 0 Max Recovery Time : 0
Number of Restart : 0 Last Restart Time : Never
P2MP : Capable MP MBB : Capable
Dynamic Capability : Not Capable LSR Overload : Capable
Advertise : Address/Servi* BFD Operational Status : inService
Addr FEC OverLoad Sent : Yes Addr FEC OverLoad Recv : No <----
// this LSR sent overLoad for unicast FEC type to peer
Mcast FEC Overload Sent: No Mcast FEC Overload Recv: No
Serv FEC Overload Sent : No Serv FEC Overload Recv : No
-------------------------------------------------------------------------------

show router ldp session 110.20.1.110 detail


-------------------------------------------------------------------------------
Session with Peer 110.20.1.110:0, Local 110.20.1.1:0
-------------------------------------------------------------------------------
Adjacency Type : Both State : Established
Up Time : 0d 00:08:23
Max PDU Length : 4096 KA/Hold Time Remaining : 21
Link Adjacencies : 1 Targeted Adjacencies : 1
Local Address : 110.20.1.1 Peer Address : 110.20.1.110
Local TCP Port : 646 Peer TCP Port : 51063


Local KA Timeout : 45 Peer KA Timeout : 30


Mesg Sent : 3020 Mesg Recv : 480
FECs Sent : 2867 FECs Recv : 16
Addrs Sent : 1054 Addrs Recv : 17
GR State : Capable Label Distribution : DU
Nbr Liveness Time : 0 Max Recovery Time : 0
Number of Restart : 0 Last Restart Time : Never
P2MP : Capable MP MBB : Capable
Dynamic Capability : Not Capable LSR Overload : Capable
Advertise : Address/Servi* BFD Operational Status : inService
Addr FEC OverLoad Sent : No Addr FEC OverLoad Recv : Yes <----
// this LSR received overLoad for unicast FEC type from peer
Mcast FEC Overload Sent: No Mcast FEC Overload Recv: No
Serv FEC Overload Sent : No Serv FEC Overload Recv : No
===============================================================================

A trap is also generated:

70002 2013/07/17 16:06:59.46 PST MINOR: LDP #2008 Base LDP Session State Change
"Session state is operational. Overload Notification message is sent to/from peer
110.20.1.1:0 with overload state true for fec type prefixes"

Step 2

Besides interfaces and targeted peers, locally originated FECs may also be put into
overload. These are the following:

- unicast fec-originate pop

- multicast local static p2mp-fec type=1 [on leaf LSR]

- multicast local Dynamic p2mp-fec type=3 [on leaf LSR]

The user can check if only remote and/or local FECs have been set in overload by
the enhanced resource exhaustion mechanism using the following
command:

- [tools dump router ldp instance]

The relevant part of the output is described below:

Num Entities OLoad (FEC: Address Prefix ): Sent: 7 Rcvd: 0 <-----


// # of session in OvLd for fec-type=unicast
Num Entities OLoad (FEC: PWE3 ): Sent: 0 Rcvd: 0 <-----
// # of session in OvLd for fec-type=service
Num Entities OLoad (FEC: GENPWE3 ): Sent: 0 Rcvd: 0 <-----
// # of session in OvLd for fec-type=service
Num Entities OLoad (FEC: P2MP ): Sent: 0 Rcvd: 0 <-----
// # of session in OvLd for fec-type=MulticastP2mp
Num Entities OLoad (FEC: MP2MP UP ): Sent: 0 Rcvd: 0 <-----
// # of session in OvLd for fec-type=MulticastMP2mp
Num Entities OLoad (FEC: MP2MP DOWN ): Sent: 0 Rcvd: 0 <-----
// # of session in OvLd for fec-type=MulticastMP2mp
Num Active Adjacencies: 9


Num Interfaces: 6 Num Active Interfaces: 6


Num OLoad Interfaces: 0 <----- //
link LDP interfaces in resource exhaustion
should be zero when Overload Protection Capability is supported
Num Targ Sessions: 72 Num Active Targ Sess: 67
Num OLoad Targ Sessions: 0 <----- // T-LDP peers in resource exhaustion
should be zero if Overload Protection Capability is supported
Num Addr FECs Rcvd: 8667 Num Addr FECs Sent: 91
Num Addr Fecs OLoad: 1 <-----
// # of local/remote unicast Fecs in Overload
Num Svc FECs Rcvd: 3111 Num Svc FECs Sent: 0
Num Svc FECs OLoad: 0 <-----
// # of local/remote service Fecs in Overload
Num mcast FECs Rcvd: 0 Num Mcast FECs Sent: 0
Num mcast FECs OLoad: 0 <-----
// # of local/remote multicast Fecs in Overload
Num MAC Flush Rcvd: 0 Num MAC Flush Sent: 0

When at least one local FEC has been set in overload the following trap will occur:

69999 2013/07/17 16:06:59.21 PST MINOR: LDP #2002 Base LDP Resources Exhausted
"Instance state changed - vRtrID: 1, administrative state: inService, operational
state: inService"

Step 3

After the user has detected that at least one overload status TLV has been sent or
received by the LSR, they must protect the router by applying one or more of the
following to free up resources:

• Identify the source for the [unicast/multicast/service] FEC flooding. This is most
likely the LSR whose session received the overload status TLV.
• Configure the appropriate [import/export] policies and/or delete the excess
[unicast/multicast/service] FECs from the FEC type in overload.

Step 4

Next, the user must manually attempt to clear the overload state on the affected
sessions and for the affected FEC types, and allow the router to clear the overload
status TLV towards its peers.

Note: Because of the dynamic nature of FEC distribution and resolution by LSR nodes, one
cannot predict exactly which sessions and which FECs will be cleared after performing the
following commands if the LSR activates overload again.

One of the following commands can be used, depending on whether the user wants to clear all
sessions at once or one session at a time:

- [clear router ldp resource-failures]


• Clears the overload state for the affected sessions and FEC types.
• Clears the overload state for the local FECs.

- [clear router ldp session a.b.c.d overload fec-type {services | prefixes | multicast}]

• Clears the overload state for the specified session and FEC type.
• Clears the overload state for the local FECs.

5.23 LDP IPv6 Control and Data Planes


SR OS extends the LDP control plane and data plane to support LDP IPv6 adjacency
and session using 128-bit LSR-ID.

The implementation allows for concurrent support of independent LDP IPv4 (32-bit
LSR-ID) and IPv6 (128-bit LSR-ID) adjacencies and sessions between peer LSRs
and over the same or different sets of interfaces.

5.23.1 LDP Operation in an IPv6 Network


LDP IPv6 can be enabled on the SR OS interface. Figure 95 shows the LDP
adjacency and session over an IPv6 interface.

Figure 95 LDP Adjacency and Session over an IPv6 Interface

(Figure: LSR-A:0 and LSR-B:0 interconnected by two interfaces, I/F 1 and I/F 2.)

LSR-A and LSR-B have the following IPv6 LDP identifiers respectively:

• <LSR Id=A/128> : <label space id=0>
• <LSR Id=B/128> : <label space id=0>

By default, A/128 and B/128 use the system interface IPv6 address.


Note: Although the LDP control plane can operate using only the IPv6 system address, the
user must configure the IPv4-formatted router ID for OSPF, IS-IS, and BGP to operate
properly.

The following sections describe the behavior when LDP IPv6 is enabled on the
interface.
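
As a sketch, LDP IPv6 can be enabled on an existing network interface along the
following lines; the interface name is illustrative and the full keyword spellings of the
if-params and if contexts used elsewhere in this section are assumed:

Example:
    config
        router
            ldp
                interface-parameters
                    interface "to-LSR-B"
                        ipv6
                        exit
                    exit
                exit
            exit
        exit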

5.23.2 Link LDP


The SR OS LDP IPv6 implementation uses a 128-bit LSR-ID as defined in draft-
pdutta-mpls-ldp-v2-00. See LDP Process Overview for more information about
interoperability of this implementation with 32-bit LSR-ID, as defined in draft-ietf-
mpls-ldp-ipv6-14.

Hello adjacency will be brought up using a link Hello packet with the source IP address set
to the interface link-local unicast address and the destination IP address set to the link-
local multicast address FF02:0:0:0:0:0:0:2.

The transport address for the TCP connection, which is encoded in the Hello packet,
will be set to the LSR-ID of the LSR by default. It will be set to the interface IPv6
address if the user enabled the interface option under one of the following contexts:

• config>router>ldp>if-params>ipv6>transport-address
• config>router>ldp>if-params>if>ipv6>transport-address

The interface global unicast address, meaning the primary IPv6 unicast address of
the interface, is used.

The user can configure the local-lsr-id option on the interface and change the value
of the LSR-ID to either the local interface or to another interface name, loopback or
not. The global unicast IPv6 address corresponding to the primary IPv6 address of
the interface is used as the LSR-ID. If the user invokes an interface which does not
have a global unicast IPv6 address in the configuration of the transport address or
the configuration of the local-lsr-id option, the session will not come up and an error
message will be displayed.
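
For illustration, the transport address and LSR-ID for the IPv6 adjacency on this
interface might be set as follows; the interface name is illustrative and the exact
keyword forms of the transport-address and local-lsr-id options are assumptions
based on the contexts listed above:

Example:
    config
        router
            ldp
                interface-parameters
                    interface "to-LSR-B"
                        ipv6
                            transport-address interface
                            local-lsr-id interface
                        exit
                    exit
                exit
            exit
        exit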

The LSR with the highest transport address will bootstrap the IPv6 TCP connection
and IPv6 LDP session.

Source and destination addresses of LDP/TCP session packets are the IPv6
transport addresses.


5.23.3 Targeted LDP


Source and destination addresses of the targeted Hello packets are the LDP IPv6 LSR-
IDs of systems A and B.

The user can configure the local-lsr-id option on the targeted session and change
the value of the LSR-ID to either the local interface or to some other interface name,
loopback or not. The global unicast IPv6 address corresponding to the primary IPv6
address of the interface is used as the LSR-ID. If the user invokes an interface which
does not have a global unicast IPv6 address in the configuration of the transport
address or the configuration of the local-lsr-id option, the session will not come up
and an error message will be displayed. In all cases, the transport address for the
LDP session and the source IP address of targeted Hello message will be updated
to the new LSR-ID value.

The LSR with the highest transport address (in this case, the LSR-ID) will bootstrap
the IPv6 TCP connection and IPv6 LDP session.

Source and destination IP addresses of LDP/TCP session packets are the IPv6
transport addresses (in this case, LDP LSR-IDs of systems A and B).

5.23.4 FEC Resolution


LDP will advertise and withdraw all interface IPv6 addresses using the Address/
Address-Withdraw message. Both the link-local unicast address and the configured
global unicast addresses of an interface are advertised.

All LDP FEC types can be exchanged over an LDP IPv6 session, as in an LDP IPv4
session.

The LSR does not advertise a FEC for a link-local address and, if received, the LSR
will not resolve it.

An IPv4 or IPv6 prefix FEC can be resolved to an LDP IPv6 interface in the same way
as it is resolved to an LDP IPv4 interface. The outgoing interface and next-hop are
looked up in the RTM cache. The next-hop can be the link-local unicast address of the
other side of the link or a global unicast address. The FEC is resolved to the LDP
IPv6 interface of the downstream LDP IPv6 LSR that advertised the IPv4 or IPv6
address of the next hop.


An mLDP P2MP FEC with an IPv4 root LSR address, and carrying one or more IPv4
or IPv6 multicast prefixes in the opaque element, can be resolved to an upstream
LDP IPv6 LSR by checking if the LSR advertised the next-hop for the IPv4 root LSR
address. The upstream LDP IPv6 LSR will then resolve the IPv4 P2MP FEC to one
of the LDP IPv6 links to this LSR.

Note: Beginning in Release 13.0, a P2MP FEC with an IPv6 root LSR address, carrying one
or more IPv4 or IPv6 multicast prefixes in the opaque element, is not supported. Manually
configured mLDP P2MP LSP, NG-mVPN, and dynamic mLDP will not be able to operate in
an IPv6-only network.

A PW FEC can be resolved to a targeted LDP IPv6 adjacency with an LDP IPv6 LSR
if there is a context for the FEC with local spoke-SDP configuration or spoke-SDP
auto-creation from a service such as BGP-AD VPLS, BGP-VPWS or dynamic MS-
PW.

5.23.5 LDP Session Capabilities


LDP supports advertisement of all FEC types over an LDP IPv4 or an LDP IPv6
session. These FEC types are: IPv4 prefix FEC, IPv6 prefix FEC, IPv4 P2MP FEC,
PW FEC 128, and PW FEC 129.

In addition, LDP supports signaling the enabling or disabling of the advertisement of
the following subset of FEC types, both during the LDP IPv4 or IPv6 session
initialization phase and subsequently when the session is already up.

• IPv4 prefix FEC—This is performed using the State Advertisement Control
(SAC) capability TLV as specified in draft-ietf-mpls-ldp-ip-pw-capability. The
SAC capability TLV includes the IPv4 SAC element having the D-bit (Disable-
bit) set or reset to disable or enable this FEC type respectively. The LSR can
send this TLV in the LDP Initialization message and subsequently in a LDP
Capability message.
• IPv6 prefix FEC—This is performed using the State Advertisement Control
(SAC) capability TLV as specified in draft-ietf-mpls-ldp-ip-pw-capability. The
SAC capability TLV includes the IPv6 SAC element having the D-bit (Disable-
bit) set or reset to disable or enable this FEC type respectively. The LSR can
send this TLV in the LDP Initialization message and subsequently in a LDP
Capability message to update the state of this FEC type.


• P2MP FEC—This is performed using the P2MP capability TLV as specified in
RFC 6388. The P2MP capability TLV has the S-bit (State-bit) with a value of set
or reset to enable or disable this FEC type respectively. Unlike the IPv4 SAC and
IPv6 SAC capabilities, the P2MP capability does not distinguish between IPv4
and IPv6 P2MP FEC. The LSR can send this TLV in the LDP Initialization
message and, subsequently, in a LDP Capability message to update the state
of this FEC type.

During LDP session initialization, each LSR indicates to its peers which FEC type it
supports by including the capability TLV for it in the LDP Initialization message. The
SR OS implementation will enable the above FEC types by default and will thus send
the corresponding capability TLVs in the LDP initialization message. If one or both
peers advertise the disabling of a capability in the LDP Initialization message, no
FECs of the corresponding FEC type will be exchanged between the two peers for
the lifetime of the LDP session unless a Capability message is sent subsequently to
explicitly enable it. The same behavior applies if no capability TLV for a FEC type is
advertised in the LDP initialization message, except for the IPv4 prefix FEC which is
assumed to be supported by all implementations by default.

Dynamic Capability, as defined in RFC 5561, allows all above FEC types to update
the enabled or disabled state after the LDP session initialization phase. An LSR
informs its peer that it supports the Dynamic Capability by including the Dynamic
Capability Announcement TLV in the LDP Initialization message. If both LSRs
advertise this capability, the user is allowed to enable or disable any of the above
FEC types while the session is up and the change takes effect immediately. The LSR
then sends a SAC Capability message with the IPv4 or IPv6 SAC element having the
D-bit (Disable-bit) set or reset, or the P2MP capability TLV in a Capability message
with the S-bit (State-bit) set or reset. Each LSR then takes the consequent action of
withdrawing or advertising the FECs of that type to the peer LSR. If one or both LSRs
did not advertise the Dynamic Capability Announcement TLV in the LDP Initialization
message, any change to the enabled or disabled FEC types will only take effect at
the next time the LDP session is restarted.

The user can enable or disable a specific FEC type for a given LDP session to a peer
by using the following CLI commands:

• config>router>ldp>session-params>peer>fec-type-capability p2mp
• config>router>ldp>session-params>peer>fec-type-capability prefix-ipv4
• config>router>ldp>session-params>peer>fec-type-capability prefix-ipv6
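
For example, to stop exchanging IPv6 prefix FECs and P2MP FECs with one peer
while leaving IPv4 prefix FECs enabled, the commands above might be used as
follows; the peer address, the full session-parameters keyword, and the
enable/disable argument form are assumptions:

Example:
    config
        router
            ldp
                session-parameters
                    peer 2001:db8::2
                        fec-type-capability
                            prefix-ipv6 disable
                            p2mp disable
                        exit
                    exit
                exit
            exit
        exit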


5.23.6 LDP Adjacency Capabilities


Adjacency-level FEC-type capability advertisement is defined in draft-pdutta-mpls-
ldp-adj-capability. By default, all FEC types supported by the LSR are advertised in
the LDP IPv4 or IPv6 session initialization; see LDP Session Capabilities for more
information. If a given FEC type is enabled at the session level, it can be disabled
over a given LDP interface at the IPv4 or IPv6 adjacency level for all IPv4 or IPv6
peers over that interface. If a given FEC type is disabled at the session level, then
FECs will not be advertised and enabling that FEC type at the adjacency level will
not have any effect. The LDP adjacency capability can be configured on link Hello
adjacency only and does not apply to targeted Hello adjacency.

The LDP adjacency capability TLV is advertised in the Hello message with the D-bit
(Disable-bit) set or reset to disable or enable the resolution of this FEC type over the
link of the Hello adjacency. It is used to restrict which FECs can be resolved over a
given interface to a peer. This provides the ability to dedicate links and data path
resources to specific FEC types. For IPv4 and IPv6 prefix FECs, a subset of the ECMP links to an LSR peer may each be configured to carry one of the two FEC types. For an mLDP P2MP FEC, specific links to a downstream LSR can be excluded from being used to resolve this type of FEC.

Like the LDP session-level FEC-type capability, the adjacency FEC-type capability
is negotiated for both directions of the adjacency. If one or both peers advertise the
disabling of a capability in the LDP Hello message, no FECs of the corresponding
FEC type will be resolved by either peer over the link of this adjacency for the lifetime
of the LDP Hello adjacency, unless one or both peers subsequently send the LDP adjacency capability TLV to explicitly enable it.

The user can enable or disable a specific FEC type for a given LDP interface to a
peer by using the following CLI commands:

• config>router>ldp>if-params>if>ipv4/ipv6>fec-type-capability p2mp-ipv4
• config>router>ldp>if-params>if>ipv4/ipv6>fec-type-capability p2mp-ipv6
• config>router>ldp>if-params>if>ipv4/ipv6>fec-type-capability prefix-ipv4
• config>router>ldp>if-params>if>ipv4/ipv6>fec-type-capability prefix-ipv6

These commands, when applied for the P2MP FEC, deprecate the existing
command multicast-traffic {enable | disable} under the interface. Unlike the
session-level capability, these commands can disable multicast FEC for IPv4 and
IPv6 separately.
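
For example, the following is a minimal sketch that disables IPv4 multicast (P2MP) FEC resolution over the IPv4 adjacency of one interface while leaving the other FEC types enabled. The interface name is hypothetical, and the exact node names may vary by release:

configure router ldp
    interface-parameters
        interface "to-peer-1"
            ipv4
                fec-type-capability
                    p2mp-ipv4 disable
                exit
            exit
        exit
    exit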

The encoding of the adjacency capability TLV uses a PRIVATE Vendor TLV. It is
used only in a hello message to negotiate a set of capabilities for a specific LDP IPv4
or IPv6 hello adjacency.


0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|0| ADJ_CAPABILITY_TLV | Length |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| VENDOR_OUI |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|S| Reserved | |
+-+-+-+-+-+-+-+-+ +
| Adjacency capability elements |
+ +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The value of the U-bit for the TLV is set to 1 so that a receiver must silently ignore the TLV if it is deemed unknown.

The value of the F-bit is 0. After being advertised, this capability cannot be
withdrawn; thus, the S-bit is set to 1 in a hello message.

Adjacency capability elements are encoded as follows:

0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
|D| CapFlag |
+-+-+-+-+-+-+-+-+

D bit: Controls the capability state.

1 : Disable capability

0 : Enable capability

CapFlag: The adjacency capability

1 : Prefix IPv4 forwarding

2 : Prefix IPv6 forwarding

3 : P2MP IPv4 forwarding

4 : P2MP IPv6 forwarding

5 : MP2MP IPv4 forwarding

6 : MP2MP IPv6 forwarding

Each CapFlag appears no more than once in the TLV. If duplicates are found, the D-
bit of the first element is used. For forward compatibility, if the CapFlag is unknown,
the receiver must silently discard the element and continue processing the rest of the
TLV.


5.23.7 Address and FEC Distribution


After an LDP LSR initializes the LDP session to the peer LSR and the session comes
up, local IPv4 and IPv6 interface addresses are exchanged using the Address and
Address Withdraw messages. Similarly, FECs are exchanged using Label Mapping
messages.

By default, IPv6 address distribution is determined by whether the Dual-stack capability TLV, which is defined in draft-ietf-mpls-ldp-ipv6, is present in the Hello message from the peer. This coupling is introduced because of interoperability issues found with existing third-party LDP IPv4 implementations.

The following is the detailed behavior:

• If the peer sent the dual-stack capability TLV in the Hello message, then IPv6
local addresses will be sent to the peer. The user can configure a new address
export policy to further restrict which local IPv6 interface addresses to send to
the peer. If the peer explicitly stated enabling of LDP IPv6 FEC type by including
the IPv6 SAC TLV with the D-bit (Disable-bit) set to 0 in the initialization
message, then IPv6 FECs will be sent to the peer. FEC prefix export policies can
be used to restrict which LDP IPv6 FEC can be sent to the peer.
• If the peer sent the dual-stack capability TLV in the Hello message, but explicitly
stated disabling of LDP IPv6 FEC type by including the IPv6 SAC TLV with the
D-bit (Disable-bit) set to 1 in the initialization message, then IPv6 FECs will not
be sent but IPv6 local addresses will be sent to the peer. A CLI is provided to
allow the configuration of an address export policy to further restrict which local
IPv6 interface addresses to send to the peer. FEC prefix export policy has no
effect because the peer explicitly requested disabling the IPv6 FEC type
advertisement.
• If the peer did not send the dual-stack capability TLV in the Hello message, then no IPv6 addresses or IPv6 FECs will be sent to that peer, regardless of whether the IPv6 SAC TLV is present in the initialization message. This case is
added to prevent interoperability issues with existing third-party LDP IPv4
implementations. The user can override this by explicitly configuring an address
export policy and a FEC export policy to select which addresses and FECs to
send to the peer.

The above behavior applies to LDP IPv4 and IPv6 addresses and FECs. The
procedure is summarized in the flowchart diagrams in Figure 96 and Figure 97.


Figure 96 LDP IPv6 Address and FEC Distribution Procedure

[Flowchart not reproduced. Decision steps recoverable from the figure: IPv4 LSR and IPv6 FEC?; Saw dual-stack TLV?; Negotiated FEC capability enabled?; Prefix export policy configured?; Policy accept?; outcome: Send LABEL.]


Figure 97 LDP IPv6 Address and FEC Distribution Procedure

[Flowchart not reproduced. Decision steps recoverable from the figure: Address family same as LSR-id?; Saw dual-stack TLV?; Address export policy configured?; Policy accept?; adv-adj-only configured?; Is adjacent address?; outcome: Send ADDR.]

5.23.8 Controlling IPv6 FEC Distribution During an Upgrade to SR OS Supporting LDP IPv6

A FEC for each of the IPv4 and IPv6 system interface addresses is advertised and resolved automatically by the LDP peers when the LDP session comes up, regardless of whether the session is IPv4 or IPv6.


To avoid the automatic advertisement and resolution of IPv6 system FEC when the
LDP session is IPv4, the following procedure must be followed before and after the
upgrade to the SR OS version which introduces support of LDP IPv6.

1. Before the upgrade, implement a global prefix policy which rejects prefix [::0/0 longer] to prevent IPv6 FECs from being installed after the upgrade (a sketch of such a policy is shown after this list).
2. In the MISSU case:
− If new IPv4 sessions are created on the node, the per-peer FEC-capabilities must be configured to filter out IPv6 FECs.
− Until an existing IPv4 session is flapped, FEC-capabilities have no effect on filtering out IPv6 FECs; thus, the global import policy must remain configured in place until the session flaps. Alternatively, a per-peer-import-policy [::0/0 longer] can be associated with this peer.
3. In the cold upgrade case:
− If new IPv4 sessions are created on the node, the per-peer FEC-capabilities must be configured to filter out IPv6 FECs.
− On older, pre-existing IPv4 sessions, the per-peer FEC-capabilities must be configured to filter out IPv6 FECs.
4. When all LDP IPv4 sessions have dynamic capabilities enabled, with per-peer FEC-capabilities for IPv6 FECs disabled, the global import policy can be removed.
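
The following is a minimal sketch of such a global prefix policy. The policy and prefix-list names are hypothetical, and the exact command used to apply it as the LDP global import policy (or as a per-peer import policy, as noted in step 2) depends on the release:

configure router policy-options
    begin
    prefix-list "all-ipv6"
        prefix ::0/0 longer
    exit
    policy-statement "block-ldp-ipv6-fecs"
        entry 10
            from
                prefix-list "all-ipv6"
            exit
            action reject
        exit
    exit
    commit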

5.23.9 Handling of Duplicate Link-Local IPv6 Addresses in FEC Resolution

Link-local IPv6 addresses are scoped to a link and, as such, duplicate addresses can be used on different links to the same or different peer LSRs. When the duplicate addresses exist on the same LAN, routing will detect them and block one of them. In all other cases, duplicate addresses are valid because they are scoped to the local link.

In this section, LLn refers to Link-Local address (n).

Figure 98 shows FEC resolution in a LAN.


Figure 98 FEC Resolution in LAN


(LL3)-[C]-(LL1) [E]

[Root LSR] [A]-(LL1) [LAN] [B]

(LL2)-[D]

LSR B resolves an mLDP FEC with the root node being Root LSR. The route lookup shows that the best route to the loopback of Root LSR is {interface if-B and next-hop LL1}.

However, LDP will find that both LSR A and LSR C advertised address LL1 and that there are Hello adjacencies (IPv4 or IPv6) to both A and C. In this case, a change is made so that an LSR only advertises link-local IPv6 addresses to a peer for the links over which it established a Hello adjacency to that peer. As a result, LSR C will advertise LL1 to LSR E but not to LSRs A, B, and D. This behavior applies with both P2P and broadcast interfaces.

Ambiguity also exists with prefix FEC (unicast FEC); the above solution also applies.

FEC Resolution over P2P links

                        (LL1)-[C]
                            |
 [Root LSR]-----[A]-(LL1)--[B]--(LL4)-[D]
                 |          |
                 +--(LL2)---+
                 |          |
                 +--(LL3)---+

LSR B resolves an mLDP FEC with the root node being Root LSR. The route lookup shows that the best route to the loopback of Root LSR is {interface if-B and next-hop LL1}.

• Case 1—LDP is enabled on all links. This case has no ambiguity. LDP will only select LSR A because the address LL1 from LSR C is discovered over a different interface. This case also applies to prefix FEC (unicast FEC), and thus there is no ambiguity in the resolution.


• Case 2—LDP is disabled on the A-B link with next-hop LL1; LSR B can still select one of the two other interfaces to the upstream LSR A as long as LSR A advertised the LL1 address in the LDP session.

5.23.10 IGP and Static Route Synchronization with LDP


The IGP-LDP synchronization and the static route to LDP synchronization features
are modified to operate on a dual-stack IPv4/IPv6 LDP interface as follows:

1. If the router interface goes down, or both the LDP IPv4 and LDP IPv6 sessions go down, the IGP sets the interface metric to the maximum value, and all static routes with the ldp-sync option enabled that are resolved on this interface will be deactivated.
2. If the router interface is up and only one of the LDP IPv4 or LDP IPv6 interfaces
goes down, no action is taken.
3. When the router interface comes up from a down state, and one of either the
LDP IPv4 or LDP IPv6 sessions comes up, IGP starts the sync timer at the expiry
of which the interface metric is restored to its configured value. All static routes
with the ldp-sync option enabled are also activated at the expiry of the timer.

Given the above behavior, it is recommended that the user configure the sync timer to a value which allows enough time for both the LDP IPv4 and LDP IPv6 sessions to come up.

5.23.11 BFD Operation


The operation of BFD over an LDP interface tracks the next-hops of IPv4 and IPv6 prefix FECs, in addition to tracking the LDP peer address of the Hello adjacency over that link. This tracking is required because LDP can now resolve both IPv4 and IPv6 prefix FECs over a single IPv4 or IPv6 LDP session and, as such, the next-hop of a prefix will not necessarily match the LDP peer source address of the Hello adjacency. The failure of either or both of the BFD sessions (the one tracking the FEC next-hop and the one tracking the Hello adjacency) causes the LFA backup NHLFE for the FEC to be activated, or the FEC to be re-resolved if there is no FRR backup.

The following CLI command allows the user to decide if they want to track only with
an IPv4 BFD session, only with an IPv6 BFD session, or both:

config>router>ldp>if-params>if>bfd-enable [ipv4] [ipv6]


This command provides the flexibility required in case the user does not need to track
both Hello adjacency and next-hops of FECs. For example, if the user configures
bfd-enable ipv6 only to save on the number of BFD sessions, then LDP will track
the IPv6 Hello adjacency and the next-hops of IPv6 prefix FECs. LDP will not track
next-hops of IPv4 prefix FECs resolved over the same LDP IPv6 adjacency. If the
IPv4 data plane encounters errors and the IPv6 Hello adjacency is not affected and
remains up, traffic for the IPv4 prefix FECs resolved over that IPv6 adjacency will be
black-holed. If the BFD tracking the IPv6 Hello adjacency times out, then all IPv4 and
IPv6 prefix FECs will be updated.
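
For example, the following minimal sketch enables tracking with both IPv4 and IPv6 BFD sessions on one interface. The interface name is hypothetical, the surrounding node names may vary by release, and it is assumed that BFD parameters are already configured on the underlying IP interface:

configure router ldp
    interface-parameters
        interface "to-peer-1"
            bfd-enable ipv4 ipv6
        exit
    exit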

The tracking of a mLDP FEC has the following behavior:

• IPv4 and IPv6 mLDP FECs will only be tracked with the Hello adjacency
because they do not have the concept of downstream next-hop.
• The upstream LSR peer for an mLDP FEC supports the multicast upstream FRR
procedures, and the upstream peer will be tracked using the Hello adjacency on
each link or the IPv6 transport address if there is a T-LDP session.
• The tracking of a targeted LDP peer with BFD does not change with the support
of IPv6 peers. BFD tracks the transport address conveyed by the Hello
adjacency which bootstrapped the LDP IPv6 session.

5.23.12 Services Using SDP with an LDP IPv6 FEC


The SDP of type LDP with far-end and tunnel-farend options using IPv6 addresses
is supported. The addresses need not be of the same family (IPv6 or IPv4) for the
SDP configuration to be allowed. The user can have an SDP with an IPv4 (or IPv6)
control plane for the T-LDP session and an IPv6 (or IPv4) LDP FEC as the tunnel.

Because IPv6 LSP is only supported with LDP, the use of a far-end IPv6 address will
not be allowed with a BGP or RSVP/MPLS LSP. In addition, the CLI will not allow an
SDP with a combination of an IPv6 LDP LSP and an IPv4 LSP of a different control
plane. As a result, the following commands are blocked within the SDP configuration
context when the far-end is an IPv6 address:

• bgp-tunnel
• lsp
• mixed-lsp-mode

SDP admin groups are not supported with an SDP using an LDP IPv6 FEC, and the
attempt to assign them is blocked in CLI.
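
For illustration, the following is a minimal sketch of an SDP of type LDP with an IPv6 far end; the SDP ID and address are hypothetical:

configure service sdp 64 mpls create
    description "SDP using an LDP IPv6 FEC"
    far-end 2001:db8::5
    ldp
    no shutdown
exit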


Services which use LDP control plane (such as T-LDP VPLS and R-VPLS, VLL, and
IES/VPRN spoke interface) will have the spoke-SDP (PW) signaled with an IPv6 T-
LDP session when the far-end option is configured to an IPv6 address. The spoke-
SDP for these services binds by default to an SDP that uses a LDP IPv6 FEC, which
prefix matches the far end address. The spoke-SDP can use a different LDP IPv6
FEC or a LDP IPv4 FEC as the tunnel by configuring the tunnel-far-end option. In
addition, the IPv6 PW control word is supported with both data plane packets and
VCCV OAM packets. Hash label is also supported with the above services, including
the signaling and negotiation of hash label support using T-LDP (Flow sub-TLV) with
the LDP IPv6 control plane. Finally, network domains are supported in VPLS.

5.23.13 Mirror Services and Lawful Intercept


The user can configure a spoke-SDP bound to an LDP IPv6 LSP to forward mirrored
packets from a mirror source to a remote mirror destination. In the configuration of
the mirror destination service at the destination node, the remote-source command
must use a spoke-SDP with a VC-ID that matches the one that is configured in the
mirror destination service at the mirror source node. The far-end option will not be
supported with an IPv6 address.

This also applies to the configuration of the mirror destination for a LI source.

5.23.13.1 Configuration at mirror source node

Use the following rules and syntax to configure at the mirror source node.

• The sdp-id must match an SDP which uses an LDP IPv6 FEC.
• Configuring egress-vc-label is optional.

CLI Syntax: config mirror mirror-dest 10
                no spoke-sdp sdp-id:vc-id
                spoke-sdp sdp-id:vc-id [create]
                    egress
                        vc-label egress-vc-label
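
A filled-in sketch of the above syntax follows; the mirror service ID, SDP ID, VC ID, and egress label are hypothetical, and the referenced SDP is assumed to use an LDP IPv6 FEC:

configure mirror mirror-dest 10 create
    spoke-sdp 64:10 create
        egress
            vc-label 16001
        exit
    exit
    no shutdown
exit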

5.23.13.2 Configuration at mirror destination node

Use the following rules and syntax to configure at the mirror destination node.


• The far-end ip-address command is not supported with an LDP IPv6 transport tunnel. The user must reference a spoke-SDP using an LDP IPv6 SDP coming from the mirror source node.
• In the spoke-sdp sdp-id:vc-id command, the vc-id should match that of the spoke-sdp configured in the mirror-destination context at the mirror source node.
• Configuring ingress-vc-label is optional; both static and t-ldp are supported.

CLI Syntax: configure mirror mirror-dest 10 remote-source
                far-end ip-address [vc-id vc-id] [ing-svc-label ingress-vc-label | tldp] [icb]
                no far-end ip-address
                spoke-sdp sdp-id:vc-id [create]
                    ingress-vc-label ingress-vc-label
                    exit
                    no shutdown
                exit
            exit

Mirroring and LI will also be supported with the PW redundancy feature when the endpoint spoke-SDP, including the ICB, is using an LDP IPv6 tunnel.

5.23.14 Static Route Resolution to a LDP IPv6 FEC


An LDP IPv6 FEC can be used to resolve a static IPv6 route with an indirect next-
hop matching the FEC prefix. The user configures a resolution filter to specify the
LDP tunnel type to be selected from TTM:

config>router>static-route-entry ip-prefix/prefix-length [mcast]
    indirect ip-address
        tunnel-next-hop
            [no] disallow-igp
            resolution {any | disabled | filter}
            resolution-filter
                [no] ldp
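
For illustration, a sketch of a static IPv6 route resolved to an LDP IPv6 FEC following the syntax tree above; the prefix and indirect next-hop addresses are hypothetical:

configure router
    static-route-entry 3ffe::100/128
        indirect 3ffe::a:1
            tunnel-next-hop
                resolution-filter
                    ldp
                exit
                resolution filter
            exit
            no shutdown
        exit
    exit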

A static route of an IPv6 prefix cannot be resolved to an indirect next-hop using an LDP IPv4 FEC. An IPv6 prefix can only be resolved to an IPv4 next-hop using the 6-over-4 encapsulation, in which the outer IPv4 header uses the system IPv4 address as the source and the next-hop as the destination. Therefore, the following example returns an error:

A:SRU4>config>router# static-route 3ffe::30/128 indirect 110.20.1.1 tunnel-next-hop resolution-filter ldp

MINOR: CLI LDP not allowed for 6over4.


5.23.15 IGP Route Resolution to a LDP IPv6 FEC


LDP IPv6 shortcut for IGP IPv6 prefix is supported. The following commands allow a
user to select if shortcuts must be enabled for IPv4 prefixes only, for IPv6 prefixes
only, or for both.

config>router>ldp-shortcut [ipv4][ipv6]
ldp-shortcut [ipv4][ipv6]
no ldp-shortcut

This CLI command has the following behaviors:

• When executing a pre-Release 13.0 config file, the existing command is converted as follows:
config>router>ldp-shortcut changed to config>router>ldp-shortcut ipv4
• If the user enters the command without the optional arguments in the Release
13.0 CLI, it defaults to enabling shortcuts for IPv4 IGP prefixes:
config>router>ldp-shortcut changed to config>router>ldp-shortcut ipv4
• When the user enters both IPv4 and IPv6 arguments in the Release 13.0 CLI,
shortcuts for both IPv4 and IPv6 prefixes are enabled:
config>router>ldp-shortcut ipv4 ipv6

5.23.16 OAM Support with LDP IPv6


MPLS OAM tools lsp-ping and lsp-trace are updated to operate with LDP IPv6 and
support the following:

• use of IPv6 addresses in the echo request and echo reply messages, including
in DSMAP TLV, as per RFC 4379
• use of LDP IPv6 prefix target FEC stack TLV as per RFC 4379
• use of IPv6 addresses in the DDMAP TLV and FEC stack change sub-TLV, as
per RFC 6424
• use of 127/8 IPv4 mapped IPv6 address; that is, in the range ::ffff:127/104, as
the destination address of the echo request message, as per RFC 4379.
• use of 127/8 IPv4 mapped IPv6 address; that is, in the range ::ffff:127/104, as
the path-destination address when the user wants to exercise a specific LDP
ECMP path.

The behavior at the sender and receiver nodes is updated to support both LDP IPv4
and IPv6 target FEC stack TLVs. Specifically:


1. The IP family (IPv4/IPv6) of the UDP/IP echo request message will always
match the family of the LDP target FEC stack TLV as entered by the user in the
prefix option.
2. The src-ip-address option is extended to accept IPv6 address of the sender
node. If the user did not enter a source IP address, the system IPv6 address will
be used. If the user entered a source IP address of a different family than the
LDP target FEC stack TLV, an error is returned and the test command is
aborted.
3. The IP family of the UDP/IP echo reply message must match that of the received
echo request message.
4. For lsp-trace, the downstream information in DSMAP/DDMAP will be encoded
as the same family as the LDP control plane of the link LDP or targeted LDP
session to the downstream peer.
5. The sender node inserts the experimental value of 65503 in the Router Alert
Option in the echo request packet’s IPv6 header as per RFC 5350. Once a value
is allocated by IANA for MPLS OAM as part of draft-ietf-mpls-oam-ipv6-rao, it
will be updated.
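
For example, hedged sample invocations against an LDP IPv6 prefix FEC; the target prefix and addresses are hypothetical, and option availability may vary by release:

oam lsp-ping prefix 3ffe::a14:106/128 src-ip-address 3ffe::a14:101 detail
oam lsp-trace prefix 3ffe::a14:106/128 path-destination ::ffff:127.0.0.1 detail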

Finally, vccv-ping and vccv-trace for a single-hop PW are updated to support IPv6
PW FEC 128 and FEC 129 as per RFC 6829. In addition, the PW OAM control word
is supported with VCCV packets when the control-word option is enabled on the
spoke-SDP configuration. The value of the Channel Type field is set to 0x57, which indicates that the Associated Channel carries an IPv6 packet, as per RFC 4385.

5.23.17 LDP IPv6 Interoperability Considerations

5.23.17.1 Interoperability with Implementations Compliant with draft-ietf-mpls-ldp-ipv6

SR OS implementation uses a 128-bit LSR-ID as defined in draft-pdutta-mpls-ldp-v2 to establish an LDP IPv6 session with a peer LSR. This is done such that a routable
system IPv6 address can be used by default to bring up the LDP task on the router
and establish link LDP and T-LDP sessions to other LSRs, as is the common practice
with LDP IPv4 in existing customer deployments. More importantly, this allows for the
establishment of control plane independent LDP IPv4 and LDP IPv6 sessions
between two LSRs over the same interface or set of interfaces. The SR OS
implementation allows for two separate LDP IPv4 and LDP IPv6 sessions between
two LSRs over the same interface or a set of interfaces because each session uses
a unique LSR-ID (32-bit for IPv4 and 128-bit for IPv6).


The SR OS LDP implementation does not interoperate with an implementation using a 32-bit LSR-ID as defined in draft-ietf-mpls-ldp-ipv6 to establish an IPv6 LDP session. The latter specifies that an LSR can send both IPv4 and IPv6 Hellos over an interface such that it can establish either an IPv4 or an IPv6 LDP session with LSRs on the same subnet. It thus does not allow for separate LDP IPv4 and LDP IPv6 sessions between two routers.

The SR OS LDP implementation should interoperate with an implementation using a 32-bit LSR-ID as defined in draft-ietf-mpls-ldp-ipv6 to establish an IPv4 LDP session and to resolve both IPv4 and IPv6 prefix FECs.

The SR OS LDP implementation otherwise complies with all other aspects of draft-
ietf-mpls-ldp-ipv6, including the support of the dual-stack capability TLV in the Hello
message. The latter is used by an LSR to inform its peer that it is capable of
establishing either an LDP IPv4 or LDP IPv6 session and to convey the IP family
preference for the LDP Hello adjacency and thus for the resulting LDP session. This
is required because the implementation described in draft-ietf-mpls-ldp-ipv6 allows
for a single session between LSRs, and both LSRs must agree if the session should
be brought up using IPv4 or IPv6 when both IPv4 and IPv6 Hellos are exchanged
between the two LSRs. The SR OS implementation has a separate session for each
IP family between two LSRs and, as such, this TLV is used to indicate the family
preference and to also indicate that it supports resolving IPv6 FECs over an IPv4
LDP session.

5.23.17.2 Interoperability with Implementations Compliant with RFC 5036 for IPv4 LDP Control Plane Only

This implementation supports advertising and resolving IPv6 prefix FECs over an
LDP IPv4 session using a 32-bit LSR-ID in compliance with draft-ietf-mpls-ldp-ipv6.
When introducing an LSR based on the SR OS in a LAN with a broadcast interface,
it can peer with third party LSR implementations which support draft-ietf-mpls-ldp-
ipv6 and LSRs which do not. When it peers, using the IPv4 LDP control plane, with a third-party LSR implementation which does not support it, the advertisement of IPv6 addresses or IPv6 FECs to that peer may cause the peer to bring down the IPv4 LDP session.

In other words, there are deployed third-party LDP implementations which are
compliant with RFC 5036 for LDP IPv4, but which are not compliant with RFC 5036
for handling IPv6 address or IPv6 FECs over an LDP IPv4 session. To address this
issue, draft-ietf-mpls-ldp-ipv6 modifies RFC 5036 by requiring implementations
complying with draft-ietf-mpls-ldp-ipv6 to check for the dual-stack capability TLV in
the IPv4 Hello message from the peer. Without the peer advertising this TLV, an LSR
must not send IPv6 addresses and FECs to that peer. The SR OS implementation complies with this change.


5.24 LDP Process Overview


Figure 99 displays the process to provision basic LDP parameters.

Figure 99 LDP Configuration and Implementation

1. Create an LDP instance in the admin shutdown state.
2. Check that the available ILM/LTN/NHLFE resources are sufficient for the number of LDP FECs, peers, and policies the node is expected to manage. (*)
3. If applicable, apply LDP global import/export policies.
4. Configure session parameters (such as policies, lbl-distribution, capabilities, and so on).
5. If applicable, apply tcp-session-parameters (such as security, path-mtu, ttl).
6. Configure interface parameters (such as v4, v6, hello, keepalive, transport-address, bfd, and so on).
7. Configure targeted-session parameters (such as policies, auto-tldp-templates, bfd, hello, keepalive, lsr-id, and so on).
8. Complete LDP instance tuning with ttl settings, moFrr/Frr, implicit-null, graceful-restart (if needed), stitching to BGP or SR, egress-stats, and so on.
9. Administratively enable (no shutdown) the LDP instance.
10. After a period of time, to allow the TCP sessions to be established and the FECs to be exchanged, check that no session has gone into overload (if supported by the peer) or has been operationally shut down because of resource exhaustion.
11. Observe that there are no errors in the protocol statistics (show router ldp statistics).

(*) If some of the needed resources are not available, consider implementing stricter import policies and/or enabling the per-peer fec-limit functionality.
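
The following is a minimal sketch of the first and last steps of this flow (create the instance shut down, configure one dual-stack interface, then enable the instance). The interface name is hypothetical, and the exact node names may vary by release:

configure router ldp
    shutdown
    interface-parameters
        interface "to-P1" dual-stack
            ipv4
                no shutdown
            exit
            ipv6
                no shutdown
            exit
        exit
    exit
    no shutdown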


5.25 LDP-IGP Synchronization


The SR OS supports the synchronization of an IGP and LDP based on a solution
described in RFC 5443, which consists of setting the cost of a restored link to infinity
to give both the IGP and LDP time to converge. When a link is restored after a failure,
the IGP sets the link cost to infinity and advertises it. The actual value advertised in
OSPF is 0xFFFF (65535). The actual value advertised in the IS-IS regular metric is
0x3F (63) and in IS-IS wide-metric is 0xFFFFFE (16777214). This synchronization
feature is not supported on RIP interfaces.

When the LDP synchronization timer subsequently expires, the actual cost is put
back and the IGP will readvertise it and use it at the next SPF computation. The LDP
synchronization timer is configured using the following command:

config>router>if> [no] ldp-sync-timer seconds

The SR OS also supports an LDP End of LIB message, as defined in RFC 5919, that
allows a downstream node to indicate to its upstream peer that it has advertised its
entire label information base. The effect of this on the IGP-LDP synchronization timer
is described below.

If an interface belongs to both IS-IS and OSPF, a physical failure will cause both
IGPs to advertise an infinite metric and to follow the IGP-LDP synchronization
procedures. If only one IGP bounces on this interface or on the system, then only the
affected IGP advertises the infinite metric and follows the IGP-LDP synchronization
procedures.

Next, an LDP Hello adjacency is brought up with the neighbor. The LDP
synchronization timer is started by the IGP when the LDP session to the neighbor is
up over the interface. This is to allow time for the label-FEC bindings to be
exchanged.

When the LDP synchronization timer expires, the link cost is restored and is
readvertised. The IGP will announce a new best next hop and LDP will use it if the
label binding for the neighbor’s FEC is available.

If the user changes the cost of an interface, the new value is advertised at the next
flooding of link attributes by the IGP. However, if the LDP synchronization timer is still
running, the new cost value will only be advertised after the timer expires. The new
cost value will also be advertised after the user executes any of the following
commands:

• tools>perform>router>isis>ldp-sync-exit
• tools>perform>router>ospf>ldp-sync-exit
• config>router>if>no ldp-sync-timer


• config>router>ospf>disable-ldp-sync
• router>isis>disable-ldp-sync

If the user changes the value of the LDP synchronization timer parameter, the new
value will take effect at the next synchronization event. If the timer is still running, it
will continue to use the previous value.

If parallel links exist to the same neighbor, then the bindings and services should
remain up as long as there is one interface that is up. However, the user-configured
LDP synchronization timer still applies on the interface that failed and was restored.
In this case, the router will only consider this interface for forwarding after the IGP
readvertises its actual cost value.

The LDP End of LIB message is used by a node to signal completion of label
advertisements, using a FEC TLV with the Typed Wildcard FEC element for all
negotiated FEC types. This is done even if the system has no label bindings to
advertise. The SR OS also supports the Unrecognized Notification TLV (RFC 5919)
that indicates to a peer node that it will ignore unrecognized status TLVs. This
indicates to the peer node that it is safe to send End of LIB notifications even if the
node is not configured to process them.

The behavior of a system that receives an End of LIB status notification is configured
through the CLI on a per-interface basis:

config>router>if>[no] ldp-sync-timer seconds end-of-lib
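
For example, assuming a router interface named "to-P1" and a 60-second timer (both hypothetical values):

config>router# interface "to-P1" ldp-sync-timer 60 end-of-lib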

If the end-of-lib option is not configured, then the LDP synchronization timer is
started when the LDP Hello adjacency comes up over the interface, as described
above. Any received End of LIB LDP messages are ignored.

If the end-of-lib option is configured, then the system will behave as follows on the
receive side:

• The ldp-sync-timer is started.


• If LDP End of LIB Typed Wildcard FEC messages are received for every FEC
type negotiated for a given session to an LDP peer for that IGP interface, the
ldp-sync-timer is terminated and processing proceeds as if the timer had
expired, that is, by restoring the IGP link cost.
• If the ldp-sync-timer expires before the LDP End of LIB messages are received
for every negotiated FEC type, then the system restores the IGP link cost.
• The receive side will drop any unexpected End of LIB messages.

If the end-of-lib option is configured, then the system will also send out an End of
LIB message for prefix and P2MP FECs once all FECs are sent for all peers that
have advertised the Unrecognized Notification Capability TLV.


See the SR OS Router Configuration Guide for the CLI command descriptions for
LDP-IGP Synchronization.

