SPBM Fundamentals and Testing: Baystack PV
Ionut Orbesteanu
Luxoft Professional Romania
Purpose of this presentation
• To introduce SPBM as a method of network virtualization and its
possible advantages over the traditional enterprise network
model.
• To show that link-state protocols like OSPF do not block links, and that
features like ECMP can be used for efficient bandwidth usage.
To achieve these ends, SPBM as used by Avaya relies on two mechanisms: MAC-in-
MAC encapsulation and IS-IS:
• IS-IS is a robust link-state protocol. With SPBM, each node in the
network uses it to discover a path to every other node, and then to discover
where particular services are configured.
• BCBs are the nodes inside the SPBM cloud. They do not learn
customer MAC addresses (standard L2 addresses, known in an
SPBM deployment as C-MACs). BCBs only forward
IS-IS updates, as well as encapsulated traffic to other nodes,
based solely on the source and destination B-MACs in the outer
header. These nodes do, however, "discover" I-SIDs in order to
compute multicast trees for forwarding multicast/broadcast traffic
(this will be detailed later).
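The MAC-in-MAC (802.1ah) encapsulation mentioned above can be sketched by building the backbone header in front of a customer frame. The field layout follows 802.1ah (B-DA, B-SA, B-TAG, I-TAG); the helper below is an illustrative sketch, not Avaya's implementation:

```python
import struct

ETH_BTAG = 0x88A8  # backbone VLAN tag (S-TAG ethertype)
ETH_ITAG = 0x88E7  # I-TAG ethertype

def mac_in_mac(b_da: bytes, b_sa: bytes, b_vid: int, isid: int,
               customer_frame: bytes) -> bytes:
    """Prepend an 802.1ah backbone header to a customer frame.

    Header: B-DA(6) | B-SA(6) | B-TAG(4) | I-TAG(2-byte ethertype +
    4-byte TCI whose low 24 bits carry the I-SID).
    PCP/DEI bits are left at zero for simplicity.
    """
    b_tag = struct.pack("!HH", ETH_BTAG, b_vid & 0x0FFF)
    i_tag = struct.pack("!HI", ETH_ITAG, isid & 0x00FFFFFF)
    return b_da + b_sa + b_tag + i_tag + customer_frame

# Example: encapsulate a (dummy) customer frame on B-VID 4051, I-SID 13501.
frame = mac_in_mac(b"\x03\x00\x01\x00\x34\xbd", b"\x00\xbb\x00\x00\x01\x00",
                   4051, 13501, b"\xaa" * 14)
```

BCBs need only read the first 12 bytes (B-DA/B-SA) to forward such a frame, which is why they never learn C-MACs.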
IS-IS terms:
Manual Area - The manual area, or area address, is
anywhere from 1 to 13 bytes long. The first byte is the
address family identifier (AFI); the following bytes, up to 12,
are the assigned domain (area) identifier. The manual area is
configured by the user and must be the same on all devices
in the SPB network.
SEL (or NSEL) - The last byte (00) is the n-selector. This
part is attached automatically; no user input is accepted.
System ID - The system ID is manually configured by the
user. It is 6 bytes long (the same length as a MAC address)
and must be unique to each node in the SPBM network.
The system ID is used as the node's B-MAC. All
encapsulated traffic in the SPBM backbone will use
node system IDs for source/destination addressing.
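Putting the three fields together, an IS-IS Network Entity Title (NET) can be split mechanically. The sample NET and system ID below are illustrative values, not taken from the deck:

```python
def parse_net(net: str):
    """Split a dotted-hex IS-IS NET into (manual area, system ID, NSEL).

    The last byte is the NSEL, the preceding 6 bytes are the system ID,
    and everything before that (1-13 bytes) is the manual area.
    """
    raw = bytes.fromhex(net.replace(".", ""))
    if not 8 <= len(raw) <= 20:
        raise ValueError("NET must be 8-20 bytes long")
    area, sysid, nsel = raw[:-7], raw[-7:-1], raw[-1]
    return area.hex(), sysid.hex(), f"{nsel:02x}"

# Example NET: area 49.0000, system ID 00bb.0000.0100, NSEL 00
area, sysid, nsel = parse_net("49.0000.00bb.0000.0100.00")
```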
IS-IS Packet Types
• IS-IS Hello packets - as with OSPF, hello packets are
used to discover neighboring IS-IS nodes. These packets are
used to initialize adjacencies between nodes.
• Create the B-VLANs and the SPBM instance. Specify the primary B-VLAN.
At this point the SPBM/IS-IS configuration is complete. If IS-IS hellos are received on
the NNIs, the device will start establishing adjacencies. The user can now start
configuring C-VLANs by associating existing VLANs with individual service
identifiers (I-SIDs).
Once adjacencies are up, nodes will start discovering the other SPBM devices in
the cloud. The "show isis spbm unicast-fib" command displays the discovered
nodes in the topology:
Configuration example:
The type of UNI used is relevant only on the BEB nodes; the
interface type describes how traffic received from hosts and non-
SPBM devices is handled and which I-SIDs it is placed on. Past
the edge bridge, only I-SID information is distributed in the SPBM
cloud. The I-SID information is propagated through the IS-IS network
in IS-IS LSPs (the I-SID information is carried in TLV 144).
• For unicast traffic, the destination B-MAC is that of the BEB where
the destination C-MAC is located. Unicast encapsulated traffic is
forwarded using the unicast FIB.
Each node will know where else in the network a particular I-SID is configured. It
will then use the unicast tree and the discovered I-SID information to compute the
multicast FIB, a database that maps an I-SID-specific multicast address to every
other node that has that I-SID configured.
When B1 discovers that B2 and V2 have also configured I-SID 13501, it will update
the multicast FIB entry for each remote node/I-SID pair:
• It will calculate the multicast B-MACs for B2/I-SID 13501 and V2/I-SID 13501. It will
then add entries in its multicast FIB using these addresses, pointing to all UNIs in
the VLAN associated with I-SID 13501.
• It will calculate the multicast address for itself and I-SID 13501. This address is
attached to an entry pointing to the outbound NNIs.
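The per-node multicast address derivation above can be sketched as follows. In 802.1aq the multicast DA combines the node's SPSourceID (nickname) with the I-SID; the exact bit layout below (nickname in the first three bytes, with the multicast and locally-administered bits set in the first byte) is an assumption for illustration:

```python
def spbm_mcast_dmac(nickname: int, isid: int) -> str:
    """Build an I-SID-specific multicast B-MAC (illustrative bit layout).

    First 3 bytes: the 20-bit nickname, with the multicast (I/G) and
    locally-administered (U/L) bits set in the first byte.
    Last 3 bytes: the 24-bit I-SID.
    """
    assert nickname < (1 << 20) and isid < (1 << 24)
    first = ((nickname >> 16) << 4) | 0x03  # top nickname nibble + flag bits
    octets = [first, (nickname >> 8) & 0xFF, nickname & 0xFF,
              (isid >> 16) & 0xFF, (isid >> 8) & 0xFF, isid & 0xFF]
    return ":".join(f"{o:02x}" for o in octets)

# Nickname 0.00.01 and I-SID 13501 (0x34bd):
dmac = spbm_mcast_dmac(0x00001, 13501)  # "03:00:01:00:34:bd"
```

Because the nickname is unique per node, every node/I-SID pair yields a distinct multicast DA, which is what lets BCBs build a separate tree per source BEB.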
B1 is now ready to handle multicast traffic received on its UNI or on either NNI.
When a node needs to send multicast traffic into the SPBM cloud, it
encapsulates it using the multicast address generated for its own node/I-SID pair.
Traffic received on B1's UNI will be forwarded out on both NNIs to B2 and V2,
encapsulated with the multicast B-MAC DA.
Notice that node B2 has two UNIs associated with I-SID 13501. Both entries are
present in the multicast table, because multicast traffic received on a UNI is
forwarded both onto the I-SID and onto the other associated UNIs.
The Multicast-FIB
All BCBs between the edge nodes generate their own multicast FIBs to handle
broadcast encapsulated traffic received from BEBs:
PP1 multicast-fib:
PP2 multicast-fib:
All traffic originated at the edge is broadcast at first. In the case of unknown
unicast traffic, however, once the destination customer MAC address is discovered
by the originating edge bridge, that stream will no longer be broadcast to all nodes
that have the I-SID configured; it will be sent to the specific end node only.
If any more traffic needs to be sent to Client C, this traffic is now known unicast and
will be forwarded only to B2. The specific destination node's B-MAC address will be
used to encapsulate the packet. All BCB nodes in between will use the unicast FIB to
forward the packet. For example, on PP1:
In the above output you can notice that one I-SID is allocated on
the primary B-VID and the other I-SID on the secondary. This is
SPBM's mechanism for load balancing over equal-cost paths.
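The split seen here (odd I-SIDs on the primary B-VID, even I-SIDs on the secondary, matching the Ixia configuration described later) can be modeled with a trivial assignment rule. This is a sketch of the observed behavior only, not the actual ECT algorithm; the B-VID values are example defaults:

```python
def assign_bvid(isid: int, primary_bvid: int, secondary_bvid: int) -> int:
    """Spread I-SIDs across the two B-VIDs: odd I-SIDs go to the primary
    B-VID, even I-SIDs to the secondary (mirrors the observed split)."""
    return primary_bvid if isid % 2 else secondary_bvid

# With example B-VIDs 4051 (primary) and 4052 (secondary):
pairs = {isid: assign_bvid(isid, 4051, 4052) for isid in (13501, 13502)}
# pairs == {13501: 4051, 13502: 4052}
```

Since each B-VID uses a different ECT algorithm (and hence a potentially different equal-cost path), spreading I-SIDs across B-VIDs spreads service traffic across those paths.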
In the current software releases (5.8 and 10.3), different hardware platforms can
perform different roles in the SPBM network.
ERS4800 units can act only as BEB devices (sending UNI-NNI/NNI-UNI traffic). This
platform cannot forward NNI-to-NNI traffic*
To prevent the forwarding of NNI-NNI traffic on the ERS4800, the overload bit
feature is used.
The overload bit is a field in the LSP header that, when set, indicates that
the generating node's LSDB is overloaded and that the node should only receive
traffic destined to it. This means the node cannot be used to transit traffic destined
for other nodes.
Consider the following example; two VSP7k nodes are connected via an ERS4800
device:
• In this case, the ERS4800 will establish adjacency with both VSP7Ks.
• Each VSP7K will receive LSPs through the ERS4800 node, learning about
the other VSP. The LSPs are stored in each VSP's LSDB; however, the ERS4800's
LSPs have the overload bit set.
• Because the overload bit is set, the VSPs do not use the ERS4800 as a transit node
in the unicast FIB, so the two VSPs cannot send traffic to each other.
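The effect of the overload bit on path computation can be sketched with a toy reachability search that refuses to transit overloaded nodes. The topology and names come from the example above; the code is illustrative only:

```python
from collections import deque

def reachable(graph, overloaded, src, dst):
    """Breadth-first search that never transits an overloaded node.

    An overloaded node may still be a source or destination, but it is
    not expanded further, so no path may pass *through* it.
    """
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        if node != src and node in overloaded:
            continue  # do not transit an overloaded node
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

# Two VSP7Ks connected only via an ERS4800 with the overload bit set:
graph = {"VSP-A": ["ERS4800"], "VSP-B": ["ERS4800"],
         "ERS4800": ["VSP-A", "VSP-B"]}
overloaded = {"ERS4800"}
```

With this graph, each VSP can still reach the ERS4800 itself (traffic destined *to* it is allowed), but not the other VSP, matching the behavior described above.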
• L2Ping - the L2 equivalent of ICMP ping; it can be used to check whether a node
in the SPBM cloud, specified by system ID (B-MAC) or system name, is alive and
reachable.
• L2Tracetree - this tool generates a path from the originating node to all
other nodes in the network that have a particular service (I-SID) configured.
The last two parameters should be left at their default values (they have no
application in current deployments).
To use CFM, simply enable the feature globally. The device must be in an SPBM-
enabled state:
By specifying the other B-VLAN with traceroute in the second command, we can
see that load balancing is being done by B1.
In addition to these tools, IxNetwork can be used to simulate high numbers of IS-
IS nodes and I-SIDs, which makes it very useful for scaling tests.
With IxNetwork release 6.30 and above it is possible to simulate IS-IS nodes and
I-SIDs. The tester can configure an Ixia interface to simulate a BCB node and
establish adjacency with an Avaya SPBM stack. The software also permits
simulating an SPBM cloud behind the BCB, in order to inject a high number of
nodes and I-SIDs into the DUT's FIBs/LSDBs. The following slides will present the
configuration procedure.
In the Protocol Interfaces tab, use the SPB ISIS wizard to configure the interface.
The protocol information has been generated on the interface. Some further
modifications need to be made, however, so that the protocol is simulated as
closely as possible to the Avaya implementation:
Enable the protocol. Also, use the top filter to display only the tabs relevant to SPBM.
In SPB Base VID Ranges, change the B-VLAN priority to 7. Also, change the ECT
algorithm type for the secondary B-VLAN.
Modify the I-SIDs so that odd I-SIDs are sent on the primary B-VID and even I-SIDs
on the secondary.
On our ERS4800 stack we connected the Ixia port to 1/15. We also enabled IS-IS
on this port and configured SPBM globally. Additionally, we configured the same
I-SIDs as in the previous slides.
If everything is configured correctly, the stack should establish adjacency with the
Ixia port and should learn all the simulated nodes and I-SIDs (in our case, the
I-SIDs are learned from each simulated node):
• L3 operation mode is currently not supported on stacks that have SPBM enabled.
L3 routing cannot be done by SPBM devices at this time.
• IGMP snooping can be done on customer VLANs (where both the server and the
clients are in the local C-VLAN). However, multicast traffic sent on the I-SID is
handled the same as broadcast traffic.
Features that are usually installed where hosts meet the network are expected to
work on C-VLANs and UNIs as well as on switched UNIs:
• DHCP Snooping
• Dynamic ARP Inspection
• EAPOL
Configuration issues:
- Creating/deleting NNIs. Finding ways of removing the NNIs from the B-VLANs
without actually deleting the interfaces (for instance, removing the tagging on
NNIs).
- Creating/deleting C-VLANs.
- Testing with DMLTs as NNIs uncovered many issues: traffic multiplied, or
partially or totally dropped, when sent on DMLTs instead of single links.
Enabling/disabling particular links in DMLTs caused traffic drops (even with
enough bandwidth available).
- Traffic forwarding patterns after stack failover, for instance traffic sent to the
wrong I-SID after a base-unit reset.
- IS-IS globally disabled after a reset.
Certain access control features had issues when installed on C-VLANs (with no
issue when installed on regular VLANs):
- No guard rail present to prevent the user from enabling access control features
on NNIs.
With the added Fabric Attach feature, it is hoped that in the future configuration
complexity (and human error) will be reduced to a minimum. The technology is
designed to be used in data centers as well as in campus-type networks.
SPBM and Avaya Fabric Connect are marketed as a very simple and automatic
model for creating connectivity. The key advantages advertised by Avaya are
the simple configuration and the ease of adding new services.
SPBM was already deployed in the 2014 Sochi Winter Olympics, where it was
used in the Olympic village, as well as for broadcasting video feeds from the
events.
Thank you