CCDE In-Depth
Orhan Ergun
CCDE #2014:17
CCIE #26567
CISCO
CERTIFIED DESIGN EXPERT
IN DEPTH
www.orhanergun.net
Copyright
Orhan Ergun
WHAT EXPERTS ARE SAYING
I attended Orhan's training and passed the CCDE Practical exam in
August 2016. I highly recommend taking Orhan Ergun's CCDE Bootcamp. I
found his resources to be very detailed, thorough, and exceptional, the best
around.
I am now the first Nigerian CCDE, and thanks to Orhan.
Hashiru Aminu, Technical Leader at Cisco Systems
I passed the CCDE Practical exam, and Orhan's CCDE course was a very
important contributor to my success.
I definitely recommend his bootcamp to anyone who wants to learn
network design and pass the Practical exam.
Daniel Lardeux, Senior Network Consultant at Post Telecom
I attended Orhan's CCDE Course, and I must say he exceeded
my expectations in all ways in terms of quality, depth, etc.
Deepak Kumar, Senior Network Engineer at HCL
Orhan's ability to cover the vast technical topics required for the CCDE
is tremendous. He is not only technical; he is also an amazing teacher.
Thanks Orhan, you are the best CCDE trainer for sure.
Jason Gooley, System Engineer at Cisco Systems
Cisco Certified Design Expert
Written and Practical Study Guide
VIDEOS
ARTICLES
CHAPTER 3
OSPF
OSPF THEORY
OSPF LINK-STATE ADVERTISEMENT
OSPF LSA TYPES
6 CRITICAL LSAS FOR OSPF DESIGN
OSPF ROUTER LSA
OSPF NETWORK LSA
OSPF SUMMARY LSA
OSPF ASBR SUMMARY LSA
OSPF EXTERNAL LSA
OSPF NSSA EXTERNAL LSA
OSPF AREA TYPES
OSPF STUB AREA
OSPF TOTALLY STUB AREA
OSPF NSSA AREA
OSPF TOTALLY NSSA AREA
OSPF MULTI-AREA DESIGN
HOW MANY ROUTERS SHOULD BE IN ONE OSPF AREA?
HOW MANY ABR (AREA BORDER ROUTER) PER OSPF AREA?
HOW MANY OSPF AREAS ARE SUITABLE PER OSPF ABR?
BEST PRACTICES ON OSPF AREAS:
OSPF SINGLE VS. MULTI AREA DESIGN COMPARISON
INTERACTION BETWEEN OSPF AND OTHER PROTOCOLS
OSPF-BGP INTERACTION CASE STUDY
OSPF ROUTE SUMMARIZATION
OSPF SUB OPTIMAL ROUTING WITH ROUTE SUMMARIZATION CASE STUDY:
OSPF FAST CONVERGENCE
FOUR NECESSARY STEPS IN FAST CONVERGENCE
OSPF FULL-MESH TOPOLOGY DESIGN
SOLUTION:
OSPF QUIZ QUESTIONS
OSPF FURTHER STUDY RESOURCES
BOOKS
VIDEOS
ARTICLES
CHAPTER 4
IS-IS
EXAMPLE:
OSPF VS. IS-IS TERMINOLOGY
IS-IS FAST CONVERGENCE
IS-IS ROUTER TYPES
THERE ARE THREE TYPES OF ROUTERS IN IS-IS:
LEVEL 1-2 IS-IS ROUTERS
IS-IS DESIGN
IS-IS AREA AND LEVELS DESIGN
L1 IN THE POP AND CORE
IS-IS AND MPLS INTERACTION
IS-IS – MPLS INTERACTION CASE STUDY
ANSWER 2:
ANSWER 3:
IS-IS: CASE STUDY – OSPF TO IS-IS MIGRATION
HIGH-LEVEL MIGRATION PLAN FROM OSPF TO IS-IS FOR
FASTNENT
DETAILED OSPF TO IS-IS MIGRATION STEPS
IS-IS SUB OPTIMAL ROUTING CASE STUDY
IS-IS BLACKHOLING AND LINK LEVEL PLACEMENT CASE STUDY
IS-IS REVIEW QUESTIONS
IS-IS FURTHER STUDY RESOURCES
BOOKS
VIDEOS
PODCAST
CHAPTER 5
EIGRP
EIGRP FEASIBILITY CONDITION
EIGRP RFC 7868
EIGRP THEORY DIAGRAMS
EIGRP Design over Different Network Topologies
EIGRP STUB
EIGRP SUMMARY AND FILTERS
EIGRP OVER DMVPN CASE STUDY
QUESTION 1:
QUESTION 2:
QUESTION 3:
QUESTION 4:
QUESTION 5:
EIGRP VS. OSPF DESIGN COMPARISON
EIGRP HUB AND SPOKE DESIGN AND STUB FEATURE CASE
STUDY
QUESTION 1:
Hub and Spoke Case Study Customer Topology
EIGRP IN MPLS LAYER 3 VPN CASE STUDY
EIGRP REVIEW QUESTIONS
EIGRP FURTHER STUDY RESOURCES
BOOKS
VIDEOS
PODCAST
ARTICLES
CHAPTER 6
VPN DESIGN
VPN THEORY
GRE
IPSEC
DMVPN
GETVPN
DMVPN VS. GETVPN COMPARISON
LISP DEVICE ROLES AND TERMINOLOGY
BOOKS
VIDEOS
ARTICLES
CHAPTER 9
MULTICAST
MULTICAST DISTRIBUTION TREES
PIM: PROTOCOL INDEPENDENT MULTICAST
PIM Dense Mode (PIM-DM):
PIM Sparse Mode (PIM-SM):
PIM-SSM (SOURCE SPECIFIC MULTICAST)
BIDIR-PIM (BIDIRECTIONAL PIM)
Orhan Ergun
What is the CCDE Practical Exam?
Here is what you should know about the CCDE Practical exam:
Four scenarios in total, over the course of eight hours
There is no configuration for any vendor equipment
Vendor-agnostic, but some Cisco-specific technologies (e.g., HSRP,
GLBP, EIGRP, DMVPN) may be asked about in the exam
The CCDE Practical is a reading-intensive exam; it will be necessary to skim
through some material in the scenarios. I will share hints in the last chapter
of the book.
Analysis, Design, Implementation and Optimization are the four job tasks
within the CCDE scenarios.
Analyzing the design is the most critical and hardest of these four job tasks,
since you need to understand the background information about the
company and its business and technical requirements. In the CCDE Practical
scenarios, I will show all four tasks so that the different question types will
be understood very well.
Exam score is provided based on these job tasks.
Passing score is around 75–80%.
The exam score is made available immediately after the exam; you don't
need to wait a couple of hours or days.
CCDE Task Areas
Generally, there are four task areas that test-takers will encounter in the
CCDE exam. One or more tasks can be found in any given scenario.
1. Merge & Divest
2. Add Technologies
3. Replace Technology
4. Scaling
This may not be the entire list, but you should definitely start
asking these questions in your real-life designs. In the CCDE exam, most
questions will come from the above considerations.
Adding Technologies
If you are adding new technologies onto an existing network, these sorts
of questions should be kept in mind:
∗ What can be broken? Does this technology affect other
technologies/protocols in the network?
∗ What does this technology provide? Is it really necessary? For
example, if you have enough bandwidth in your network, do you
really need Quality of Service? Or, if you arranged your routing
protocol metrics well, would you need MPLS Traffic Engineering at all?
∗ What are the alternatives to this technology or protocol? (Throughout
the book, you will see many comparison charts that will help you
evaluate alternatives to each technology/protocol.)
∗ What additional information do you need to deploy this
technology/protocol?
∗ Every new technology adds some amount of complexity, so consider
the complexity vs. benefit tradeoff! As mentioned above, do you
really need to deploy MPLS Traffic Engineering for better
utilization, or could you achieve the same goal with IGP metric
design?
Replacing Technologies
If you are replacing a technology in an existing network, these sorts
of questions should be kept in mind:
∗ Is this change really needed? Is there a valid business reason behind it?
∗ What is the potential impact to the overall network?
∗ What will the migration steps be? Order of operations is very important
in network design. If you don't design the migration process carefully,
you might have unplanned downtime, or planned downtime may take
longer than planned.
∗ Are there budget constraints?
∗ Will both technologies run in the network at the same time? Are
there enough hardware resources on the existing networking
equipment?
∗ Does this new technology require a learning curve? Does your
networking team have experience with the new technology?
∗ Does your network monitoring tool support the new technology?
There are many Layer 2 control and data plane technologies in today's
networks. Ethernet is the main technology used in the Local
Area Network, the Wide Area Network, and the datacenter.
In this chapter:
STP theory and design practices will be explained, and VLAN,
VTP, and trunking best practices will be shared.
Layer 3 first-hop redundancy control mechanisms such as HSRP, VRRP
and GLBP will be explained from the network design perspective.
Campus and datacenter access networks can be built with Layer 2 or Layer
3 access. These two design approaches will be explained in detail, and
examples will be provided to help you understand the optimal design for
given business and application requirements.
Many case studies will be presented as complementary to the theory and
the best practice information.
At the end of this section, you will find many quiz questions and their
answers, so you will be able to test your Layer 2 design knowledge.
Common network design principles for availability, scalability,
convergence, security, networking topologies, routing protocols, and
Layer 2 technologies will also be shared.
SPANNING TREE
Imagine you have 10 VLANs, Switch 2 is the root switch for all 10
VLANs, and every VLAN has tens of hosts.
If the link between Switch 2 and Switch 3 is a Layer 3 link, spanning tree
doesn't block any link in this topology. The topology is then called a Layer 2
loop-free topology.
Spanning tree deals with the logical Layer 2 topology. For the Layer 3 part,
the default gateway function, one of the first-hop redundancy mechanisms is
used: HSRP, VRRP, or GLBP.
If HSRP or VRRP is used, one of the switches can be used as the primary for
a given VLAN and the other switch as the standby.
Switch 2, for example, can be the primary for VLAN 5 and Switch 3 the
standby. For another VLAN, for example VLAN 6, Switch 3 is the primary and
Switch 2 the standby. This allows all of Switch 1's uplinks to be used, so
bandwidth is not wasted.
That is why HSRP and VRRP provide VLAN-based load balancing: the default
gateway for a particular VLAN can be only one of the switches.
If we use GLBP (Gateway Load Balancing Protocol) in this topology, for
any given VLAN, both Switch 2 and Switch 3 can be used as default gateways.
For different hosts in the same VLAN, ARP replies are sent by different
switches.
Switch 2 can be the default gateway of host 1 in VLAN 5, and Switch 3 can be
the default gateway of host 2 in VLAN 5.
As you can understand, traffic for different sets of hosts in the same VLAN
can be sent by Switch 1 to Switch 2 and Switch 3 at the same time.
A flow is not defined per host, of course. For the same host, different
destination IP addresses and port numbers mean different flows. So we can
say that some of the traffic of host 1 in VLAN 5 can be sent to Switch 2, and
some of the traffic of the same host can be sent to Switch 3.
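To make the idea of a flow concrete, here is a short Python sketch of how a device can map a flow's address/port tuple onto one of several links. This is not any vendor's implementation; the hash choice and field set are illustrative only.

```python
import hashlib

def flow_hash(src_ip, dst_ip, src_port, dst_port, n_links):
    """Map a flow (identified by its address/port tuple) to one of
    n_links links. The same tuple always hashes to the same link, so
    packets of one flow stay in order, while different flows spread
    across the links."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % n_links

# Two flows from the same host (host 1 in VLAN 5) can land on
# different links, because the destination tuples differ.
link_a = flow_hash("10.5.0.1", "192.0.2.10", 40000, 80, 2)
link_b = flow_hash("10.5.0.1", "198.51.100.7", 40001, 443, 2)
```

The key property is determinism per flow: reordering within a flow is avoided, while the aggregate traffic of many flows is balanced.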
SPANNING TREE THEORY
As soon as STP detects a loop, it blocks a link to prevent the loop. CST
(Common Spanning Tree, 802.1D), which is classic/legacy STP, supports only
one instance for all VLANs. One instance means there is only one topology,
and thus only one root switch, for all the VLANs in the network.
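The single-root election behind this can be sketched in a few lines of Python. The switch names, priorities, and MAC values below are made up for illustration; only the comparison rule (lowest bridge ID wins) reflects how STP actually elects the root.

```python
def elect_root(bridges):
    """bridges: dict of name -> (priority, mac_as_int).
    The numerically lowest bridge ID wins: lowest priority first,
    lowest MAC address as the tie-breaker. With one STP instance,
    exactly one root is elected for every VLAN."""
    return min(bridges, key=lambda name: bridges[name])

switches = {
    "Switch1": (32768, 0x0000_0C11_1111),
    "Switch2": (4096,  0x0000_0C22_2222),  # priority lowered on purpose
    "Switch3": (32768, 0x0000_0C33_3333),
}
```

Here `elect_root(switches)` returns "Switch2", because its configured priority is the lowest; with equal priorities the lowest MAC address would decide.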
Question 1:
What would be the implication of this?
Question 2:
How can future problems be mitigated?
Answer
This problem happened in the early days of networking. Hubs don't
generate STP BPDUs. If you connect a hub with two ports to a switch,
a forwarding loop occurs.
In order to stop it you can remove one of the cables. However, had the
contractor known the complication from the start they most likely would have
chosen a different configuration.
That’s why a feature that can prevent a loop should be in place in
advance.
BPDU Guard and BPDU Filter are two features that react to
Spanning Tree BPDUs.
BPDU Guard shuts down the switch port if an STP BPDU is received on
the port.
BPDU Filter doesn't shut down the port; it stops BPDUs from being sent
or processed on it.
In this case study, no BPDU is generated, so BPDU-based features don't
help; port security does.
HSRP
HSRP VIP-VMAC
In the figure above only HSRP is shown, but VRRP works in
exactly the same way. One virtual MAC address is mapped to one virtual IP
address. The switches have their own physical IP addresses as well.
Only one switch can be the HSRP active switch at any given time. If the
active switch fails, the standby takes over the gateway responsibility by
responding to ARP requests with the common virtual MAC address.
The hosts' gateway IP address doesn't change: on the hosts, the virtual IP
address is configured as the gateway.
GLBP
GLBP uses one virtual IP address and several virtual MAC addresses. To the
clients' ARP requests, the Active Virtual Gateway (AVG) responds with
different virtual MAC addresses, so network-based load balancing can be
achieved.
Multiple switches can actively forward the network traffic. GLBP
is a Cisco proprietary protocol and may not work together with other
vendors' equipment.
Different clients use different devices as their default gateway, but the
same IP address, the virtual GLBP IP, is configured as the default gateway
IP address on all clients.
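The AVG behavior described above can be sketched as a toy model in Python. This is not GLBP itself: the class, the round-robin policy, and the virtual MAC values are illustrative (GLBP's real virtual MACs come from the 0007.b400.xxyy range, which is used here for flavor).

```python
from itertools import cycle

class ActiveVirtualGateway:
    """Toy model of the GLBP AVG: one virtual IP, several virtual
    MACs handed out in rotation in ARP replies, so different clients
    resolve the SAME gateway IP to DIFFERENT forwarders."""
    def __init__(self, virtual_ip, virtual_macs):
        self.virtual_ip = virtual_ip
        self._next_mac = cycle(virtual_macs)

    def arp_reply(self, client_ip):
        # Every client asks for the same virtual IP; the AVG answers
        # with the next virtual MAC in rotation (client_ip is shown
        # only to mirror the ARP exchange).
        return self.virtual_ip, next(self._next_mac)

avg = ActiveVirtualGateway("10.1.1.1", ["0007.b400.0101", "0007.b400.0102"])
```

Two consecutive ARP requests receive the same virtual IP but different virtual MACs, which is exactly how host 1 and host 2 in the same VLAN end up using different physical gateways.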
GLBP might be suitable for a campus but not for the Internet edge, since the
firewall, as the single client of the gateway, always resolves the same
gateway IP address to a single forwarder. A case study at the end of this
chapter explains in detail why GLBP is not suitable at the Internet edge.
The table below summarizes the similarities and differences of all the first-
hop redundancy protocols in great detail. Network designers should know the
pros and cons of these technologies, the protocol alternatives, and their
capabilities from the design point of view.
Design Requirement: Suitable on LAN
HSRP: Yes. VRRP: Yes. GLBP: Yes.
Design Requirement: Suitable in the Datacenter
HSRP: Yes, if Layer 3 access is not used. VRRP: Yes, if Layer 3 access is
not used. GLBP: Yes, if Layer 3 access is not used.
Design Requirement: Suitable on the Internet Edge
HSRP: Yes, but there might be better options, such as routing with the
firewall or a router behind the firewall. VRRP: Yes, but there might be
better options, such as routing with the firewall or a router behind the
firewall. GLBP: No, it creates polarization issues; this is explained in
detail in Orhan Ergun's CCDE Course.
Question:
Which FHRP should the company use? Why?
Answer:
As previously indicated in this chapter, only one device is used as the
active gateway with HSRP and VRRP.
If a failure happens, the standby device takes over, and even with fast
hellos and BFD there will still be downtime. During network convergence,
client traffic will be affected.
With GLBP, in any given VLAN, there can be two or more active
gateways, thus allowing client traffic to be divided among the active
gateways.
If a failure occurs in a GLBP-enabled network, only some of the client
traffic in a given VLAN is affected. If there are two active gateways, only
half of the traffic in a given VLAN will be affected.
Thus, for the purposes of this question, GLBP is the best choice.
FIRST HOP REDUNDANCY PROTOCOL
CASE STUDY 2
Which one is more suitable for the Internet edge, HSRP or GLBP?
Let’s look at the images below.
STP/HSRP INTERACTION
In networks, all protocols interact with each other. Whenever you add,
replace, or change a protocol, as a network designer you should consider the
overall impact. Throughout the book, many such interactions will be shown,
along with best practices for finding an optimal design.
The first interaction is between the Layer 2 protocols and the gateway
protocols. The spanning tree and HSRP interaction is explained in the
example below.
One important factor to take into account when tuning HSRP is its
preemptive behavior.
Preemption causes the primary HSRP peer to re-assume the primary role
when it comes back online after a failure or maintenance event.
Preemption is the desired behavior because the STP/RSTP root should be
the same device as the HSRP primary for a given subnet or VLAN. If HSRP
and STP/RSTP are not synchronized, the interconnection between the
distribution switches can become a transit link, and traffic takes a multi-hop
Layer 2 path to its default gateway.
As a best practice, the STP root switch should be aligned with the FHRP
active device, and if there are network services devices such as firewalls,
the active firewalls should be aligned with STP and FHRP as well.
Question 1:
Why do we always place STP root switch and FHRP gateway at the
distribution layer in the campus networks?
What is the design implication if it were placed in the access layer
instead?
Answer 1:
The traffic pattern in campus networks is mostly North/South. In two- or
three-layer hierarchical designs (Access-Distribution or Access-
Distribution-Core), the Layer 2/Layer 3 boundary is placed at the
distribution layer. The distribution layer is used for scalability,
modularity, and hierarchy.
When the network has a distribution layer, any access layer switch can be
upgraded smoothly. Also, some functions are shared between the access and
distribution layer devices.
The access layer provides edge functions such as filtering, client access,
QoS, and first-hop security features such as Dynamic ARP Inspection, DHCP
Snooping, Port Security, and so on.
The distribution layer is responsible for route and traffic/speed
aggregation.
Layer 3 starts at the distribution layer; thus, FHRPs are enabled there.
It is therefore logical to place the STP root and the FHRP gateway at the
top of the Layer 2 network.
Question 2:
If there is a three-layer hierarchy, can the root switch functionality be put
into the Core layer?
Answer 2:
No. The Layer 2 domain would be much larger in that case, and we
always want to keep the Layer 2 domain small unless the application requires
it to be larger, such as with vMotion or Layer 2 extension.
With a Layer 3 access design, since the default gateway sits on the access
layer switches and no first-hop redundancy protocol is needed on them, the
Layer 2 domain is the smallest compared to the other local area network
design options (Layer 2 looped or loop-free access designs).
ACCESS NETWORK DESIGN CASE STUDY 2
Question
Where would a Layer 2 looped design be better from the Layer 2 campus
network design point of view?
Answer:
In an environment where a Layer 2 VLAN needs to span many
access switches. The classic example is the datacenter.
In datacenters, hosts (specifically, virtual machines) can move
between access switches, so VLANs should be spread across those switches.
It is also very common in campus environments where WLAN is used
on every access switch.
In environments where Layer 2 needs to be extended across many access
switches, the Layer 2 looped design is the only design option with Spanning
Tree.
There are also many virtual overlay technologies that work on top of a
Layer 3 access design. VXLAN, NVGRE, STT, and GENEVE are virtual
overlay protocols that provide Layer 2 over Layer 3 tunneling, and they are
mainly used in datacenter environments.
LAYER 2 TECHNOLOGIES REVIEW QUESTIONS
Question 1:
What is the name of the below topology?
A. Layer 2 loop free access design
B. Layer 2 looped access design
C. Layer 3 routed access design
D. Layer 2 routed access design
Answer 1:
The topology is called a Layer 2 looped topology since the connection
between the two distribution layer switches is Layer 2. Because it is Layer 2,
STP has to block one link, the one farther from the root switch, to prevent a
forwarding loop. The correct answer is B.
Question 2:
Spanning tree blocks some links to prevent forwarding loops in Layer 2
Ethernet topologies. With which of the below technologies does spanning tree
not block any link? (Choose Two)
A. MST
B. LACP
C. PAGP
D. DTP
Answer 2:
MST is the standard spanning tree protocol, so it still blocks links.
DTP is the Dynamic Trunking Protocol and is not used for link aggregation
purposes. Two protocols are used to aggregate multiple links into a bundle,
and spanning tree doesn't block those aggregated links.
These protocols are LACP and the Cisco proprietary PAgP.
That's why the correct answers to this question are B and C.
Question 3:
Which of the below options is true for LACP?
A. LACP system ID is generated with System Priority and switch
MAC address
B. LACP is a Layer 3 mechanism which is used for Layer 3 load
balancing
C. LACP is a first hop redundancy mechanism
D. LACP is a Cisco proprietary link aggregation protocol
Answer 3:
Although it is a link aggregation protocol, LACP is not a Cisco
proprietary protocol; that's why Option D is incorrect. It is not a Layer 3
load balancing mechanism, and it is not a first-hop redundancy mechanism
either, so Options B and C are incorrect too.
The System ID, which is an important component of LACP, is created from
the System Priority and the switch MAC address. The answer to this question
is A.
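The System ID construction can be sketched in a few lines of Python: a 2-byte system priority followed by the 6-byte switch MAC address, as defined by IEEE 802.1AX. The priority and MAC values below are illustrative.

```python
def lacp_system_id(system_priority, mac):
    """Build the 8-byte LACP System ID: 2-byte system priority
    followed by the 6-byte switch MAC address.
    mac is given as 'aa:bb:cc:dd:ee:ff'."""
    prio_bytes = system_priority.to_bytes(2, "big")
    mac_bytes = bytes(int(octet, 16) for octet in mac.split(":"))
    return prio_bytes + mac_bytes

# Default-like priority 32768 (0x8000) plus an example MAC:
sid = lacp_system_id(32768, "00:1a:2b:3c:4d:5e")
```

Because the MAC is part of the ID, two switches with the same configured priority still produce distinct System IDs.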
Question 4:
Which below technologies can be used as First Hop redundancy gateway
protocol? (Choose Three)
A. HSRP
B. VRRP
C. Spanning Tree
D. OSPF
E. GLBP
Answer 4:
HSRP, VRRP, and GLBP can be used as first-hop redundancy gateway
protocols. First-hop redundancy means that if the gateway of the users/hosts
fails, a secondary device takes over the gateway responsibility.
That's why the answers to this question are A, B, and E.
Question 5:
A fictitious company has two datacenters and two interconnect links
between them. The company is extending a specific VLAN between the
datacenters. Which of the below protocols allow this VLAN's traffic to use
both interconnect links? (Choose Two)
A. RPVST
B. MST
C. Etherchannel
D. Multi Chassis Etherchannel
Answer 5:
If any spanning tree mode is used for those two links, as explained
in the Layer 2 technologies chapter of the book, one of the links is blocked
for any particular VLAN, because spanning tree doesn't support flow-based
load balancing.
An Etherchannel between the datacenters can provide flow-based load
balancing for the two interconnect links if both links are terminated on the
same devices.
If the links are terminated on different devices in each datacenter, then
Multi-Chassis Etherchannel provides flow-based load balancing. Since the
question doesn't say whether they are terminated on the same or different
devices, both options are true.
That's why the answers to this question are C and D.
Question 6:
Which first hop redundancy protocol is more suitable for the below
topology?
A. HSRP
B. GLBP
C. Spanning Tree
D. MLD
Answer 6:
Spanning Tree and MLD (Multicast Listener Discovery) are not first-hop
redundancy protocols. Before explaining whether HSRP or GLBP is more
suitable, let me explain some concepts about GLBP and HSRP.
GLBP provides flow-based load balancing. There are two common
load-balancing techniques in Layer 2 networks: VLAN-based and flow-based
load balancing.
VLAN-based load balancing allows a switch to be the active Layer 3
gateway for only one set of VLANs while the other distribution switch stays
standby. For the other set of VLANs, the standby switch acts as the active
switch and the active switch acts as standby. HSRP and VRRP work in this
way.
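A minimal sketch of this VLAN-based assignment, using a simple odd/even split and made-up switch names for illustration:

```python
def active_gateway(vlan_id, left="DistSwitch-Left", right="DistSwitch-Right"):
    """VLAN-based load balancing: odd-numbered VLANs use one
    distribution switch as the active gateway, even-numbered VLANs
    the other; the peer switch is standby for that VLAN. The odd/even
    rule and switch names are illustrative."""
    return left if vlan_id % 2 else right
```

Every host in a given VLAN resolves its gateway to the same switch, which is why this technique balances per VLAN, never within one.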
The VLAN 100 HSRP active gateway can be the left distribution switch, and
for VLAN 101 it can be the right distribution switch. Flow-based load
balancing, in contrast, allows both distribution switches to be used
active-active for the same VLAN.
Some users in a particular VLAN use one distribution switch as their
active default gateway, and other users in the same VLAN use the previously
standby switch as their active gateway.
In this way you can use both distribution switches active-active and
utilize all the links in the Layer 2 network. However, supporting this
configuration by means other than GLBP is more complex from a design
point of view.
If you want both the right and left distribution switches to be used active-
active for the same VLAN, e.g., VLAN 100, then you need to use GLBP.
However, STP should not block the Layer 2 links. How can this be achieved?
One way is to change the inter-distribution link to Layer 3. That way,
none of the links between the access and distribution layer switches
will be blocked, so you can use all the uplinks.
If you use GLBP with the above topology, since the right access-to-
distribution link will be blocked, all user traffic from the right access
switch whose ARP requests are answered by the right distribution switch (as
an active GLBP virtual forwarder) will first go to the left distribution
switch and then over the interconnect link to the right distribution switch.
In this way a suboptimal path is always used.
That's why the answer to this question is HSRP.
Question 7:
Which of the below technologies provides Spanning Tree unidirectional
failure detection if a BPDU is not received?
Answer 7:
As mentioned in the Spanning Tree section of the Layer 2
technologies chapter of the book, Loop Guard protects against spanning tree
unidirectional link failure scenarios in which BPDUs are lost. That's why
the correct answer to this question is C.
Question 8:
How is fast convergence achieved in RSTP (802.1w)?
Answer 8:
Fast convergence in RSTP (802.1w) and MST (802.1s) is achieved with
the Proposal and Agreement handshake mechanism, as explained in the
Layer 2 technologies chapter.
Question 9:
Which of the below spanning tree modes provides maximum scaling?
A. CST
B. RSTP
C. MST
D. PVSTP+
Answer 9:
As a spanning tree mode, MST provides maximum scaling. If the
requirement is to provide scaling in spanning tree topologies, for example in
the datacenter, then MST is the best choice. The correct answer is C.
Question 10:
What is the main function of the Access Layer in a hierarchical campus
network design?
A. Provides aggregation points for network services such as
firewalls and load balancers
B. Provides user access, first hop security and QoS functions
C. Provides layer 3 routing to the wide area network
D. Provides layer 3 virtualization in the campus network
Answer 10:
The main function of the access layer in a campus network is providing user
access, first-hop security mechanisms, and QoS functions such as
classification and marking.
Layer 3 virtualization can be provided if there is a routed access design
with VRFs configured on the access layer devices, but that is a specific
design, not the main function.
Layer 3 routing is the same: it can be done on the access layer
devices if the routed access design is used, but it is not the main function.
That's why the correct answer is B.
Question 11:
Which of the below mechanisms provides flow-based load balancing?
A. HSRP
B. VRRP
C. GLBP
D. Spanning Tree
Answer 11:
Of the given options, only GLBP supports flow-based load balancing, as
explained in detail in the Layer 2 technologies chapter.
Question 12:
Which of the below mechanisms provides optimal Layer 2 switching in a
campus network design?
A. BPDU Guard
B. Spanning Tree Portfast
C. Root Guard
D. BPDU filter
E. ECMP
Answer 12:
Since the question asks about optimal Layer 2 forwarding/switching, ECMP
is not an option.
PortFast is used to reduce convergence time on edge/user ports and to
prevent TCNs on those ports. BPDU Guard and BPDU Filter are used to
prevent those ports from being connected to another switch in the campus.
Root Guard is used for determinism in the identification and placement of
the root switch. When Root Guard is enabled on the root switch, even if a
new switch is added to the network, the traffic flow doesn't change.
The assumption is that the root switch placement was chosen correctly.
In a campus network, since most traffic is North/South, the root
switch is always placed on the distribution layer devices in a Layer 2 access
design, as explained in a case study earlier in the Layer 2 technologies
chapter.
The correct answer to this question is C.
Question 13:
Which of the below statements are true for VLAN-based load balancing?
(Choose Two)
A. Hosts in different Vlans can use different default gateways
B. Hosts in the same Vlan can use different default gateways
C. Traffic from odd- and even-numbered VLANs can be sent through
different default gateways for load balancing
D. Maximum 100 Vlans can use an individual default gateway
Answer 13:
Odd and even VLAN number separation is a common method in VLAN-based
load balancing; that's why Option C is one of the correct options.
Hosts in different VLANs can use different default gateways; that is the
whole idea of VLAN-based load balancing, so Option A is also correct.
There is no VLAN limitation per default gateway.
That's why the answers to this question are A and C.
Question 14:
Why is Spanning Tree and FHRP synchronization/interaction necessary?
A. To prevent blackholing
B. To prevent sub optimal forwarding
C. To provide fast convergence
D. To provide better security
Answer 14:
As explained in the Spanning Tree/FHRP part of the Layer 2
technologies chapter, it is necessary to provide optimal forwarding. The
answer is B.
Question 15:
Which of the below statements are true for the Layer 3 routed access design?
(Choose Three)
A. There is no spanning tree between the access and distribution
layers
B. Spanning Tree should be enabled on the user-facing ports
C. ECMP routing can be done between access and distribution
layer devices
D. A maximum of 4 links can be used between access and distribution
layer devices
E. Any given VLAN can be spanned between access layer devices
Answer 15:
There is no spanning tree between the access and distribution layer
switches in a Layer 3/routed access design.
Spanning tree should be enabled on the user-facing ports to prevent
intentional and unintentional Layer 2 attacks and loop issues.
ECMP (Equal-Cost Multipath) routing can be done between access and
distribution layer devices.
You can use 8 or more links between the access and distribution layer
devices, depending on hardware and vendor capabilities.
VLANs cannot be spanned between access switches in a Layer 3 access
design.
That's why the correct answers to this question are A, B, and C.
LAYER 2 FURTHER STUDY RESOURCES
BOOKS
Tiso, J. (2011) Designing Cisco Network Service Architecture (ARCH)
Foundation Learning Guide: (CCDP ARCH 642-874) (Third Edition), Cisco
Press.
VIDEOS
Ciscolive Session BRKCRS-2031
Ciscolive Session BRKRST-3363
Ciscolive Session BRKCRS-2468
https://www.youtube.com/watch?v=R75vN-frPhE
ARTICLES
http://www.pitt.edu/~dtipper/2011/COE.pdf
http://orhanergun.net/2015/05/common-networking-protocols-in-lan-wan-and-datacenter/
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/DC_Infra2_5/DCI
https://www.cisco.com/web/ME/exposaudi2009/assets/docs/layer2_attacks_and_mitigatio
CHAPTER 2
NETWORK DESIGN TOOLS AND THE BEST PRACTICES
There are design tools that we should consider for every design.
These common design tools and attributes should be considered in the LAN,
the WAN, and the data center.
Many of the principles in this chapter are not only for networking
technologies and protocols but are also applicable to compute, virtualization,
and storage technologies.
First, 'reliability' will be explained: the components of reliable network
design and the resiliency concept.
RELIABILITY
Reliability is delivering the legitimate packets from source to destination
within a reasonable amount of time, which depends on the application type
and architecture.
This time is known as delay or latency, and it is one of the packet delivery
parameters. Consistency of delay is known as jitter, and it is very important
for some types of applications such as voice and video; jitter is our second
delivery parameter.
The third packet delivery parameter is packet loss, or drop; voice
and video traffic in particular is more sensitive to packet loss than data
traffic.
Packet loss tolerance is application dependent, and some applications are
very loss sensitive. Generally accepted best practices for delay, jitter,
and packet loss ratios have been defined, and knowing and considering them is
important from the network design point of view. For example, for voice
packets, the one-way delay, also known as 'mouth to ear' delay, should be
less than 150 ms.
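A delay-budget check along these lines can be sketched in Python. The 150 ms budget comes from the text; the per-hop and end-system delay values are illustrative assumptions, not measurements.

```python
ONE_WAY_VOICE_BUDGET_MS = 150  # 'mouth to ear' guideline from the text

def voice_delay_ok(hop_delays_ms, end_system_delay_ms=60):
    """Check whether a path's one-way delay fits the voice budget.
    hop_delays_ms: per-link propagation/queuing estimates (assumed);
    end_system_delay_ms: fixed codec + jitter-buffer delay (assumed).
    Returns (fits_budget, total_delay_ms)."""
    total = sum(hop_delays_ms) + end_system_delay_ms
    return total <= ONE_WAY_VOICE_BUDGET_MS, total
```

For example, a three-hop path of 20, 30, and 10 ms fits the budget, while one of 60, 50, and 40 ms does not; this is the kind of arithmetic a designer does when deciding which links may carry voice.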
Reliability should not be considered only at the link level. Network links
and devices such as switches, routers, firewalls, application delivery
controllers, servers, and storage systems should be reliable, and the
components of these devices need to be reliable as well.
For example, if you carry voice traffic over unreliable serial links,
you will likely encounter packet drops because of link flaps. Best practice is
to carry voice traffic over low-latency links that don't suffer packet loss.
If you have to utilize cheaper, unreliable links such as the Internet, carry
the data traffic over them.
But whichever device, link, or component you choose, eventually it will
fail.
Vendors share their MTBF (Mean Time Between Failures) numbers. Even if
you choose the most reliable devices, links, components, protocols, and
architecture, you need to consider unavoidable failure. This brings us to
resiliency.
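MTBF feeds directly into an availability estimate via the standard formula availability = MTBF / (MTBF + MTTR). Here is a sketch of that calculation and of how redundancy raises availability; the MTBF/MTTR numbers are illustrative, not vendor data.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability of a single component."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def parallel(*avails):
    """Availability of redundant (parallel) components: the system
    is down only when every component is down at the same time."""
    unavailability = 1.0
    for a in avails:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

# One router with an assumed MTBF of 50,000 h and MTTR of 4 h,
# versus a redundant pair of the same routers:
single = availability(50_000, 4)
pair = parallel(single, single)
```

The redundant pair's availability is strictly higher than the single device's, which quantifies why resiliency is achieved with redundancy, while diminishing returns past two or three parallel elements echo the link-count observation later in this section.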
RESILIENCE
Resiliency is how the network behaves once a failure happens. Is it
highly available? Will it converge, and when?
Resilience can be considered a combination of high availability and
convergence. For any network design to be resilient, it should be
redundant and converge fast enough to avoid application timeouts.
Thus, resiliency is interrelated with redundancy and the fast
convergence/fast reroute mechanisms.
Every component and every device can and eventually will fail, thus
system should be resilient enough to re converge/recover to a previous state.
As it is stated above; Resiliency can be achieved with redundancy.
But how much redundancy is best for the resiliency is another
consideration to be taken into an account by the network designers.
Many tests has been performed for routing convergence based on link
number for different routing systems and it seems two or three links are the
best optimum for the routing re convergence.
For routing systems, there are two approaches to converge faster than the default convergence time: fast convergence and fast reroute. Fast convergence is achieved through protocol parameter tuning: reducing failure detection time, propagating the failure through the routing system, processing the new information to find a new path, and updating the routing and forwarding tables of the routers.
For fast reroute, a backup forwarding entry must already be in the device's forwarding table, so pre-computation of the alternate path is necessary. Understanding the different fast reroute mechanisms and their design characteristics is important for network engineers.
FAST CONVERGENCE AND FAST REROUTE
Network reliability is an important measure for the deployability of sensitive applications. When a link, node or SRLG failure occurs in a routed network, there is inevitably a period of disruption to the delivery of traffic until the network reconverges on the new topology.
Fast reaction to the failed element is essential. There are two approaches for fast reaction: fast convergence and fast reroute. When a local failure occurs, four steps are necessary for convergence:
1. Failure detection
2. Failure propagation
3. New information process
4. Update new route into RIB/FIB
For fast convergence, these steps may need to be tuned. Although the RIB/FIB update is hardware dependent, the network operator can tune all the other steps.
One thing always needs to be kept in mind: aggressive fast convergence and fast reroute tuning can affect network stability.
Unlike fast convergence, with fast reroute the backup routes are pre-computed and pre-programmed into the router's RIB/FIB.
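The four steps above can be sketched as a simple sum. A minimal illustration (the step values below are invented for the example, not vendor defaults) of why tuning detection and propagation shrinks total convergence time while the hardware-dependent FIB update stays fixed:

```python
# Sketch: total convergence time after a local failure as the sum of the
# four steps (detection, propagation, computation, RIB/FIB update).
# All millisecond values below are illustrative assumptions.

def convergence_time_ms(detection, propagation, spf_computation, fib_update):
    """Sum of the four convergence components, all in milliseconds."""
    return detection + propagation + spf_computation + fib_update

# Untuned hellos vs. BFD-style fast detection; FIB update is unchanged
# because it is hardware dependent.
default = convergence_time_ms(detection=1000, propagation=50,
                              spf_computation=200, fib_update=100)
tuned = convergence_time_ms(detection=50, propagation=10,
                            spf_computation=50, fib_update=100)
print(default, tuned)
```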
There are many Fast Reroute mechanisms available today. The best known ones are Loop Free Alternate (LFA), Remote Loop Free Alternate (rLFA), MPLS Traffic Engineering Fast Reroute and Segment Routing Fast Reroute.
Loop Free Alternate and Remote Loop Free Alternate are also known as IP or IGP Fast Reroute mechanisms. The main difference between MPLS Traffic Engineering Fast Reroute and the IP Fast Reroute mechanisms is coverage.
MPLS TE FRR can protect any tunnel in any topology. IP FRR mechanisms need the physical topology of the network to be highly connected. Ring and square topologies are hard for IP FRR, but not a problem for MPLS TE FRR at all.
But if MPLS is not enabled on the network, adding MPLS and RSVP-TE just for MPLS TE FRR functionality can be too complicated. In that case network designers may want to evaluate their existing physical structure and try to create alternate/backup paths by adding or removing circuits in the network. IGP metric tuning also helps routers find loop-free alternate paths.
IGP, BGP and MPLS Traffic Engineering Fast Reroute will be covered in detail in later chapters.
SCALABILITY
Scalability is the ability to change, modify or remove part of the entire system without a huge impact on the overall design. There are two scalability approaches for IT systems: scale up and scale out, and they apply to network, compute, storage, application, database and many other systems.
Scaling up can be defined as increasing the resources of an existing system without adding a new system.
Consider a scale-out application architecture: if the application can run over two different servers, we can do maintenance on one of the servers without affecting the user experience.
Consider a network with only one router, where we need to plan a software upgrade. If that router has two supervisor engines for control plane activities, we can upgrade the software without downtime, and maintenance will not be an issue. We don't need a flag day for the upgrade activity.
Although the high availability benefit of the scale-up approach is limited, in this case it obviously helps.
The scale-out approach provides better high availability. The secondary system might be processing part of the load or, in the worst case, it sits idle, ready to take over in case of a primary system failure.
Once we modify, remove or add a component in a system, we don't expect an impact on the running system. As an example, let's examine the scalability of routing protocols. If we have many routers and many links in a single-area OSPF deployment, even a small link flap can trigger all routers to calculate a new topology.
Up to some limit this might be acceptable, but beyond that limit it significantly affects the overall routing domain. For OSPF, the limit is generally defined by the Router LSA size.
Choosing the correct technology is important for scalability. Consider that you need additional ports in your data center aggregation layer switch to support more compute resources. If you have a three-tier architecture, you can of course create an additional access-aggregation POD and connect it to your core. But a two-tier leaf and spine architecture could be considered for the physical design instead, since it not only provides scalability but also better east-west application performance, and it would be a simpler architecture than the POD-based design.
Lastly, scalable systems should also be manageable. If a system starts to become unmanageable as it grows, this inversely affects its scalability. To perform a change, the network shouldn't need flag days or long and frequent maintenance windows caused by an operationally complex environment (OPEX).
This brings us to the next design tool, which is cost.
COST
Cost is generally an afterthought in network design, but most of the time it is a very important constraint in projects. If we break down the components of cost:
OPEX:
OpEx refers to operational expenses such as support, maintenance, labor,
bandwidth and utilities. Creating a complex network design may show off
your technical knowledge but it can also cause unnecessary complexity
making it harder to build, maintain, operate and manage the network.
A well-designed network reduces OpEx through improved network
uptime (which in turn can avoid or reduce penalties related to outages),
higher user productivity, ease of operations, and energy savings. Consider
creating the simplest solution that meets the business requirements.
CAPEX:
CapEx refers to upfront costs such as purchasing equipment or inventory and acquiring intellectual property or real estate. A well-thought-out design provides a longer deployment lifespan, investment protection, and network consolidation and virtualization, and produces hard-to-measure benefits such as business agility, transformation and innovation, thus reducing risk and lowering costs in the long run.
The last metric in the cost constraint is TCO (Total Cost of Ownership). TCO is a better metric than pure CapEx to evaluate network cost, as it considers CapEx plus OpEx. Make your network designs cost-effective in the long run and do more with less by optimizing both CapEx and OpEx.
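The CapEx-plus-OpEx idea can be shown with a toy comparison. A minimal sketch (all cost figures are invented for illustration) of why a design that is cheaper to buy can still lose on TCO over its deployment lifespan:

```python
# Sketch: comparing two designs on TCO (CapEx + OpEx over the lifespan)
# instead of CapEx alone. All figures below are invented assumptions.

def tco(capex, annual_opex, years):
    """Total cost of ownership over the deployment lifespan."""
    return capex + annual_opex * years

# Design A: cheaper to buy, operationally complex (higher yearly OpEx).
# Design B: pricier to buy, simpler to operate.
design_a = tco(capex=100_000, annual_opex=40_000, years=5)
design_b = tco(capex=150_000, annual_opex=20_000, years=5)
print(design_a, design_b)  # Design B wins on TCO despite higher CapEx
```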
There are certainly other network design attributes, such as flexibility, security, modularity and hierarchical design. These design tools, or goals, will be covered throughout the book.
Very briefly, flexibility and modularity can be described as below.
FLEXIBILITY
Flexibility refers to the ability of a network design to adapt to business changes, which can come in a planned or unplanned way. There are a few constants in life: death, taxes and change. Not much we can do about the first two, but we can certainly influence how we adapt to change. A merger, acquisition or divestiture can happen anytime in any business. How will your network react to these rapid changes?
You can make a network design more flexible by making it more modular.
MODULARITY
Modularity means dividing the network by functions or policy boundaries, making it replicable (for example across branches) and thus easier to scale and operate, and enabling business continuity. How do you make a design modular?
1. Choose the physical topology: Some topologies, such as hierarchical or
leaf-and-spine, are more conducive to modules than others (full mesh,
for example).
2. Split functions or geographies: Separate campus, branches, data center
and applications, Internet, network management systems, and security
policy boundaries to make each function easier to expand, upgrade,
enhance or change. Make them small enough to ease replication.
3. Break it into smaller pieces: Create smaller fault domains so that a
failure on a part of the network doesn’t propagate to other parts, by
subdividing the functions as appropriate.
DESIGN CONSIDERATIONS FOR NETWORK MERGERS
AND ACQUISITIONS
Network mergers and acquisitions are processes that can occur in any type of business. As network designers, our job is to identify the business requirements of both existing networks and of the merged network, and to find the best possible technical solutions for the business.
There are many different areas that need to be analyzed carefully. Incorrect business requirement gathering and design analysis will definitely lead to catastrophic failures.
Business and network analysis and technical information gathering are the key steps, and there are many questions that need to be asked, answered and well understood.
Network mergers and acquisitions are also called network integration.
Below are the key points for any type of network merger and acquisition project.
Business analysis and information gathering
The applications of the company, or at least the business-critical applications, should be understood and analyzed very well.
What are the capabilities of these applications and what do they require from the existing network (packet loss, jitter, delay, application traffic flow, security, QoS requirements and so on)? Basically, in this step we analyze the current infrastructure of the companies. The IP addressing scheme, application requirements, physical topology, business growth forecasts, and the security, QoS, multicast, OAM and management infrastructure capabilities should all be gathered.
What type of WAN, LAN and DC infrastructure is each network using? Is any VPN solution deployed on the WAN? Is there a traffic engineering requirement on the WAN or in the DC? Is IPv6 supported in any of the companies?
Is there any single point of failure, and what will be the high availability requirement of the merged network?
What is the convergence time of the network, and what is the required convergence time for any single component failure? (You shouldn't design the network for multiple simultaneous failures.)
As you can see, there are many questions that should be asked and noted during the business analysis. This is the most time-consuming step of any network design, but it is definitely worth doing properly to avoid future problems and to achieve the best network design.
Analyzing the design for network mergers and acquisitions is no different from analyzing the design for a greenfield network. Application and business requirements always come first; technology is second. Alternative technologies can always be found.
Where will the merger take place first in the network?
When two networks merge, they are generally connected through their core network components. If there is any overlap issue, for example in IP addressing, it should be fixed. Traditionally, IP address overlap is fixed via NAT (Network Address Translation).
Physical location selection is very important. As you will see later in this section, some sites can be decommissioned for operational cost savings. After deciding which locations will be used, the companies can start to reach each other by extending their current WAN connections.
• What happens to the routing, IGP, BGP? Is a "ship in the night"
approach suitable, or is redistribution better for the routing protocol merger?
One common routing protocol can run through both networks, or if there are two different IGPs, redistribution can take place. If MPLS is deployed on the networks, any type of Inter-AS VPN solution can be used. In some Inter-AS MPLS solutions redistribution is not required.
Running a common IGP is always better than dealing with redistribution. It is better for management, troubleshooting, convergence, availability and many other design objectives.
Which type of security infrastructure will the merged network support?
What are the existing security policies and parameters of each individual network? You should deploy a common security policy for the merged network. Make the edge of the network as secure as possible; the core of the network should just transport packets.
What are the Quality of Service policies of the companies? Will the final merged network support Quality of Service, or rely on overprovisioned bandwidth everywhere?
The Quality of Service policy of the end-to-end network should be deployed by understanding the applications of each individual company. That's why understanding the applications, which was part of the network analysis task, is crucial. You should follow current best practices for QoS design.
Some businesses don't use QoS, especially in their core network. They generally have their own DWDM infrastructure, so when they need extra capacity they can provision it quickly and start using it.
The reason they don't use QoS is simplicity: they want to keep their core network as simple as possible. This approach is generally seen in the Service Provider business.
Will the merged network have IPv6?
IPv6 is unavoidable. There are many IPv6 business drivers for any type of business. If an IPv6-only design is not possible for the merged company, at least the IPv6 transition mechanisms should be understood very well and considered for the merged network.
Does one of the networks require Multicast? Will the merged network support Multicast?
If multicast is running in either of the companies, the merged network will most probably require and benefit from a multicast deployment as well. PIM (Protocol Independent Multicast) and current multicast best practices should be understood and deployed based on the company requirements. Some applications of the company may benefit from a particular multicast deployment model such as PIM ASM (Any Source Multicast), PIM SSM (Source Specific Multicast) or PIM Bidir (Bidirectional PIM).
What is the new capacity requirement of the merged network?
When two networks merge, the overall capacity requirements of the edge and core networks generally change. Understanding network capacity planning is key, and network designers should understand the available methods, tools and best practices for backbone and overall network capacity planning.
How will the merged network be monitored? Are the existing network management tools capable of supporting all the technologies and protocols of both networks?
The two companies may have different monitoring tools, and the application support of those tools may differ as well. Monitoring and management tools should be considered before the merger, because the tools should be able to support all the applications, protocols and technologies of the merged network.
When you divest part of the network, where will the datacenters be? Can you decommission any datacenter or POP location for cost optimization?
Some of the companies' locations may overlap, and some POP locations, datacenters, or even headquarters can be decommissioned to reduce operational expenses.
The physical topology of the companies should be well understood, and if there is a cost advantage in choosing a particular location, it definitely needs to be considered.
This is definitely not the entire list for network mergers and acquisitions, but you should at least start your design with these questions, in real-life designs as well as in design certification exams. In the CCDE exam, questions will mostly be based on the considerations above.
DESIGN BEST PRACTICES FOR HIGH AVAILABILITY,
SCALABILITY, CONVERGENCE AND OTHER DESIGN
TOOLS
The section below lists design recommendations and practical knowledge for the network design tools.
These are protocol independent. For protocols such as OSPF, IS-IS, EIGRP, BGP, MPLS, Multicast, QoS and IPv6, design recommendations and best practices will be provided in the related chapters.
HIGH AVAILABILITY:
The availability of a system is mainly measured with two parameters: Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR). MTBF is the average time between failures of a system. MTTR is the average time required to repair a failed component (a link, node or device in networking terms).
Too much redundancy increases the MTTR of the system (router, switch or overall network) and thus inversely affects availability.
Most failures are caused by human error; the estimated range is
between 70 and 80%. How can so many people be incompetent? Actually,
they are not! It is really a design problem. In a hub-and-spoke deployment,
for example, if adding spoke sites causes an entire network meltdown, this
is a design problem and not the operator's mistake. You should increase
the hub capacity.
Due to BGP's path vector behavior, a BGP route reflector selects and
advertises only one best path to all its route reflector clients, but some
applications such as BGP PIC or BGP Multipath require more than one path.
Not every network needs 5x9 or 6x9 availability. Before deciding upon the
availability level of a network design, understand the application
requirements and the place in the network where the design will be applied.
For example, the availability requirements of a company's centralized
datacenter will be very different from those of one of its retail stores.
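As a quick sketch of how MTBF, MTTR and the "nines" relate: steady-state availability is conventionally estimated as MTBF / (MTBF + MTTR), a standard formula (not spelled out in the text above), and each availability level maps to an allowed downtime per year. The MTBF/MTTR numbers below are invented:

```python
# Sketch: availability from MTBF/MTTR, and the yearly downtime implied by
# an availability level such as "five nines". Example inputs are invented.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def downtime_minutes_per_year(avail):
    """Allowed downtime per year for a given availability level."""
    return (1 - avail) * MINUTES_PER_YEAR

a = availability(mtbf_hours=8000, mttr_hours=4)
print(round(a, 4))                                   # ~0.9995
print(round(downtime_minutes_per_year(0.99999), 2))  # five nines: ~5.26 min/yr
```

Note how the formula reflects the point above: anything that inflates MTTR (including overly complex redundancy) pulls availability down.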
CONVERGENCE:
Don’t use Routing Protocol hellos for the Layer 3 routing failure detection,
at least don’t tune them aggressively, instead leave with the default. Use
BFD whenever possible for failure detection in Layer 3.
BFD supports all routing protocols except RIP. It supports LDP and MPLS
Traffic Engineering as well.
If you can detect the failure in Layer 1, then don’t enable BFD
(Bidirectional Forwarding Detection) as well.
Polling-based mechanisms are always slower than event-driven
mechanisms. For example, Layer 1 loss of signal will be detected much
faster than a BFD hello timeout; Automatic Protection Switching (APS) on
SDH links is always faster than BFD hellos for failure detection.
Distance vector protocol convergence time is comparable to that of
link-state routing protocols. If there is a feasible successor in the EIGRP
topology table, EIGRP by default converges faster than the other routing
protocols.
BGP doesn’t have to converge slowly. Understanding the data plane and
control plane convergence difference is important for network designers.
BGP control plane and data plane convergence is explained in detail in the
BGP chapter.
BGP route reflector is not always the best solution. It hides the available
alternate next-hops, slows down the network convergence, and requires
route reflector engineering thus requires stuff experience.
SCALABILITY:
In modern platforms there are both software and hardware forwarding
information tables. Software forwarding is a very resource-intensive
task; utilize hardware forwarding if you need better performance.
Multi-area OSPF or multi-level IS-IS design is not always necessary;
you should know what business problem you are trying to solve.
Resiliency? Opex? Security? Reliability? Scalability? In general,
scalability is considered the reason to deploy multi-area OSPF or
multi-level IS-IS. These two topics will be covered in great detail in the
related chapters.
Try to find a way to deploy any technology, feature or protocol with the
least amount of configuration possible. If you can achieve the same result
with fewer configuration steps, prefer that one.
LOAD BALANCING:
Load balancing and load sharing are not the same thing. Load sharing
is the right term for routers and switches; load balancing requires more
intelligence, such as a load balancer. If the downstream device is busy,
routers and switches cannot take this information into account. But load
balancers can!
Load balancing requires intelligence features that the device needs to
support, such as destination health checks, and consideration of
destination device resource utilization, the number of connections, and
so on. Load balancers do this.
Routers perform load sharing. Routers only take the routing metric into
account when sending packets to the destination. Traffic sharing can be
over equal or unequal cost paths.
OSPF and IS-IS can do unequal cost load sharing only with the help of
MPLS Traffic Engineering. By default they don't support unequal cost
multipath routing. EIGRP, by contrast, can route packets over unequal
cost paths by default.
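The idea behind unequal cost load sharing can be sketched as splitting traffic in inverse proportion to path metric, which is the spirit of EIGRP's variance-based behavior (the metric values below are made up for illustration):

```python
# Sketch: per-path traffic shares inversely proportional to path metric,
# the idea behind unequal cost load sharing. Metric values are invented.

def share_ratios(path_metrics):
    """Return per-path traffic shares, inversely proportional to metric."""
    weights = {path: 1.0 / metric for path, metric in path_metrics.items()}
    total = sum(weights.values())
    return {path: w / total for path, w in weights.items()}

# Two paths: the one with half the metric carries twice the traffic.
ratios = share_ratios({"path_A": 100, "path_B": 200})
print(ratios)  # path_A ~0.667, path_B ~0.333
```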
REDISTRIBUTION:
You may need to redistribute between routing protocols; for example
for a partner network, or from BGP into the IGP for default route
advertisement. Redistribution should be used in conjunction with filtering
mechanisms such as route tags.
Keep in mind that these mechanisms increase the overall complexity of
the network. Also be aware of routing loops during redistribution.
Two-way redistribution is where routing loops are most likely to occur,
and the most common loop prevention in this case is to use route tags.
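The route-tag loop prevention idea can be sketched as follows (the tag value and route representation are illustrative, not a vendor configuration): tag every route you redistribute into the other protocol, and refuse to redistribute back any route that already carries your own tag.

```python
# Sketch of route-tag loop prevention at a two-way redistribution point.
# The tag value and (prefix, tag) route representation are illustrative.

MY_TAG = 100  # tag applied to everything redistributed at this point

def redistribute(routes, my_tag=MY_TAG):
    """Tag outgoing routes; drop routes already carrying our own tag,
    since they originated here and would otherwise loop back."""
    accepted = []
    for prefix, tag in routes:
        if tag == my_tag:
            continue  # our own route coming back -- deny to break the loop
        accepted.append((prefix, my_tag))  # tag it on the way out
    return accepted

# The looped-back prefix is filtered; the fresh one is tagged and passed.
print(redistribute([("10.0.0.0/8", None), ("192.168.1.0/24", 100)]))
```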
Redistribution between routing protocols does not happen directly; routes
are installed in RIB and pull from the RIB to other protocol. So route
should be in the RIB to be redistributed. A classic example of this is BGP.
If the network is not in the routing table of the router, which is RIB, it
cannot be taken to the BGP RIB. This is why those routes cannot be sent
to another BGP neighbor.
If avoidable, don’t use redistribution. Managing redistribution can be very
complex.
OPTIMAL ROUTING:
Overlay protocols should follow the underlay protocols to avoid suboptimal
routing and traffic blackholing; in other words, they should be
synchronized. For example, FHRPs (HSRP, VRRP, GLBP) should be
synchronized with STP to avoid suboptimal forwarding. IGP/BGP and
IGP/LDP synchronization are other examples and will be explained on
topologies later in the book.
Control plane state is the aggregate amount of information carried by the
control plane through the network in order to produce the forwarding table
at each device. Each piece of additional information added to the control
plane such as more specific reachability information, policy information,
security configuration, or more precise topology information adds to the
complexity of the control plane.
This added complexity, in turn, adds to the burden of monitoring,
understanding, troubleshooting and managing the network. On the other
hand, removing control plane state almost always results in decreased
optimality in the forwarding and handling of packets travelling through
the network.
We don’t configure the networks; we configure the networking devices
(Routers, Switches etc.) Understanding the overall impact of configuring
one router on the network holistically is very important. We try to
configure the many routers, switches etc. and wait the result to be a
coherent. But at the end we face all kinds of loops, micro loops, broadcast
storms, routing churns, and policy violations.
It is a good idea to create small failure domains at Layer 2 and Layer 3,
but you must be aware of suboptimal routing and black holes. There is an
important design tradeoff: whenever summarization is done, the chance of
suboptimal routing increases!
Summarization is done at the aggregation layer in a three-tier hierarchy.
Doing it at the aggregation layer simplifies the core, since there will be
fewer routing table entries in the core and access network changes won't
impact the network core. The core should remain as simple as possible.
NETWORK TOPOLOGIES:
Intelligence should be at the edge of the networks and the network core
should be as simple as possible. The responsibility of the network core is
fast packet forwarding, not the traffic aggregation, policy insertion, or user
termination.
Try to create triangle physical topologies instead of squares. In the case
of a link failure, triangle topologies converge faster than squares.
Ring topology is the most difficult one for all routing protocols from the
viewpoint of convergence and optimality. Simply adding some links and
creating a partial-mesh topology instead of a ring provides more optimal
paths, better resource usage, and faster convergence in case of link or
node failure.
Which one is salt and which one is pepper? It must be simple to understand!
Your design shouldn't be confusing. In the picture above, can you tell
which one is salt and which one is pepper without testing? When the
complexity of your network increases, you can no longer operate it
without testing and very careful planning.
Features can be intended for robustness but instead create fragility. The
impact may not be seen immediately, but it can be huge. In design this is
known as the Butterfly Effect:
"A butterfly flapping its wings in South America can affect the weather
in Central Park."
Last but not least:
Question 1:
In the figure below, two routers are connected through two links, and
OSPF is running over the links.
Which of the statements below is true for this figure?
Answer 1:
Adding more links doesn't provide better security. Resiliency depends on
redundancy, convergence and reliable packet delivery. More links don't
necessarily provide better resiliency; as a general rule of thumb, two links
are best for resiliency.
We cannot know whether IS-IS would be better, since no other
requirement is given.
Option A is definitely correct. More links increase the routing table size,
since OSPF runs on the individual links and more links mean more routing
table entries.
If there were an EtherChannel between the routers and OSPF ran on top
of that bundle, adding more links wouldn't increase the routing table size.
That's why the answer to this question is A.
Question 2:
Which of the technologies below provide fast failure detection? (Choose two)
A. BFD
B. Routing fast hellos
C. Loopguard
D. SPF Timers
E. BGP Scanner time
Answer 2:
Loop Guard, SPF timers and the BGP scanner timer are not used for fast
failure detection. The BGP scanner interval, for example, is 60 seconds by
default, and reducing it can drive CPU utilization to 100%. That's why the
better and newer approach, Next-Hop Tracking, is used in BGP, as will be
explained in the BGP chapter.
Routing protocol hellos can be tuned to provide fast failure detection,
and the very purpose of BFD is fast failure detection.
Thus the correct answers to this question are A and B.
Question 3:
Which of the protocols below support BFD for fast failure detection?
(Choose all that apply)
A. Static Routing
B. OSPF
C. IS-IS
D. EIGRP
E. BGP
F. RIP
Answer 3:
All the routing protocols above except RIP support BFD, as mentioned
in this chapter. They can register with the BFD process for fast failure
detection; in case of failure, BFD informs these protocols so they can tear
down the routing session.
RIPv2, on the other hand, does support BFD on some platforms.
Question 4:
What are the benefits of a modular network design? (Choose two)
A. Each module can be designed independently from each other
B. Each module can be managed by a different team in the
organization
C. Each module can have a separate routing protocol
D. Each module can have different security policy
Answer 4:
If the design supports modularity, each module can be designed
independently. In an access, aggregation, core structure, for example, the
access network can be hub and spoke, the distribution can be full mesh and
the core network can be partial mesh.
Also, commonly in service provider networks, the access and core teams
are separate business units, and modularity provides this opportunity. In
large enterprises, different teams can manage different geographical areas
of a network that has been designed with modularity in mind.
Modularity is not done to have different routing protocols, and
companies should deploy common security policies across all domains.
That's why the correct answers to this question are A and B.
Question 4:
Which of the statements below are true for network design? (Choose two)
A. Predictability increases security
B. Every network needs 5x9 or 6x9 high availability
C. Using more than one routing protocol in the network increases
availability
D. Modular network design reduces deployment time
Answer 4:
As explained in this chapter, modular network design reduces
deployment time, and predictability increases security. Predictable
networks also reduce troubleshooting time and thus increase availability.
Not every network needs 5x9 or 6x9 high availability. And using more
than one routing protocol in the network, unless there is a mandatory
reason such as a partner network requirement, is not good design.
That's why the answers to this question are A and D.
Question 5:
If there is two-way redistribution between routing protocols, how can a
routing loop be avoided?
A. Deploying Spanning Tree
B. Deploying Fast Reroute
C. Implementing Route tags
D. Only one way redistribution is enough
Answer 5:
As explained in the redistribution part of this chapter, route tags are the
common method to prevent routing loops when redistribution is done at
multiple locations between the protocols.
That's why the answer to this question is C.
Question 6:
Which of the statements below are true for network design? (Choose three)
A. Using a triangle topology instead of a square reduces convergence
time, which is why it is recommended
B. Full mesh topology is the most expensive topology to create
C. Longer and more complex configurations are always better, so
people can see how good a network designer you are
D. Suboptimal routing is always bad, so avoid route summarization
whenever you can since it can create suboptimal routing
E. Network complexity can be reduced by utilizing SDN
technologies
Answer 6:
Network complexity can be reduced by utilizing SDN technologies, as
explained in this chapter: they help shift the configuration task from the
human to the software. That's why Option E is one of the correct answers.
Route summarization can create suboptimal routing, but suboptimal
routing is not always bad. For some types of traffic in the network, optimal
routing may not be required at all, and just because we might get
suboptimal routing, we shouldn't avoid summarization. That's why Option
D is incorrect.
It should be obvious that Option C doesn't make sense.
Options A and B are also correct. Triangle topologies reduce
convergence time, and full mesh topologies are the most expensive to build.
The correct answers to this question are A, B and E.
Question 7:
What is the key benefit of hierarchical network design?
A. Less Broadcast traffic
B. Increased flexibility and modularity
C. Increased security
D. Increased availability
Answer 7:
A hierarchical design is not necessarily redundant and highly available,
and it doesn't bring additional security by itself; its key benefit is
flexibility and modularity, as explained earlier in this chapter.
That's why the answer to this question is B.
Question 8:
If route summarization is done, which of the statements below are valid
for link-state protocols? (Choose two)
A. Convergence will be slower
B. Sub optimal routing may occur
C. Traffic blackholing may occur
D. Routing table size grows
Answer 8:
As was explained in the chapter, when route summarization is done the routing table size gets smaller, which makes convergence faster. Summarization can create sub optimal routing, and traffic might be blackholed in some failure scenarios.
That's why the answers to this question are B and C.
Question 9:
What would be the impact of doing summarization at the aggregation layer in a three-tier hierarchy? (Choose Two)
A. Core network can be simplified; it doesn't have to keep all Access network routes
B. If you have a summary in the aggregation layer, the core can be collapsed with the aggregation layer
C. Access network changes don't affect the core network
D. Aggregation is the user termination point and summarization shouldn't be done at the aggregation layer
Answer 9:
In the three-layer hierarchy, the aggregation layer is the natural summarization point. When summarization is done at the aggregation layer, the core layer is simplified and access network changes don't affect the core layer.
Collapsing the core is not a result of summarization, since the main reason for using a core layer is the physical scaling requirement. With summarization, the physical requirements don't go away.
The aggregation layer is not the user termination point. User termination is the access layer's responsibility; thus Option D is incorrect.
The answers to this question are A and C.
Question 10:
Which routing protocol supports unequal cost multi path routing?
A. OSPF
B. IS-IS
C. EIGRP
D. RIPv2
Answer 10:
In the above question, all of the protocols are dynamic routing protocols, and among them only EIGRP supports unequal cost multipath routing. As was explained in the chapter, OSPF and IS-IS can support unequal cost multipath only with MPLS Traffic Engineering tunnels.
That's why the correct answer to this question is C.
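EIGRP's unequal cost multipath is enabled with the variance command. A minimal IOS-style sketch, with a hypothetical AS number and network statement; exact syntax varies by platform and software version:

```
router eigrp 100
 network 10.0.0.0 0.255.255.255
 ! Install alternate paths whose metric is up to 2x the best path metric
 variance 2
 ! Distribute traffic in proportion to the path metrics
 traffic-share balanced
```

Only feasible successors (loop-free alternates) within the variance multiplier are installed, so unequal cost load sharing never creates a routing loop.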
VIDEOS
http://ripe61.ripe.net/archives/video/19/
ARTICLES
http://orhanergun.net/2015/01/route-redistribution-best-practices/
https://tools.ietf.org/html/draft-ietf-ospf-omp-02
https://www.ietf.org/rfc/rfc3439.txt
http://orhanergun.net/2015/01/load-balancing-vs-load-sharing/
CHAPTER 3
OSPF
If the Enterprise-level requirement is to have a standards-based protocol that supports MPLS Traffic Engineering, then the only choice is OSPF. IS-IS can support MPLS Traffic Engineering as well, and IS-IS is also a standards-based protocol, but it is not an Enterprise-level protocol. This matters especially because most Enterprise networks may require IPSEC, and IS-IS cannot run over IPSEC, since IS-IS doesn't run over IP at all. Also, IS-IS is not widely known by Enterprise network engineers.
OSPF as a link-state protocol shares many similarities with IS-IS; however, OSPF can be used with IPSEC while IS-IS cannot, which makes IS-IS unsuitable for an Enterprise environment.
In this chapter, OSPF theory, design best practices, and case studies will be
covered.
OSPF THEORY
As you can see from the above picture, OSPF is a link-state routing protocol. But why is OSPF link state, and what is link state routing?
In the link state protocols, each router advertises the state of its link to
every other router in the network.
Router D determines that it is connected to 192.168.0.0/24 with metric 10, connected to Router B with metric 10, and connected to Router C with metric 10 as well. In turn, Router B and Router C advertise this information to Router A.
In OSPF (and similarly in IS-IS), all the connections and their associated metrics are known by all the routers. In the above topology, Router A knows that the 192.168.0.0/24 network is connected to Router D.
In distance vector protocols (EIGRP, RIP), Router A would only know that the 192.168.0.0/24 network is reachable through Router B or Router C. Router A wouldn't know that the network is connected to Router D. This is the defining behavior of distance vector protocols.
This information is called topology information. Since OSPF and IS-IS are link state routing protocols, in their networks every router knows the topology information (who is connected to whom, and how).
OSPF LINK-STATE ADVERTISEMENT
The above table lists all the OSPF LSAs. Type 6 and Type 8 were never implemented. Types 9 through 11 are used in specific applications such as MPLS Traffic Engineering, Segment Routing, OSPF Graceful Restart and so on.
6 CRITICAL LSAS FOR OSPF DESIGN
OSPF ROUTER LSA
Also called the OSPF Type 1 LSA.
Every router within a single area generates a Router LSA to advertise its link and prefix information.
Within an area, every router has to have the same routing information; otherwise a routing loop might occur.
An important network design best practice is that the OSPF Router LSA should not exceed the interface MTU.
If the Router LSA size exceeds the interface MTU value, routers fragment and reassemble the fragmented packets. Fragmentation is always bad, especially if it is done by the routers. In IPv6, routers don't fragment or reassemble packets; the hosts handle fragmentation.
OSPF NETWORK LSA
Also called the OSPF Type 2 LSA.
The Type 2 LSA is generated by the DR (Designated Router) to advertise the routers connected to a multi-access network.
OSPF uses a DR (Designated Router) and BDR (Backup Designated Router) on multi-access networks such as Ethernet. The DR and BDR reduce the amount of flooding on a multi-access network and thus help scalability. But if OSPF is enabled on a point-to-point link, it is best practice to set the OSPF network type to 'Point to Point'. Otherwise an unnecessary Type 2 LSA is created by the DR.
If point-to-point OSPF network type is set there is no DR or BDR.
DR and BDR election takes time. That's why, from a design best practice point of view, if the requirement is fast convergence, it is good to change the OSPF network type to point to point, even if the physical connection is Ethernet. (When two routers are connected back to back via Ethernet, even though there are only two routers, there will still be a DR/BDR election.)
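This best practice takes a single interface command. A minimal IOS-style sketch; the interface name and addressing are hypothetical:

```
interface GigabitEthernet0/0
 ip address 10.0.12.1 255.255.255.252
 ip ospf 1 area 0
 ! Suppress DR/BDR election and the Type 2 LSA on this back-to-back Ethernet link
 ip ospf network point-to-point
```

With the point-to-point network type, the adjacency forms immediately without waiting for a DR/BDR election.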
OSPF SUMMARY LSA
Also called the OSPF Type 3 LSA.
Generated by the OSPF ABR (Area Border Router) in a multi-area OSPF environment.
OSPF ABR doesn’t send topology information between the OSPF areas.
Instability in one area doesn’t affect the other areas.
The OSPF Type 3 LSA is generated by the OSPF ABR. An important design question for the OSPF Summary LSA is how many ABRs should sit between two areas. The answer is two. One would be bad for high availability, and more than two ABRs create unnecessary complexity, since there would be 3x the amount of Summary LSAs for each prefix.
The OSPF ABR has a heavy workload in a Multi Area OSPF design. On a multi-access network, the OSPF DR also has more work than the DROther routers. That's why it is good practice to place the OSPF DR and OSPF ABR functions on different routers whenever possible.
OSPF ASBR SUMMARY LSA
Also called the OSPF Type 4 LSA.
In order to reach an ASBR (Autonomous System Boundary Router) from a different area, the ABR creates a Type 4 LSA.
It is important to understand that the ASBR doesn't generate the Type 4 LSA. The ASBR generates Type 5 LSAs for the external prefixes, and Type 1 LSAs for its own reachability information. When an ABR receives the Type 1 LSA advertisement of an ASBR, it generates a Type 4 LSA and floods it to the other areas.
If there is no Type 5 LSA, the Type 4 LSA is not generated.
There are some special area types, such as Stub and NSSA areas, which don't allow Type 5 LSAs; in those areas there is no Type 4 LSA either.
OSPF EXTERNAL LSA
Also called the OSPF Type 5 LSA.
The External LSA is used to advertise external reachability information.
The External LSA is flooded to every router in the domain. ABRs don't regenerate it; they pass that information on as is.
Routes might be redistributed from a different routing domain, such as BGP or EIGRP, for many reasons.
In that case, a Type 5 OSPF External LSA is created for those routes by the router that does the redistribution. That router is called an ASBR (Autonomous System Boundary Router).
That's why the Area 20 routers cannot learn the external subnets that come from the EIGRP domain.
Instead, those networks are reached via a default route. The default route is sent into the OSPF stub area as an OSPF Type 3 LSA, which is the Inter-Area OSPF LSA.
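A hedged sketch of how an area such as Area 20 could be configured as a stub area; the process ID is hypothetical and syntax varies by platform. The stub flag must match on every router in the area, and the ABR then originates the default route automatically as a Type 3 LSA:

```
router ospf 1
 ! On every router in Area 20, including the ABR
 area 20 stub
 ! On the ABR only, to additionally filter Type 3 LSAs (Totally Stub):
 ! area 20 stub no-summary
```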
In the previous diagram, there are two ABRs in Area 10. For redundancy and optimal traffic flow, two is always enough. More ABRs would create more Type 3 LSA replications within the backbone and non-backbone areas.
In large-scale OSPF design, the number of ABRs has a huge impact on the number of prefixes. Thus, having two ABRs is good for redundancy at the critical sites.
For example, some remote offices or POP locations may not be as critical as others, and having only one ABR can be tolerated by the company. In that case, those specific locations may have only one ABR.
Keep in mind that two is company, three is crowded in design.
HOW MANY OSPF AREAS ARE SUITABLE PER OSPF ABR?
More areas per ABR might create a resource problem on the ABR, and many more Type 3 LSAs will be generated by it. Also, when a failure happens, the ABR slows down convergence (similar to a BGP Route Reflector; this will be explained in the BGP chapter).
Thus, the best practice is to place the maximum number of routers in a given area without creating fragmentation issues, and to have two ABRs per OSPF area. If you have 100 sites, you don't want to place each site in a different area. One or two OSPF areas are almost always enough in today's networks.
BEST PRACTICES ON OSPF AREAS:
Topology information is not sent between different OSPF areas; this reduces the flooding domain and allows large scale OSPF deployments. If you have 100s of routers in your network, you can consider splitting the OSPF domain into multiple OSPF areas. But there are other considerations for Multi Area design, which will be explained in this chapter.
Stub, Totally Stub, NSSA and Totally NSSA Areas can create sub optimal
routing in the network.
OSPF Areas are used for scalability. If you don't have a valid reason, such as 100s of routers or resource problems on the routers, don't use multiple areas.
OSPF Multi Area design just increases the complexity.
Two is company, three is crowded in design. Having two OSPF ABRs provides high availability, but three ABRs is not a good idea.
Creating a separate OSPF area for every router is very bad, and there is no use case for that. You should monitor the routers' resources carefully and place as many routers as you can in one OSPF area.
Not every router has a powerful CPU and memory, so you can split up the routers based on their resource availability. Low-end devices can be placed in a separate OSPF area, and that area's type can be changed to Stub, Totally Stub, NSSA or Totally NSSA.
Always look for the summarization opportunity, but know that summarization can create sub optimal routing. (OSPF summarization and sub optimal routing will be explained in this chapter.)
A good IP addressing plan is important for OSPF Multi Area design. It allows OSPF summarization of reachability information, and thus faster convergence and a smaller routing table.
Having a smaller routing table also makes troubleshooting easier.
The OSPF NSSA area is in general used at the Internet edge of the network, since the Internet edge routers don't need to have all the OSPF LSAs, yet redistribution of selected BGP prefixes is still common there.
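OSPF summarization is configured on the ABR for internal routes, or on the ASBR for redistributed routes. A hedged IOS-style sketch with hypothetical prefixes and process ID:

```
router ospf 1
 ! On the ABR: advertise one Type 3 LSA instead of every Area 10 prefix
 area 10 range 10.10.0.0 255.255.0.0
 ! On the ASBR: summarize redistributed (Type 5) external prefixes
 summary-address 172.16.0.0 255.255.0.0
```

Note that OSPF has no arbitrary summarization point; it can only happen at these area and domain boundaries.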
OSPF SINGLE VS. MULTI AREA DESIGN COMPARISON
Below table summarizes the similarities and the differences of these two
OSPF design models in great detail. Network designers should know the pros
and cons of the technologies, protocol alternatives and their capabilities from
the design point of view.
There are some new LSA types in OSPFv3. These LSAs bring scalability to OSPFv3.
OSPFv3 is actually very different from the LSA and network design point of view, although the configurations of the two protocols are similar.
Below table summarizes the similarities and the differences of these two
protocols in detail. Network designers should know the pros and cons of the
technologies, protocol alternatives and their capabilities from the design point
of view.
Scalability
  OSPFv2: Good
  OSPFv3: Better, since the Router and Network LSAs don't contain prefix information, only topology information
Working on Full Mesh
  OSPFv2: Works well with mesh group
  OSPFv3: Works well with mesh group
Working on Hub and Spoke
  OSPFv2: Works poorly, requires a lot of tuning
  OSPFv3: Doesn't work well, requires tuning
Fast Reroute Support
  OSPFv2: YES, IP FRR
  OSPFv3: YES, IP FRR, but limited platform support
Suitable on WAN
  OSPFv2: YES
  OSPFv3: YES
Suitable on Datacenter
  OSPFv2: DCs are full mesh, therefore not so well
  OSPFv3: DCs are full mesh, not so well
Suitable as Enterprise IGP
  OSPFv2: YES
  OSPFv3: YES
Suitable as Service Provider IGP
  OSPFv2: YES
  OSPFv3: Definitely
Complexity
  OSPFv2: Easy
  OSPFv3: Moderate
Resource requirement
  OSPFv2: Full SPF runs on prefix or topology change, so it is worse than OSPFv3
  OSPFv3: If the topology doesn't change, full SPF is not needed; prefix information is carried in a new LSA, not in the Router LSA any longer
IPv6 Support
  OSPFv2: NO
  OSPFv3: YES
IPv4 Support
  OSPFv2: YES
  OSPFv3: YES
Default Convergence
  OSPFv2: Slow
  OSPFv3: Even slower, if multiple address families are used
Troubleshooting
  OSPFv2: Easy
  OSPFv3: Harder, requires understanding of IPv6 addressing; after that, it is the same packet types, LSA, LSU, DBD
Routing Loop Prevention
  OSPFv2: Inter-area prefixes should be received from the ABR. All non-backbone areas should be connected to the backbone area
  OSPFv3: Same as OSPFv2. Inter-area prefixes should be received from the ABR, and all non-backbone areas should be connected to the backbone area
1. Failure detection
Layer 1 Failure detection mechanisms:
Carrier delay
Debounce Timer
Sonet/SDH APS timers
Layer 3 Failure detection mechanisms:
Protocol timers (Hello/Dead)
BFD (Bidirectional Forwarding Detection)
For failure detection, the best practice is to always use a physical down detection mechanism first. Even BFD cannot detect a failure faster than a physical failure detection mechanism.
This is because BFD is a polling-based detection mechanism whose messages are sent and received periodically, while physical layer detection is event driven, and therefore always faster than BFD and protocol hellos.
If physical layer detection mechanisms cannot be used (maybe because there is a transport element in the path), then BFD should be used instead of tuning protocol hello timers aggressively. A common example is two routers connected through an Ethernet switch; the best method there is to use BFD.
Compared to protocol hello packets, BFD packets are much lighter, and thus consume fewer resources and less bandwidth.
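A hedged IOS-style sketch of running OSPF over BFD between two routers connected through a switch; the interface name is hypothetical and the timers are illustrative, not recommendations:

```
interface GigabitEthernet0/1
 ! 50 ms transmit/receive interval; declare failure after 3 missed packets
 bfd interval 50 min_rx 50 multiplier 3
!
router ospf 1
 ! Register all OSPF-enabled interfaces with BFD
 bfd all-interfaces
```

With these values, a failure behind the switch is detected in roughly 150 ms, far faster than any safe OSPF hello/dead timer tuning.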
2. Failure propagation
Propagation of the failure throughout the network.
Here the LSA throttling timers come into play. You can tune LSA throttling for faster information propagation; it can also be used to slow down information processing. The LSA pacing timers can also be tuned to send updates faster.
3. New information process
Processing of a newly arrived LSA to find the next best path. The SPF throttling timers can be tuned for faster processing of new information and thus faster convergence.
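The LSA and SPF throttle timers from steps 2 and 3 are tuned together. A hedged IOS-style sketch; the millisecond values are illustrative and the exact command syntax varies by platform and software version:

```
router ospf 1
 ! Initial SPF delay 50 ms, hold time 200 ms, maximum wait 5000 ms
 timers throttle spf 50 200 5000
 ! Same exponential backoff logic applied to LSA origination
 timers throttle lsa all 50 200 5000
 ! Minimum interval (ms) between accepting copies of the same LSA
 timers lsa arrival 100
```

The exponential backoff means the first event is processed quickly, while a flapping link is progressively dampened, protecting stability.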
4. Update new route into RIB/FIB
For fast convergence, these four steps may need to be tuned. Although the RIB/FIB update is hardware dependent, the network operator can configure all the other steps.
One thing always needs to be kept in mind: fast convergence and fast reroute can affect network stability.
Unlike fast convergence, with fast reroute the routes are precomputed and preprogrammed into the router's RIB/FIB.
An alternate path is found in advance, if possible, and pre-installed into the RIB/FIB. As soon as a local failure is detected, the PLR (Point of Local Repair) switches the routes to use the alternate path. This preserves the traffic while the normal convergence process of failure propagation and SPF recomputation occurs. Fast reroute mechanisms and the comparison charts of common fast reroute mechanisms will be explained in the MPLS Traffic Engineering section of the MPLS chapter.
In the above picture, A is the Hub router; B, C and D are the spoke routers. In Hub and Spoke topologies, the Hub router should be the OSPF DR; otherwise flooding fails. In the above topology, if any of the spokes considers itself the DR and the Hub also believes that spoke is the DR (because of a higher DR priority), the remote sites cannot reach each other.
Thus, the best practice in a Hub and Spoke network is to configure the Hub router as the DR and set the priority to '0' on all the spoke routers. With priority 0, the spoke routers don't even participate in the DR/BDR election.
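The hub-and-spoke best practice above can be sketched as follows; the interface names are hypothetical:

```
! Hub router
interface GigabitEthernet0/0
 ! Highest priority, so the hub always wins the DR election
 ip ospf priority 255
!
! Every spoke router
interface GigabitEthernet0/0
 ! Priority 0: spokes never participate in the DR/BDR election
 ip ospf priority 0
```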
In a large scale Hub and Spoke deployment, another design recommendation is to place the spoke sites in Stub, Totally Stub, NSSA or Totally NSSA areas, if optimal routing is not a concern for the spoke sites.
If redistribution is required, then an NSSA or Totally NSSA area should be chosen for the spoke sites.
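A hedged sketch of the NSSA option for a spoke area; the area number and process ID are hypothetical:

```
router ospf 1
 ! On every router in the area; redistributed routes enter as Type 7 LSAs
 area 30 nssa
 ! Totally NSSA variant, configured on the ABR only:
 ! area 30 nssa no-summary
```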
Between Router A and Router B there are 1800 different paths: (5x6) x (5x6) x 2. If we put all of them in the same area, there would be flooding, convergence, resource utilization, and troubleshooting problems. If we use Router G or Router H as an ABR, we will have only 32 paths at most, (5x6) + 2, between Routers A and B. This greatly reduces the load on the resources, reduces the overall complexity, and makes troubleshooting easier.
Always put an ABR where you can separate the complex topologies from
each other.
OSPF: CASE STUDY – OSPF MULTI AREA ADJACENCY
Question:
What is the path from Router C to 192.168.10.0/24 and from Router D to
192.168.0.0/24 networks? Is there a problem with the path? Why? What is a
possible solution?
If Link 1 is in Area 0, Router C will choose the path through E, F, and D to reach 192.168.10.0/24 rather than Link 1. This is because OSPF always prefers intra-area routes over inter-area routes.
If Link 1 is placed in Area 10, Router D will choose the path through B, A, and C to reach 192.168.0.0/24 for the same reason. This is suboptimal.
Placing the link into one area and creating a virtual link is a temporary solution, and a new OSPF adjacency would be required for each additional non-backbone area.
The best solution: RFC 5185 - OSPF Multi-Area Adjacency. With RFC 5185, more than one OSPF adjacency, in multiple areas, can be formed over a single link.
Below is a sample configuration from the Cisco device which supports
RFC 5185:
rtr-C(config)# interface Ethernet 0/0
rtr-C(config-if)# ip address 192.168.12.1 255.255.255.0
rtr-C(config-if)# ip ospf 1 area 0
rtr-C(config-if)# ip ospf network point-to-point
rtr-C(config-if)# ip ospf multi-area 2
Question 1:
How many routers should be placed in one OSPF area?
A. 50
B. 100
C. 250
D. Less than 50
E. It depends
Answer 1:
As explained in the OSPF chapter, there is no numeric answer to this question. It depends on how many links each router has, the stability of the links, the hardware resources (such as CPU and memory) of the routers, and the physical topology of the network.
For example, in a full mesh topology every router is connected to every other router, and the number of links is much higher compared to ring or partial mesh topologies.
Thus, in one OSPF network you may place 50 routers in one OSPF area, but another OSPF network can have 100s of routers in one area.
That's why the answer to this question is E.
Question 2:
Why are many different types of LSAs used in OSPF? (Choose all that apply)
A. Provides Scalability
B. Allow Multi-Area OSPF design
C. Provides fast convergence
D. Provides High Availability
E. Better Traffic Engineering
Answer 2:
This question asks the reason for having multiple different types of OSPF LSAs. As you have seen in the OSPF chapter, there are 11 different types of OSPF LSAs.
Although there are other reasons to use different OSPF LSAs, the two important ones are scalability and Multi-Area design. LSAs are not related to high availability or fast convergence. Although MPLS Traffic Engineering can use OSPF Opaque LSAs for distributed CSPF calculation, CSPF is not mandatory, and many networks that run MPLS Traffic Engineering use an offline path calculation tool such as Cariden MATE.
That's why the correct answers to this question are A and B.
Question 3:
What does topology information mean in OSPF?
A. IP addresses of the directly connected interfaces
B. IP addresses of the loopback interfaces of all the routers
C. IP reachability information and the metric of all the physical and logical interfaces
D. A graph of the OSPF network, built by advertising connection information such as which router is connected to which one and the metric of the connections
Answer 3:
Two types of information are provided in link state protocols: topology and reachability information.
Reachability information means the IP addresses of the physical or logical interfaces of the routers. Topology information explains which router is connected to which one and what the OSPF metric value between them is, thus providing a graph of the OSPF network.
Based on this information, every router runs the SPF algorithm to find the shortest path to each and every destination in the network.
That's why the correct answer to this question is D.
Question 4:
Why is more than one area used in an OSPF network?
A. They are used for high availability
B. They are used for easier troubleshooting
C. They are used to provide scalability by having smaller flooding
domains
D. Since topology information is not shared between OSPF areas,
they provide better security
Answer 4:
OSPF areas are used mainly for scalability. Having smaller domains means keeping topology information within an area and not sending it between the areas. More than one area doesn't provide high availability and doesn't make troubleshooting easier.
Also, having more than one area doesn't prevent a route from being propagated to the other areas by default; that requires manual configuration, and even in that case it doesn't bring extra security.
That's why the correct answer to this question is C.
Question 5:
Which router in the below topology should be an ABR?
A. G or H
B. A or B
C. C or D
D. E or F
E. G
Answer 5:
Router G or H should be the ABR, to separate the two full mesh topologies from each other. Otherwise, each router in the top full mesh network would run the full SPF algorithm for every other router in the bottom full mesh network in case of a link failure, a metric change, or when a new link or prefix is added.
That's why the correct answer to this question is A.
Question 6:
In the below topology, Router B needs to be reloaded. The network operator doesn't want any traffic loss during or after Router B's maintenance operation. Which feature should be enabled on Router B?
A. Max-metric router-lsa on-startup wait-for-bgp
B. OSPF prefix-list
C. Type2-lsa on-startup wait-for-bgp
D. IGP LDP synchronization
Answer 6:
BGP, as an overlay protocol, needs next hop reachability. Static routing or a dynamic routing protocol is used to create an underlay network infrastructure for overlay protocols such as BGP, LDP, PIM and so on.
One of the routers in the forwarding path towards the BGP next hop will be reloaded. We might have two problems here.
First, when Router B is reloaded, traffic going through Router B shouldn't be dropped; Router B should signal the other OSPF routers to route around it. This signaling is done with the OSPF stub router advertisement feature, 'max-metric router-lsa'.
The second problem is that when Router B comes back, BGP traffic towards Router B would be blackholed, because the IGP process of Router B converges faster than its BGP.
The IGP should wait for BGP; Router B should take the BGP traffic only once the BGP prefixes are installed in the routing table.
This is done with the OSPF stub router advertisement feature as well. 'Max-metric router-lsa on-startup wait-for-bgp' keeps OSPF from attracting traffic until the BGP process has converged.
That's why the correct answer to this question is A.
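A hedged sketch of enabling the stub router advertisement feature on Router B; the process ID is hypothetical:

```
router ospf 1
 ! After reload, advertise own links with the maximum metric (0xFFFF)
 ! until BGP signals convergence, so transit traffic avoids this router
 max-metric router-lsa on-startup wait-for-bgp
```

Because OSPF still advertises the links (just with maximum metric), Router B remains reachable as a last resort while transit traffic prefers other paths.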
Question 7:
How many levels of hierarchy are supported by OSPF?
A. One
B. Two
C. Three
D. As many as possible
Answer 7:
OSPF supports two levels of hierarchy. Hierarchy is a common network design term used to identify logical boundaries. The backbone area and the non-backbone areas are the only two levels supported by OSPF; thus it supports only two levels of hierarchy.
That's why the correct answer to this question is B.
Question 8:
Which of the below options are correct for the OSPF ABR? (Choose all that apply)
A. It slows down the convergence
B. It generates Type 4 LSA in Multi Area OSPF design
C. It does translation between Type7 to Type 5 in NSSA area
D. It does translation between Type 5 to Type 7 in NSSA area
E. It prevents topology information between OSPF areas
Answer 8:
The OSPF ABR slows down network convergence, because for each Type 1 and Type 2 LSA it needs to calculate the corresponding Type 3 LSAs and send them into its connected OSPF areas.
The OSPF ABR generates Type 4 LSAs in a Multi Area OSPF design. When an ABR receives external prefixes in an area, it generates a Type 4 LSA from the Type 1 LSA of the ASBR and sends it to the other areas.
In an NSSA area, the ABR translates Type 7 LSAs to Type 5 LSAs, but there is no Type 5 to Type 7 LSA translation; it is not allowed.
Topology information is not sent between OSPF areas; the ABR stops topology information.
Thus the answers to this question are A, B, C and E.
Question 9:
Why is a Designated Router used in an OSPF network?
A. It is used to have an ABR in the network
B. It is used to create topology information
C. It is used to centralize the database, instead of keeping a distributed OSPF link state database in every node
D. It is used to avoid flooding information between each device in a multi-access OSPF network
Answer 9:
The Designated Router (DR) is used to avoid flooding information between each OSPF device in multi-access networks such as Ethernet or Frame Relay.
Routers only send their updates to the DR, and the DR floods this information to every router in the segment. The multicast group addresses 224.0.0.5 and 224.0.0.6 are used for this communication in IPv4.
That's why the correct answer to this question is D.
Question 10:
Which below feature is used to avoid blackholing when OSPF and LDP
are used together?
A. OSPF Fast Reroute
B. OSPF Multi Area Design
C. IGP LDP Synchronization
D. Converging OSPF faster than LDP in case of failure
Answer 10:
The problem occurs when a link or node fails while OSPF and LDP are used together (it also occurs when IS-IS and LDP are used together). IGP-LDP synchronization ensures that there is a label for the IGP prefixes in the label database; otherwise, since the IGP converges first and then LDP, packets would be blackholed.
With IGP-LDP synchronization, this chicken-and-egg problem is solved and blackholing is avoided.
That's why the correct answer to this question is C.
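IGP-LDP synchronization is enabled under the IGP process. A hedged IOS-style sketch with a hypothetical process ID:

```
router ospf 1
 ! Advertise links with maximum cost until the LDP session on each
 ! link is established, so labeled traffic is not drawn onto a path
 ! that cannot yet forward it
 mpls ldp sync
```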
Question 11:
Which of the below options is correct for the given topology?
Question 12:
In the below topology, Area 30 is an NSSA area. Which of the below options is true?
Answer 12:
Since Area 30 is an NSSA area, there will be Type 3 LSAs; that's why Option A is incorrect. There will be Type 1 and Type 2 LSAs, but not from the other areas.
In Area 30, every router generates Type 1 LSAs, and if there is a multi-access network, the DR will generate Type 2 LSAs as well.
EIGRP prefixes will be allowed, and they will be seen as Type 7 LSAs in Area 30.
Only Option B is correct, because the ABR of Area 30 translates the Type 7 LSAs, which carry the EIGRP prefixes, into Type 5 LSAs and sends them to the rest of the network.
Question 13:
In the below topology, Area 10 is a Totally NSSA area. Which of the below options is true?
Question 14:
In which topology below is OSPF worse than EIGRP in a large-scale implementation?
A. Full Mesh
B. Partial Mesh
C. Hub and Spoke
D. Ring
Answer 14:
In a full mesh physical topology, the Mesh Group feature allows only two routers to flood LSAs into the area. Mesh Group is supported by both OSPF and IS-IS. This brings scalability to OSPF.
Ring and partial mesh topologies are hard for all the routing protocols. They are cheaper to build, but convergence, optimal routing and fast reroute are very hard in ring and partial mesh topologies.
EIGRP is best in a Hub and Spoke topology from the scalability point of view, because it doesn't require much configuration for its operation. OSPF, on the other hand, requires a lot of tuning for its operation in a large scale Hub and Spoke topology.
That's why the correct answer to this question is C.
Question 15:
Why is OSPF used as an Infrastructure IGP in an MPLS VPN environment?
A. To carry the customer prefixes
B. Reachability between the MPLS VPN endpoints
C. OSPF is not used in an MPLS VPN environment as an Infrastructure IGP protocol; BGP is used instead
D. LDP requires an IGP
Answer 15:
LDP requires an IGP, yes, but that is not the relevant reason; it could be EIGRP or IS-IS as well.
The purpose of OSPF, or any other IGP, as an infrastructure protocol is to carry the loopback interface addresses of the MPLS VPN endpoints.
So OSPF is used for reachability between the VPN endpoints (PE devices) in SP networks. As an Infrastructure IGP, OSPF is not used to carry the customer prefixes.
Knowing the difference between the Infrastructure IGP and the PE-CE IGP protocol in MPLS VPN is important. This will be explained in detail in the MPLS chapter.
That's why the correct answer to this question is B.
Question 16:
Which OSPF feature in MPLS VPN PE-CE is used to ensure the MPLS service is always chosen as the primary link?
A. OSPF max-metric
B. OSPF prefer-primary path
C. OSPF sham-link
D. Passive-interface
E. Virtual link
Answer 16:
Even if the domain IDs are the same on both sites of the MPLS VPN, without the sham-link feature the CE can only receive Type 3 LSAs from the PE.
A sham-link is used to receive Type 1 LSAs, so that even if there is a backup connection between the CEs, simply changing the cost on either the PE-CE or the CE-CE link makes the MPLS link the primary path.
OSPF as a PE-CE protocol will be explained in detail in the MPLS chapter.
That's why the correct answer to this question is C.
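A hedged sketch of a sham-link between two PE routers; the VRF name, process ID, and loopback addresses are hypothetical. The endpoints must be loopback addresses inside the customer VRF, advertised via BGP rather than OSPF:

```
router ospf 100 vrf CUSTOMER-A
 ! Source and destination are VRF loopbacks on the local and remote PE;
 ! the sham-link makes the MPLS path look like an intra-area link
 area 0 sham-link 10.255.0.1 10.255.0.2
```

The same command, with source and destination reversed, is configured on the remote PE.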
Question 17:
Which below options are correct for OSPF? (Choose all that apply)
A. OSPFv2 doesn't support IPv6, so when IPv6 is needed, OSPFv3 is necessary
B. An OSPF virtual link shouldn't be used as a permanent solution in OSPF design
C. OSPF and BGP are two separate protocols, so when the OSPF cost changes, it doesn't affect BGP path selection
D. OSPF can carry the label information in Segment Routing, so LDP wouldn't be necessary
E. OSPF, unlike EIGRP, supports MPLS Traffic Engineering with dynamic path calculation
Answer 17:
The only incorrect option in this question is C. Although OSPF and BGP are two separate protocols, changing the OSPF metric can affect the best BGP exit point.
Taking the IGP cost into consideration when calculating the best path for BGP prefixes is called Hot Potato Routing. Changing the IGP metric can change the BGP best path.
That's why the correct answers to this question are A, B, D and E.
Question 18:
What is the reason to place all routers in Area 0 (the Backbone Area), even in a flat OSPF design?
A. You cannot place routers in a non-backbone area without a backbone area
B. Type 3 LSAs should be received from the ABR
C. Future Multi Area design migration can be easier
D. It is not a best practice to place all the routers in Area 0 in a Flat/Single-Area OSPF design
Answer 18:
In OSPF design, all the routers can be placed in any non-backbone area. If you have 50 routers in your network, you can place all of them in Area 100, for example.
But having the routers in the OSPF Backbone area (Area 0) from the early stage of network design provides an easier migration to a Multi Area OSPF design.
This is true for IS-IS as well. In IS-IS you can have all the routers in the network in a Level 1 domain, but having them in Level 2 allows an easier Multi-Level IS-IS design if it is required in the future. This will be explained in the IS-IS chapter with a case study.
That's why the correct answer to this question is C.
Question 19:
In OSPFv2, which LSA types cause a Partial SPF run? (Choose Three)
A. Type 1
B. Type 2
C. Type 3
D. Type 4
E. Type 5
Answer 19:
In OSPFv2, Type 3, 4 and 5 LSAs cause a Partial SPF run, not a Full SPF run. Partial SPF is a less CPU intensive process compared to a Full SPF run.
Thus the correct answers to this question are C, D and E.
Question 20:
Which design attributes determine the maximum number of routers in an OSPF area?
A. It depends on how many area is in the OSPF domain
B. Maximum number of routers in OSPF area should be around 50
C. Link stability, physical topology, number of links, hardware resources, and rate of change in the network
D. If there are two or more ABRs, the number can be much higher
Answer 20:
It depends on link stability, physical topology, the number of links on the routers, hardware resources, and the rate of change in the network. If some links flap all the time, this affects the routers' resources and the scalability of the network.
That's why the correct answer to this question is C.
Question 21:
How many OSPF ABRs should be in place per area, keeping redundancy in mind?
A. One
B. Two
C. Three
D. If the number of routers in an area is too much, it can be up to 8
ABRs
Answer 21:
In large-scale OSPF design, the number of ABRs has a huge impact on the number of prefixes. Thus, having two ABRs per area is good for redundancy at the critical sites.
For example, some remote offices or POP locations may not be as critical as other locations, and having only one ABR in those locations can be tolerated by the company.
In that case those specific locations may have only one ABR.
Keep in mind: two is company, three is a crowd in design.
Question 22:
What are the most important reasons for route summarization in OSPF? (Choose Two)
A. In order to reduce the routing table size, so routers have to store and process less information
B. In order to increase the availability of the network
C. In order to increase the security of the routing domain
D. In order to reduce the impact of topology changes
E. In order to provide optimal routing in the network
Answer 22:
If there is route summarization, sub optimal routing might occur as it was
explained in the OSPF chapter. Thus Option E is incorrect.
Availability and security doesn’t increase with route summarization. But
topology change affects is definitely reduced.
Also the routing table size is reduced and this provides better memory
and CPU utilization, fast convergence and better troubleshooting.
That’s why the answer of this question A and D.
OSPF FURTHER STUDY RESOURCES
BOOKS
Doyle, J. (2005). OSPF and IS-IS: Choosing an IGP for Large-Scale Networks, Addison-Wesley Professional.
VIDEOS
Ciscolive Session-BRKRST-2337
ARTICLES
http://www.cisco.com/web/about/ac123/ac147/archived_issues/ipj_16-2/162_lsp.html
http://orhanergun.net/2015/02/ospf-design-challenge/
https://tools.ietf.org/html/rfc4577
CHAPTER 4
IS-IS
OSPF | IS-IS
Host | End System (ES)
Router | Intermediate System (IS)
Link | Circuit
Packet | Protocol Data Unit (PDU)
Designated Router (DR) | Designated IS (DIS)
Backup Designated Router (BDR) | N/A (no backup DIS is used)
Link-State Advertisement (LSA) | Link-State PDU (LSP)
Hello Packet | IIH PDU
Database Description (DBD) | Complete Sequence Number PDU (CSNP)
Area | Sub-Domain (Area)
Non-backbone Area | Level-1 Area
Backbone Area | Level-2 Sub-Domain (backbone)
Area Border Router (ABR) | L1L2 Router
Autonomous System Boundary Router (ASBR) | Any IS
IS-IS FAST CONVERGENCE
IS-IS fast convergence steps are very similar to OSPF fast convergence.
There are four necessary steps in fast convergence:
1. Failure detection
Layer 1 Failure detection mechanisms:
Carrier delay
Debounce Timer
Sonet/SDH APS timers
Layer 3 Failure detection mechanisms:
Protocol timers (Hello/Dead)
BFD (Bidirectional Forwarding Detection)
For failure detection, the best practice is always to use a physical down detection mechanism first. Even BFD cannot detect a failure faster than a physical failure detection mechanism.
This is because BFD is a polling-based detection mechanism whose messages are sent and received periodically, while physical layer detection is event driven and is always faster than BFD and protocol hellos.
If physical layer detection mechanisms cannot be used (for example because there is a transport element in the path), then BFD should be used instead of tuning protocol hello timers aggressively. A common example: if two routers are connected through an Ethernet switch, the best method is to use BFD.
Compared to protocol hello packets, BFD packets are much lighter, thus consuming fewer resources and less bandwidth.
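As a rough comparison (the timer values below are common defaults and typical settings, not mandates), detection time is simply the packet interval multiplied by the number of misses tolerated:

```python
def bfd_detection_ms(tx_interval_ms: int, multiplier: int) -> int:
    # BFD declares the session down after 'multiplier' consecutive
    # control packets are missed
    return tx_interval_ms * multiplier

def hello_detection_s(hello_interval_s: int, dead_multiplier: int) -> int:
    # Routing protocols declare a neighbor down when the dead/hold
    # time (a multiple of the hello interval) expires
    return hello_interval_s * dead_multiplier

# Default OSPF LAN timers: 10 s hello, 40 s dead interval
assert hello_detection_s(10, 4) == 40
# A typical BFD setting: 50 ms interval, multiplier 3 -> 150 ms
assert bfd_detection_ms(50, 3) == 150
```

Even an aggressively tuned 1-second hello with a 3-second dead interval is an order of magnitude slower than millisecond-scale BFD, which is why hello tuning is not the preferred answer.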
2. Failure propagation
Propagation of the failure throughout the network.
Here LSP throttling timers come into play. You can tune LSP throttling for faster information propagation; it can also be used to slow down information propagation. In addition, LSP pacing timers can be tuned to send updates much faster.
3. New information process
Processing of the newly arrived LSP to find the next best path. SPF throttling timers can be tuned for faster information processing and faster convergence.
4. Update new route into RIB/FIB
For fast convergence, these steps may need to be tuned. Although the RIB/FIB update is hardware dependent, the network operator can configure all the other steps. One thing always needs to be kept in mind: fast convergence and fast reroute can affect network stability.
In both OSPF and IS-IS, an exponential backoff mechanism is used to protect the routing domain from rapid flapping events. It slows down convergence by penalizing unstable prefixes; it is a very similar mechanism to IP event dampening and BGP dampening.
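The start/hold/max throttle model used by both protocols can be sketched as follows (the parameter names and values are illustrative, not vendor defaults):

```python
def spf_throttle_waits(start_ms: int, hold_ms: int, max_ms: int,
                       events: int) -> list[int]:
    """Wait times applied to successive SPF runs while triggers keep
    arriving: the first run waits start_ms, the next waits hold_ms,
    and each subsequent wait doubles until capped at max_ms."""
    waits = [start_ms]
    delay = hold_ms
    for _ in range(events - 1):
        waits.append(min(delay, max_ms))
        delay *= 2
    return waits

# A flapping link triggers six SPF runs in a row: the first reacts
# quickly, later ones are progressively penalized.
assert spf_throttle_waits(50, 200, 5000, 6) == [50, 200, 400, 800, 1600, 3200]
```

A stable network pays only the small initial wait; a flapping one is slowed down automatically, which is exactly the stability trade-off described above.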
IS-IS ROUTER TYPES
If the area ID is the same on both routers, both L1 and L2 adjacency can be
set up.
If the area IDs are different, only an L2 adjacency can be set up.
There is no backbone area in IS-IS as there is in OSPF. There are only
contiguous Level 2 routers. Level 2 domains must be contiguous.
The IS-IS Level 2 domain can be considered similar to the OSPF backbone area.
THERE ARE THREE TYPES OF ROUTERS IN IS-IS:
Level 1 Router
Can only form adjacencies with Level 1 routers within the same area
LSDB only carries intra-area information
Level 2 Router
Can form adjacencies in multiple areas
Exchanges information about the entire network
LEVEL 1-2 IS-IS ROUTERS
These routers keep a separate LSDB for each level: one for the Level 1 database and one for the Level 2 database.
These routers allow L1 routers to reach L1 routers in different areas via the L2 topology.
Level 1 routers look at the ATT bit in the L1 LSP of the L1-L2 routers and use it to install a default route toward the closest Level 1-2 router in the area. This can create suboptimal routing.
IS-IS DESIGN
In a new IS-IS design, starting with L2-only is the best option; migration to a multi-level design is easier when starting with L2. Migration will be harder if the design is started with L1.
If you start with L1-L2, then all the routers have to keep two databases for every prefix. This is resource-intensive without additional benefit.
When designing multi-level IS-IS with more than one exit (L1-L2 router), you will most likely create suboptimal routing. Suboptimal routing is not always bad; just know the application requirements. Some applications can tolerate suboptimal routing, and you can have low-end devices in the L1 areas; edge and core can be placed in L1.
IS-IS AREA AND LEVELS DESIGN
Edge and core can be placed in the same Level 2 area.
Edge and core can be placed in different Level 2 areas; this can make
future multi-level migration easier. POPs can be placed in Level 1 areas and
core in Level 2, as illustrated. This can create suboptimal routing, but
provides excellent scalability.
In the below pictures you will see different POP and Core design models
and their characteristics.
L1 IN THE POP AND CORE
Design Requirement | Flat IS-IS / Single Level Design | Multi-Level IS-IS Design
Scalability | It is scalable, but not as much as Multi-Level | YES, better than the Flat/Single Level design
Working on Full Mesh topology | YES, but not scalable; Mesh Group provides scalability | YES, more scalable, and the Mesh Group feature is available as well
Working on Hub and Spoke | YES | YES
Convergence | Better than Multi-Level IS-IS | L1-L2 routers add additional latency (processing delay) during convergence
Reachability Information | YES, inside a level (sub-domain) all the routers have the same Link State Database | NO, reachability information is not sent between IS-IS levels (sub-domains); only a default route is sent from Level 2 to Level 1 with the ATT bit
DMVPN Support | NO, IS-IS cannot run over DMVPN | NO, IS-IS cannot run over DMVPN
Can Run Over VPN | YES, it can run over GRE and Multipoint GRE | YES, it can be designed over GRE and Multipoint GRE
Question 1:
What happens if P3-P4 link fails?
Question 2:
Do you need to know the level of IS-IS network to provide a solution?
Question 3:
What would be your design recommendation to ensure high-availability
on this network?
Answer 1:
In MPLS networks, the IGP should not converge back onto a restored link before getting a green light from LDP.
If the P3-P4 link fails in the topology shown above, the P1-P2-P4 path is used. When the link comes back, if the IGP converges before the LDP session is re-established, P3 has no label for the prefixes and sends regular IP packets to P4. P4 then drops those packets because it cannot recognize the CE (customer).
Answer 2:
It doesn't matter which IS-IS level (L1 or L2) is used; the solution to this problem is the same.
The intent of the question is to see whether you already know the solution.
This type of question will be asked in the CCDE Practical exam, and the task domain will be analyzing the design.
Answer 3:
If the IGP-LDP synchronization feature is enabled, P3 and P4 signal their neighbors not to use the P3-P4 link until LDP converges. The IGP signals the other nodes in the routing domain for BGP convergence in exactly the same way; OSPF Case Study 5 showed IGP-BGP synchronization.
With OSPF max-metric router-lsa and the IS-IS overload bit, OSPF and IS-IS signal the other nodes in the IGP domain to wait for BGP to converge. Protocol interaction matters for optimal routing design: if overlay protocols do not follow the underlay protocols or the physical topology, suboptimal routing, blackholes, or routing and forwarding loops can occur.
In order to avoid these issues, synchronization should be enabled.
So far in this book you have seen STP-FHRP, IGP-BGP, and IGP-MPLS interactions within the case studies.
More case studies regarding interactions between different technologies will be provided in later sections.
Design Requirement | OSPF | IS-IS
Scalability | 2-tier hierarchy, less scalable | 2-tier hierarchy, less scalable
Working on Full Mesh | Works well with Mesh Group | Works well with Mesh Group
Working on Ring Topology | Ring is hard for routing protocols; in the case of a failure, micro loops occur | Same as OSPF
Working on Hub and Spoke | Works poorly, requires a lot of tuning | Same as OSPF
Fast Reroute Support | YES, IP FRR | YES, IP FRR
Suitable on WAN | YES | YES
Suitable on Datacenter | DCs are full mesh, and full mesh operation requires a lot of tuning; instead, in large-scale datacenters, Layer 2 protocols or BGP are used | Same as OSPF, but since IS-IS runs on Layer 2, it is used as the control plane for many overlay technologies in the datacenter, such as OTV, FabricPath, TRILL, and SPB
Suitable on the Internet Edge between two AS | NO, it is designed as an IGP | NO, it is designed as an IGP
Complexity | Complex, it has 11 types of LSAs | Easy, there are only two levels
Policy Support | Good | Good
Question 1:
Which OSPF Area is similar to IS-IS Level 1 sub domain?
A. Backbone area
B. Stub Area
C. Totally Stub Area
D. Totally NSSA Area
Answer 1:
The answer to this question is D, because the IS-IS Level 1 domain allows route redistribution and only the default route is sent from the L2 domain. This was explained in the IS-IS chapter.
Question 2:
If two IS-IS devices are connected to an Ethernet switch, which option below provides the fastest down detection for the IGP process?
A. Tuned IS-IS LSP timers
B. BFD
C. Tuned IS-IS SPF Timers
D. IS-IS Hello timers
Answer 2:
Tuning LSP and SPF timers can improve the convergence of IS-IS in case of a failure, but they don't provide fast failure detection.
Reducing the hello timers can provide a shorter failure detection time, but they cannot be tuned as low as BFD. Also, since there is an Ethernet switch in between, a port failure on one side cannot trigger an interface-down event on the remote port.
BFD is the best solution, especially if there is a node in between that prevents end-to-end failure signaling between the two devices.
Question 3:
Why is the IS-IS overload bit set in IGP-BGP synchronization?
A. In order to prevent routing loops
B. In order to prevent traffic loss, which can be caused by blackholing
C. In order to prevent routing oscillation
D. For fast convergence
Answer 3:
As explained in the IS-IS chapter, the overload bit is used to signal the other routers so that the node is not used as transit. If the node were eligible to be used as the primary path, a blackhole would occur, since BGP and IGP convergence times are not the same.
The IGP should wait for BGP before starting to accept network traffic.
That's why the answer to this question is B.
Question 4:
Which of the below mechanisms are used to slow down the distribution of topology information caused by rapid link flaps in IS-IS? (Choose Two)
A. ISPF
B. Partial SPF
C. Exponential Back Off
D. LSA Throttling
E. SPF Throttling
Answer 4:
The exponential back-off mechanism is used in OSPF and IS-IS to protect the routing system from rapid link flaps. LSA (LSP) throttling timers can also be tuned to protect the routing system from these types of failures.
But tuning the throttling timers will also affect convergence, so careful monitoring is necessary if there is an IS-IS fast convergence requirement in the design.
That's why the correct answers to this question are C and D.
Question 5:
When would it be required to leak the prefixes from Level 2 to Level 1
subdomain? (Choose Two)
A. When optimal routing is necessary from the Level 1 routers towards the rest of the network
B. When MPLS PE devices are configured in the Level 1 domain
C. When ECMP is required from the Level 1 domain to the rest of the network
D. When unequal cost load balancing is required between L1 internal routers and the L1-L2 routers
Answer 5:
Unequal cost load balancing is not supported in IS-IS; even if you leak the prefixes, it won't work. ECMP is done hop by hop: even when L2 prefixes are not leaked into the L1 domain, internal L1 routers can still do ECMP towards the L1-L2 routers if there is more than one L1-L2 router, but the L1-L2 routers themselves may not do ECMP. Thus Option C is incorrect.
When an MPLS PE is inside the L1 domain, LDP cannot assign labels to the remote PE loopbacks, since those loopbacks are not known; internal L1 routers only learn a default route, as explained in the IS-IS chapter.
And whenever optimal routing is required, more specific information, if available, helps achieve it.
The correct answers to this question are A and B.
Question 6:
How many levels of hierarchy are supported by IS-IS?
A. One
B. Two
C. Three
D. As many as possible
Answer 6:
IS-IS supports two levels of hierarchy. Hierarchy is a common network design term used to identify logical boundaries.
The IS-IS Level 1 and Level 2 domains provide a maximum of two levels of hierarchy. The Level 2 IS-IS domain is similar to the backbone area in OSPF, and the Level 1 IS-IS domain is similar to a Totally NSSA area in OSPF.
Question 7:
If some prefixes are leaked from the IS-IS Level 2 domain into the Level 1 domain, how does IS-IS prevent those prefixes from being advertised back into the Level 2 domain?
A. A route tag should be used
B. The ATT bit prevents the prefixes from being advertised back into the Level 2 domain
C. The U/D bit is used to prevent the prefixes from being advertised back into the Level 2 domain
D. They wouldn't be advertised back into the Level 2 domain anyway
Answer 7:
If for some reason some prefixes are leaked from Level 2 into Level 1, the U/D bit in IS-IS prevents those prefixes from being advertised back into the IS-IS Level 2 domain. This is an automatic process and doesn't require configuration. It is the loop prevention mechanism in IS-IS route leaking.
That's why the answer to this question is C.
Question 8:
Which mechanism below is used in IS-IS full mesh topologies to reduce LSP flooding?
A. Elect a DIS and Backup DIS
B. Use IS-IS Mesh Group
C. Use DR and BDR
D. Deploy Multi Level IS-IS design
Answer 8:
A full mesh topology could exist in any level, either Level 1 or Level 2, in a multi-level design. Thus a multi-level design won't help with LSP flooding if the topology is already inside one particular level.
It is similar to BGP Confederations used for scalability: inside each sub-AS you still have to configure a full mesh of IBGP sessions, or implement a Route Reflector inside the confederation sub-AS. There is no Backup DIS in IS-IS; there is only a DIS (Designated Intermediate System), thus Option A is incorrect. DR and BDR are OSPF features, not IS-IS.
That's why the correct answer to this question is Option B. The Mesh Group concept was explained with a case study in the IS-IS chapter.
Question 9:
If an IS-IS router is connected to three links and is redistributing 100 EIGRP prefixes into the domain, and the design is a flat/single-level IS-IS design, how many IS-IS LSPs are seen from this router in the domain?
A. 100 IS-IS LSPs
B. 3 IS-IS LSPs
C. 300 IS-IS LSPs
D. 1 IS-IS LSP
Answer 9:
There will be different TLVs for internal and external routes, but there will be only one IS-IS LSP from this router for the domain. If it were a multi-level IS-IS design, two LSPs would be seen; but since the question says it is a flat/single-level deployment, there will be only one IS-IS LSP, either L1 or L2.
That's why the correct answer is D.
Question 10:
Which statements below are correct for IS-IS design? (Choose Two)
A. Topology information is not advertised between IS-IS levels
B. Starting with a Flat/Single Level 2 IS-IS design makes a possible future multi-level IS-IS deployment easier
C. An IS-IS Level 2 route is preferred over a Level 1 route in IS-IS
D. IS-IS uses a DIS and a Backup DIS on multi-access links
Answer 10:
There is no Backup DIS in IS-IS, thus Option D is incorrect.
IS-IS Level 1 routes are preferred over IS-IS Level 2 routes, similar to OSPF intra-area routes being preferred over inter-area routes. Thus Option C is incorrect as well.
The correct answers to this question are A and B.
IS-IS FURTHER STUDY RESOURCES
BOOKS
White, R. (2003). IS-IS: Deployment in IP Networks, Addison-Wesley.
VIDEOS
Ciscolive Session BRKRST-2338
PODCAST
http://packetpushers.net/show-89-ospf-vs-is-is-smackdown-where-you-can-watch-their-eyes-reload/
CHAPTER 5
EIGRP
If the requirement is to use an Enterprise-level, scalable, minimal-configuration protocol for a Hub and Spoke topology such as DMVPN, then the only choice is EIGRP.
EIGRP is a distance vector protocol, and unlike OSPF and IS-IS,
topology information is not carried between the routers. EIGRP routers only
know the networks/subnets that their neighbors advertise to them.
As a distance vector protocol, nodes in EIGRP topology don’t keep the
topology information of all the other nodes. Instead, they trust what their
neighbors tell them.
The feasible distance is the metric of the best path. The successor is the next-hop router on the primary path for the route. Feasible successors are the routers that satisfy the feasibility condition; these are the backup routers.
Feasible successors are placed in the EIGRP topology table.
The reported distance is the feasible distance of the neighboring router.
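The feasibility condition these definitions lead to is a one-line comparison; a minimal sketch:

```python
def is_feasible_successor(reported_distance: int,
                          feasible_distance: int) -> bool:
    # A neighbor is a loop-free backup (feasible successor) only if
    # the distance it reports is strictly less than our own best
    # metric (the feasible distance); otherwise it might be routing
    # through us.
    return reported_distance < feasible_distance

# Our best metric to the prefix (FD) is 100:
assert is_feasible_successor(90, 100) is True    # loop-free backup
assert is_feasible_successor(120, 100) is False  # could loop back through us
```

The strict inequality is what guarantees the backup path is loop-free without any further computation, which is why EIGRP can fail over to a feasible successor without running a new route calculation.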
The diagrams below show how EIGRP prefixes are learned by the routers, and then how EIGRP routers advertise those prefixes.
EIGRP STUB
EIGRP Stub prevents the router from being queried, and a stub router does not advertise routes to a peer if the routes were learned from another peer.
EIGRP Stub is the most important feature in large-scale EIGRP design.
The summary metric is inherited from the component route that has the lowest metric. If that route goes down, the metric changes, so the dampening effect of summarization toward upstream routers is lost.
You can create a loopback interface within the summary address range with a lower metric than any other route in the summary, but the problem with this approach is that if all the real routes in that summary range fail while the loopback stays up, a blackhole occurs.
To avoid this problem, in EIGRP named mode you can use the summary-metric command to statically set the metric you want to use.
EIGRP OVER DMVPN CASE STUDY
An Enterprise Company wants to use MPLS L3 VPN (the right-hand network in the topology below) as the primary path between its remote office and the datacenter.
The customer uses EIGRP, with EIGRP AS 100 for the Local Area Network inside the office, and wants to use its DMVPN network as a backup path. The customer runs EIGRP AS 200 over DMVPN.
The Service Provider doesn't support EIGRP as a PE-CE protocol, only static routing and BGP. The customer selected BGP instead of static routing, since the cost community attribute can be used to carry the EIGRP metric over the service provider's MP-BGP session. Redistribution is needed on R2 between EIGRP and BGP (in both directions). Since the customer uses different EIGRP AS numbers for the LAN and DMVPN networks, redistribution is needed on R1 too.
QUESTION 1:
Should the customer use same EIGRP AS on the DMVPN network and
its office LAN? What is the problem with that design?
Answer 1:
No, they should not.
Since the customer's requirement is to use MPLS VPN as the primary path: if the customer runs the same EIGRP AS on the LAN and over DMVPN, EIGRP routes will be seen as internal via DMVPN but external via MPLS VPN.
Internal EIGRP is preferred over external because of administrative distance, so the customer should use different AS numbers.
DMVPN could be used as a primary path for some delay-, jitter-, or loss-insensitive traffic, but the customer didn't specify that.
QUESTION 2:
When the company changed the EIGRP AS on DMVPN and started to
use a different EIGRP AS on the DMVPN and the LAN, which path will be
used between the remote offices and the datacenter?
Answer 2:
Since redistribution is done on R1 and R2, the remote switch and the datacenter devices see the routes from both DMVPN and BGP as EIGRP external. Then the metric is compared. If the metric (bandwidth and delay in EIGRP) is the same, both paths can be used (Equal Cost Multipath, ECMP).
QUESTION 3:
Does the result fit the customer’s traffic requirement?
Answer 3:
Remember the customer's expectation for the links: they want to use MPLS VPN for all their applications as the primary path.
So the answer is yes, it can satisfy the customer's requirements. When the customer uses different EIGRP AS numbers on the LAN and DMVPN, the MPLS VPN path can be made primary with a metric adjustment.
QUESTION 4:
What happens if the primary MPLS VPN link goes down?
Answer 4:
Traffic from the remote office to the datacenter goes through the Switch-R1-DMVPN path. Since the datacenter prefixes will not be known through MPLS VPN while it is down, only the DMVPN link is used. The DMVPN link is used as primary for as long as the failure lasts.
QUESTION 5:
What happens when the failed MPLS VPN link comes back?
Answer 5:
R2 receives the datacenter prefixes over the MPLS VPN path via EBGP and from R1 via EIGRP. Once the link comes back, the datacenter prefixes will be received via both DMVPN and MPLS VPN and appear on the office switch as EIGRP external.
Since the metric was arranged previously to make the MPLS VPN path primary, no further action is required.
This is the tricky part: on Cisco devices, or those of another vendor that takes the BGP weight attribute into consideration for best-path selection, the locally redistributed prefixes would have a higher weight than the prefixes received through MPLS VPN, so R2 would use the Switch-R1 DMVPN path, which violates the customer's expectations.
EIGRP VS. OSPF DESIGN COMPARISON
The table below summarizes the similarities and differences of these two Interior Gateway Protocols in detail.
Network designers should know the pros and cons of the technologies, the protocol alternatives, and their capabilities from a design point of view.
Design requirements such as scalability, convergence, standard protocols, and many others are given, and these design parameters should be used during any design preparation. Also know that this table will be helpful in any networking exam.
Design Requirement | OSPF | EIGRP
Scalability | 2-tier hierarchy, less scalable | 2-tier hierarchy, less scalable
Working on Full Mesh | Works well with mesh group | Works well with mesh group
Working on a Ring Topology | It's okay | It's okay
Working on Hub and Spoke | Works poorly, requires a lot of tuning | Works badly, requires tuning
Fast Reroute Support | Yes, IP FRR | Yes, IP FRR
Suitable on WAN | Yes | Yes
Suitable on Datacenter | DCs are full mesh, so no | DCs are full mesh, so no
Suitable on Internet Edge | No, it is designed as an IGP | No, it is designed as an IGP
Standard Protocol | Yes, IETF standard | Yes, IETF standard
Resource Requirement | SPF requires more processing power | DUAL requires less processing power
Extendibility | Not good | Good, thanks to TLV support
IPv6 Support | Yes | Yes
Default Convergence | Slow | Slow
Training Cost | Cheap | Cheap
Email:
Dear Designer,
We are using EIGRP. EIGRP Stub has been enabled on all remote
branches per your suggestion.
Some critical branch offices have two routers, and there is a 1 Gbps Ethernet handoff from each router to only one hub router in the datacenter. I am sending our DMVPN network diagram as an attachment. For simplicity, I have shared only a couple of sites, but the rest of the sites are connected in the same way, i.e., either one router with 2 links or 2 routers with 2 links to the datacenter.
Hub and Spoke Case Study Customer Topology
Question 2:
Based on the information provided, what might be the problem? How can it be solved?
Since the spoke routers are running as EIGRP Stubs, they don't send the prefixes learned from each other to the hubs. If the hub-to-spoke link of a spoke site that has two routers fails, that router is isolated from the rest of the network.
The spokes in Spoke Site 1 send their networks to each other, so 192.168.0.0/24 and 192.168.1.0/24 are learned by both spokes. But since they are EIGRP Stubs, they don't send the learned routes to the hub. If the hub-and-spoke link in Spoke Site 1 fails, the 192.168.0.0/24 network will not be reachable anymore.
The same applies to Spoke Site 3, since that site also has two routers and EIGRP Stub enabled. The solution is to enable EIGRP Stub leaking. In DMVPN it is good practice for the hubs to send a summary or default route to the spokes. Spokes should send the routes they learn from each other to the hub, and should also send the routes they learn from the hub to each other. In this way, sites that have more than one router with an EIGRP Stub configuration do not have an issue in case of a failure.
Question 1:
Which below technology provides similar functionality with EIGRP
Feasible Successor?
A. ISPF
B. Partial SPF
C. Loop Free Alternate Fast reroute
D. OSPF Stub Areas
E. IS-IS Level 1 domain
Answer 1:
Although EIGRP convergence was not explained in the EIGRP chapter, it is important to mention here. The EIGRP feasible successor is the backup path that satisfies the feasibility condition, which means it satisfies EIGRP's loop-free backup path condition.
There is no ISPF, Partial SPF, PRC, or SPF in EIGRP; these algorithms are used in link-state protocols.
The answer to this question is LFA FRR, which is one of the IP Fast Reroute mechanisms. IP FRR mechanisms will be explained in the MPLS Traffic Engineering section later in the book.
Question 2:
How many levels of hierarchy are supported in EIGRP?
A. One
B. Two
C. Three
D. Unlimited
Answer 2:
Unlike OSPF and IS-IS, there is no limit in EIGRP. OSPF and IS-IS support two levels of hierarchy, as explained earlier.
There is no topology information exchange in EIGRP, and summarization can be done anywhere in the network. An unlimited number of hierarchy levels is therefore possible with EIGRP, which is why the answer to this question is D.
Question 3:
In the topology below, R3 is configured as an EIGRP Stub. If the link between R1 and R2 fails, which statements are true for this topology? (Choose Two)
Answer 3:
As explained in the EIGRP Stub section of this chapter, when a node is configured as an EIGRP Stub, it is no longer used as a transit node.
The question asks whether R3 will become a transit node when the R1-R2 link fails. No, it will not, which means R1 cannot reach R2 through R3 and R2 cannot reach R1 through R3. That's why B and E are incorrect.
R3 still has all the prefixes of R1 and R2 even though it is configured as an EIGRP Stub.
That's why the correct answers to this question are A and C.
Question 4:
Which option below is considered a loop-free path in EIGRP?
A. If the reported distance is less than the feasible distance
B. If the reported distance is the same as the feasible distance
C. If the reported distance is higher than the feasible distance
D. If the administrative distance is higher than the feasible distance
Answer 4:
In order for a path to be chosen as a loop-free alternate, meaning it satisfies the EIGRP feasibility condition as explained in the EIGRP chapter of the book, the reported distance has to be less than the feasible distance. That's why the answer to the question is A.
Question 5:
What happens if the backup path satisfies the feasibility condition?
(Choose Two)
A. It is placed in link state database
B. It is advertised to the neighbors
C. It is placed in the topology table
D. It can be used as unequal cost path
E. It is placed in the routing table
Answer 5:
The EIGRP database is called the topology database; a link-state database is used in link-state protocols.
If a backup path satisfies the feasibility condition, it is placed in the topology table, not in the routing table. If it were the best path (successor) or an equal-cost path, it would be placed in the routing table; but since the question says backup path, it is only placed in the EIGRP topology table.
Since it is not the best path, it is not advertised to the neighbors.
With the 'variance' command, it can be used as an unequal-cost path and can then be placed in the routing table.
That's why the answers to this question are C and D.
Question 6:
Which statements below are true for EIGRP summarization? (Choose Two)
A. EIGRP auto-summarization is on by default for all internal and external routes
B. EIGRP route summarization can reduce the query domain, which helps convergence
C. EIGRP route summarization can reduce the query domain, which can prevent the Stuck-in-Active problem
D. Summarization cannot be done at each hop in EIGRP
Answer 6:
Summarization can be done at each hop in EIGRP; this is different from OSPF and IS-IS. Auto-summarization is not enabled for all routes by default in EIGRP. Summarization helps reduce the query domain boundary, which in turn helps with convergence, the Stuck-in-Active (SIA) problem, troubleshooting, and so on.
That's why the answers to this question are B and C.
Question 7:
Which statements below are true for EIGRP queries? (Choose Two)
A. EIGRP queries are always sent
B. Limiting the EIGRP query domain helps scalability
C. If summarization is configured, the EIGRP query is not sent
D. If filtering is configured, the EIGRP query is not sent
E. If EIGRP Stub is configured, the EIGRP query is not sent
Answer 7:
If EIGRP Stub is configured, as explained before, the EIGRP query is not sent. With summarization and filtering, the EIGRP query is still sent. The EIGRP query domain size affects scalability: if the query domain size is reduced, scalability increases.
That's why the answers to this question are B and E.
Question 8:
Why should passive interface be enabled on the access/customer ports?
A. To prevent injecting the customer prefixes to the network
B. To reduce the size of the routing table
C. For the fast convergence
D. For higher availability
Answer 8:
Passive interface should be used on all host, access, and customer ports. Otherwise a security attack can happen and prefixes can be injected into the routing domain. It doesn't provide faster convergence, and the reason to disable routing protocols on the customer/access ports is not to reduce the routing table size.
That's why the answer to this question is A.
Question 9:
If the path in the network needs to be changed by changing an EIGRP attribute, which option below would you recommend as a network designer?
A. Bandwidth should be changed
B. Delay should be changed
C. Reliability should be changed
D. PBR should be configured
Answer 9:
PBR is not an EIGRP attribute. Reliability is not used for EIGRP path selection by default. The bandwidth and delay attributes are used for EIGRP path selection, and the metric is calculated based on these two parameters.
But since bandwidth can be used by many other features, such as QoS, RSVP-TE, and so on, it should not be changed; otherwise other things in the network can change too.
Also, since the minimum bandwidth along the path is used for the metric calculation, changing bandwidth can affect the entire network design, not only the path we want.
On the other hand, delay is additive, and changing it affects only the path we want.
That's why the answer to this question is B.
Question 10:
When EIGRP is used as the MPLS VPN PE-CE routing protocol, which mechanism below helps with loop prevention even if there is a backdoor link?
A. Up/Down bit
B. Sham link
C. Site of Origin
D. Split Horizon
Answer 10:
EIGRP Site of Origin is used to prevent a loop even if there is a backdoor
link. A backdoor link causes a race condition in MPLS VPN topologies, and it
can create suboptimal routing and routing loops.
This will be explained in detail in the MPLS VPN section of the MPLS
chapter.
That's why the answer to this question is C.
EIGRP FURTHER STUDY RESOURCES
BOOKS
Pepelnjak, I. (2000). EIGRP Network Design Solutions. Cisco Press.
VIDEOS
Cisco Live Session BRKRST-2336
PODCAST
http://packetpushers.net/show-144-open-eigrp-with-russ-white-ciscos-donnie-savage/
ARTICLES
http://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/eigrpstb.html
http://www.cisco.com/c/en/us/td/docs/ios/xml/ios/iproute/_eigrp/configuration/xe-3s/ire/xe/3s-book/ire-ipfrr.html
CHAPTER 6
VPN DESIGN
In this chapter, the technologies below will be covered. Most of these
technologies are used in the Wide Area Network but can be applied in
the Campus or Datacenter as well.
For example, there are use cases of LISP and GETVPN in the Datacenter,
and of GRE in the Campus and the Datacenter. Throughout this chapter, use
cases of all these technologies will be explained in detail. Comparison charts
are very important to understand the pros and cons of each of these
technologies. They will help in the CCDE exam and they will definitely help
network designers in their design projects.
GRE
mGRE
IPSEC
DMVPN
GETVPN
LISP
VPN THEORY
A Virtual Private Network is a logical entity, which is created over a
physical infrastructure. It can be set up over another private network, such as
MPLS, or over a public network, such as the Internet.
All VPN technologies add extra bytes to the packet or frame, which
increases the overall packet size, so the network links should be able to
accommodate larger MTU values.
VPN technologies work based on encapsulation and decapsulation.
For example GRE, mGRE and DMVPN encapsulate IP packets into
another IP packet, VPLS and EVPN encapsulates Layer 2 frame into an
MPLS packets.
You can run routing protocols over some VPN technologies but not all
VPN technologies allow you to run routing protocols.
In order to support routing over a tunnel, the tunnel endpoints should be
aware of each other.
For example, MPLS Traffic Engineering tunnels don't support running
routing protocols over them, since the LSPs are unidirectional, which means
the head-end and tail-end routers are not associated. This will be explained in
detail in the MPLS chapter.
In our list, all VPN technologies except IPSEC and LISP support running
routing protocols over them.
GRE
GRE tunnels are by far the most common tunneling technology. They are
very easy to set up, troubleshoot and operate. But in large-scale deployments,
configuring GRE tunnels becomes cumbersome.
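The extra bytes that every tunnel adds, mentioned in the VPN theory section, are easy to quantify for GRE; a minimal sketch assuming a standard 20-byte outer IPv4 header and the basic 4-byte GRE header (no optional key or sequence fields):

```python
OUTER_IPV4_HEADER = 20  # bytes, IPv4 header without options
GRE_HEADER = 4          # bytes, basic GRE header without key/sequence fields

def gre_tunnel_mtu(physical_mtu=1500):
    # Largest inner packet that still fits in one physical frame
    # after GRE encapsulation.
    return physical_mtu - OUTER_IPV4_HEADER - GRE_HEADER

print(gre_tunnel_mtu())      # 1476, the common default GRE tunnel IP MTU
print(gre_tunnel_mtu(9000))  # 8976 on a jumbo-frame link
```

This is why the links carrying tunnels should accommodate a larger MTU, or the tunnel MTU must be lowered accordingly.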
IPSEC
IPSEC provides secure transmission of packets at the IP layer (Not
Layer2). For Layer 2 encryption MACSEC is used. With IPSEC packets are
authenticated and encrypted.
DMVPN
DMVPN is a Point to Multipoint and Multipoint to Multipoint tunneling
technology. It greatly reduces the configuration in full mesh topologies.
It is a Cisco proprietary technology, but the multipoint-to-multipoint
automatic tunneling concept is supported by many vendors.
DMVPN Phase 1
Spokes use Point to Point GRE tunnels, but the Hub uses a multipoint GRE tunnel.
DMVPN Phase 2
Spoke to spoke dynamic on demand tunnels are first introduced in
DMVPN Phase 2.
In contrast to Phase 1, an mGRE (Multipoint GRE, not Multicast) interface is
used in Phase 2 on the Spokes as well.
Thus, spokes don't require a tunnel destination configuration under the
tunnel interface, and the tunnel mode is configured as "multipoint GRE".
Spoke to spoke traffic doesn’t have to go through the HUB. Spokes can
trigger on demand tunnel between them.
The biggest disadvantage of Phase 2 is that each spoke has to have all the
remote LAN subnets of the other spokes, since the Hub preserves the next
hop addresses of the spokes.
Thus, spokes have to have reachability to each other's tunnel addresses.
This disallows summarization or default routing from the Hub down to the
spokes.
This is a serious design limitation for large-scale DMVPN networks.
For the distance vector protocols, split horizon needs to be disabled and
"no next-hop self" should be configured on the Hub.
There were too many routing protocol tuning requirements in DMVPN
Phase 2. The lack of summarization was also a scaling concern in large-scale
DMVPN deployments.
DMVPN Phase 3
Spoke to spoke dynamic tunnels are allowed in DMVPN Phase 3 as well,
but spokes don't have to have each other's private next hop addresses.
An NHRP redirect message is sent to the spokes to trigger spoke to spoke
tunnels. The Hub provides the Public/NBMA addresses (the real IP addresses
on the Internet) of the spokes to each other.
Since the next hop in the routing table of the spokes is the Hub's tunnel
address, spokes don't have to have each other's specific next hop IP
address. This allows summarization and default routing in Phase 3.
The Hub can send a summary or just a default route down to the spokes.
Hence, Phase 3 is extremely scalable.
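As an illustration of why Phase 3's summarization matters, the Hub can replace many spoke LAN routes with a single summary; the spoke prefixes below are hypothetical, chosen only for the sketch:

```python
import ipaddress

# Hypothetical LAN prefixes behind four DMVPN spokes:
spoke_lans = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]

# In Phase 3 the Hub can advertise one summary (or just a default route)
# down to the spokes instead of every individual spoke prefix:
summary = list(ipaddress.collapse_addresses(spoke_lans))
print([str(n) for n in summary])  # ['10.1.0.0/22']
```

In Phase 2 each spoke would have to carry all four specific routes with the other spokes' next hops preserved; in Phase 3 one summary (or a default) from the Hub is enough.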
GETVPN
GETVPN is an any-to-any tunnelless VPN technology; there is no tunnel
configuration in GETVPN.
Some of the characteristics of GETVPN are:
It uses IPSEC tunnel mode.
GETVPN is a Cisco proprietary protocol, but the concept of any-to-any
IPSEC is supported by other vendors under different names as well.
It can run over a private network only; it cannot run over the Public
Internet due to IP header preservation.
In the picture below, the GETVPN header is shown. You can see that it is
very similar to IPSEC tunnel mode, but the original IP header is preserved as
the outer header. This is different from regular IPSEC tunnel mode, which
adds a new outer IP header in front of the ESP header.
One of the common questions from customers in real life, as well as in
the design exam, is the GETVPN to DMVPN comparison. For many design
criteria, you should know the pros and cons of each technology.
Design Requirement — DMVPN vs. GETVPN

Scalability:
DMVPN: Scalable.
GETVPN: Much more scalable than DMVPN.

Working on Full Mesh topology:
DMVPN: Permanent hub and spoke tunnels and on-demand spoke to spoke tunnels; it works but with limited scalability.
GETVPN: It works perfectly if the underlying routing architecture is a full mesh topology; GETVPN needs underlay routing.

Working on Hub and Spoke:
DMVPN: Works very well.
GETVPN: Works very well.

Suitable on private WAN:
DMVPN: Yes.
GETVPN: Yes.

Suitable over Public Internet:
DMVPN: Yes.
GETVPN: No. GETVPN cannot run over the Public Internet because of IP header preservation.

Endpoint discovery:
DMVPN: Uses underlay routing to set up the mGRE tunnels; for private address discovery it uses NHRP (Next Hop Resolution Protocol).
GETVPN: It uses underlay routing to create the VPN; there are no overlay tunnels.

Tunnel Requirement:
DMVPN: Yes, it uses mGRE (Multipoint GRE) tunnels to create overlays.
GETVPN: It is a tunnelless VPN; it uses the underlying routing to encrypt the data between endpoints.

Standard Protocol:
DMVPN: No, Cisco proprietary.
GETVPN: No, Cisco proprietary, but Juniper also supports the same idea with its Group VPN feature.

Staff Experience:
DMVPN: Not well known.
GETVPN: Not well known.

Overlay Routing Protocol Support:
DMVPN: Routing protocols other than IS-IS are supported; IS-IS runs on top of Layer 2, but only IP protocols can run over DMVPN.
GETVPN: It is a tunnelless VPN, so routing protocols cannot run on top of GETVPN, but it requires underlying routing protocols to set up the communication.

Required Protocols:
DMVPN: NHRP and mGRE.
GETVPN: GDOI and ESP.
GRE VS. MGRE VS. IPSEC VS. DMVPN AND GETVPN DESIGN
COMPARISON
The table below summarizes the similarities and differences of all the
VPN technologies that have been discussed in this chapter in great detail.
Network designers should know the pros and cons of the technologies, the
protocol alternatives and their capabilities from the design point of view.
Many design requirements are given in the table below, and an
explanation is shared for each technology.
Design Requirement — GRE vs. mGRE vs. IPSEC vs. DMVPN vs. GETVPN

Scalability:
GRE: Not scalable; a point to point technology.
mGRE: Scalable; one tunnel interface for multiple tunnel endpoints.
IPSEC: Not scalable; a point to point technology.
DMVPN: Scalable for routing but not scalable for IPSEC; DMVPN is used with IPSEC in general.
GETVPN: Very scalable technology.

Working on Full Mesh topology:
GRE: It works, but it is not scalable if there are too many devices to connect.
mGRE: It works very well on a full mesh topology.
IPSEC: It works, but it is not scalable if there are too many devices to connect.
DMVPN: Permanent hub and spoke tunnels and on-demand spoke to spoke tunnels; it works but with limited scalability.
GETVPN: It works perfectly if the underlying routing architecture is a full mesh topology; GETVPN needs an underlay routing protocol.

Working on Hub and Spoke:
GRE: Yes, it is suitable on Hub and Spoke.
mGRE: Yes, works well.
IPSEC: It works but requires too much processing power on the Hub site.
DMVPN: It works but requires too much processing power on the Hub site from the IPSEC point of view; for the routing it works very well.
GETVPN: Works very well.

Suitable on private WAN:
GRE: Yes. mGRE: Yes. IPSEC: Yes. DMVPN: Yes. GETVPN: Yes.

Suitable over Public Internet:
GRE: Yes. mGRE: Yes. IPSEC: Yes. DMVPN: Yes.
GETVPN: No. GETVPN cannot run over the Public Internet because of IP header preservation.

Endpoint discovery:
GRE: Tunnel source and destination need to be manually defined.
mGRE: The tunnel destination is not specified manually; it is automatic.
IPSEC: Manual configuration.
DMVPN: Uses underlay routing to set up the mGRE tunnels; for private address discovery it uses NHRP (Next Hop Resolution Protocol).
GETVPN: It uses underlay routing to create the VPN; there are no overlay tunnels.
Question 1:
Which statements below are true for DMVPN? (Choose Two)
A. DMVPN can work over IPv6
B. IPv6 can work over DMVPN
C. OSPF and IS-IS as link state protocols can run over DMVPN
D. DMVPN cannot run over Internet since there may not be static
Public IP address at every spoke sites
Answer 1:
As mentioned in this chapter, DMVPN can work over IPv4 and
IPv6, and both IPv4 and IPv6 can run on top of DMVPN.
IS-IS cannot work over DMVPN; that's why Option C is incorrect.
DMVPN can run over the Internet; spoke sites don't have to have static
Public IP addresses.
That's why the answer to this question is A and B.
Question 2:
Which statements below are true for GRE tunnels? (Choose Three)
A. Any routing protocols can run on top of GRE tunnels
B. Multicast can run on top of GRE tunnels
C. GRE tunnels are multi point to multi point tunnels
D. Non-IP protocols are supported over GRE tunnels
E. From the processing point of view, for the devices, GRE
encapsulation and decapsulation is harder than IPSEC
encryption/decryption
Answer 2:
Any routing protocol can run on top of GRE tunnels, including IS-IS.
Multicast can run as well.
GRE tunnels are point-to-point tunnels, and non-IP protocols are
supported over GRE tunnels.
From the processing point of view, the most CPU-intensive task is
encryption, not the GRE encapsulation.
That's why the answer to this question is A, B and D.
Question 3:
Which option below is true for GETVPN over DMVPN for Internet
deployment?
Answer 3:
GETVPN and DMVPN can work together; thus Option A is incorrect.
GETVPN cannot work over the Internet on its own, that's true, but the
question is asking about a specific deployment, GETVPN over DMVPN,
which can work over the Internet. That's why Option B is incorrect.
GETVPN brings scalability for the IPSEC part when it is used together
with DMVPN.
The only correct option is C, since the GETVPN key servers would be
placed in a public location, which is a security risk.
Question 4:
Which statements below are true for IPSEC VPN? (Choose Two)
Answer 4:
Multicast and routing protocols cannot run over IPSEC tunnels.
IPSEC tunnels are point-to-point tunnels.
There are no LISP tunnels for IPSEC to run over, but the wording could be
that IPSEC can run together with LISP, and there are real-world
deployments with LISP and GETVPN (multipoint IPSEC).
That's why the correct answer to this question is A and B.
Question 5:
Which options below are important in GRE tunnel deployment? (Choose
Two)
A. GRE Tunnel endpoints shouldn’t be learned over the tunnel
B. GRE tunnel endpoints are manually configured
C. IPSEC is enabled by default on GRE tunnels
D. Tunnel destination address is learned through an Authoritative
server
Answer 5:
The tunnel destination address is not learned through an authoritative
server in GRE tunnels; this is done in LISP, for example. IPSEC is not
enabled by default with GRE tunnels.
One of the most important design considerations with GRE tunnels is that
the tunnel endpoint/destination address shouldn't be learned over the tunnel
itself. Otherwise the tunnel comes up and goes down repeatedly; it flaps.
The other correct answer is B, because GRE tunnels are manual tunnels,
which require manual tunnel destination configuration.
Thus the correct answer to this question is A and B.
Question 6:
Which options below are true for DMVPN deployments? (Choose Three)
A. DMVPN can interoperate with the other vendors
B. DMVPN supports non-IP protocols
C. DMVPN supports multicast replication at the Hub
D. DMVPN can run over IPv6 transport
E. DMVPN supports full mesh topology
Answer 6:
DMVPN is a Cisco proprietary solution that cannot interoperate with
other vendors' VPN solutions. It only supports IP tunneling, as explained
in this chapter; thus IS-IS is not supported over DMVPN.
DMVPN supports multicast, but replication is done at the Hub; it is not
native multicast as is the case in GETVPN.
DMVPN can run over IPv6 transport, and Hub and Spoke, partial mesh
and full mesh topologies can be created with DMVPN, as explained in this
chapter.
That's why the correct answer to this question is C, D and E.
Question 7:
Which options below are true for the DMVPN vs. GETVPN comparison?
(Choose Three)
A. From the IPSEC scalability point of view, GETVPN is much better
than DMVPN
B. DMVPN provides multipoint to multipoint topology but
GETVPN cannot
C. DMVPN is a tunnel based technology but GETVPN is a
tunnelless technology
D. DMVPN is a Cisco proprietary technology but GETVPN is
standard based
E. DMVPN can run over the Internet but GETVPN cannot.
Answer 7:
From the IPSEC scalability point of view, GETVPN is much better
compared to DMVPN, since GETVPN uses a group key.
DMVPN is a tunnel-based technology and GETVPN is tunnelless. Both
are Cisco proprietary.
DMVPN can run over the Internet, but GETVPN, due to IP header
preservation, cannot.
Both can support multipoint to multipoint topology.
That's why the correct answer to this question is A, C and E.
Question 8:
Which statements below are true for GETVPN? (Choose Three)
A. It is a tunnelless technology
B. It uses GDOI for key distribution
C. Multicast replication is done at the HUB
D. It cannot run over Public Internet due to IP Header preservation
E. OSPF can run over GETVPN
Answer 8:
GETVPN is a tunnelless technology, but routing protocols cannot run over
GETVPN; GETVPN relies on the underlying routing protocol, since it does
not create an overlay. It uses GDOI for the key distribution, as explained in
this chapter.
Multicast is native in GETVPN, so there is no Hub multicast replication
as is the case in DMVPN.
It cannot run over the Public Internet due to IP header preservation.
That's why the correct answer to this question is A, B and D.
Question 9:
Which statements below are true for LISP? (Choose Three)
A. LISP is a MAC in MAC encapsulation mechanism.
B. LISP can encapsulate IPv6 in IPv4
C. LISP can encapsulate IPv4 in IPv6
D. LISP can provide IP mobility
E. LISP comes with encryption by default
Answer 9:
LISP is an IP in IP encapsulation mechanism, which allows IP mobility.
It can encapsulate IPv6 in IPv4 packets and vice versa.
It doesn't come with IPSEC encryption by default.
That's why the correct answer to this question is B, C and D.
Question 10:
Which statements below are true for the DMVPN Phases? (Choose Four)
A. DMVPN Phase 1 supports spoke to spoke dynamic tunnels
B. DMVPN Phase 1 uses permanent point to point GRE tunnels on
the spokes
C. DMVPN Phase 2 requires IP next hop preservation
D. DMVPN Phase 3 allows summarization
E. Only DMVPN Phase 2 and Phase 3 supports dynamic spoke to
spoke tunnels thus full mesh topology can be created
Answer 10:
DMVPN Phase 1 doesn't support spoke-to-spoke dynamic tunnels. It uses
point-to-point permanent GRE tunnels on the spokes; the Hub still uses an
mGRE tunnel, though.
DMVPN Phase 2 requires IP next hop preservation: the Hub doesn't
change the IP next hop to itself.
DMVPN Phase 3 allows summarization.
And only DMVPN Phase 2 and Phase 3 support dynamic spoke-to-spoke
tunnels, which allow a full-mesh topology to be created.
That's why the answer to this question is B, C, D and E.
CHAPTER 7
IPV6 DESIGN
This chapter will start with a basic IPv6 definition. Then the sections below
will be covered:
IPv6 Business Drivers
IPv6 Address Types
IPv6 Routing Protocols Overview
IPv6 Design and Deployment Methodology
IPv6 Transition Mechanisms
IPv6 Case Study
IPv6 Design Review Question and Answers
WHAT IS IPV6?
Design Requirement — OSPFv2 vs. OSPFv3

Scalability:
OSPFv2: Good.
OSPFv3: Better, since the Router and Network LSAs don't contain prefix information but only topology information.

Working on Full Mesh:
OSPFv2: Works well with mesh groups.
OSPFv3: Works well with mesh groups.

Working on Hub and Spoke:
OSPFv2: Works poorly; requires a lot of tuning.
OSPFv3: Works poorly; requires tuning.

Troubleshooting:
OSPFv2: Easy.
OSPFv3: Harder; it requires understanding IPv6 addressing, but after that it is the same packet types, LSA, LSU, DBD.

Routing Loop:
OSPFv2: Inter-area prefixes should be received from the ABR; all non-backbone areas should be connected to the backbone area.
OSPFv3: Same as OSPFv2. Inter-area prefixes should be received from the ABR; all non-backbone areas should be connected to the backbone area.
MPLS Transport:
6PE and 6VPE are the best strategies for the MPLS based backbone
infrastructure. These will be explained later in this chapter in detail.
1. Network Readiness Assessment:
• You should check whether your infrastructure is ready for IPv6.
• What can run IPv6 today and what needs to be upgraded?
Documentation is key for IPv6 deployment. Good documentation
helps the IPv6 deployment a lot.
For the network infrastructure, you should first check whether the routers
are IPv6 capable, because if you cannot route the packets across the network
infrastructure, your servers and systems will not be able to serve.
Critical systems such as DNS, E-mail and others can be checked too, but
this can be done at a later stage as well.
Remote offices, POP locations, Datacenters and other places in the
network should be analyzed.
Hardware capabilities of the devices such as RAM, Flash Memory, CPU
and also software versions should be documented.
You should know which technologies, protocols and the features are used
in IPv4 infrastructure. Because you will look whether your software and
hardware will support those in IPv6.
Does the existing software provide the IPv6 features that you use in your
IPv4 infrastructure?
Maybe a software upgrade will give you the capabilities that you want. Or
maybe upgrading the software could provide the capability you need, but you
don't have enough memory to upgrade.
In Enterprise networks, application owners should check whether their
applications are IPv6 capable. Maybe they are not, and the hardware needs to
be changed.
As you can see, all this information will be needed.
Many vendors provide automation tools with which you can collect all
this information.
2. Network Optimization and Garbage Collection
• If you have finished the first step, the Network Readiness Assessment,
the network is ready for IPv6. But before starting the technical IPv6
tasks, we may want to optimize our existing network.
Network optimization means checking the best practices for the
technologies, looking for optimal routing, removing unused features, securing
the infrastructure and so on.
If you are still running RIPv2, for example, you may want to migrate to
another protocol.
IPv4 might have been deployed on the network for many years, and you
probably haven't looked at optimization.
IPv6 deployment is a good time to optimize the existing network so IPv6
can work on a clean infrastructure.
We should avoid the mistakes that were made in IPv4.
3. IPv6 Address Procurement
IPv6 addresses can be received either from ISPs (Local Internet
Registries) or from an RIR (Regional Internet Registry).
Regional Internet Registries (ARIN, APNIC, RIPE and so on) assign a /32
to the Service Providers. This provides 65k /48 subnets. If a company
requires more, it can get more as well.
If the IPv6 address space is received from an ISP, the allocation policy in
general is a /48. This provides 65k /64 subnets.
The multihoming issue in IPv4 is the same in IPv6.
If an Enterprise company is looking for multihoming, the address space
should be received from the RIR to avoid readdressing and other issues.
When the prefixes are received from the RIR, those prefixes are called
Provider Independent (PI) prefixes. This is also known as PI space.
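The subnet counts above follow directly from the prefix lengths; a quick check with Python's ipaddress module, using 2001:db8::/32, the IPv6 documentation prefix, as a stand-in allocation:

```python
import ipaddress

# An RIR /32 allocation to a Service Provider yields 2^(48-32) /48s:
provider = ipaddress.ip_network("2001:db8::/32")
print(sum(1 for _ in provider.subnets(new_prefix=48)))   # 65536 /48 sites

# A /48 assignment to an end site yields 2^(64-48) /64 LAN subnets:
site = ipaddress.ip_network("2001:db8:1::/48")
print(sum(1 for _ in site.subnets(new_prefix=64)))       # 65536 /64 LANs
```

In both cases the 16-bit gap between prefix lengths is what gives the "65k" figure (2^16 = 65536).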
4. IPv6 Addressing Plan
When creating an IPv6 addressing plan, there are a couple of things that
need to be considered by every business:
A scalable plan should be created
IPv6 addresses should be assigned at the nibble boundary
BCP (Best Current Practices) should be known
Address space can be distributed based on function
Let's take a look at why assigning IPv6 addresses at the nibble boundary is
important.
Assigning IPv6 Addresses at the Nibble Boundary
IPv6 offers network engineers a more flexible addressing plan.
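A nibble is one hex digit (4 bits), so a nibble-aligned split keeps subnet boundaries visible in the written address. A small sketch, again using the documentation prefix rather than a real allocation:

```python
import ipaddress

site = ipaddress.ip_network("2001:db8::/48")

# /52 is on a nibble boundary (48 + 4): each subnet changes exactly one
# hex digit, so the plan stays human-readable.
nibble_split = list(site.subnets(new_prefix=52))
print(str(nibble_split[1]))   # 2001:db8:0:1000::/52

# /50 is not nibble-aligned: the boundary falls inside a hex digit and the
# subnet identifier is much harder to read off the address.
odd_split = list(site.subnets(new_prefix=50))
print(str(odd_split[1]))      # 2001:db8:0:4000::/50
```

With the /52 plan, the fourth hex group directly names the subnet (0, 1000, 2000, ...), which simplifies operations and troubleshooting.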
Backbone Interfaces:
One /48 is allocated for the whole backbone
This provides 65k subnets
Some multinational companies assign a /48 per region
Summarization between regions provides scalability
Local Area Networks:
A /64 per LAN is assigned
Some networks in real life assign a /96 as well.
Point-to-Point Links:
Best practice is using a /127 for point to point links, as per RFC 6164
Many operators reserve a /64 but assign a /127 for point to point
links
The only available public IP addresses are IPv6 addresses, but the vast
majority of content is still working on IPv4. How can IPv6 users connect
to the IPv4 world, and how can IPv4 users reach IPv6 content? This is
accomplished with the IPv6 transition technologies.
"IPv6 transition technologies" is probably a misleading term, because the
IPv4 infrastructure is not removed with these technologies. "IPv6 integration
technologies" is probably a better term.
But still, throughout this section I will be using "IPv6 transition
technologies".
There are three types of IPv6 Transition Technology.
1. Dual Stack
IPv6 + IPv4
The entire infrastructure is running both IPv4 and IPv6.
2. Tunnels
IPV6 - IPv4 – IPv6
IPv4 – IPv6 – IPv4
Two IPv6 islands communicate over IPv4 part of the network or two IPv4
islands communicate over IPv6 part of the network.
3. Translation
IPv6 – IPv4 (NAT64)
Private IPv4 – Public IPv4 (NAT44)
With translation, an IPv6-only device can communicate with an IPv4-only
device, but each thinks that it is communicating with a device running the
same IP version.
IPv6 Translation Mechanisms:
The most common IPv6 translation mechanism is NAT64 + DNS64.
It replaces the older translation mechanism, NAT-PT. NAT-PT was
deprecated due to DNS security issues; a DNS Application Layer Gateway
was integrated into NAT-PT.
With the NAT64 + DNS64 mechanism, the DNS entity is separated from
the NAT entity.
In this mechanism, an IPv6-only device can communicate with an IPv4-only
device over an IPv6-only network.
In the picture above, the v6 host on the left, which is in the IPv6 network,
wants to communicate with the v4 host, which is in the IPv4 network.
When the IPv6 host wants to communicate with the IPv4 host, it sends a
DNS query. This query passes through the DNS64. The DNS64 then sends
this query to the authoritative DNS server, which is in the IPv4 world.
The authoritative DNS server sends an 'A' record back.
The DNS64 translates this A record into a AAAA record, which is an IPv6
address. It embeds the IPv4 address from the 'A' record into an IPv6 prefix
that is assigned to the DNS64. The resulting IPv6 address is called an IPv6
synthesized address.
Then the packet goes to the NAT64 device; it can use the embedded IPv4
address inside the IPv6 address (the AAAA from the DNS), remove the IPv6
part and create a stateful mapping table.
In this model, the IPv6 host thinks that it communicates with an IPv6 device
(DNS64), and the v4 host thinks that it communicates with an IPv4 device
(the authoritative IPv4 DNS server).
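The synthesis step can be sketched with the well-known NAT64 prefix 64:ff9b::/96 defined in RFC 6052; the IPv4 address below is a documentation example, and real deployments may use a network-specific prefix instead:

```python
import ipaddress

WELL_KNOWN_PREFIX = ipaddress.ip_network("64:ff9b::/96")  # RFC 6052

def synthesize(ipv4_str):
    # DNS64 embeds the 32-bit IPv4 address from the A record into the
    # low-order bits of the /96 prefix to build the AAAA answer.
    v4 = ipaddress.IPv4Address(ipv4_str)
    return ipaddress.IPv6Address(
        int(WELL_KNOWN_PREFIX.network_address) + int(v4))

print(synthesize("192.0.2.33"))  # 64:ff9b::c000:221
```

The NAT64 device later recovers the original IPv4 destination simply by reading the last 32 bits back out of the synthesized address.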
There are many problems with NAT64 + DNS64 translation. This is the
common problem of stateful address translation. Stateful load balancing,
which is one of the techniques used in Enterprise IPv6 Internet Edge only
deployments, also suffers from the same problems.
Stateful IPv6 translation problems:
Traffic has to be symmetrical; stateful devices drop asymmetrical traffic
flows.
Many applications will break because of NAT
Some IPSEC modes don't work through NAT
There will be problems with timing in Mobile networks
Logging will be a problem, and many governments force operators to
provide the logs
There are many more problems specific to different kinds of networks;
these are only the common ones.
Manual Tunnels:
For any type of tunnel, the tunnel endpoints should be known and reachable.
In manual tunnels, the tunnel endpoints are manually configured.
They are mostly used for permanent site-to-site connectivity.
IP-in-IP and GRE are manual tunnels.
6PE and 6VPE, which are the MPLS based tunneling methods, are also
considered manual tunneling technologies.
Automatic Tunnels:
Automatic tunnels are commonly used for transient connectivity. They can
be site-to-site or host-to-host tunnels.
With automatic tunnels, there must be an automatic way to find the
tunnel endpoints.
Every automatic tunneling solution either encapsulates the IPv4 tunnel
endpoints in the IPv6 address or consults an authoritative server for the
tunnel endpoints. (Remember LISP?)
Embedded Tunnel Endpoints Automatic IPv6 Tunneling Mechanisms:
1. 6TO4 TUNNELS
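6to4 is one example of an embedded-endpoint mechanism: the site's public IPv4 address is written into the well-known 2002::/16 prefix, so the remote tunnel endpoint can be derived from the IPv6 destination address itself. A sketch, using a documentation IPv4 address:

```python
import ipaddress

def sixtofour_prefix(public_ipv4):
    # 6to4 (RFC 3056): 2002:<32-bit IPv4 address>::/48.
    # The IPv4 endpoint occupies bits 16..47 of the IPv6 prefix.
    v4 = int(ipaddress.IPv4Address(public_ipv4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

print(sixtofour_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

A 6to4 relay receiving a packet for this prefix can extract 192.0.2.1 from the address and build the IPv4 tunnel automatically, with no per-site configuration.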
3. DUAL STACK
Dual stack is possibly the simplest IPv6 transition mechanism to
implement. Every interface, application and host runs IPv6 and IPv4 at the
same time.
Dual stack operation is driven by DNS.
If the destination address comes from DNS in an A record only, then
communication is done via IPv4.
If the destination address comes from DNS in a AAAA record only, then
communication is done via IPv6.
If both A and AAAA records are returned, most applications prefer IPv6.
But the biggest problem in dual stack is: if there are no more IPv4
addresses available, how can every interface have IPv4 as well? Especially
in Service Provider networks!
Common solution for this issue by many of the companies is CGN
(Carrier Grade NAT), which is also known as LSN (Large Scale NAT).
Carrier Grade NAT is doing the NAT44 operation at large scale, in the
Service Provider network.
Instead of assigning a public IPv4 address to each customer, the Service
Provider assigns a private IPv4 address.
In CGN, the globally unique IPv4 address pool moves from the customer
edge to a more centralized location in the Service Provider network.
There are three CGN architectures.
Three Carrier Grade NAT solutions:
1. NAT444
2. NAT464
3. DS-Lite
1. NAT444
This solution uses three IPv4 layers.
The customer's private IPv4 space is NATed to the Service Provider
assigned private IPv4 space first.
Then a second NAT44 operation is done from the Service Provider
assigned private IPv4 address space to the Service Provider's public IPv4
addresses.
In this solution there are two layers of NAT44: one on the customer CPE,
another in the Service Provider network. A potential problem is that many
applications that work through one layer of NAT will not work through two
layers of NAT.
A second problem is that the Service Provider's private IPv4 address
space can conflict with the customer's IPv4 address space.
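The two NAT44 layers can be sketched as two independent translation tables; the addresses and pools below are illustrative (RFC 1918 customer space, RFC 6598 shared address space for the SP-assigned layer, and a documentation public address):

```python
import itertools

def make_nat44(outside_ip):
    """Return a toy stateful NAT44 translator: (src_ip, src_port) is mapped
    once to (outside_ip, new_port) and the mapping is reused afterwards."""
    table = {}
    ports = itertools.count(1024)
    def translate(src_ip, src_port):
        key = (src_ip, src_port)
        if key not in table:
            table[key] = (outside_ip, next(ports))
        return table[key]
    return translate

# Layer 1: the customer CPE NATs home devices to the SP-assigned address.
cpe_nat = make_nat44("100.64.1.10")   # RFC 6598 shared address space
# Layer 2: the SP CGN NATs that address to a public one.
cgn_nat = make_nat44("203.0.113.5")   # documentation public address

inside = ("192.168.1.20", 51000)      # home device
after_cpe = cpe_nat(*inside)
after_cgn = cgn_nat(*after_cpe)
print(after_cpe, after_cgn)
```

An application that embeds its address in the payload must now survive two independent rewrites, which is exactly why many applications that tolerate one NAT layer fail behind NAT444.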
2. NAT464
Due to the potential address conflict between the customer and the Service
Provider private IPv4 address spaces, another solution proposed by the IETF
was NAT464.
In this solution, the customer's private IPv4 address space is first
translated to IPv6; a NAT46 operation is needed on the customer CPE.
The second translation in this solution is on the Service Provider network:
a NAT64 operation back from IPv6 to IPv4.
This solution requires two translation operations, and nobody
implemented it.
CONCLUSION:
There will always be a need to use all these transition mechanisms
together in the network. Dual-Stack is the hardest IPv6 transition method for
large-scale companies to support, and the IPv6 to IPv4 translation
technologies break most of the applications.
Tunnelling is a solution to support IPv6 over an IPv4 network and can be
the interim solution until dual-stack is enabled on all the nodes and links.
Our end goal shouldn't be IPv6 dual-stack! Our goal is to have an IPv6
only network and remove IPv4 completely. This can only be achieved
together with the networking vendors, Service Providers, operating system
manufacturers, application developers, website owners, CDN companies and
many others.
Otherwise, interim solutions such as CGN or the transfer market (buying
public IPv4 addresses from other companies) only buy time, and without
IPv6 those solutions will get more expensive for the companies day by day.
There are companies which have an IPv6 only network today!
IPV6 REVIEW QUESTIONS
Question 1:
A fictitious Service Provider company has been planning IPv6 access for
their residential broadband customers. Which solutions below don't require
access node changes in the Service Provider domain? (Choose Three)
A. CGN
B. 6rd
C. 6to4
D. IPSEC
E. DS-Lite
F. Dual Stack
Answer 1:
IPSEC is not an option. Dual Stack requires IPv6 support, in addition to
IPv4, everywhere.
DS-Lite requires IPv6 access nodes.
6rd and 6to4 are IPv6 tunnelling mechanisms over an IPv4 Service
Provider infrastructure.
6rd and 6to4 don't require an access node upgrade, such as the DSLAM in
the case of residential broadband.
But both 6to4 and 6rd still require a CPE upgrade at the customer site.
CGN (LSN) doesn't require an access node upgrade either; most
residential equipment already supports NAT44.
Thus the answer to this question is A, B and C.
Question 2:
Which mechanisms below allow an asymmetric IPv6 routing design?
A. 6rd
B. 6to4
C. NAT64 + DNS64
D. DS-Lite
Answer 2:
Asymmetric routing is possible with stateless mechanisms only.
6rd is a stateless tunnelling mechanism.
NAT64 + DNS64 can be stateful or stateless, thus it is not the correct
answer. DS-Lite has a CGN component, which is always stateful.
That's why the answer to this question is A, 6rd.
Question 3:
What is the biggest cost component during IPv6 transition design?
A. CPE
B. Access Nodes
C. Core Nodes
D. Training
E. Application Development
Answer 3:
The biggest cost component is the CPE (Customer Premises Equipment).
In case IPv6 is not supported on the CPE, enabling it in software requires
operational expenses, and changing the hardware requires both operational
and capital expenses.
If a Service Provider needs to change the CPE for 10 million customers
and every CPE costs only $50, $500 million is required for CAPEX alone.
That's why the answer to this question is A, CPE.
Question 4:
Which options below might be possible problems with a NAT64 + DNS64
design? (Choose Three)
A. It may not support IPv4 only applications such as Skype
B. Duplicate DNS entries can come if company has more than one
DNS
C. It doesn’t support DNSSEC
D. It doesn’t translate IPv4 to IPv6
E. Stateful NAT 64 + DNS 64 makes routing design harder
Answer 4:
As explained in the IPv6 chapter, NAT64 + DNS64 may not support
IPv4 only applications such as Skype. Duplicate DNS entries can appear if
the company has more than one DNS, and stateful NAT64 + DNS64 makes
routing design harder.
Thus the correct answer to this question is A, B and E.
Question 5:
If an IPv6 only node needs to reach IPv4 only content, which mechanism
below is used?
A. 6rd tunneling
B. Dual Stack
C. Translation
D. Host to Host tunnelling
Answer 5:
Translation mechanism is needed. Tunnelling cannot solve the problem.
Question 6:
Which options below are used as IPv6 transition mechanisms? (Choose
Three)
A. Dual-Stack
B. Edge to Core IPv6 design approach
C. Tunneling
D. Translation
E. IPv6 Neighbor Discovery
Answer 6:
As explained in detail in the IPv6 chapter, Dual-Stack, Tunneling and
Translation are the IPv6 transition mechanisms.
That's why the answer to this question is A, C and D.
Question 7:
Which subnet mask length is used in IPv6 on point-to-point links for
consistency?
A. /56
B. /64
C. /96
D. /126
E. /127
Answer 7:
/64 is used on IPv6 point-to-point links for consistency.
Although there were discussions around its usage, and some people
initially considered it a waste of address space, the general design
recommendation is to use /64 or /127 for point-to-point links; using /64
everywhere, including point-to-point links, provides consistency.
That’s why the answer to this question is B, /64.
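The address-space trade-off can be checked with Python’s `ipaddress` module (the prefixes below are documentation addresses, chosen purely for illustration):

```python
import ipaddress

# A /127 leaves exactly two addresses, one per end of a point-to-point
# link; a /64 leaves 2**64 addresses but is used everywhere for
# addressing consistency.
p127 = ipaddress.ip_network("2001:db8::/127")
p64 = ipaddress.ip_network("2001:db8::/64")

print(p127.num_addresses)  # 2
print(p64.num_addresses)   # 18446744073709551616, i.e. 2**64
```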
Question 8:
Which IPv6 design method consumes more resources on the network
nodes?
A. Dual-Stack
B. Tunneling mechanisms
C. Translation mechanisms
D. IPv6 only network
E. Carrier Grade NAT
Answer 8:
Dual Stack on the network nodes consumes more CPU and more memory
compared to the tunnelling and translation mechanisms, which are used for
IPv6 transition.
That’s why the answer to this question is A, Dual-Stack.
Question 9:
What does Dual-Stack mean?
A. Enabling IPv6 and IPv4 on all the networking nodes
B. Enabling IPv6 and IPv4 on all the networking nodes and the links
C. Enabling IPv6 and IPv4 on all the networking nodes, links, hosts
and applications
D. Enabling IPv6 and IPv4 on the core, aggregation and access
network nodes
Answer 9:
Dual stack means providing both IPv4 and IPv6 connectivity to all the
networking nodes, links, hosts and applications. That’s why the answer to
this question is C.
Question 10:
A fictitious Service Provider company requires more public IPv4 addresses,
but due to IPv4 exhaustion it cannot receive them from the RIRs. What is the
option for them to continue providing IPv4 services without enabling IPv6 on
the CPE, access and core network?
A. Carrier Grade NAT
B. DS-Lite
C. NAT64 + DNS64
D. 6rd
E. 6to4
Answer 10:
The IPv4 exhaustion problem requires a Carrier Grade NAT solution, which
shares public IPv4 addresses among multiple users by using NAT44 on the
CPE and NAT44 in the SP domain. It is also called Double NAT, Large
Scale NAT, Dual NAT44 or NAT444.
That’s why the answer to this question is A, Carrier Grade NAT.
Question 11:
Which of the below terms are used interchangeably with Carrier Grade NAT
(CGN)? (Choose Three)
A. LSN
B. Double NAT
C. Service Provider NAT
D. CPE NAT
E. NAT 444
Answer 11:
LSN (Large Scale NAT), Double NAT and NAT444 are used
interchangeably with CGN. Thus, the answer to this question is A, B and E.
Question 12:
Which of the below options are used as IPv6 over IPv4 tunnelling
mechanisms? (Choose Two)
A. 6to4
B. 6rd
C. NAT 64 + DNS64
D. DS-Lite
E. MAP-E
F. 464xlat
Answer 12:
Out of the given options, the IPv6 over IPv4 tunnelling mechanisms are 6to4
and 6rd. Among the remaining options, DS-Lite and MAP-E carry the IPv4
service over IPv6, while NAT64 + DNS64 and 464XLAT are translation
mechanisms.
That’s why the answer to this question is A and B.
Question 13:
What are the problems with the Carrier Grade NAT IPv6 design? (Choose
Four)
A. Some applications don’t work behind CGN
B. If users are behind the same LSN, stateful devices might drop
traffic, so traffic is required to go through the CGN node even
between nodes that are behind the same LSN
C. IP address overlapping if the Customer uses the same private
address range as the Service Provider
D. It requires IPv6 on the CPE nodes, thus CPEs have to be
upgraded
E. Since it is stateful, asymmetric traffic is not allowed.
F. Since it is stateless, asymmetric traffic is not allowed.
Answer 13:
Some applications don’t work behind CGN. If users are behind the same
LSN, stateful devices might drop traffic, so traffic is required to go through
the CGN node even between nodes that are behind the same LSN. IP
addresses overlap if the Customer uses the same private address range as the
Service Provider. And since CGN is stateful, asymmetric traffic is not allowed.
The correct answer to this question is A, B, C and E.
Question 14:
What are the problems with dual stack IPv6 design? (Choose Three)
A. It consumes more memory and CPU on the networking nodes
compared to tunnelling and translation mechanisms
B. It doesn’t solve IPv4 address exhaustion problems
C. It requires IPv6 support on all the CPE and Access nodes, which
are the most cost-associated components
D. Troubleshooting-wise it is harder compared to tunnelling and
translation mechanisms
E. All of the above
E. All of the above
Answer 14:
It consumes more memory and CPU on the networking nodes compared to
tunnelling and translation mechanisms. It doesn’t solve IPv4 address
exhaustion problems: CPEs and hosts still require an IPv4 address, and the
host private address is NATed to the CPE public IPv4 address (NAT44). It
requires IPv6 support on all the CPE and Access nodes, which are the most
cost-associated components.
That’s why the answer to this question is A, B and C.
Question 15:
What is the best IPv6 design method for MPLS Layer 3 VPN service?
A. Dual Stack
B. NAT 64 + DNS 64
C. 6rd
D. 6VPE
E. 6PE
Answer 15:
The best IPv6 design method for MPLS Layer 3 VPN service is 6VPE.
That’s why the answer to this question is D, 6VPE.
Question 16:
Which options are the IPv6 Automated Tunneling mechanisms? (Choose
Three)
A. 6rd
B. 6over4
C. 6to4
D. Tunnel Brokers
E. NAT-PT
F. GRE Tunnels
Answer 16:
6rd, 6to4 and 6over4 are the automated IPv6 tunnelling mechanisms.
6over4 requires multicast on the network, thus it is deprecated.
In all three mechanisms the IPv4 address is embedded in the IPv6 address.
Tunnel Broker is a semi-automated mechanism: the authoritative server
provides the tunnel destination address. NAT-PT is a translation mechanism
and, because of security issues, it is deprecated.
GRE Tunnels are a manual tunnelling mechanism.
That’s why the answer to this question is A, B and C.
Question 17:
Service Provider Company wants to implement DPI (Deep Packet
Inspection) node in the network. Which below method would create a
problem?
A. Tunneling
B. Dual-Stack
C. Native IPv4
D. Translation
Answer 17:
Most DPI devices cannot work with the IPv6 tunnelling mechanisms, so
using them with a DPI element can create a problem.
There is no problem with the other options. The correct answer is Option A.
Question 18:
Enterprise Company implemented QoS on their network. Which below
IPv6 design option method doesn’t work well with QoS?
A. Dual Stack
B. Translation
C. IPv6 only
D. Tunneling
Answer 18:
IPv6 tunnelling mechanisms don’t work well with QoS. That’s why the answer to this question is D, Tunneling.
Question 19:
Which of the below options are used for host-to-host IPv6 tunnelling?
A. ISATAP
B. 6to4
C. 6rd
D. Teredo
E. IPv6 DAD
Answer 19:
ISATAP and Teredo are used for host-to-host or host-to-router
tunnelling. Thus the answer to this question is A and D.
Question 20:
Enterprise Company wants to gain experience with IPv6. They have 50 IT
lab facilities and want to access an IPv6 application in the datacenter. They
don’t currently have IPv6 on their network and want immediate access from
the labs to the applications.
Where should they start enabling IPv6?
A. Network Core first, and IT labs should enable IPv6
B. No need for IPv6 on the network, they can use translation
C. IT labs should enable IPv6 and tunnel to the DC
D. Placing a CGN box at a central place is the best design option
for them
Answer 20:
As explained in the IPv6 chapter, they are looking for the Edge to Core
model: IT labs should enable IPv6 and tunnel to the DC. The answer to
this question is C.
Question 21:
Which mechanism can be used to deploy IPv6 services in an IPv4 only
backbone?
Answer 21:
Since the requirement states an IPv4-only backbone, NAT64, 6PE
and DS-Lite cannot be a solution: NAT64 requires an IPv6-only network or
Dual Stack, 6PE requires an MPLS network, and DS-Lite requires an
IPv6-only network.
Yes, NAT64 could be placed at the Internet edge, and the best place for a
NAT64 deployment is the Internet edge according to RFC 7269, but in this
question the requirement says IPv4-only network. That’s why the answer to
this question is C.
Question 22:
An E-commerce company wants to enable IPv6 on their network as soon as
possible. Where would be the best place for them to start, and which solution
would you recommend?
Answer 22:
The requirement states an E-commerce company that wants to enable IPv6
as fast as possible. Dual stack is very time consuming, if not impossible.
Also, since the business is E-commerce, the IPv6 business case for
E-commerce companies is, in general, IPv6 presence.
If Happy Eyeballs is enabled at the customer sites, or IPv6-only users will
reach their site, it is important to have IPv6 presence for E-commerce
companies. Thus, starting from the Internet Edge and enabling NAT64 +
DNS64 is best for the given company and requirements.
Thus, the answer to this question is B.
Question 23:
Which of the below options are critical IPv6 First Hop Security features?
(Choose Three)
A. Suppressing excessive Multicast neighbor discovery messages
B. ARP Inspection
C. Limiting IPv6 Router Advertisements
D. Preventing rogue DHCPv6 assignments
E. Broadcast control mechanism
Answer 23:
There is no ARP in IPv6, so ARP Inspection is unrelated.
There is no Broadcast in IPv6, as explained in the IPv6 chapter, thus
Option E is wrong as well. The remaining three features are critical IPv6
First Hop Security features.
That’s why the answer to this question is A, C and D.
Question 24:
Enterprise Company implemented a dual stack network. It took them a lot of
time to implement dual stack on all their networking nodes, links,
applications, hosts and operating systems. Although their network is 100%
dual stack, they only see 25% IPv6 Internet traffic on their network.
What might be the possible problem?
A. Some of their link for the Internet may not be IPv6 enabled
B. Content which their users try to access is not enabled IPv6
C. Operating system of their users might prefer IPv4 over IPv6
D. They might have Happy Eyeballs enabled and IPv6 might have
priority
Answer 24:
Either the content their users try to access is not IPv6-enabled, or the
operating system of their users prefers IPv4 over IPv6. The answer to
this question is B and C.
Question 25:
Which of the below protocols are used in IPv6 Multicast?
A. MLD
B. Auto-RP
C. MSDP
D. Embedded RP
E. Anycast RP
Answer 25:
MSDP and Auto-RP are not supported in IPv6 Multicast. MLD, Embedded
RP and Anycast RP are the IPv6 Multicast features.
MLD is the IPv6 equivalent of IGMP; whenever there are Layer 2 switches
in an IPv6 Multicast design, MLD snooping should be enabled for optimal
resource usage. Thus the answer to this question is A, D and E.
CHAPTER 8
BORDER GATEWAY PROTOCOL
If the only requirement is to use a routing protocol on the Public Internet,
then the choice is Border Gateway Protocol (BGP). Its scale is extremely
large and it is robust.
BGP works over TCP; that’s why it is considered robust, because TCP is
inherently reliable.
BGP is multi-protocol: with new NLRI it can carry many address
families. Today almost twenty different NLRIs are carried over BGP. New
AFIs and SAFIs are defined for the new address families.
EBGP and IBGP are our main focus. If the BGP connection is between
two different autonomous systems, it is called EBGP (External BGP).
If BGP is used inside an autonomous system with the same AS number
between the BGP nodes, then the connection is called IBGP (Internal BGP).
BGP THEORY
Before starting BGP Theory, EBGP, IBGP and more advanced topics,
some basic BGP definitions should be clear.
In the above topology, when CE1 wants to reach CE2 at the other side of
the network, the packet reaches either R1 or R2. If there are no tunneling
mechanisms such as MPLS, GRE, or any other mechanism, R1 or R2 makes
an IP destination-based lookup and sends the packets to R3 or R4.
If the prefixes behind CE2 are learned by BGP, all the routers have to do
an IP destination-based lookup to see if there is a route for the CE2 prefixes
in the routing table from BGP.
Every router – R1, R2, R3, R4, R5, and R6 – has to run BGP.
If any Layer 3 overlay tunnelling technology runs in the network, then the
routers in the middle, which are R3 and R4, do not have to keep the CE1 and
CE2 prefixes.
R3 and R4 keep only the routing information of the edge nodes. As a
result, R3 and R4 are used for reachability between R1, R2, R5, and R6.
Since MPLS is a tunnelling mechanism that provides Layer 2 or Layer 3
overlay, if MPLS is used in the network, intermediate devices, which are R3
and R4, do not have to run BGP.
R1, R2, R5, and R6 are called edge nodes, and R3 and R4 are known as
core nodes.
That’s why you can have a BGP free core network if MPLS is used in the
network.
A BGP free core means the core nodes of the network don’t have to enable
BGP.
In the above figure, the Customer is connected to the two Internet Service
Providers, which are linked to the same upstream Service Provider, SP3.
The Customer is receiving only a default route and increases the local
preference on the SP2 BGP connection. The Customer wants to reach the
78.100.120.0/24 network, which is a customer of SP1.
The connection will be optimal if the Customer reaches 78.100.120.0/24
network over SP1 link directly. Nonetheless, since the Customer increases
the local preference for the default route over SP2 link – for each prefix –
only SP2 link is used.
And the traffic flow between the Customer and the 78.100.120.0/24
network is Customer- SP2 – SP3 – SP1. SP2 uses its upstream Service
Provider that is SP3.
In the above figure, there is a peering link between SP1 and SP2. The
Customer is still receiving only a default route and uses BGP local preference
150 (by default 100 on the SP1 connection) over SP2. The Customer still
wants to reach the 78.100.120.0/24 network, which is a customer of SP1.
In this case, the traffic flow would be Customer-SP2-SP1.
The peering link between SP1 and SP2 prevents the packets from being
sent from SP2 to SP3.
By default, SP2 prefers the peering link over the upstream link because of
cost reduction. This is almost always the case in real-life BGP design of
Service Providers and will be explained in detail in the BGP Peering section
of this chapter.
But the traffic flow, from the Customer point of view, is still suboptimal
because it is supposed to be directly between the Customer and SP1, not
between SP2 and SP1.
Let’s examine the last topology to see whether the partial routing can
avoid suboptimal BGP routing.
BGP Path Selection with the Default Route +Partial Route
In the above figure, the partial route is received from SP1. Everything
is the same as in the previous topology; only the partial route is added.
To simplify the concept, let’s assume that we are receiving
78.100.120.0/24 network, including the default route, from SP1.
The Customer still uses BGP Local Preference 150 over SP2 link and
BGP Local Preference 100 for the default route. The Customer doesn’t
change BGP local preference for the partial routes; rather, the Customer uses
BGP Local Preference 100 for the 78.100.120.0/24 as well.
But since longest match routing is evaluated and chosen over the local
preference (remember the BGP Best Path Selection steps), the Customer
selects SP1 as the best path for the 78.100.120.0/24 network. The remaining
networks are reached through SP2.
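A minimal sketch of why the partial route wins, using simplified routes and the exit names from the figure (the forwarding logic is reduced to the two relevant steps; real BGP best path selection has many more):

```python
import ipaddress

# Candidate routes as (prefix, local_preference, exit). Longest prefix
# match is evaluated first; BGP attributes such as local preference only
# break ties between routes for the *same* prefix.
routes = [
    ("0.0.0.0/0", 150, "SP2"),        # default route, raised local-pref
    ("78.100.120.0/24", 100, "SP1"),  # partial route, untouched local-pref
]

def best_exit(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(p), lp, exit_)
               for p, lp, exit_ in routes
               if dest in ipaddress.ip_network(p)]
    # Longest prefix first, then highest local preference.
    matches.sort(key=lambda m: (m[0].prefixlen, m[1]), reverse=True)
    return matches[0][2]

print(best_exit("78.100.120.5"))  # SP1: the /24 wins despite lower local-pref
print(best_exit("8.8.8.8"))       # SP2: only the default route matches
```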
Receiving the DFZ, which is the full Internet routing table, allows network
administrators to have the optimal path if there are multiple ISPs or multiple
links. Nonetheless, this benefit is not free.
In sum, the more routes, the more processing power. BGP routers
that have the full Internet routing table require much more memory and CPU
compared to BGP routers that have only the default route or default +
partial routes.
EBGP
EBGP is used between two different autonomous systems. Loop
prevention in EBGP is done with the AS-path attribute, which is why it is a
mandatory BGP attribute. If a BGP node sees its own AS number in the
AS-path of an incoming BGP update message, the message is rejected.
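The loop-prevention rule above can be sketched in a few lines (the AS numbers are made up for illustration):

```python
def accept_ebgp_update(local_as: int, as_path: list[int]) -> bool:
    """EBGP loop prevention: reject the update if our own AS number
    already appears in the AS_PATH attribute."""
    return local_as not in as_path

print(accept_ebgp_update(65001, [65002, 65003]))         # True: no loop
print(accept_ebgp_update(65001, [65002, 65001, 65003]))  # False: own AS seen
```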
BGP traffic engineering sends and receives the network traffic based on
customer business and technical requirements. For example, link capacities
might be different; one link might be more stable than the other or the costs
of the links might be different. In all of these cases, customer may want to
optimize their incoming and outgoing traffic.
For BGP outgoing traffic local preference attribute is commonly used.
BGP inbound traffic engineering can be achieved in multiple ways:
∗ MED (BGP external metric attribute)
∗ AS-path prepending
∗ BGP community attribute
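Of these, AS-path prepending can be sketched as follows, assuming a remote router with equal local preference on both paths (the AS numbers are illustrative):

```python
# By prepending its own AS, a customer makes one advertised path longer,
# so remote networks prefer the other link: with local preference equal,
# the shorter AS_PATH wins in BGP best path selection.
def prepend(as_path: list[int], own_as: int, times: int) -> list[int]:
    return [own_as] * times + as_path

primary = [65010]                      # advertised untouched on link 1
backup = prepend([65010], 65010, 3)    # advertised with 3 prepends on link 2

# A remote router with equal local preference picks the shorter path.
chosen = min([primary, backup], key=len)
print(chosen)  # [65010] -> traffic comes in over link 1
```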
The BGP weight attribute can be used for outgoing traffic optimization as
well, but don’t forget that it is local to the router and many implementations
may not support it. The MED attribute is used between two autonomous systems.
If the same prefix is coming from two different ASes to a third AS, although
you can use the always-compare-MED feature, it is not good practice to enable
it since it can cause the BGP MED oscillation problem.
Creating an inter-domain policy with the MED attribute is not a good
practice.
AS path is mandatory and is carried over the entire Internet, although some
service providers filter excessive prepending.
The local preference attribute is domain-wide and is sent by IBGP neighbors
to each other.
In the above figure, basic MPLS network and its components are shown.
MPLS Layer 3 VPN requires PE router to be the routing neighbor with the
CE routers. It can be static route, RIP, EIGRP, OSPF, IS-IS, or BGP.
IP prefixes are received from the CE routers, and the PE appends an RD
(Route Distinguisher) to the IP prefixes, so completely new VPN prefixes are
created. (IPv4 + RD = VPNv4)
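The RD idea can be illustrated with a toy sketch; the RD values and prefixes below are invented for the example, and real VPNv4 NLRI is a binary encoding, not a string:

```python
# VPNv4 sketch: RD + IPv4 prefix = a VPN-unique route.
def vpnv4(rd: str, ipv4_prefix: str) -> str:
    return f"{rd}:{ipv4_prefix}"

# Two customers can use the same private prefix; distinct RDs keep the
# resulting VPNv4 routes unique inside MP-BGP.
cust_a = vpnv4("64512:100", "10.0.0.0/24")
cust_b = vpnv4("64512:200", "10.0.0.0/24")
print(cust_a)           # 64512:100:10.0.0.0/24
print(cust_a != cust_b) # True: same IPv4 prefix, different VPNv4 routes
```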
PE routers re-originate all the customer prefixes regardless of their origin
(static, redistribution, or PE-CE OSPF/IS-IS/EIGRP/BGP) and advertise them
to all MP-IBGP peers, setting the BGP next hop to themselves. As for the
IP network, you don’t need to do this operation manually.
Also, there can be a fourth model, which requires shifting traffic between
links with any of the three options, such as pushing traffic away from
over-utilized links.
Let’s look at each option in detail.
In the above figure, Load Sharing BGP Internet edge design model is
shown.
In this model, rather than having a default route, the full Internet routing
table is received from all available uplinks. Having the full Internet routing
table provides the ability to send the traffic to the most optimal or closest
exit point.
In the previous design model, the Load Sharing Design Model, a default
route was received from the Service Providers and, if necessary, more specific
prefixes were created in the routing table. More configuration is necessary in
that model, but having a smaller routing table gives the ability to use
lower-end devices at the Internet edge. This model requires the full Internet
routing table, which means more memory and processing power, but allows
utilizing the best path without doing too much configuration.
In all of the above models, when there is congestion on the link or some
of the links, some amount of traffic might be shifted to less utilized ones.
This might be the case even in the Primary/Backup model.
Shifting traffic from the more utilized link to the less utilized link is
called Traffic Dialing. Inbound traffic is shifted to the backup link or
underutilized links by advertising some, but not all, destinations as more
preferred across the links to be utilized.
These destinations will be more preferred due to being more specific,
having a higher local-preference, shorter AS-path, or lower MED value, etc.
Outbound traffic is pushed away from the over-utilized links by increasing
the IGP distance to the over-utilized links for some sources (Hot Potato
Routing). Outbound traffic can also be shifted away from the over-utilized
links by depreferencing some inbound BGP paths associated with the
over-utilized links.
Let’s have a look at how AS-path prepending is used in BGP Best Path
design model for inbound path manipulation.
BGP PEERING
BGP Peering is an agreement between different Service Providers. It is an
EBGP neighborship between different Service Providers to send BGP traffic
between them without paying upstream Service Provider.
To understand BGP peering, first we must understand how networks are
connected to each other on the Internet. The Internet is a collection of many
individual networks, which interconnect with each other under the common
goal of ensuring global reachability between any two points.
BGP Peering and Transit Links
BGP Confederation
Prefix p/24 is sent from the RR client to three of the RRs. The route
reflectors have a full mesh among them and send the prefixes to each other.
A BGP route reflector cluster is the collection of BGP route reflectors and
route reflector clients. The RR uses the Cluster ID for loop prevention. RR
clients don’t know which cluster they belong to.
In the above picture, if we had a BGP route reflector instead of the P router,
PE3 wouldn’t receive the backup path, because route reflectors hide the
paths: they select the best path and advertise only the best path to the route
reflector clients.
BGP ROUTE REFLECTOR CLUSTERS
BGP route reflectors, used as an alternate method to full mesh IBGP, help
in scaling.
BGP route reflector clustering is used to provide redundancy in a BGP
RR design. BGP Route reflectors and its clients create a cluster.
In IBGP topologies, every BGP speaker has to be in a logical full mesh.
However, route reflector is an exception.
IBGP router sets up BGP neighborship with only the route reflectors.
SOME TERMINOLOGY FIRST:
The Route Reflector Cluster ID is a four-byte BGP attribute and, by default,
it uses the BGP router ID.
If two routers share the same BGP cluster ID, they belong to the same
cluster.
Before reflecting a route, a route reflector appends its cluster ID to the
cluster list. If the route is originated by the route reflector itself, then the
route reflector does not create a cluster list.
If the route is sent to EBGP peer, RR removes the cluster list information.
If the route is received from EBGP peer, RR does not create a cluster list
attribute.
The cluster list is used for loop prevention only by the route reflectors.
Route reflector clients do not use the cluster list attribute, so they do not
know which cluster they belong to.
If an RR receives a route carrying its own cluster ID in the cluster list, the route is discarded.
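The cluster-list rules above can be sketched as a toy model (cluster IDs and the prefix are illustrative):

```python
# Route-reflector loop prevention sketch. Before reflecting, an RR
# prepends its cluster ID to the CLUSTER_LIST; a received route carrying
# the RR's own cluster ID is discarded.
def reflect(route: dict, cluster_id: str) -> dict:
    reflected = dict(route)
    reflected["cluster_list"] = [cluster_id] + route.get("cluster_list", [])
    return reflected

def accept(route: dict, cluster_id: str) -> bool:
    return cluster_id not in route.get("cluster_list", [])

r = reflect({"prefix": "192.0.2.0/24"}, "1.1.1.1")  # reflected by RR1
print(accept(r, "2.2.2.2"))  # True: different cluster, route is kept
print(accept(r, "1.1.1.1"))  # False: own cluster ID seen, route discarded
```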
Let’s start with the basic topology.
ROUTE REFLECTOR USES SAME CLUSTER ID
In the diagram shown above, R1 and R2 are the route reflectors, and R3
and R4 are the RR clients. Both route reflectors use the same cluster ID.
Green lines depict physical connections. Red lines show IBGP connections.
Assume that both route reflectors use cluster ID 1.1.1.1, which is R1’s
router ID.
R1 and R2 receive routes from R4.
R1 and R2 receive routes from R3.
Both R1 and R2, as route reflectors, append 1.1.1.1 as the cluster ID
attribute in the routes they send to each other. However, since they use the
same cluster ID, they discard each other’s routes.
That’s why, if RRs use the same cluster ID, RR clients have to connect to
both RRs.
In this topology, routes behind R4 are learned by R1 only from the direct
R1-R4 IBGP session (R1 rejects the routes from R2). Of course, the IGP path
goes through R1-R2-R4, since there is no physical path between R1 and R4.
If the physical link between R2 and R4 goes down, both IBGP sessions,
R1-R4 and R2-R4, go down as well.
Thus, the networks behind R4 cannot be learned.
Since the routes cannot be learned from R2 (the same cluster ID), if the
physical link stays up but the IBGP session between R1 and R4 goes down,
the networks behind R4 will not be reachable either. But if the BGP
neighborship is between loopbacks and the physical topology is redundant,
the chance of an IBGP session going down is very low.
Note: Having redundant physical links in a network design is a common
best practice. That’s why the below topology is a more realistic one.
WHAT IF WE ADD A PHYSICAL LINK BETWEEN R1-R4 AND R2-R3?
Figure-2 Route Reflector uses same cluster-ID, physical cross-connection is
added between the RR and RR clients
When route reflectors are deployed in the network, they only send the
best path to clients. Some applications, such as Fast Reroute, Multipath and
Optimal Routing as explained before, require more than one best path to be
advertised from the Route Reflectors to the Route Reflector clients. There are
many approaches for that, as explained earlier in the chapter. The below
table summarizes the similarities and differences of these design models
in great detail.
In MPLS deployment, the Unique RD per VRF per PE is the best method.
Network designers should know the pros and cons of the technologies,
protocol alternatives and their capabilities from the design point of view.
Best in MPLS:
∗ BGP Add-Path: No
∗ BGP Shadow RR: No
∗ BGP Shadow Sessions: No
∗ Unique RD per PE per VRF: Yes
How many IBGP sessions between RR and RR client:
∗ BGP Add-Path: One IBGP session; Path IDs are different for different next hops
∗ BGP Shadow RR: Only one more RR (the Shadow RR), which sends the second best path; two IBGP sessions on the RR client, one for each RR
∗ BGP Shadow Sessions: One session per next hop; multiple separate IBGP sessions are required between the RR and the RR client
∗ Unique RD per PE per VRF: One IBGP session between the VPN RR and the RR client; different RDs make the same IP prefixes unique
Resource requirement:
∗ BGP Add-Path: Best
∗ BGP Shadow RR: Worst; requires a separate RR and an IBGP session per next hop
∗ BGP Shadow Sessions: Better than Shadow RR because it doesn’t require a separate Route Reflector; worse than Add-Path because it requires an extra IBGP session per next hop
∗ Unique RD per PE per VRF: Same as Add-Path; doesn’t require an extra IBGP session or Route Reflector
Migration of existing Route Reflectors:
∗ BGP Add-Path: Very hard; all Route Reflectors and clients need to be upgraded to support Add-Path
∗ BGP Shadow RR: Easy; only Route Reflector code needs to be upgraded
∗ BGP Shadow Sessions: Easy; only Route Reflector code needs to be upgraded
∗ Unique RD per PE per VRF: Easiest because there is no upgrade on any device; only a unique/separate Route Distinguisher needs to be configured on the PEs per VRF
Standard protocol:
∗ All four options: Yes, IETF Standard
Staff experience:
∗ BGP Add-Path: Not well known
∗ BGP Shadow RR: Not well known
∗ BGP Shadow Sessions: Not well known
∗ Unique RD per PE per VRF: Known
Troubleshooting:
∗ BGP Add-Path: Hard; the default BGP behaviour of advertising only one best path changes, and operations staff needs to learn new troubleshooting skills
∗ BGP Shadow RR: Easy
∗ BGP Shadow Sessions: Easy
∗ Unique RD per PE per VRF: Easy
IPv6 support:
∗ All four options: Yes
BGP Add-Path vs. BGP Shadow RR vs. BGP Shadow Sessions vs. Unique RD
per VRF per PE Design Comparison
BGP ROUTE REFLECTOR CASE STUDY
BGP Route Reflector logical topology should follow physical topology in IP
backbones
Full-Mesh IBGP vs. BGP Route Reflectors vs. BGP Confederation Design
Models Comparison
The illustration depicts the AS 100 and AS 200 connections. They have a
BGP peer (customer-transit) relationship in two locations, San Francisco and
New York.
IGP distances are shown in the diagram. Since there is no special
BGP policy, the hot potato rule applies, and the egress path will be chosen
by AS 200 and AS 100 based on IGP distances.
BGP Hot and Cold Potato Routing
In this diagram, egress traffic from AS 200 is the green arrow, since the SF
path has the shorter IGP distance. Ingress traffic to AS 200 from AS 100 is
the blue arrow, since the NYC connection from AS 100 has the shorter IGP
distance (40 vs. 200).
AS 200 is complaining about the performance and is looking for a
solution to fix this behavior. What would you suggest to AS 200?
Customer AS 200 should force AS 100 to do cold potato routing. Since
they are a customer, their service provider has to do cold potato routing for
them.
By forcing cold potato routing, AS 200 makes AS 100 carry the Web
content traffic to the closest exit point to AS 200, which is San Francisco.
That’s why AS 200 sends its prefixes from SF with a lower MED than
from NYC, as depicted below.
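The MED mechanics can be sketched as follows (the MED values are made up; among paths from the same neighboring AS, the lower MED is preferred):

```python
# Cold-potato sketch: AS 200 advertises its prefixes with a lower MED on
# the SF session, so AS 100 carries the traffic across its own backbone
# and exits in San Francisco.
paths = [
    {"exit": "SF", "med": 10},    # advertised with the lower MED
    {"exit": "NYC", "med": 100},
]

best = min(paths, key=lambda p: p["med"])  # lower MED wins
print(best["exit"])  # SF
```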
Problem with the Hot Potato Routing
In the above figure, there are two Service Providers, SP1 and SP2. They
have three peering connections in different places.
There is a traffic demand between the two routers, as depicted in the
figure. Both Service Providers would do Hot Potato routing because the
links are peering links, not customer-provider connections.
In this case SP1 to destination traffic would be similar to the Green path.
SP2 to destination traffic would be similar to the Blue Path.
But what if they coordinated and sent the traffic towards
peering link 2 for this traffic demand?
If they coordinated and didn’t do Hot Potato routing, the traffic
pattern would be as in the below figure. Obviously, the below traffic pattern
would be beneficial for both providers.
In the above figure, their internal uplinks are used more than in the below
one. I call the below option Coordinated Routing.
In the above figure, R1, R2, R3, R4, R5 and the RR (Route Reflector)
belong to AS 100; R6 and R7 belong to AS 200.
There are two EBGP connections between ASBRs of the Service
Providers.
Everybody has told you so far that BGP converges slowly, because BGP is
good for scalability, not for fast convergence, right?
But that is wrong too.
If BGP relies on the control plane to converge, of course it will be slow,
since the default timers are long (BGP MRAI, BGP Scanner and so on,
although you don’t need to rely on them, as I will explain now).
The default free zone is already more than 500K prefixes, so approximately
we are talking about 50 MB of data from each neighbor; it takes time to
process. If you have multiple paths, the amount of data that needs to be
processed will be much higher.
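As a back-of-the-envelope check of the figure above, assuming roughly 100 bytes per prefix (an assumption for illustration, not a measured number):

```python
# 500K DFZ prefixes at an assumed ~100 bytes of attributes/overhead per
# prefix, per full-table neighbor.
prefixes = 500_000
bytes_per_prefix = 100  # illustrative assumption

total_mb = prefixes * bytes_per_prefix / 1_000_000
print(total_mb)  # 50.0 MB per neighbor; each extra path multiplies this
```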
Let’s look at BGP control plane convergence closer.
In a router’s routing table there is always a recursion for the BGP
prefixes. So for the 192.168.0.0/24 subnet the next hop would be 10.0.0.1 if
next-hop-self is enabled; otherwise, since IBGP doesn’t change the BGP next
hop by default when the prefixes are received from an EBGP peer, 5.5.5.5
would be the next hop.
But in order to forward the traffic, the router needs to resolve the immediate
next hop and the Layer 2 encapsulation.
So for the 10.0.0.1 or 5.5.5.5, R1 selects either 172.16.0.1 or 172.16.1.1.
Or R1 can do the ECMP (Equal Cost Multipath).
In many vendors’ FIB implementations, BGP prefixes resolve to the
immediate IGP next hop. Cisco’s CEF implementation works this way too.
This is not necessarily a bad thing, though: it provides better throughput,
since the router doesn’t have to do a double/aggregate lookup. But from the
fast convergence point of view, we need a hierarchical data plane (Hierarchical
FIB). With BGP PIC, both the PIC Core and PIC Edge solutions, you will
have a hierarchical data plane, so for 192.168.0.0/24 you will have 10.0.0.1
or 5.5.5.5 as the next hop in the FIB (same as the RIB).
For 10.0.0.1 and 5.5.5.5 you will have another FIB entry, which points
to the IGP next hops 172.16.0.1 and 172.16.1.1. These IGP next
hops can be used in a load-shared or active/standby manner.
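A toy model of the hierarchical FIB idea, using the addresses from the example above: the point is that an IGP failure rewrites one second-level entry instead of every BGP prefix.

```python
# Hierarchical FIB sketch (BGP PIC idea): a BGP prefix points at a BGP
# next hop; the BGP next hop points at IGP next hops.
fib = {
    "192.168.0.0/24": "10.0.0.1",              # level 1: BGP next hop
}
igp = {
    "10.0.0.1": ["172.16.0.1", "172.16.1.1"],  # level 2: primary + backup
}

def forward(prefix: str) -> str:
    """Resolve a BGP prefix through both FIB levels."""
    return igp[fib[prefix]][0]

print(forward("192.168.0.0/24"))  # 172.16.0.1 (primary IGP next hop)

# A core link to 172.16.0.1 fails: swap only the one IGP-level entry; no
# BGP prefix needs to be touched, so all of them recover at once.
igp["10.0.0.1"].pop(0)
print(forward("192.168.0.0/24"))  # 172.16.1.1 (backup IGP next hop)
```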
BGP PIC Core helps to hide an IGP failure from the BGP process. If the
link between R1-R2 or R2-R3 fails, or R2 or R3 fails as a device, R1 starts
to use the backup IGP next hop immediately.
Since the BGP next hop doesn’t change and only the IGP path changes, the
recovery time will be based on IGP convergence. For BGP PIC Core you
don’t have to have multiple IBGP next hops. BGP PIC Core can handle core
IGP link and node failures.
BGP PIC Edge, on the other hand, provides sub-second BGP recovery
in the case of an edge link or node failure.
In order for BGP PIC Edge to work, the edge IBGP devices (ingress PEs and
ASBRs) need to support BGP PIC, and they also need to receive a backup
BGP next hop.
In the above topology, R1 is the ingress PE, and R4 and R5 are the ASBR
nodes. The Route Reflector is shown in the data path, but that is not
recommended in real network design. Unfortunately, the backup next hop is
not sent when a BGP Route Reflector is introduced, since the RR selects and
advertises only the best path for a given prefix. For example, in the above
topology, R6 and R7 both send the 192.168.0.0/24 network, but R1 can learn
only one exit point (BGP next hop) from the RR, either R4 or R5.
There are many ways to send more than one best path from a BGP RR, as
we have seen earlier in this chapter. But let’s assume R1 learns the
192.168.0.0/24 prefix from R4 and R5 by using one of those ways.
The above topology was common in the past and is still used in some Service
Provider networks. A POP and Core architecture is shown, without MPLS in
the core. The POP has Route Reflectors in the data path; for redundancy
there is more than one Route Reflector. And the routes are summarized at the
Core-to-POP boundary.
In the above figure, for the simplicity there are only 3 POPs, which are
connected to the Core network is shown. Each pop has two RRs, which have
full mesh IBGP sessions between them. In the core, there is PE, which is
connected to the customer and ASBR, which is connected to upstream
provider and receive BGP prefix. In the POP there is full mesh IBGP session
as well.
Note that, there would be second level Hierarchy in the Core as well,
because when the number of POP locations grow, required full mesh IBGP
sessions between RRs would be too much.
For a given prefix, in this picture, we have two paths: Path 1 from POP1
and Path 2 from POP3.
BGP best external in this topology can be enabled in two places: on the
ASBRs and also on the Route Reflectors.
Let's assume Local Preference is set to 200 on the ASBR in POP1 and
100 on the ASBR in POP3. This makes the ASBR in POP1 the overall BGP
best path for the prefix. If BGP best external is enabled only on the ASBRs
but not on the Route Reflectors, then the Route Reflectors in POP1 and
POP2 don't receive the best external path, which is Path 2 from POP3.
But POP3's RR3-A and RR3-B do receive the overall best path (Path 1)
and the best external path (Path 2), simply because the ASBR in POP3 sends
its best external path to its RRs, RR3-A and RR3-B.
In this topology, BGP Add-path could be used to send the best external
path from RR3-A and RR3-B to the POP1 and POP2 Route Reflectors. But
the problem with BGP Add-path is that it requires a software and hardware
upgrade on every PE, ASBR and Route Reflector.
Instead, BGP best external is enabled on the Route Reflectors as well.
This allows RR3-A and RR3-B to send the best external path, which is Path
2, to the POP1 and POP2 RRs. When we have both the overall best path and
the BGP best external path on the RRs, network convergence after the
overall best path goes down is greatly improved, especially when BGP PIC
is used together with BGP best external on the ASBRs and RRs.
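On Cisco IOS, for example, advertising the best external path can be sketched as below on the ASBRs (and, where supported, on the RRs); the AS number is illustrative:

```
router bgp 64512
 address-family ipv4
  ! Advertise the best external path to IBGP peers even when
  ! the overall best path was learned over IBGP
  bgp advertise-best-external
```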
For example, if traffic comes from POP2, which doesn't have an ASBR,
and needs to reach the prefix, RR2-A and RR2-B will have two paths in this
case: the overall best path, Path 1, and the best external path, Path 2.
Both paths would be installed in the RRs' RIB and FIB (with BGP PIC
enabled in addition to BGP best external). In case Path 1 fails, since the best
external path is already in the RIB and FIB, BGP PIC would just change the
pointer to the best external BGP path and you wouldn't even lose a packet.
Working on Full Mesh: OSPF: works well with mesh groups. IS-IS: works
well with mesh groups. EIGRP: works very poorly, and there is no mesh
group. BGP: works very poorly, but the RR removes the full-mesh
requirement.
Working on a Ring Topology: OSPF: okay. IS-IS: okay. EIGRP: not good if
the ring is big, due to the query domain. BGP: good with a Route Reflector.
Working on Hub and Spoke: OSPF: works poorly, requires a lot of tuning.
IS-IS: works badly, requires tuning. EIGRP: works very well, requires
minimal tuning. BGP: IBGP works very well with a Route Reflector.
Suitable in a Datacenter: OSPF: no, DCs are full mesh. IS-IS: no, DCs are
full mesh. EIGRP: no, DCs are full mesh. BGP: yes, in large-scale DCs, and
it is not uncommon.
Standard Protocol: OSPF: yes, IETF standard. IS-IS: yes, IETF standard.
EIGRP: no, there is a draft, but it lacks the stub feature. BGP: yes, IETF
standard.
Staff Experience: OSPF: very well known. IS-IS: not well known. EIGRP:
well known. BGP: not well known.
Security: OSPF: less secure. IS-IS: more secure since it is on Layer 2.
EIGRP: less secure. BGP: secure since it runs on TCP.
Suitable as Enterprise IGP: OSPF: yes. IS-IS: no, it lacks IPsec. EIGRP:
yes. BGP: not exactly, very large-scale networks only.
Question 1:
How can you achieve this?
Prepending will (usually) force inbound traffic from AS 10 to take the
primary link.
The customer purchased a new link from a second service provider,
which uses AS number 30, and decommissioned one of its links to the old
service provider. The customer wants to use the second service provider link
as a backup link. They learned the AS-path prepending trick from earlier
experience.
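As a rough sketch, the prepending on the backup link could look like this on Cisco IOS (the neighbor address and customer AS 65001 are hypothetical; AS 30 is the second provider from the scenario):

```
route-map AS-PREPEND permit 10
 ! Prepend our own AS three times so the backup path looks longer
 set as-path prepend 65001 65001 65001
!
router bgp 65001
 neighbor 192.0.2.1 remote-as 30
 neighbor 192.0.2.1 route-map AS-PREPEND out
```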
Question 2:
Is there a problem with this design?
Question 3:
If there is a problem, how can it be solved?
Answer
There is a problem with the design, since the customer wants to use the
second service provider as a backup, but AS-path prepending used in this
way acts as a form of load balancing.
AS 30 will still send traffic to the backup link, because service providers
prefer customer routes over peer routes by assigning them a higher Local
Preference. Local Preference is considered before AS-path length in BGP
best-path selection, so AS-path prepending has no effect in this design.
The solution is to use communities.
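Typically the provider publishes a community that lowers the Local Preference of tagged routes on its side (the RFC 1998 approach). A Cisco IOS sketch, assuming a hypothetical community 30:80 that Provider AS 30 maps to Local Preference 80:

```
route-map TO-BACKUP-SP permit 10
 ! Tag our routes so the provider sets a below-default Local Preference
 set community 30:80
!
router bgp 65001
 neighbor 192.0.2.1 remote-as 30
 neighbor 192.0.2.1 send-community
 neighbor 192.0.2.1 route-map TO-BACKUP-SP out
```

The actual community values and their meanings are published by each provider; they are not standardized.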
Question 4:
What if the customer uses the second service provider link as primary and
the old provider as secondary, with the second provider peering connection
as depicted in the below topology?
Does community help?
Question 5:
What happens if primary link fails?
Question 6:
What happens when the primary link comes back?
When the primary link comes back, both paths are used for incoming
traffic, because Provider A continues to send traffic towards Provider C; the
community attribute was sent by the Customer to Provider C, not to Provider A.
Solution:
Either Provider C sends Provider A a community attribute for its
customer routes, or the backup BGP link is reset when the primary link
comes back.
Question 2:
How can the company achieve symmetrical traffic flow so they don’t
have traffic drops or performance issues?
They can split their public IP space in half, advertise the specifics from
each datacenter, and advertise the summary from both datacenters as a
backup in case the first DC IGW link or node fails.
Imagine they have a /23 address space: they can divide it into two /24s
and advertise each /24 from the local datacenter only, and the /23 from both
datacenters (the Load Balancing model explained in this chapter). Since
their upstream SP prefers a longest-match route over any other BGP
attribute, traffic returns to the location where it originated.
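For instance, the first datacenter's edge router could originate its own /24 plus the covering /23 along these lines (addresses are illustrative; the static routes to Null0 give the network statements matching routes to advertise):

```
router bgp 65001
 ! DC1 originates its specific /24 and the shared /23 summary;
 ! DC2 would originate 203.0.113.0/24 plus the same /23
 network 203.0.112.0 mask 255.255.255.0
 network 203.0.112.0 mask 255.255.254.0
!
ip route 203.0.112.0 255.255.255.0 Null0
ip route 203.0.112.0 255.255.254.0 Null0
```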
BGP REVIEW QUESTIONS
Question 1:
Which of the below options are reasons to run IBGP? (Choose Two)
A . It is used for the reachability between PE devices in MPLS
network
B. It is used to carry EBGP prefixes inside an Autonomous System
C . It is used with Route Reflectors for the scalability reason in
large scale networks
D . It is used to prevent failures outside your network from
impacting your internal network operation
Answer 1:
One of the correct answers to this question is to carry EBGP prefixes
inside an Autonomous System.
IGP is used for reachability between PE devices in an MPLS network.
Option C is valid but not a correct answer, because the question asks for
reasons, not best practices.
Option D is one of the correct answers as well, because with IBGP the
internal network is protected from outside failures by separating the local
failure domains.
That's why the answers to this question are B and D.
Question 2:
Which of the below options are true for the BGP Route Reflectors?
(Choose Three)
A . Route Reflectors provide scalability in large scale network
design
B. Route Reflectors hide the available paths
C . Route Reflectors selects and advertise only the best path to
Route Reflector clients
D. Route Reflectors can be placed anywhere in the IP backbone as
an IPv4 RR.
Answer 2:
Route Reflectors, as explained in the BGP chapter, are used to improve
the scalability of the BGP design in large-scale deployments.
Route Reflectors hide the available path information by selecting and
advertising only the best path to the clients.
Thus the correct answers to this question are A, B and C.
Option D is wrong because Route Reflectors should follow the physical
topology in an IP backbone; they cannot be placed anywhere, and careful
planning is required. Otherwise a forwarding loop occurs, as explained in
one of the case studies in the BGP chapter.
Question 3:
Which below attributes are commonly used for BGP path manipulation?
(Choose Three)
A. Local Preference
B. Origin
C. As-Path
D. Community
E. Weight
Answer 3:
Origin is not commonly used for BGP path manipulation. Weight is
Cisco proprietary and is only local to the router; it shouldn't be used for
path manipulation.
BGP path manipulation was explained in detail in BGP chapter.
Answer of this question is A, C and D.
Question 4:
Which of the below options is used in the Public Internet Exchange
Points to reduce configuration overhead on the BGP devices?
A. BGP Route Reflectors
B. BGP Prefix Lists
C. BGP Route Servers
D. BGP Map Servers
Answer 4:
There is nothing called BGP Map Servers. In the Public Internet
Exchange points BGP Route Servers are used to reduce configuration
overhead. They improve scalability. Very similar to Route Reflectors but
Route Reflectors are used in IBGP, not in the Public Exchange Points. That’s
why answer of this question is C.
Question 5:
Which below options are true for the BGP Confederation? (Choose
Three)
A. It is done by creating Sub-Autonomous system
B . It is easier to migrate from full-mesh IBGP, compare to BGP
Route Reflectors
C. Between Sub Autonomous Systems mostly EBGP rules apply
D . Compare to BGP Route Reflector design, it is less commonly
deployed in the networks.
Answer 5:
From the migration point of view, moving from full-mesh IBGP to BGP
Confederation is harder, compared to BGP Route Reflectors. Thus Option B
is invalid.
All the other options are correct thus the answer of this question is A, C
and D.
Question 6:
Which below option is used for inbound BGP path manipulation?
A. Local Preference
B. MED
C. As-Path prepending
D. Community
E. Hot Potato Routing
Answer 6:
Hot Potato Routing and Local Preference are used for Outbound BGP
Path manipulation as explained in the BGP chapter in detail.
MED should only be used if there is a single upstream ISP, but it is still
used for inbound path manipulation. AS-path prepending and communities
are used for multihoming connections as well.
That’s why; answer of this question is B, C and D.
Question 7:
Fictitious Service Provider is considering providing an availability SLA
for their MPLS VPN customers. They want to provide sub second
convergence in case link or node failure scenarios.
What would you suggest to this company to achieve their goal? (Choose
Two)
A. Implementing BFD
B. Implementing BGP PIC Core and Edge
C. Implementing BGP Route Reflectors
D. Implementing IGP FRR
Answer 7:
They should implement the BGP PIC features to protect BGP from link
or node failures. Edge node failures in particular couldn't be recovered in
sub-second time, even if MPLS Traffic Engineering or IP FRR is deployed.
Since BGP PIC convergence mostly depends on IGP convergence,
deploying IGP FRR (Fast Reroute) provides the necessary infrastructure
for BGP PIC; they should be deployed together. BFD is just a failure
detection mechanism. IGP convergence depends on tuning many other
parameters, as explained in the IGP chapters of the book.
That's why the answers to this question are B and D.
Question 8:
What does MP-BGP (Multi Protocol BGP) mean?
A. BGP implementation which can converge less than a second
B . BGP implementation which is used in Service Provider
networks
C . BGP implementation which can carry multiple BGP Address
Families
D. BGP implementation which is used in Enterprise Networks
Answer 8:
MP-BGP (Multi Protocol BGP), as explained in the BGP chapter, is the
BGP implementation which can carry multiple Address Families. As of
2016, BGP can carry more than 20 different Address Families, such as IPv4
Unicast, IPv6 Unicast, IPv4 Multicast, L2VPN, L3VPN, Flowspec and so
on.
That's why the answer to this question is C.
Question 9:
What does Hot Potato Routing mean?
A . Sending the traffic to the most optimum exit for the
neighboring AS
B. Sending the traffic to the closest exit to the neighboring AS
C. By coordinating with the neighboring AS, sending traffic to the
closest exit point
D. It is the other name of BGP Multipath
Answer 9:
Hot Potato Routing means sending the traffic to the closest exit point
from the local Autonomous System to the neighboring Autonomous System,
by taking the IGP metric into consideration. It was explained in the BGP
chapter in great detail.
There is no coordination between the Autonomous Systems in the Hot
Potato Routing definition, but a case study on coordinating Hot Potato
Routing was provided in the BGP chapter.
That's why the answer to this question is B.
Question 10:
With which below options, internal BGP speaker can receive more than
one best path even if BGP Route Reflectors are deployed? (Choose Three)
A. BGP Shadow RR
B. BGP Shadow Sessions
C. BGP Add-path
D. BGP Confederation
E. BGP Multipath
Answer 10:
As explained in the BGP Route Reflectors section of the BGP chapter,
Shadow Sessions, Shadow RRs and BGP Add-path designs provide more
than one best path to an internal BGP speaker even if BGP Route Reflectors
are deployed.
BGP Multipath requires more than one best path with all the path
attributes the same; thus it requires one of the above mechanisms. BGP
Confederation doesn't provide this functionality.
That's why the answers to this question are A, B and C.
Question 11:
Which below option is recommended to send more than one best path to
the VPN PEs in the MPLS VPN deployment if VPN Route Reflectors are
deployed?
A. BGP Add-path
B. BGP Shadow RR
C. BGP Full Mesh
D. Unique RD per VRF per PE
Answer 11:
BGP Add-path, BGP Shadow RR and Shadow Session deployments are
suitable for IP backbones.
If there is an MPLS backbone, configuring a unique RD per VRF per PE
is the best and recommended design option, since it needs no software or
hardware upgrade, no additional BGP sessions and so on.
That's why the answer to this question is D.
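A sketch of the unique-RD approach on two PEs sharing the same VRF (all values are illustrative):

```
! PE1
vrf definition CUSTOMER-A
 rd 65000:101
 address-family ipv4
  route-target both 65000:1
!
! PE2: same VRF and route-targets, but a different RD, so the VPN RR
! treats the two advertisements as distinct prefixes and reflects both
vrf definition CUSTOMER-A
 rd 65000:102
 address-family ipv4
  route-target both 65000:1
```

Because the RD is part of the VPNv4 prefix, the RR no longer sees the two PE advertisements as the same route, so both exit points reach the ingress PEs without Add-path.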
Question 12:
What are the reasons to send more than one BGP best path in IP and
MPLS deployment? (Choose Four)
A. BGP Multipath
B. BGP Fast Reroute
C. BGP Multihop
D. Preventing Routing Oscillation
E. Optimal BGP routing
Answer 12:
As explained in the BGP chapter, there are many reasons to send more
than one BGP best path in both IP and MPLS deployments.
These are: avoiding routing oscillations, BGP Multipath, fast
convergence/Fast Reroute and optimal routing.
Sometimes, for optimal routing, sending more than one BGP best path is
not enough; it may require all available paths.
That's why the answers to this question are A, B, D and E.
Question 13:
What is the drawback of sending more than one BGP best path in BGP?
A. More resource usage
B. Sub Optimal Routing
C. Slower Convergence
D. Security Risk
Answer 13:
Sending more than one BGP best path requires more memory, CPU and
network bandwidth, thus more resource usage in the network.
As a rule of thumb, whenever more information is sent, it consumes more
resources, though it may provide optimal routing, better high availability and
better convergence.
All other options are wrong, except Option A.
Question 14:
Which of the below options are advantages of Full Mesh IBGP design
compared to BGP Route Reflector design? (Choose Four)
A. It can provide more optimal routing compare to Route Reflector
design
B . It can provide faster routing convergence compare to Route
Reflector design
C . It provides better resource usage compare to Route Reflector
design
D. It can provide better protection against route churn
E . Multipath information is difficult to propagate in a route
reflector topologies
Answer 14:
Although there are advantages to using BGP Route Reflectors, there are
many drawbacks as well. Route Reflectors can even be more harmful than
deploying full-mesh IBGP if the requirement is optimal routing, faster
convergence and avoiding route churn.
Sending multiple paths is difficult in Route Reflector topologies, since it
requires Shadow Session, Shadow RR or Add-path deployments.
Full-mesh IBGP design consumes more device and network resources
and requires more configuration on the devices compared to Route Reflector
design.
That's why the answers to this question are A, B, D and E.
Question 15:
In the below topology IP backbone is shown. R2 is the RR client of R4
and R3 is the RR client of R1.
What is the next hop of R2 and R3 for the 70.70.0.0/24 prefix?
Answer 15:
Since an IP backbone is given, an IP destination-based lookup is done for
the BGP prefixes.
Since BGP prefixes require recursion and an IGP next hop needs to be
found for them, R2's and R3's IGP next hops for the BGP prefixes must be
found.
On R2, the BGP next hop for the 70.70.0.0/24 prefix is R4, and R2 can
only reach R4 through R3.
Thus, R2's IGP next hop is R3. The same applies to R3.
R2's IGP next hop is R3 and R3's IGP next hop is R2. That's why the
answer to this question is C.
Please note that in this topology the BGP Route Reflectors don't follow
the physical topology, which is against the BGP Route Reflector design
requirement in IP networks.
That's why, in this design, a routing loop occurs between R2 and R3.
The correct design is for R2 to be the Route Reflector client of R1 and
R3 to be the Route Reflector client of R4.
Question 16:
What can be the problem with BGP design in an Enterprise if there is
more than one datacenter?
A. Convergence is very slow
B. Asymmetric routing issues if there are stateful devices
C . Route Reflector deployment is harder compare to SP
deployment
D. Traffic flow cannot be optimized
Answer 16:
All the options are wrong except Option B.
Asymmetric routing can be a problem in an Enterprise design that has
stateful devices, as explained in the BGP chapter, because stateful devices
require symmetric routing for their flow information, and firewalls, load
balancers and IDS/IPS are common elements at the Internet edge or within
the datacenters in Enterprise designs.
In Service Providers, CGN (LSN) is deployed to overcome the IPv4
exhaustion problem, as explained in the IPv6 chapter. These nodes also
require symmetric routing.
The answer to this question is B.
Question 17:
Which below option is true for the VPN Route Reflectors in MPLS
deployments? (Choose Two)
A. It can be deployed in centralize place
B . It doesn’t have to follow physical topology, can have more
flexible placement compare to IP Route Reflectors
C. It is best practice to use VPN Route Reflectors for the IP Route
Reflectors as well
D . It always provides most optimal path to the Route Reflector
clients
Answer 17:
VPN Route Reflectors can be deployed in a centralized place, and they
have a more flexible placement advantage compared to IP Route Reflectors.
The reason is that there is no IP destination-based lookup in MPLS
networks; thus there is no Layer 3 routing loop problem as in the IP Route
Reflector case, which was explained in Answer 15.
It is not best practice to deploy IP and VPN services on the same node;
the reason is explained in Answer 18.
A VPN RR, similar to an IP RR, cannot always provide the most optimal
path to its clients, because it selects the BGP best path from its own point of
view, not from its clients' point of view.
That's why the answers to this question are A and B.
Question 18:
What can be the problem with using IP and VPN Route Reflector on the
same device? (Choose Two)
A. Attack for the Internet service can affect VPN Customers
B. Attack for the VPN service can affect Internet Customers
C. Scalability of the Route Reflectors are reduced
D. They have to participate in the IGP process
Answer 18:
When a Route Reflector is used for more than one service, it is called a
Multi-Service Route Reflector. The problem with running Internet and VPN
services on the same BGP Route Reflector is fate sharing.
Internet-based attacks can affect VPN customers, and any problem with
the VPN service can affect Internet customers. Also, in case of failure, all
the customers fail together.
Thus, using a separate BGP Route Reflector per service is a best practice.
Using Multi-Service RRs doesn't reduce scalability, and when using
Multi-Service RRs they still don't have to participate in the IGP process.
They can be designed as inline RRs that participate in the IGP process in
specific designs such as Seamless MPLS. Seamless MPLS and its five
different variations will be explained in the MPLS chapter.
The answers to this question are A and B.
Question 19:
In the below topology there are two datacenters of the Service Provider. If
the requirement were to provide closest exit for the Route Reflector clients, in
which datacenter would you deploy the Route Reflectors?
A. In West DC
B. In East DC
C. Doesn’t matter the placement
D. Both in East and West DC
Answer 19:
Route Reflectors should be placed in both the East and West DCs.
Otherwise the Route Reflectors would choose the best path from their own
point of view and would send that best path to the Route Reflector clients.
If the RRs were placed only in the West DC, all BGP RR clients in the
East DC would choose the West DC IGWs (Internet Gateways) as the exit
point, and vice versa.
Thus the correct answer to this question is D.
Question 20:
Which below options are true for the BGP PIC deployment? (Choose
Two)
A. BGP PIC can provide sub second convergence even if there are
millions of prefixes in the routing table
B . BGP edge devices don’t have to receive more than one best
path for BGP PIC Edge to work
C . BGP PIC Edge can protect both from Edge link and Node
failure
D. BGP PIC has to work with BGP Add-Path
Answer 20:
BGP edge nodes have to receive more than one best path for BGP PIC
Edge operation; this was explained in the BGP chapter in detail. BGP Add-
Path is one of the mechanisms used to send multiple paths even when an RR
is deployed in the network.
But BGP Add-Path is not mandatory for BGP PIC.
BGP PIC Edge can protect from both edge link and node failures and can
provide sub-second convergence even if there are millions of prefixes.
That's why the correct answers to this question are A and C.
BOOKS
Zhang, R. (2003). BGP Design and Implementation, Cisco Press.
VIDEOS
https://www.nanog.org/meetings/nanog38/presentations/dragnet.mp4
https://www.youtube.com/watch?v=txiNFyvWjQ
ARTICLES
https://www.nanog.org/meetings/nanog51/presentations/Sunday/NANOG51.Talk3.peering
nanog51.pdf
http://ripe61.ripe.net/presentations/150-ripe-bgp-diverse-paths.pdf
https://www.nanog.org/meetings/nanog48/presentations/Tuesday/Raszuk_To_AddPaths_N
CHAPTER 9
MULTICAST
If the requirement is to send a flow in real time to multiple receivers, then
the most efficient way to do this is multicast. Multicast is a thirty-year-
old protocol, yet many people still struggle to understand it. Probably the
biggest difficulty in understanding multicast is source-based routing: IP
unicast routing works based on destination-based routing, while IP multicast
routing works based on source-based routing.
The tree in IP unicast routing is created from the source towards the
destination; in IP multicast routing it is created from the destination
(receiver) towards the source (sender).
In unicast, the server has to send three copies of the stream for three
receivers. In multicast, the server sends one copy and the network replicates
the traffic to its intended receivers.
Multicast works over UDP, not TCP. That's why there is no error control
or congestion avoidance; it is purely best effort. A receiver can receive
duplicate multicast traffic in some situations. SPT switchover is an example
of where duplicate traffic delivery occurs: during an SPT switchover,
multicast traffic is received from both the shared tree and the shortest path
tree.
This is one of the inefficiencies of multicast.
Source addresses in multicast are always unicast addresses. Multicast
addresses come from the Class D range, 224.0.0.0/4. You should never see a
Class D multicast address as a source address.
A separate multicast routing table is maintained for multicast trees.
Sources do not need to join a group; they simply send the traffic. Multicast
routing protocols (DVMRP, PIM) are used to build the trees. The tree is built
hop by hop from the receivers towards the source. DVMRP was the first
multicast routing protocol, but it is deprecated and not used anymore.
The source (sender) is the root of the tree in a shortest path tree.
The Rendezvous Point is the root of the shared tree (the Rendezvous
Point is used in ASM and Bidir-PIM).
PIM ASM vs. PIM SSM vs. PIM Bidir Multicast Deployment Models
Comparison
Fast Reroute Support: PIM ASM: yes, IP FRR and Multicast-only FRR.
PIM SSM: yes, IP FRR and Multicast-only FRR. PIM Bidir: yes, IP FRR
and Multicast-only FRR.
Staff Experience: PIM ASM: well known. PIM SSM: well known. PIM
Bidir: less known, especially the Phantom RP operation for load balancing.
Loop Avoidance: PIM ASM: done via the RPF check. PIM SSM: done via
the RPF check. PIM Bidir: a Designated Forwarder is elected per subnet.
Security: PIM ASM: less secure, since all sources can send to any multicast
group. PIM SSM: more secure, because receivers specifically state which
(source, group) pair they are interested in. PIM Bidir: less secure, for the
same reason as PIM ASM.
Complexity: PIM ASM: complex, since it requires a Rendezvous Point,
Anycast RP for Rendezvous Point redundancy, and Rendezvous Point
engineering for optimal multicast routing. PIM SSM: easy, it requires source
information only; there is no Rendezvous Point in Source Specific Multicast,
no RP engineering, no Anycast RP. PIM Bidir: complex, since it requires a
Rendezvous Point, Phantom RP for redundancy, and RP engineering for
optimal multicast routing.
Resource Requirement: PIM ASM: moderate among the options, since
routers have to keep (*,G) state between the hosts and the RP and (S,G)
state between the source and the RP; (*,G) state can be thought of as a
summarization in IP routing, although after the SPT transition there is still
(S,G) state. PIM SSM: worst, since all routers keep state for every (source,
group) pair, which requires more memory and CPU on the devices; there is
no (*,G) state in PIM SSM, only (S,G). PIM Bidir: best, since PIM Bidir-
enabled routers only keep (*,G) states; only the shared tree is used in
Bidirectional PIM.
Troubleshooting: PIM ASM: easy. PIM SSM: very easy. PIM Bidir: easy.
MULTICAST CASE STUDY – AUTOMATIC MULTICAST TUNNELING
(AMT)
Terrano is a European-based manufacturing company.
Users of Terrano want to watch a stream from the content provider,
which has peering with Terrano's service provider. However, Terrano doesn't
have multicast in its network.
What solution could Terrano users use without enabling IP multicast on
their network?
Solution:
A solution can be provided with Automatic Multicast Tunneling (AMT),
RFC 7450.
Question 1:
Which below technology is used between Multicast host and the first hop
default gateway of the host?
A. PIM Snooping
B. PIM Any source Multicast
C. IGMP
D. Rendezvous Point
Answer 1:
IGMP is used between the multicast host and its default gateway. Answer
is C.
Question 2:
Which below statement is true for PIM Sparse mode? (Choose Three)
A. Multicast traffic always has to go through RP
B . If there is no backup RP and RP fails new Multicast sources
cannot be discovered
C. RP is used for Source Discovery
D . Anycast RP is one of the redundancy mechanisms in PIM
Sparse Mode
E. There is no RP in Any Source Multicast
Answer 2:
As mentioned in the Multicast chapter, the RP is used for source
discovery. If there is no backup RP and the RP fails, new multicast sources
cannot be discovered.
Only in PIM Bidir does multicast traffic always have to go through the
RP; in PIM ASM it doesn't have to.
Anycast RP is one of the redundancy mechanisms in PIM Sparse Mode.
Another one is Phantom RP, which is used in PIM Bidir.
There is an RP in Any Source Multicast, which is why Option E is
incorrect.
The correct answers to this question are B, C and D.
Question 3:
Which below statements are true for IP Multicast design? (Choose Three)
A. There is overlap between Multicast IP and MAC addresses
B. Most optimal multicast routing is achieved with PIM SSM
C . Least resource intensive Multicast deployment model is PIM
Bidir
D. Phantom RP is the load balancing mechanism in PIM Bidir
E. Any Source Multicast RP doesn’t require Rendezvous Point
Answer 3:
Phantom RP is used in PIM Bidir, but it doesn't support load balancing.
The most optimal routing is achieved with PIM SSM, because the tree
always uses the shortest IGP path; there is no need to send the traffic towards
an RP. The least resource-intensive multicast deployment model is PIM
Bidir, because only (*,G) entries are kept in the multicast routing table. In
ASM, after the SPT switchover, traffic continues over the shortest path. And
as explained before, there is overlap between multicast IP addresses and
MAC addresses. Any Source Multicast requires a Rendezvous Point, thus
Option E is incorrect. That's why the correct answers to this question are A,
B and C.
Question 4:
If the requirement is not to use Rendezvous point, which of the below
options provide most efficient IP Multicast deployment?
A. Deploy PIM SSM and implement IGMPv2
B. Deploy PIM SSM and implement IGMPv3
C. Deploy Anycast RP
D. Deploy Bidirectional PIM
E. Deploy PIM Any Source Multicast
Answer 4:
In Anycast RP, Bidirectional PIM and Any Source Multicast there is
always an RP.
The only valid options could be A and B, but Option A says IGMPv2,
and Source Specific Multicast, although it doesn't have an RP, requires
IGMPv3.
That's why the correct answer to this question is Option B.
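On Cisco IOS, for example, enabling SSM together with IGMPv3 can be sketched as follows (interface name is illustrative):

```
ip multicast-routing
! Reserve the default SSM group range 232.0.0.0/8
ip pim ssm default
!
interface GigabitEthernet0/1
 ip pim sparse-mode
 ! Receivers must signal (S,G) pairs, which requires IGMPv3
 ip igmp version 3
```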
Question 5:
Which of the below solutions provide RP redundancy in case of failure in
IPv4 Multicast?
A. Embedded RP
B. MSDP Anycast RP
C. Auto RP
D. BSR
E. PIM SSM
Answer 5:
There is no RP in PIM SSM.
Embedded RP does not exist in IPv4 multicast. Auto-RP and BSR are
used to announce the RP to the multicast routers, not to provide redundancy.
That's why the correct answer to this question is MSDP Anycast RP,
which is used in PIM ASM only.
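A minimal MSDP Anycast RP sketch on Cisco IOS (addresses are hypothetical; RP2 mirrors this configuration with its own unique Loopback0 and peers back to RP1):

```
! RP1
interface Loopback0
 ip address 10.1.1.1 255.255.255.255    ! unique address for MSDP peering
interface Loopback1
 ip address 10.0.0.10 255.255.255.255   ! shared anycast RP address
!
ip pim rp-address 10.0.0.10
ip msdp peer 10.1.1.2 connect-source Loopback0
ip msdp originator-id Loopback0
```

Both RPs advertise the same 10.0.0.10 address, so PIM routers register with the closest one, while MSDP synchronizes the active sources between the RPs.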
Question 6:
Which below PIM Sparse Mode deployment model provides one-to-many
multicast traffic and can work with IGMPv2?
A. PIM ASM
B. PIM SSM
C. PIM Dense
D. PIM Bidir
E. IGMPv3
Answer 6:
PIM SSM and PIM ASM can both provide a one-to-many application
traffic pattern, but PIM SSM requires IGMPv3. That's why the correct
answer is PIM ASM, Option A.
Question 7:
Which below Multicast technology provides any-to-any connectivity?
A. IGMP
B. PIM SSM
C. PIM Bidir
D. Anycast RP
Answer 7:
Any-to-any connectivity for applications is provided by PIM Bidir
(Bidirectional PIM), as explained in the Multicast chapter in detail.
That's why the correct answer to this question is C.
Question 8:
How is redundancy achieved in PIM Bidir?
A. MSDP is configured between two RPs
B . Phantom RP is used and the two RP IP address is advertised
with different subnet masks
C. Static multicast routing table entries are configured on the RPs
D . Phantom RP is used and the two RP IP address is advertised
with the same subnet masks
Answer 8:
As explained in the Multicast chapter, the Phantom RP concept is used in
PIM Bidir for redundancy. It doesn't provide load balancing; only
redundancy can be achieved.
Two routers are configured, and the subnet containing the RP IP address
is advertised with different subnet masks so that the longest match wins.
That's why the correct answer to this question is B.
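A Phantom RP sketch on Cisco IOS, assuming 10.2.3.2 as the phantom (unassigned) RP address; all values are illustrative, and the /30 advertisement wins by longest match until Router A fails:

```
! Router A (primary)
interface Loopback0
 ip address 10.2.3.1 255.255.255.252   ! /30 covering phantom RP 10.2.3.2
 ip pim sparse-mode
 ip ospf network point-to-point        ! advertise the real mask, not /32
!
! Router B (backup): same loopback address with a shorter mask
!  ip address 10.2.3.1 255.255.255.248 ! /29, used only if the /30 disappears
!
! On all routers
ip pim bidir-enable
ip pim rp-address 10.2.3.2 bidir
```

Because the RP address itself is never assigned to any router, the "RP" survives as long as either router advertises a subnet containing it.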
Question 9:
Which below technologies are used in IPv6 multicast?
A. IGMP Snooping
B. MLD
C. Embedded RP
D. PIM Auto RP
E. DVMRP
Answer 9:
PIM Auto-RP is Cisco proprietary and not supported in IPv6.
DVMRP was one of the first Layer 3 multicast routing protocols in IPv4,
but it is deprecated and not supported in IPv6.
Instead of IGMP, IPv6 uses MLD between the multicast host and the
first-hop multicast gateway.
Embedded RP is used for embedding the RP's IPv6 address as part of the
multicast group address.
That's why the answers to this question are B and C.
Question 10:
If both IPv4 and IPv6 multicast will be enabled on the campus network,
which multicast protocols should be enabled on the access switches?
(Choose Two)
A. PIM Bidir
B. IGMP Snooping
C. MLD
D. Any Source Multicast
E. PIM Rendezvous Points
Answer 10:
On access switches, Layer 3 multicast protocols are not enabled; the
IGMP-to-PIM conversion is done on the first-hop multicast routers. But
IGMP Snooping for IPv4 and MLD (Multicast Listener Discovery) for IPv6
are critical for efficiency in IP multicast deployments.
That's why the answers to this question are B and C.
MULTICAST FURTHER STUDY RESOURCES
BOOKS
Williamson, B. (1999). Developing IP Multicast Networks, Volume I, Cisco
Press.
VIDEOS
Ciscolive Session-BRKIPM-1261: Speaker Beau Williamson
PODCASTS
http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-
software/multicast-enterprise/whitepaper_c11-474791.html
http://packetpushers.net/community-show-multicast-design-deployment-
considerations-beau-williamson-orhan-ergun
ARTICLES
https://tools.ietf.org/html/rfc7450
http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/ip-
multicast/whitepaper_c11508498.html
https://www.juniper.net/techpubs/en_US/release-
independent/nce/information-products/topic-collections/nce/bidirectional-
pim-configuring-bidirectional-pim.pdf
http://www.juniper.net/documentation/en_US_junos13.3/topics/concept/multicast-
anycast-rp-mapping.html
http://d2zmdbbm9ferqf.cloudfront.net/2015/usa/pdf/BRKIPM-1261.pdf
CHAPTER 10
QUALITY OF SERVICE (QOS)
Application                      L3 IPP   L3 PHB     L3 DSCP   L2 CoS / MPLS EXP
IP Routing                       6        CS6        48        6
Voice                            5        EF         46        5
Interactive Video                4        AF41       34        4
Streaming Video                  4        CS4        32        4
Locally Defined Mission
  Critical Data                  3        CS3        24        3
Call Signaling                   3        AF31/CS3   26/24     3
Transactional Data               2        AF21       18        2
Network Management               2        CS2        16        2
Bulk Data                        1        AF11       10        1
Scavenger                        1        CS1        8         1
Best Effort                      0        0          0         0
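The marking table above can be expressed as a small lookup structure. The following Python sketch is illustrative only (the class names and tuple layout are our own, not any vendor API); it also shows that the Layer 2 CoS / MPLS EXP column is simply the top three bits of the DSCP value.

```python
# Illustrative marking table: class -> (PHB name, DSCP decimal, CoS/EXP).
# The IP Precedence column of the table equals the CoS/EXP value here.
MARKING_TABLE = {
    "IP Routing":            ("CS6",  48, 6),
    "Voice":                 ("EF",   46, 5),
    "Interactive Video":     ("AF41", 34, 4),
    "Streaming Video":       ("CS4",  32, 4),
    "Mission-Critical Data": ("CS3",  24, 3),
    "Call Signaling":        ("AF31", 26, 3),
    "Transactional Data":    ("AF21", 18, 2),
    "Network Management":    ("CS2",  16, 2),
    "Bulk Data":             ("AF11", 10, 1),
    "Scavenger":             ("CS1",   8, 1),
    "Best Effort":           ("BE",    0, 0),
}

def cos_from_dscp(dscp: int) -> int:
    """CoS/EXP (3 bits) is the top three bits of the 6-bit DSCP."""
    return dscp >> 3

# Sanity check: every row's CoS/EXP column equals DSCP >> 3.
for name, (_phb, dscp, cos) in MARKING_TABLE.items():
    assert cos_from_dscp(dscp) == cos, name
```

This bit relationship is why CoS/EXP markings lose granularity: 64 DSCP values collapse into 8 CoS/EXP values.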
BEST-EFFORT DATA
The best-effort class is the default class for all data traffic. An application
will be removed from the default class only if it has been selected for
preferential or deferential treatment.
BULK DATA
The bulk data class is intended for applications that are relatively non-
interactive and drop-insensitive and that typically span their operations over a
long period of time as background occurrences.
Such applications include the following:
FTP
E-mail
Backup operations
Database synchronizing or replicating operations
Content distribution
Any other type of background operation
Bulk data traffic should be marked to DSCP AF11; excess bulk data
traffic can be marked down by a policer to AF12; violating bulk data traffic
may be marked down further to AF13 (or dropped).
Bulk data traffic should have a moderate bandwidth guarantee, but should
be constrained from dominating a link.
TRANSACTIONAL/INTERACTIVE DATA
The transactional/interactive data class, also referred to simply as
transactional data, is a combination of two similar types of applications:
transactional data client-server applications and interactive messaging
applications.
A transaction is a foreground operation; the user waits for the operation to complete before proceeding.
E-mail is not considered a transactional data client-server application, as
most e-mail operations occur in the background and users do not usually
notice even several hundred millisecond delays in mail spool operations.
Transactional data traffic should be marked to DSCP AF21; excess
transactional data traffic can be marked down by a policer to AF22; violating
transactional data traffic can be marked down further to AF23 (or dropped).
Real time, Best Effort, Critical Data, and Scavenger Queuing Rule: Four
Classes of QoS Deployment
CONGESTION MANAGEMENT AND CONGESTION AVOIDANCE
Queuing, or congestion management, is used to manage the frames or
packets before they exit a device.
In routers this is known as output queuing because IP forwarding
decisions are made prior to the queuing.
Congestion avoidance is a term for managing traffic to decide when
packets will be dropped during congestion periods.
The most common congestion avoidance tools are RED (Random Early
Detection) and Weighted Random Early Detection (WRED).
If RED or WRED is not configured, by default all packets get the same drop treatment, which is called tail drop.
Queuing tools can also be used alongside traffic shaping, which basically delays packets to ensure that the traffic rate for a class doesn't exceed the defined rate.
Congestion management deals with the front of the queue, and congestion avoidance mechanisms handle the tail of the queue.
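The RED/WRED decision described above can be sketched in a few lines. This is a textbook-style illustration in Python, not any specific vendor implementation; the threshold and probability values below are made up:

```python
def wred_drop_probability(avg_qlen, min_th, max_th, max_p):
    """Classic (W)RED drop curve: no drops below the minimum threshold,
    linearly increasing drop probability between the minimum and maximum
    thresholds, and tail-drop behavior (probability 1.0) above the max."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

# WRED simply runs this curve with different thresholds per drop
# precedence, e.g. AF13 starts dropping earlier than AF11.
assert wred_drop_probability(10, 20, 40, 0.1) == 0.0   # below min: no drop
assert wred_drop_probability(40, 20, 40, 0.1) == 1.0   # above max: tail drop
```

Without this curve, every packet sees the same tail-drop behavior once the queue fills, which is exactly what the text above calls tail drop.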
MPLS QOS
MPLS QoS is done based on the MPLS EXP bits, which are 3 bits. QoS tools such as classification, marking, policing, shaping, and queuing work similarly to IP QoS. When a packet is received from the IP domain, it is tunneled through the MPLS network.
The DSCP bits are mapped to the EXP bits on the Ingress PE of the MPLS network and tunneled up to the Egress PE.
There are three MPLS QoS Tunnelling mechanisms.
Uniform Mode
Short Pipe
Pipe (Also known as Long-Pipe)
As a network designer, understanding the different MPLS tunneling modes and their effects on the customer QoS policy is very important.
UNIFORM MODE
Uniform mode is generally used when the customer and SP share the same Diffserv domain, which would be the case if the customer builds its own MPLS network. The first three bits of the DSCP field are mapped to the MPLS EXP bits on the ingress PE.
If a policer or other mechanism remarks the MPLS EXP value, the new value is copied down the label stack. At the egress PE, the MPLS EXP value is used to remark the customer DSCP value.
SHORT PIPE MODE
It is used when the customer and SP are in different Diffserv domains. This mode is useful when the SP wants to enforce its own Diffserv policy but the customer wants its Diffserv information to be preserved across the MPLS domain. The ingress PE sets the MPLS EXP value based on the SP Quality of Service policy.
If remarking is necessary, it is done on the MPLS EXP bits of the labels but not on the DSCP bits of the customer's IP packet.
On the egress PE, queuing is done based on the DSCP marking of the customer's packet.
The customer doesn't need to remark their packets at the remote site.
PIPE MODE
Pipe mode is also known as Long-Pipe mode, and the Service Provider controls the QoS policy end to end. Pipe mode is the same as short pipe mode except that queuing at the egress PE is based on the MPLS EXP bits and not on the customer's DSCP marking.
The customer may need to remark their packets if they want a consistent end-to-end QoS deployment.
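The three tunneling modes can be summarized in a small sketch. This is a deliberate simplification of the RFC 3270 behaviors for illustration only; real routers act per label stack entry, and the function names here are invented:

```python
def ingress_pe(dscp, mode, sp_policy_exp=None):
    """EXP set on the ingress PE when the packet enters the MPLS domain."""
    if mode == "uniform":
        return dscp >> 3          # copy the top 3 DSCP bits into EXP
    # short-pipe and pipe: EXP comes from the SP's own policy, not the DSCP
    return sp_policy_exp

def egress_pe(dscp, exp, mode):
    """Return (customer DSCP after egress, field used for egress queuing)."""
    if mode == "uniform":
        return (exp << 3, "exp")   # EXP rewritten back into customer DSCP
    if mode == "short-pipe":
        return (dscp, "dscp")      # DSCP untouched; queue on customer DSCP
    return (dscp, "exp")           # pipe: DSCP untouched; queue on EXP
```

For example, a voice packet marked EF (DSCP 46) enters uniform mode with EXP 5; in short-pipe and pipe modes its DSCP survives unchanged, and the two modes differ only in which field drives egress queuing.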
Alternative name:
  - Uniform: Unified MPLS QoS Tunneling
  - Short Pipe: Short Pipe MPLS QoS Tunneling
  - Pipe: Long Pipe MPLS QoS Tunneling
Standard implementation:
  - Uniform: Yes, IETF Standard
  - Short Pipe: Yes, IETF Standard
  - Pipe: Yes, IETF Standard
Customer-Service Provider interaction:
  - Uniform: For the initial DSCP to EXP mapping only
  - Short Pipe: For the initial DSCP to EXP mapping, and also the SP has to know each and every customer's QoS requirement to arrange the egress scheduling and dropping strategy on the Egress PE
  - Pipe: For the initial DSCP to EXP mapping only
E-LSP and L-LSP support:
  - Uniform: Yes
  - Short Pipe: Yes
  - Pipe: Yes
Question 1:
Which of the below statements are true for QoS design?
A. Classification and marking should be done at every hop
B. Classification and marking should be done as close to the sources as possible
C. Instead of DSCP-based marking, CoS-based marking is recommended for end-to-end QoS design
D. Quality of Service increases available bandwidth capacity
Answer 1:
QoS doesn't increase available capacity; it is used to manage fairness between applications for the existing capacity.
Classification and marking should be deployed as close to the sources as possible, not at every hop.
And instead of CoS, DSCP-based marking should be deployed to avoid repeated mapping throughout the network.
That's why the answer of this question is B.
Question 2:
Which of the below options is true for the Congestion Avoidance mechanisms?
A. When it is enabled, the router marks the packets based on some criteria
B. When it is enabled, the router classifies the packets
C. When it is enabled, the router handles possible congestion by using RED or WRED
D. When it is enabled, the router can place the important traffic into the LLQ
Answer 2:
As explained in the QoS chapter, congestion avoidance mechanisms are different from congestion management mechanisms.
Congestion management mechanisms coordinate the front of the queue, and congestion avoidance mechanisms handle the tail of the queue.
RED and WRED are the congestion avoidance mechanisms, and when congestion avoidance is enabled, the router handles possible congestion by using one of these mechanisms. The other options are related to classification, marking, and queuing.
The correct answer of this question is Option C.
Question 3:
What should be the one-way latency for the Voice traffic in QoS design?
A. Less than 1 second
B. Less than 500 milliseconds
C. Less than 150 milliseconds
D. Less than 300 milliseconds
Answer 3:
As a general design rule of thumb, one-way latency, which is also known as mouth-to-ear latency, should be less than 150 ms for voice traffic.
That’s why the answer of this question is C.
Question 4:
Which of the options are important in Voice over IP design? (Choose Three)
A. Delay
B. Echo
C. Packet Loss
D. Variance in delay
E. CDR records
Answer 4:
For voice traffic, as explained in the QoS chapter of the book, packet loss, latency, and jitter are the most critical performance indicators.
Latency is also known as delay, and variance in delay is known as jitter.
A CDR (Call Detail Record) is log information, which is not as critical in voice design as the others.
That’s why the correct answer of this question is A, C and D.
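Jitter can be made concrete with a few delay samples. A minimal sketch (the mean-absolute-difference averaging used here is just one common way to summarize jitter, not a standard from the text):

```python
def mean_jitter(delays_ms):
    """Jitter is the variation in one-way delay between consecutive
    packets; here it is summarized as the mean absolute difference."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Three packets arriving with one-way delays of 20, 25, and 22 ms:
# delay differences are 5 ms and 3 ms, so the mean jitter is 4 ms.
assert mean_jitter([20.0, 25.0, 22.0]) == 4.0
```

Voice endpoints absorb this variation with a de-jitter buffer, which is why jitter (not just raw delay) matters for voice quality.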
Question 5:
Which below QoS mechanism is commonly deployed on the Service
Provider ingress PE device for their customer traffic?
A. Policing
B. Shaping
C. WRED
D. MPLS Traffic Engineering
Answer 5:
On the customer side shaping is deployed, and Service Providers commonly deploy policing. Exceeding traffic can be dropped, passed but charged extra, or marked down and treated worse.
That's why the correct answer of this question is A.
Question 6:
Which of the below statements are true for the Voice Traffic in QoS
design? (Choose Two)
A. Voice traffic should be marked with EF, DSCP 46
B. Voice traffic is sensitive to packet loss, jitter and delay
C. Voice traffic should be placed in Best Effort Queue
D. For the voice traffic queue, WRED should be enabled
E. Voice requires one way latency less than 2 seconds
Answer 6:
Voice traffic should be marked with EF, DSCP 46. It is sensitive to
packet loss, jitter and delay.
It should be placed in the LLQ (Low Latency Queue), not the best effort queue.
WRED should be enabled for TCP-based applications, not for the voice traffic.
Voice requires one-way latency to be less than 150 ms, not 2 seconds.
The correct answers of this question are A and B.
Question 7:
An Enterprise Company receives a Gigabit Ethernet physical link from the local Service Provider, but the Committed Information Rate is 250 Mbps.
Which QoS mechanism should the Enterprise Company deploy to ensure low packet loss toward the Service Provider?
A. Priority Queuing
B. WRED
C. Marking
D. Policing
E. Shaping
Answer 7:
When the service rate is lower than the actual physical link speed, the customer can send more traffic than the committed information rate (the actual service bandwidth).
In this case, the common action on the Service Provider network is to police the traffic.
The Service Provider might drop critical customer traffic unless the customer does shaping at their site. The correct answer of this question is shaping, which is Option E.
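The policing-versus-shaping contrast in this answer can be sketched with a token bucket. This is a simplified conceptual model in Python (class and method names invented for the example), not router behavior:

```python
class TokenBucket:
    """Bucket refills at the CIR (250 Mbps in the question). A policer
    DROPS (or remarks) packets that exceed it; a shaper DELAYS them."""

    def __init__(self, cir_bps, burst_bytes):
        self.rate = cir_bps / 8.0      # refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def _refill(self, now):
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def police(self, now, size):
        """Policer: conform -> forward, exceed -> drop."""
        self._refill(now)
        if self.tokens >= size:
            self.tokens -= size
            return "forward"
        return "drop"

    def shape_delay(self, now, size):
        """Shaper: never drops; returns how long to buffer the packet."""
        self._refill(now)
        if self.tokens >= size:
            self.tokens -= size
            return 0.0
        wait = (size - self.tokens) / self.rate  # wait for enough tokens
        self.tokens = 0.0                        # all consumed after wait
        self.last = now + wait
        return wait
```

The shaper trades delay for loss, which is why shaping at the customer edge keeps packet loss low toward a policing Service Provider.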
Question 8:
Which of the below options are true for QoS design? (Choose Two)
A. MPLS QoS is done based on EXP bits
B. IP DSCP uses 6 bits
C. Marking should be done at every hop for better QoS
D. Queuing is enabled when the interface utilization reaches 80%
E. Queuing shouldn't be enabled in the Local Area Network since there is too much available bandwidth already
Answer 8:
MPLS QoS is done based on the 3-bit EXP field. IP DSCP uses 6 bits of the IP TOS byte (8 bits).
Marking and classification should be done as close to the sources as possible. If mapping is necessary, for example between PE and CE in MPLS deployments, then remarking can also be done at the WAN edge, but not at every hop.
Queuing engages only when the interface utilization reaches 100%, not 80%. Queuing should be enabled even in the LAN to protect the applications from microbursts. For the best QoS design, the LAN shouldn't be missed, and QoS should be deployed on the LAN as well.
That's why the correct answer of this question is A and B.
That’s why the correct answer of this question is A and B.
Question 9:
Which of the below statements are true for the MPLS QoS tunneling models?
(Choose Three)
A. There are two types of tunneling models: uniform mode and non-uniform mode
B. There are three types of tunneling models: uniform mode, short-pipe mode, and pipe mode
C. Pipe mode is known as Long-pipe mode as well
D. With uniform mode, the customer may need to remark their traffic at the remote site
E. Short-pipe mode requires MPLS between the customer and the Service Provider
Answer 9:
There are three types of tunneling models: uniform mode, short-pipe mode, and pipe mode. Pipe mode is also known as long-pipe mode.
With uniform mode, the customer QoS policy may require remarking at the remote site.
None of the three modes require MPLS between customer and the Service
Provider.
That’s why the answer of this question is B, C and D.
Question 10:
Which of the below options are true for the MPLS QoS deployment? (Choose Two)
A. Classification and marking is done on the P devices
B. It requires MPLS between PE and CE
C. With the Short-Pipe MPLS QoS tunneling mode, queuing is done based on the customer policy
D. Shaping should be done to drop the exceeding customer traffic on the Ingress PE
E. Policing should be done to drop the exceeding customer traffic on the Ingress PE
Answer 10:
Classification and marking is done on the PE devices.
MPLS is not required between PE and CE.
With the Short-Pipe MPLS QoS tunneling mode, queuing is done at the Egress PE based on the customer QoS policy.
Policing is done on the Ingress PE to drop or remark exceeding customer traffic.
That's why the correct answer of this question is C and E.
QOS STUDY RESOURCES
BOOKS
Szigeti, T. (2013). End-to-End QoS Network Design: Quality of Service for
Rich-Media & Cloud Networks (Second Edition), Cisco Press.
VIDEOS
Ciscolive Session-BRKCRS-2501
https://www.youtube.com/watch?v=6UJZBeK_JCs
ARTICLES
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND
SRND-Book/QoSIntro.html
http://ww.cisco.com/c/en/us/td/docs/solutions/Enterprise/Video/qosmrn.pdf
http://orhanergun.net/2015/06/do-you-really-need-quality-of-service/
http://d2zmdbbm9feqrf.cloudfront.net/2013/usa/pdf/BRKCRS-2501.pdf
https://ripe65.ripe.net/presentations/67-2012-09-25-qos.pdf
CHAPTER 11
MPLS
If the requirement is to have a scalable VPN solution that can provide fast reroute traffic protection, then the only choice is MPLS.
MPLS is a protocol-independent transport mechanism. It can carry Layer
2 and Layer 3 payloads. Packet forwarding decision is made solely on the
label without the need to examine the packet itself.
MPLS interacts as an overlay with IGP and BGP in many ways. For
example, in multi-level IS-IS design, Level 1 domain breaks end-to-end LSP.
In this chapter:
MPLS theory will be explained very briefly; basic concepts in MPLS that are applicable to all of its applications will be mentioned.
MPLS applications such as Layer 2 and Layer 3 VPNs, Inter-AS MPLS VPN deployment options, Carrier Supporting Carrier architecture, Seamless MPLS, MPLS Transport Profile, and MPLS Traffic Engineering will be explained in detail. Comparison tables will be provided wherever applicable from the design point of view.
Many case studies will be provided to better understand the concepts in a holistic manner.
Many MPLS design questions will be provided, and the answers will be shared at the end of the chapter. These questions are complementary to the topics in the chapter and will be useful in real-life MPLS design as well as in the CCDE Written and Practical exams.
MPLS THEORY
MPLS APPLICATIONS
Important MPLS applications/services for network designers are listed below.
All of them will be explained in this chapter.
Layer 2 MPLS VPN
Layer 3 MPLS VPN
Inter-AS MPLS VPNs
Carrier Supporting Carrier
MPLS Traffic Engineering
Seamless MPLS
MPLS Transport Profile (MPLS-TP)
An MPLS infrastructure can run all of the above MPLS applications/services at the same time.
You can provide protection for Layer 2 and Layer 3 VPN customers by having MPLS Traffic Engineering LSPs for SLA or FRR purposes.
MPLS LAYER 2 MPLS VPN
In MPLS Layer 2 VPN, the Layer 2 frame is carried over the MPLS transport.
If you extend MPLS towards the access domain of the backbone, then you can have an end-to-end MPLS backbone without the need for protocol translation. Two different Layer 2 VPN architectures provide services similar to those defined by the MEF (Metro Ethernet Forum).
The label switched path that provides end-to-end MPLS label reachability between the PE devices of the network is called the Transport LSP. It is also known as the Tunnel LSP.
MPLS Layer 2 VPN can be point-to-point, which is called Virtual Private Wire Service (VPWS), or multipoint-to-multipoint, which is called Virtual Private LAN Service (VPLS).
BENEFITS OF MPLS, WHY MPLS IS USED AND MPLS ADVANTAGES
As an Encapsulation and VPN mechanism, MPLS brings many benefits
to the IP networks.
MPLS is a very mature VPN technology. Below are some of the important use cases of MPLS technology, which will be explained in great detail throughout this chapter.
Faster packet processing with MPLS compared to IP
MPLS was initially invented to provide faster packet processing compared to IP-based lookup. With MPLS, instead of doing an IP destination-based lookup, a label-switching operation is done. The smaller MPLS header is processed instead of the IP header, which provides a performance benefit. Although today nobody enables MPLS for this reason, this was the initial motivation for MPLS as stated above.
BGP Free Core with MPLS
Without MPLS, if BGP is running on the network, it needs to run on every device on the path. MPLS removes this need; fewer protocols mean a simpler network and easier maintenance.
Hiding service specific information (customer prefixes, etc.) from the core
of the network
When MPLS is used on the network, only the edge devices have to keep the customer-specific information such as MAC addresses, VLAN numbers, IP addresses, and so on. The core of the network only provides reachability between the edges.
More scalable network
Not having service-specific information in the core of the network provides better scalability. From a CPU and memory point of view, routing protocol updates, link-state changes, and many other problems are abstracted away from the core of the network by using MPLS.
MPLS VPNs -MPLS Layer 2 and Layer 3 VPNs
Probably the most important reason for and main benefit of MPLS is MPLS VPNs. MPLS allows the creation of point-to-point, point-to-multipoint, and multipoint-to-multipoint MPLS Layer 2 VPNs and MPLS Layer 3 VPNs.
By using the BGP, LDP, and/or RSVP protocols, VPNs can be created. There are tens of articles on MPLS VPNs on the website.
Traffic Engineering
MPLS with RSVP-TE provides a traffic engineering capability, which allows better capacity usage and a guaranteed SLA for the desired service. MPLS Traffic Engineering is explained in detail in many articles on the website.
Fast Reroute
With RSVP-TE, MPLS provides MPLS Traffic Engineering Fast Reroute link and node protection. RSVP-TE is one option, but with LDP, LFA and Remote LFA can be set up if RSVP-TE is not used in the network. MPLS Traffic Engineering Fast Reroute can protect important services in any kind of topology and generally provides protection in less than 50 ms.
On the other hand, IP FRR mechanisms require a highly meshed topology to provide full coverage in the case of failures.
When LDP is used without RSVP-TE, the solution is also called IP Fast Reroute. There was also a CR-LDP (Constraint-based LDP) proposal, but since it is deprecated, it is not covered here.
MPLS doesn't provide security by default. If security is needed, then IPsec should run on top of it. The best IPsec solution for MPLS VPNs is GETVPN, since it provides excellent scalability.
MPLS is used mainly for the Wide Area Network, but there are implementations for Datacenter Interconnect and datacenter multi-segmentation as well.
Today, with PCE (Path Computation Element), MPLS is considered for use in SDN (Software Defined Networking) for network programmability, WAN bandwidth optimization, new emerging services such as bandwidth calendaring, multi-area and multi-domain traffic engineering, and automation purposes as well.
There is only one egress point for the VPWS service at the other end of the pseudowire; thus, the PE device doesn't have to keep a MAC-address-to-PW binding.
The PE device doesn't have to learn the MAC addresses of the customer in the VPWS (EoMPLS) MPLS Layer 2 VPN service. This provides scalability.
VPLS Topology
The customer runs a routing protocol with the service provider to carry IP information between the sites. As stated earlier, static routing is a routing protocol. CE devices can be managed by the customer or the service provider, depending on the SLA.
Service provider might provide additional services such as IPv6, QoS,
and multicast. By default, IPv4 unicast service is provided by the service
provider in MPLS Layer 3 VPN architecture. Transport tunnels can be
created by LDP or RSVP. RSVP is extended to provide MPLS traffic
engineering service in MPLS networks.
The inner label, also known as the BGP label, provides VPN label
information with the help of MPBGP (multiprotocol BGP). This label allows
data plane separation. Customer traffic is kept separated over common
MPLS network with the VPN label. MPBGP session is created between PE
devices only. P devices do not run BGP in an MPLS environment. This is
known as BGP Free Core design.
Route Distinguisher is a 64-bit value that is used to make the customer
prefix unique throughout the service provider network. With RD (route
distinguisher), different customers can use the same address space over the
service provider backbone. Route target is an extended community attribute
and is used to import and export VPN prefixes to and from VRF. Export
route target is used to advertise prefixes from VRF to MP-BGP, import route
target is used to receive the VPN prefixes from MP-BGP into customer VRF.
MPLS Layer 3 VPN by default provides any-to-any connectivity (multipoint-to-multipoint) between the VPN customer sites. If the customer wants a hub-and-spoke topology, the route target community can provide that flexibility.
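The RD/RT machinery described above can be illustrated in a few lines of Python. This is purely conceptual: the RD and RT values and function names are invented for the example, but it shows how the RD keeps overlapping customer prefixes unique while the RT controls which VRF imports them:

```python
# Simulated VPNv4 table: (RD, prefix) -> set of attached export RTs.
vpnv4_table = {}

def export_prefix(rd, prefix, export_rts):
    # The RD prepended to the prefix forms the unique VPNv4 route key
    # that MP-BGP carries between the PE routers.
    vpnv4_table[(rd, prefix)] = set(export_rts)

def import_vrf(import_rts):
    """Return the prefixes whose route targets match the VRF's imports."""
    return [p for (rd, p), rts in vpnv4_table.items()
            if rts & set(import_rts)]

# Two customers using the same 10.0.0.0/24 stay distinct thanks to the RD:
export_prefix("65000:1", "10.0.0.0/24", ["65000:111"])   # customer A
export_prefix("65000:2", "10.0.0.0/24", ["65000:222"])   # customer B
assert import_vrf(["65000:111"]) == ["10.0.0.0/24"]      # only A's route
```

A hub-and-spoke design falls out of the same mechanism: spokes export one RT that only the hub imports, and import only the RT the hub exports.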
The below table summarizes the similarities and differences of the three common MPLS VPN technologies in great detail.
The MPLS Layer 2 VPN models, EoMPLS (Ethernet over MPLS, a.k.a. AToM) and VPLS, will be compared with MPLS Layer 3 VPN.
Network designers should know the pros and cons of the technologies,
protocol alternatives and their capabilities from the design point of view.
Design Requirement: EoMPLS vs. VPLS vs. MPLS L3 VPN

Scalability for the Customer:
  - EoMPLS: Not scalable compared to VPLS and MPLS L3 VPN
  - VPLS: Very scalable architecture for the Layer 2 service
  - MPLS L3 VPN: Very scalable architecture for the Layer 3 service

Scalability for the Service Provider:
  - EoMPLS: Not good
  - VPLS: Same as EoMPLS if BGP Auto Discovery is not used; if BGP AD is used, better than EoMPLS

Service Type:
  - EoMPLS: Carries Layer 2 frames
  - VPLS: Carries Layer 2 frames
  - MPLS L3 VPN: Carries Layer 3 IP packets

Working on Full Mesh:
  - EoMPLS: Scalability is very bad for full mesh topology
  - VPLS: It works very well for the full mesh topology
  - MPLS L3 VPN: Very good with the MP-BGP VPN Route Reflectors, but RT Constraint should be used to hide unnecessary information from the unintended PE devices

Working on Hub and Spoke:
  - EoMPLS: Works quite well, but if the number of sites is too high, scalability becomes an issue for both the customer and the service provider
  - VPLS: Better than EoMPLS for both the Service Provider and the Customer from the scalability point of view
  - MPLS L3 VPN: Requires extra configuration on the Service Provider side, but it is doable and commonly used

Suitable as WAN technology:
  - EoMPLS: Yes, but not scalable
  - VPLS: Yes, it is very scalable
  - MPLS L3 VPN: Yes, it is very scalable

Suitable as DCI technology:
  - EoMPLS: It is suitable, but if there are many sites to interconnect, its scalability is not good
  - VPLS: It was originally designed as a Datacenter Interconnect technology; it is the most suitable one among all these three options
  - MPLS L3 VPN: It can be used as a Layer 3 datacenter interconnect technology but cannot provide Layer 2 extension, thus not good as DCI

Who controls the Backbone Routing:
  - EoMPLS: Customer
  - VPLS: Customer
  - MPLS L3 VPN: Service Provider

Standard Protocol:
  - EoMPLS: Yes, IETF Standard
  - VPLS: Yes, IETF Standard
  - MPLS L3 VPN: Yes, IETF Standard

Service Provider Staff Experience:
  - EoMPLS: Not well known
  - VPLS: Limited knowledge
  - MPLS L3 VPN: Well known

Routing Protocol Support:
  - EoMPLS: All routing protocols can be enabled over the Ethernet over MPLS service
  - VPLS: VPLS provides LAN emulation, so it allows Layer 2 to be stretched over the customer locations; any routing protocol can run over the VPLS service
  - MPLS L3 VPN: In theory any routing protocol can run as PE-CE, but most Service Providers only provide BGP and static routing

MPLS Traffic Engineering Support:
  - EoMPLS: Yes
  - VPLS: Yes
  - MPLS L3 VPN: Yes

Security:
  - EoMPLS: Same as Frame Relay; doesn't provide IPsec by default
  - VPLS: Same as Frame Relay; doesn't provide IPsec by default
  - MPLS L3 VPN: Same as Frame Relay; doesn't provide IPsec by default

Multicast Support:
  - EoMPLS: Yes
  - VPLS: Yes
  - MPLS L3 VPN: The Service Provider should offer it; otherwise the Customer has to create overlays to carry multicast traffic, which is why multicast support may not be good

Best technology for IPsec:
  - EoMPLS: GETVPN; it provides excellent scalability
  - VPLS: GETVPN; it provides excellent scalability
  - MPLS L3 VPN: GETVPN; it provides excellent scalability

Resource Requirement for the Service Provider:
  - EoMPLS: Best, since the PE devices don't have to keep the customer MAC addresses
  - VPLS: Worst, since the PE devices have to keep all the MAC addresses of the customer, and MAC addresses are not aggregatable
  - MPLS L3 VPN: Bad, since the PE devices have to keep the routing tables of the customer; but since IP addresses can be aggregated, some sites may not need the entire routing table of the customer

Resource Requirement for the Customer:
  - EoMPLS: Basic; it requires only a Layer 2 switch
  - VPLS: Basic; it requires only a Layer 2 switch
  - MPLS L3 VPN: More; it requires either a Layer 3 switch or a router at the customer site

IPv6 Support:
  - EoMPLS: Yes; the Service Provider is transparent to the IPv4 and IPv6 packets
  - VPLS: Yes; the Service Provider is transparent to the IPv4 and IPv6 packets
  - MPLS L3 VPN: Yes; with 6vPE technology it provides IPv6 support for the VPN customers

Hierarchy:
  - EoMPLS: None
  - VPLS: With H-VPLS, the full mesh PW requirement is avoided
  - MPLS L3 VPN: Route Reflector for the MP-BGP sessions between PE devices

Loop Prevention:
  - EoMPLS: It is only point-to-point; there is no chance of a loop
  - VPLS: In the core, split horizon prevents loops; if traffic comes from a PW, it is not sent back to another PW
  - MPLS L3 VPN: OSPF Down Bit, IS-IS Up/Down Bit, and EIGRP Site of Origin prevent loops when a CE is multihomed to MPLS L3 VPN PEs
Design Requirement: Single Carrier vs. Dual Carrier

Complexity:
  - Single Carrier: Less complex
  - Dual Carrier: More complex, due to more protocols technically, and also two different businesses to deal with

Cost:
  - Single Carrier: Disadvantage compared to the same level of availability with dual carrier
  - Dual Carrier: Two providers give an ability to discuss the costs with them
As can be seen from the above picture, the backdoor VPN link (best-effort, no service guarantee) is used as primary. The customer does not want that, because they pay for a guaranteed SLA, so they want to use the MPLS backbone as the primary path. OSPF sends prefixes over the backdoor link as Type 1 LSAs.
When PE2 at the remote site receives the prefixes via Type 1 OSPF LSAs, it doesn't generate a Type 3 LSA to send down to CE2.
Two approaches can help fix this problem. One option, shown below, is the OSPF Sham-link.
With the OSPF Sham-link, PE2 will send an OSPF Type 1 LSA towards CE2. Then, with only metric manipulation, the MPLS backbone can be made preferable.
Another approach would be to place the PE-CE link into Area 0. For the headquarters, Orefe would have already put those links in Area 0. If a multi-area design is required, then Orefe should place the branch offices in a non-backbone area.
Once the PE-CE links are placed in Area 0, the backdoor link should be placed in a different area. This makes CE1 and CE2 ABRs. Prefixes are received over the backdoor link as Type 3.
Without a Sham-link, they are also received as Type 3 (assuming the domain ID and process ID match between PEs), and then, with metric manipulation, the MPLS backbone can be made preferable.
SoO 10:500 is set on the PE3-CE3 and PE3-CE4 links. When PE4 receives the prefixes from PE3, it doesn't advertise the prefixes to CE3. SoO 10:500 is set on the PE1-CE1 link, and SoO 10:500 is set on the PE2-CE2 link.
SEAMLESS MPLS
Seamless MPLS provides the architectural baseline for creating a
scalable, resilient, and manageable network infrastructure.
Seamless MPLS architecture can be used to create large-scale MPLS
networks.
It reduces the operational touch points for service creation.
The Seamless MPLS architecture is best suited to very large scale Service Provider or Mobile Operator networks that have thousands or tens of thousands of access nodes and very large aggregation networks.
IP traffic increases rapidly due to video, cloud, mobile Internet,
multimedia services and so on. To cope with the growth rate of IP Traffic,
capacity should be increased but at the same time operational simplicity
should be maintained.
Since there might be thousands to tens of thousands of devices in the Access, Aggregation and Core networks of large Service Providers or Mobile Operators, extending MPLS into the Access networks comes with two main problems:
Large flat routing designs adversely affect the stability and convergence time of the IGP.
Resource problems occur on the low-end devices or some access nodes in large-scale networks.
In Seamless MPLS, the Access, Aggregation and Core networks are partitioned into different IP/MPLS domains.
Segmentation between the Aggregation and the Core networks can be
based on Single AS Multi Area design or Inter-AS.
Partitioning the Access, Aggregation and Core network layers into isolated IGP domains helps reduce the size of routing and forwarding tables on individual routers in these domains.
This provides better stability and faster convergence.
In Seamless MPLS, LDP is used for label distribution to build MPLS
LSPs within each independent IGP domain.
This enables a device inside an access, aggregation, or core domain to
have reachability via intra-domain LDP LSPs to any other device in the same
domain. Reachability across domains is achieved using RFC 3107
(BGP+Label). BGP is used as an Inter domain label distribution protocol.
Hierarchical LSP is created with BGP.
This allows the link state database of the IGP in each isolated domain to
remain as small as possible while all external reachability information is
carried via BGP that is designed to carry millions of routes.
In Single AS multi-area Seamless MPLS design, IBGP labeled unicast is
used to build inter-domain LSPs.
In Inter-AS Seamless MPLS design, IBGP labeled unicast is used to build
inter domain (Aggregation, Core domains) LSPs inside the AS.
EBGP labeled unicast is used to extend the end-to-end LSP across the AS
boundary.
There are at least five different Seamless MPLS models based on access
type and network size. Network designers can use any of the below models
based on the requirements.
1. Flat LDP Aggregation and Core
2. Labeled BGP access with flat LDP Aggregation and Core
3. Labeled BGP Aggregation and Core
4. Labeled BGP Access, Aggregation and Core
5. Labeled BGP Aggregation and Core with IGP Redistribution into
Access network
Let’s describe these models briefly.
Three models are defined in RFC 2547 for Inter-AS MPLS VPNs. Inter-AS Option A is the first; it is also known as Option 10A. There is also a fourth Inter-AS MPLS VPN deployment option, which was invented by Cisco. It is known as Option AB (a.k.a. Option D). Option AB will be explained in this chapter as well.
INTER-AS MPLS VPN OPTION A
Inter-AS Option A is the easiest, most flexible, and most secure inter-autonomous-system MPLS VPN technology.
Inter-AS Option A is also known as the back-to-back VRF approach.
Service providers treat each other as customers.
Between the service providers, there is no MPLS; only IP routes are
advertised. For each customer VPN, one logical or physical link is set up.
Over the link, any routing protocol can run.
But in order to carry end-to-end customer routing attributes, it is ideal to run the same IGP at the customer edge and between the ASBRs.
HOW INTER-AS MPLS VPN OPTION A WORKS
In the below topology, VPN customers A and B are connected to two different service providers via MPLS Layer 3 VPN.
In the topology shown above, AS10 and AS20 are the service providers. There are two Inter-AS MPLS VPN customers, VPN A and VPN B.
NOTE: All the customers of the service providers could run different
routing protocol at different sites. For example, while customer VPN A
who connected to the AS10, can run OSPF as PE-CE routing protocol, same
customer (VPN A) can run EIGRP as PE-CE routing protocol with the AS20.
In the service provider network, PE routers are connected to the customer
locations. This is regular MPLS Layer 3 VPN operation.
Only the technologies and protocols used at the ASBR differ among the
Inter-AS MPLS VPN deployment options. Inside the autonomous systems,
regular MPLS VPN rules apply.
PE routers run MP-BGP with a route reflector to advertise VPNv4 prefixes
(it could be a full-mesh MP-BGP VPNv4 deployment as well).
The PE router that connects the two providers at the edge is known as the ASBR
(Autonomous System Boundary Router), similar to the other Inter-AS MPLS
VPN options. The ASBR learns all the VPN routes of the Inter-AS customers. But
the main advantage from the scalability point of view in Inter-AS Option B is that
the ASBR doesn't have to keep the customer prefixes in a VRF, as was the case
in Inter-AS MPLS VPN Option A.
Inter-AS Option B does not require separate subinterfaces and different
routing protocols per subinterface between the ASBRs (another scalability
advantage, this time from the configuration complexity point of view).
In Inter-AS Option B, the VPNv4 address family is enabled between the ASBR
routers.
The ASBRs (Autonomous System Boundary Routers) advertise the customer
prefixes that are learned from the local BGP route reflector to the other Service
Provider through the MP-BGP VPNv4 session.
The Route Target extended community is placed on the VPNv4 update. The Route
Target helps place the VPN prefixes into the appropriate VRF (this is also
regular MPLS VPN behavior).
When the PE receives the customer IP prefixes, it changes the next hop to
itself (this is done by default in MPLS VPN; you don't have to enable
next-hop-self).
The customer IP prefixes are sent as VPN prefixes by adding a Route
Distinguisher to the IP prefixes, and the VPN prefixes are sent to the VPNv4
route reflector. The route reflector does not change the next hop; it only
reflects the route to the ASBR.
The ASBR does not place the MP-BGP prefixes (customer prefixes) into a VRF,
since it does not have to keep a VRF table; the customer prefixes are
maintained in the VPNv4 BGP table in Inter-AS Option B.
By changing the next hop, the SP-A ASBR sends the VPNv4 prefixes
through the MP-BGP session to the SP-B ASBR.
The SP-B ASBR sends the customer prefixes to its local route reflector.
The route reflector in the SP-B domain reflects the prefixes as is and sends them
to the PE that is connected to the Customer A2 location. The SP-B PE sets the next
hop again and sends the prefixes to the Customer A2 router.
As shown in the service provider domains, there are three LSPs, because
whenever the BGP next hop changes, the LSP is terminated at the point where the
next hop changed, and a new VPN label is assigned on that router for all the
VPN prefixes.
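The segmentation of the end-to-end LSP at each next-hop rewrite can be sketched in a few lines of Python. This is only a conceptual model of where LSPs terminate, not of real BGP machinery, and the router names are hypothetical:

```python
# Routers along the Option B control-plane path; True means the router
# rewrites the BGP next hop to itself (originating PE, both ASBRs, and
# the remote PE). Route reflectors reflect without changing the next hop.
path = [("PE1", True), ("RR-A", False), ("ASBR-A", True),
        ("ASBR-B", True), ("RR-B", False), ("PE2", True)]

def lsp_segments(path):
    """Split the end-to-end path at every router that rewrites the next hop."""
    anchors = [name for name, rewrites in path if rewrites]
    return list(zip(anchors, anchors[1:]))

# Each next-hop change terminates one LSP and starts a new one: three LSPs.
assert lsp_segments(path) == [("PE1", "ASBR-A"), ("ASBR-A", "ASBR-B"),
                              ("ASBR-B", "PE2")]
```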
Inter-AS Option B does not require LDP or IGP protocols between the
autonomous systems; thus the service providers do not need to know
each other's internal addressing structure.
Similar to Inter-AS Option A, you do not need to redistribute VPN
prefixes at the ASBR.
The route reflectors store all the VPN routing information for each customer,
and they advertise those prefixes to the ASBR.
Operators need to manage MP-BGP on the ASBRs as well as on the route
reflectors.
Caveats in Inter-AS MPLS VPN Option B:
By default, an ASBR does not accept a VPN prefix if there is no
corresponding Route Target for that VPN.
Since the ASBRs do not have a VRF or the Route Targets for the VPN
customers in Inter-AS MPLS VPN Option B, the ASBR would reject the VPN
prefixes. The "no bgp route-target filter" configuration knob allows a router to
accept VPN prefixes even if it doesn't have the corresponding Route
Target for that VPN.
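The default filtering behavior and the effect of disabling it can be modeled in a short Python sketch. This is a conceptual illustration only; the prefixes and RT values are hypothetical, not from any real configuration:

```python
# Model of ASBR route-target filtering in Inter-AS Option B.
# By default, a VPNv4 route is kept only if one of its route targets
# matches a locally configured VRF import RT. "no bgp route-target filter"
# disables this check, since an Option B ASBR has no VRFs for the customers.

def accepted_routes(routes, local_import_rts, rt_filter=True):
    """Return the VPNv4 routes the ASBR keeps in its BGP table."""
    if not rt_filter:                      # "no bgp route-target filter"
        return list(routes)
    return [r for r in routes
            if set(r["rts"]) & local_import_rts]

routes = [
    {"prefix": "10.1.1.0/24", "rts": ["65000:100"]},  # VPN A (hypothetical)
    {"prefix": "10.2.2.0/24", "rts": ["65000:200"]},  # VPN B (hypothetical)
]

# No local VRFs on the Option B ASBR: the default behavior drops everything.
assert accepted_routes(routes, set()) == []

# With the filter disabled, both VPN routes are retained.
assert len(accepted_routes(routes, set(), rt_filter=False)) == 2
```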
Route Target rewrite may be necessary on the ASBR routers, because the
Route Target attribute has to be the same end to end for a particular VPN.
If one AS uses a different RT for a given VPN, the other AS needs to rewrite it
and use the same RT value on the ASBR.
The answer is Inter-AS Option AB. As you can see from the above figure,
on the ASBRs, a separate subinterface is created per VRF.
This provides data plane isolation, and QoS configuration can be applied per
customer. As customer traffic is isolated via VRFs, better security is
achieved as well compared to a single interface.
The difference between Inter-AS Option AB and Inter-AS Option A
is that the customer prefixes are advertised through a single EBGP session between
the ASBRs in Option AB.
There is no separate EBGP session per VRF between the ASBRs, as is the
case in Inter-AS Option A.
The control plane traffic, that is, the routing advertisements and other routing
protocol packets, is sent through the single EBGP connection over the
global routing table.
The customer data plane traffic is sent as IP traffic, without MPLS
encapsulation.
USE CASE OF INTER-AS MPLS VPN OPTION AB:
Option AB can be used when a customer requires an MPLS VPN service from two
service providers with a strict QoS SLA and the number of Inter-AS MPLS VPN
customers between the two service providers is very large.
At least, it was initially created for these reasons, but in my opinion its real
applicability is the migration from Inter-AS Option A to Inter-AS
Option B. During the migration from Option A to Option B, Inter-AS
Option AB can be used as a transition solution.
Question 1:
Which VPN solution would be best to address this problem so that the
agreement can go forward?
Answer 1:
Among the available Inter-AS MPLS VPN options, Option B and Option
C are the most suitable ones because of the number of expected VPN
customers. However, Option C requires internal routing information such as
PE and VPN RR addresses to be leaked between the service providers.
So the best solution based on these requirements is Inter-AS MPLS VPN
Option B.
Question 2:
Based on the provided simplified network topologies of the two service
providers, please select the protocols which need to be used on the devices
that have a check box next to them.
Answer 2:
Below is the answer to the second question.
INTER-AS MPLS VPN OPTIONS COMPARISON
Default convergence in case of an ASBR failure:
• Option A: Slow; the VRF, RIB, FIB and LFIB all need to converge
• Option B: Fast; only the LFIB needs to converge
• Option C: Very fast, due to only the LDP adjacency between the ASBRs
Troubleshooting:
• Option A: Easy
• Option B: Moderate
• Option C: Hard; requires MPLS VPN, route reflector and good routing knowledge
Redistribution:
• Option A: Yes, for each customer VRF
• Option B: Only the interlink between the two domains is redistributed, if next-hop-self is not implemented on the local ASBR
• Option C: Yes; the Provider Edge router loopbacks and route reflector subnets
Merger & Acquisition:
• Option A: Not suitable if there is a time constraint for the operation; each and every customer VRF needs to be provisioned, thus the migration takes a very long time
• Option B: Requires MPLS between the ASBRs and VPN configuration on the ASBRs, but there is no configuration for each and every customer, thus the operation can be much faster compared to an Option A migration
• Option C: Same as Option B; additionally, since it is required to leak internal routing information between the two ASes, Option C is suitable for different administrative domains of the same company. That is why it is very suitable for a company merger design
Inter-AS MPLS VPN Options Comparison
MPLS CARRIER SUPPORTING CARRIER CASE STUDY
Smallnet is an ISP that provides broadband and business Internet to its
customers. Biggercom is a transit service provider of Smallnet, which
provides Layer 3 IP connectivity between Smallnet sites. Smallnet wants to
carry all its customers’ prefixes through BGP over Biggercom infrastructure.
Biggercom doesn’t want to carry more than 1000 prefixes of Smallnet in the
Smallnet VRF.
Smallnet has around 3200 customer prefixes.
Provide a scalable solution for Biggercom and explain the drawbacks of
this design, given that Biggercom provides IP connectivity for Smallnet and
does not want to carry more than 1000 of Smallnet’s prefixes through Carrier
supporting Carrier (CsC) architecture.
Four steps are required for MPLS traffic engineering to take place:
• Link-state protocols carry link attributes in their link-state advertisements
(LSAs).
• Based on the defined constraints, the traffic path is calculated with the
help of the Constrained Shortest Path First (CSPF) algorithm.
• The path is signaled by Resource Reservation Protocol (RSVP).
• Traffic is then sent to the MPLS traffic engineering tunnel.
LET’S TAKE A LOOK AT THESE STEPS IN DETAIL:
1. By default, link-state protocols send only connected interface addresses
and metric information to their neighbors. Based on this information, the
Shortest Path First (SPF) algorithm creates a tree and builds the topology of
the network. MPLS traffic engineering allows us to add some constraints. In
the above figure, let’s assume the R2-R5 link is 5 Mbit/s; R5-R6 is 10 Mbit/s;
and all the interfaces between the bottom routers are 6 Mbit/s.
If we want to set up a 6-Mbit/s tunnel, SPF will not even take the R2-R5-
R6 path into consideration, because the link from R2 to R5 does not satisfy
the minimum requirement.
In addition, we could assign an administrative attribute, also called a
“color,” to the link. For example, the R2-R5-R6 interfaces could be
designated blue, and the R2-R3-R4-R6 route could be assigned red. At the
headend, the constraint can then specify whether to use a path that contains a
red or blue color.
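The pruning step described above can be sketched in Python. This is a simplified illustration of the CSPF idea, assuming the link speeds and colors given in the example; a real CSPF also tracks reservable bandwidth per priority level:

```python
import heapq

# Links from the example: (node_a, node_b, bandwidth_mbps, color)
links = [
    ("R2", "R5", 5,  "blue"),
    ("R5", "R6", 10, "blue"),
    ("R2", "R3", 6,  "red"),
    ("R3", "R4", 6,  "red"),
    ("R4", "R6", 6,  "red"),
]

def cspf(links, src, dst, min_bw, color=None):
    """Prune links that fail the constraints, then run plain SPF (Dijkstra)."""
    graph = {}
    for a, b, bw, col in links:
        if bw < min_bw or (color and col != color):
            continue                      # constraint not met: prune the link
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    # Dijkstra with unit metrics on the pruned topology
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr in graph.get(node, []):
            heapq.heappush(queue, (cost + 1, nbr, path + [nbr]))
    return None                           # no path satisfies the constraints

# A 6 Mbit/s tunnel cannot use R2-R5 (only 5 Mbit/s): CSPF picks the bottom path.
assert cspf(links, "R2", "R6", min_bw=6) == ["R2", "R3", "R4", "R6"]
# A 5 Mbit/s tunnel constrained to "blue" links uses the top path.
assert cspf(links, "R2", "R6", min_bw=5, color="blue") == ["R2", "R5", "R6"]
```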
The color/affinity information, as well as how much bandwidth is
available, reserved, and unreserved for the tunnel, is carried within the link-
state packet. In order to carry this information, extensions have been
added to the link-state protocols.
Open Shortest Path First (OSPF) carries this information in the Opaque
LSA (Type 10 LSA), and Intermediate System to Intermediate System (IS-
IS) uses TLVs 22 and 135 for the traffic engineering information.
The below table summarizes the link attributes that are carried in OSPF and
IS-IS for MPLS Traffic Engineering purposes.
Some of that dropped traffic might be very important, so it is in our best
interest to protect it.
To make traffic engineering tunnels aware of the data plane traffic, the
"auto bandwidth" feature of MPLS traffic engineering might be used.
When auto bandwidth is enabled, the tunnel checks its traffic rate periodically
and signals a new LSP with the updated bandwidth.
If a new LSP is signaled in this way, only the 80 Mbit/s LSP can survive
over the 100 Mbit/s link; there is not enough bandwidth for the 40 Mbit/s
LSP.
If there is an alternative link, 40 Mbit/s of traffic can be shifted to that
link.
Otherwise, circuit capacity must be increased or a new circuit must be
purchased.
If there is no alternate link and no time to bring in a new circuit, QoS
could potentially be configured to protect critical traffic.
Diffserv QoS with MPLS traffic engineering is mature and commonly
used by service providers in these cases.
How can one MPLS traffic engineering LSP beat another LSP?
This is accomplished with the priority feature of the tunnels. Using
priority, some LSPs can be made more important than others. To achieve this,
the setup priority value of one LSP should be numerically smaller (i.e., better)
than the hold priority value of the other LSP.
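The bandwidth and priority interaction above can be sketched as follows. This is a toy admission-control model: the 100 Mbit/s link comes from the example, the priority values are hypothetical, and real RSVP-TE keeps per-priority reservable bandwidth:

```python
# Toy RSVP-TE admission model: lower numeric priority = more important.
# A new LSP may preempt an existing one if its setup priority is smaller
# than the existing LSP's hold priority.

def admit(link_capacity, existing, new_lsp):
    """Return the list of LSPs on the link after trying to admit new_lsp."""
    lsps = list(existing)
    free = link_capacity - sum(l["bw"] for l in lsps)
    # Preempt less important LSPs until the new one fits (or nothing is left).
    for victim in sorted(lsps, key=lambda l: -l["hold"]):
        if free >= new_lsp["bw"]:
            break
        if new_lsp["setup"] < victim["hold"]:
            lsps.remove(victim)
            free += victim["bw"]
    if free >= new_lsp["bw"]:
        lsps.append(new_lsp)
    return lsps

existing = [{"name": "LSP-40", "bw": 40, "setup": 5, "hold": 5}]
new_lsp  = {"name": "LSP-80", "bw": 80, "setup": 3, "hold": 3}

# 40 + 80 > 100: the 80 Mbit/s LSP preempts the 40 Mbit/s LSP (3 < 5).
result = admit(100, existing, new_lsp)
assert [l["name"] for l in result] == ["LSP-80"]
```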
Once the path is computed and signaled, it doesn’t mean that traffic by
default follows the traffic engineering path.
Actually, it still follows the underlying interior gateway protocol path.
Since traffic engineering can work only with the link-state protocols Open
Shortest Path First (OSPF) and Intermediate System to Intermediate System
(IS-IS), traffic follows the shortest path from the cost point of view.
There are many methods for sending traffic into the MPLS traffic
engineering LSP. These are static routing, policy-based routing, class-of-
service-based tunnel selection (CBTS), policy-based tunnel selection (PBTS),
Autoroute, and forwarding adjacency.
Static routing, policy-based routing and CBTS are static methods and can
be cumbersome to manage.
But to send specific, important traffic into tunnels, class-based tunnel
selection can be a good option.
Based on the EXP bit in the label stack, traffic can be classified and sent
to an LSP that is QoS-enabled for protection.
Autoroute and forwarding adjacency, on the other hand, are dynamic
methods to send traffic into traffic engineering LSPs.
MPLS Traffic Engineering Autoroute
By default, the shortest path is used for the destination prefix, and next-
hop resolution is done for the next direct connection. When the autoroute
feature is implemented, the next hop automatically becomes the destination
address of the tunnel tailend (the tunnel destination).
The drawback of this approach is that there is no traffic classification or
separation, so all the traffic, regardless of importance, is sent through the
LSP. Once MPLS traffic engineering is enabled and autoroute is used, traffic
can be inserted only at the ingress node (label-switched router). Any LSR
other than the ingress point is unable to insert traffic into the traffic
engineering LSP. Thus autoroute can only affect the path selection of the
ingress LSR.
MPLS Traffic Engineering Forwarding adjacency
Once we enable this feature, any MPLS traffic engineering tunnel is seen
as a “point-to-point link” from the interior gateway protocol point of view.
Even though traffic engineering tunnels are unidirectional, the protocol
running over an LSP in one direction should operate in the same way on the
return path in a point-to-point configuration.
3. MPLS Traffic Engineering Fast Reroute
Before explaining how fast reroute is used in the context of MPLS traffic
engineering, you’ll need to understand the basics of fast reroute.
In the below figure, there are two paths between Router 2 (R2) and
Router 6 (R6).
If we assume that Open Shortest Path First (OSPF) is used in this
topology, then based on end-to-end total link cost, the R2-R5-R6 path would
be chosen.
The information for the R2-R3-R4-R6 path is also kept in the OSPF link-
state database.
If the R2-R5-R6 path fails, the SPF algorithm runs on every router in the
same area, and R2 selects R3 as the next hop. It puts this information into the
routing table, and if the router supports separated control and data planes, the
routing information is distributed into a forwarding information base as well.
The detection of link failure, the propagation of information to every
device in the flooding domain, and calculating and installing the new paths
into the routing and forwarding tables of the devices will require some time.
Interior gateway protocol parameters for propagation and detection can be
changed, and convergence time might be reduced to even less than one
second.
But for some applications like voice, this may not be enough.
We may need latency to be less than 100 or 200 ms in order to reroute
traffic without experiencing adverse effects. MPLS traffic engineering can
often provide a backup path within 50 ms, because the alternate path is
calculated and installed into the routing and forwarding information bases
before failure happens.
Below figure summarizes the benefits of MPLS Traffic Engineering Fast
Reroute and also shows Primary and Backup TE LSPs.
If the R2-R5 link fails and we need to protect that link, we call that link
protection. Backup and pre-signaled paths can be created between R2-R3 and
R5, so that if the R2-R5 link fails, traffic is automatically redirected to the
backup path. Because the failure is local to R2, it is called local protection.
It’s also possible for R5 to fail. In this case, the R2-R3-R5 path will not
work, so we need to bypass R5 completely. An R2-R3-R4-R6 pre-signaled
path could be created for node protection purposes, because in this case, we
want to protect the node, rather than the link.
Below figure summarizes MPLS Traffic Engineering Fast Reroute Link
Protection operation.
Most failures in networks are link failures. Node failure is less
common compared to link failure. Thus, many networks enable only link
protection. MPLS Traffic Engineering Fast Reroute can cover all the failure
scenarios. An IP fast reroute technology such as LFA (Loop-Free Alternate)
requires highly meshed topologies to find an alternate path, which will be
programmed in the data plane.
If the topology is a ring, LFA cannot work; a tunnel to the PQ node is
required. Remote LFA is another IP fast reroute technology, which allows a
tunnel to be created from the PLR (Point of Local Repair) to the PQ node.
There are more fast reroute protection mechanisms besides MPLS Traffic
Engineering. In the below section these mechanisms are briefly introduced.
As depicted in the above picture, the green path is a backup path, and it
cannot pass through any devices or links that the primary LSP passes through.
The second biggest drawback of MPLS Traffic Engineering path
protection, as opposed to local protection with link or node protection,
is the number of LSPs.
Since one backup LSP is created for each primary LSP, the number of
RSVP-TE LSPs will almost double compared to 1:N local protection
mechanisms. In transport networks, SONET/SDH, OTN and MPLS-TP all
have linear protection schemes, which are very similar to MPLS Traffic
Engineering path protection.
If the decision is made together with the transport team, it may be suggested to
continue with their operational model, but in the end the core network will have
scalability and manageability problems.
Finally, switching to the alternate path in path protection might be slower
than with local protection mechanisms, since the point of local repair (the node
which detects the failure) may not be directly connected to the failure point.
Thus, the failure has to be signaled to the headend, which might be many hops
away from the failure point.
In the above topology, even if the router in the middle of the topology fails,
the failure has to be signaled to R2, and R2 switches over to the backup (green)
LSP.
Assume all the link costs are the same and a link-state protocol is used. In the
above figure, if the R1-R2 link fails, R1 needs to find a way to send packets
to the destination networks behind R2.
When the R1-R2 link fails, in both IP and MPLS networks, if R1 sends a
packet to R3, then, since all the link costs are the same, R3's next hop for R2 is
R1. This is called a micro-loop. Until R3 learns of the failure and computes its
new primary next hop, R5, packets loop between R1 and R3. This
can cause congestion on the link.
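The micro-loop can be demonstrated with a short Python sketch, using the six-router ring from the figure with unit link costs. This is purely illustrative:

```python
import heapq

# Ring topology from the figure, all link costs equal to 1.
EDGES = [("R1", "R2"), ("R2", "R4"), ("R4", "R6"),
         ("R6", "R5"), ("R5", "R3"), ("R3", "R1")]

def next_hop(edges, src, dst):
    """First hop of the shortest path from src to dst (Dijkstra, unit cost)."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    queue, seen = [(0, src, None)], {}
    while queue:
        cost, node, first = heapq.heappop(queue)
        if node in seen:
            continue
        seen[node] = first
        for nbr in graph.get(node, []):
            heapq.heappush(queue, (cost + 1, nbr, first or nbr))
    return seen.get(dst)

# Before the failure, R3 forwards traffic for R2 via R1.
assert next_hop(EDGES, "R3", "R2") == "R1"

# After R1-R2 fails, R1's new next hop toward R2 is R3. Until R3 also
# reconverges (to R5), packets bounce between R1 and R3: a micro-loop.
failed = [e for e in EDGES if e != ("R1", "R2")]
assert next_hop(failed, "R1", "R2") == "R3"
assert next_hop(failed, "R3", "R2") == "R5"
```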
The Loop-Free Alternate mechanism looks for an alternate path for R1 to
send packets to R2 when the R1-R2 link fails. In order for the mechanism to work,
R1 runs additional SPF calculations from its neighbors' point of view. It is
obvious that R1 cannot use R3 as its alternate next hop. All the other five
mechanisms can solve this issue in different ways. This is the
drawback of LFA: it may provide neither node nor link protection, because it
is very topology dependent. Some topologies (RFC 6571) can work
very well.
If somehow R3 knew that the packet was coming from its primary next
hop and that it should send the packet to its alternate next hop, then the packet could
reach R2 through R1-R3-R5-R6-R4-R2. This is called a U-turn alternate,
since the packet is sent back to R3 from R1 without causing a micro-loop.
For the mechanism to work, R3 is either explicitly marked or implicitly learns that
the packet is coming from its primary next hop. Also, R3 needs to have a loop-free
node-protecting alternate path. With a loop-free alternate, traffic is sent to a neighbor
that will not send it back. With a U-turn alternate, traffic is sent to a neighbor
that will send it to its own alternate instead of back.
The other three mechanisms rely on tunnels. Before going into further
explanation, it is important to understand some general concepts.
First, there is Remote LFA. The basic concept is to find a node that the
PLR can reach without going through the failure point and where that node
can also reach the destination (or a proxy for the destination) without going
through the failure point. Then the PLR can tunnel traffic to this node and it
will reach the destination without going across the failure point.
To find this node, there are two steps. First, the PLR determines all nodes
that it can reach without going through the primary next-hop. This set of
nodes is called the extended P-space. Either the PLR’s shortest path to these
nodes avoids the primary next-hop or the PLR has a neighbor whose shortest
path to these nodes avoids the primary next-hop.
• For example, the set of routers that can be reached from R1 without
traversing R1-R2 is called the extended P-space of R1 with respect to the
link R1-R2.
Second, the set of routers from which the node R2 can be reached, by
normal forwarding, without traversing the link R1-R2 is termed the Q-space
of R2 with respect to the link R1-R2.
Any nodes that are both in extended P space and Q space are candidate
PQ nodes that can be the end of the repair tunnel.
In the above example, R6 is a PQ node. R3 and R5 are not in the Q space,
because when either of them decapsulates a packet destined to R2, the packet would be
sent back to R1 and thus cause a loop.
R4 is not in the extended P space, because neither R1 nor R3 can reach R4
without potentially going via R1-R2. While any tunnel technology, such as IP
in IP, GRE or L2TPv3, could be used, current implementations depend upon
MPLS tunnels signaled via LDP (and targeted LDP sessions to protect
LDP traffic).
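The extended P-space, Q-space and PQ-node computation can be sketched in Python for the ring topology in the figure. This is an illustrative simplification of the RFC 7490 procedure, assuming unit link costs:

```python
import heapq

# Ring topology from the figure, unit cost on every link.
EDGES = [("R1", "R2"), ("R2", "R4"), ("R4", "R6"),
         ("R6", "R5"), ("R5", "R3"), ("R3", "R1")]
S, E, COST = "R1", "R2", 1           # protected link S-E

def dists(edges, root):
    """Shortest-path distances from root (Dijkstra, unit cost)."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    dist, queue = {}, [(0, root)]
    while queue:
        d, n = heapq.heappop(queue)
        if n in dist:
            continue
        dist[n] = d
        for nbr in graph[n]:
            heapq.heappush(queue, (d + 1, nbr))
    return dist

nodes = {n for e in EDGES for n in e}
d = {n: dists(EDGES, n) for n in nodes}

# n is reachable from root without the link S-E only if its shortest path
# is strictly cheaper than any path forced through that link (ties are
# excluded, because an ECMP path might still traverse the failed link).
def avoids_link(root, n):
    via = min(d[root][S] + COST + d[E][n], d[root][E] + COST + d[S][n])
    return d[root][n] < via

p_space = {n for n in nodes if avoids_link(S, n)}
ext_p = p_space | {n for n in nodes if avoids_link("R3", n)}  # R3: S's other neighbor
q_space = {n for n in nodes if d[n][E] < d[n][S] + COST}

# R6 is the only PQ node: R1 can tunnel to R6, and R6 reaches R2 loop-free.
assert ext_p & q_space == {"R6"}
```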
When R6 is chosen as the tunnel endpoint, once the packet is decapsulated, it
is sent towards R2 without looping back. Different
implementations may use different policies to select among the PQ nodes.
Remote LFA works based on this principle. Currently only MPLS tunnels
are supported, and R6 is chosen as the MPLS tunnel endpoint in this topology. For
an IP packet, R1 stores an alternate next hop of R3 together with the MPLS label that
R3 provided to reach R6.
If the packet has an LDP-distributed label, R1 must learn the MPLS label
that R6 uses for the FEC for R2; this can be done via a targeted LDP session.
Then, the label on the packet is swapped to one that R6 understands to
mean R2, and finally the label that R3 understands to mean R6 is pushed
on the packet. R6, as a penultimate router, does PHP, and R4 receives a
labeled packet for R2.
This basically describes how Remote LFA can be used to provide link
protection. Like LFA, Remote LFA is not guaranteed to find link-protecting
alternates in a topology, but it significantly improves the coverage
compared to LFA. Additional computation can be done so that Remote LFA
finds node-protecting alternates when available.
The second tunneling mechanism is MPLS RSVP-TE Fast Reroute, which can
provide guaranteed protection against link, node and SRLG failures. When
used for local repair, an LSP can be created to protect against a particular link
or node failing; each LSP requires a CSPF computation at the PLR.
For example, R1 could have an LSP to protect against the link R1-R2 failing;
that LSP would be routed R1-R3-R5-R6-R4-R2. This LSP can be used as
an alternate. Just as with any tunneling mechanism, targeted LDP
sessions are needed to learn the labels that protect the LDP traffic.
Since the failure is local, the alternate path can be used as soon as the failure
is detected. Assume there is an LSP from R1 to R4. Since MPLS TE LSPs use
the shortest IGP path by default, the R1-R2-R4 LSP is established.
If the R1-R2 link is to be protected, then a backup tunnel is configured through
the nodes R1-R3-R5-R6-R4-R2.
To reach the same destination behind R4, the traffic flow would be R1-R3-
R5-R6-R4-R2-R4. There is obvious hairpinning: the destination is R4, but since
the backup LSP has to be terminated on R2 (it is link protection, so a next-hop
tunnel is configured), traffic comes to R2 by passing through R4 and then goes
from R2 back to R4.
This is a well-known and common problem in MPLS TE-FRR
networks.
It appears unnecessarily with fast reroute when the repair endpoint is
pinned to the next hop (for link protection) or the next-next hop (for node
protection).
Lastly, the third tunneling mechanism is Not-Via, which can also guarantee
protection against link, node and SRLG failures. To accomplish this, each router
is given additional IP addresses with extra semantics.
For instance, R2 would have an address that means "R2, but not via R1-
R2". To find the next hop for "R2, but not via R1-R2", each router would
remove R1-R2 from the network graph and then compute an SPF. The
computation can be optimized with iSPF, but many iSPF runs can be needed
(one per failure point).
The alternate from R1 to R2 would thus involve tunneling the packet by
adding a header with a destination address of "R2, but not via R1-R2".
The path from R1 to "R2, but not via R1-R2" is R1-R3-R5-R6-R4-R2.
Because of the special semantics of the Not-Via address, R3 knows that it
shouldn't use the R1-R2 link to reach R2, and it sends the packets to R5.
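The Not-Via computation, removing the protected link and rerunning SPF, can be sketched as follows, using unit link costs on the ring from the figure. This is illustrative only:

```python
import heapq

# Ring topology from the figure, unit link costs.
EDGES = [("R1", "R2"), ("R2", "R4"), ("R4", "R6"),
         ("R6", "R5"), ("R5", "R3"), ("R3", "R1")]

def shortest_path(edges, src, dst):
    """Plain Dijkstra returning the node sequence from src to dst."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr in graph.get(node, []):
            heapq.heappush(queue, (cost + 1, nbr, path + [nbr]))
    return None

# The Not-Via address "R2, but not via R1-R2" is resolved on the topology
# with the protected link removed; every router computes the same repair path.
not_via = [e for e in EDGES if e != ("R1", "R2")]
assert shortest_path(not_via, "R1", "R2") == ["R1", "R3", "R5", "R6", "R4", "R2"]
```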
IP FRR vs. MPLS TE FRR design comparison:
Scalability:
• IP FRR: More scalable
• MPLS TE FRR: Less scalable; it uses RSVP for label distribution and tunnel creation, RSVP is soft state, and refreshing the tunnel state is resource intensive
Working on full mesh:
• IP FRR: Works very well, since IP FRR mechanisms need the topology to be highly meshed to find an alternate path
• MPLS TE FRR: Works very well, because if the constraints are met, TE FRR can find an alternate path in any topology
Working on a ring topology:
• IP FRR: Works very badly; it requires tunneling mechanisms such as GRE or MPLS to find a node which will not send the traffic back
• MPLS TE FRR: Already uses a tunnel, so it can protect a link, a node or the entire path in a ring topology as well
Working on a square topology:
• IP FRR: The worst topology for IP FRR mechanisms, since finding a node which won't send the traffic back requires extra processing
• MPLS TE FRR: Finding an alternate tunnel is the same as in the other topologies
Suitable on Wide Area Networks:
• IP FRR: Yes
• MPLS TE FRR: Yes
Standard protocol:
• IP FRR: LFA, RLFA and TI-LFA (Cisco proprietary)
• MPLS TE FRR: Yes, IETF standard
Staff experience:
• IP FRR: Not well known
• MPLS TE FRR: Has been out there for quite some time and is deployed in many networks; it is well known
Link protection:
• IP FRR: Yes
• MPLS TE FRR: Yes
Node protection:
• IP FRR: Yes
• MPLS TE FRR: Yes
Path protection:
• IP FRR: No
• MPLS TE FRR: Yes
Complexity:
• IP FRR: Easy
• MPLS TE FRR: Complex
SRLG protection:
• IP FRR: No
• MPLS TE FRR: Yes
Maturity:
• IP FRR: Very new technology, not commonly used by the industry
• MPLS TE FRR: Very old technology, used in many ISP, VPN SP, Mobile SP and some large Enterprise networks for years
Control plane protocols:
• IP FRR: IP; it uses only the IPv4 or IPv6 routing control plane for its operation
• MPLS TE FRR: The IPv4 routing control plane, and RSVP-TE is used as a control plane
Resource requirement:
• IP FRR: Minimal
• MPLS TE FRR: High
IPv6 support:
• IP FRR: Yes
• MPLS TE FRR: No
Coverage:
• IP FRR: Generally bad; if the topology is highly meshed it is good, otherwise finding a repair/alternate path is very hard, and link metrics should be arranged very carefully
• MPLS TE FRR: Can cover every topology; ring, square, partial-mesh and full-mesh can be covered 100%
Load balancing over the backup path:
• IP FRR: If there are multiple repair/backup nodes, traffic can be shared between them
• MPLS TE FRR: If there are multiple repair/backup nodes, multiple tunnels need to be created for load sharing
Training cost:
• IP FRR: Cheap
• MPLS TE FRR: Moderate
Troubleshooting:
• IP FRR: Easy
• MPLS TE FRR: Hard
Routing loop:
• IP FRR: Finds a node which won't send the traffic back via reverse SPF. Reverse SPF allows the node to calculate the SPF from its neighbor's point of view; the same concept is used in BGP Optimal Route Reflector placement as well
• MPLS TE FRR: It uses MPLS in the data plane and receives a label over the protection tunnel; creating a loop in MPLS is almost impossible
Question 1:
Which datacenter interconnect solution is most appropriate for this
company and why?
A. OTV
B. LISP
C. EoMPLS
D. TRILL
E. Fabricpath
F. VPLS
Answer 1:
The company is looking for a standards-based Layer 2 DCI solution. We
know that they are looking for Layer 2 extension, since they have applications
that require a non-IP heartbeat.
Since OTV and FabricPath are Cisco-specific solutions, they cannot be
used. Also, FabricPath is not recommended for use as a DCI solution.
LISP is not a Layer 2 extension protocol.
EoMPLS could be used, but since the company has a lot of datacenters, it is
not scalable.
TRILL is not recommended as a DCI solution.
The best option for the given parameters is VPLS.
Question 2: The company sent their topology, as shown below. Is
there a solution to minimize the effect on specific VLANs in case their DC
interconnect switch and the service provider link go down?
Answer 1:
MPLS VPNs and MPLS Traffic Engineering are applications of
MPLS. They were not the initial purpose of MPLS; these capabilities were
invented and added to MPLS over time.
MPLS is an encapsulation/tunneling mechanism. It is not a routing
protocol; that's why it is not an alternative to the routing protocols.
MPLS provides virtualization with MPLS VPNs, but MPLS VPNs were
not the initial purpose of inventing MPLS.
The initial purpose of MPLS was to avoid the IP destination-based lookup and
increase the performance of the routers. Thus the correct answer to this
question is ' A '.
Question 2 :
Which of the options below are the characteristics of MPLS Layer 2 VPN
service?
A . MPLS Layer 2 VPN allows carrying of Layer 2 information
over service provider backbone.
B . Layer 2 VPN can provide point-to-point type of connectivity
between customer sites.
C. It is used to carry Layer 3 routing information of the customers
over the service providers.
D. It is used for datacenter interconnect.
E . Layer 2 VPN can provide point-to-multi- point type of
connectivity between customer sites.
Answer 2:
MPLS Layer 2 VPNs don't carry Layer 3 routing between the customer
sites. All the other options are correct for the MPLS Layer 2 VPN service.
Question 3 :
Which of the below statements describe MPLS Layer 3 VPN service?
A . Service Provider network is transparent to routing of the
customer
B. It offloads routing between sites of the customer to the Service
Provider
C. It improves network convergence time
D. OSPF is most common routing protocol between customer and
the Service Provider
Answer 3:
MPLS Layer 3 VPN is a peer-to-peer service. The customer and the Service
Provider are routing peers. The Service Provider controls the WAN routing of
the customer, so the SP network is not transparent to the routing of the customer.
Customer routing is offloaded to the Service Provider in MPLS Layer 3 VPN.
Thus the correct answer to this question is ' B '.
Network convergence time is not improved with MPLS Layer 3 VPN. In
fact, convergence is much better with MPLS Layer 2 VPN.
Service Providers can support OSPF as a PE-CE routing protocol, but
it is not the most common protocol. In fact, Static Routing and BGP are the
most common routing protocols with the MPLS Layer 3 VPN service.
Question 4 :
An Enterprise Company is using MPLS Layer 3 VPN for their Wide Area
Network connections.
Their IGP protocol is EIGRP.
The company's Service Provider is using IS-IS as the internal
infrastructure IGP in their backbone and LDP for label distribution.
Which protocols/technologies should be used at Points A, B and C in the
below topology?
Answer 4:
The correct answer is Point A: EIGRP; Point B: EIGRP+MPLS+IS-IS+MP-
BGP+Redistribution+VRF; Point C: MPLS+IS-IS.
The PE router has to support customer routing as well as infrastructure
routing. The customer routing protocol in this question is EIGRP, and the Service
Provider is using IS-IS.
That's why IS-IS has to be enabled on the PE and P devices.
EIGRP should also run on the PE. VRF has to be enabled on the
PE, and redistribution from EIGRP into MP-BGP and from MP-BGP into EIGRP is
necessary. MPLS has to be enabled on both the PE and P devices.
MP-BGP is only necessary on the PE devices. MPLS removed the
need for BGP in the core.
Question 5 :
Which below option depicts Inter-AS MPLS VPNs Option B
deployment?
A.
B.
C.
D.
Answer 5:
Picture A shows Inter-AS MPLS VPN Option A, which is the back-to-back
VRF option.
Picture B is very close to Inter-AS MPLS VPN Option B, but it is not
correct, since the ASBRs between the two ASes in Option B have a VPNv4 connection.
Picture B shows IPv4+label; there is no such option in Inter-AS
MPLS VPNs.
Picture C shows Inter-AS MPLS VPN Option B. Thus the correct answer
to this question is ' C '.
Picture D shows Inter-AS MPLS VPN Option C.
Question 6:
Due to their customer growth, a Service Provider Company is re-evaluating
their addressing plan. They want to ensure that the address spaces of their
Enterprise MPLS Layer 3 VPN customers don't overlap on their PE devices.
Which option below should the SP Company use to avoid overlapping
address space from the customers?
Question 7:
An Enterprise Company has 6 datacenters. Between the datacenters, they
have non-IP clustering heartbeat traffic. They are looking for a scalable
solution that will allow future growth.
They don't want to implement any vendor-specific solution between the
datacenters.
Their Service Provider is able to provide MPLS services.
Which Datacenter Interconnect solution is most appropriate for this
company?
A. OTV
B. LISP
C. EoMPLS
D. TRILL
E. FabricPath
F. VPLS
Answer 7:
The company is looking for a standards-based Layer 2 DCI (Datacenter
Interconnect) solution.
We understand that they are looking for a Layer 2 extension, since they
have an application that requires non-IP heartbeat traffic.
Since OTV and FabricPath are Cisco-specific solutions, they cannot be
chosen based on the given requirements. OTV also has some scalability
limits, so it shouldn't be chosen in any decent DCI deployment, and
FabricPath is not a recommended DCI solution for many reasons.
LISP is not a L2 extension protocol. Although there have been some attempts
to provide Layer 2 over LISP, there is no standard for that.
EoMPLS (Ethernet over MPLS) could be used, but since the company has a
lot of datacenters, it is not a scalable solution, and the question states
that the company is looking for a scalable one.
TRILL is not a recommended DCI solution either. Inside the datacenter
it can be used as one of the fabric technologies, but not for multiple
datacenter interconnect.
The best option based on the given requirements is VPLS. It is standards-based,
used by many companies around the world, proven to scale, and satisfies all
the requirements in the question.
Question 8:
Which option below is correct for Rosen GRE Multicast in the MPLS Layer
3 VPN service?
A. Multicast traffic is carried over GRE tunnels
B. Unicast traffic is carried over GRE tunnels
C. LDP is used as the control plane for Rosen GRE in the Service
Provider network
D. Multicast traffic is carried over an LDP LSP
E. GRE tunnels are created between the customer sites.
Answer 8:
In the Rosen GRE Multicast approach, GRE tunnels are created in the
Service Provider network, not in the customer network. Thus Answer E is
incorrect.
The customer's multicast traffic is carried over the Service Provider's GRE
tunnels. LDP is not used for the Multicast control plane.
The unicast transport/tunnel LSP can be used for Unicast but not for
Multicast. Thus the correct answer of this question is ‘ A’.
Question 9:
A fictitious Service Provider Company runs MPLS Traffic Engineering on
their network. They protect both MPLS Layer 2 and Layer 3 VPN service
customers with MPLS Traffic Engineering Fast Reroute.
The company has chosen to deploy local protection rather than path
protection, since they know that local protection can provide better fast
reroute times in case of failure.
They deployed full mesh link and node protection LSPs.
Which of the failure scenarios below can the Service Provider Company
cover?
A. PE-CE link failure
B. CE node failure
C. PE node failure
D. P node failure
E. P to P link failure
F. PE to P link failure
Answer 9:
The question states that the company is doing MPLS Traffic
Engineering local protection. As explained in the MPLS chapter, two of
the local protection mechanisms are link and node protection.
With link and node protection, edge device failures and edge link failures
cannot be protected. Those failures could be covered with the BGP PIC Edge
feature, but the question specifically asks about MPLS Traffic Engineering
link and node protection.
The P node failure, PE-to-P link failure and P-to-P link failure scenarios can
be protected with TE FRR backup LSPs, since none of them is an edge failure
case. Thus the correct answers are ‘ D’, ‘ E’ and ‘ F’.
Question 10:
Which of the options below are used in the MPLS header?
A. 20 bits MPLS label space.
B. Link cost
C. 12 bits TTL field.
D. 3 bits EXP field for the QoS.
E. Protocol number
Answer 10:
The link cost and protocol number are not in the MPLS header.
The MPLS header contains the Label, TTL and EXP fields.
The Label field is 20 bits, EXP is 3 bits and TTL is 8 bits long.
In the question, however, the TTL field is shown as 12 bits in Option C, so
that option is wrong.
The correct answer of this question is A and D.
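The bit layout above can be illustrated with a short Python sketch. The helper name is made up for illustration; the field widths follow RFC 3032 (20-bit label, 3-bit EXP, 1-bit bottom-of-stack, 8-bit TTL):

```python
# Minimal sketch: unpack a 32-bit MPLS label stack entry into its fields.
# Field widths per RFC 3032: 20-bit Label, 3-bit EXP, 1-bit S, 8-bit TTL.

def parse_mpls_entry(word: int) -> dict:
    """Split one 32-bit MPLS label stack entry into its four fields."""
    return {
        "label": (word >> 12) & 0xFFFFF,  # top 20 bits
        "exp":   (word >> 9)  & 0x7,      # 3 bits, used for QoS marking
        "s":     (word >> 8)  & 0x1,      # bottom-of-stack flag
        "ttl":   word         & 0xFF,     # 8 bits, not 12 as in Option C
    }

# Example entry: label 100, EXP 5, bottom of stack, TTL 255
entry = (100 << 12) | (5 << 9) | (1 << 8) | 255
fields = parse_mpls_entry(entry)
```

The 20-bit label field is also why the MPLS label space contains 2^20 (about one million) values.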
Question 11:
What are the characteristics of the below topology? (Choose all that
apply)
A. It doesn’t support flow based load balancing
B. There is no spanning tree in the network core
C. Split horizon is enabled in the network core
D. It advertises the MAC addresses through the BGP control plane
E. MAC Address information is learned through dataplane
F. It requires full-mesh point to point PW between the VPN sites
G. IS-IS is used to advertise MAC addresses between the sites
Answer 11:
In order to answer this question correctly, you should first understand the
topology. The picture shows the VPLS architecture.
VPLS uses data plane learning: MAC address information from the
customer site is learned through the data plane. There is no MAC address
advertisement through the control plane (EVPN does that, though).
In the network core as well, MAC addresses are learned through the data
plane. Routing protocols such as BGP or IS-IS are not used to advertise MAC
address information. Spanning tree is not used in the network core. Split
horizon is enabled in the network core:
if traffic is received from a PW, it is not sent back out another PW, since a
full mesh of point-to-point PWs has to be enabled between the VPN sites.
VPLS doesn't support flow-based load balancing; EVPN does.
Thus the correct answer of this question is A, B, C, E and F.
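The split horizon rule described above can be sketched in a few lines of Python. The port names and the helper function are hypothetical, purely to illustrate the forwarding logic:

```python
# Sketch of VPLS split-horizon flooding (illustrative, not vendor code).
# Ports are labelled either "ac" (attachment circuit) or "pw" (pseudowire).
# A frame received on a PW is never flooded back out over another PW:
# the full mesh of PWs guarantees every remote PE already got a copy.

def flood_targets(ingress: str, ports: dict) -> list:
    """Return the ports an unknown-unicast frame is flooded to."""
    targets = []
    for name, kind in ports.items():
        if name == ingress:
            continue                      # never back out the ingress port
        if ports[ingress] == "pw" and kind == "pw":
            continue                      # split horizon: PW -> PW blocked
        targets.append(name)
    return sorted(targets)

ports = {"ac1": "ac", "ac2": "ac", "pw-to-PE2": "pw", "pw-to-PE3": "pw"}
```

A frame arriving on "pw-to-PE2" is flooded only to the attachment circuits, while a frame arriving on a local attachment circuit is flooded to everything else, including all pseudowires.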
Question 12:
When designing an IS-IS network with MPLS, when is route leaking
from Level 2 into a Level 1 sub-domain required?
A. If the PE loopbacks will be carried in BGP
B. If there are PE devices in the L1 sub-domain
C. If there is more than one L1-L2 router
D. If there are low-end devices in the L1 sub-domain
Answer 12:
When designing an IS-IS network, the problem with MPLS is that the PE
devices' loopback IP addresses are not sent into the IS-IS L1 domain.
In an IS-IS L1 domain, internal routers only receive the ATT (Attached) bit
from the L1-L2 router. This bit is used for default routing.
In order to have MPLS Layer 3 VPN, PE devices should be able to reach
each other, and the MPLS LDP LSP should be set up end to end.
If the PE loopback is not sent, the end-to-end LSP cannot be set up. The
answer to this question is ‘ B’.
If there is more than one L1-L2 router, still only a default route is sent into
the L1 sub-domain/area.
If the PE loopback will be carried in BGP, which is called BGP+Label or
BGP LU (Labeled Unicast), then there is no need for route leaking; but since
the question asks when leaking is required, the answer is ‘ B’.
Question 13:
Which options below can be used as a PE-CE routing protocol in MPLS
Layer 3 VPN? (Choose all that apply).
A. IS-IS
B. BGP
C. PIM
D. HSRP
E. OSPF
F. Static Route
Answer 13:
PIM and HSRP are not routing protocols; they cannot be used as PE-CE
routing protocols in the context of MPLS Layer 3 VPNs.
OSPF, IS-IS, RIP, EIGRP, BGP and static routing are all supported as
MPLS VPN PE-CE routing protocols in theory. In practice, most Service
Providers only offer Static Routing and BGP.
But the question asks which protocols can be used.
Thus the correct answer of this question is ‘A’, ‘B’, ‘E’, and ‘F’.
Question 14:
In an MPLS VPN, which option below is correct if unique/different
RDs and the same RT are configured on the PE devices for a particular VPN?
A. Routes are rejected by the remote PEs.
B. Routes are accepted by the remote PEs and don't consume
extra resources
C. Routes are accepted by the remote PEs and consume extra
resources
D. They cannot be sent from the local PE, since RD and RT
should be the same across PE devices in a particular VPN
Answer 14:
For a particular VPN, different RD and RT values can be configured.
The local PEs advertise the routes and the remote PEs accept them.
But the routes consume extra resources on the PEs, since they are different
VPN prefixes. When the RD value is prepended to an IP prefix, a VPN prefix
is created; the RD value is used to create distinct VPN prefixes in an MPLS
VPN environment. Thus the correct answer of this question is ‘ C’.
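How the RD turns overlapping customer prefixes into distinct VPN prefixes can be sketched as follows. The RD values and prefixes are made up, and real VPNv4 NLRI is binary; this only illustrates the concept:

```python
# Sketch: the Route Distinguisher makes overlapping customer prefixes
# unique. RD values and prefixes here are invented for illustration.

def vpn_prefix(rd: str, prefix: str) -> str:
    """Prepend the Route Distinguisher to form a VPNv4 prefix."""
    return f"{rd}:{prefix}"

# Two VRFs (or two PEs using a unique RD per VRF per PE) can announce
# the same IPv4 prefix; BGP treats the resulting VPNv4 routes as distinct.
a = vpn_prefix("65000:1", "10.1.1.0/24")
b = vpn_prefix("65000:2", "10.1.1.0/24")
```

Because `a` and `b` are different VPNv4 prefixes, both are stored and advertised, which is also why different RDs consume extra resources on the PEs.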
Question 15:
What is the reason for using a unique/different RD per VRF per PE in an
MPLS VPN environment?
A. It is not good practice to use a unique RD per VRF per PE.
B. It is used to send different VPN prefixes to the VPN RR
C. It is used to send the same VPN prefix to the VPN RR
D. It is used for scalability purposes
Answer 15:
A unique RD is a common approach in an MPLS VPN environment.
It is a best practice because, with a unique RD per VRF per PE, the VPN RR
(Route Reflector) can receive more than one BGP next hop for a given VPN
site from the local PEs, and the remote PEs can receive more than one best
path from the VPN RR.
These paths can be used for Hot Potato (optimal) routing, fast reroute and
Multipath purposes. Thus the correct answer of this question is ‘ B’.
Question 16:
A European Service Provider Company recently acquired a smaller Service
Provider in Dubai. They want to merge the two MPLS VPNs via the Internet.
They want the solution to be deployed very quickly so they can start offering
end-to-end MPLS service to their customers.
Which technologies below can satisfy all the given requirements?
A. MPLS over GETVPN
B. MPLS over GRE
C. MPLS VPWS
D. MPLS VPLS
E. MPLS over L2TPv3
F. MPLS over IPv6
Answer 16:
The two important points in this question are that the solution should run
over the Internet and should be deployed quickly.
GETVPN cannot run over the Internet due to IP Header Preservation, so
MPLS over GETVPN cannot be the answer.
IPv6 is not a tunneling mechanism that MPLS can run over, so MPLS over
IPv6 is not an answer either.
VPWS and VPLS could be set up if the two service providers deployed a
long-haul link between their core devices, but that is not an Internet-based
solution and would take too much time to provision.
The only remaining solutions that can run over the Internet and be deployed
quickly are MPLS over GRE and MPLS over L2TPv3.
Question 17:
What are the two possible options to create an MPLS Layer 2 VPN
pseudowire?
A. Martini Draft, LDP signalled pseudowires
B. Segment Routing
C. BGP EVPN
D. Rosen GRE Draft
E. Kompella Draft, BGP signalled pseudowires
Answer 17:
The two different methods to create MPLS Layer 2 VPNs are the Kompella
and Martini methods.
As explained in the MPLS chapter, the Kompella method uses BGP for
pseudowire signaling and LDP for the transport LSP. The Martini method
uses LDP for both pseudowire signaling and the transport LSP.
Rosen GRE is a Multicast application on an MPLS VPN network and is not
used for Layer 2 VPN.
BGP EVPN is used to advertise the customer's MAC address information
between the PEs, so it provides MPLS Layer 2 VPN as well. But since the
question asks about MPLS Layer 2 VPN pseudowire creation, and there is
no pseudowire in BGP EVPN, this option is wrong.
The correct answer of this question is A and E.
Question 18:
Which options below are correct for MPLS-TP (MPLS Transport Profile)
as a transport mechanism? (Choose all that apply)
A. MPLS TP requires routing control plane
B. MPLS TP requires Penultimate Hop Popping
C . MPLS TP is a newer packet transport mechanism which
replaces SONET/SDH
D. MPLS TP brings extra OAM capability to MPLS OAM.
E . MPLS TP benefits from ECMP (Equal Cost Multi Path) for
better link utilization
F. MPLS TP uses Label 13 (GAL) for OAM purpose
Answer 18:
MPLS-TP, as explained in the MPLS chapter, is a newer packet
transport mechanism that replaces SONET/SDH. Today there is much
discussion comparing MPLS-TP and Carrier Ethernet in the SP access
domain.
MPLS-TP doesn't use PHP, ECMP or a routing control plane. Thus
‘A’, ‘ B’ and ‘ E’ are wrong answers.
One of the most important reasons to deploy MPLS-TP is its excellent OAM
capability. It uses the IANA-assigned Label 13 (GAL) for OAM operations.
Thus the correct answers are ‘ C’, ‘ D’ and ‘ F’.
Question 19:
An Enterprise Company receives an MPLS Layer 2 VPN service from the
Service Provider. The Enterprise topology is Hub and Spoke.
With which devices do the Enterprise spoke routers form an IGP
adjacency?
A. Hub CE Routers
B. Other Spoke CE routers
C. Hub PE routers
D. Spoke PE routers
Answer 19:
This question tests whether you know MPLS Layer 2 VPN behavior.
In an MPLS Layer 2 VPN, the CE routers form IGP adjacencies with each
other, not with the PE routers.
Thus options C and D are wrong.
Also, since the question states that the company's topology is
Hub and Spoke, the spokes shouldn't form an IGP adjacency with each other.
That's why the answer of this question is ‘ A’, Hub CE routers.
Question 20:
Which of the options below are results of having MPLS in the
network? (Choose all that apply)
A. BGP Free Core
B . Hiding service specific information (customer prefixes, etc.)
from the core
C. More scalable network
D. Faster convergence
E. Better security
Answer 20:
MPLS removes the need for BGP in the core. P devices don't know the
customer information;
they don't keep Layer 2 or Layer 3 information of the customer. This
provides scalability for the core, but it is not enough to say that the overall
scalability of the network increases with MPLS.
If the question had said scalability of the core, that option would be correct.
MPLS doesn't bring security by default. If security is needed, then IPSEC
should run on top of it. The best IPSEC solution for MPLS VPNs is
GETVPN, since it provides excellent scalability.
A network can also converge fast without MPLS. MPLS TE FRR
is a fast reroute mechanism which can provide sub-200 msec data plane
convergence for MPLS-encapsulated traffic.
The same data plane fast reroute convergence can be provided with IP FRR
mechanisms such as LFA, Remote LFA or Topology Independent LFA.
Thus the correct answer of this question is A and B.
Question 21:
If a customer is looking to carry Layer 2 traffic with encryption, which
options below can be chosen?
A. VPLS
B. EoMPLS
C. GET VPN
D. MACsec 802.1AE
E. IPSEC
F. L2TPv3
Answer 21:
The question is looking for a technology that provides both Layer 2 VPN
and encryption.
VPLS, EoMPLS and L2TPv3 are used to provide Layer 2 VPN service
across a Layer 3 infrastructure.
VPLS and EoMPLS do this with MPLS; L2TPv3 doesn't require MPLS
but accomplishes it over IP.
But none of them support encryption.
The only correct answer of this question is MACsec, which is ‘ D’.
Question 22:
Which of the options below can be used to extend VRFs across a Campus
network if there are not many VRFs? (Choose all that apply)
A. 802.1q Trunk
B. GRE tunnels
C. CDP
D. RSVP-TE
E. LDP LSPs
Answer 22:
If there are not many VRFs, there is no scalability concern. LDP LSPs
could be set up to carry them if there were too many VRFs.
Since the small number of VRFs is given in the question as a requirement,
the best and easiest options are 802.1q trunks and GRE tunnels to carry the
VRFs across a campus network. Thus the correct answers are ‘ A’ and ‘ B’.
Question 23:
Which of the options below are correct for Inter-AS MPLS VPN
Option A?
A . It provides the most flexible QoS deployment compared to
other Inter-AS MPLS VPN options.
B. It is least secure Inter-AS option.
C. It is most scalable Inter-AS option.
D. It requires MPLS between the Autonomous Systems.
E . BGP+Label (RFC3107) is used between two Autonomous
Systems.
Answer 23:
Inter-AS Option A provides the most flexible QoS deployment, since there
are separate interfaces per customer. It is the most secure Inter-AS option,
since there is no information sharing between the Autonomous Systems.
It is the least scalable Inter-AS option, since it requires per-customer
configuration and the ASBRs keep much more information compared to the
other Inter-AS VPN options.
The Inter-AS MPLS VPN Option comparison charts in the MPLS Chapter
provide detailed information on the pros and cons of each method.
Option A doesn't require MPLS between the Autonomous Systems, and
BGP+Label (RFC 3107) is not required either.
The correct answer of this question is ‘ A’.
Question 24:
An Enterprise Company is using OSPF on their network and has Frame
Relay transport. They want to receive MPLS VPN service as well and
continue with OSPF as the PE-CE protocol. They have received a good SLA
for the MPLS VPN service from the Service Provider, so they want to use
the MPLS VPN link for all their traffic.
Which feature below should the MPLS VPN Service Provider enable to
ensure that in steady state the MPLS VPN link is always used?
A. OSPF Super backbone
B. OSPF Sham link
C. OSPF Virtual link
D. MP-BGP (Multi Protocol BGP)
E. Multi area OSPF
Answer 24:
As explained in the MPLS Chapter, when OSPF is used as the PE-CE
routing protocol and there is a backdoor link, the backdoor link can end up
being used if the Service Provider doesn't set up an OSPF sham link.
When OSPF is used as a PE-CE protocol, the service provider backbone is
called the Super backbone, but that is unrelated to the question. The only
way of ensuring that the MPLS VPN link is used as the primary link is the
OSPF Sham link. Thus the correct answer is ‘ B’.
Question 25:
Which of the terms below are used to define a label that provides
reachability from one PE to another PE in MPLS networks? (Choose all that
apply)
A. Topmost Label
B. Transport Label
C. Outer Label
D. VC Label
E. VPN Label
F. Tunnel Label
Answer 25:
Topmost label, Transport label, Outer label and Tunnel label are all used
for the label that defines the end-to-end LSP between the PE devices.
They can be used interchangeably, since they define the same thing.
Over this PE-to-PE reachability, MPLS Layer 2 and Layer 3 VPNs and
MPLS Traffic Engineering tunnels are created. Thus the correct answers are
‘ A’, ‘ B’, ‘ C’ and ‘ F’.
Question 26:
Which attributes below are carried in link state protocol messages in
MPLS Traffic Engineering for constraint-based path computation? (Choose
all that apply).
A. Link bandwidth
B. Link delay
C. Link utilization
D. Link jitter
E. Link affinity/color
Answer 26:
For constraint-based computation purposes, the link's reserved bandwidth,
unreserved bandwidth, and used and unused bandwidth are carried in the
protocol messages.
OSPF and IS-IS carry this information: OSPF does it with Opaque LSAs;
IS-IS carries it in TLVs 22 and 135.
Link delay and jitter information is not carried. Link affinity (a.k.a.
coloring) information is carried for Shared Risk Link Group purposes: links
which use the same fiber conduit, the same transport equipment or even the
same building can be avoided, and disjoint LSPs can be set up.
Link utilization is data plane information and cannot be carried. Routers
can act locally and change the LSP state if utilization on the link increases,
by configuring the ‘ Auto-bandwidth ‘ feature, but link utilization
information is not carried between the devices.
Thus the answer of this question is ‘ A’ and ‘ E’.
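The constraint-based computation described above can be sketched as a pruned Dijkstra run: links that cannot satisfy the requested bandwidth are removed first, then a shortest path is computed on what remains. Topology, node names and bandwidth numbers are invented for illustration:

```python
# Illustrative constrained SPF: prune links whose unreserved bandwidth
# is below the LSP request, then run plain Dijkstra on the rest.
import heapq

def cspf(links, src, dst, bw_needed):
    """links: list of (a, b, igp_cost, unreserved_bw). Returns a path or None."""
    graph = {}
    for a, b, cost, bw in links:
        if bw >= bw_needed:               # the constraint: enough bandwidth
            graph.setdefault(a, []).append((b, cost))
            graph.setdefault(b, []).append((a, cost))
    pq, seen = [(0, src, [src])], set()
    while pq:
        dist, node, path = heapq.heappop(pq)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (dist + cost, nxt, path + [nxt]))
    return None                            # no path satisfies the constraint

links = [("PE1", "P1", 10, 500), ("P1", "PE2", 10, 500),
         ("PE1", "P2", 5, 100), ("P2", "PE2", 5, 100)]
```

With a small request, the cheaper path via P2 wins; a larger request prunes the low-bandwidth P2 links and the LSP is placed via P1 instead.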
Question 27:
Which options below provide Control Plane MAC address advertisement
for MPLS Layer 2 VPNs?
A. EVPN
B. VPLS
C. EoMPLS
D. BGP L3VPN
E. PBB EVPN
F. VXLAN EVPN
Answer 27:
Only EVPN provides Layer 2 MAC advertisement through the control plane.
VPLS does its Layer 2 VPN learning through the data plane.
BGP L3VPN is used for Layer 3 prefixes, not for MAC addresses.
EVPN can use different data planes for scalability purposes; common ones
are PBB EVPN and VXLAN EVPN.
Thus the correct answer of this question is A, E and F.
Question 28:
What are the requirements to run MPLS Traffic Engineering in the
network with constraint-based SPF?
A. Extensions to routing protocols
B. RSVP
C. LDP
D. BFD
E. BGP
Answer 28:
MPLS Traffic Engineering can be enabled either in a distributed or a
centralized manner.
If the TE LSPs will be computed at a central location with offline
MPLS TE tools, link state routing protocols are not required, and CSPF is
not used either.
If there is no offline tool to compute the MPLS TE topology, the routers
should run link state routing protocols and CSPF (Constraint-based SPF)
should be enabled.
CSPF can access the TED (Traffic Engineering Database) with the help
of the routing protocol extensions.
Also, RSVP has to be enabled on every link where MPLS TE is
required.
Since the question says it will be used with constraint-based SPF, we need
routing protocol extensions, and only OSPF and IS-IS can provide them.
LDP, BGP and BFD are not required to run MPLS TE, though BFD can
help with fast failure detection.
The correct answer of this question is A and B.
Question 29:
A Service Provider creates a network design that runs MPLS in its WAN
backbone, using IS-IS as the infrastructure IGP routing protocol.
What would be two effects of additionally implementing MPLS-TE?
(Choose all that apply)
A. For sub-second convergence, MPLS TE FRR is required
B. MPLS Traffic Engineering and IS-IS cannot be used together
C. MPLS Traffic Engineering overcomes the problems in a Multi-Level
IS-IS design
D. MPLS Traffic Engineering is required to create a backup path
independently from IS-IS
E. To route different MPLS QoS classes through different paths,
MPLS Traffic Engineering is required
Answer 29:
For sub-second convergence, MPLS Traffic Engineering is not
required if the IGP protocol is IS-IS; IS-IS can be tuned to converge in under
a second, as shown in the IS-IS chapter. Option A is incorrect.
MPLS Traffic Engineering works well with both IS-IS and OSPF, thus
Option B is incorrect.
MPLS Traffic Engineering doesn't solve the Multi-Level IS-IS traffic
engineering issue; in fact, a Multi-Level design creates one,
because MPLS TE requires topology information, but in a Multi-Level IS-IS
design topology information is not sent between levels.
Thus Option C is incorrect as well.
MPLS Traffic Engineering allows a backup path to be used; this is
explained with the Fish diagram in the MPLS Chapter.
Also, different MPLS QoS classes can be routed through different paths
with MPLS Traffic Engineering at the Headend router.
The correct answer of this question is D and E.
Question 30:
An Enterprise Company wants to upgrade their legacy Frame Relay WAN
circuits to MPLS. Based on the migration steps below, can you choose the
best option for a smooth migration?
Answer 30:
In migration questions, the first step should be choosing the transit site.
For some period of time, that site will have both Frame Relay and
MPLS VPN connections.
The second step should be arranging a new circuit at the transit site and
configuring the required protocol there.
After that, the remote site circuits can be enabled one by one and their
configuration can be done. The MPLS service is preferred over the legacy
service. Each migrated site reaches the sites that have not yet been migrated
through the transit site.
QoS and security operations are done after the routing protocol
configuration at the remote site.
When one site is finished, its legacy circuit is removed and the next remote
site's provisioning starts.
When all the remote sites have been migrated to MPLS, the transit site's
legacy circuit is removed as well.
Thus the correct order of operations for this question is as below.
Choose a transit site for the communication between migrated and non-
migrated sites
Establish a new circuit at the transit site
Establish a new circuit at the remote site
Establish BGP over the new circuit
Arrange the routing protocol metric to choose MPLS over Frame Relay
Enable QoS and monitoring for the new MPLS connection
Remove the Frame Relay circuit from the remote site
Remove the Frame Relay circuit from the transit site
VIDEOS
Ciscolive Session – BRKRST-2021
Ciscolive Session – BRKMPL-2100
https://www.youtube.com/watch?v=DcBtot5u_Dk
https://www.nanog.org/meetings/nanog37/presentations/mpls.mp4
https://www.youtube.com/watch?v=p_Wmtyh4kS0
https://www.nanog.org/meetings/nanog33/presentations/l2-vpn.mp4
ARTICLES
http://orhanergun.net/2015/02/carrier-supporting-carrier-csc
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise
http://orhanergun.net/2015/06/advanced-carrier-supporting-carrier-design/
http://d2zmdbbm9feqrf.cloudfront.net/2013/usa/pdf/BRKMPL-2100.pdf
CHAPTER 12
CCDE PRACTICAL SCENARIO
SPEEDNET TELECOM
Background Documentation
SpeedNet Background Information
SpeedNet Telecom is a US Service Provider Company, which was
founded in 1990. The company started their business with residential
wireline dial-up customers. In the beginning they had a Metropolitan
Wireless infrastructure in rural areas. When broadband became mainstream,
they started to deploy DSLAMs in every major city throughout the U.S.
At the beginning of 2000, SpeedNet started to deliver Metro Ethernet service
as well. They deployed 1000 CPE devices throughout the U.S. and started to
provide MPLS VPN services for business customers. They have upgraded
their core backbone uplinks twice in the past. Although inside the POPs they
have 10Gbps and 40Gbps uplinks, all their POPs are connected through a
minimum of 2x10Gbps.
Today SpeedNet serves around 6.5 million customers, including more
than 5.5 million residential broadband customers, L2 and L3 VPN services,
some rural wireless deployments and so on.
Their customer growth forecast is 30% year over year for the next 5-6
years. All the forecasts have been accurate so far.
Their internal IPv4 addressing scheme is as below:
For Internal purposes they are using 10.0.0.0/8 block:
/16 per Data Center
/16 per Region
/31 for point to point links
/32 for the Loopbacks
They don’t currently have IPv6 in their network due to lack of demand
from their customers. SpeedNet haven’t implemented QoS on their network
since they rely on 50% rule on their core network. So if any of the redundant
links fail, remaining link doesn’t become congested. When the overall
bandwidth requirement exceeds 50% of the link capacity, they upgrade the
link capacity.
Their IGP is flat ISIS L1. They are running BGP as external protocol and
they are also providing BGP as a PE-CE protocol to their MPLS L3 VPN
customer due to the corporate network security policy and the operational
challenges with the other protocols.
Services provided to clients:
Residential Internet Access
L3VPN for Business Customers
L2VPN VPLS for Business Customers
L2VPN VPWS for Business Customers
MPLS NNI (Inter-AS Connections with some providers)
Metropolitan Wireless
SpeedNet is using a TCP-based, home-built application as their CRM.
It is very sensitive to any kind of delay and drops.
There is also a billing system that primarily uses IPFIX to communicate
with the networking hardware, and their corporate file exchange protocol is
NFS.
SpeedNet doesn't currently have Multicast on their network, but all their
Layer 2 switches support IGMP Snooping and MLD, and their routers
support all Multicast routing protocols.
SpeedNet is internally using Voice over IP and video conferencing.
They are utilizing FTP heavily, specifically for their HR applications. They
are also using an entertainment application, but they don't want these
applications to consume too much bandwidth.
The main concern and top priority is that the merged network should be
able to serve all of SpeedNet's current MPLS customers.
1. WHAT INFORMATION DO YOU NEED FROM SPEEDNET TO START MERGING THE TWO NETWORKS?
Answer 1:
The IP address information is already provided in the background
documentation, which is why you shouldn't ask for it again.
In the CCDE exam, if information is already provided, you can't ask for it
again. This is an analyze-the-design type of question.
IGP routing is given as a flat IS-IS Level 1 design, so you don't need to
ask about it either.
You don't know whether SpeedNet is using a single AS or multiple ASes
(such as a confederation); we need more information on this, which is why
the BGP architecture should be learned.
The POP architecture should be provided as well, so that if we need to
remove some POPs we can plan accordingly.
SpeedNet's backbone speed is already provided. QoS information is
provided as well, and you wouldn't need it in order to start merging the
two networks anyway.
It is already given that SpeedNet is providing MPLS VPN to their
customers.
That's why you should learn the BGP and POP architectures, and the
answer is C and D.
Also please note that if you need to choose two items, as in this
question, you will be told to choose two in the real exam. If you needed
to choose three, the question would tell you to choose three.
2. WHAT INFORMATION DO YOU NEED FROM HYPERCOM TO START MERGING THE TWO NETWORKS?
Answer 2:
Hypercom's IPv4 addressing scheme is not provided; in order to
understand whether there is a conflict in the merged network, we need to
learn Hypercom's IP addressing.
The IGP routing information was provided, so you cannot ask for it again.
The BGP architecture should be learned as well. We don't know how their
internal and external BGP is set up: do they have full mesh IBGP, a Route
Reflector design or a Confederation?
The POP architecture should be learned too: how their POP locations are
connected to the DCs, the DCs to the core network, and so on.
QoS information can be asked for later if needed; it is not required to start
the merging design. We know from the background information that they
don't support MPLS services, so you cannot ask about that, since it is
already known.
The answer of this question is A, C and D.
EMAIL 2
This is some of the information we have been able to get for you:
• SpeedNet's BGP architecture is a single AS. Each POP
location has a separate BGP RR for Internet and VPN services.
• We decided to continue with the existing POPs and DCs for now. In the
future we can reevaluate, but currently we will not redesign any physical
location.
• Hypercom's backbone uplink between the POPs is 2x10G links. They are
considering connecting the POPs via direct links, since there is too much
overhead with GRE tunnels and they don't want to see GRE tunnels on their
network.
• Hypercom is using the 10.10.0.0/16, 10.0.0.0/16, 172.16.0.0/18 and
172.22.0.0/16 IP addressing blocks
• Hypercom currently has full mesh IBGP peering.
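Given the address blocks above, the overlap with SpeedNet's internal 10.0.0.0/8 block can be checked with a short Python sketch using the standard ipaddress module (illustrative only; the blocks come from the scenario):

```python
# Quick overlap check between SpeedNet's internal block and Hypercom's
# blocks, using the prefixes given in the scenario.
import ipaddress

speednet = ipaddress.ip_network("10.0.0.0/8")
hypercom = ["10.10.0.0/16", "10.0.0.0/16", "172.16.0.0/18", "172.22.0.0/16"]

# Keep every Hypercom block that falls inside SpeedNet's internal space.
conflicts = [p for p in hypercom
             if ipaddress.ip_network(p).overlaps(speednet)]
```

Both 10.x.0.0/16 blocks conflict with SpeedNet's 10.0.0.0/8, which is exactly the kind of clash the merged addressing design has to resolve.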
Answer 3:
There is no known problem with the current IS-IS design. A detailed
answer will be provided in the subsequent answers.
4. WHAT MIGHT BE THE CONCERN OF THE SPEEDNET FOR THEIR IS-IS?
A. Migration from IS-IS L1 to multiple flooding domains is hard
B. IS-IS L1 does not support traffic engineering
C. Redistribution is not possible into IS-IS L1
D. IS-IS L1 is not a scalable solution
Answer 4:
Flat IS-IS L1 is no different from IS-IS L2 from a scalability point of
view, which is why Option D is not correct.
Redistribution into IS-IS L1 domains is possible, so Option C is not correct
either.
IS-IS L1 supports Traffic Engineering, and so far there is no requirement
for MPLS Traffic Engineering; nor is it a feature that every network must
have. Thus Option B is not the correct answer either.
Option A is definitely correct: in general it is hard to migrate from a flat
IS-IS L1 design to multiple flooding domains, such as L1 in the POP and L2
in the Core. The answer of this question is Option A.
But this is not a realistic concern in the current situation, since we don't
know whether the merged network will have a scalability problem. If
scalability and manageability are okay for the customer, the IGP can later be
migrated either to a different protocol or to multiple levels.
5. SHOULD THEY MIGRATE THE IGP PROTOCOLS TO RUN A COMMON IGP FOR
THE MERGED NETWORK?
A. Yes
B. No
Answer 5:
There is no need currently. SpeedNet's main concern and top priority is to
extend their MPLS services, as given in the initial background
information.
Providing Inter-AS MPLS VPN services doesn't require a common
IGP, although using a common IGP would provide additional benefits,
especially in an MPLS network. But this can be decided after the first phase
of the merge; right now there is no need.
6. WHAT CAN BE THE PROBLEM IF HYPERCOM WANTS TO DEPLOY A BGP ROUTE
REFLECTOR BASED ON THEIR CURRENT BGP DESIGN? (CHOOSE TWO)
A. They would lose path visibility
B. There is no problem, it is the same as running full mesh
C. BGP RR always brings benefits to BGP design
D. BGP RR puts additional load onto the control plane
E. BGP RR can cause suboptimal routing
Answer 6:
The classical problem of a BGP Route Reflector is reduced path visibility. If there is more than one exit point from the domain for the same prefix, the BGP RR selects the best path from its own point of view and sends only that path to all of its clients. That's why Option A is one of the correct answers. Fewer total paths can cause suboptimal routing for some BGP RR clients. Thus Option E is the other correct answer.
BGP RR doesn't put an additional burden on the control plane; it actually removes load compared with a full-mesh IBGP design.
BGP Route Reflector placement, best practices and the design
recommendations were explained in the BGP Chapter of the book.
That’s why answer of this question is A and E.
Answer 7:
Except for BGP PIC, all the options are used to send more than one BGP path to a BGP speaker. For BGP PIC to function properly, one of those options must be used.
In the exam they don't ask 'Choose all that apply'; instead they ask 'Choose Two', 'Choose Three' and so on. The number of options they want will be given.
Additional information: In an MPLS VPN network, using a unique RD per PE provides the same functionality. That's why, to send more than one path for a given customer prefix, a unique RD per PE is the best option in an MPLS VPN network.
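The unique-RD technique can be sketched as below; the VRF name and values are assumptions. Because the RDs differ, the RR treats the two PEs' advertisements of the same customer prefix as distinct VPNv4 routes and propagates both:

```
! Hypothetical sketch: unique RD per PE for the same VPN
! On PE1:
ip vrf CUST-A
 rd 65000:101                  ! PE1-specific RD
 route-target both 65000:100   ! import/export RT shared by the VPN
! On PE2:
ip vrf CUST-A
 rd 65000:102                  ! different RD, same VPN
 route-target both 65000:100
```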
EMAIL 3
One of our customers asked us about the best way to provide connectivity
between their HQs and the remote sites. Could you help us out?
8. PLEASE FILL IN THE TABLE BELOW
Answer 8:
In the exam, you will not fill in the blanks. They may provide an already filled table and ask you to choose the correct option, or you may select the correct option from a drop-down menu.
9. WHAT IS THE BEST SOLUTION FOR VPN SECURITY WITH MINIMAL OPEX?
A. GETVPN
B. DMVPN
C. mGRE
D. P2P IPSec
Answer 9:
GETVPN provides minimum OPEX. More information is available in the
online classes.
EMAIL 4
10. WHICH OPTION BELOW WOULD BE THE BEST SHORT-TERM SOLUTION IN THIS CASE?
Answer 10:
Using tactical MPLS Traffic Engineering. Their problem is the routing metric: they cannot use their free capacity, because IGP routing protocols choose the shortest path.
You don't need QoS, GRE tunnels, PBR and so on. If the IGP metric is carefully chosen and there are still links that are not used, MPLS Traffic Engineering allows you to utilize the available bandwidth efficiently.
The strategic and tactical MPLS Traffic Engineering approaches were explained in the MPLS chapter in detail.
Answer 11:
Using strategic MPLS Traffic Engineering. Since they cannot use their available uplinks because the IGP only utilizes the shortest path, strategic MPLS Traffic Engineering helps them in the long term to provide guaranteed services and better capacity usage.
Details of the strategic and tactical MPLS Traffic Engineering approaches were provided in the MPLS chapter of the book.
To understand why tactical MPLS Traffic Engineering has been chosen as the short-term solution and strategic MPLS Traffic Engineering as the long-term solution, please read the MPLS Traffic Engineering section of the MPLS chapter.
EMAIL 5
It seems that MPLS Traffic Engineering Strategic Approach can provide
us a better capacity management. Can you help us to setup MPLS Traffic
Engineering on our network?
We will also have a series of questions for you regarding MPLS Traffic Engineering. We have also been told to provide QoS across the new network within the next couple of months. We need your expert recommendations.
Answer 12:
LDP, MP-BGP, VRF and Send-Label (BGP + Label, RFC 3107) are not required for MPLS Traffic Engineering.
MPLS TE tunnels are unidirectional. If traffic will be placed in MPLS TE tunnels, unidirectional tunnels should be created in both directions.
The answer to this question is Options B, D, E and G.
Please refer to the MPLS Traffic Engineering section of the MPLS chapter for details.
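What is required is a link-state IGP with TE extensions and RSVP-TE on the links. A minimal, hypothetical IOS-style sketch of these building blocks (the process number, interface and bandwidth value are assumptions):

```
! Hypothetical sketch: baseline MPLS TE enablement
mpls traffic-eng tunnels                 ! enable TE globally
router ospf 1
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0                 ! flood TE attributes via Opaque LSAs
interface GigabitEthernet0/0
 mpls traffic-eng tunnels                ! enable TE on the link
 ip rsvp bandwidth 50000                 ! reservable bandwidth in kb/s (assumed)
```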
EMAIL 6
We created the MPLS TE tunnels; RSVP and the other necessary extensions are in place, but unfortunately our traffic doesn't go through the TE tunnels.
Once you help us get the traffic into the MPLS TE tunnels, one little thing will be left.
13. WHY DO YOU THINK SPEEDNET CANNOT SEND TRAFFIC INTO THE MPLS TE
TUNNELS ALTHOUGH EVERYTHING IS SET?
A. Multicast traffic can pass but unicast traffic might have an issue
B. Routing table should point to the tunnel interface for the TE destination prefixes
C. TE tunnel links must be advertised into the IGP protocol
D. SpeedNet probably didn’t create reverse unidirectional tunnel
Answer 13:
If Multicast were set up in the network and unicast traffic passed over the TE tunnels, Multicast could follow it. But SpeedNet doesn't say they have a problem with Multicast; in fact, they haven't said anything about Multicast yet.
TE tunnels don't need to be advertised as links into the routing protocol for the IGP to include the MPLS TE link in the SPF calculation. That is done through forwarding adjacency, and it could be one of the solutions for putting traffic into the MPLS TE tunnel, but since Option C says 'TE tunnel links must be advertised into the IGP protocol', the statement is wrong.
Even if SpeedNet didn't create the reverse unidirectional tunnel, traffic would still follow the MPLS TE tunnel in one direction and the return traffic would follow the IGP shortest path. But SpeedNet cannot place traffic into the TE tunnel at all; nothing is said about it working in one direction.
That's why the answer is Option B. The routing table should show that destinations behind the MPLS TE tail-end are reachable via the tunnel interface. This can be done via many methods (static route, PBR, CBTS, Autoroute Announce, Forwarding Adjacency).
Thus, Option B is the correct answer.
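To make this concrete, below is a minimal, hypothetical IOS-style sketch of steering traffic into a TE tunnel with Autoroute Announce; the interface names, destination address and path option are assumptions, not taken from the scenario:

```
! Hypothetical sketch: Autoroute Announce makes the IGP install the
! TE tunnel as the next hop for destinations behind the tail-end
interface Tunnel1
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.0.0.9                    ! tail-end router ID (assumed)
 tunnel mpls traffic-eng autoroute announce     ! include tunnel in SPF results
 tunnel mpls traffic-eng path-option 10 dynamic
```

A static route pointing to the tunnel interface or Forwarding Adjacency would achieve the same goal with different trade-offs.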
A. Yes, it’s possible but its not a good idea to run both IntServ and
DiffServ in the same network
B. No, there’s no specific restrictions and they can both run in the
same network
Answer 14:
The answer is Option A. It is possible, but since IntServ and DiffServ are two different approaches to QoS design, as explained in the QoS chapter of the book, they shouldn't be used together on the same network.
15. WE ARE CONSIDERING SEVERAL QOS MODELS. WHICH ONE IS THE BEST FIT
FOR US?
A. 1 PQ, 3 BQ
B. 1 PQ, 4 BQ
C. 3 BQ
D. 5 BQ
E. 3 PQ, 1 BQ
Answer 15:
SpeedNet's application profile was provided in the background document as well as in Email 6. Based on the given information:
Voice and video conferencing should go into the PQ. The SAP and HR applications are business critical, so they should be placed in the same queue, separate from the bulk traffic.
NFS and FTP are bulk traffic; we can place them in the same queue, but in a different bandwidth queue than the business-critical traffic.
The company allows gaming/entertainment applications, which should go into the scavenger queue, and the rest of the traffic should go to best effort.
That's why we need 1 PQ and 4 BQ.
EMAIL 7
Answer 16:
The answer is 'closer to the application': since all the users have the problem at the same time, the problem should be with the application.
All of the traffic should pass through the nodes on which the probe is running. The correct answer is Option A.
EMAIL 8
Since we have completed the merger, we have many requests from SpeedNet customers to extend their current VPN networks to locations where Hypercom has a presence. SpeedNet wants to separate their core network tasks from the inter-domain tasks, which is why they implemented new ASBR routers, but Hypercom is fine with using the existing routers for the new inter-domain communication.
A. Inter-AS Option A
B. Inter-AS Option B
C. Inter-AS Option C
D. Redistributing the prefixes of two networks to each other
Answer 17:
Since there are overlapping IP addresses between SpeedNet and Hypercom, we cannot leak the internal prefixes, so Option C cannot be chosen.
Option A cannot be chosen either, since the email states that many requests are coming for the Inter-AS service.
A very detailed explanation of Inter-AS MPLS VPNs is provided in the MPLS chapter of the book.
The answer is Option B.
Answer 18:
It fits SpeedNet's scalability needs. It is not the most secure option; Inter-AS Option A is the most secure Inter-AS MPLS VPN solution.
It is not the easiest to configure either; Inter-AS Option A is the easiest one, but when the number of Inter-AS MPLS customers grows, Option A doesn't scale.
Inter-AS Option B doesn't provide an end-to-end LSP; only Inter-AS Option C does.
The answer is B.
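For illustration, the key knob on an Inter-AS Option B ASBR is accepting VPNv4 routes for VRFs it does not hold locally. A minimal, hypothetical IOS-style sketch (AS numbers and addresses are assumptions):

```
! Hypothetical Inter-AS Option B ASBR sketch
router bgp 65001
 no bgp default route-target filter       ! keep VPNv4 routes with no local VRF
 neighbor 192.0.2.2 remote-as 65002       ! ASBR in the other AS
 address-family vpnv4
  neighbor 192.0.2.2 activate
  neighbor 192.0.2.2 send-community extended
```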
Protocols            S-PE-1   S-RR   H-S-ASBR   H-BR-1   H-P-1
VRF
MP-IBGP
MP-EBGP
Infrastructure IGP
Customer IGP
Send Label
MPLS
Answer 19:
The answer should be as below. Please note that there is no customer IGP on the PE devices, since the scenario states that, as company policy, they only want to offer BGP as the PE-CE protocol.
Also, there is no Send-Label in Inter-AS Option B, and Route Reflectors don't have to run MPLS when they are not used as inline RRs.
Protocols            S-PE-1   S-RR   H-S-ASBR   H-BR-1   H-P-1
VRF                  *
MP-IBGP              *  *  *  *
MP-EBGP              *  *  *
Infrastructure IGP   *  *  *  *  *
Customer IGP
Send Label
MPLS                 *  *  *  *
EMAIL 9
One of our lead architects came up with a new IP addressing scheme that the new network is going to migrate to within the next 6 months. Also, Hypercom's full-mesh IBGP is being migrated to a Route Reflector topology. The RRs will be placed in a centralized location; they will not be used as inline RRs.
This gives us the opportunity to use Inter-AS Option C.
Protocols            S-PE-1   H-RR   S-RR   S-ASBR   H-BR-1   H-P-1
VRF
MP-IBGP
MP-EBGP
Infrastructure IGP
Customer IGP
Send Label
MPLS
Answer 20:
Protocols            S-PE-1   H-RR   S-RR   S-ASBR   H-BR-1   H-P-1
VRF                  *
MP-IBGP              *  *  *  *  *
MP-EBGP              *  *  *  *  *
Infrastructure IGP   *  *  *  *  *  *
Customer IGP
Send Label
MPLS                 *  *  *  *
21. WHAT IS THE MAIN BENEFIT OF IMPLEMENTING INTER-AS OPTION-C
BETWEEN SPEEDNET AND HYPERCOM?
A. The only option with the support of 6vPE
B. Better scalability compared to Inter-AS Option B
C. The easiest Inter-AS Option to implement
D. More secure compared to Inter-AS Option B
Answer 21:
It is not the only option that can support 6VPE, and it is not the easiest way to deploy an Inter-AS MPLS service; Option A is the easiest, as explained before. It is also not the most secure Inter-AS MPLS VPN solution; Option A is the most secure one.
The correct answer is B.
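The two distinguishing pieces of Inter-AS Option C can be sketched as below in hypothetical IOS style; AS numbers and addresses are assumptions, not from the scenario:

```
! Hypothetical Inter-AS Option C sketch
! On the ASBR: exchange PE loopbacks with labels (RFC 3107)
router bgp 65001
 neighbor 192.0.2.2 remote-as 65002
 address-family ipv4
  neighbor 192.0.2.2 activate
  neighbor 192.0.2.2 send-label               ! IPv4 + label to the other AS
!
! On the RR: multihop eBGP VPNv4 session to the other AS's RR
router bgp 65001
 neighbor 198.51.100.2 remote-as 65002
 neighbor 198.51.100.2 ebgp-multihop 255
 address-family vpnv4
  neighbor 198.51.100.2 activate
  neighbor 198.51.100.2 next-hop-unchanged    ! preserve the end-to-end LSP
```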
EMAIL 10
Hi Mr. Designer,
As you know, we have MPLS Layer 3 VPN, Internet, point-to-point MPLS VPN and VPLS customers. Especially for the VPLS customers, when we want to add a new site to a customer's existing VPLS, it is operationally very hard for us to touch every PE of that customer.
We are afraid that this will become a bigger problem in the merged network, since we want to span VPLS and our other services throughout the merged network. But especially for the VPLS issue, we want an immediate solution.
Please note that we have an LDP-based VPLS in our network, and the Hypercom network currently doesn't have VPLS at all.
Can you help us fix our operational problem?
Answer 22:
As given in the email, SpeedNet wants to reduce the operational touch points for the existing services, especially VPLS.
In the CCDE Practical exam, the most important thing is to answer the question based on the given requirements. Requirements are given in the initial background documentation and in the emails.
The answer is Option C.
Answer 23:
Replacing VPLS with EVPN or PBB-EVPN is not an option, since they want an immediate solution and we don't know whether their devices support EVPN or PBB-EVPN.
BGP Auto-Discovery reduces their operational tasks by advertising the VPLS membership information, and we know that BGP is already used on their networks.
The answer is Option E.
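A hedged sketch of BGP Auto-Discovery for an LDP-signaled VPLS, in IOS-XE style, is shown below; the context name, VPN ID, AS number and neighbor address are assumptions:

```
! Hypothetical BGP Auto-Discovery sketch for LDP-signaled VPLS
l2vpn vfi context CUST-A
 vpn id 100
 autodiscovery bgp signaling ldp     ! members discovered via BGP, PWs via LDP
!
router bgp 65001
 address-family l2vpn vpls
  neighbor 10.0.0.2 activate         ! peer carrying VPLS membership info
```

With auto-discovery, adding a site no longer requires touching every existing PE of that customer.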
26. IS THERE ANY PROBLEM FOR LDP AND BGP BASED VPLS TO SUPPORT END-
TO-END VPLS?
A. Yes
B. No
Answer 26:
The correct answer is No.
By adding interconnect nodes between the LDP-VPLS and BGP-VPLS domains, an end-to-end VPLS service can be created.
EMAIL 11
One of our customers is asking whether we can provide IPv6 L3 VPN services for them. We had not been thinking about it, but according to our assessment, all of our network nodes support IPv6.
27. WHICH TECHNOLOGY WILL HELP SPEEDNET TO MEET THE REQUIREMENTS
ABOVE?
A. 6PE
B. DMVPN
C. 6vPE
D. NAT64
E. NAT46
Answer 27:
The correct answer is 6VPE, Option C. All the details of the IPv6 transition mechanisms are provided in the IPv6 chapter of the book. 6VPE is the best solution for VPNs over an MPLS backbone.
If global IPv6 Internet reachability over the MPLS backbone were asked instead, 6PE would be the solution.
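The essential 6VPE pieces can be sketched as below in hypothetical IOS-XE style; the VRF name, RD/RT values and addresses are assumptions:

```
! Hypothetical 6VPE sketch: IPv6 VPN over an IPv4 MPLS core
vrf definition CUST-A
 rd 65000:1
 address-family ipv6
  route-target both 65000:1
!
interface GigabitEthernet0/1
 vrf forwarding CUST-A
 ipv6 address 2001:db8:1::1/64       ! PE-CE link (assumed addressing)
!
router bgp 65000
 address-family vpnv6
  neighbor 10.0.0.2 activate         ! VPNv6 over the existing IPv4 session
  neighbor 10.0.0.2 send-community extended
```

No IPv6 is needed in the core; the PEs label-switch IPv6 VPN traffic over the existing IPv4 LSPs.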
Answer 29:
The answer is Option C. It is the fastest solution to extend their VPNs, although there might be many problems with GRE tunnels.
30. WHAT WOULD BE THE PROBLEM WITH THIS SHORT-TERM SOLUTION?
(CHOOSE THREE)
A. It is not reliable and there is no SLA guarantee
B. It is not secure
C. QoS is not under control of SpeedNet
D. Each customer requires a separate overlay
E. Multicast routing is not supported with it
F. All of the above
Answer 30:
It doesn't require separate overlay tunnels per customer, and multicast routing is supported over GRE tunnels.
But it is not secure, since it runs over the public Internet. IPsec could run on top of it, but that was not mentioned in the question.
It is not reliable and there is no SLA, since the Internet is best effort.
QoS is not under SpeedNet's control for the same reason: the Internet is best effort, and if there is any congestion along the path, there is no SLA for QoS.
The correct answers are Options A, B and C.
CHAPTER 13
CCDE PRACTICAL SCENARIO
MAG ENERGY
DOCUMENT 1
Company Profile:
MAG Energy (MAG-E) is an energy broker and middleman between
Energy Providers and their customers located in the United States. MAG-E
has been in business for just over 10 years. The company and its network
were built organically, only as the needs of the business increased.
Historically, the primary source of revenue has been deploying Site Devices
at customer locations. While this primary method has been effective over the
years, it has not been efficient from both a monetary and time to deployment
standpoint. For the short term, MAG-E has purchased a manufacturing plant
in Boise, Idaho to bring all Site Device manufacturing in-house to
significantly reduce the overall cost of each Site Device. As for the long
term, the Executive team is currently researching different SaaS solutions that
would replace the current Site Device model.
Power Usage / Reduction Event Process:
MAG-E is a middleman between Energy Providers and the Enterprise
Customers of the Energy Provider. For example, the energy provider would
first work with MAG-E to negotiate a contract for power reduction in the
energy provider’s area of responsibility. Once the contract is finalized,
MAG-E works with Enterprise customers of the energy provider to negotiate
a child contract for a reduction in power usage. Common Enterprise
customers are grocery stores, pharmacies, retail stores, farms and silos, and
factories.
An Event is when the energy provider has a high amount of power usage
that they cannot maintain. When this occurs, the energy provider will initiate
the Event by calling the Support Line at MAG-E, which starts the internal
process within MAG-E to engage all child contracts to comply with the
reduction in power. Traditionally, these Events happen more in the summer
seasons when the temperatures are very high which causes a high power
usage state with all of the Air Conditioners being turned on.
Some Event responses are automatic with the deployed Site Device
turning a system off and on as needed while other Event responses are
manual, requiring MAG-E to contact the child customer to manually lower
their power usage by shutting equipment down.
Site Profiles:
MAG-E currently has two Data Centers, one located in Boston, MA and
the other located in Dallas, TX. The primary DC in Boston has 2000 servers,
while the Dallas DC has 1400 servers. MAG-E is headquartered out of
Boston, MA. In Boston there is an Event Support Center staffed with 500
users that process all of the Events placed by the Energy Providers. In
addition to the Event Support Center Staff, the Boston location has 3000
more employees. In Boise, Idaho, there is the newly acquired manufacturing
plant that consists of 1000 employees and another separate legacy remote
office that consists of 50 employees. The rest of the US network consists of
57 office locations that range from five users to one hundred users.
Site Devices:
MAG-E’s innovative Site Devices have been the bread and butter for
their business since the beginning. Over the years, there have been 3
different Site Device model series: the S, X, and E. The S series was the
pioneer of Site Devices but was limited in functionality, as it could only
push data back to the servers in Boston and Dallas. The S series also lacked
common security best practices such as SSH and AES support. The
second generation of Site Devices was the X model series. With the X series,
some significant improvements were implemented. The majority of these
improvements were around the security of the Site Device and the Energy data,
integrating AES and SSH support. MAG-E also wanted to significantly improve
the data efficiency that the Site Device series lacked by implementing a
data pull operation, which led to the last and final series of Site Devices, the E
series. The E series has now become the spearhead of the business. The E
series was developed with both a push and pull data method that could
function independently and concurrently. MAG-E has a total of ~2,000 Site
Devices deployed today.
MAG-E’s Network:
MAG-E’s US WAN currently uses a single MPLS L3VPN provider
network from Level 3. The two data centers have two routers connecting to
the MPLS L3VPN network over 200mb/s Ethernet circuits. The headquarter
location also has two routers connecting to the MPLS L3VPN network but
over 50mb/s Ethernet circuits. All other office locations have a single router
with a single connection to the MPLS L3VPN network with bandwidth
ranging from 1.5mb/s to 50mb/s, depending on the needs of the office
locations. The manufacture plant in Boise, Idaho was brought into the MPLS
L3VPN network over a single 10mb/s Ethernet circuit. There are two Level 3
Gigabit Ethernet Circuits connecting the two data centers together and there
is a single 10GB dark fiber connection between the Boson data center and the
Boston headquarters.
Site Device connectivity is terminated in MAG-E’s production DMZ.
The most popular termination method currently implemented is via private
cellular networks. All private cellular networks being used have a dedicated
hub router in the production DMZ where all traffic for that cellular provider
is delivered. For this termination method, a dedicated router with a 3G/4G
card is deployed alongside the Site Device.
The next termination method is via site to site VPNs between a dedicated
firewall cluster in the production DMZ and customer firewalls. This
termination is primarily used for customers that have a significant number of
Site Devices.
The final termination method is via the Internet where a Site Device is
deployed and given a static public address from the customer’s network to
then connect back to the servers in MAG-E’s data centers.
A single EIGRP process is configured throughout MAG-E’s network to
handle all of the internal routing.
Applications:
MAG-E has a number of production and corporate applications:
PRODUCTION APPLICATIONS:
•MAG-E Energy Intel (MAG-E-EI) – MAG-E-EI is a Web based
dashboard to display all customer energy information, reports, usage,
billing, and savings data in real time. This application is the primary
resource for the Energy Providers and the Enterprise Customers of the
Energy Providers.
•MAG-E Ordering System (MAG-E-OS) – MAG-E-OS is an internal only
application for MAG-E’s sales departments to put in new orders and track
already placed orders. The front end of MAG-E-OS is a Web based portal
for easy access, while the backend is a SQL database.
•MAG-E Data Feeds (MAG-E-DF) – MAG-E-DF differ depending on the
Site Device data method (push or pull). Site Devices will either push
(UDP) data to the backend servers or the servers will periodically pull
(TCP) data from the site devices. The Site Devices do have a local buffer
that can store 12 hours of data in a failure situation. After 12 hours, the
Site Device starts to overwrite the oldest data in its buffer. With all Site
Device Series, data feed traffic is a custom XMPP packet over ports 5002
(push) and 5502 (pull).
•MAG-E Event Tracking Center (MAG-E-ETC) – MAG-E-ETC is the heart
and the brain behind the Event Support Center. This web based
application tracks all Events as they are happening. In addition to the live
tracking of Events, this application also sends instructions to Site Devices
during Events to automatically turn systems off and on. For the manual
Site Devices, this system will alert the operator to call the Enterprise
Customer as they are not setup for automatic Event instructions. The
protocol between the application and the site devices is also using XMPP
over TCP port 9002.
CORPORATE APPLICATIONS:
MAG-E currently runs VoIP, IM, Video, and Email internally. These
applications are used by all employees of MAG-E but VoIP is specifically
critical to the Event Support Center Staff as they cannot act on an Event if
they cannot call an Enterprise Customer.
DIAGRAM 1
MAG-E WAN Diagram
DIAGRAM 2
MAG-E Site Device Termination Internet Option
DIAGRAM 3
MAG-E Site Device Termination Site to Site IPSEC VPN Option
DIAGRAM 4
MAG-E Site Device Termination Private Cellular Network Option
DOCUMENT 2
From: bob_murphy@mag-e.com
To: Network_Designer
Subject: SAAS Acquisition & Immediate need!
Designer,
The Board will be finalizing the Acquisition of Canada Energy (CAN-
ENG) by the end of the week. I need you to clear your schedule ASAP as this
is going to be a huge project with which I am going to need some significant help.
From the little information I have been given today, CAN-ENG has 2,000
employees geographically dispersed across Canada in 37 office locations.
CAN-ENG has one data center located in Vancouver and one headquarters
located in Montreal. CAN-ENG's Energy Eye SaaS application lives in
Vancouver. For the short term, we will be setting up Site to Site VPNs
between MAG-E’s Boston HQ and CAN-ENG’s Montreal HQ, and between
MAG-E’s Dallas DC and CAN-ENG’s Vancouver DC. I’m looking to you to
design a long term solution.
The board wants CAN-ENG integrated ASAP so that all MAG-E and
CAN-ENG applications can be used from all locations.
In addition to the above, we have an immediate need to develop a new
Site Device termination solution. You've heard me complain about this
customer before, and this request is no different. To say it nicely,
this Enterprise Customer is a primadonna but we have to play nice because
this is a 50 Million dollar contract for us. This customer will not use
NAT/PAT or static IP Addresses. They will not change their subnets or
configure any VPNs on their hardware. We need you to design a solution
that meets these needs and also keeps the Site Devices secure. We need to
keep future scalability in mind. Cost shouldn’t be a concern but let’s not go
hog wild now.
Good luck Designer, I know you will do us proud!
Dr. Bob Murphy
VP of Network Infrastructure, MAG-E
Diagram 5
MAG-E and CAN-ENG Site to Site IPSEC VPN
Question 1)
What is the most important design issue with the short term integration
plan between MAG-E and CAN-ENG (Choose 1)?
A. There is no design issue and this design is a good long term solution
B. This design does not follow redundancy/resiliency best practices
C. There are a number of bandwidth saturation issues with the different circuits
D. There is no guarantee that all applications from both companies will properly function
E. This design does not meet the time requirement the customer is requiring
Question 2
Which of the following items will you need from MAG-E to create a
successful network design for the new Site Device termination solution
(Choose 3)?
A. Network Security Policy
B. IP Addressing Scheme
C. Expected Growth Increase
D. Network Utilization Reports
E. Memory/CPU Utilization Reports
Question 3
If you requested IP Addressing Scheme, which is the best reason to
request IP Addressing Scheme (choose 1)?
A. Route summarization
B. IP address scaling
C. Customer needing to change subnets
D. IP address overlap
E. I did not request IP Addressing Scheme
Question 4
What information is needed to properly design the CAN-ENG Energy
Eye integration with MAG-E (Choose 1)?
A. QoS values for application traffic
B. Encryption requirements
C. Application IP address
D. CAN-ENG’s Routing protocol
DOCUMENT 3
From: bob_murphy@mag-e.com
To: Network_Designer
Subject: New Network Security Policy – Encryption Requirements
Designer,
We at MAG-E have recently updated our Network Security policy per the
recent Government regulations placed on Energy Data. All data on the wire
must be encrypted no matter if it’s our own wire, leased wire, or over the
internet. We are highly out of compliance with this on our current MPLS
L3VPN Cloud and could use some assistance with migrating to a new design
that will comply with this new policy. In addition to that, CAN-ENG is also
not in compliance with this security policy.
Question 5
Which of the following proposed network solutions will meet MAG-E's new encryption requirements for the new Site Device Termination solution (Choose all that apply)?
A. DMVPN
B. GETVPN
C. Full Mesh of IPSEC VPNs
D. Hub and Spoke IPSEC VPNs
E. VPLS
Question 6
Which of the following proposed network solutions will meet all of MAG-E's requirements for the new Site Device Termination solution (Choose 1)?
A. DMVPN
B. GETVPN
C. Full Mesh IPSEC VPNs
D. Hub and Spoke IPSEC VPNs
E. VPLS
Question 7a
If you selected DMVPN, which option below is the best reason why
(Choose 1)?
A. Running EIGRP is needed on hub and spoke networks
B . A solution that supports encryption is needed per the new
security policy implemented.
C . A solution that is highly scalable is needed per the
requirements.
D. I did not select this option
Question 7b
If you selected GETVPN, which option below is the best reason why
(Choose 1)?
A. Running EIGRP is needed on hub and spoke networks
B . A solution that supports encryption is needed per the new
security policy implemented.
C . A solution that is highly scalable is needed per the
requirements.
D. I did not select this option
Question 7c
If you selected Full Mesh IPSEC VPNs, which option below is the best
reason why (Choose 1)?
A. Running EIGRP is needed on hub and spoke networks
B. A solution that supports encryption is needed per the new security policy implemented.
C. A solution that is highly scalable is needed per the requirements.
D. I did not select this option
Question 7d
If you selected Hub and Spoke IPSEC VPNs, which option below is the
best reason why (Choose 1)?
A. Running EIGRP is needed on hub and spoke networks
B . A solution that supports encryption is needed per the new
security policy implemented.
C . A solution that is highly scalable is needed per the
requirements.
D. I did not select this option
Question 7e
If you selected VPLS, which option below is the best reason why (Choose
1)?
A. Running EIGRP is needed on hub and spoke networks
B . A solution that supports encryption is needed per the new
security policy implemented.
C . A solution that is highly scalable is needed per the
requirements.
D. I did not select this option
DOCUMENT 4
From: bob_murphy@mag-e.com
To: Network_Designer
Subject: New Site Device Termination Solution
Designer,
As you have seen with our network in the past, we use RFC 1918
addressing. Our Boston data center uses 10.0.0.0/11, and our Dallas data
center uses 10.120.0.0/11. All of our remote office locations currently fit in
the 172.16.0.0/12 block in different /22 increments. The 192.168.50.0/24 and
192.168.51.0/24 are reserved networks for our Production DMZ and are
used for translating overlapping customer subnets with regard to deployed Site
Devices. If there isn’t a subnet overlap with a customer’s network, then we
just dynamically route for the customer’s network in our own network. As
you can imagine, this leads to a lot of random networks in our routing table
that are not our networks but we do need to access them to connect to the Site
Devices at the customer locations. Our applications use the following IP
addresses:
Question 8
Based on the new requirements which solution should MAG-E implement
for the New Site Device Termination Solution?
A. GETVPN
B. DMVPN
Question 9a
Why is GETVPN the best option?
A. It fulfills the encryption requirement
B. It fulfills the spoke to spoke traffic pattern requirement
Question 9b
Why is DMVPN the best option?
A. It fulfills the encryption requirement
B. It fulfills the spoke to spoke traffic pattern requirement
DOCUMENT 5
From: bob_murphy@mag-e.com
To: Network_Designer
Subject: New Site Device Termination Solution # 2
Designer,
Thank you for your help thus far. I know it’s been a rocky road and I can
definitely promise you it’s only going to get rockier. As for the New Site
Device Termination Solution that you have been working on in your sleep, we
are going to implement DMVPN but I still need your help selecting which
DMVPN design to implement.
Question 10
Which DMVPN phase and routing protocol combination can meet the
requirements (Check all that apply)?
EIGRP OSPF BGP RIP ISIS
DMVPN
Phase 1
DMVPN
Phase 2
DMVPN
Phase 3
Question 11
Which DMVPN implementation is the best design given the requirements
(Choose 1)?
A. DMVPN Phase 3 with EIGRP
B. DMVPN Phase 2 with OSPF
C. DMVPN Phase 1 with BGP
D. DMVPN Phase 1 with EIGRP
E. DMVPN Phase 3 with ISIS
F. DMVPN Phase 2 with RIP
Question 12
Please place the following implementation tasks regarding the new Site
Device Termination solution in the correct order.
A. Protect the mGRE tunnel with IPSEC
B. Configure DMVPN on the spoke routers
C. Configure EIGRP routing between DMVPN mGRE Tunnels
D. Deploy new spoke routers at Site Device Locations
E. Deploy new hub routers at Dallas and Boston DCs
F. Create FVRF on hub routers
G. Configure DMVPN on the hub routers
H. Create FVRF on spoke routers
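To illustrate how several of these tasks fit together on one device, below is a minimal, hypothetical Phase 3 hub sketch; the addresses, FVRF name and IPsec profile name are assumptions:

```
! Hypothetical DMVPN Phase 3 hub sketch
interface Tunnel0
 ip address 172.16.255.1 255.255.255.0
 ip nhrp network-id 1
 ip nhrp map multicast dynamic        ! let spokes register for routing updates
 ip nhrp redirect                     ! Phase 3: trigger spoke-to-spoke shortcuts
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel vrf INET-FVRF                 ! front-door VRF holding the underlay
 tunnel protection ipsec profile DMVPN-PROF
```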
Question 13
Which of the following information is needed to create a valid network
design for the merger between MAG-E and CAN-ENG (Choose 3)?
A. MAG-E QoS information
B. CAN-ENG QoS information
C. MAG-E Subnet information
D. CAN-ENG Subnet information
E. MAG-E WAN Network Diagram
F. CAN-ENG WAN Network Diagram
Diagram 6
CAN-ENG WAN Site to Site IPSEC VPN Diagram
Question 14
What would be your primary concern if CAN-ENG were to continue with a hub and spoke of
IPSEC Tunnels over the internet for its WAN connectivity (Choose 1)?
A. Over Subscription of Circuits
B. Performance of applications
C. Security of energy data
D. Control Plane instability
Question 15
If we were to replace the hub and spoke IPSEC Tunnels that CAN-ENG
is using with another technology which technologies below best meet the
requirements (Choose 2)?
A. Provision a second MPLS L3VPN network for all Canada locations and bridge both MPLS L3VPNs together at the DCs.
B. Implement VPLS to replace the current WAN
C. Deploy a hub and spoke network of L2TPv3 connections
D. Implement LISP to replace the current WAN
E. Add the CAN-ENG network into the current MPLS L3VPN network
DOCUMENT 6
From: bob_murphy@mag-e.com
To: Network_Designer
Subject: QoS Design
Designer,
Question 16
Before we go any further, I need help determining all of the QoS-related information for the following application QoS matrix. Please check each box that applies for each application; for the DSCP field, please place the DSCP value needed for that application in this design.
Question 17
Which of the following CAN-ENG MPLS L3VPN designs meets the
requirements (Choose 1)?
A. Diagram A
B. Diagram B
C. Diagram C
D. Diagram D
Question 18
Place the following tasks in the order that they should occur to properly
migrate the CAN-ENG WAN to MPLS L3VPN.
A. Configure PE-CE routing protocol at DC and HQ, and
redistribute
B. Deploy new WAN router at DC and HQ
C. Decommission all Site to Site VPNs
D. Provision and connect new MPLS L3VPN circuits at each
location
E. Configure PE-CE routing protocol at each remote location, and
redistribute
F. Deploy new WAN router at each remote location
G. Deploy QoS design on all WAN routers.
DOCUMENT 7
From: bob_murphy@mag-e.com
To: Network_Designer
Subject: WAN Migration Complete
Designer,
Question 20
What solution below would be the quickest way to resolve the issue with
the MAG-E-ETC application?
A. Advertise a host route for 10.2.0.100
B. Configure NAT for 10.2.0.0/16 to an unused /16 subnet
C. Configure NAT for 10.0.0.0/11 to an unused /11 subnet
D. Advertise a host route for 10.2.0.18
E. Change the IP address of the MAG-E-ETC to another IP in the
10.0.0.0/11 range.
Question 21
What solution below would be the most efficient way to resolve the issue
with the MAG-E-ETC application?
A. Advertise a host route for 10.2.0.100
B. Configure NAT for 10.2.0.0/16 to an unused /16 subnet
C. Configure NAT for 10.0.0.0/11 to an unused /11 subnet
D. Advertise a host route for 10.2.0.18
E. Change the IP address of the MAG-E-ETC to another IP in the
10.0.0.0/11 range.
Question 22
Which solutions below are capable of meeting the Data separation
requirements, assuming that each option below also includes VRF-Lite
(Choose all that apply)?
A. L2TPv3
B. VPLS
C. MPLSoDMVPN
D. GETVPN
E. VXLAN
Question 23
Which solution below meets all of the requirements, assuming that each
option below also includes VRF-Lite (Choose 1)?
A. L2TPv3
B. VPLS
C. MPLSoDMVPN
D. GETVPN
E. VXLAN
CCDE PRACTICAL SCENARIO
MAG ENERGY DETAILED ANSWERS
Question 1
What is the most important design issue with the short-term integration
plan between MAG-E and CAN-ENG (Choose 1)?
A. There is no design issue and this design is a good long-term
solution
B. This design does not follow redundancy/resiliency best
practices
C. There are a number of bandwidth saturation issues with the
different circuits
D. There is no guarantee that all applications from both companies
will function properly
Question 2
Which of the following items will you need from MAG-E to create a
successful network design for the new Site Device termination solution
(Choose 3)?
A. Network Security Policy
B. IP Addressing Scheme
C. Expected Growth Increase
D. Network Utilization Reports
E. Memory/CPU Utilization Reports
Question 3
If you requested the IP Addressing Scheme, which is the best reason to
have requested it (Choose 1)?
A. Route summarization
B. IP address scaling
C. Customer needing to change subnets
D. IP address overlap
E. I did not request IP Addressing Scheme
Question 4
What information is needed to properly design the CAN-ENG Energy
Eye integration with MAG-E (Choose 1)?
A. QoS values for application traffic
B. Encryption requirements
C. Application IP address
D. CAN-ENG’s Routing protocol
Question 5
Which of the following proposed network solutions will meet MAG-E's
new encryption requirements for the new Site Device Termination
solution (Choose all that apply)?
A. DMVPN
B. GETVPN
C. Full Mesh of IPSEC VPNs
D. Hub and Spoke IPSEC VPNs
E. VPLS
Question 6
Which of the following proposed network solutions will meet all of
MAG-E's current requirements for the new Site Device Termination
solution (Choose 1)?
A. DMVPN
B. GETVPN
C. Full Mesh IPSEC VPNs
D. Hub and Spoke IPSEC VPNs
E. VPLS
Question 7a
If you selected DMVPN, which option below is the best reason why
(Choose 1)?
A. Running EIGRP is needed on hub and spoke networks
B. A solution that supports encryption is needed per the new
security policy implemented.
C. A solution that is highly scalable is needed per the
requirements.
D. I did not select this option
Question 7b
If you selected GETVPN, which option below is the best reason why
(Choose 1)?
A. Running EIGRP is needed on hub and spoke networks
B. A solution that supports encryption is needed per the new
security policy implemented.
C. A solution that is highly scalable is needed per the
requirements.
D. I did not select this option
Question 7c
If you selected Full Mesh IPSEC VPNs, which option below is the best
reason why (Choose 1)?
A. Running EIGRP is needed on hub and spoke networks
B. A solution that supports encryption is needed per the new
security policy implemented.
C. A solution that is highly scalable is needed per the
requirements.
D. I did not select this option
DETAILED ANSWER BREAKDOWN:
A. This is an incorrect option.
B. This is an incorrect option.
C. This is an incorrect option.
D. This is an incorrect option.
Question 7d
If you selected Hub and Spoke IPSEC VPNs, which option below is the
best reason why (Choose 1)?
A. Running EIGRP is needed on hub and spoke networks
B. A solution that supports encryption is needed per the new
security policy implemented.
C. A solution that is highly scalable is needed per the
requirements.
D. I did not select this option
Question 7e
If you selected VPLS, which option below is the best reason why (Choose
1)?
A. Running EIGRP is needed on hub and spoke networks
B. A solution that supports encryption is needed per the new
security policy implemented.
C. A solution that is highly scalable is needed per the
requirements.
D. I did not select this option
DETAILED ANSWER BREAKDOWN:
Question 8
Based on the new requirements, which solution should MAG-E
implement for the New Site Device Termination Solution?
A. GETVPN
B. DMVPN
Question 9a
Why is GETVPN the best option?
A. It fulfills the encryption requirement
B. It fulfills the spoke to spoke traffic pattern requirement
Question 9b
Why is DMVPN the best option?
A. It fulfills the encryption requirement
B. It fulfills the spoke to spoke traffic pattern requirement
Question 10
Which DMVPN phase and routing protocol combination can meet the
requirements (Check all that apply)?
Question 11
Which DMVPN implementation is the best design given the requirements
(Choose 1)?
A. DMVPN Phase 3 with EIGRP
B. DMVPN Phase 2 with OSPF
C. DMVPN Phase 1 with BGP
D. DMVPN Phase 1 with EIGRP
E. DMVPN Phase 3 with ISIS
F. DMVPN Phase 2 with RIP
Question 13
Which of the following information is needed to create a valid network
design for the merger between MAG-E and CAN-ENG (Choose 3)?
A. MAG-E QoS information
B. CAN-ENG QoS information
C. MAG-E Subnet information
D. CAN-ENG Subnet information
E. MAG-E WAN Network Diagram
F. CAN-ENG WAN Network Diagram
Question 15
If we were to replace the hub and spoke IPSEC Tunnels that CAN-ENG
is using with another technology, which technologies below best meet the
requirements (Choose 2)?
A. Provision a second MPLS L3VPN network for all Canada
locations and bridge both MPLS L3VPNs together at the DCs.
B. Implement VPLS to replace the current WAN
C. Deploy a hub and spoke network of L2TPv3 connections
D. Implement LISP to replace the current WAN
E. Add the CAN-ENG network into the current MPLS L3VPN
network
Question 16
Before we go any further, I need help determining all of the QoS-related
information for the following application QoS matrix. Please check each
box that applies for each application; for the DSCP field, please enter
the DSCP value needed for that application in this design.
Detailed Answer Breakdown:
Question 17
Which of the following CAN-ENG MPLS L3VPN designs meets the
requirements (Choose 1)?
A. Diagram A
B. Diagram B
C. Diagram C
D. Diagram D
DETAILED ANSWER BREAKDOWN:
For this question you had to find the requirement in Document 6,
“During this migration we would also like to remove any single points of
failures in the DC and HQ locations for the CAN-ENG network”
If you happened to miss this requirement, it would have been very hard
to answer this question correctly.
Question 18
Place the following tasks in the order that they should occur to properly
migrate the CAN-ENG WAN to MPLS L3VPN.
A. Configure PE-CE routing protocol at DC and HQ, and
redistribute
B. Deploy new WAN router at DC and HQ
C. Decommission all Site to Site VPNs
D. Provision and connect new MPLS L3VPN circuits at each
location
E. Configure PE-CE routing protocol at each remote location, and
redistribute
F. Deploy new WAN router at each remote location
G. Deploy QoS design on all WAN routers.
Question 19
What is the best possible reason why the MAG-E-ETC application is no
longer accessible throughout the network?
A. Duplicate IP addresses with the Energy Eye Application in
CAN-ENG
B. Traffic is no longer allowed via an infrastructure ACL on the
core
C. Missing a dynamic route for the MAG-E-ETC subnet
D. Subnet overlap
E. There is a route redistribution issue between EIGRP in
MAG-E and OSPF in CAN-ENG.
Question 21
What solution below would be the most efficient way to resolve the issue
with the MAG-E-ETC application?
A. Advertise a host route for 10.2.0.100
B. Configure NAT for 10.2.0.0/16 to an unused /16 subnet
C. Configure NAT for 10.0.0.0/11 to an unused /11 subnet
D. Advertise a host route for 10.2.0.18
E. Change the IP address of the MAG-E-ETC to another IP in the
10.0.0.0/11 range.
Question 22
Which solutions below are capable of meeting the Data separation
requirements, assuming that each option below also includes VRF-Lite
(Choose all that apply)?
A. L2TPv3
B. VPLS
C. MPLSoDMVPN
D. GETVPN
E. VXLAN
Question 23
Which solution below meets all of the requirements, assuming that each
option below also includes VRF-Lite (Choose 1)?
A. L2TPv3
B. VPLS
C. MPLSoDMVPN
D. GETVPN
E. VXLAN
CONCLUSIONS:
SEGMENT ROUTING
Segment Routing refers to a source routing mechanism that provides
Traffic Engineering, Fast Reroute, and MPLS VPNs without LDP or RSVP-
TE.
As you read this section, you will learn everything about Segment
Routing. With some extensions to the existing protocols, this source
routing mechanism will help you solve the complex problems related to
Traffic Engineering, Fast Reroute, and MPLS VPNs.
With MPLS, you can create a BGP-free core and VPN services (Layer 2
and Layer 3), and with RSVP-TE you gain traffic engineering capability.
What is Segment Routing?
Segment Routing is one way of implementing a source routing
mechanism.
I implore you not to confuse source routing with policy-based routing
(PBR); they are totally different.
With Segment Routing, the end-to-end path is pushed onto the packet at
the ingress node, and the subsequent nodes just apply the instructions.
With PBR, if the path is to differ from the routing table, each and
every node along the path must be configured in a hop-by-hop fashion.
Segment Routing can be compared with MPLS Traffic Engineering, since
both can route traffic explicitly.
The source is an edge node: it can be a server, a top-of-rack switch,
a virtual switch, or an edge router. Because the source specifies the
path, service chaining is possible, and the entire path can be exposed
to the ingress/head-end router.
What does segment mean?
A segment is an instruction that directs packets along a portion of
the path, as specified by the user.
For instance, you could direct traffic leaving firewall X to go to
router A, and then to router B. Yes, you can do that.
In fact, service chaining can be achieved with Segment Routing.
Even though Segment Routing uses an IP control plane, it employs an
MPLS data plane in its operation. A segment ID is equivalent to an MPLS
label, and a segment list is equivalent to a label stack.
Some extensions to OSPF and IS-IS are necessary for Segment Routing,
because the segment/label is carried within the link-state IGP protocol
messages.
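As an illustration of these IGP extensions, the minimal sketch below
shows how Segment Routing might be enabled under IS-IS in an IOS XR
style configuration; the process name CORE, the Loopback0 interface,
and SID index 100 are illustrative assumptions, and the exact syntax
varies by platform and software release.

```
router isis CORE
 address-family ipv4 unicast
  ! Wide metrics are required to carry the SR sub-TLVs
  metric-style wide
  ! Advertise SR capability and labels in IS-IS; no LDP is needed
  segment-routing mpls
 !
 interface Loopback0
  address-family ipv4 unicast
   ! Node/Prefix SID assigned to this router's loopback
   prefix-sid index 100
```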
To understand how Segment Routing functions, you need to understand
MPLS VPN operation.
MPLS VPN Operation
If you know everything about MPLS VPN operation already, you can
skip this section.
The below diagram depicts the MPLS VPN operation.
MPLS VPN Label Operation (Control and Dataplane)
The diagram above shows two labels. The first is the core label, also
known as the transport, tunnel, or topmost label. In MPLS Layer 2 or
Layer 3 VPN operation, the topmost label carries traffic from the PE1
loopback to the PE2 loopback. While the topmost label provides
edge-to-edge reachability, the core/transport label is distributed by
LDP, RSVP, or BGP.
In the context of MPLS VPN, LDP is the most commonly used label
distribution protocol.
If you want to use the MPLS Traffic Engineering architecture, then you
need to enable RSVP-TE for label distribution. And of course, LDP and
RSVP can coexist in the network.
The VPN label is provided by BGP, specifically Multiprotocol BGP.
PE routers set the BGP next hop to their own loopback addresses for
the VPN prefixes, and the core/transport label is used to reach that
BGP next hop.
PE1 pushes two labels: the red label and the blue label. The red
label, which is the core/transport label, is advertised by P1 to PE1
via LDP and is swapped at every hop.
The red label is removed at P2 if PE2 advertises an implicit null
label, a process known as PHP (Penultimate Hop Popping).
The blue label is the VPN label sent by PE2 to PE1 through the MP-BGP
session.
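The two-label data-plane operation described above can be sketched as
follows; the numeric label values are illustrative examples only:

```
CE1 -> PE1:  [ IP ]                       plain IP packet enters the VRF
PE1 -> P1:   [ Red 20 | Blue 2000 | IP ]  PE1 pushes transport + VPN labels
P1  -> P2:   [ Red 21 | Blue 2000 | IP ]  P1 swaps the LDP transport label
P2  -> PE2:  [ Blue 2000 | IP ]           P2 pops the transport label (PHP)
PE2 -> CE2:  [ IP ]                       PE2 pops the VPN label and forwards
```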
Next, I will explain MPLS VPN operations with Segment Routing.
MPLS VPN with Segment Routing
If a similar operation is done with Segment Routing, the red label is
sent from PE2 to all the routers within the IGP domain via the
link-state protocol (OSPF or IS-IS), not within LDP label messages
(see picture below).
The node segment ID, also known as the prefix segment ID, is used to
identify the loopback interface of a Segment Routing enabled device.
Segment Routing is enabled on the loopback interface, and for that
reason the Node/Prefix Segment Identifier is assigned to that loopback
interface.
Throughout this section, I will use the abbreviation SID for Segment
ID.
The Node/Prefix SID is carried in either IS-IS LSPs or OSPF LSAs.
All the Segment Routing enabled routers receive and learn the
Node/Prefix SIDs from one another.
To help you understand this topic, I will explain MPLS Layer 3 VPN
operation together with Segment Routing.
Segment Routing Label Operation (Control and Dataplane)
As you can see, there is no LDP in the above diagram. Label 100 is
advertised with the IGP protocol (not via LDP or RSVP), and all the
routers use the identical label.
Unlike an LDP-distributed label, label 100 does not change hop by hop.
Through MP-BGP, PE1 still receives a VPN label for the CE2 prefixes.
The BGP next hop is the PE2 loopback, and label 100 is advertised for
the PE2 loopback in an IS-IS sub-TLV or an OSPF Opaque LSA.
PE1 uses label 100 as the core/transport (outer) label and label 2000
as the inner VPN label.
P1 does not change the core/transport label; rather, it sends the
packet to P2.
If P2 receives an implicit null label from PE2, P2 performs PHP
(Penultimate Hop Popping). In sum, only the VPN label is sent to PE2.
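The same forwarding path with Segment Routing can be sketched as
follows; note that, unlike the LDP case, the transport label 100 stays
unchanged end to end (label values are illustrative):

```
CE1 -> PE1:  [ IP ]               plain IP packet enters the VRF
PE1 -> P1:   [ 100 | 2000 | IP ]  PE1 pushes Prefix SID 100 + VPN label
P1  -> P2:   [ 100 | 2000 | IP ]  transport label is not swapped per hop
P2  -> PE2:  [ 2000 | IP ]        P2 pops the transport label (PHP)
PE2 -> CE2:  [ IP ]               PE2 pops the VPN label and forwards
```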
The MPLS VPN service is thus provided by the IGP, without using LDP.
Segment Routing does not require LDP for the transport tunnel because
it uses the IGP for label advertisement.
Please note that Segment Routing eliminates LDP only for the transport
label operation.
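For completeness, a hedged OSPF-based sketch in IOS XE style is shown
below; the prefix 10.0.0.2/32, index 100, and OSPF process number are
illustrative assumptions, and syntax varies by release. No LDP
configuration is present, since the IGP distributes the transport
label.

```
segment-routing mpls
 connected-prefix-sid-map
  address-family ipv4
   ! Node/Prefix SID for the local loopback (example address)
   10.0.0.2/32 index 100
  exit-address-family
!
router ospf 1
 ! Advertise SR labels in OSPF Opaque LSAs; no LDP is needed
 segment-routing mpls
```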
If you set up an MPLS Layer 2 VPN, you will use either LDP or BGP for
the PW label, because Segment Routing does not provide that capability.
A PW (pseudowire) can be signaled via LDP or BGP. An LDP-signaled
pseudowire is also known as a Martini pseudowire, while a BGP-signaled
pseudowire is also known as a Kompella pseudowire.
So, if you provide a Layer 2 VPN service with Segment Routing, you
will see two labels: the transport label, provided by the IGP to reach
the correct PE; and the LDP- or BGP-assigned label that identifies the
end customer's AC (attachment circuit) on the remote PE.