IPexpert IPv4/6 Multicast Operations and Troubleshooting E1 V1.2
Before We Begin

This product is part of the IPexpert suite of materials that provide CCIE candidates and network engineers with a comprehensive training program. For information about the full solution, contact an IPexpert Training Advisor today.

Telephone: +1.810.326.1444
Email: sales@ipexpert.com
Congratulations! You now possess one of the ULTIMATE CCIE™ Lab preparation and network operation resources available today! Senior engineers, technical instructors, and authors boasting decades of internetworking experience produced this resource.

In order to enjoy technical support from IPexpert and your CCIE community, be sure to visit the following Internet resources:

http://blog.ipexpert.com
http://onlinestudylist.com

IPexpert is proud to lead the industry with multiple support options at your disposal free of charge. Our online communities have attracted a membership of over 20,000 of your peers from around the world! At blog.ipexpert.com, you can keep up to date with everything IPexpert does and read the latest in technical articles from world-renowned IPexpert instructors. At OnlineStudyList.com, you may subscribe to multiple SPAM-free, moderated, CCIE-focused email lists.
Feedback

Do you have a suggestion or other feedback regarding this book or other IPexpert products? At IPexpert, we look to you, our valued clients, for the real-world, frontline evaluation that we believe is necessary so that we may always improve. Please send an email with your thoughts to feedback@ipexpert.com or call 1.866.225.8064 (international callers dial +1.810.326.1444).

In addition, for those using this book as CCIE™ preparation, when you pass the CCIE™ Lab exam, we want to hear about it! Email your CCIE™ number to success@ipexpert.com and let us know how IPexpert helped you succeed. We would like to send you a gift of thanks and congratulations.
IPEXPERT END-USER LICENSE AGREEMENT

This is a legally binding agreement between you and IPEXPERT, the Licensor, from whom you have licensed the IPEXPERT training materials (the Training Materials). By using the Training Materials, you agree to be bound by the terms of this License, except to the extent these terms have been modified by a written agreement (the Governing Agreement) signed by you (or the party that has licensed the Training Materials for your use) and an executive officer of Licensor. If you do not agree to the License terms, the Licensor is unwilling to license the Training Materials to you. In this event, you may not use the Training Materials, and you should promptly contact the Licensor for return instructions.

The Training Materials shall be used by only ONE (1) INDIVIDUAL who shall be the sole individual authorized to use the Training Materials throughout the term of this License.

Exclusions of Warranties

THE TRAINING MATERIALS AND DOCUMENTATION ARE PROVIDED AS IS. LICENSOR HEREBY DISCLAIMS ALL OTHER WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. SOME STATES DO NOT ALLOW THE LIMITATION OF INCIDENTAL DAMAGES OR LIMITATIONS ON HOW LONG AN IMPLIED WARRANTY LASTS, SO THE ABOVE LIMITATIONS OR EXCLUSIONS MAY NOT APPLY TO YOU. This agreement gives you specific legal rights, and you may have other rights that vary from state to state.

Choice of Law and Jurisdiction

This Agreement shall be governed by and construed in accordance with the laws of the State of Michigan, without reference to any conflict of law principles. You agree that any litigation or other proceeding between you and Licensor in connection with the Training Materials shall be brought in the Michigan state or federal courts located in Port Huron, Michigan, and you consent to the jurisdiction of such courts to decide the matter. The parties agree that the United Nations Convention on Contracts for the International Sale of Goods shall not apply to this License. If any provision of this Agreement is held invalid, the remainder of this License shall continue in full force and effect.

Entire Agreement

This is the entire agreement between the parties and may not be modified except in writing signed by both parties.
Table of Contents

Before We Begin ..... 1
Feedback ..... 1
Additional CCIE™ Preparation Material ..... 2
IPEXPERT END-USER LICENSE AGREEMENT ..... 3
Copyright and Proprietary Rights ..... 3
Exclusions of Warranties ..... 4
Choice of Law and Jurisdiction ..... 4
Limitation of Claims and Liability ..... 4
Entire Agreement ..... 4
U.S. Government - Restricted Rights ..... 5
Chapter 1: Introduction to IPv4/6 Multicast Operation and Troubleshooting ..... 1-1
About the Authors ..... 1-2
About the Technical Editors ..... 1-2
About the Technical Consultant ..... 1-2
About the Editor ..... 1-3
Who Should Read this Book? ..... 1-3
How to Use this Book ..... 1-3
An Introduction to IPv4/6 Multicast ..... 1-4
An Introduction to IPv4/6 Multicast Troubleshooting ..... 1-5
Chapter 2: Internet Group Management Protocol (IGMP) ..... 2-1
IGMP Technology Review ..... 2-2
IGMP Version 1 ..... 2-3
IGMP Version 2 ..... 2-4
IGMP Version 3 ..... 2-5
IGMP Leave Process ..... 2-6
The Operation and Troubleshooting of IGMP ..... 2-7
IGMP Version 1 ..... 2-8
Chapter 1: Introduction to IPv4/6 Multicast Operation and Troubleshooting

Chapter 1: Introduction to IPv4/6 Multicast Operation and Troubleshooting introduces the team of authors, consultants, and editors that completed this book and describes the book's purpose. This chapter also provides suggestions for the usage of this written work. This introductory chapter also covers a basic overview of multicast operations and troubleshooting concerns. Readers who are very familiar with basic multicast principles may safely skip this section.
About the Authors

Terry Vinson, CCNP

Terry Vinson is a highly experienced training consultant, specializing in documentation development, validation, verification, and communications. For the last 10 years, Terry has worked in the private sector as a Senior Technology Consultant and Trainer for several consulting firms in Washington DC and Northern and Central Virginia. In this capacity, he has provided services to Major Metropolitan Health Systems, the Mexican Embassy, and the Executive Office of the President of the United States of America (EOP).

Jason Gooley, CCNP

Jason is a highly motivated network engineer with over 17 years of experience in the communications industry. Based in Chicago, Jason currently manages the network for the nation's most famous next-day carpet company. Jason is currently in the process of pursuing his CCIE certification for Routing and Switching while also expanding his knowledge in Unified Communications and Security.
The book's second audience is those readers that must support multicast technologies in their actual network environments. This book serves as an amazing guide and reference for real-world problem solving within production networks that deploy these specific technologies. In fact, while many courses and texts purport to have certification success as a by-product of a thorough investigation of all protocols, this book actually succeeds in this approach.
Each chapter concludes with sample troubleshooting scenarios that provide a full walkthrough of a well-designed approach for troubleshooting each major issue. The text provides reference guides for the most popular and powerful show and debug commands for a specific technology. Each chapter concludes with sample Trouble Tickets on the specific technology. Readers may download initial configurations, or install them in a simple Graphical User Interface (GUI), on www.proctorlabs.com. These sample Trouble Tickets allow students to build confidence and expertise by actually troubleshooting issues in the multicast domain presented in the chapter.

Students are encouraged to follow along on a rack of equipment for every section of every chapter. This really enhances and strengthens the learning process.
An Introduction to IPv4/6 Multicast

IP multicast routing enables a source host to send packets to a group of receivers anywhere within the IP network by using a special form of IP address called the IP multicast group address. This special multicast group address forms the destination IP address of the packet. Multicast-enabled routers and switches forward these incoming multicast packets out all interfaces that lead to members of the multicast group. Any host, regardless of whether it is a member of a group, can send to a group. However, only the members of a group receive the message.

A quick question that arises when examining this multicast approach is: how can hosts that are interested in receiving the multicast information actually join this multicast group? Internet Group Management Protocol (IGMP) makes this possible in IPv4, while the Multicast Listener Discovery (MLD) protocol makes this possible in IPv6.
Network administrators who assign multicast group addresses must make sure the addresses conform to the multicast address range assignments reserved by the Internet Assigned Numbers Authority (IANA). The IANA assigns the IPv4 Class D address space for multicast. The first four high-order bits of a Class D address are 1110. This causes the multicast group addresses to fall in the range 224.0.0.0 to 239.255.255.255. This book fully describes multicast addressing for IPv6 networks in Chapter 14: IPv6 Multicast.
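As a quick worked example of why the fixed 1110 prefix produces exactly this range, write the lowest and highest possible Class D addresses out in binary; only the first four bits are fixed, and the remaining 28 bits are free to vary:

11100000.00000000.00000000.00000000 = 224.0.0.0
11101111.11111111.11111111.11111111 = 239.255.255.255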
To provide predictable behavior for various address ranges and for address reuse within smaller domains, the overall multicast address range shown above is subdivided.
The Protocol Independent Multicast (PIM) protocol is responsible for routing multicast traffic through the network infrastructure. As the name of this protocol conveys, it is not dependent on a specific unicast routing protocol; it is IP routing protocol independent and can leverage whichever unicast routing protocol is used to populate the unicast routing table, including simple static routes. PIM relies upon this unicast routing information to perform the multicast forwarding function. PIM uses the unicast routing table to perform the reverse path forwarding (RPF) check function instead of building up a completely independent multicast routing table. The RPF process is a key element in the operation and subsequent troubleshooting of multicast. As such, this book covers the RPF process in depth throughout its chapters.

Unlike other routing protocols, PIM does not send and receive routing updates between routers. PIM can operate in dense mode or sparse mode. The mode determines how the router populates its multicast routing table and how the router forwards multicast packets it receives from its directly connected LANs.
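For orientation, the sketch below shows the minimal IOS configuration involved in that choice. The interface name is only an example, and whether dense mode or sparse mode is selected depends entirely on the design being deployed:

ip multicast-routing
!
interface FastEthernet0/0
 ip pim dense-mode
!
! or, for sparse mode operation on the interface instead:
! ip pim sparse-mode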
Chapter 2: Internet Group Management Protocol (IGMP)

In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and functionality of the Internet Group Management Protocol (IGMP) are examined in great depth. Once the operational characteristics of this important protocol are detailed completely, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for the Internet Group Management Protocol (IGMP). The chapter begins with a thorough review of IGMP, and then quickly launches into an exhaustive analysis of the art of troubleshooting this multicast support protocol. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
IGMP Technology Review

Hosts identify group memberships by sending IGMP messages to their local multicast router. These multicast routers listen to IGMP messages and periodically send out queries to discover which groups are active or inactive on a particular subnet. Figure 2-1 illustrates a sample IGMP topology.

Figure 2-1: A Sample IGMP Topology
IGMP version 1 - provides a basic query-response mechanism that allows the multicast router to determine which multicast groups are active; also enables hosts to join and leave a multicast group

IGMP version 2 - introduces the IGMP leave process, group-specific queries, and an explicit maximum response time field; also adds the capability for routers to elect the IGMP querier without dependence on the multicast protocol to perform this task

IGMP version 3 - provides source filtering; supports the link-local address 224.0.0.22, which is the destination IP address for IGMP version 3 membership reports
Note: By default, enabling PIM on an interface enables IGMP version 2 on that interface. IGMP version 2 was designed to be as backward compatible with IGMP version 1 as possible.
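A minimal sketch of confirming or overriding that default follows; the interface name is illustrative only, and the IGMP version lines it refers to appear in the show ip igmp interface output used later in this chapter:

interface FastEthernet0/0
 ip pim sparse-mode
 ! IGMP version 2 is now active by default; change it only if required:
 ! ip igmp version 1
end
show ip igmp interface FastEthernet0/0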
IGMP Version 1

IGMP version 1 routers send IGMP queries to the "all-hosts" multicast address of 224.0.0.1 to solicit multicast groups with active multicast receivers. The multicast receivers also send IGMP reports to the router to notify it that they are interested in receiving a particular multicast stream. Hosts can send the report independently or in response to the IGMP queries sent by the router. If more than one multicast receiver exists for the same multicast group, only one of these hosts sends an IGMP report message; the other hosts suppress their report messages.

In IGMP version 1, there is no election of an IGMP querier. If more than one router on the segment exists, all the routers send periodic IGMP queries. IGMP version 1 has no special mechanism by which the hosts can leave the group. If the hosts are no longer interested in receiving multicast packets for a particular group, they simply do not reply to the IGMP query packets sent from the router. The router continues sending query packets. If the router does not hear a response in three IGMP queries, the group times out and the router stops sending multicast packets on the segment for the group.
If there are multiple routers on a LAN, a designated router (DR) must be elected to avoid duplicating multicast traffic for connected hosts. PIM routers follow an election process to select a DR. The PIM router with the highest IP address becomes the DR. The DR is responsible for the following tasks (a verification sketch follows this list):

Sending PIM register, PIM Join, and PIM Prune messages toward the rendezvous point (RP) to inform it about host group memberships

Sending IGMP host-query messages

Sending host-query messages in order to keep the IGMP overhead on hosts and networks very low
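To see which neighbor has actually won the DR role on a segment, one option is the show ip pim interface command; the annotation below describes its output in general terms rather than output captured from this topology:

show ip pim interface
! the rightmost column of this output lists the DR elected on each
! PIM-enabled interface; with default DR priorities, the neighbor with
! the highest IP address should appear there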
IGMP Version 2

IGMP version 2 improves the query messaging capabilities of IGMP version 1. The query and membership report messages in IGMP version 2 are identical to the IGMP version 1 messages, with two exceptions:

IGMP version 2 query messages are broken into two categories: general queries (identical to IGMP version 1 queries) and group-specific queries

IGMP version 1 membership reports and IGMP version 2 membership reports have different IGMP type codes
IGMP version 2 also enhances IGMP by providing support for the following capabilities:
Querier election process - IGMP version 2 routers can elect the IGMP querier without having to rely on the multicast routing protocol to perform the process

Maximum Response Time field - a new field in query messages permits the IGMP querier to specify the maximum query-response time; this feature permits the tuning of the query-response process to control response burstiness and to fine-tune leave latencies

Group-Specific Query messages - permits the IGMP querier to perform the query operation on a specific group instead of all groups

Leave-Group messages - provides hosts with a method of notifying routers on the network that they wish to leave the group
Unlike IGMP version 1, in which the DR and the IGMP querier are typically the same router, in IGMP version 2 the two functions are decoupled. The DR and the IGMP querier are selected based on different criteria and may be different routers on the same subnet. The DR is the router with the highest IP address on the subnet, whereas the IGMP querier is the router with the lowest IP address.
Step 1 - When IGMP version 2 routers start, they each multicast a general query message to the all-systems group address of 224.0.0.1 with their interface address in the source IP address field of the message.

Step 2 - When an IGMP version 2 router receives a general query message, the router compares the source IP address in the message with its own interface address. The router with the lowest IP address on the subnet is elected the IGMP querier.

Step 3 - All routers (excluding the querier) start the query timer, which is reset whenever a general query message is received from the IGMP querier. If the query timer expires, it is assumed that the IGMP querier has gone down, and the election process is performed again to elect a new IGMP querier.
IGMP Version 3

IGMP version 3 adds support in the IOS for source filtering, which enables a multicast receiver host to signal to a router which groups it wants to receive multicast traffic from, and from which sources this traffic is expected. This membership information enables Cisco IOS software to forward traffic only from those sources from which receivers requested the traffic. IGMP version 3 supports applications that explicitly signal sources from which they want to receive traffic.
With IGMP version 3, receivers signal membership to a multicast group in the following two modes:

INCLUDE mode - the receiver announces membership to a group and provides a list of IP addresses (the INCLUDE list) from which it wants to receive traffic

EXCLUDE mode - the receiver announces membership to a group and provides a list of IP addresses (the EXCLUDE list) from which it does not want to receive traffic; to receive traffic from all sources, as in the case of the Internet Standard Multicast (ISM) service model, a host expresses EXCLUDE mode membership with an empty EXCLUDE list
IGMP version 3 is the industry-designated standard protocol for hosts to signal channel subscriptions in a Source Specific Multicast (SSM) network. For SSM to rely on IGMP version 3, IGMP version 3 must be available in the network stack portion of the operating systems running on the last-hop routers and hosts, and it must be used by the applications running on those hosts.
In IGMP version 3, hosts send their membership reports to 224.0.0.22; all IGMP version 3 routers, therefore, must listen to this address. Hosts, however, do not listen or respond to 224.0.0.22; they only send their reports to that address. In addition, in IGMP version 3, there is no membership report suppression, because IGMP version 3 hosts do not listen to the reports sent by other hosts. Therefore, when a general query is sent out, all hosts on the wire respond.

When a host wants to join a multicast group, the host sends one or more unsolicited membership reports for the multicast group it wants to join. The IGMP join process is the same for IGMP version 1 and IGMP version 2 hosts.
When a host wants to join a group, it sends an IGMP version 3 membership report to 224.0.0.22 with an empty EXCLUDE list.

When a host wants to join a specific channel, it sends an IGMP version 3 membership report to 224.0.0.22 with the address of the specific source included in the INCLUDE list.

When a host wants to join a group excluding particular sources, it sends an IGMP version 3 membership report to 224.0.0.22 excluding those sources in the EXCLUDE list (see the configuration sketch after this list).
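As an illustration of how a Cisco router can emulate these host behaviors in a lab, a sketch follows; the interface, group, and source addresses are purely examples, and the source keyword on ip igmp join-group assumes an IOS release with IGMPv3/SSM support:

interface FastEthernet0/0
 ip igmp version 3
 ! (a) join the group from any source - an empty EXCLUDE list report:
 ip igmp join-group 232.7.7.7
 ! (b) or join only the channel (S,G) = (10.1.1.1, 232.7.7.7) - a one-entry INCLUDE list report:
 ! ip igmp join-group 232.7.7.7 source 10.1.1.1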
IGMP version 3 enhances the leave process by introducing the capability for a host to stop receiving traffic from a particular group, source, or channel in IGMP by including or excluding sources, groups, or channels in IGMP version 3 membership reports.
The Operation and Troubleshooting of IGMP

The concept of IP multicast was the perfect solution to this one-to-many model of data distribution. However, IP multicast brought with it its own issues. This section focuses on the first of these issues: IP multicast environments propagate membership information via the Internet Group Management Protocol (IGMP).

IGMP propagates membership information from the host toward the routers attached to discrete sources. This is odd behavior when compared to the typical unicast routing model. So odd, in fact, that many phrases exist to describe this process, including "upside-down routing" or "bottoms-up routing." It is important to understand this concept early on. Multicasting routes packets away from the source (toward the receivers), not toward a given destination. Understanding this one concept will make deploying and troubleshooting IP multicast much simpler. In summary, routers attached to multicast sources learn about group members via IGMP.

This section will take an exhaustive look at the operation and troubleshooting of the three versions of the IGMP protocol by using the topology in Figure 2-2.

Figure 2-2: IGMP Lab Topology
IGMP Version 1

A critical analysis of the inner workings of IGMP version 1 must begin with the protocol message types it supports:

IGMPv1 Membership Query Messages - Generated by the IGMP querier - One multicast router per LAN must periodically transmit host "membership query messages". These messages identify which groups have members on a directly connected network. Query messages use an address of 224.0.0.1. An adjacent router does not forward query messages to any other multicast-enabled router.

IGMPv1 Membership Report Messages - Generated by the hosts - When a host receives an IGMP query message, it responds with a membership report. The membership report identifies the groups that a host has joined.

These two message types are part of a two-phase mechanism where an IGMP version 1 host sends a report when it joins a multicast group.
In order to configure a router to utilize the IGMP version 1 protocol, it is necessary to apply the ip igmp version 1 command at the interface level:
interface FastEthernet0/0
ip address 172.16.100.2 255.255.255.0
ip igmp version 1
An IGMP version 1 router (known as a querier) queries periodically using query messages to dynamically identify active members of groups. Whenever a host receives a query message, it responds with membership report messages for all its associated multicast groups. The host sends an individual membership report for each multicast group it has joined to the querier.

A host will wait a random period between responses to queries (no more than 10 seconds) for each group it has joined. This delay affords the host time to receive a valid report sent by another device on the segment. If a host does not receive a report from another host on the same segment during the delay period, it will generate a membership report itself. If a host does receive a membership report from another host for one of its multicast group associations, it will suppress its own membership report for that group. This process prevents a "storm" of membership report messages. Observe this behavior in the output provided by the debug ip igmp command on R2:
R2#
IGMP(0): Received v1 Query on GigabitEthernet0/0 from 172.16.100.1
IGMP(0): Set report delay time to 0.2 seconds for 224.7.7.7 on GigabitEthernet0/0
IGMP(0): Send v1 Report for 224.7.7.7 on GigabitEthernet0/0
On R1 we can see the report for the group address 224.0.1.40 being canceled.
R1#
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.5 for 224.0.1.40
IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.100.5 for 0
sources
IGMP(0): Cancel report for 224.0.1.40 on FastEthernet0/0
When the querier receives a membership report for a given group, it will start an EXCLUDE group timer. The querier will remove group membership information not refreshed by a subsequent membership report, sent in response to periodic general queries, within the configured EXCLUDE group timer. First, we see the EXCLUDE timer update in the output of the debug ip igmp command on R4 when the membership report arrives:
R4#
IGMP(0): Received v1 Report on FastEthernet0/0 from 172.16.100.2 for 224.7.7.7
IGMP(0): Received Group record for group 224.7.7.7, mode 2 from 172.16.100.2 for 0
sources
IGMP(0): Updating EXCLUDE group timer for 224.7.7.7
This output clearly illustrates the process we have just described. The querier received the IGMP version 1 membership report from 172.16.100.2 for the group 224.7.7.7. The router in question has no knowledge of an active source, but it still resets the EXCLUDE group timer.

At first blush, the IGMP version 1 protocol seems to accomplish all the goals we have discussed to date. However, this version of IGMP does have one huge Achilles heel. Hosts have no special mechanism to allow them to leave a group. As stated earlier in the IGMP Technology Review, in IGMP version 1 a host that no longer needs to receive multicast packets for a particular group simply stops replying to the IGMP query packets sent from the router. The router will continue sending queries and only stops sending queries on the network for the group after three consecutive query messages go unanswered. If a host on the segment wants to receive multicast packets after this timeout period, it simply sends a new IGMP join to the router, and the router will begin forwarding the packets once more. Some call this long period needed to identify the loss of an active multicast member "leave latency".
IGMP Version 2

IGMP version 2 introduced a number of critical changes to IGMP. These modifications made IGMP more efficient by adding a new operational mechanism and two additional message types. The new operational mechanism involves the election of the IGMP querier. In IGMP version 1, more than one device on a segment could, and often does, send IGMP version 1 queries, as illustrated by the following debug ip igmp output on R2:
This output demonstrates that R2 has received IGMP version 1 Query messages on GigabitEthernet0/0 from each of its neighbors on the VLAN 1245 segment. Recognizing that having more than one device on a segment constantly sending query messages was less than ideal, this behavior was changed in IGMP version 2. Once we execute the ip igmp version 2 command at the interface level on R1, R2, R4 and R5, the devices will elect a single querier for the VLAN 1245 segment. The router with the lowest IP address on the segment becomes the querier. In the case of this topology, the process chooses R1. Verify this with show ip igmp interface on any device connected to VLAN 1245:
224.0.1.40(1) 224.7.7.7(1)
This command provides a lot of output. The third line from the bottom identifies R1 as the IGMP querying router by its IP address, 172.16.100.1. Note that the output notifies us that the current IGMP host version is 2, and that the current IGMP router version is 2. IGMP version 2 is backwards compatible with IGMP version 1. This is possible because IGMP version 2 still supports IGMP version 1 query and report messages. Where IGMP version 1 had two types of messages, IGMP version 2 has three: query messages, report messages, and leave group messages.
IGMPv2 General Query Messages - Created by the querier - Allows dynamic discovery of all multicast group members on a segment.

IGMPv2 Group-Specific Query Messages - Created by the querier - Used to identify the existence of any members for a specific group.
R1#
IGMP(0): Send v2 general Query on FastEthernet0/1
IGMP(0): Received v2 Report on FastEthernet0/1 from 172.16.17.7 for 224.0.1.40
IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.17.7 for 0
sources
We see that R1 has sent an IGMP version 2 general Query. Removing the group membership to 224.7.7.7 from the GigabitEthernet0/0 interface of R2 will force R1 to send a group-specific query:
R2(config)#interface GigabitEthernet0/0
R2(config-if)#no ip igmp join-group 224.7.7.7
R1#
IGMP(0): Send v2 Query on FastEthernet0/0 for group 224.7.7.7
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.5 for 224.0.1.40
IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.100.5 for 0
sources
Note that R1 sent an IGMP version 2 query specifically for the group 224.7.7.7.
On joining a multicast group - the host will immediately notify the elected querier that it has actively joined a multicast group.

Upon receiving a General Query - the host will send a report after a random delay, in the same fashion employed in IGMP version 1 in response to a general query.

Upon receiving a Group-Specific Query - the host will send a report in response to a group-specific query if it is a member of the group queried.
An IGMP version 2 host will send a leave group message when it is no longer interested in a multicast group. Once this message reaches the elected querier, a group-specific query is generated. As discussed in the section on IGMP version 2 query messages, the group-specific query determines if other members of a group exist. If a report message for a group-specific query arrives, the IGMP version 2 router maintains the membership information for that specific group. In the case where no report messages arrive for a given group-specific query, the querier will discard the membership information. Typically, an IGMP version 2 router will transmit two queries with an interval of 1 second before discarding any information.
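Those two behaviors map to interface-level knobs in IOS; the values shown below are only examples, and the defaults normally do not need to be changed:

interface FastEthernet0/0
 ! number of group-specific queries sent before the group state is removed:
 ip igmp last-member-query-count 2
 ! interval between those queries, in milliseconds:
 ip igmp last-member-query-interval 1000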
Leave messages enable IGMP version 2 to find multicast groups without active members quickly. This alleviates the "leave latency" issues common in the older version of the protocol. In an effort to illustrate this process, R2 will remove its membership to the group 224.7.7.7 (debug ip igmp is running):
R2(config)#interface GigabitEthernet0/0
R2(config-if)#no ip igmp join-group 224.7.7.7
R2(config-if)#end
R2#
IGMP(0): IGMP delete group 224.7.7.7 on GigabitEthernet0/0
IGMP(0): Received Group record for group 224.7.7.7, mode 3 from 172.16.100.2 for 0
sources
IGMP(0): Send Leave for 224.7.7.7 on GigabitEthernet0/0
IGMP(0): Received v2 Query on GigabitEthernet0/0 from 172.16.100.1
IGMP(0): Lower expiration timer to 2000 msec for 224.7.7.7 on GigabitEthernet0/0
R2 deletes the group, sends the leave message, and adjusts the expiration timer for its 224.7.7.7 entry to 2 seconds. This results in an efficient leave process, and the IGMP version 2 router will stop forwarding the multicast packets for this group.
IGMP Version 3

This enhancement to IGMP introduced the concept of Group-Source Report messages. As stated earlier in the IGMP Technology Review, these messages allow a host to elect to receive traffic from specific sources of a multicast group; a concept referred to as Source Specific Multicast (SSM). Chapter 13: Multicast Security and Advanced Features will cover this topic in depth.
The role of IGMP is to notify the querier/IGMP router of the existence of devices that have joined multicast groups or group ranges. We have thoroughly examined the operational mechanisms for each version of IGMP in the previous sections of this chapter. Now we will look at the general operational process of IGMP as it relates to the overall multicast process:
Step One - The client/host sends an IGMP join message to a designated multicast router. The destination MAC address maps to the Class D address of the group being joined, not to the MAC address of the router (this mapping is worked through after Step Five). The body of the IGMP datagram also includes the Class D group address.

Step Two - The IGMP router logs the join message and uses a multicast routing protocol (covered later in this document) to add this segment to the multicast distribution tree.

Step Three - IP multicast traffic is then transmitted from the server via the designated router. The designated router manages the distribution of multicast packets to the host's subnet. The destination MAC address that is used corresponds to the Class D address of the multicast group.

Step Four - The switch receives the multicast packet and examines its forwarding table. If no entry exists for the MAC address, the packet will be flooded to all ports within the broadcast domain. If an entry does exist in the switch table, only the designated ports will forward packets.

Step Five - With IGMP version 2, the client can end a group membership by sending an IGMP leave message to the router. With IGMP version 1, the client remains a member of the group until it fails to send a join message in response to a query from the router.
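To make the MAC mapping in Steps One and Three concrete, here is the standard IPv4 multicast-to-MAC calculation: the low-order 23 bits of the group address are copied into the fixed Ethernet prefix 01:00:5e. Using the group 224.2.2.2 from this chapter's topology as the example:

224.2.2.2                  = 11100000.00000010.00000010.00000010
low-order 23 bits          =           0000010.00000010.00000010
01:00:5e + low 23 bits     = 01:00:5e:02:02:02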
Multicast routers also periodically send an IGMP query to the "all multicast hosts" group (224.0.0.1), or, if using IGMP version 2, to a specific multicast group on the subnet, to determine which groups are still active within the subnet. Each host delays its response to a query for a small random period and will only respond if no other hosts in the group have already responded. This mechanism prevents multiple hosts from congesting the network with simultaneous reports.
Common Issues with IGMP

Failure of a host to send IGMP joins would most probably be the result of a poorly written multicast application or a configuration mistake in a testing scenario. Configuration mistakes may include, but are not limited to, failure to apply an ip igmp join-group command or ip igmp static-group command under an interface. Additionally, the wrong multicast group applied to an interface prevents a host router from sending a membership report for the correct multicast group.
IGMP snooping scopes the flooding of multicast traffic by dynamically configuring Layer 2 interfaces so that multicast traffic is forwarded to only those interfaces associated with IP multicast devices. IGMP snooping requires the LAN switch to monitor IGMP transmissions between the host and the router and to keep track of multicast groups and member ports.

When a switch receives an IGMP report from a host for a particular multicast group, the switch adds the host port number to its forwarding table entry; when it receives an IGMP Leave Group message from a host, it removes the host port from the table entry. It also periodically deletes entries if it does not receive IGMP membership reports from any multicast clients.
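On a Catalyst switch, the snooping state built by this process can be inspected directly; the commands below exist on IGMP-snooping-capable IOS switches, and the VLAN number is simply the one used in this book's topology:

show ip igmp snooping groups vlan 1245
! lists each group learned on the VLAN and the member ports behind it
show ip igmp snooping mrouter
! lists the ports on which multicast routers (queriers) were detected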
The most common issues that can cause problems where a switch does not forward IGMP messages are where IGMP Filtering and Throttling have been erroneously configured, or previous configurations have not been completely removed.
IGMP Filtering - filters multicast joins on a per-port basis by configuring IP multicast profiles and associating them with individual switch ports. IGMP filtering controls only group-specific query and membership reports, including join and leave reports, and does not control general IGMP queries. IGMP filtering is applicable only to the dynamic learning of IP multicast group addresses, not static configuration.

IGMP Throttling - throttling sets the maximum number of IGMP groups that Layer 2 interfaces can join. Utilization of this technique can cause a switch to drop IGMP membership reports.

Apply IGMP Filtering using the ip igmp filter command. Apply IGMP Throttling using the ip igmp max-groups command. Both are interface-level commands.
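A minimal sketch of both features on a Catalyst access port follows; the profile number, group, port, and group limit are illustrative values only:

ip igmp profile 1
 deny
 range 224.4.4.4
!
interface FastEthernet0/4
 ip igmp filter 1
 ip igmp max-groups 10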
ip igmp access-group - used to filter groups from IGMP membership reports by applying a standard access list. This command restricts hosts on a subnet to joining only multicast groups permitted by the configured standard IP access list.

ip igmp limit - used to configure a limit on the number of mroute states that are created as a result of IGMP membership reports (IGMP joins). Membership reports exceeding the configured limits do not enter the IGMP cache.
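A corresponding sketch on the router side is shown below; the access-list number, group, and state limit are example values only:

access-list 1 permit 224.2.2.2
!
interface FastEthernet0/0
 ! only allow receivers to join groups permitted by access-list 1:
 ip igmp access-group 1
 ! cap the number of IGMP-created mroute states on this interface:
 ip igmp limit 5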
The most common issues that can cause problems where a router does not accept IGMP join messages are where IGMP Filtering or limiting have been erroneously configured, or previous configurations have not been completely removed.
In the IGMP Sample Troubleshooting Scenarios section that follows, troubleshooting of these issues is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each symptom, isolate the cause, and remediate the issue.

In the Common Issues with IGMP section, three primary types of problems were identified: Host Fails to Send IGMP Joins; Switch Fails to Forward IGMP Packets; and IGMP Packet Filtering. This section explores these three categories of failure by directing our attention to the commands necessary to identify that a problem exists.
There are three types of devices in this topology: hosts (R2, R4 and R5), a FastEthernet switch (CAT1), and an IGMP router (R1).
R2#debug ip igmp
IGMP debugging is on
R2#
IGMP(0): Received v2 Query on GigabitEthernet0/0 from 172.16.100.1
IGMP(0): Set report delay time to 3.1 seconds for 224.0.1.40 on GigabitEthernet0/0
R2#
IGMP(0): Received v2 Report on GigabitEthernet0/0 from 172.16.100.1 for 224.0.1.40
IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.100.1 for 0
sources
IGMP(0): Cancel report for 224.0.1.40 on GigabitEthernet0/0
IGMP(0): Updating EXCLUDE group timer for 224.0.1.40
IGMP(0): MRT Add/Update GigabitEthernet0/0 for (*,224.0.1.40) by 0
R2 is receiving IGMP membership reports, but it is not sending any for 224.2.2.2. Based on Figure 2-3, R2's GigabitEthernet0/0 interface should have joined the group 224.2.2.2. The quickest and simplest way to determine what multicast groups a device has joined on an interface-by-interface basis is to execute the show ip igmp interface command:
This output indicates that R2 has only joined the multicast group 224.0.1.40 on GigabitEthernet0/0. There is no listing for 224.2.2.2. According to Figure 2-3, this address should appear here. To verify why, the next step would be to look for an ip igmp join-group command under GigabitEthernet0/0 for the group 224.2.2.2:
The join command is missing. Applying the ip igmp join-group 224.2.2.2 command on the GigabitEthernet0/0 interface of R2 will force R2 to begin sending IGMP version 2 membership reports on the VLAN 1245 segment for 224.2.2.2.
R2(config)#interface GigabitEthernet0/0
R2(config-if)#ip igmp join-group 224.2.2.2
R2(config-if)#end
%SYS-5-CONFIG_I: Configured from console by console
R2#debug ip igmp
IGMP debugging is on
IGMP(0): Send v2 Report for 224.2.2.2 on GigabitEthernet0/0
IGMP(0): Received v2 Report on GigabitEthernet0/0 from 172.16.100.2 for 224.2.2.2
IGMP(0): Received Group record for group 224.2.2.2, mode 2 from 172.16.100.2 for 0
sources
IGMP(0): Updating EXCLUDE group timer for 224.2.2.2
IGMP(0): MRT Add/Update GigabitEthernet0/0 for (*,224.2.2.2) by 0
Step One: Is the host sending membership reports for the specific multicast group?
In this scenario, we will look at R4 to determine if it is sending membership report messages for 224.4.4.4. This was accomplished with the debug ip igmp command on R4:
R4#debug ip igmp
IGMP debugging is on
R4#
IGMP(0): Send v2 Report for 224.4.4.4 on FastEthernet0/0
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.4 for 224.4.4.4
IGMP(0): Received Group record for group 224.4.4.4, mode 2 from 172.16.100.4 for 0
sources
IGMP(0): Updating EXCLUDE group timer for 224.4.4.4
IGMP(0): MRT Add/Update FastEthernet0/0 for (*,224.4.4.4) by 0
The IGMP membership reports are being transmitted for the group 224.4.4.4, based on the debug output. What device is acting as the querier on the VLAN 1245 segment? This can be determined with the show ip igmp interface command:
The third line from the bottom of this show output indicates that the membership reports are being sent to R1 (172.16.100.1).
Step 2: Are the IGMP version 2 membership reports making it to the querier?
The answer to this question is best provided by the output of the show ip igmp groups command:
The output indicates that R1 does indeed know about the multicast group 224.4.4.4, and it did indeed learn about the Group Address membership from the "Last Reporter", R4 (172.16.100.4).
In a situation where the host is sending the membership report and the IGMP router is not receiving it, a logical assumption is that the switch is blocking it. The switch in this topology is CAT1. The easiest way to learn what multicast-enabled IGMP devices are connected to the switch is by using the show ip igmp snooping mrouter command:
---- -----
1245 Gi0/2(dynamic), Fa0/1(dynamic), Fa0/4(dynamic),
Fa0/5(dynamic)
After identifying what ports are connected to IGMP-enabled routers on the Catalyst switch, the next step is to verify if there are any commands configured on the switch that will alter the forwarding of IGMP packets. The first thing to look for is whether the ip igmp filter and/or ip igmp max-groups command(s) have been applied to any of these interfaces:
CAT1#show run interface FastEthernet0/5
Building configuration...
The output of this command on CAT1 for the interface connected to R5 illustrates examples of the commands we are looking for. In this particular instance, the configured commands are not affecting the current environment. Note that the seventh line up from the bottom notifies us that the interface will support only 10 maximum IGMP groups. The ninth line up from the bottom identifies a standard access-list that has been applied to the interface. In this environment, there are no issues on R1, but if there were, a closer examination of the access-list or the maximum IGMP state limit could isolate a fault.
show COMMAND:
show ip igmp [vrf vrf-name] interface [interface-type interface-number]
Where:
EXAMPLE OUTPUT:
R2#show ip igmp interface GigabitEthernet0/0
GigabitEthernet0/0 is up, line protocol is up
Internet address is 172.16.100.2/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
show COMMAND:
show ip igmp [vrf vrf-name] groups [group-name | group-address | interface-type interface-number] [detail]
This command displays the multicast groups with receivers directly connected to the router and learned through IGMP.
Where:
EXAMPLE OUTPUT:
R1#show ip igmp groups
IGMP Connected Group Membership
Group Address Interface Uptime Expires Last Reporter Group
Accounted
224.4.4.4 FastEthernet0/0 02:04:36 00:02:38 172.16.100.4 Ac
224.5.5.5 FastEthernet0/0 00:00:24 00:02:35 172.16.100.5 Ac
224.2.2.2 FastEthernet0/0 02:04:31 00:02:34 172.16.100.2 Ac
224.0.1.40 FastEthernet0/1 1d06h 00:02:37 172.16.17.7
224.0.1.40 FastEthernet0/0 1d06h 00:02:35 172.16.100.4 Ac
show COMMAND:
show ip igmp snooping [groups [count | vlan vlan-id [ip-address | count]] | mrouter [[vlan vlan-id] | [bd bd-id]] | querier | vlan vlan-id | bd bd-id]
Where:
EXAMPLE OUTPUT:
CAT1#show ip igmp snooping mrouter
Vlan ports
---- -----
1245 Gi0/2(dynamic), Fa0/1(dynamic), Fa0/4(dynamic),
Fa0/5(dynamic)
debug COMMAND:
debug ip igmp [vrf vrf-name] [group-address]
This command displays IGMP packets received and sent, and IGMP host-related events.
Where:
EXAMPLE OUTPUT:
IGMP(0): Received v2 Query on FastEthernet0/0 from 172.16.100.1
IGMP(0): Set report delay time to 1.5 seconds for 224.0.1.40 on FastEthernet0/0
IGMP(0): Set report delay time to 8.4 seconds for 224.2.2.2 on FastEthernet0/0
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.4 for 224.0.1.40
IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.100.4 for 0
sources
IGMP(0): Cancel report for 224.0.1.40 on FastEthernet0/0
IGMP(0): Updating EXCLUDE group timer for 224.0.1.40
IGMP(0): MRT Add/Update FastEthernet0/0 for (*,224.0.1.40) by 0
debug COMMAND:
debug ip igmp snooping {group | management | router | timer}
This command displays IGMP snooping activity for the selected category of events (group, management, router, or timer).
Where:
EXAMPLE OUTPUT:
CAT1#debug ip igmp snooping router
IGMPSN: router: Received IGMP pak on Vlan 1245, port Fa0/1
IGMPSN: router: Is a router port on Vlan 1245, port Fa0/1
IGMPSN: router: Learning port: Fa0/1 as rport on Vlan 1245
The network topology used in this section is shown in Figure 2-6 below:
Figure 2-6: The Chapter Challenge Topology

Trouble Ticket #1

Your supervisor has brought to your attention that R2 is not generating IGMP version 2 membership reports for the multicast group 224.2.2.2. Correct this issue.

Trouble Ticket #2

After solving Trouble Ticket #1, your supervisor has observed that membership reports generated by R4 for the multicast group 224.4.4.4 are not making it to the IGMP router. Correct this issue.

Trouble Ticket #3

Your supervisor has notified you that membership reports generated by R2 for the multicast group 224.2.2.2 are not making it into the IGMP Groups table on R1. Correct this issue without the removal of existing configurations.

Figure 2-7: IGMP Quick Fire Troubleshooting Flowchart
R2#debug ip igmp
IGMP debugging is on
R2#
IGMP(0): Received v2 Query on GigabitEthernet0/0 from 172.16.100.1
IGMP(0): Set report delay time to 3.8 seconds for 224.0.1.40 on GigabitEthernet0/0
R2#
IGMP(0): Received v2 Report on GigabitEthernet0/0 from 172.16.100.4 for 224.0.1.40
IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.100.4 for 0
sources
IGMP(0): Cancel report for 224.0.1.40 on GigabitEthernet0/0
IGMP(0): Updating EXCLUDE group timer for 224.0.1.40
IGMP(0): MRT Add/Update GigabitEthernet0/0 for (*,224.0.1.40) by 0
R2 is receiving reports and queries but is not sending reports for 224.2.2.2. This verifies that the problem actually exists.

Step 2 - Fault Isolation:

The next course of action is to use the show ip igmp interface command on R2.
R2#show ip igmp interface
GigabitEthernet0/0 is up, line protocol is up
Internet address is 172.16.100.2/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
IGMP querier timeout is 120 seconds
IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is not set
IGMP activity: 11 joins, 9 leaves
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.100.5
IGMP querying router is 172.16.100.1
Multicast groups joined by this system (number of users):
224.0.1.40(1)
The last line of the output does not list 224.2.2.2 as a group that R2 has joined. This is most likely a missing or erroneous configuration of the ip igmp join-group command under the GigabitEthernet0/0 interface. This is verified with the show run interface command on R2:
duplex auto
speed auto
end
This isolates the fault.

Step 3 - Fault Remediation:

In this scenario, the ip igmp join-group 224.2.2.2 command needs to be applied to the GigabitEthernet0/0 interface of R2.
R2(config)#interface GigabitEthernet0/0
R2(config-if)#ip igmp join-group 224.2.2.2
R2(config-if)#end
R1#debug ip igmp
IGMP debugging is on
R1#
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.2 for 224.2.2.2
IGMP(*): Group 224.2.2.2 access denied on FastEthernet0/0
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.5 for 224.5.5.5
IGMP(0): Received Group record for group 224.5.5.5, mode 2 from 172.16.100.5 for 0
sources
IGMP(0): Updating EXCLUDE group timer for 224.5.5.5
IGMP(0): MRT Add/Update FastEthernet0/0 for (*,224.5.5.5) by 0
R1 receives membership reports for the groups 224.2.2.2 and 224.5.5.5 but not 224.4.4.4. This means that the next step of the verification needs to be on CAT1. Use the debug ip igmp filter command on CAT1 to identify any interfaces that may have profiles filtering specific multicast groups:
CAT1#
IGMPFILTER: igmp_filter_process_pkt(): checking group 224.4.4.4 from Fa0/4: deny
CAT1 has an IGMP filter applied on interface FastEthernet0/4 that denies the multicast group 224.4.4.4. The actual parameters of the profile can be seen via the show ip igmp profile command:
CAT1(config)#interface FastEthernet0/4
CAT1(config-if)#no ip igmp filter 1
R1#debug ip igmp
IGMP debugging is on
R1#
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.2 for 224.2.2.2
IGMP(*): Group 224.2.2.2 access denied on FastEthernet0/0
R1#
R1 is receiving the membership reports from R2, but they are being actively denied on FastEthernet0/0, thus verifying the validity of the trouble ticket.

Step 2 - Fault Isolation:

To determine why the IGMP packets for the group 224.2.2.2 are being dropped, use the show ip igmp interface command.
R1#show ip igmp interface FastEthernet0/0
FastEthernet0/0 is up, line protocol is up
Internet address is 172.16.100.1/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
R1#show ip access-list 1
Standard IP access list 1
10 deny 224.2.2.2 (39 matches)
20 permit any (139 matches)
Looking at this output, we see that the multicast group 224.2.2.2 is being denied by access-list 1. Additionally, the output tells us that IGMP report messages are arriving on R1, but we have denied 39 of them. This is the problem with the configuration.
Step 3 - Fault Remediation:

In this scenario, access-list 1 should be edited such that line 10 is removed.
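One way to do that without rebuilding the rest of the list - a sketch, assuming an IOS release that supports sequence-number editing of numbered ACLs through the named ACL configuration mode - is:

R1(config)#ip access-list standard 1
R1(config-std-nacl)#no 10
R1(config-std-nacl)#end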
To verify that the editing worked we use the show ip access-list command:
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble Ticket has been repaired using the same method used to verify the fault initially.
R1#debug ip igmp
IGMP debugging is on
R1#
IGMP(0): Send v2 general Query on FastEthernet0/0
IGMP(0): Set report delay time to 1.1 seconds for 224.0.1.40 on FastEthernet0/0
R1#
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.2 for 224.2.2.2
IGMP(0): Received Group record for group 224.2.2.2, mode 2 from 172.16.100.2 for 0
sources
IGMP(0): Updating EXCLUDE group timer for 224.2.2.2
IGMP(0): MRT Add/Update FastEthernet0/0 for (*,224.2.2.2) by 0
R1 is no longer denying the membership report from R2 for the group 224.2.2.2. As a final verification, this group should now appear in the output of the show ip igmp groups command:

R1 now has 224.2.2.2 in the list of active IGMP groups, thus verifying that the issue has been corrected.
Chapter 3: Protocol Independent Multicast - Dense Mode (PIM-DM)

This chapter of IPv4/6 Multicast Operation and Troubleshooting examines the processes and the functionality of the PIM dense-mode (PIM-DM) protocol in great depth. Once the operational characteristics of this important protocol are detailed completely, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for the PIM dense-mode (PIM-DM) protocol. The chapter begins with a thorough review of PIM-DM, and then quickly launches into an exhaustive analysis of the art of troubleshooting this multicast routing protocol. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
Where IGMP is the protocol used to exchange information between hosts and routers, PIM is used between routers to build the multicast tree from the sender down to the interested hosts. PIM is protocol independent because no topology information is exchanged while the multicast tree is created. Instead, PIM relies on the underlying interior gateway protocol running in the network. This means that the multicast tree is built with no concern for the existence of loops in the topology.
There is, however, a convention built into the multicast forwarding and routing logic that works in unison with the routing protocol to prevent multicast loops. This convention, Reverse Path Forwarding (RPF), ensures that all multicast packets that arrive on an interface that is not used to reach the source of the packets are dropped. RPF will be covered in detail later in this chapter, but first it will be necessary to discuss the two current versions of PIM that can be deployed in a network.
Just like IGMP, PIM has evolved over the years. By default, current versions of the IOS will run PIM version 2. In order to better understand the nature of the enhancements made to the protocol in its current version, a close examination of PIM version 1 is in order.
PIMv1

PIM version 1 is a Cisco proprietary protocol that can dynamically map RPs to multicast groups in concert with a standalone protocol called Auto-RP. PIM version 1 uses a time-to-live value to scope its announcements. PIM version 1 packets are transmitted inside IGMP packets. PIM routers that create the multicast tree use these PIM-laden IGMP packets. IGMP packets containing PIM packets are designated as Type 5 IGMP messages, or "Router PIM Messages".
PIMv2

PIM version 2 is a standards-track protocol that made several improvements on the earlier Cisco proprietary version. These improvements include the concept of a single active RP per multicast group with multiple alternate RPs; PIM version 1 supported multiple active RPs for the same group. In version 2, PIM packets are stand-alone packets and are no longer embedded in IGMP messages. These new PIM version 2 packets have support for an automated, fault-tolerant RP discovery and distribution mechanism called the bootstrap router (BSR). This means that PIMv2 does not need any standalone protocols, as Auto-RP does, to allow routers to dynamically learn group-to-RP mappings.
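For reference, a minimal BSR configuration sketch is shown below; the loopback interface and the choice of which routers take the candidate-BSR and candidate-RP roles are illustrative assumptions, not part of this book's lab topology:

interface Loopback0
 ip pim sparse-mode
!
! advertise this router as a candidate BSR:
ip pim bsr-candidate Loopback0
! advertise this router as a candidate RP via the BSR mechanism:
ip pim rp-candidate Loopback0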
Additional modifications to the PIM version 1 protocol also include more flexible encoding of future capability options inside PIM join and prune messages, as well as a robust and flexible Hello packet format that replaces the old Query packet operation adopted from IGMP.
To better understand the operation of multicast in IP networks, it is best to separate the construction of the PIM control plane from the actual forwarding of multicast data (the data plane). Once the control plane protocols have actually constructed the multicast tree, the actual forwarding of multicast packets will take place in the data plane. The data plane uses Reverse Path Forwarding (RPF) to ensure traffic is not forwarded in a fashion that results in a loop of the multicast stream.
We mentioned briefly the fact that the multicast tree is built with no concern for loops in the tree. In fact, it is worth mentioning that commonly there will be valid deployments where the tree is built as a looped topology. Realizing this fact, PIM was created to use RPF at all times for all multicast traffic.
Specifically,
the
reverse
path
forwarding
check
is
going
to
ensure
that
multicast
packets
will
only
be
forwarded
when
they
arrive
on
interfaces
that
are
designated
as
being
loop-free
unicast
paths
back
to
the
source
of
the
multicast
stream.
The
main
goal
of
the
reverse
path
forwarding
check
is
every
time
a
device
receives
a
multicast
packet
on
an
interface
the
router
will
perform
a
lookup
based
on
the
source
IP
address.
This
lookup
recurses
to
the
interface
used
by
the
underlying
routing
protocol
to
reach
the
source
of
the
packet.
This
interface
will
be
compared
to
the
interface
the
packet
actually
arrived
on.
If
the
interfaces
match
then
the
RPF
check
passes.
If
the
interfaces
do
not
match
then
the
RPF
check
fails
and
the
packet
is
dropped.
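When an RPF result is in doubt, the decision the router made for a given source can be inspected directly. The following is only a minimal sketch; the source address 172.16.15.1 is used purely for illustration, and the full syntax and a sample of the output appear in the show ip rpf reference entry later in this chapter.

Router#show ip rpf 172.16.15.1

If the RPF interface reported by this command does not match the interface on which the multicast packets from that source actually arrive, those packets will fail the RPF check and be dropped.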
It is important to understand that these RPF checks are a data plane protection process, separate and apart from the control plane, and they are performed against each and every multicast packet a device receives. So in the event of a loop in the IGP topology, or if PIM was not able to create a loop-free tree, it is assured that multicast packets will not loop as they are forwarded. This essentially guarantees that even if a looped tree is built, the data plane will determine which interfaces should or should not be used in forwarding. All of this is based on the underlying unicast routing table.

The majority of problems encountered when deploying or troubleshooting PIM will actually stem from some part of the physical network design failing the RPF check mechanism. This means that most multicast issues are going to be related to RPF check failures that will need to be remediated either by changing the underlying unicast routing, implementing static multicast routes, deploying tunnels, or by using multicast BGP.
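As a quick sketch of the static multicast route option, the ip mroute command tells a router which next hop to treat as the RPF neighbor for a given source prefix, overriding the unicast routing table for RPF purposes only. The prefix and next-hop address below are illustrative, not taken from a specific scenario in this chapter.

Router(config)#ip mroute 172.16.15.0 255.255.255.0 172.16.45.5

This changes only the multicast RPF calculation; unicast forwarding is not affected.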
The last stage of data plane forwarding is going to be the creation of the multicast routing table for a particular device. PIM neighbors exchange messages, specifically join and prune messages, that are used to create the multicast tree; these messages are also used in the creation of the multicast routing table. The role of the multicast routing table is to keep track of what interfaces point to multicast sources, and what interfaces lead to multicast receivers. This table can at first seem confusing and difficult to interpret, but a short amount of time looking at how the information is organized will reveal just how powerful this table is when trying to troubleshoot any multicast routing problem.

Two classifications of information found in the multicast routing table correspond to the interface classification mentioned previously. Interfaces that point toward the source of a multicast feed (upstream facing) are classified as incoming interfaces. These interfaces are placed in the section of the multicast routing table called the "incoming interface list". The remaining links (downstream facing) lead toward any possible multicast receivers. These interfaces compose what is called the Outgoing Interface List or "OIL".

At this point it is important to note that there is a "split-horizon-like" behavior that multicast follows. This behavior prevents an interface from being classified as both incoming and outgoing for a given multicast group simultaneously. This behavior can be disabled in some categories of PIM, but not in others. For troubleshooting purposes, it is important to remember this behavior, as it can cause significant issues in network designs that have multipoint non-broadcast interfaces, like some deployments of frame relay. It should also be noted that this behavior can make it impossible to deploy some modes of PIM in certain network designs. As a rule, it is best to run multicast over a Layer 2 technology such as frame-relay using point-to-point, PIM-enabled links everywhere.
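A minimal sketch of that recommendation on a frame-relay circuit follows. The interface number, DLCI, and addressing are illustrative only; the key points are the point-to-point subinterface and PIM enabled directly on it.

Router(config)#interface Serial0/0/0.1 point-to-point
Router(config-subif)#ip address 172.16.46.4 255.255.255.0
Router(config-subif)#frame-relay interface-dlci 401
Router(config-subif)#ip pim dense-mode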
Once the multicast tree has been created using the associated PIM join and prune messages, and assuming the RPF checks pass, multicast routes (mroutes) will begin to populate the multicast routing table. These routes utilize an annotation scheme unique to multicast. This annotation format follows the model of (S,G), where "S" is the source and "G" is the group. This means that the router knows the identity of the source for a specific group, as opposed to the (*,G) entry. The (*,G) entry means that the device knows the group but not the identity of the source. The (S,G) is often referred to as the "Source Tree", while the (*,G) is known as the "Shared Tree".
Multicast routing has one other thing in common with unicast routing. The most specific route or "longest match" will always be preferred. This means that a (*,G) entry will always be less specific than an (S,G) entry in the multicast routing table. This being the case, once a multicast packet arrives on an interface, the router will switch the packet from the incoming interface to all interfaces in the OIL. The mechanism being described here is one where a "single" packet arrives on an interface, a Layer 3 replication of that single packet takes place, and it is forwarded out all the interfaces in the OIL.

The mode that this chapter will explore is PIM-DM. The remaining operational modes will each have their own dedicated chapters. It was decided to start with PIM-DM because the protocol is the simplest of the three, and affords a very streamlined technology to introduce the basic and advanced verification commands, tools, and processes associated with troubleshooting all of the PIM mode types.
Figure 3-1: A Sample PIM-DM Topology
In this chapter, we are going to look critically at the specifics of PIM-DM, to include the different verification techniques used to validate its operation, as well as the commands used to isolate faults in the protocol.

PIM-DM operates via an implicit join paradigm. This implicit join model is often called a "push model" because the routers are going to flood all multicast traffic they receive out every single interface running PIM-DM (except the interface the packet arrived on). This means that the individual adjacent routers are responsible for deciding if they are interested in the multicast feed or not. If the router has no interest in the multicast feed, it will send a PIM Prune message. This message is the equivalent of an un-join instruction sent out the originating link. This "flood and prune" behavior makes PIM-DM operation unwieldy due to the sheer volume of state information it must maintain. The larger the network, the more state information needs to be maintained; therefore, PIM-DM's biggest drawback is its lack of scalability.
A PIM-DM router is first going to attempt to identify all the PIM-DM enabled neighbors on its individual links. In an effort to find these PIM-enabled neighbors, PIM-DM will use the reserved link-local multicast address 224.0.0.13. As a result of this methodology, it is essential to ensure that any Layer 2 protocols, like frame-relay, are configured to support multicast transport when it comes to the neighbor discovery process or multicast forwarding. This is accomplished by ensuring that these links are configured to support pseudo-broadcast. The application of the "broadcast" keyword allows a link to support broadcast addresses like 255.255.255.255, but by extension, it also supports multicast packets.
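On a multipoint frame-relay interface, pseudo-broadcast is typically enabled by adding the broadcast keyword to each frame-relay map statement. A minimal sketch is shown below; the next-hop address and DLCI are illustrative only.

Router(config)#interface Serial0/0/0
Router(config-if)#encapsulation frame-relay
Router(config-if)#frame-relay map ip 172.16.46.6 406 broadcast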
A closer look at the operation of PIM-DM can be accomplished by using the debug ip packet detail command. Using this command before actually configuring multicast routing will afford valuable insight into the processes that take place on the router. Using the topology in Figure 3-2, we will demonstrate this in detail.

Looking at the output of debug ip packet on any of these devices after we apply the ip multicast-routing and the ip pim dense-mode commands, we will see the underlying process take place on the console. In order to filter out traffic associated with our internal routing protocol, we will apply an ACL to the debug ip packet command that will prevent EIGRP's locally process switched traffic from cluttering the console messages. Also, note that like other chapters in this text, we have disabled the service timestamps feature, again in an effort to reduce any possible confusion regarding the interpretation of the debug output.
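The access list referenced below was created before enabling the debug. A minimal sketch of such a filter is shown here, assuming we simply deny EIGRP and permit everything else; with debug ip packet, only packets permitted by the referenced access list are displayed.

R1(config)#access-list 101 deny eigrp any any
R1(config)#access-list 101 permit ip any any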
R1(config)#end
R1#
R1#debug ip packet detail 101
IP packet debugging is on (detailed) for access list 101
With this accomplished, our next task is to enable ip multicast-routing and apply the ip pim dense-mode command under the FastEthernet0/0 interface of R1. Once completed, we will observe the output of the debug ip packet command:
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#ip multicast-routing
R1(config)#interface FastEthernet0/0
R1(config-if)#ip pim dense-mode
R1(config-if)#end
Almost immediately, debug output begins to appear on the console of R1. It is this output we need to interpret in order to understand the process that is taking place. Specifically, we need to look at these three samples of output. Notice that each represents an instance where R1 is sending a broadcast/multicast packet, but there are two different protocol types, and three different destination addresses used.

First, we will look at the protocol types. In the sample output above, we see protocols 103 and 2. What are these protocol types? It is simple enough to find out by creating an extended access-list using the IP protocol number. The default behavior of Cisco IOS is to translate these protocol numbers into a human readable form. Taking advantage of this feature is a handy tool to identify the protocol types without needing external resources or materials.
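A sketch of this trick follows. An otherwise unused extended access list is created with the raw protocol numbers, and when the list is displayed, IOS substitutes the protocol keywords. The list number 100 is arbitrary, the entries can be removed afterwards, and the display shown is representative output along these lines rather than a capture from this topology.

R1(config)#access-list 100 permit 103 any any
R1(config)#access-list 100 permit 2 any any
R1(config)#end
R1#show ip access-list 100
Extended IP access list 100
    10 permit pim any any
    20 permit igmp any any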
The output of the show ip access-list 100 command reveals that the protocol types in question are PIM (type 103) and IGMP (type 2) messages, respectively. Now that we know what type of messages R1 is sending, the next logical step is to look at where the messages are going. The type 103 or PIM messages are being sent to the link-local multicast group 224.0.0.13. The type 2 messages are actually being sent to two different destination groups. The first group is 224.0.0.1, a link-local multicast group employed by all multicast hosts. This means that by virtue of enabling PIM on the interface, R1 is now sending IGMP query messages to all hosts on the VLAN 15 segment.

However, it is also observed that R1 is sending type 2 protocol messages to a new multicast destination address that has not been discussed as of yet: the multicast group 224.0.1.40. This address is used in the process of Auto-RP that was discussed briefly in Chapter 2: Internet Group Management Protocol (IGMP). Auto-RP will be discussed in detail in Chapter 8: AutoRP. At this point, it is enough to know that the group 224.0.1.40 is the multicast group that the Auto-RP mapping agent will use to disseminate Auto-RP information, but the important thing to note is that as soon as the multicast routing process is enabled, the router will automatically join the group 224.0.1.40. Regardless of what PIM mode we are running, the router will attempt to dynamically learn the identity of a rendezvous point (RP). PIM-DM does not utilize an RP as part of the multicast routing process, but nonetheless the router will join the group. This behavior affords us an ideal opportunity to look at the multicast routing table while it has only one entry.
R1#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
Observe the (*, 224.0.1.40) entry. This entry is referred to as a "star comma gee", and fits the annotation model of (*,G). This means that the router has joined this multicast group, but there are no known sources for it at this time. How can we actually tell the router has joined this group? Remember the behavior of IGMP. An IGMP host records all the groups it has joined in an IGMP membership list. The contents of this list can be viewed with the show ip igmp membership command. The output verifies that R1 has actually joined the multicast group 224.0.1.40 on interface FastEthernet0/0.

These two show commands, show ip mroute and show ip igmp membership, demonstrate that we have IP multicast-routing enabled globally, and that at the interface level we are running PIM-DM. Also, based on the output we have observed, we know that R1 is sending PIM messages to 224.0.0.13 with a protocol number of 103, and IGMP messages related to the group 224.0.1.40. In fact, R1 has actually become an IGMP client for the Auto-RP mapping agent group.

Now that we have observed the behavior of PIM-DM critically on one device, and have an understanding of how it operates, it is time to enable ip multicast-routing and PIM-DM on every router interface with an ip address in the topology. At this point in our critical analysis, we will enable PIM-DM on all interfaces in an effort to prevent any failures in the reverse path forwarding mechanism. Once this is accomplished, it is very simple to identify and map the multicast routing topology based on the output of a single show command: show ip pim neighbor. Using this command on all the routers in this topology will quickly allow us to construct a logical drawing of the multicast routing topology.
Beginning on R1:
We see that R1 has formed a PIM neighbor relationship with R5. The same command on R5 informs us that R5 is neighbored with R1 and R4:

Consistently repeating this command on all the routers in the topology will produce a topology drawing matching Figure 3-3.

Figure 3-3: PIM neighbor relationships

Note: When using the show ip pim neighbor command, be sure to check for adjacencies on both ends of a link. There are scenarios where the PIM neighbor can show up on one side and not the other.
Now that we see the topology, there are issues that need discussion. Observe that the PIM neighbor relationships between R4, R2, and R6 form a distinct loop. At first glance, this will seem extremely odd, but remember the control plane is formed with no thought toward possible loops. This is where the RPF check mechanism will come into play. Later in this section, this entire process will be laid out and dissected step-by-step, but for now it is time to move to the next stage of the process.

We have reviewed the nature and purpose of the PIM and IGMP messages that are sent between PIM-DM enabled devices; now it is time to look at the IGMP join process. In Chapter 2: Internet Group Management Protocol (IGMP) we saw the inner workings of this protocol, and we know that to force a router to join a multicast group we can employ a number of commands at the interface level. In this explanation, the FastEthernet0/1 interface of R9 will use the ip igmp join-group command to join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
Once this is accomplished, the show ip mroute or show ip igmp membership commands will tell us if the router's interface has indeed joined the group 224.9.9.9. In order to develop more familiarity with the show ip mroute command, we use it below.
R9#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
The output reveals that the router has in fact joined the group 224.9.9.9 as well as the group 224.0.1.40. Note that there is a (*,G) entry for each of these groups. In this instance, R9 is acting like a host and as a result will send its IGMP membership reports to R7. This can be observed via the show ip igmp membership command.

R9 has notified the IGMP router R7 that it is interested in the multicast group 224.9.9.9. Does R7 forward any of this information to adjacent devices? No, R7 has no mechanism or requirement to forward these IGMP report messages to any of its adjacent neighbors in PIM-DM. This can be seen with either the show ip igmp membership or show ip mroute commands on R6. There is no entry for a (*, 224.9.9.9) on R6, nor is there an entry for that group in the IGMP membership list. This clearly illustrates that the IGMP router is not forwarding any of the information learned from R9.

The reason for this behavior is that PIM-DM uses an implicit join methodology where the protocol assumes all devices are interested in receiving the multicast feed. So rather than notifying any of its neighbors that R9 has joined a specific group, R7 instead will assume that any multicast feed for the group 224.9.9.9 will be flooded to it. This means that in PIM-DM, R7 will be listening for the multicast traffic to arrive on any of its interfaces, but it will not notify any other routers that it has an adjacent host that has joined any particular group. This is the flooding portion of the PIM-DM "Flood and Prune" behavior.

The previous step outlined the relationship between the IGMP host and the IGMP router. The previous chapter described the IGMP mechanism as the protocol that encompasses the last leg of the multicast routing tree. We have observed that only the host and its adjacent IGMP router know about the join-group command under the FastEthernet0/1 interface of R9. The next step is to emulate a multicast source and observe how PIM-DM will flood the multicast stream.

The source in our topology is R1, and to emulate a multicast feed we will initiate a ping destined to a multicast group. For testing purposes, this ping will not match the multicast group that R9 has joined. Because no host is interested in this multicast feed, the pings on R1 will fail. Nevertheless, the object of this exercise is to follow the multicast feed through the network, observe its behavior, and note any related characteristics. We will use a very high repeat count on R1 to make sure that the multicast feed remains active throughout our testing.
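A sketch of such a ping follows; the group matches the feed examined in the upcoming output, and the repeat count is simply an arbitrarily large number chosen to keep the stream running during our tests.

R1#ping 224.1.1.1 repeat 100000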
The pings are not successful, as we expected, but what about the multicast routing tables on all the routers in the topology? We will look at R5 first:
Now we observe something new. For the multicast group 224.1.1.1 there are two entries, a (*,G) entry and an (S,G) entry. This affords us an ideal opportunity to look closely at the "more specific" match rule that we discussed earlier. The output of this show command illustrates that R5 knows about both the group and the source for that group. This constitutes the creation of the (S,G) entry. Observe that the (*,G) entry is stopped, and that each of R5's PIM-DM enabled interfaces is in the OIL. The more specific (S,G) entry is different. We see that there is now an expiration timer of 00:00:16 in this output capture, and that FastEthernet0/0 is in the incoming interface list and FastEthernet0/1 is in the OIL.

Based on the multicast "flooding" model used by PIM-DM, we can expect to see this (*,G), (S,G) pair on each router in our topology. To reduce the amount of output in this verification, this test will only be done on R6 and R9, and the output will be filtered to reduce it to a more manageable level. Remember, the traffic will be flooded throughout the multicast domain because of the flood and prune model employed by PIM-DM. So all PIM-DM routers will receive the multicast feed whether they want it or not. This covers the flood portion of PIM-DM. What about the "prune" aspect of PIM-DM?
We need to look at the output of the show ip mroute command on R9 once more.
Now there is only a (*,G) entry in the multicast routing table where before there was both a (*,G) and an (S,G). What happened? We need to repeat the command.
Now both the (*,G) and the (S,G) entries are back. Is there a problem with our configuration?
No. We are observing the normal flood and prune behavior of PIM-DM. When PIM-DM enabled routers receive a multicast feed and they do not have an IGMP join state for that feed, or a neighbor with a join state for that feed, they will send a prune message back toward the source. This is evidenced by the output of the debug ip pim command on R9.
R9#debug ip pim
PIM debugging is on
R9#
PIM(0): Insert (172.16.15.1,224.1.1.1) prune in nbr 172.16.79.7's queue
PIM(0): Building Join/Prune packet for nbr 172.16.79.7
PIM(0): Adding v2 (172.16.15.1/32, 224.1.1.1) Prune
PIM(0): Send v2 join/prune to 172.16.79.7 (FastEthernet0/1)
Looking critically at the output of the debug command, we see that R9 built a Join/Prune packet. The router then added a Prune state to its own multicast routing table, and then sent the join/prune packet to its neighbor 172.16.79.7. This entire mechanism is known as pruning.

Without employing the debug commands, we can see in the multicast routing table whether a multicast feed is in a pruned state or not. In this output, we see there is a field called "flags". For the (S,G) entry for 224.1.1.1 these flags are PT. The flag legend for the show command tells us that the "P" flag means this multicast feed has been pruned. Since there are no interested receivers in this topology, we expect the status of this feed to be pruned on all routers between R1 and R9. Notice that the flags for each of these multicast routing table entries are P for pruned. Additionally, observe that the interfaces in the OIL all have a state/mode of Prune/Dense. This is indicated by looking at the value after the interface designator. This output tells us that the adjacent router on the segment has sent us a prune message, in effect informing the router that there are no interested devices on or beyond this segment.

As an additional test, we will have the FastEthernet0/1 interface of R6 join the multicast group 224.1.1.1.
R6(config)#interface FastEthernet0/1
R6(config-if)#ip igmp join-group 224.1.1.1
R6(config-if)#end
After implementing this command on R6, there will be a significant change in the multicast routing table of the devices in the path from R1 to R6. Most notably, the state/mode value will change to Forward/Dense. This means that these devices are actively forwarding the multicast feed to R6, which can be seen by observing the contents of the multicast routing tables on R5, R4, R2, and R6. Once the multicast feed reaches R6, we will see that it is not forwarded beyond R6 because there are no devices beyond R6 that have joined the multicast group 224.1.1.1. This is reflected by the flag of "P" and the state/mode value of Prune/Dense found in the multicast routing table for the (S,G) pair.
<output omitted>
This tells us that everything is working, but it is important to note that successful ICMP echo replies tell us the test is working, while the absence of echo replies does not necessarily mean that the test is failing. Keep in mind that normal multicast operations are unidirectional in nature. Specifically, they will be unidirectional UDP packet flows travelling from the sender down to the receivers. This process, being UDP-based, takes place without any explicit acknowledgement from any device in the multicast path. This means that in a normal multicast feed, the multicast application at the hosts will never reply to the source of the multicast stream. Though using pings and looking for echo replies is a useful tool, the important thing to observe is whether the multicast packets are actually arriving at the host, by using debug ip mpacket at the host.
For instance, on R6 we see the packets arriving. This output demonstrates that multicast packets are arriving on R6 via the FastEthernet0/1 interface, sourced from the IP address 172.16.15.1 and destined to the multicast group 224.1.1.1. Also, observe the statement that reads "mroute olist null". The mroute olist null statement indicates that there are no interfaces in the outgoing interface list interested in this feed. This leads us to the next question. What does the output of this command look like on a device in the transit path, where there should be an outbound interface in the OIL? We can see this by using debug ip mpacket on a router in the path, like R4. Oddly enough, no matter how long we wait, we see no output on this device, but we know the multicast feed is transiting R4 to get to R6. This is verified with the show ip mroute command.
R4#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 05:16:16/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 05:16:16/00:00:00
FastEthernet0/0, Forward/Dense, 05:16:16/00:00:00
Serial0/0/0.1, Forward/Dense, 05:16:16/00:00:00
(172.16.15.1, 224.1.1.1), 00:42:52/00:02:56, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.5
Outgoing interface list:
Serial0/0/0.1, Prune/Dense, 00:03:00/00:00:00, A
FastEthernet0/0, Forward/Dense, 00:37:26/00:00:00
This confirms our expectation. The output verifies the forwarding of the multicast feed for 224.1.1.1 out the FastEthernet0/0 interface of R4. What is blocking the results of the debug ip mpacket command? We cannot see the packets transit through R4 because they are not being process switched. Only traffic destined to or sourced from a multicast enabled router will be process switched by default. Currently, multicast traffic on the FastEthernet interfaces of R4 is fast switched, as made evident via the show ip interface FastEthernet command.

Note that the output tells us that the router has enabled fast switching, and disabled distributed fast switching. Distributed fast switching is hardware based, whereas regular fast switching is software based. The Cisco routers used in this topology do not support hardware switching. What can we do to change this behavior? In order to disable multicast fast switching, we will need to disable the multicast route-cache feature that is on by default on all interfaces. Doing so will force the router to process switch multicast packets.
R4(config)#interface FastEthernet0/0
R4(config-if)#no ip mroute-cache
R4(config-if)#interface FastEthernet0/1
R4(config-if)#no ip mroute-cache
R4(config-if)#end
Once this is accomplished, we will immediately begin to see output from the debug ip mpacket command.
R4#
IP(0): MAC sa=0017.9486.c711 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=10022, ttl=253, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 (FastEthernet0/0) id=10022,
ttl=253, prot=1, len=100(100), mforward
R4#
IP(0): MAC sa=0017.9486.c711 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=10023, ttl=253, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 (FastEthernet0/0) id=10023,
ttl=253, prot=1, len=100(100), mforward
R4#
IP(0): MAC sa=0017.9486.c711 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=10024, ttl=253, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 (FastEthernet0/0) id=10024,
ttl=253, prot=1, len=100(100), mforward
R4#
IP(0): MAC sa=0017.9486.c711 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=10025, ttl=253, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 (FastEthernet0/0) id=10025,
ttl=253, prot=1, len=100(100), mforward
Now that the traffic is being process switched, we can see the multicast packets enter and leave R4. The multicast feed is entering the FastEthernet0/1 interface and being multicast forwarded out the FastEthernet0/0 interface. We can verify that we are process switching the multicast traffic with the show ip interface command.
R4#show ip interface FastEthernet0/0 | inc multicast
IP multicast fast switching is disabled
IP multicast distributed fast switching is disabled
After the flood and prune has run, we get this message. Why do we get output for transit traffic when we just discussed the necessity to disable fast switching? In multicast fast switching, the first packet of a feed will be process switched. Therefore, we will see the first packet that arrives during each flood refresh.

The issue with this output is the "not RPF interface" value. We have discussed the fact that the PIM multicast routing topology forms with no concern for multicast routing loops. Instead, PIM, in this instance PIM-DM, relies on the RPF check to prevent loops in the multicast data plane. Here R4 is receiving an mpacket from R6 because there is a loop in the multicast topology, but RPF prevents the looping of multicast packets by dropping packets that arrive on interfaces that are not in the unicast path back to the source.

The RPF check takes place as follows. Once a multicast packet arrives on a given interface, the router immediately knows two things based on the information in the packet: the source and the group. This comprises the (S,G) entry in the multicast routing table, as illustrated by the show ip mroute command. The output of this simple command provides us with a wealth of information. We see the multicast source, the RPF interface for that source, as well as the IP address of the neighbor from which the router expects to receive the multicast packets. The output even tells us the unicast routing protocol used to perform the RPF check.
RPF Failures
In the Troubleshooting PIM-DM section, this text discussed the phases of the PIM-DM operational mechanism. Since these mechanisms utilize messages communicated using multicast, they are all subject to Reverse Path Forwarding (RPF) checks. Logically then, RPF issues can prevent optimal multicast routing, or stop multicast forwarding entirely.
PIM-DM performs RPF checks in both the control and data plane.
Control Plane - The PIM-DM control plane uses PIM messages in its creation. PIM sends its messages to the link-local multicast group 224.0.0.13, and these messages are therefore subject to RPF checks. It is important to note that RPF checks in the control plane are performed against the source IP address carried in each PIM packet as it arrives. More often than not, this will be the IP address of the adjacent neighbor.
Data Plane - PIM-DM will perform RPF checks on each individual multicast packet before deciding to forward it. This means that the source IP address of each multicast packet a router receives must be reachable out the receiving interface before the router will forward it to an adjacent neighbor. In PIM-DM, RPF always performs checks against the source of the multicast feed.

The RPF check mechanism can result in scenarios where the control plane fails to form correctly, or multicast packets fail to transit the multicast tree. When only a few packets or no packets reach the receivers, RPF failures will normally be the cause. We will perform a walk-through for each of these RPF issues in the PIM-DM Sample Troubleshooting Scenarios section that follows.
deactivation of this behavior. So in order to facilitate the forwarding of multicast packets in scenarios like multicast hub-and-spoke designs, it will be necessary to utilize other solutions. Solutions for these situations include, but are not limited to, Tunnel Interfaces or M-BGP.
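As one hedged illustration of the tunnel approach, a GRE tunnel can be built between two routers and PIM enabled on it, so that the multicast tree follows the tunnel rather than the problematic multipoint interface. Every value below is illustrative only; in practice a static mroute (or a similar adjustment) may also be required so that the RPF check prefers the tunnel.

Router(config)#interface Tunnel0
Router(config-if)#ip address 10.255.255.1 255.255.255.252
Router(config-if)#tunnel source Loopback0
Router(config-if)#tunnel destination 192.0.2.6
Router(config-if)#ip pim dense-mode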
If the packet's TTL is higher than the multicast threshold configured on an interface (and it passes the RPF check), the packet will be forwarded. If the TTL of the packet is lower than the multicast threshold, the router drops the packet. The possible range for a multicast threshold value is 0 to 255, with 0 meaning all packets will be forwarded, versus 255, where virtually no packets will be forwarded.
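A minimal sketch of configuring such a boundary on an interface follows; the value 16 is arbitrary and simply means that only multicast packets whose TTL is greater than 16 will be forwarded out this interface.

Router(config)#interface FastEthernet0/0
Router(config-if)#ip multicast ttl-threshold 16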
In the PIM-DM Sample Troubleshooting Scenarios section that follows, troubleshooting of these issues is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each symptom, isolate the cause, and remediate the issue.

In the Common Issues with PIM-DM section, three primary types of problems were identified: RPF failures, Hub and Spoke Designs, and Multicast Threshold problems. This section explores these three categories of failure by directing our attention to the commands necessary to verify a problem, isolate it, and remediate it.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the group 224.9.9.9? Are there interfaces in the OIL for the (172.16.15.1, 224.9.9.9) group? The output indicates that there are no interfaces in the outgoing interface list, as shown by the value of "Null". Looking at the output of the show ip mroute count command for the group 224.9.9.9, we see what is happening:
This output tells us many things; first, we know we are dropping about 1 packet per second. This can be seen under the Forwarding Counts based on the value -1. Next, we know that we do not have an RPF check issue because the second field in the Other Counts category is 0. Lastly, we see that we have received 86 multicast packets for the group 224.9.9.9 and we have dropped 86 packets for "Other" reasons. The command even points out some common issues.

Interpreting this output is simple. We have no interfaces in the OIL for this group. The reason will be revealed when we look at the output of show ip pim neighbor on R5. Immediately, we can see there is no PIM-DM neighbor relationship with R4. Logically, the next step would be to verify what interfaces are participating in PIM-DM on R5:
R5#show ip pim interface
R5 is running PIM-DM on the FastEthernet0/1 interface toward R4. What about R4? Is it running PIM-DM on its interface toward R5?
R4#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
172.16.24.2 FastEthernet0/0 01:53:26/00:01:26 v2 1 / S
172.16.46.6 Serial0/0/0.1 01:52:41/00:01:43 v2 1 / S
R4 is not running PIM-DM on its FastEthernet0/1 interface. To correct this issue, the ip pim dense-mode command will be applied:
R4(config)#interface FastEthernet0/1
R4(config-if)#ip pim dense-mode
R4(config-if)#end
R4#
R5 now has FastEthernet0/1 in the OIL of the entry. Are multicast packets being forwarded to R4? We will clear the multicast routing table before we take a look at the counters.
R5#clear ip mroute *
PIM-DM mode, but has a state of Pruned. We know this means no device on this segment has joined or knows of a member of the group 224.9.9.9. However, the interface Serial0/0/0.1 is in PIM-DM and in a forwarding state. Seeing that R4 is forwarding out this interface, the next logical step will be to follow this link to the next adjacent PIM-DM neighbor. We can find that with show ip pim neighbor:
R4#show ip pim neighbor Serial0/0/0.1
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
172.16.46.6 Serial0/0/0.1 02:18:56/00:01:32 v2 1 / S
The next device we need to look at is R6 (172.16.46.6) according to this output. We need to look at the multicast routing table on this router now:
R6#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
We have interfaces in the OIL for the 224.9.9.9 S,G pair, and they are both operating in PIM-DM and are in the forwarding state. However, something very important is missing in this output. The incoming interface list is NULL. This means R6 is learning the S,G entry for 224.9.9.9, but it is not learning it from its RPF neighbor. What is the RPF neighbor? The output demonstrates that R6 expects to learn about this multicast source from R2 (172.16.26.2). This means that R6 is expecting to receive the multicast stream sourced from R1's 172.16.15.1 unicast address on what interface?
R6#
There are no packets arriving on this interface. Are there any PIM neighbors out this interface?
R6#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
172.16.67.7 FastEthernet0/0 10:05:05/00:01:35 v2 1 / DR S
172.16.46.4 Serial0/1/0.1 10:05:21/00:01:38 v2 1 / S
FastEthernet0/1 has no neighbors. Is FastEthernet0/1 running PIM-DM?
R6#show ip pim interface
This output tells us that each of the 16 packets R6 has received has been forwarded. We used the clear ip mroute * command to clear the counters. The multicast issues on R6 have been remediated. Are the pings from R1 working now?
R1#
The hop-by-hop method of fault isolation lends itself readily to environments where there may be multiple issues causing the multicast topology to fail to operate. There are two primary tools that can be used to quickly identify the location where a multicast fault may exist. In this section, the mtrace utility will be used to isolate a fault. Here, R9 is still joined to the group 224.9.9.9, and R1 has failed to successfully send pings to that address. This test can be done with any multicast group and does not require the receiver to have actually joined the group used.
Step One: Use mtrace on R1 to isolate the place in the topology where the multicast stream fails.
The output clearly demonstrates that the path between R1 and R9 no longer has any issues. As a final verification, the ping test will be repeated.
show COMMAND: show ip igmp membership [group-address | group-name] [tracked] [all]
This command displays Internet Group Management Protocol (IGMP) membership information for multicast groups and (S, G) channels.
Where:
EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R7#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
This command displays information about interfaces configured for Protocol Independent Multicast (PIM).
EXAMPLE OUTPUT:
R7#show ip pim interface
show COMMAND: show ip pim [vrf vrf-name] neighbor [interface-type interface-number]
This command displays information about Protocol Independent Multicast (PIM) neighbors discovered by PIM version 1 router query messages or PIM version 2 hello messages.
Where:
EXAMPLE OUTPUT:
R5#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
172.16.15.1 FastEthernet0/0 00:26:04/00:01:16 v2 1 / S
172.16.45.4 FastEthernet0/1 00:22:39/00:01:43 v2 1 / S
show COMMAND: show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]
This command displays information that IP multicast routing uses to perform the Reverse Path Forwarding (RPF) check for a multicast source.
Where:
EXAMPLE OUTPUT:
R5#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2)
RPF interface: FastEthernet0/1
RPF neighbor: ? (172.16.45.4)
RPF route/mask: 192.1.2.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables
debug COMMAND: debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]
This command displays multicast packets that are received and sent on the device.
Where:
EXAMPLE OUTPUT:
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=7, ttl=254, prot=1,
len=114(100), mroute olist null
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=8, ttl=254, prot=1,
len=114(100), mroute olist null
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=9, ttl=254, prot=1,
len=114(100), mroute olist null
debug COMMAND: debug ip pim [vrf vrf-name] [bsr]
This command displays Protocol Independent Multicast (PIM) packets received and sent, and displays PIM-related events.
Where:
EXAMPLE OUTPUT:
R7#debug ip pim
PIM debugging is on
PIM(0): Received v2 Join/Prune on FastEthernet0/0 from 172.16.67.6, to us
PIM(0): Prune-list: (172.16.79.7/32, 224.9.9.9)
PIM(0): Prune FastEthernet0/0/224.9.9.9 from (172.16.79.7/32, 224.9.9.9)
IP(0): s=172.16.79.7 (FastEthernet0/1) d=224.9.9.9 id=8, ttl=254, prot=1,
len=114(100), mroute olist null
The network topology used in this section is shown in Figure 3-7 below:
Trouble Ticket #1
Your supervisor has brought to your attention that users on the 192.1.2.0/24 network cannot receive the multicast feed from the source R1. The feed should be destined to the multicast address 224.2.2.2. You must correct the issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that users on the network 192.1.6.0/24 cannot receive traffic destined to the multicast group 224.6.6.6. You must correct this issue.
Trouble Ticket #3
Your supervisor has notified you that users on the network 172.16.79.0/24 are not receiving multicast feeds. Use the multicast group 224.9.9.9 to verify this task. You must correct this issue.
Figure 3-8: PIM-DM Quick Fire Troubleshooting Flowchart
The pings are not successful. This verifies that the problem actually exists.
Step 2 - Fault Isolation:
The next course of action is to use the mtrace utility to rule out the possibility of an RPF issue. Make certain to perform this process in both directions, first from R2 toward R7, then from R7 toward R2.
R1#mtrace 172.16.15.1 192.1.2.2
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 192.1.2.2 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.2.2
-1 172.16.24.2 PIM [172.16.15.0/24]
-2 172.16.24.4 None No route
This output indicates that the multicast traffic is stopping on R4. The show ip pim neighbor command will tell us whether or not we have PIM-DM relationships with the neighboring PIM-DM enabled routers.
R4#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
172.16.24.2 FastEthernet0/0 1d09h/00:01:15 v2 1 / S
172.16.46.6 Serial0/0/0.1 1d09h/00:01:30 v2 1 / S
The verification clearly demonstrates that R4 has no neighbor relationship with R5 via FastEthernet0/1. This most likely means that the ip pim dense-mode command is missing either on R4 or R5. The show ip pim interface command will quickly reveal which device. The ip pim dense-mode command is missing under the FastEthernet0/1 interface, thus blocking the multicast traffic. This has unquestionably isolated our fault.
Step 3 - Fault Remediation:
In this scenario, the ip pim dense-mode command should be applied under the interface, as we have done in the past.
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#interface FastEthernet0/1
R4(config-if)#ip pim dense-mode
R4(config-if)#end
Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble Ticket has been repaired using the same method as the initial fault verification.
R1#ping 224.2.2.2 repeat 5
R1#mtrace 172.16.15.1 192.1.6.6
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 192.1.6.6 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.6.6
-1 172.16.26.6 PIM [172.16.15.0/24]
-2 172.16.26.2 PIM [172.16.15.0/24]
-3 172.16.24.4 PIM [172.16.15.1/32]
-4 172.16.45.5 PIM [172.16.15.0/24]
-5 172.16.15.1
There seem to be no issues involving multicast RPF failures. With this observation, we can next look to see if the issue may be TTL-threshold induced, using the mstat command:
R1#mstat 172.16.15.1 192.1.6.6
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 192.1.6.6 via RPF
From source (?) to destination (?)
Waiting to accumulate statistics......
Results after 10 seconds:
Now R4:
Now R2:
Now R6:
Then R7:
R7#show ip mroute 224.9.9.9 | sec .1, 224.9
(172.16.15.1, 224.9.9.9), 00:01:00/00:01:59, flags: T
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Int Limit 0 kbps
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:01:00/00:00:00
We see that the routers between R1 and R9 are all forwarding the multicast traffic for the group 224.9.9.9. However, on R7 we see something new in the show ip mroute output. Observe that the incoming interface for the S,G pair states that multicast rate-limiting has been applied. The interface is configured to allow 0 kbps of multicast traffic. This can be confirmed via show run interface FastEthernet0/0:
R7#show run interface FastEthernet0/0
Building configuration...
!
interface FastEthernet0/0
ip address 172.16.67.7 255.255.255.0
ip pim dense-mode
ip multicast rate-limit in 0
duplex auto
speed auto
end
Looking carefully at this output on R7 leads us to believe that the router is not going to allow any multicast traffic to enter this interface. This has isolated the design fault.
Step 3 - Fault Remediation:
In this scenario, the ip multicast rate-limit command on FastEthernet0/0 should be removed.
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface FastEthernet0/0
R7(config-if)#no ip multicast rate-limit in 0
R7(config-if)#end
Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble Ticket has been repaired using the same method used to verify the fault initially.
R1#ping 224.9.9.9 repeat 5
Chapter 4: Protocol Independent Multicast - Sparse Mode (PIM-SM)
In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and the functionality of the Protocol Independent Multicast - Sparse Mode (PIM-SM) protocol are examined in great depth. Once the operational characteristics of this important protocol are detailed completely, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for PIM-SM. The chapter begins with a thorough review of PIM-SM, and then quickly launches into an exhaustive analysis of the art of troubleshooting this multicast protocol. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
You should note that in PIM-SM, it is the RP that keeps track of multicast groups. Hosts that send multicast packets are registered with the RP by the first hop router of that host. The RP then sends Join messages toward the source. Packets are then forwarded on a shared distribution tree. If the multicast traffic from a specific source is sufficient, the first hop router of the host may send Join messages toward the source to build a source-based distribution tree. An administrator can force traffic to stay on the shared tree by using the ip pim spt-threshold infinity command.
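A sketch of that command is shown below. Applied globally, it keeps last-hop routers on the shared tree instead of switching over to the shortest-path tree; a group-list form of the command can limit the behavior to specific groups.

Router(config)#ip pim spt-threshold infinity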
Another powerful option for the enforcement of shared tree topologies is Bidirectional Protocol Independent Multicast. This book provides exhaustive coverage of this technology in Chapter 6: Bidirectional Protocol Independent Multicast (BIDIR-PIM). It is certainly worth noting that while the name sparse mode implies that this approach to multicast routing would only be appropriate in topologies with a sparse distribution of receivers, PIM-SM scales well to a network of any size and with any number of potential receivers.
ip pim sparse-mode
Note: Remember, the rendezvous point (RP) is a critical component that should also be configured. The various options for doing this appear in later chapters of this book.

This discussion outlines the first issue that can affect the operation of PIM-SM. All devices must agree on the identity of the RP for the protocol to function correctly. Using the topology in Figure 4-1, we will systematically deploy PIM-SM and then designate a device to fulfill the role of RP.
Figure 4-1: Sample PIM-SM Topology
We will statically assign R2 to be the RP in this network, beginning with R9 and working our way toward R2.
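The static assignment itself is a single global command repeated on every router in the path; a sketch is shown below. The address 192.1.2.2 is R2's address acting as the RP, and without an access-list the mapping covers the full 224.0.0.0/4 range.

R9(config)#ip pim rp-address 192.1.2.2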
All four devices agree that R2 (192.1.2.2) is the RP for the entire multicast range of 224.0.0.0/4. Now that this has been configured and verified, we will have the FastEthernet0/1 interface of R9 join the multicast group 224.9.9.9.
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
Once this is accomplished, we will use the show ip mroute command to follow the formation of the tree that forms between R9 and R2 (the RP). Here is the systematic process: R9 has joined the group 224.9.9.9, as evidenced by that group appearing in the output of show ip igmp groups. R9 has the group in the list as expected.
Now R9 will send an IGMP join to R7; this join packet will arrive on the FastEthernet0/1 interface of that router. This is illustrated by the output of debug ip igmp:
R7#
IGMP(0): Received v2 Report on FastEthernet0/1 from 172.16.79.9 for 224.9.9.9
IGMP(0): Received Group record for group 224.9.9.9, mode 2 from 172.16.79.9 for 0
sources
IGMP(0): Updating EXCLUDE group timer for 224.9.9.9
IGMP(0): MRT Add/Update FastEthernet0/1 for (*,224.9.9.9) by 0
Having received this message, R7 will create a (*, 224.9.9.9) entry in its multicast routing table, with the FastEthernet0/1 interface in the OIL, as evidenced by a show ip mroute:
R6 has FastEthernet0/0 in the OIL and FastEthernet0/1 in the incoming interface list, as expected. Now R6 will actively begin to send PIM join/prune messages to R2. These messages will arrive on the GigabitEthernet0/1 interface of R2. Seeing this, R2 will create the same (*,G) entry for the group 224.9.9.9 with GigabitEthernet0/1 in the OIL. Again, this is evidenced by show ip mroute:
All the interfaces between each of these devices are running ip pim sparse-mode, and we have statically assigned the RP on each device. The output of show ip pim rp mapping will demonstrate that these four devices agree on the identity of the RP. Now PIM-SM messages can be sent because the identity of the RP has been established.

Now, if a source were to appear, the PIM-DR can send the PIM register message. If the RP accepts the PIM register message, it will send an acknowledgement to the PIM-DR known as a Register-Stop. The Register-Stop informs the PIM-DR that the RP has received the PIM register message, created an (S,G) entry for the group identified in the message, and tells the PIM-DR to stop sending register messages.

To illustrate this process, R1 will send multicast traffic to the group 224.1.1.1. It should come as no surprise that the ping fails. No hosts have joined this particular group. We will observe the exchange of information between the PIM-DR and the RP. First, the PIM-DR sends the PIM register message to the RP via unicast:
R5#debug ip pim
PIM debugging is on
R5#
PIM(0): Send v2 Data-header Register to 192.1.2.2 for 172.16.15.1, group 224.1.1.1
We see that R5 sends the Register message to 192.1.2.2 for the group 224.1.1.1. This is a unicast packet. R2 will then receive this message and send a Register-Stop, as evidenced by the output of debug ip pim:
R2#debug ip pim
PIM debugging is on
R2#
PIM(0): Received v2 Register on FastEthernet0/0 from 172.16.45.5
(Data-header) for 172.16.15.1, group 224.1.1.1
PIM(0): Send v2 Register-Stop to 172.16.45.5 for 172.16.15.1, group 224.1.1.1
The output demonstrates that R2 receives the Register message for the group 224.1.1.1 and responds by sending the Register-Stop for the group 224.1.1.1. Observe that this process is taking place via unicast. Before we look to see if the Register-Stop made it to R5, we will want to see if the (S,G) entry for the multicast group 224.1.1.1 was created in R2's multicast routing table. Use show ip mroute to accomplish this:
R2#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
The S,G pair is clearly installed. Note that there are no interfaces in the OIL at this time. Now back to R5 to check for the Register-Stop:
R5#debug ip pim
PIM debugging is on
R5#
PIM(0): Received v2 Register-Stop on FastEthernet0/1 from 192.1.2.2
PIM(0): for source 172.16.15.1, group 224.1.1.1
PIM(0): Clear Registering flag to 192.1.2.2 for (172.16.15.1/32, 224.1.1.1)
The Register-Stop was received for the group 224.1.1.1 from the RP (192.1.2.2), and the register flag for the S,G pair (172.16.15.1, 224.1.1.1) was cleared. There will be an entry for this S,G pair in the multicast routing table of R5:
R5#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
Wait for the multicast routing table entry for 224.1.1.1 to expire on R2:
R2#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
R2#debug ip pim
PIM debugging is on
R2#
On R1, generate the multicast pings for the group 224.9.9.9 with a repeat count of 1.
The RP sent the Register-Stop. This seems strange given the fact that we have a host that wants to receive this multicast group. The things we need to point out are as follows: R2 will now have the (*,G) and (S,G) entry in its multicast routing table:
R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
This output tells us that the RP has learned the identity of the source, so now once a PIM join message for this source arrives on the RP, it will send a PIM join up the reverse path tree toward the source, as evidenced by the debug ip pim output on R2:
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/1 from 172.16.26.6, to us
PIM(0): Join-list: (*, 224.9.9.9), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (*, 224.9.9.9), Forward state, by PIM
*G Join
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (172.16.15.1, 224.9.9.9), Forward
state, by PIM *G Join
We see the join arrive from R6, and then the RP builds and sends its own join to R4 toward the source:
R2#
PIM(0): Building Join/Prune packet for nbr 172.16.24.4
PIM(0): Adding v2 (172.16.15.1/32, 224.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.24.4 (GigabitEthernet0/0)
This PIM join creates a (*,G) entry in all the devices between the RP and the source.
R4#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Right now we are only interested in the (*,G) entry in the multicast routing table. Notice that the RPF address on R4 is the IP address used to reach the RP (172.16.24.2).
R5#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
R5 has the (*,G) entry in the multicast routing table. This entry was created as a result of a PIM join arriving on the FastEthernet0/1 interface, and the RPF neighbor is 172.16.45.4. Also note that we have (S,G) entries on these routers as well.
The appearance of both the (*,G) and (S,G) values in the table tells us that the router has passed some packets. Following the pattern that we see in the show command output above, we see that initially the multicast tree is built end-to-end through the RP. We will observe this behavior by sending just one ping as we did before. This time we will enable debug ip pim on R2 and analyze the PIM messages we receive:
R2#
PIM(0): Insert (172.16.15.1,224.9.9.9) join in nbr 172.16.24.4's queue
PIM(0): Building Join/Prune packet for nbr 172.16.24.4
PIM(0): Adding v2 (172.16.15.1/32, 224.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.24.4 (GigabitEthernet0/0)
PIM(0): Received v2 Register on GigabitEthernet0/0 from 172.16.45.5
for 172.16.15.1, group 224.9.9.9
PIM(0): Forward decapsulated data packet for 224.9.9.9 on GigabitEthernet0/1
PIM(0): Insert (172.16.15.1,224.9.9.9) join in nbr 172.16.24.4's queue
PIM(0): Building Join/Prune packet for nbr 172.16.24.4
PIM(0): Adding v2 (172.16.15.1/32, 224.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.24.4 (GigabitEthernet0/0)
R2#
We see that the (S,G) entry is inserted into R2's multicast routing table. The RP builds a Join packet for R4. This join is forwarded via the link-local multicast address 224.0.0.13, and once it arrives on R5, the PIM-DR, the PIM register message will be sent back toward the RP. Observe that this time R2 does not send a Register-Stop. Instead, R2 begins to forward multicast packets.
Once the first packet arrives at R7, the source IP address is learned, and R7 will send a PIM Prune message to the RP once it determines that there is a shorter path toward the source. This prune message is propagated in a hop-by-hop manner using the link-local multicast address 224.0.0.13. Once this message arrives at the RP, it will stop forwarding multicast packets, create its own prune message, and send that to its next hop neighbor toward the source.
In the output provided we see the "S-bit Prune" value for the (S,G) pair. This indicates that the status of the pair (172.16.15.1, 224.9.9.9) will transition to pruned:

The incoming interface list for the (S,G) pair has Serial0/1/0.1 as expected.
Though this process of switching from the RPT to the SPT is default behavior, it is possible to change this behavior with the ip pim spt-threshold infinity command on the IGMP router.
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#ip pim spt-threshold infinity
R7(config)#end
Once this command has been applied to the IGMP router, the multicast tree will always be formed end-to-end through the RP. We will demonstrate this by issuing pings from R1 for the group 224.9.9.9, and observing the output of debug ip pim on R2:
R2#
PIM(0): Received v2 Register on GigabitEthernet0/0 from 172.16.45.5
PIM(0): Send v2 Register-Stop to 172.16.45.5 for 0.0.0.0, group 0.0.0.0
PIM(0): Received v2 Register on GigabitEthernet0/0 from 172.16.45.5
for 172.16.15.1, group 224.9.9.9
PIM(0): Forward decapsulated data packet for 224.9.9.9 on GigabitEthernet0/1
PIM(0): Insert (172.16.15.1,224.9.9.9) join in nbr 172.16.24.4's queue
PIM(0): Building Join/Prune packet for nbr 172.16.24.4
PIM(0): Adding v2 (172.16.15.1/32, 224.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.24.4 (GigabitEthernet0/0)
R2#
In this instance, R2 receives and forwards the "S-Bit Join" to its next hop neighbor toward the source. But R2 never receives a PIM Prune from the IGMP router, thus the multicast tree remains rooted at the RP. This can also be verified via show ip mroute on R6. This output will show that the incoming interface on R6 will be the FastEthernet0/1 interface:
R6#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
This output demonstrates two things. First, the incoming interface for the group 224.9.9.9 is FastEthernet0/1 as we expected. Second, we have no (S,G) entries in the multicast routing table of R6. There will be no (S,G) entries on R6, R7 or R9 because all of these routers will send their multicast packets to the RP, rather than the unicast IP address of the source.
RPF Failures

In the Troubleshooting PIM-SM section, this text discussed what phases of the PIM-SM operational mechanisms use unicast and multicast. The multicast driven portions of this protocol are all subject to Reverse Path Forwarding (RPF) checks. Recall that of all the phases, only the PIM Register and Register-Stop processes rely on unicast; everything else uses link-local multicast.
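Because the Register and Register-Stop exchange is plain unicast, a quick sanity check is simply to confirm that the PIM-DR and the RP can reach each other. A minimal sketch, assuming the addressing used in this topology (R5 registering from 172.16.45.5 to the RP 192.1.2.2):

R5#ping 192.1.2.2 source 172.16.45.5
R2#ping 172.16.45.5

If either ping fails, the Register process breaks for unicast routing reasons rather than because of an RPF check.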
Logically then, RPF issues can prevent an RP from learning multicast routing information from the IGMP router. Additionally, this problem can prevent the RP from successfully merging the multicast trees.
The following list of issues has a relatively high probability of occurring thanks to RPF failures. Remember that these RPF checks are performed against the IP address of either the RP or the multicast source.
Be aware that anytime all interfaces in a network are not running PIM, these issues may arise.
RP is not learning the (*,G) entries for some or all multicast groups.
RP fails to notify the PIM-DR to begin forwarding multicast packets.
We will perform a walk through for each of these RPF issues in the PIM-SM Sample Troubleshooting Scenarios section that follows.
This is a situation where it will be necessary to look at the underlying routing protocols used in the network. Typically, this would be an issue of asynchronous routing, and should be something obvious once the routing tables of the source and transit devices are analyzed.
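A quick way to expose this kind of routing mismatch is to compare the unicast routing table entry for the source with the RPF information the multicast process is actually using. A minimal sketch, assuming a transit router R4 and the source address 172.16.15.1 from this topology:

R4#show ip route 172.16.15.1
R4#show ip rpf 172.16.15.1

If the RPF lookup fails, or the RPF interface is not the interface on which the multicast traffic actually arrives, an RPF failure is the cause.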
Situations like the following exist when information fails to propagate to any or all devices, but RPF checks and unicast routing seem to be functioning correctly:
In the PIM-SM Sample Troubleshooting Scenarios section that follows, troubleshooting these issues is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each symptom, isolate the cause, and remediate the issue.
In the Common Issues with PIM-SM section, three primary types of problems were identified: RPF failures, unicast routing failures, and multicast forwarding and routing failures. This section explores these three categories of failure by directing our attention to the commands necessary to identify that a problem exists. There are four types of devices in this topology: RP, Host/Receiver, Source, and transit devices (PIM-enabled routers).
Now that R9 has joined the group 224.99.99.99, where does it send the IGMP Join message?
R9#show ip igmp interface FastEthernet0/1
FastEthernet0/1 is up, line protocol is up
Internet address is 172.16.79.9/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
IGMP querier timeout is 120 seconds
IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is not set
IGMP activity: 3 joins, 1 leaves
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.79.9 (this system)
IGMP querying router is 172.16.79.7
Multicast groups joined by this system (number of users):
224.0.1.40(1) 224.99.99.99(1)
R9#
The identity of the IGMP querying router is 172.16.79.7, R7. This router will have a record of the IGMP report sent by R9:
In this output, R7 has recorded the IGMP Report sent by R9 for the multicast group 224.99.99.99. Now R7 will use link-local multicast messages to form the PIM-SM RPT, as evidenced by the output of show ip mroute on the devices leading to the RP.
Based on the output we see that R7 is forwarding the PIM Report messages to R6 via its FastEthernet0/0, the interface used to reach the RP. If these messages arrive on R6, there will be a (*,G) entry in its multicast routing table for the group, and FastEthernet0/1 should be in the incoming interface list. This can be verified via show ip mroute:
Observe that the (*,G) is present but there are no interfaces in the incoming interface list: Null. Additionally, the RPF neighbor address is 0.0.0.0. Something is preventing the FastEthernet0/1 address from being assigned to the incoming interface list. The RPF value of 0.0.0.0 usually indicates that the RP cannot be recognized, and commonly occurs because of an RPF error. What interface is the RPF interface for the RP?
RP?
R6#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2) failed, no route exists
This output indicates that there is no multicast route to the RP. What interface would R6 use to reach 192.1.2.2? FastEthernet0/1 is not running PIM-SM. Correct this by applying ip pim sparse-mode under the interface.
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#interface FastEthernet0/1
R6(config-if)#ip pim sparse-mode
R6(config-if)#end
Now we will use show ip mroute to determine if R6 can now resolve the identity of the RP, and place FastEthernet0/1 into the incoming interface list:
Step Three: Does the PIM-DR send the PIM Register message to the RP?
This process is accomplished via unicast, and as such is not subjected to the RPF check mechanism. The initiation of a multicast source on R1 will allow us to confirm whether the PIM Register message is sent.
R1#ping 224.99.99.99 r 5
The ping was not successful. Was a (S,G) record created on R2 for the group 224.99.99.99?
R2 has no record of the (S,G) group. Does R5 send the PIM Register message?
R2#debug ip pim
PIM debugging is on
R2#
PIM(0): Send RP-reachability for 224.0.1.40 on GigabitEthernet0/0
PIM(0): Send RP-reachability for 224.0.1.40 on GigabitEthernet0/1
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/1 from 172.16.26.6, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (*, 224.0.1.40), Forward state, by
PIM *G Join
R2#
R2 is not receiving a PIM Register from R5. Does R5 have a (S,G) entry for the group 224.99.99.99? Observe that again we have an RPF neighbor value of 0.0.0.0. R5 is not the RP, so this means that R5 cannot recognize the RP. The next most logical consideration is, "Is there an RPF error?"
Once again, we see that there is no route to the RP. Are all the interfaces running PIM-SM?
Observe that both interfaces are running PIM-SM. What is the next step in this part of the PIM-SM operational mechanism? R5 should now unicast the PIM Register message to the RP. Can R5 reach the RP?
R5#ping 192.1.2.2
Now we can verify that the ping from R1 is successful:
show COMMAND: show ip igmp membership [group-address | group-name] [tracked] [all]

This command displays Internet Group Management Protocol (IGMP) membership information for multicast groups and (S, G) channels.

Where:

EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R6#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
show COMMAND: show ip pim interface

This command displays information about interfaces configured for Protocol Independent Multicast (PIM).

EXAMPLE OUTPUT:
R6#show ip pim interface
show COMMAND: show ip pim rp mapping

This command displays information about Protocol Independent Multicast (PIM) RP mappings.

EXAMPLE OUTPUT:
R6#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.7.7 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:40:34, expires: 00:01:34
RP 192.1.5.5 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:40:36, expires: 00:01:34
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R6#
show COMMAND: show ip pim [vrf vrf-name] neighbor [interface-type interface-number]

This command displays information about Protocol Independent Multicast (PIM) neighbors discovered by PIM version 1 router query messages or PIM version 2 hello messages.

Where:

EXAMPLE OUTPUT:
R6#show ip pim neighbor
PIM Neighbor Table
show COMMAND: show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]

This command displays information that IP multicast routing uses to perform the Reverse Path Forwarding (RPF) check for a multicast source.

Where:

EXAMPLE OUTPUT:
R6#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2)
RPF interface: FastEthernet0/1
RPF neighbor: ? (172.16.26.2)
RPF route/mask: 192.1.2.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables
R6#
debug COMMAND: debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]

This command displays multicast packets that are received and sent on the device.

Where:

EXAMPLE OUTPUT:
IP(0): s=172.16.26.6 (FastEthernet0/1) d=239.9.9.9 (FastEthernet0/0) id=1, ttl=254,
prot=1, len=100(100), mforward
debug COMMAND: debug ip pim [vrf vrf-name] [bsr]

This command displays Protocol Independent Multicast (PIM) packets received and sent and displays PIM-related events.

Where:

EXAMPLE OUTPUT:
R6#
PIM(0): Received v2 Join/Prune on FastEthernet0/0 from 172.16.67.7, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/0/172.16.67.7 to (*, 224.0.1.40), Forward state, by PIM
*G Join
PIM(0): Received v2 Join/Prune on FastEthernet0/0 from 172.16.67.7, to us
PIM(0): Join-list: (172.16.46.6/32, 239.9.9.9), S-bit set
PIM(0): Update FastEthernet0/0/172.16.67.7 to (172.16.46.6, 239.9.9.9), Forward state,
by PIM SG Join
R6#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 239.9.9.9
PIM(0): Send v2 Null Register to 192.1.7.7
PIM(0): Received v2 Register-Stop on FastEthernet0/0 from 192.1.7.7
PIM(0): for source 0.0.0.0, group 0.0.0.0
R6#
PIM(0): Insert (172.16.15.1,239.9.9.9) join in nbr 172.16.26.2's queue
PIM(0): Building Join/Prune packet for nbr 172.16.26.2
PIM(0): Adding v2 (172.16.15.1/32, 239.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.26.2 (FastEthernet0/1)
R6#
PIM(0): Received v2 Bootstrap on FastEthernet0/1 from 172.16.26.2
PIM(0): Update (224.0.0.0/4, RP:192.1.7.7), PIMv2
PIM(0): Update (224.0.0.0/4, RP:192.1.5.5), PIMv2
PIM(0): Received v2 Bootstrap on Serial0/0/0.1 from 172.16.46.4
R6#
PIM(0): Received v2 Join/Prune on FastEthernet0/0 from 172.16.67.7, to us
PIM(0): Join-list: (172.16.15.1/32, 239.9.9.9), S-bit set
PIM(0): Update FastEthernet0/0/172.16.67.7 to (172.16.15.1, 239.9.9.9), Forward state,
by PIM SG Join
R6#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.0.1.40
PIM(0): Insert (*,224.0.1.40) join in nbr 172.16.26.2's queue
PIM(0): Building Join/Prune packet for nbr 172.16.26.2
PIM(0): Adding v2 (192.1.2.2/32, 224.0.1.40), WC-bit, RPT-bit, S-bit Join
PIM(0): Send v2 join/prune to 172.16.26.2 (FastEthernet0/1)
R6#
PIM(0): Received v2 Join/Prune on FastEthernet0/0 from 172.16.67.7, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/0/172.16.67.7 to (*, 224.0.1.40), Forward state, by PIM
*G Join
R6#
The network topology used in this section is shown in Figure 4-5 below:
Trouble Ticket #1

Your supervisor has brought to your attention that the RP in this topology is not learning about the multicast group 224.9.9.9 that R9 has joined. You must correct the issue.
Trouble Ticket #2

After solving Trouble Ticket #1, your supervisor has observed that the RP is not creating the S,G entry for the group 224.9.9.9 and any pings sent to this group from R1 fail. Correct this issue.
Trouble Ticket #3

Your supervisor has instructed you to prevent any multicast traffic associated with the group 224.9.9.9 from traversing the point-to-point frame-relay circuit between R4 and R6. You are not allowed to change or remove any pim sparse-mode commands at the interface level to accomplish this.
Figure 4-6: PIM-SM Quick Fire Troubleshooting Flowchart
R9 has joined the multicast group 224.9.9.9; does the RP (R2) have a (*,G) entry for this multicast group?
R2#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
There is no RPF interface that can be used to reach R9. Using show ip pim neighbors, we need to verify that PIM-SM is running on the interface that would be used to reach R9. The quickest method to verify this is to execute the show ip pim interface command on R6:
R6#show ip pim interface
The ip pim sparse-mode command is not configured on the FastEthernet0/0 interface. This has unquestionably isolated our fault.

Step 3 - Fault Remediation:

In this scenario, the ip pim sparse-mode command needs to be added to FastEthernet0/0.
R6(config)#interface FastEthernet0/0
R6(config-if)#ip pim sparse-mode
Step 4 - Verification of Remediation

Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble Ticket has been repaired using the same method of the initial fault verification.
R2#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
0 172.16.15.1
-1 172.16.15.1 PIM [192.1.2.0/24]
-2 172.16.15.5 PIM [192.1.2.0/24]
-3 172.16.45.4 PIM [192.1.2.0/24]
-4 172.16.24.2 PIM [192.1.2.0/24]
-5 192.1.2.2
Next we will reverse the direction of the check:
R2#show run | inc interface | pim
interface Loopback0
ip pim sparse-mode
interface GigabitEthernet0/0
ip pim sparse-mode
interface GigabitEthernet0/1
ip pim sparse-mode
interface Serial0/1/0
interface Serial0/2/0
ip pim rp-address 192.1.2.2
ip pim accept-register list 100
Note that the last line of this output shows that a pim accept-register command has been applied to the RP. The access-list being called by this configuration is extended access-list 100. What does this access-list permit or deny?
R2#conf t
R2(config)#ip access-list extended 100
R2(config-ext-nacl)#5 permit ip host 172.16.15.1 any
R2(config-ext-nacl)#end
Step 4 - Verification of Remediation

Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble Ticket has been repaired using the same method used to verify the fault initially:
R1#ping 224.9.9.9 r 1000
The entry has been created, demonstrating that the fault has been corrected.
The incoming interface is now FastEthernet0/1 toward R2 as expected. This fault has been corrected.
Chapter 5: Protocol Independent Multicast Sparse-Dense Mode (PIM-S-DM)

This chapter of IPv4/6 Multicast Operation and Troubleshooting details the processes and functionality of the PIM sparse-dense mode (PIM-S-DM) protocol. Following the coverage of the operational characteristics of the protocol, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for the PIM sparse-dense mode (PIM-S-DM) protocol. The chapter begins with a thorough review of PIM-S-DM, and then quickly launches into an exhaustive analysis of the art of troubleshooting this multicast routing protocol. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
With PIM sparse-dense mode (PIM-S-DM), the interface is treated as dense mode if the group is in dense mode; the interface is treated as sparse mode if the group is in sparse mode.
Obviously, for sparse mode operation, there must be a Rendezvous Point (RP). PIM sparse-dense mode solves issues with Auto-RP, covered in detail in Chapter 8: AutoRP. With sparse-dense mode, dense mode operation can distribute the Auto-RP information, while the multicast groups for user data can operate in sparse fashion.
To successfully implement Auto-RP and prevent any groups other than 224.0.1.39 and 224.0.1.40 from operating in dense mode, Cisco recommends configuring a "sink RP" or "RP of last resort". A sink RP is a statically configured RP that may or may not actually exist in the network. Configuring a sink RP does not interfere with Auto-RP operation since the default behavior is for Auto-RP messages to supersede static RP configurations.
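A minimal sketch of a sink RP configuration, assuming 192.1.2.2 is used as the RP of last resort (the address only needs to be routable; it does not have to belong to an actual RP):

R6(config)#ip pim rp-address 192.1.2.2

Because dynamically learned Auto-RP mappings override a static entry by default, only groups with no Auto-RP learned RP fall back to this sink RP and therefore remain in sparse mode.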
When an interface operates in dense mode, it will be populated into the outgoing interface list of a multicast routing table entry when either of the following conditions are true:

When an interface operates in sparse mode, it will be populated into the outgoing interface list of a multicast routing table entry when either of the following conditions are true:
ip pim sparse-dense-mode
Figure 5-1: Sample PIM-S-DM Topology
This output on R1 clearly illustrates that the multicast pings are successful, but have they been routed in dense or sparse mode? The show ip mroute command will tell us how traffic has been routed on any given device:
R2#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
Observe that the traffic sent to 224.9.9.9 was flagged as "S" (sparse), whereas the traffic to the group 239.9.9.9 was flagged as "D" (dense). What determines whether a group is treated as either sparse or dense mode traffic? Simply put, if a device knows of an RP for a given multicast address or scope of addresses, that group or scope is treated as sparse mode traffic. If a device does not have an RP mapping for a given group or scope of addresses, then this traffic will be forwarded in dense mode. Observe in this topology that the first half of the multicast range has been assigned to the RP 192.1.2.2. This can be discovered using show ip pim rp mapping:
R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Acl: 1, Static
RP: 192.1.2.2 (?)
This tells us that an access-list was used to assign groups to an RP (192.1.2.2); this is often referred to as a group-to-RP mapping. What multicast groups are defined in the access-list?
R1#show access-list 1
Standard IP access list 1
10 permit 224.0.0.0, wildcard bits 7.255.255.255 (978 matches)
As we can see, this means that any multicast address between 224.0.0.0 and 231.255.255.255 will have an RP assigned, and will therefore be forwarded in sparse mode. This can be verified with the mtrace utility:
R1#mtrace 172.16.15.1 172.16.79.9 224.1.1.1
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 224.1.1.1
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM [172.16.15.0/24]
-2 * 172.16.79.7 PIM [172.16.15.0/24]
-3 * 172.16.67.6 PIM [172.16.15.0/24]
-4 * 172.16.26.2 PIM Reached RP/Core [172.16.15.0/24]
-5 * 172.16.24.4 PIM [172.16.15.0/24]
-6 * 172.16.45.5 PIM Prune sent upstream [172.16.15.0/24]
-7 * 172.16.15.1 PIM [172.16.15.0/24]
Observe that the -4 hop arrives at R2. R2 is designated as the RP or "Core" router. This means that this traffic will be forwarded in sparse mode, as evidenced by the output on R2:

Observe the *,G flag is "S" for sparse. Now we will look at the last group in the range that will have an RP assigned.
Observe that the -4 hop arrives at R2. R2 is designated as the RP or "Core" router. This means that this traffic will be forwarded in sparse mode, as evidenced by the output on R2:

Observe the *,G flag is "S" for sparse. Now we will look at the first group in the range that will not have an RP assigned.
Observe that the -4 hop arrives at R2. R2 is not designated as the RP or "Core" router. This means that this traffic will be forwarded in dense mode, as evidenced by the output on R2:
R2#show ip mroute 232.0.0.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
PIM-S-DM seems to be the most flexible of the PIM modes we have discussed thus far. However, there is one behavior of PIM-S-DM that makes it less than attractive as a PIM deployment option. Remember, any group that does not have an RP defined is forwarded in dense mode.
Take the situation where significant amounts of multicast traffic are forwarded in the topology toward a handful of interested hosts. Assume now that the RP fails. In this situation, the routers will begin treating all traffic as dense and begin flooding the network with the multicast traffic. This means that all PIM-enabled devices will receive all multicast traffic.
Additionally, the prune process we discussed in Chapter 3: Protocol Independent Multicast - Dense Mode (PIM-DM) will add further to the network congestion. The environment we are currently working with uses static RP assignment. This fallback to PIM-DM behavior only takes place when the RP has been dynamically defined in AutoRP, and will be covered in depth in Chapter 8: AutoRP.
It is odd that the unwanted behavior of PIM-S-DM only takes place while using the very protocol it was meant to facilitate. This behavior can be stopped in a number of ways, including the "sink RP" mentioned in the Technology Review section, or the more commonly used no ip pim dm-fallback command on all PIM-S-DM enabled routers.
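As a minimal sketch, the command is applied globally on each PIM-S-DM enabled router:

R6(config)#no ip pim dm-fallback

With dense mode fallback disabled, groups that lose their Group-to-RP mapping are no longer flooded in dense mode.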
RPF Failures

In the Troubleshooting PIM-S-DM section, this text discussed the PIM-S-DM operational process. Since these mechanisms utilize messages communicated using multicast, they are all subject to Reverse Path Forwarding (RPF) checks. Logically then, RPF issues can prevent optimal multicast routing, or stop multicast forwarding entirely.
Whether traffic is forwarded as sparse or dense, RPF checks are performed in both the control and data plane processes.
Control Plane - The PIM-S-DM control plane uses PIM messages in its creation. PIM sends messages via the link-local multicast group 224.0.0.13, and these messages are therefore subject to RPF checks. It is important to note that RPF checks in the control plane are against the source IP address encapsulated into each PIM packet as they arrive. More often than not, this will be the IP address of the adjacent neighbor.
Data Plane - PIM-S-DM will perform RPF checks on each individual multicast packet before deciding to forward it. This means that the source IP address of each multicast packet a router receives must be reachable out the receiving interface before the router will forward it to an adjacent neighbor.
In instances when traffic is forwarded as PIM-DM, RPF always performs checks against the source of the multicast feed. In instances where the traffic is treated as PIM-SM, RPF checks will first be done toward the RP and then toward the source as part of the shortest path tree failover process.
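A simple way to exercise both checks from a transit router is to run show ip rpf against each address. A sketch, assuming the RP 192.1.2.2 and the source 172.16.15.1 used in this topology:

R7#show ip rpf 192.1.2.2
R7#show ip rpf 172.16.15.1

The first lookup covers the shared tree (RP) portion of the check; the second covers the shortest path tree portion toward the source.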
The RPF check mechanism can result in scenarios where the control plane fails to form correctly, or multicast packets fail to transit the multicast tree. When only a few packets or no packets reach the receivers, RPF failures will normally be the cause.
We will perform a walk through for each of these RPF issues in the PIM-S-DM Sample Troubleshooting Scenarios section that follows.
This is a situation where it will be necessary to look at the underlying routing protocols used in the network. Typically, this would be an issue of asynchronous routing, and should be something obvious once the routing tables of the source and transit devices are analyzed.
When sparse mode forwarding is being used, situations like the following normally exist when information fails to propagate to any or all devices, but RPF checks and unicast routing seem to be functioning correctly:
In instances when dense mode forwarding is being used, it is important to keep in mind that every multicast packet has a TTL value, just like its unicast IP counterpart.
In many environments using PIM-DM, this fact is used as a method to scope or contain multicast packets to the internal network. A multicast threshold is effectively employed to keep multicast packets from leaking into any internetwork space.
However, it is possible to create a multicast routing fault by setting the multicast threshold on a given router interface. If the packet's TTL is higher than the multicast threshold configured on an interface (and it passes the RPF check), the packet will be forwarded. If the TTL of the packet is lower than the multicast threshold, the router drops the packet.
The possible range for a multicast threshold value is 0 to 255, with 0 meaning all packets will be forwarded, versus 255 where virtually no packets will be forwarded.
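A minimal sketch of setting such a boundary on an interface (the interface name and the value here are illustrative only):

R6(config)#interface FastEthernet0/0
R6(config-if)#ip multicast ttl-threshold 16
R6(config-if)#end

Only multicast packets whose TTL is greater than 16 would be forwarded out this interface; a threshold set too high is an easy way to accidentally black-hole a feed.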
In the PIM-S-DM Sample Troubleshooting Scenarios section that follows, troubleshooting these issues is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each symptom, isolate the cause, and remediate the issue.
In the Common Issues with PIM-S-DM section, three primary types of problems were identified: RPF failures, Unicast Routing and Forwarding Problems, and Multicast Routing and Forwarding Problems. This section explores these three categories of failure by directing our attention to the commands necessary to verify a problem, isolate it, and remediate it.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the group 224.9.9.9? By generating a ping on R1 to the group 224.9.9.9, R1 can emulate a multicast feed that will be forwarded as sparse mode:
Step One: Verify possible RPF issues in the path between the RP and source bi-directionally.
Step Two: Verify RPF issues in the path between the RP and host bi-directionally.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 239.9.9.9
R9(config-if)#end
Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the group 239.9.9.9? By generating a ping on R1 to the group 239.9.9.9, R1 can emulate a multicast feed that will be forwarded as dense mode:
Single Step Process: Verify possible RPF issues in the path between the host and source bi-directionally.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the group 224.9.9.9? By generating a ping on R1 to the group 224.9.9.9, R1 can emulate a multicast feed that will be forwarded as sparse mode:
Single Step Process: Verify the existence of a (S,G) entry for the group 224.9.9.9 on the next hop router from the source and on the RP with show ip mroute:
R5#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
The access-group command under this interface is referencing the extended access-list 100. What is being denied by this ACL?
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the group 224.9.9.9? By generating a ping on R1 to the group 224.9.9.9, R1 can emulate a multicast feed that will be forwarded as sparse mode:
Step One: Verify possible RPF issues in the path between the source and the RP.
Step Two: Initiate a ping with a high repeat count from R1:
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 239.9.9.9
R9(config-if)#end
Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the group 239.9.9.9? By generating a ping on R1 to the group 239.9.9.9, R1 can emulate a multicast feed that will be forwarded as dense mode traffic:
Step One: Verify possible RPF issues in the path between the source and the host bi-directionally.
R1#mtrace 172.16.15.1 172.16.79.9 239.9.9.9
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 239.9.9.9
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM Prune sent upstream [172.16.15.0/24]
-2 * 172.16.79.7 PIM [172.16.15.0/24]
-3 * 172.16.67.6 PIM [172.16.15.0/24]
-4 * 172.16.26.2 PIM [172.16.15.0/24]
-5 * 172.16.24.4 PIM [172.16.15.0/24]
-6 * 172.16.45.5 PIM [172.16.15.0/24]
-7 * 172.16.15.1 PIM [172.16.15.0/24]
show COMMAND: show ip igmp membership [group-address | group-name] [tracked] [all]

This command displays Internet Group Management Protocol (IGMP) membership information for multicast groups and (S, G) channels.

Where:

EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R7#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
R7#
show COMMAND: show ip pim interface

This command displays information about interfaces configured for Protocol Independent Multicast (PIM).

EXAMPLE OUTPUT:
R7#show ip pim interface
show COMMAND: show ip pim rp mapping

This command displays information about Protocol Independent Multicast (PIM) RP mappings.

EXAMPLE OUTPUT:
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is a candidate RP (v2)
Group(s) 224.0.0.0/4
RP 192.1.7.7 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:13:30, expires: 00:02:12
RP 192.1.5.5 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:13:30, expires: 00:02:11
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R7#
show COMMAND: show ip pim [vrf vrf-name] neighbor [interface-type interface-number]

This command displays information about Protocol Independent Multicast (PIM) neighbors discovered by PIM version 1 router query messages or PIM version 2 hello messages.

Where:

EXAMPLE OUTPUT:
R7#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
172.16.67.6 FastEthernet0/0 00:07:58/00:01:37 v2 1 / S
172.16.79.9 FastEthernet0/1 00:07:49/00:01:17 v2 1 / DR S
R7#
show COMMAND: show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]

This command displays information that IP multicast routing uses to perform the Reverse Path Forwarding (RPF) check for a multicast source.

Where:

EXAMPLE OUTPUT:
R7#show ip rpf 172.16.15.1
RPF information for ? (172.16.15.1)
RPF interface: FastEthernet0/0
RPF neighbor: ? (172.16.67.6)
RPF route/mask: 172.16.15.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables
R7#
debug COMMAND: debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]

This command displays multicast packets that are received and sent on the device.

Where:

EXAMPLE OUTPUT:
IP(0): s=172.16.26.6 (FastEthernet0/1) d=239.9.9.9 (FastEthernet0/0) id=1, ttl=254,
prot=1, len=100(100), mforward
debug COMMAND: debug ip pim [vrf vrf-name] [bsr]

This command displays Protocol Independent Multicast (PIM) packets received and sent and displays PIM-related events.

Where:

EXAMPLE OUTPUT:
R7#debug ip pim
PIM debugging is on
R7#
PIM(0): Received v2 Register on FastEthernet0/0 from 172.16.45.5
for 172.16.15.1, group 239.9.9.9
PIM(0): Insert (172.16.15.1,239.9.9.9) join in nbr 172.16.67.6's queue
PIM(0): Forward decapsulated data packet for 239.9.9.9 on FastEthernet0/1
PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (172.16.15.1/32, 239.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (*, 224.9.9.9), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (*, 224.9.9.9), Forward state, by PIM *G
Join
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (*, 239.9.9.9), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (*, 239.9.9.9), Forward state, by PIM *G
Join
PIM(0): Update FastEthernet0/1/172.16.79.9 to (172.16.15.1, 239.9.9.9), Forward state,
by PIM *G Join
R7#
PIM(0): Send RP-reachability for 239.9.9.9 on FastEthernet0/1
PIM(0): Send RP-reachability for 224.9.9.9 on FastEthernet0/1
R7#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.9.9.9
R7#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.0.1.40
PIM(0): Insert (*,224.0.1.40) join in nbr 172.16.67.6's queue
The network topology used in this section is shown in Figure 5-5 below:
Trouble Ticket #1

Your supervisor has brought to your attention that users on the VLAN79 segment connecting R7 and R9 cannot receive the multicast feed for 233.99.99.99. You have been instructed to use R9 and R1 for any testing. You must correct the issue.
Trouble Ticket #2

After solving Trouble Ticket #1, your supervisor has observed that users on the VLAN79 segment connecting R7 and R9 cannot receive any sparse mode forwarded multicast traffic. Again, you are to use R1 as the source and R9 as a simulated host. You have been assigned the multicast group 224.99.99.99 for all testing. Correct this issue.
Now use show ip mroute on all the devices and observe the output for the (S,G) pair 172.16.15.1, 233.99.99.99 on the transit devices.
R5 has the (S,G) pair, but it is in the Prune state for the interface in the OIL. We will now look at R4:
R4 has the (S,G) pair, but it is in the Prune state for the interface in the OIL. We will now look at R2:
R2 has the (S,G) pair, and it is in the Prune state for the interface in the OIL. We will now look at R6:
R6 has the (S,G) pair, but there are no interfaces in the OIL. We need to examine the nature of the PIM-S-DM configuration on R6:
R6#show ip pim interface
Observe that the FastEthernet0/0 interface is running PIM version 2, but it is in sparse mode as indicated by the value of "S". In this lab, this flag should be SD for sparse-dense. This can be confirmed further by looking at the configuration under this interface:
R6#show run interface FastEthernet0/0
Building configuration...
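Based on this isolation, the likely remediation is to move the interface to sparse-dense operation. A sketch of that fix:

R6(config)#interface FastEthernet0/0
R6(config-if)#ip pim sparse-dense-mode
R6(config-if)#end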
There are no issues between the RP and the host; what about the RP and the source?
R2#mtrace 192.1.2.2 172.16.15.1 224.99.99.99
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.15.1 via group 224.99.99.99
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.15.1
-1 * 172.16.15.1 PIM Prune sent upstream [192.1.2.0/24]
-2 * 172.16.15.5 PIM Prune sent upstream [192.1.2.0/24]
-3 * 172.16.45.4 PIM Prune sent upstream [192.1.2.0/24]
-4 * 172.16.24.2 PIM Reached RP/Core [192.1.2.0/24]
There are no issues in the creation of the control plane between the RP and the source. Initiate a ping from R1 with a high repeat count and see if the next hop router and the RP create (S,G) entries for 224.99.99.99 after enabling debug ip pim on R2:
R2#debug ip pim
PIM debugging is on
Now on R1:
R2#
PIM(0): Received v2 Register on GigabitEthernet0/0 from 172.16.45.5
for 172.16.15.1, group 224.99.99.99
%PIM-4-INVALID_SRC_REG: Received Register from 172.16.45.5 for (172.16.15.1,
224.99.99.99), not willing to be RP
R2#
PIM(0): Register for 172.16.15.1, group 224.99.99.99 rejected
PIM(0): Send v2 Register-Stop to 172.16.45.5 for 172.16.15.1, group 224.99.99.99
We see that R2 is receiving the PIM Register message from R5, but the router is refusing to be the RP for this group, claiming the Registration is coming from an "INVALID_SRC". This is most commonly caused by a filter or security configuration. These commands are best located with show run:
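One quick way to narrow the search is to filter the running configuration for the relevant command (a sketch; the filter text can be adjusted as needed):

R2#show run | include accept-register

If an ip pim accept-register list statement is found, examine the referenced access-list with show access-list to determine which sources are being refused.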
Chapter 6: Bidirectional Protocol Independent Multicast (BIDIR-PIM)

In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and the functionality of the Bidirectional PIM (BIDIR-PIM) protocol are examined in great depth. Once the operational characteristics of this important protocol are detailed completely, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for the Bidirectional PIM protocol. The chapter begins with a thorough review of BIDIR-PIM, and then quickly launches into an exhaustive analysis of the art of troubleshooting this multicast routing protocol. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
Devices signal membership in a bidirectional PIM group using explicit Join messages. Sources send multicast traffic up the shared tree toward the Rendezvous Point (RP). The RP then passes traffic down the tree to any receivers on each branch. Note that for these packets passed downstream, there is no fundamental difference between BIDIR-PIM and PIM-SM.
The unique behavior is with the traffic that passes from the various sources upstream to the RP. In PIM-SM, traffic from sources destined for the RP does not flow upstream in the shared tree, but downstream along the shortest path tree of the source until it reaches the RP. From the RP, traffic flows along the shared tree toward all receivers.
In BIDIR-PIM, devices can pass traffic up the shared tree toward the RP. To avoid multicast packet looping, BIDIR-PIM introduces a new mechanism called the designated forwarder (DF). This establishes a loop-free shortest path tree rooted at the RP.
The designated forwarder (DF) election takes place for all PIM routers on every network segment and point-to-point link. The procedure selects one router as the DF for every RP of bidirectional groups. The designated forwarder is responsible for forwarding multicast packets received on that network. Routers use unicast routing metrics for this DF election process.
The router with the most preferred unicast routing metric to the RP becomes the designated forwarder. This ensures that only one copy of every packet is sent to the RP, even if there are parallel equal-cost paths.
Note: Because a DF is selected for every RP of bidirectional groups, multiple routers may be elected as DF on any network segment.
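On Cisco IOS, the elected DF for each interface and RP can be inspected directly. A sketch, assuming R5 from this topology:

R5#show ip pim interface df

The output lists, per interface and per bidirectional RP, the DF winner address and its metric, which is useful when verifying which router should be forwarding on a given segment.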
The procedure for joining the shared tree of a bidirectional group is almost identical to that used in PIM-SM, except that with BIDIR-PIM, the role of the designated router (DR) is assumed by the designated forwarder for the RP.
On a network that has local receivers, only the router elected as the DF populates the outgoing interface list (olist) upon receiving Internet Group Management Protocol (IGMP) Join messages. This DF then sends (*, G) Join and Leave messages upstream toward the RP.
When a downstream router wishes to join the shared tree, the reverse path forwarding neighbor in the PIM Join and Leave messages is always the DF elected for the interface that leads to the RP.
When a router receives a Join or Leave message, and the router is not the DF for the receiving interface, the message is ignored. Otherwise, the router updates the shared tree in the same way as in sparse mode.
Another unique property of BIDIR-PIM is that there is no need to send PIM assert messages. This is because the DF election procedure eliminates parallel downstream paths from any RP. An RP never joins a path back to the source, nor will it send any register stops.
The configuration of BIDIR-PIM on the router is very simple. First, configure PIM sparse-mode on the appropriate interfaces using the command ip pim sparse-mode. Then, use the global configuration mode command for BIDIR-PIM:
ip pim bidir-enable
Where:
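As a minimal sketch of the complete configuration, assuming the RP address 192.1.2.255 used later in this chapter and an illustrative interface name:

R2(config)#ip pim bidir-enable
R2(config)#ip pim rp-address 192.1.2.255 bidir
R2(config)#interface GigabitEthernet0/0
R2(config-if)#ip pim sparse-mode
R2(config-if)#end

The bidir keyword on the ip pim rp-address command is what marks the group range as bidirectional; without it the groups would be treated as ordinary PIM-SM.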
BIDIR-PIM RP

In PIM-SM, the RP had many roles, the most notable being responding to PIM Register Messages and creation of the source-based tree between the RP and multicast sources. As indicated in the Technology Review of this chapter, BIDIR-PIM utilizes the concept of the RP, and relies on it exclusively for the forwarding and distribution of multicast traffic. This means, as stated earlier, that BIDIR-PIM only uses shared trees. Thus, the formation of the source-based tree and responding to PIM Register messages are no longer part of the RP's function in BIDIR-PIM.
In BIDIR-PIM, the role of the RP is significantly different from PIM-SM. The RP is still responsible for facilitating sources and hosts learning about each other, but rather than being a physical device, it is more of a logical construct that fills the role of a destination vector in BIDIR-PIM. This means that the address of the RP does not have to be assigned to a physical device; it can simply be part of a subnet on the device intended to be the RP.
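A sketch of this idea, assuming R2 carries the 192.1.2.0/24 subnet on a loopback while the RP address itself (192.1.2.255) is not assigned to any interface:

R2(config)#interface Loopback0
R2(config-if)#ip address 192.1.2.2 255.255.255.0
R2(config-if)#ip pim sparse-mode
R2(config-if)#exit
R2(config)#ip pim rp-address 192.1.2.255 bidir

As long as the 192.1.2.0/24 prefix is advertised, every router can resolve an RPF interface toward 192.1.2.255 even though no device actually owns that address.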
The topology outlined in Figure 6-1 will be used to illustrate this concept.

Figure 6-1: Sample BIDIR-PIM Topology
In this topology, R2 is the RP; all devices are running BIDIR-PIM. This is evidenced by using show ip pim rp mapping on all devices:
R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Can we successfully reach a host using this IP address for the RP? To find out, we will have R9 join the multicast group 224.9.9.9.
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
With this accomplished, we will then ping this group from R1:
R1#ping 224.9.9.9 repeat 10
The pings are successful. This illustrates that the RP address is not required to exist on a physical interface, and reinforces the concept that in BIDIR-PIM the RP is more like a destination vector, as mentioned previously. So in BIDIR-PIM the RP is not required to be a physical router like in PIM-SM. Understanding this vector idea may prove useful in troubleshooting BIDIR-PIM problems.
We see that 172.16.79.9 was the last reporter for the group 224.9.9.9. Now that R7 has this IGMP membership message from the host, the router will send PIM joins toward the RP for this group. These joins will be propagated hop-by-hop using the link-local address 224.0.0.13. We verify this process by looking for the creation of the (*, 224.9.9.9) entries in the multicast routing tables of R7, R6, and R2, as evidenced by show ip mroute:
The entry for the group 224.9.9.9 is present on R7. Observe the flag of B for the pair. Also, observe that there is no incoming interface list in the output. This has been replaced with the entry Bidir-Upstream. The interface found in this section (FastEthernet0/0) is the RPF interface used to reach the RP. This means any traffic received on this "upstream" interface will be forwarded "downstream" to any receivers on the shared tree.
Next we will look at the contents of the multicast routing table on the next hop router using show ip mroute:
Observe that we have the entry for 224.9.9.9; again note the absence of the Incoming Interface List, replaced with the Bidir-Upstream interface toward the RP. Next look at R4:
Observe that the RP has only the single entry for the group 224.9.9.9. As mentioned in the Technology Review, BIDIR-PIM does not utilize (S,G) entries, nor do we need to be concerned with the multicast stream reverting to the shortest path. In BIDIR-PIM, this behavior simply does not take place.
The RP will remain the root of the BIDIR-PIM environment, and traffic will always travel upstream toward the RP or downstream away from the RP. Another deviation from typical PIM-SM behavior is how multicast packets are forwarded; this process, as discussed, involves the election of a Designated Forwarder. The ultimate role of the DF is to forward multicast traffic received on its segment. The role of DF is assigned based on the lowest metric to reach the RP, and in the event of a tie in these values, the highest IP address is the selection criteria. This can be observed based on the output of debug ip pim df on R1:
R1#debug ip pim df
PIM RP DF debugging is on
R1#clear ip route *
R1#
PIM(0): RP(192.1.2.255) metric changed from (NULL, unicast, 2147483647, -1)
PIM(0): to (FastEthernet0/0, unicast, 119, 3)
PIM(0): Elect DF for FastEthernet0/0, new RP 192.1.2.255
PIM(0): Send v2 Offer on FastEthernet0/0 (Non-DF) for RP 192.1.2.255
PIM(0): Sender 172.16.15.1, pref 2147483647, metric 2147483647
PIM(0): Receive DF Winner message from 172.16.15.5 on FastEthernet0/0 (Non-DF)
PIM(0): RP 192.1.2.255, pref 120, metric 2
PIM(0): Metric is better
When troubleshooting this protocol, it is important to note that if adjacent routers are not both BIDIR-PIM enabled, then the DF election process will not take place. This fact is evident when a router receives a PIM Hello message that does not contain the BIDIR flag. If the designated forwarder cannot be elected, then no BIDIR-PIM traffic can be forwarded to or from the segment. This process is designed to protect the network from possible multicast routing loops.
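A quick way to confirm this on a segment is to check the PIM neighbor table for the Bidir Capable flag. A sketch, assuming R5:

R5#show ip pim neighbor

Each neighbor should show the B flag in the Mode column (B - Bidir Capable); a neighbor without it is not running BIDIR-PIM, and no DF will be elected on that link.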
The fact that BIDIR-PIM uses nothing but shared trees, eliminates the PIM Register process, and works bi-directionally makes it a very attractive version of PIM to utilize in environments that require applications like video conferencing. However, to reduce operational overhead by eliminating possibly hundreds of multicast routing states, BIDIR-PIM has eliminated the source state (S,G) entries from the multicast routing table. The most used tool in troubleshooting multicast issues to date has been these source state entries. Therefore, the very thing that makes BIDIR-PIM more efficient also makes it more difficult to troubleshoot.
RP and DF Failures

In the Troubleshooting BIDIR-PIM section, this text discussed the shared tree mechanism used to create the BIDIR-PIM operational environment. The RP is so pivotal in BIDIR-PIM because all traffic is forwarded to the RP and then from the RP to any member hosts. These trees can transport multicast packets bi-directionally.
Keep in mind that these bidirectional trees are created using a fail-safe design. This design involves the Designated Forwarder (DF) election mechanism operating on each link in the multicast topology. With the assistance of the DF, multicast data is natively forwarded from sources to the Rendezvous Point (RP) and hence along the shared tree to receivers, without requiring source-specific state information being added to the multicast routing table.
It is necessary to observe that this process only works if all devices in the multicast path agree on the identity of the RP. In this section, we are only working with static mapping to a single RP, but in environments with multiple statically assigned RPs like those discussed in Chapter 7: Static Rendezvous Points (RPs), or in dynamically assigned RPs using BSR or AutoRP, the agreement on the specific Group-to-RP mapping is essential on all devices in the multicast domain.
Fragmented agreement on the identity of the RP could result in loops or complete failure of the BIDIR-PIM configuration. Situations like the following exist when information fails to propagate to any or all devices and unicast routing seems to be functioning correctly:
In the BIDIR-PIM Sample Troubleshooting Scenarios section that follows, troubleshooting these issues is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each symptom, isolate the cause, and remediate the issue.
In the Common Issues with BIDIR-PIM section, two primary types of problems were identified: RP and DF failures, and Multicast Routing and Forwarding Problems. This section explores these two categories of failure by directing our attention to the commands necessary to verify a problem, isolate it, and remediate it.
RP Failure in BIDIR-PIM
Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the group 224.9.9.9?
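One way to emulate the feed (a representative invocation; the repeat count is arbitrary and the source address is taken from the topology) would be:
R1#ping 224.9.9.9 source 172.16.15.1 repeat 100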
Are there interfaces in the OIL for the (*, 224.9.9.9) group? The output indicates that there are no interfaces in the outgoing interface list, as shown by the value of "Null". Observe that the RP 192.2.2.255 and the RPF nbr 0.0.0.0 do not match. This tells us something is wrong with the RP.
Step Two: Identify the RP network and verify reachability to it. Find the configured RP address with show ip pim rp:
R5#show ip pim rp
Group: 224.9.9.9, RP: 192.2.2.255, uptime 00:01:01, expires never
Group: 224.0.1.40, RP: 192.2.2.255, uptime 00:07:42, expires never
This output tells us that the Group-to-RP mapping on R5 for 224.9.9.9 is for 192.2.2.255. Is this even reachable in our topology? Immediately, we can see there is no route on R5 for this address. The drawing tells us that the RP is supposed to be 192.1.2.255. To correct this issue use the correct ip pim rp-address command:
R5#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#ip pim rp-address 192.1.2.255 bidir
R5(config)#end
DF Failure in BIDIR-PIM
Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the group 224.9.9.9?
Observe that the RPF nbr is 0.0.0.0. This means either this device is the RP, or the device believes the RP address is invalid. This device has no interface in the network 192.1.2.0/24, so it cannot be the RP. This leaves the latter problem.
Step Two: Verify the identity of the designated forwarder for all interfaces.
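A representative command to display the DF election state (shown here without its output, which the next paragraph describes; R5 is assumed to be the router under inspection) would be:
R5#show ip pim interface df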
This output indicates that no designated forwarder has been elected on the FastEthernet0/0 interface of R5. Use show ip pim interface to see if both interfaces are running sparse-mode.
This output indicates that neighbor relationships exist out both interfaces.
Step Three: Check the next hop router in the multicast path for the (*,G). Use show ip mroute on R4:
R4#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
There is no record in the multicast routing table for the group 224.9.9.9. If the group was created on R5, that means R5 was participating in BIDIR-PIM. Also, remember that R5 considered the identity of the RP to be invalid. This could be caused by a router in the transit path not being able to participate in BIDIR-PIM or a lack of end-to-end PIM communication between R5 and the RP. Observe that the show ip mroute output indicates that there is an Incoming Interface List; something that does not exist in BIDIR-PIM. Use show run to see if the router is enabled for BIDIR-PIM:
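A representative way to filter the configuration for the relevant commands (the include string is simply one convenient choice) would be:
R4#show running-config | include ip pim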
We can see that R4 has not been configured with BIDIR-PIM. Without this command the router cannot participate in the DF election and, as such, breaks the connectivity to the RP. Correct this issue by applying the ip pim bidir-enable and ip pim rp-address commands:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#ip pim bidir-enable
R4(config)#ip pim rp-address 192.1.2.255 bidir
R4(config)#end
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the group 224.9.9.9?
R4 has no issues. We see the RP, the RPF nbr used to reach the RP, and the FastEthernet0/0 interface pointing to R2. Try the next hop:
No issues exist on R2 either. Note that the RP and the RPF nbr values match, meaning that this device is the RP for this group.
Note that R7 has FastEthernet0/1 in the OIL, and we can see that it will limit any outbound traffic to 0 kbps. This will effectively block all outbound traffic to R9. We can verify this with the show run command:
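A representative invocation (assuming the rate limit sits under FastEthernet0/1 as described above) would be:
R7#show running-config interface FastEthernet0/1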
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface FastEthernet0/1
R7(config-if)#no ip multicast rate-limit out 0
R7(config-if)#end
show COMMAND: show ip igmp membership [group-address | group-name] [tracked] [all]
This command displays Internet Group Management Protocol (IGMP) membership information for multicast groups and (S, G) channels.
Where:
EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude
show COMMAND: show ip mroute
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R7#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
show COMMAND: show ip pim interface
This command displays information about interfaces configured for Protocol Independent Multicast (PIM).
EXAMPLE OUTPUT:
R7#show ip pim interface
show COMMAND: show ip pim rp mapping
This command displays information about Protocol Independent Multicast (PIM) RP mappings.
EXAMPLE OUTPUT:
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
show COMMAND: show ip pim neighbor
This command displays information about Protocol Independent Multicast (PIM) neighbors discovered by PIM version 1 router query messages or PIM version 2 hello messages.
Where:
EXAMPLE OUTPUT:
R7#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
172.16.67.6 FastEthernet0/0 00:03:33/00:01:37 v2 1 / B S
172.16.79.9 FastEthernet0/1 00:03:15/00:01:28 v2 1 / DR B S
R7#
show COMMAND: show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]
This command displays information that IP multicast routing uses to perform the Reverse Path Forwarding (RPF) check for a multicast source.
Where:
EXAMPLE OUTPUT:
R7#show ip rpf 172.16.15.1
RPF information for ? (172.16.15.1)
RPF interface: FastEthernet0/0
RPF neighbor: ? (0.0.0.0)
RPF route/mask: 172.16.15.0/24
RPF type: unicast (rip)
RPF recursion count: 0
Doing distance-preferred lookups across tables
R7#
debug COMMAND: debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]
This command displays multicast packets that are received and sent on the device.
Where:
EXAMPLE OUTPUT:
IP(0): s=172.16.26.6 (FastEthernet0/1) d=239.9.9.9 (FastEthernet0/0) id=1, ttl=254,
prot=1, len=100(100), mforward
debug COMMAND: debug ip pim [vrf vrf-name] [bsr]
This command displays Protocol Independent Multicast (PIM) packets received and sent and displays PIM-related events.
Where:
EXAMPLE OUTPUT:
R2#debug ip pim
PIM debugging is on
R2#
R2#
R2#
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/1 from 172.16.26.6, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (*, 224.0.1.40), Forward state, by
PIM *G Join
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/0 from 172.16.24.4, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/0/172.16.24.4 to (*, 224.0.1.40), Forward state, by
PIM *G Join
R2#
PIM(0): Building Periodic Join/Prune message for 224.0.1.40
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/1 from 172.16.26.6, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (*, 224.0.1.40), Forward state, by
PIM *G Join
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/0 from 172.16.24.4, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/0/172.16.24.4 to (*, 224.0.1.40), Forward state, by
PIM *G Join
R2#
PIM(0): Building Periodic Join/Prune message for 224.0.1.40
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/1 from 172.16.26.6, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (*, 224.0.1.40), Forward state, by
PIM *G Join
The network topology used in this section is shown in Figure 6-7 below:
Trouble Ticket #1
Your supervisor has been experimenting with deploying BIDIR-PIM on the network in Figure 6-7. During testing over the weekend he discovered that when hosts on the VLAN79 segment between R7 and R9 join multicast groups, these group memberships are not propagated to the RP. You have been instructed to use the multicast address 224.9.9.9 on R9 to isolate the problem. Once the fault has been found, correct this issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor, while doing more testing, has observed that multicast traffic sourced by R1 never reaches the RP. You have been instructed to use the group 224.9.9.9 on R1 to isolate this issue. Once the fault has been isolated, correct the issue.
Does R2 create the (*, 224.9.9.9) entry in its multicast routing table?
The verification clearly demonstrates that R6 is not using BIDIR-PIM to communicate with the RP. Observe that there is an Incoming interface list in the output. This can be verified with show ip pim rp mapping; we see there is no Bidir Mode after the Static entry:
The verification clearly demonstrates that R6 is not using BIDIR-PIM to communicate with the RP. Observe that there is an Incoming interface list in the output. This can be verified with show ip pim rp mapping:
R5#show ip pim rp mapping
PIM Group-to-RP Mappings
Observe that the output indicates there is no RP for any multicast groups. This isolates our problem.
Chapter 7: Static Rendezvous Points (RPs)
In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and the functionality of static Rendezvous Points (RPs) are examined in great depth. Once the operational characteristics of static RPs are detailed completely, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for static RP assignments. The chapter begins with a thorough review of static RP assignment, and then quickly launches into an exhaustive analysis of the art of troubleshooting. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
In IP version 4, there are three main options for the dissemination of RP information to the multicast domain. There is the manual (static) assignment of this information as detailed in this chapter. There is the AutoRP protocol, and there is the Bootstrap Router Protocol (BSR). Chapters 8 and 9 of this book cover the latter dynamic technologies in detail.
Statically assigning the RP information in the domain hinges upon a single command:
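The general syntax, reconstructed here from the parameter list that follows (bracketing of the optional keywords is assumed), is:
ip pim [vrf vrf-name] rp-address rp-address [access-list] [override] [bidir]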
Where:
vrf - optional; specifies that the static group-to-RP mapping be associated with the Multicast Virtual Private Network VRF instance listed
rp-address - the IP address of the RP to be used for the static group-to-RP mapping
access-list - optional; the standard access list that defines the multicast groups to be statically mapped to the RP; if no access list is defined, the RP will map to all multicast groups, 224/4
override - optional; specifies that if dynamic and static group-to-RP mappings are used together and there is an RP address conflict, the RP address configured for a static group-to-RP mapping will take precedence
bidir - optional; specifies that the static group-to-RP mapping be applied to a bidirectional PIM RP; Chapter 6: Bidirectional Protocol Independent Multicast (BIDIR-PIM) covers bidirectional PIM in detail
Figure 7-1: Sample Static RP Topology
In this topology, R4 will perform the duties of RP for the multicast groups ranging from 224.0.0.1 to 231.255.255.255, and R6 will be the RP for the groups ranging from 232.0.0.1 to 239.255.255.255. In a working environment, we can see how this is configured by looking at the commands used on any single device:
R2#show run | inc access-list | rp-address
ip pim rp-address 192.1.4.4 1
ip pim rp-address 192.1.6.6 2
access-list 1 permit 224.0.0.0 7.255.255.255
In this situation we see that any groups matching the standard access-list 1 will be mapped to R4's loopback0 interface, and that any matching standard access-list 2 will be mapped to R6. These define the group-to-RP mappings we have been discussing. The nature of these mappings can be viewed on all devices in the topology with show ip pim rp mapping:
Acl: 1, Static
RP: 192.1.4.4 (?)
Acl: 2, Static
RP: 192.1.6.6 (?)
We see that R1 has two mappings defined by ACL1 and ACL2. We can see what each ACL matches by using show ip access-list:
R1#show ip access-list
Standard IP access list 1
10 permit 224.0.0.0, wildcard bits 7.255.255.255
Standard IP access list 2
10 permit 232.0.0.0, wildcard bits 7.255.255.255
This means that R1 will use R4 as the RP for all groups matched by the standard access-list 1. This can be tested using mtrace:
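A representative invocation (the receiver address 172.16.79.9 is an assumption drawn from the topology; mtrace takes source, destination, and group) would be:
R1#mtrace 172.16.15.1 172.16.79.9 224.1.1.1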
By specifying the group address 224.1.1.1 we know, based on the access-lists available, that R4 will be chosen as the RP for this group. We can repeat this test using the group 231.255.255.255. This group represents the highest address in the range matched by ACL1. Thus, this group should use R4 as the RP.
The multicast stream for 232.0.0.1 matches ACL2, so the RP for this group becomes R6 as specified by the "Reached RP/Core" entry in the mtrace output. From a troubleshooting point of view, what would happen if the Loopback0 interface of R6 goes down? Will 232.0.0.1 be forwarded using R4 rather than R6?
R6(config)#interface Loopback0
R6(config-if)#shut
R6(config-if)#end
This output may seem confusing at first, but it is important to look carefully. The mtrace utility verifies the multicast path hop-by-hop. The most important part of this output is what we do not see. Observe that there are no entries for an "RP/Core". This tells us that R4 does not take over as the RP for this group. This can be tested by having R9 join the group 232.0.0.1:
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 232.0.0.1
R9(config-if)#end
Observe that the test fails. This is because there is no RP mapped to the group 232.0.0.1.
In the Troubleshooting Static RP section, this text demonstrates how uniform configuration is required between all devices in order for the PIM-SM environment to work properly. The most common issue associated with the assignment of the RP in this type of environment is typographical mistakes. Most commonly, these involve the IP address used in the individual rp-address statements, or improper creation or application of the access lists involved.
In the Static RP Sample Troubleshooting Scenarios section that follows, troubleshooting these issues is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each symptom, isolate the cause, and remediate the issue.
In the Common Issues with Static RP section, two primary types of problems were identified: Incorrect RP Assignment or ACL Issues. This section explores these two categories of failure by directing our attention to the commands necessary to verify a problem, isolate it, and remediate it.
Incorrect RP Assignment
Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the group 224.9.9.9?
The output from the ping command is successful; what happens if we ping another address? This time we will use 239.9.9.9 in the second multicast range. Emulate a high repeat multicast feed: Generate a multicast feed for 239.9.9.9 on R1 with a very high repeat count:
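A representative command for this feed (the repeat count is arbitrary, and the source address is taken from the VLAN15 segment) would be:
R1#ping 239.9.9.9 source 172.16.15.1 repeat 100000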
Look to see what RP has been assigned for the group 239.9.9.9 on all devices in the topology. Keep in mind that the group 239.9.9.9 should use R6:
This output indicates that R5 is being used as the RP for the group rather than R6. This means that the incorrect address was used on R5 for the second RP address. This can be confirmed with show run:
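A representative filter to display only the RP statements (the include string is simply one convenient choice) would be:
R5#show running-config | include ip pim rp-address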
R5#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#no ip pim rp-address 192.1.5.5 2
R5(config)#ip pim rp-address 192.1.6.6 2
R5(config)#end
Verify that the correction has worked by repeating the multicast ping test from R1:
ACL Issue
Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the group 224.9.9.9?
The output from the ping command is unsuccessful; what happens if we ping another address? This time we will use 239.9.9.9 in the second multicast range. Emulate a high repeat multicast feed: Generate a multicast feed for 224.9.9.9 on R1 with a very high repeat count:
Now we will verify the identity of the RP elected for 224.9.9.9 on all devices. Remember that R4 should be selected for this group by every router up to the RP. Of the routers between R1 and R4, it is clear that R4 is not in agreement with the rest of the network, because it identifies the RP as 192.1.6.6 rather than 192.1.4.4. R4 is making the incorrect decision regarding the RP. The question is why?
Acl: 1, Static
RP: 192.1.4.4 (?)
Group(s): 224.0.0.0/4, Static
RP: 192.1.6.6 (?)
We see that we have an ACL applied to the first static entry, but there is no ACL assigned to the second. Observe that the second group-to-RP mapping is for the entire 224.0.0.0/4 range. This means that on R4 there is an overlap between the static assignments. In instances like this, when static RP is used and a single group has been erroneously assigned to more than one RP, the RP with the highest IP address will assume the role of RP.
This can be corrected by applying the standard access-list 2 to the second ip pim rp-address statement:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#no ip pim rp-address 192.1.6.6
R4(config)#ip pim rp-address 192.1.6.6 2
R4(config)#end
show COMMAND: show ip igmp membership [group-address | group-name] [tracked] [all]
This command displays Internet Group Management Protocol (IGMP) membership information for multicast groups and (S, G) channels.
Where:
EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude
show COMMAND: show ip mroute
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R7#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
R7#
show COMMAND: show ip pim interface
This command displays information about interfaces configured for Protocol Independent Multicast (PIM).
EXAMPLE OUTPUT:
R7#show ip pim interface
show COMMAND: show ip pim rp mapping
This command displays information about Protocol Independent Multicast (PIM) RP mappings.
EXAMPLE OUTPUT:
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
show COMMAND: show ip pim neighbor
This command displays information about Protocol Independent Multicast (PIM) neighbors discovered by PIM version 1 router query messages or PIM version 2 hello messages.
Where:
EXAMPLE OUTPUT:
R7#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
172.16.67.6 FastEthernet0/0 00:11:57/00:01:38 v2 1 / S
172.16.79.9 FastEthernet0/1 00:11:57/00:01:35 v2 1 / DR S
R7#
show COMMAND: show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]
This command displays information that IP multicast routing uses to perform the Reverse Path Forwarding (RPF) check for a multicast source.
Where:
EXAMPLE OUTPUT:
R7#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2)
RPF interface: FastEthernet0/0
RPF neighbor: ? (172.16.67.6)
RPF route/mask: 192.1.2.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables
R7#
debug COMMAND: debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]
This command displays multicast packets that are received and sent on the device.
Where:
EXAMPLE OUTPUT:
IP(0): s=172.16.26.6 (FastEthernet0/1) d=239.9.9.9 (FastEthernet0/0) id=1, ttl=254,
prot=1, len=100(100), mforward
debug COMMAND: debug ip pim [vrf vrf-name] [bsr]
This command displays Protocol Independent Multicast (PIM) packets received and sent and displays PIM-related events.
Where:
EXAMPLE OUTPUT:
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (172.16.15.1/32, 239.9.9.9), S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (172.16.15.1, 239.9.9.9), Forward state,
by PIM SG Join
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (*, 239.9.9.9), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (*, 239.9.9.9), Forward state, by PIM *G
Join
PIM(0): Update FastEthernet0/1/172.16.79.9 to (172.16.15.1, 239.9.9.9), Forward state,
by PIM *G Join
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (*, 224.0.1.40), Forward state, by PIM
*G Join
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (*, 224.9.9.9), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (*, 224.9.9.9), Forward state, by PIM *G
Join
R7#
PIM(0): Insert (172.16.15.1,239.9.9.9) join in nbr 172.16.67.6's queue
PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (172.16.15.1/32, 239.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (172.16.15.1/32, 239.9.9.9), S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (172.16.15.1, 239.9.9.9), Forward state,
by PIM SG Join
R7#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.0.1.40
PIM(0): Insert (*,224.0.1.40) join in nbr 172.16.67.6's queue
PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (192.1.2.2/32, 224.0.1.40), WC-bit, RPT-bit, S-bit Join
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
R7#
The network topology used in this section is shown in Figure 7-5 below:
Trouble Ticket #1
Your supervisor has informed you that multicast traffic sourced from the VLAN15 segment to the group 224.9.9.9 never reaches the PIM-SM RP. There are two RPs in this topology: R4 for the multicast range 224.0.0.1 - 231.255.255.255, and R6 for the range 232.0.0.1 - 239.255.255.255. You have been instructed to use the multicast group 224.9.9.9 to isolate this issue. You must correct the problem.
The verification clearly demonstrates that R5 is not using the correct IP address for the RP. This can be verified using show run on R5:
The first ip pim rp-address command is not using the correct IP address. This has unquestionably isolated our fault.
Step 3 - Fault Remediation:
In this scenario, the ip pim rp-address command will need to be corrected.
R5#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#no ip pim rp-address 192.1.4.5 1
R5(config)#ip pim rp-address 192.1.4.4 1
R5(config)#end
Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble Ticket has been repaired using the same method as the initial fault verification. Ensure that the ping is still running on R1 and verify if the S,G entry is now created:
R4#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
Chapter 8: AutoRP
In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and the functionality of the AutoRP protocol are examined in great depth. Once the operational characteristics of this important protocol are detailed completely, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for the AutoRP protocol. The chapter begins with a thorough review of AutoRP, and then quickly launches into an exhaustive analysis of the art of troubleshooting this multicast support protocol. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
AutoRP consists of two components: a router or routers acting as candidate RPs, and a router designated as an RP mapping agent (MA). The RP mapping agent receives RP announcement messages from the candidate RPs and arbitrates conflicts. The RP mapping agent is then responsible for communicating the consistent multicast group-to-RP mappings to all other routers by way of dense mode flooding. This allows all routers to discover the RP to use with the groups they support.
A major design flaw with AutoRP is the fact that the protocol uses multicast groups in its operation. The Internet Assigned Numbers Authority (IANA) has assigned two group addresses, 224.0.1.39 and 224.0.1.40, for use with AutoRP. Using multicast groups for the dissemination of RP information creates what Cisco terms a chicken-and-egg paradox. The multicast groups disseminate the RP information, but these groups need an RP in order to function if a strict sparse mode environment is desired. The Bootstrap Router Protocol (BSR) is an open standard protocol that solves this dilemma. Chapter 9: Bootstrap Router Protocol (BSR) details this important protocol.
There are multiple solutions for issues presented by AutoRP used in conjunction with sparse mode environments. Some are:
Where:
bidir - specifies the groups are to function as bidirectional; bidirectional PIM is detailed in Chapter 6: Bidirectional Protocol Independent Multicast (BIDIR-PIM)
Where:
C-RP Announcements
In the previous chapters of this text we have configured a static RP in both PIM-SM and PIM-S-DM. The issue with statically making these assignments is the amount of effort that goes into managing the process. Recognizing this issue, Cisco created the concept of Auto-RP. The primary concept is to afford a network administrator the ability to dynamically configure devices to operate in role-based assignments so that an RP can be dynamically elected for different multicast groups, or ranges of groups. This process brings with it a mechanism used to identify devices that wish to be considered as candidates for different roles. This section will explore the concept of a candidate RP.
Figure 8-1: Sample Auto-RP Topology
In this network R2 will be the Auto-RP Mapping Agent (covered later in this chapter) and both R4 and R6 will be configured to be C-RPs, which we will concern ourselves with in this section. As mentioned in the Technology Overview, C-RPs are configured by using the ip pim send-rp-announce command. We will configure R4 and R6 using this command. Once we do this, we will monitor their behavior. To accomplish this we will use debug ip packet on both R4 and R6:
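A representative way to enable this debug (shown here without an access-list filter, which would normally be advisable in a production network) would be:
R4#debug ip packet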
R4(config)#interface Loopback0
R4(config-if)#ip pim sparse-dense-mode
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 192.1.4.4 on interface Loopback0
R4(config-if)#exit
R4(config)#ip pim send-rp-announce loopback 0 scope 16
R4(config)#end
Immediately after the command is applied, this C-RP tries to notify the Auto-RP Mapping Agent that it has been configured as a C-RP. Observe that the packets now generated on R4 are sourced from the Loopback0 address and destined to the multicast group 224.0.1.39. The role-based behavior of a C-RP is to send information to the MA. What information does a C-RP send when it is configured to operate in Auto-RP? We can find out with debug ip pim auto-rp:
Observe that the RP-announce messages for 192.1.4.4 are being sent out all PIM-S-DM enabled interfaces, and the messages state that R4 is configured to offer RP services for the entire multicast range (224.0.0.0/4). Before we enable R6 to act as a C-RP in this topology, we need to look at the multicast routing table of R2, R6, R7 and R9:
R2 is actually routing this multicast traffic. It is being sent to R6 via GigabitEthernet0/1. R6 will also route this multicast traffic:
R6 is routing the traffic to R7 via FastEthernet0/0 and to R4 via Serial0/1/0.1. On R7 we see that it too routes the traffic. What this process illustrates is how the multicast traffic destined to the group 224.0.1.39 is actually being routed throughout the multicast domain in order to reach the Auto-RP MA.
Currently, in this topology R2 is not the MA. We will get to that part after both R4 and R6 have been configured to be C-RPs.
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#ip pim send-rp-announce loopback 0 scope 16
R6(config)#end
Again, to drive home the point that this traffic is being multicast routed, we will look at the multicast routing table on R2:
Note now that there is a (S,G) entry for the group 224.0.1.39.
R2#
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.6.6, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.6.6), PIMv2 v1
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.6.6, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.6.6), PIMv2 v1
R2#
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.4.4, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.4.4), PIMv2 v1
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.4.4, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.4.4), PIMv2 v1
R2#
Auto-RP(0): Build RP-Discovery packet
Auto-RP: Build mapping (224.0.0.0/4, RP:192.1.6.6), PIMv2 v1,
Auto-RP(0): Send RP-discovery packet of length 48 on GigabitEthernet0/0 (1 RP entries)
Auto-RP(0): Send RP-discovery packet of length 48 on GigabitEthernet0/1 (1 RP entries)
Auto-RP(0): Send RP-discovery packet of length 48 on Loopback0(*) (1 RP entries)
Observe that R2 is now receiving the individual RP-announcement packets from both R4 and R6. The last portion of the screen capture illustrates that the Mapping Agent builds an RP-Discovery packet that defines the group-to-RP mapping for use by all devices in the multicast topology. Here R4 and R6 are both announcing their candidacy to be the RP for the range 224.0.0.0/4. R6 is elected by the MA because it has the highest IP address. We can demonstrate this by changing the IP address used on R4 to a higher value than that used on R6:
R4(config)#interface Loopback0
R4(config-if)#ip address 192.1.44.44 255.255.255.0
R4(config-if)#end
Now we will see that the MA will elect to use R4 rather than R6 because of the higher IP address:
R2#
Auto-RP(0): Build RP-Discovery packet
Auto-RP: Build mapping (224.0.0.0/4, RP:192.1.44.44), PIMv2 v1,
Auto-RP(0): Send RP-discovery packet of length 48 on GigabitEthernet0/0 (1 RP entries)
Auto-RP(0): Send RP-discovery packet of length 48 on GigabitEthernet0/1 (1 RP entries)
Auto-RP(0): Send RP-discovery packet of length 48 on Loopback0(*) (1 RP entries)
This process is very simple to see. Now we need to look at the traffic leaving R2 for the rest of the network. What IP address is it using?
R2#
IP: s=192.1.2.2 (Null0), d=224.0.1.40 (GigabitEthernet0/0), len 48, sending
broad/multicast
UDP src=496, dst=496
IP: s=192.1.2.2 (Null0), d=224.0.1.40 (GigabitEthernet0/1), len 48, sending
broad/multicast
UDP src=496, dst=496
IP: s=192.1.2.2 (local), d=224.0.1.40 (Loopback0), len 48, sending broad/multicast
UDP src=496, dst=496
IP: s=172.16.24.2 (local), d=224.0.0.1 (GigabitEthernet0/0), len 28, sending
broad/multicast, proto=2
They are destined to the multicast address 224.0.1.40. Again, this group is going to be multicast forwarded throughout the multicast domain in order to allow all multicast speakers to learn the identity of the elected RP from the MA. All routers running a current version of Cisco IOS automatically join the multicast group 224.0.1.40 once you enable multicast routing. This default behavior was created to facilitate the deployment of Auto-RP. We will use show ip mroute on all devices in the topology to illustrate that the multicast group 224.0.1.40 is being multicast forwarded. As such, we will expect to see a (*,G) and an (S,G) entry on each of these devices for the group 224.0.1.40.
R1 has joined the group 224.0.1.40 from 192.1.2.2 via interface FastEthernet0/0 toward R5:
R4 has joined 224.0.1.40 sourced from 192.1.2.2 via FastEthernet0/0 pointing to R2:
R2 is the MA in this equation. Note that the incoming interface is the Loopback0 interface, and that both GigabitEthernet0/0 and GigabitEthernet0/1 are in the OIL for that S,G pair.
R6 has joined the group and the incoming interface is via FastEthernet0/1 pointing to R2:
R7 has joined 224.0.1.40, and is receiving the group via FastEthernet0/0 pointing toward R2:
Lastly, R9 is receiving the multicast information for 224.0.1.40 via FastEthernet0/1. Again, all this is actually being forwarded throughout the domain using routed multicast packets. The multicast groups used by the C-RPs and the MA to communicate group-to-RP information are all routed in PIM-DM by default.
We discussed the use of an RP for these multicast groups to avoid this behavior in the Technology Review section. We can illustrate this point by using show ip mroute dense:
The output of this command reveals that only the groups 224.0.1.39 and 224.0.1.40 are operating in PIM-DM. This works because we are currently using the Cisco proprietary PIM mode PIM-S-DM. PIM-S-DM was initially created to overcome this operational paradox where Auto-RP uses dense mode traffic to elect an RP.
Based on the current topology, if traffic were sourced from R1 for the multicast group 224.9.9.9, that traffic would be forwarded via PIM-SM, as evidenced by show ip mroute sparse:
We see that traffic to 224.9.9.9 is PIM-SM forwarded. This is because there is a group-to-RP mapping for 224.9.9.9:
R2#show ip pim rp
Group: 224.9.9.9, RP: 192.1.44.44, v2, v1, uptime 01:26:42, expires 00:02:14
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#interface Loopback0
R4(config-if)#shut
R4(config-if)#end
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#interface Loopback0
R6(config-if)#shut
R6(config-if)#end
Neither R4 nor R6 can be the RP any longer. How will traffic to 224.9.9.9 be forwarded now?
Recognizing that this was less than efficient, Cisco created the no ip pim dm-fallback command. This command, when applied to all PIM-S-DM speaking routers, will stop this fallback-to-dense-mode behavior (here we illustrate the command just on R1):
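A minimal sketch of that configuration (assuming the ip pim dm-fallback form of the global command):
R1#conf t
R1(config)#no ip pim dm-fallback
R1(config)#end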
Now when traffic is generated from R1 to the group 224.9.9.9 the traffic will no longer use PIM-DM:
Now we will look to see if the traffic is still dense mode forwarded.
Based on this output it is obvious that R5 is not forwarding the traffic because there are no interfaces in the OIL. There are no interfaces for this group because R5 has no valid RP address to use for the group, as typified by the RP 0.0.0.0, and based on the flag for the *,G entry we know the group is PIM-SM forwarded.
Auto-RP Listener
Auto-RP listener was created as a better solution than the no ip pim dm-fallback command. Auto-RP listener allows the network to use pure PIM-SM by making one simple modification to the operational mechanism that protocol uses.
Without the ip pim autorp listener command, PIM-SM will attempt to forward traffic for all multicast group addresses using PIM-SM. However, with the command ip pim autorp listener, Cisco IOS affords us a hack on this process.
Once the command is deployed, all multicast groups except 224.0.1.39 and 224.0.1.40 are PIM-SM. These two groups, and only these two groups, will be PIM-DM.
In the following topology, illustrated in Figure 8-2, where all interfaces on all devices are running ip pim sparse-mode, we will observe the application and operation of ip pim autorp listener:
In this topology, the ip pim autorp listener command has not been applied to any device. We are going to look at the multicast routing tables of all devices. It needs to be pointed out that we are going to observe what will initially appear as strange behavior, but after we walk through what is happening things will make sense.
Right now in the topology R2 is the MA and the C-RPs are R4 and R6. We will look at R2 to see if it has learned the identities of the C-RPs. If R2 is learning this information we would expect to see two S,G entries in the multicast routing table, one for R4 and the other for R6, as evidenced by the output of show ip mroute 224.0.1.39:
This output indicates that two sources are active for the group 224.0.1.39. Many aspiring students see this behavior and think that the ip pim autorp listener command is not necessary. Can you imagine why R2 is learning the information generated on these two C-RPs? We will look closer at this process by using debug ip pim auto-rp:
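A representative invocation (the command only; the resulting debug output is summarized below) would be:
R2#debug ip pim auto-rp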
R2 is actually receiving the messages from the two C-RPs and building the RP-Discovery packet. In this scenario, messages from R4 and R6 are making it to R2 because they are adjacent, and the RP-discovery packets are making it to R4 and R6 for the same reason. To better illustrate this point we will disable the point-to-point link between R4 and R6.
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#interface Serial0/0/0.1
R4(config-subif)#shut
R4(config-subif)#end
With this accomplished, we will use show ip mroute to get a snapshot of what information has been propagated. This means that R2 has learned about the 224.0.1.39 group from both R4 and R6. Now we will look at R4 and R6 respectively:
Because the information for the C-RP is generated by R4 and R6, we see in Figure 8-3 that R2 and R5 learn the source and destination address from R4, while R7 learns about the source and destination address from R6. The next question is: will R5 and R7 forward these multicast packets?
This output lets us know that R5 will not forward any information for this S,G pair. This can be verified on R1 with show ip mroute:
Again there are no interfaces in the OIL for the S,G pair; thus there will be no knowledge of the group 224.0.1.39 on R9:
In Figure 8-4 we will apply this information to our drawing to better illustrate the issues at hand:
Figure 8-4: Incomplete Propagation of (S,G) entries for 224.0.1.39 based on adjacency
Figure 8-4 makes it very clear that the information regarding the possible C-RPs is not being properly propagated through the network. We have looked at the multicast group 224.0.1.39, and admittedly, in this topology there is only one MA, so this issue may not affect us. But now we need to look at the address 224.0.1.40 that is used to propagate the Auto-RP Discovery messages. Starting at R2, where these messages are originated, we will follow the multicast stream from R2 to R1:
R4 is learning the S,G for 224.0.1.40 from R2, but we can see that it is not forwarding the traffic from that source out any interfaces. This means that R5 will not have an S,G entry, as evidenced by show ip mroute:
We will see that this process repeats itself between R6 and R7:
No interfaces in the OIL means R6 will not forward the multicast stream from R2:
Placing all this information in the drawing will allow us to get a better understanding of where the configuration has failed. Figure 8-5 has all this information.
Figure 8-5: Incomplete Propagation of (S,G) entries for 224.0.1.39 and 224.0.1.40 based on adjacency
Based on this illustration we can assume that only R4 and R6 will have knowledge of any group-to-RP mappings, as evidenced by show ip pim rp mapping on R4 and R7:
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 03:55:32, expires: 00:02:33
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 03:55:32, expires: 00:02:33
But no other devices beyond R4 and R6 will have this information because the necessary groups to propagate the Auto-RP information are being dropped in this topology:
The method best able to correct this issue will be to execute ip pim autorp listener on all devices (we demonstrate this on R1):
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#ip pim autorp listener
R1(config)#end
Now that this has been accomplished, we will look at the topology again for the multicast group 224.0.1.40, and verify that each router in the topology agrees on the identity of the RP. We will start with R1 and work our way to R2:
R1 now knows the S,G entry from 192.1.2.2, and therefore can receive the Group-to-RP mappings from the MA:
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 00:01:43, expires: 00:02:13
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 04:04:49, expires: 00:02:12
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.6.6 (?), elected via Auto-RP
Uptime: 04:10:10, expires: 00:02:49
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 04:06:03, expires: 00:02:52
Observe that the MA knows the identity of both C-RPs, but it is only propagating information about the RP that it has selected for the topology. We see exactly what we would expect. The multicast path forms with R6 as the RP/Core.
Verification would be to have R9 join the multicast group 224.9.9.9, and generate a multicast feed for that address on R1:
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
This demonstrates that the topology is fully functional after the application of the ip pim autorp listener command.
RPF Failures
In the Troubleshooting Auto-RP section, this text discussed which phases of the Auto-RP operational mechanisms were subject to Reverse Path Forwarding (RPF) checks. Recall that all phases of Auto-RP are subject to the RPF process. Logically then, RPF issues can prevent an MA from learning about C-RPs.
C-RPs.
Additionally,
this
problem
can
prevent
any
device
in
the
domain
from
successfully
learning
the
identity
of
the
RP
elected
by
the
MA
for
a
given
group-to-RP
mapping.
The
following
list
of
issues
has
a
relatively
high
probability
of
occurring
thanks
to
RPF
failures.
Remember that the RPF checks performed by C-RPs are done against the IP address of the MA itself, whereas RPF checks made by the MA will be performed against the IP address of the C-RPs.
We will perform a walk-through for each of these RPF issues in the Auto-RP Sample Troubleshooting Scenarios section that follows.
Situations like the following exist when information fails to propagate to any or all devices, but RPF checks and unicast routing seem to be functioning correctly:
One or more devices fail to receive the elected group-to-RP-set information from the MA.
One or more C-RPs fail to communicate to the MA their candidacy as a C-RP for a given group or scope.
In the Auto-RP Sample Troubleshooting Scenarios section that follows, troubleshooting these issues is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each symptom, isolate the cause, and remediate the issue.
In the Common Issues with Auto-RP section, two primary types of problems were identified: RPF failures, and multicast forwarding and routing failures. This section explores these categories of failure by directing our attention to the commands necessary to identify that a problem exists.
There are three types of devices in this topology: C-RP(s), an MA, and PIM-enabled routers. We will verify that this environment is operating correctly by checking that the topology agrees with Figure 8-6.
The fastest way to verify that R2 is the Mapping Agent would be show ip pim autorp:
The output indicates that R2 is sending Discovery messages. Only the MA will do this in an Auto-RP configuration.
Step Two: Are R4 and R6 configured as C-RPs?
The fastest way to verify that R4 and R6 are Candidate-RPs would be show ip pim autorp:
Before conducting this test, we will need to have a receiver join the multicast group used to verify. In this instance, R9 will join the group 224.9.9.9:
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
With this accomplished, are pings to this group successful from R1?
Step Four: Which of the two possible C-RPs was elected to serve as the RP for the group 224.9.9.9? This is verified with show ip pim rp:
R1#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, uptime 00:21:42, expires 00:02:07
R5#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, uptime 00:21:42, expires 00:02:06
R4#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, uptime 00:21:42, expires 00:02:04
R2#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, uptime 00:21:42, expires 00:02:16
R6#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, next RP-reachable in 00:00:47
R7#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, uptime 00:21:42, expires 00:02:06
R9#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, uptime 00:21:42, expires 00:02:04
This output clearly identifies R6 as the RP for the group 224.9.9.9. All things being equal in the configuration between R4 and R6, we would expect this based on R6's higher IP address.
RPF failures
Pings to the group 224.9.9.9 are no longer successful from R1:
There are a number of reasons why this may be happening; a logical approach would be to determine if R1 has an RP mapping for the group 224.9.9.9:
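A representative check (the command is shown here without its output) would be:
R1#show ip pim rp mapping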
R1#
R1 has no mapping for this or any group. Are RP Discovery messages arriving on R1 from the MA?
We see that 32 RP Discovery messages have arrived. We know that these messages will be sent at periodic intervals. So logically, we would expect this value to increment over time. After 2 minutes we will execute the command again.
The value is not incrementing. We know that these messages are subjected to RPF checks, and that they are sourced from the loopback0 interface of R2. To verify the multicast path between R2 and R1 we will use mtrace:
R1#mtrace 192.1.2.2
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.15.1 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 172.16.15.1
-1 172.16.15.1 PIM [192.1.2.0/24]
-2 172.16.15.5 PIM [192.1.2.0/24]
-3 172.16.45.4 PIM Multicast disabled [192.1.2.0/24]
-4 172.16.24.2 PIM [192.1.2.0/24]
The output of mtrace shows us that multicast PIM is disabled on the 172.16.45.4 interface of R4, as evidenced with show run on that device:
This can be corrected by applying the ip pim sparse-mode command under this interface:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#interface FastEthernet0/1
R4(config-if)#ip pim sparse-mode
R4(config-if)#end
R4#
%PIM-5-NBRCHG: neighbor 172.16.45.5 UP on interface FastEthernet0/1
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.16.45.5 on interface
FastEthernet0/1
We see the PIM neighbor come up with R5. Now do we see any group-to-RP mappings on R1?
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 00:01:26, expires: 00:02:31
We see the mapping. Are pings to the group 224.9.9.9 successful now?
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 00:17:24, expires: 00:02:22
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 00:18:50, expires: 00:01:59
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 01:08:09, expires: 00:02:02
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.6.6 (?), elected via Auto-RP
Uptime: 01:08:09, expires: 00:02:50
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 00:35:36, expires: 00:02:24
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 01:08:09, expires: 00:01:59
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 01:08:09, expires: 00:02:02
No, they do not. R9 has no RP mappings assigned. This could be an RPF error between R9 and the MA. If so, this can be identified via mtrace:
R9#mtrace 192.1.2.2
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.79.9 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 172.16.79.9
-1 172.16.79.9 PIM [192.1.2.0/24]
-2 172.16.79.7 PIM [192.1.2.0/24]
-3 172.16.67.6 PIM [192.1.2.0/24]
-4 172.16.26.2 PIM [192.1.2.0/24]
-5 192.1.2.2
There does not appear to be an RPF issue. That leaves a multicast routing and forwarding issue somewhere in the path. By looking at the multicast routing table on R9 we can see if it is learning any information from the MA 192.1.2.2 for the group 224.0.1.40:
This output demonstrates that R9 has only the *,G entry for the group 224.0.1.40, and has no incoming interface. This means that R9 is not receiving any multicast packets for this group. Logically, by looking at Figure 8-6, we know that R9 should receive these packets from R7. We need to look at the multicast routing table on R7 now:
We see that R7 has both the *,G and the S,G entry for the group 224.0.1.40, but we also see that the OIL is empty (Null). Note again the flag of "D" for this traffic. This means that the packets are dense mode forwarded. We are running pim sparse-mode under both interfaces on R7, as evidenced by show run:
We know that we need the ip pim autorp listener command to allow R7 to forward the multicast groups 224.0.1.39 and 224.0.1.40 in dense mode. We can see if this command is configured on R7 with show run:
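A representative filter (the include string is simply one convenient choice) would be:
R7#show running-config | include autorp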
The command is not configured. This can be corrected by adding the command on R7:
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#ip pim autorp listener
R7(config)#end
show COMMAND: show ip igmp membership [group-address | group-name] [tracked] [all]
This command displays Internet Group Management Protocol (IGMP) membership information for multicast groups and (S, G) channels.
Where:
EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude
show COMMAND: show ip mroute
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R7#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
R7#
show COMMAND: show ip pim interface
This command displays information about interfaces configured for Protocol Independent Multicast (PIM).
EXAMPLE OUTPUT:
R7#show ip pim interface
show COMMAND: show ip pim rp mapping
This command displays information about Protocol Independent Multicast (PIM) RP mappings.
EXAMPLE OUTPUT:
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
show COMMAND: show ip pim neighbor
This command displays information about Protocol Independent Multicast (PIM) neighbors discovered by PIM version 1 router query messages or PIM version 2 hello messages.
Where:
EXAMPLE OUTPUT:
R7#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
172.16.67.6 FastEthernet0/0 00:22:16/00:01:36 v2 1 / S
172.16.79.9 FastEthernet0/1 00:22:40/00:01:23 v2 1 / DR S
R7#
show COMMAND: show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]
This command displays information that IP multicast routing uses to perform the Reverse Path Forwarding (RPF) check for a multicast source.
Where:
group-address - optional; IP address or name of a multicast group for which to display RPF information
rd route-distinguisher - optional; displays the Border Gateway Protocol (BGP) RPF next hop for the VPN route associated with the RD specified for the route-distinguisher argument
metric - optional; displays the unicast routing metric
EXAMPLE OUTPUT:
R7#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2)
RPF interface: FastEthernet0/0
RPF neighbor: ? (172.16.67.6)
RPF route/mask: 192.1.2.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables
R7#
debug COMMAND:
debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]
This command displays multicast packets that are received and sent on the device.
Where:
EXAMPLE OUTPUT:
IP(0): s=172.16.26.6 (FastEthernet0/1) d=239.9.9.9 (FastEthernet0/0) id=1, ttl=254,
prot=1, len=100(100), mforward
debug COMMAND:
debug ip pim [vrf vrf-name] [bsr]
This command displays Protocol Independent Multicast (PIM) packets received and sent and displays PIM-related events.
Where:
EXAMPLE OUTPUT:
R7#debug ip pim
PIM debugging is on
R7#
PIM(0): Insert (172.16.15.1,239.9.9.9) join in nbr 172.16.67.6's queue
PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (172.16.15.1/32, 239.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Prune-list: (192.1.4.4/32, 224.0.1.39)
PIM(0): Prune FastEthernet0/1/224.0.1.39 from (192.1.4.4/32, 224.0.1.39)
PIM(0): Insert (192.1.4.4,224.0.1.39) prune in nbr 172.16.67.6's queue
PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (192.1.4.4/32, 224.0.1.39) Prune
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Prune-list: (192.1.4.4/32, 224.0.1.39)
R7#
The network topology used in this section is shown in Figure 8-9 below:
Figure 8-9: The Chapter Challenge Topology
Trouble Ticket #1
Your supervisor has brought to your attention that the router R5 refuses to use R6 as the RP for any multicast group. This behavior is not acceptable and is preventing R9 from being able to receive multicast traffic. You have been instructed to isolate this issue with the multicast group 224.9.9.9. You must correct the issue once it is identified.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that the MA (R2) is only recording RP Announcements from R6 in the group-to-RP mapping table. R2 needs to be configured such that it accepts RP announcements from R5 and R4 only. Be advised that this task was previously in the hands of a junior technician. Correct this issue.
Trouble Ticket #3
Your supervisor has notified you that R7 will be assuming the role of RP for all multicast groups in this topology. Previous testing has been performed using R7 as the RP after business hours. You have been instructed to place R7 into operation immediately.
Figure 8-10: Auto-RP Quick Fire Troubleshooting Flowchart
Initiate a ping test from R1 to the group 224.9.9.9 with a high repeat count:
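A ping along the following lines can be used to generate the test traffic (the repeat count is arbitrary; any sufficiently high value keeps the stream running while the RP mappings are examined):
R1#ping 224.9.9.9 repeat 1000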
R5 does not choose R6 as RP for this group thus verifying the problem.
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 00:45:18, expires: 00:02:44
Group(s): 224.0.0.0/4, Static-Override
RP: 192.1.5.5 (?)
We see that R5 is learning about R6's desire to be the RP, but it is not choosing it as the RP because of a static RP assignment. Note that this output tells us that the static-override option has been used. This means that the static RP that has been assigned will always override any dynamically learned RP information. We can see this via show run:
R5#show run | inc override
ip pim rp-address 192.1.5.5 override
The override option will prevent R5 from using the dynamic RP assignment via Auto-RP. This has unquestionably isolated our fault.
Step 3 - Fault Remediation:
In this scenario, the ip pim rp-address command needs to be removed.
R5#conf t
R5(config)#no ip pim rp-address 192.1.5.5 override
R5(config)#end
R5 now chooses R6 as the RP, demonstrating that the issue has been corrected.
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.6.6 (?), elected via Auto-RP
Uptime: 03:14:19, expires: 00:02:38
The MA is only learning of R6's desire to be considered as a C-RP, thus verifying that the problem exists.
Step 2 - Fault Isolation:
Are R4 and R5 advertising RP Announcement messages?
R4#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
Note that R4 is sending RP Announcements. This is confirmed by the fact that the counter increments over time:
Note that R5 is also sending RP Announcements. This is confirmed by the fact that the counter increments over time:
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
R5#mtrace 192.1.2.2
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.45.5 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 172.16.45.5
-1 172.16.45.5 PIM [192.1.2.0/24]
-2 172.16.45.4 PIM [192.1.2.0/24]
-3 172.16.24.2 PIM [192.1.2.0/24]
-4 192.1.2.2
The output does not seem to indicate an RPF issue. Now we will look to see if the messages are arriving on R2 (MA) via debug ip pim auto-rp:
Here we see the RP-announcement arrive from R6, but look closely at the messages for R4 and R5:
R2#
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.4.4, RP_cnt 1, ht 181
Auto-RP(0): Filtered 224.0.0.0/4 for RP 192.1.4.4
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.4.4, RP_cnt 1, ht 181
Auto-RP(0): Filtered 224.0.0.0/4 for RP 192.1.4.4
R2#
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.5.5, RP_cnt 1, ht 181
Auto-RP(0): Filtered 224.0.0.0/4 for RP 192.1.5.5
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.5.5, RP_cnt 1, ht 181
Auto-RP(0): Filtered 224.0.0.0/4 for RP 192.1.5.5
These messages are being filtered and therefore dropped. This is not a default situation and must be related to a filter of some kind, as evidenced by show run:
R2#show run | inc filter
ip pim rp-announce-filter rp-list 1
This command references an access-list. What is being permitted and denied by the standard access-list 1?
R2#show ip access-list 1
Standard IP access list 1
10 deny 192.1.6.6 (146 matches)
20 permit any (296 matches)
This access-list has been incorrectly configured. Line 10 specifically states that RP Announcements sourced from the IP address 192.1.6.6 are not to be filtered. Therefore, any announcement matching sequence number 20 will be filtered. The junior technician should have explicitly permitted 192.1.6.6 and implicitly denied all other traffic sources. This has isolated our fault.
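A sketch of one possible correction, following the reasoning above, is to rebuild access-list 1 so that only 192.1.6.6 is permitted (and therefore filtered), leaving all other C-RPs to be accepted; the exact commands used in the original remediation are not shown here:
R2#conf t
R2(config)#no access-list 1
R2(config)#access-list 1 permit 192.1.6.6
R2(config)#end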
R2#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.6.6 (?), elected via Auto-RP
Uptime: 03:36:03, expires: 00:00:53
RP 192.1.5.5 (?), v2v1
Info source: 192.1.5.5 (?), via Auto-RP
Uptime: 00:00:24, expires: 00:02:33
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 00:00:30, expires: 00:02:28
Now the MA is learning about R5 and R4. We still see the entry for 192.1.6.6, but notice that the expiration timer only has 53 seconds remaining. After waiting a minute, R6 should age out, leaving only R4 and R5:
R2#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)
Group(s) 224.0.0.0/4
RP 192.1.5.5 (?), v2v1
Info source: 192.1.5.5 (?), elected via Auto-RP
Uptime: 00:02:14, expires: 00:02:46
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 00:02:20, expires: 00:02:39
This is the desired behavior thus proving the issue has been corrected.
Group(s) 224.0.0.0/4
RP 192.1.5.5 (?), v2v1
Info source: 192.1.5.5 (?), elected via Auto-RP
Uptime: 00:05:40, expires: 00:02:18
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 00:05:46, expires: 00:02:11
This output notifies us that the MA does not know about R7's C-RP status. Is R7 sending RP Announcement messages?
R7#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
R7 has sent 276 RP Announce messages; if we wait and repeat the verification, this number should increment:
If we are not careful, we could miss some very critical information in this debug output. Note that we are sending an announcement for 192.1.7.7, but note the value of the TTL. Time-to-Live has been set to a value of 1 on this device. This means that this packet will expire after making one "hop".
As mentioned in the Technology Review section of this chapter, the scope command can be used to "bound" Auto-RP information. In this instance the value has been set too low for the packet to reach the MA, as evidenced by show run:
R7#sh run | inc send-rp-announce
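The show run output and the remediation itself are not reproduced above. A typical correction, assuming the announcement is sourced from Loopback0 and currently uses scope 1, would be to remove the old statement and re-add it with a scope large enough to reach the MA (the value 16 is arbitrary; it simply needs to exceed the hop count to the Mapping Agent):
R7(config)#no ip pim send-rp-announce Loopback0 scope 1
R7(config)#ip pim send-rp-announce Loopback0 scope 16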
Group(s) 224.0.0.0/4
RP 192.1.7.7 (?), v2v1
Info source: 192.1.7.7 (?), elected via Auto-RP
Uptime: 00:00:07, expires: 00:02:48
RP 192.1.5.5 (?), v2v1
Info source: 192.1.5.5 (?), via Auto-RP
Uptime: 00:26:05, expires: 00:02:52
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 00:26:11, expires: 00:02:45
The MA is now learning the information from R7. Based on R7's higher IP address, it is being elected by the MA as the RP for the range 224.0.0.0/4. This should mean that an mtrace from R1 to the FastEthernet0/1 interface of R9 using the multicast group 224.9.9.9 should show R7 as the RP/Core device:
R1#mtrace 172.16.15.1 172.16.79.9 224.9.9.9
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 224.9.9.9
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM Prune sent upstream [172.16.15.0/24]
We see this is indeed the case demonstrating that the fault has been corrected.
Chapter 9: Bootstrap Router (BSR) Protocol
In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and the functionality of the Bootstrap Router (BSR) protocol are examined in great depth. Once the operational characteristics of this important protocol are detailed completely, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for the Bootstrap Router (BSR) protocol.
The chapter begins with a thorough review of BSR, and then quickly launches into an exhaustive analysis of the art of troubleshooting this multicast support protocol. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
While BSR addresses many issues with AutoRP, it operates in a very similar manner when examined from a high level. There are router(s) that act as candidate-Rendezvous Points (RPs) and router(s) that act similar to the Mapping Agent (MA) found in AutoRP. In BSR terminology, the equivalent to the Mapping Agent is the Bootstrap Router itself.
Figure 9-1: A Sample BSR Topology
A major design improvement over Cisco's AutoRP is the fact that BSR requires no dense mode operation whatsoever. Rendezvous Point (RP) information is carried within BSR messages, which are carried inside of Protocol Independent Multicast (PIM) messages themselves. These PIM messages are link-local multicast messages. When a router receives a BSR message containing RP information, the router applies the Reverse Path Forwarding (RPF) check and then floods the message out all of the PIM-enabled interfaces. Remember, the link-local multicast address used for PIM messages is 224.0.0.13.
Since the PIM messages that carry the BSR information are link-local in scope, notice that there is no Time to Live (TTL) scoping that can be used with BSR.
Note: BSR and AutoRP cannot interoperate directly with each other.
Obviously, a key element to the BSR process is the device or devices that want to serve as the Rendezvous Point for multicast groups in the Sparse Mode domain. To configure a candidate-RP in BSR, use the following command:
ip pim [vrf vrf-name] rp-candidate interface-type interface-number [bidir] [group-list access-list] [interval seconds] [priority value]
Where:
vrf - configures the router to advertise itself as the candidate-RP to the Bootstrap Router for the Virtual Routing and Forwarding (VRF) instance specified for the vrf-name argument
interface-type interface-number - the interface bound to the IP address to serve as the candidate-RP IP address; for availability purposes, consider the use of a loopback interface; this interface needs to be PIM enabled
bidir - optional - indicates that the multicast groups specified by the access-list argument are to operate in PIM bidirectional mode; PIM bidirectional mode is covered in Chapter 6: Bidirectional PIM
group-list - optional - specifies the prefixes that are advertised in association with the RP address; note that unlike AutoRP, this list cannot contain DENY entries
interval - optional - specifies the candidate-RP advertisement interval, in seconds; the range is from 1 to 16383 with a default value of 60 seconds
priority - optional - specifies the priority for the candidate-RP; the range is from 0 to 255, with a default priority value of 0; the BSR candidate-RP with the lowest priority value is preferred; be aware that other vendor implementations of BSR might default priority to 192 as this is the recommended default priority by the IETF
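As an illustration only (the interface, access-list number, group range, and priority are assumed values, not taken from the topology above), a candidate-RP configuration using the group-list and priority options might look like the following:
! Advertise Loopback0 as a candidate-RP for 239.0.0.0/8 with priority 10
R5(config)#access-list 10 permit 239.0.0.0 0.255.255.255
R5(config)#ip pim rp-candidate Loopback0 group-list 10 priority 10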
As stated earlier, the Bootstrap Router itself in the topology is similar to the Mapping Agent in AutoRP with some subtle differences. Like the AutoRP Mapping Agent, the BSR listens to the candidate-RP announcements, but the BSR does not actually select the best RP for every group range. Instead, the BSR builds a set of candidate-RPs for each group range and disseminates this information to the topology using PIM. Multicast routers that receive these BSR messages select the preferred candidate-RP using a special hash function. To configure a candidate-BSR, use the following command:
ip pim [vrf vrf-name] bsr-candidate interface-type interface-number [hash-mask-length [priority]]
Where:
vrf - configures the router to advertise itself as the Bootstrap Router for the Virtual Routing and Forwarding (VRF) instance specified for the vrf-name argument
interface-type interface-number - the interface bound to the IP address to serve as the BSR device IP address; for availability purposes, consider the use of a loopback interface; this interface needs to be PIM enabled, and the IP address is sent in BSR messages as the BSR IP address
hash-mask-length - optional - the length of the mask to be ANDed with the group address before the PIMv2 hash function; all groups with the same seed hash correspond to the same RP; the hash mask length allows one RP to be used for multiple groups; the default length is 0
priority - priority of the candidate-BSR; the range is from 0 to 255 with a default priority of 0; the candidate-BSR with the highest priority value is preferred; RFC 5059 specifies that 64 be used as the default priority value
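A minimal candidate-BSR sketch using these options (again, the interface, the hash mask length of 30, and the priority of 200 are illustrative assumptions only):
! Advertise Loopback0 as a candidate-BSR with hash mask length 30 and priority 200
R2(config)#ip pim bsr-candidate Loopback0 30 200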
It is important to remember that in BSR, the multicast routers determine the RP to use based on RP-set information received from the BSR itself. The RP selection process for a particular multicast group is as follows:
Step 1 - a longest match lookup is performed on the group prefix that is announced by the BSR candidate-RPs
Step 2 - if more than one candidate-RP is found by the longest match lookup, the candidate-RP with the lowest priority (configured with the ip pim rp-candidate command) is preferred
Step 3 - if more than one candidate-RP has the same priority, the BSR hash function is used to select the RP for a group; this hash function is covered in detail in the Operation and Troubleshooting BSR section of this chapter
Step 4 - if more than one candidate-RP returns the same hash value derived from the BSR hash function, the candidate-RP with the highest IP address is preferred
Note: RFC 2362 does not specify the longest match lookup step; to ensure compatibility with this standard, configure the same group prefix length for redundant candidate-RPs.
BSR Election/Announcements
In this stage of the BSR operation a Bootstrap Router has either been assigned, or elected on the basis of its configured priority. This works in a two-stage process whereby the identity of a BSR is discovered. During this stage, every router that has been configured to work as a Bootstrap Router will begin to flood "bootstrap messages" while simultaneously listening for "bootstrap messages" from other candidate-BSRs in the domain. Once a BSR learns of another BSR with a higher priority it will immediately relinquish its role as BSR. This demonstrates that the BSR election process is preemptive in nature and is designed to provide alternate availability during equipment or process failures. It must be observed that this process ultimately, if configured properly, will result in the election of a single BSR.
Other devices may exist in the multicast domain that can assume the role of the BSR, but they will always remain in a standby state until the existing BSR goes down or the priority settings are changed.
After the BSR is elected, it will begin attempting to discover the identity of any existing candidate-RPs (C-RPs). The BSR will also actively listen for messages coming from these C-RPs as they are discovered.
In an effort to find the C-RPs in the domain, the BSR will first begin to inform the other devices in the topology of its existence. BSR accomplishes this using PIM-SM version 2 protocol messages. These messages are flooded on a hop-by-hop basis between all devices in the multicast domain. Figure 9-2 illustrates this process.
Clearly, BSR can only be employed between PIM version 2 enabled devices. It is important to note that because of this, BSR is not compatible with PIM-SM version 1. The primary difference between PIM-SM versions 1 and 2 is that in version 2, messages are no longer encapsulated inside IGMP. PIM version 2 messages are encapsulated in IP packets with a protocol number of 103.
Another significant difference is that PIM-SM version 2 messages are propagated throughout the domain via the link-local multicast group 224.0.0.13 (ALL-PIM-ROUTERS). As described in the BSR Technology Review section, this means that BSR does not require any legacy dense mode functionality to announce its presence throughout the multicast domain as is the case with AutoRP.
Fortunately, these distinctions reduce the level of difficulty associated with troubleshooting all phases of the BSR operational process. Placement of the BSR is no longer as sensitive an issue as with its AutoRP counterpart, the Mapping Agent (MA). Furthermore, the use of a link-local multicast address for hop-by-hop flooding of BSR announcements creates fewer overall issues compared to AutoRP in general.
The fact that BSR uses a multicast address to communicate its identity and presence to the multicast domain means that this phase of BSR is subjected to Reverse Path Forwarding (RPF) checks. Specifically, any messages destined for PIM speakers in the domain will need to pass the RPF check toward the IP address of the BSR from the C-RP. The multicast process drops messages that fail this check.
The fastest method to determine that BSR announcements are being propagated successfully is to execute the show ip pim bsr-router command on each of the respective candidate-RPs. In a working environment, like that shown in Figure 9-3, we would expect to see output similar to the following for this command:
Notice the identity of the Bootstrap Router is 192.1.2.2, which is the Loopback0 interface of R2. We have two C-RPs in this topology, so we will repeat this show command on the second C-RP:
The second C-RP knows the identity of the BSR as well. Clearly, the BSR announcements have successfully propagated to all C-RPs in the topology.
Note: Notice the (?) entry next to the BSR address. This is not a point for concern. This simply means that the C-RP cannot resolve the IP address to a hostname.
Earlier it was described how the elected BSR begins to actively listen for messages coming from the C-RPs. It is clear now that the C-RPs know where to send their messages thanks to the propagation of the BSR information. Notice also in the output of the show commands used above that there is candidate-RP information on both R5 and R7 that needs to be communicated. Also note the holdtime and advertisement intervals for each.
Before the next stage of verification, however, execute the same show command on a device participating in the multicast domain that is not a C-RP. For example, R4:
Note that R4 knows the identity of the BSR, but has no candidate-RP information to communicate. This is a normal condition for this stage of the BSR operation. It is time to take a closer look at the second stage of BSR.
After a C-RP discovers the BSR, it will immediately begin to send periodic C-RP advertisements directly to the BSR every 60 seconds via unicast. This is the default advertisement interval and can be changed; the default holdtime is 150 seconds. In addition to notifying the BSR of its identity, the RP candidates also communicate what group-to-RP mappings they possess. Figure 9-4 illustrates this unicast process.
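Should a shorter advertisement interval be desired, it can be set on the candidate-RP itself; a sketch only (the interface and the 30-second value are illustrative assumptions):
! Advertise this candidate-RP every 30 seconds instead of the 60-second default
R7(config)#ip pim rp-candidate Loopback0 interval 30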
Let us examine the BSR to see if it is receiving the information unicast by the individual C-RPs. Use the show ip pim rp mapping command on the BSR itself to accomplish this:
Group(s) 224.0.0.0/4
RP 192.1.7.7 (?), v2
Info source: 172.16.67.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:57:49, expires: 00:01:36
RP 192.1.5.5 (?), v2
Info source: 172.16.45.5 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:35:57, expires: 00:01:29
Note the BSR has learned the identity of both candidate-RP devices.
The fact that C-RP advertisements are sent via unicast means that RPF checks are unnecessary. This makes this phase of the BSR operational mechanism very streamlined and easy to troubleshoot.
Should any information from any RP candidate fail to appear in the BSR's group-to-RP mappings table, it is most likely an IP routing issue. This can quickly be identified by pinging the IP address of the C-RP in question from the BSR. To ensure the proper testing of bidirectional unicast reachability, always source this ping from the BSR's IP address. For example:
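The ping itself is not reproduced above; assuming the addressing shown in this topology (BSR 192.1.2.2 on R2, C-RP 192.1.5.5 on R5), it would take a form such as:
R2#ping 192.1.5.5 source 192.1.2.2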
In the third and final stage of the BSR operational mechanism, the BSR begins to flood group-to-RP mapping information to other devices in the multicast domain. This propagation of mappings phase uses the same methodology that the BSR employed to communicate its presence to the RP candidates in the earlier step. This means that if the initial phase of the BSR announcement process operated without complication, then it is highly likely that this stage will perform likewise. Figure 9-5 illustrates the hop-by-hop flooding process employed to propagate the RP-set information.
The most efficient method employed to test whether or not the BSR has successfully communicated the RP-set information to each of the devices in the topology is to execute the show ip pim rp mapping command on each device participating in the multicast domain. This should produce identical output on all devices similar to the following:
Group(s) 224.0.0.0/4
RP 192.1.7.7 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime
150
Uptime: 01:41:24, expires: 00:01:44
RP 192.1.5.5 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime
150
Uptime: 01:19:31, expires: 00:01:43
The critical component here is that all devices should have the same group-to-RP mappings. Note that both RP candidates are mapped to provide RP services to the entire multicast group range of 224.0.0.0/4. Which C-RP will assume the role of RP if a multicast source is introduced to the topology? Emulate a multicast source destined to the group address of 224.9.9.9 on the FastEthernet interface of R1 and examine where the source-based tree terminates:
This ping will not be successful because there are no multicast receivers for this group in the topology, but it provides a way to verify what C-RP will assume the role of RP for the group 224.9.9.9. This is proven using the show ip pim rp command on both R5 and R7:
R5#show ip pim rp
Group: 224.9.9.9, RP: 192.1.7.7, v2, uptime 01:34:58, expires 00:01:30
R7#show ip pim rp
Group: 224.9.9.9, RP: 192.1.7.7, v2, next RP-reachable in 00:00:16
The output indicates that the RP is R7 (192.1.7.7). The BSR process selects R7 as the RP because the assigned priority for each of the candidates is the same. In this topology, the Cisco default priority of 0 is used. In instances where the priority is a tie, the determining factor for RP selection is the highest IP address, as described in the BSR Technology Review section.
Once again, it is critical to note that this process of RP selection is not performed by the BSR. Unlike the mapping agent in AutoRP, the Bootstrap Router communicates the entire RP-set to the devices in the multicast domain. The individual devices accept the RP-set and then make logical decisions as to which C-RP they select. Network administrators can manipulate or influence this election process through the use of hashes, priorities, filters, and message constraints.
Generally, the longer the hash length, the more evenly the BSR process will try to assign groups to individual RPs in the candidate RP-set. The assumptions are that the same hash mask length is communicated to each PIM-SM version 2 device by the BSR, and that each of those devices has the same candidate RP-set. As a result, each PIM device runs the same algorithm and makes the same RP selection for each multicast group.
There are three values used by this mathematical function in BSR: the candidate-RP address, a multicast group address, and the hash mask length. These values are all hashed together on a group-by-group basis in order to approximate the load balancing.
1 - A hashing algorithm is run for each multicast group address for each C-RP in the RP-set and a hash value is obtained.
2 - The C-RP with the highest calculated hash value becomes the RP for that particular multicast group.
3 - There is the possibility that the hashing algorithm will result in equal hash values for different C-RPs. In this case, the C-RP with the highest IP address will become the RP for that group.
With all this taken into account, the outcome should be predictable:
* This assumes that there are enough C-RPs in the RP-set to allow an even distribution.
Now verify what RP each device in the topology will use for any given group through the use of the show ip pim rp-hash command. Here is such a test on R4 using the multicast group addresses of 224.1.1.1 and 224.1.1.2:
The hashing process selects R5 as the RP because of its higher hash value. Now for 224.1.1.3:
Notice the hashing process selects R5 again. One might logically think that it would fall to R7, but notice this is not the case. The algorithm will try to evenly distribute the load between the two available candidate-RPs, but it will do so via a relatively random process, given the wide range of arbitrary variables that go into the hash calculation.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
RPF Failures
In the Troubleshooting BSR section, this text discussed which phases of the BSR operational mechanisms were subject to Reverse Path Forwarding (RPF) checks. Recall that of the three phases, only the BSR election/announcement phase and the propagation of group-to-RP mappings phase are subject to the RPF process. Logically then, RPF issues can prevent a candidate-BSR from learning about other candidate-BSRs. Additionally, this problem can prevent an elected BSR from successfully communicating the candidate RP-set to any, some, or all of the other PIM enabled devices in the multicast domain.
The following list of issues has a relatively high probability of occurring thanks to RPF failures. Remember that these RPF checks are performed against the IP address of the BSR itself. Be aware that anytime all interfaces in a network are not running PIM, these issues may arise.
Candidate-BSRs do not agree on the identity of the BSR for the multicast domain.
All or some of the PIM-SM version 2 enabled devices in the multicast domain do not receive any candidate RP-set information from the elected BSR.
We will perform a walk-through for each of these RPF issues in the BSR Sample Troubleshooting Scenarios section that follows.
An elected BSR fails to learn candidate group-to-RP mappings from all or some of the C-RPs in the topology, and the IP addresses of the candidate RP(s) are not reachable when ICMP echoes are sourced from the IP address of the BSR. This is a situation where it will be necessary to look at the underlying routing protocols used in the network. Typically, this would be an issue of asynchronous routing, and should be something obvious once the routing tables of the source and transit devices are analyzed.
Situations like the following exist when information fails to propagate to any or all devices, but RPF checks and unicast routing seem to be functioning correctly:
One or more candidate-RP(s) fail to receive any candidate RP-set information from the BSR.
One or more candidate-BSRs fail to participate in the BSR election process, resulting in the assignment of more than one BSR.
In the BSR Sample Troubleshooting Scenarios section that follows, troubleshooting these issues is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each symptom, isolate the cause, and remediate the issue.
In the Common Issues with BSR section, three primary types of problems were identified: RPF failures, unicast routing failures, and multicast forwarding and routing failures. This section explores these three categories of failure by directing our attention to the commands necessary to identify that a problem exists.
There are three types of devices in this topology: C-RP(s), C-BSR(s), and transit devices (PIM enabled routers).
It is not possible for this to happen in a correctly configured BSR environment. This issue seems to indicate that the two candidate-BSR devices have failed to exchange their BSR announcement messages. How are those BSR announcement messages exchanged? As discussed previously, PIM-SM version 2 messages exchange the BSR information, and these Bootstrap messages are how the C-BSRs discover each other and decide which assumes the role of the BSR. The link-local multicast group 224.0.0.13 accomplishes this process and is subject to the RPF check mechanism.
There are a number of ways to isolate RPF issues (mstat, mtrace, show ip rpf, debug ip pim bsr), but mstat and mtrace cannot be used with link-local multicast as they result in a "% bad IP group address" message. Eliminating the mstat and mtrace commands leaves either show ip rpf or debug ip pim bsr, or some combination of both.
However, this brings up an issue that should be considered. The BSR advertisement interval is fixed at 60 seconds and cannot be changed. This means valuable time could be wasted waiting for results using debug ip pim bsr on all devices in the multicast path. This leaves show ip rpf as the best option to isolate this issue.
Verification is now necessary to determine if one BSR has been elected. Based on the equal priority values, R7 should be elected as the BSR.
R7#show ip pim bsr-router
PIMv2 Bootstrap information
This system is the Bootstrap Router (BSR)
BSR address: 192.1.7.7 (?)
Uptime: 00:47:31, BSR Priority: 0, Hash mask length: 0
Next bootstrap message in 00:00:29
And R5 should agree:
R5#show ip pim bsr
PIMv2 Bootstrap information
BSR address: 192.1.7.7 (?)
Uptime: 00:03:15, BSR Priority: 0, Hash mask length: 0
Expires: 00:01:54
This system is a candidate BSR
Candidate BSR address: 192.1.5.5, priority: 0, hash mask length: 0
This output indicates that both R5 and R7 agree that R7 (192.1.7.7) is the BSR. Note that R5 maintains its Candidate-BSR status; it will opt to elect itself BSR should R7 stop functioning. This is part of the normal Active/Passive failover mechanism employed by BSR.
Having corrected the issue related to the actual election of the BSR, the next step is to determine whether or not the BSR is learning each of the C-RP RP-sets. This is best accomplished with the show ip pim rp mapping command on the BSR itself.
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 172.16.67.6 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:21:10, expires: 00:02:16
The BSR is only learning about R6's RP-set information for the multicast scope of 224.0.0.0/4. How are the C-RPs communicating this information to the BSR? Unicast routing is used to deliver this information from the C-RP, but multicast is used by the BSR to communicate its presence to the individual C-RPs. Has the BSR successfully communicated its existence to both C-RPs?
The most effective tool now is ping. Remember, the unicast of the RP-set information will be sourced from and destined to specific IP addresses, and the easiest method of testing reachability is to verify from the BSR. Specifically, pings should be sourced from the IP address of the BSR to the IP address of each C-RP.
R7#ping 192.1.6.6 source 192.1.7.7
Verification on R7 should show that both R4 and R6 are now sending their respective RP-set information to the BSR:
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 172.16.67.6 (?), via bootstrap, priority 0, holdtime
150
Uptime: 01:59:04, expires: 00:02:20
RP 192.1.4.4 (?), v2
Info source: 172.16.24.4 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:00:27, expires: 00:01:58
R6 (192.1.6.6) and R4 (192.1.4.4) have actually succeeded in communicating their RP-sets to the BSR. Now that the BSR has learned each of these sets, the BSR will communicate this information to all PIM-SM version 2 enabled devices in the multicast domain. This is observed by issuing the show ip pim rp mapping command on each device:
R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:12:33, expires: 00:01:53
RP 192.1.4.4 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 01:11:37, expires: 00:01:55
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 02:54:14, expires: 00:01:56
RP 192.1.4.4 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:11:37, expires: 00:01:56
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:12:33, expires: 00:01:54
RP 192.1.4.4 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 01:11:37, expires: 00:01:53
R5#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:12:33, expires: 00:01:55
RP 192.1.4.4 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 01:11:37, expires: 00:01:56
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 02:54:14, expires: 00:01:53
RP 192.1.4.4 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:11:37, expires: 00:01:56
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is the Bootstrap Router (v2)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 172.16.67.6 (?), via bootstrap, priority 0, holdtime
150
Uptime: 02:10:14, expires: 00:02:14
RP 192.1.4.4 (?), v2
Info source: 172.16.24.4 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:11:37, expires: 00:01:49
R9#show ip pim rp mapping
PIM Group-to-RP Mappings
R9 has not received any RP-set information from the BSR. How is this information being communicated? Recall that BSR announcements are sent via multicast. Multicast traffic is susceptible to RPF checks. Failure of the multicast traffic to pass the RPF check can be verified via the show ip rpf command on R9. This test should be done toward the IP address of the BSR.
debug ip pim bsr is the best tool for troubleshooting issues on one device associated with multicast forwarding and how this can specifically affect BSR messages:
Careful observation will show that under the FastEthernet0/1 interface of R9 someone has configured the ip pim bsr-border command. When this command is configured on an interface, no PIM-SM version 2 BSR messages will be sent or received through the interface. Removal of this command will allow R9 to receive the RP-set information.
R9(config)#interface fastethernet0/1
R9(config-if)#no ip pim bsr-border
Once this is accomplished, perform the verification again:
R9#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:00:51, expires: 00:01:37
RP 192.1.4.4 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:00:51, expires: 00:01:37
The remediation has worked and now all devices have received complete RP-set information from the information source: 192.1.7.7. As a final verification, a simulated source generated on R1 bound for the multicast group 224.9.9.9 can successfully reach R9's FastEthernet0/1 interface:
R1#ping 224.9.9.9 repeat 10
show COMMAND:
show ip pim [vrf vrf-name] bsr-router
This command displays information about the elected Bootstrap Router (BSR).
Where:
EXAMPLE OUTPUT:
R1#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.2.2 (?)
Uptime: 00:26:57, BSR Priority: 0, Hash mask length: 0
Expires: 00:01:12
show COMMAND:
show ip pim [vrf vrf-name] rp-hash {group-address | group-name}
This command displays the mappings for the PIM group to the active Rendezvous Point(s).
Where:
EXAMPLE OUTPUT:
R4#show ip pim rp-hash 224.9.9.9
RP 192.1.7.7 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime
150
Uptime: 03:32:33, expires: 00:01:27
PIMv2 Hash Value (mask 0.0.0.0)
RP 192.1.7.7, via bootstrap, priority 0, hash value 390961567
RP 192.1.5.5, via bootstrap, priority 0, hash value 119808709
show COMMAND:
show ip pim [vrf vrf-name] rp mapping [rp-address]
This command displays the mappings for the PIM group to the active Rendezvous Point(s).
Where:
EXAMPLE OUTPUT:
R4#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.7.7 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime
150
Uptime: 03:45:33, expires: 00:01:29
RP 192.1.5.5 (?), v2
show COMMAND:
show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]
This command displays information that IP multicast routing uses to perform the Reverse Path Forwarding (RPF) check for a multicast source.
Where:
EXAMPLE OUTPUT:
R5#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2)
RPF interface: FastEthernet0/1
RPF neighbor: ? (172.16.45.4)
RPF route/mask: 192.1.2.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables
show COMMAND:
show ip pim [vrf vrf-name] neighbor [interface-type interface-number]
This command displays information about Protocol Independent Multicast (PIM) neighbors discovered by PIM version 1 router query messages or PIM version 2 hello messages.
Where:
EXAMPLE OUTPUT:
R4#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR
Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR Address Prio/Mode
172.16.45.5 FastEthernet0/1 01:17:41/00:01:25 v2 1 / DR S
172.16.46.6 Serial0/0/0.1 01:16:39/00:01:19 v2 1 / S
debug COMMAND:
debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]
This command displays multicast packets that are received and sent on the device.
Where:
EXAMPLE OUTPUT:
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=7, ttl=254,
prot=1, len=114(100), mroute olist null
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=8, ttl=254,
prot=1, len=114(100), mroute olist null
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=9, ttl=254,
prot=1, len=114(100), mroute olist null
debug COMMAND:
debug ip pim [vrf vrf-name] [bsr]
This command displays Protocol Independent Multicast (PIM) packets received and sent and displays PIM-related events; the bsr keyword limits the output to BSR messages.
Where:
EXAMPLE OUTPUT:
R4#debug ip pim bsr
PIM-BSR debugging is on
R4#
PIM-BSR(0): 192.1.2.2 bootstrap forwarded on FastEthernet0/1
PIM-BSR(0): 192.1.2.2 bootstrap forwarded on Serial0/0/0.1
PIM-BSR(0): bootstrap (192.1.2.2) on non-RPF path Serial0/0/0.1 or
from non-RPF neighbor 172.16.24.2 discarded
The network topology used in this section is shown in Figure 9-10 below:
Figure 9-10: The Chapter Challenge Topology
Trouble Ticket #1
Your supervisor has brought to your attention that the C-BSR routers R2 and R7 do not agree on the identity of the BSR. You must correct the issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that a new C-BSR (R5) that has just been introduced in the network does not agree with R2 and R7 regarding the identity of the Bootstrap Router. Correct this issue.
Trouble Ticket #3
Your supervisor has notified you that R1 is not receiving any RP-set information from the BSR. You must correct this issue.
Figure 9-11: BSR Quick Fire Troubleshooting Flowchart
R2 and R7 are the C-BSRs that are of interest in this trouble ticket:
The verification clearly demonstrates that R2 generates a Bootstrap message. R6 forwards that Bootstrap message, and R7 drops it. This means that either there is a PIM neighborship issue or a filter/border/boundary command on R7.
The FastEthernet0/0 interface of R7 is the only interface capable of receiving any BSR messages from R2 (192.1.2.2). The quickest method to verify this is to execute the show run interface FastEthernet0/0 command on R7:
The ip pim bsr-border command under the interface stops the BSR messages as they arrive at or exit R7. This has unquestionably isolated our fault.
Step 3 - Fault Remediation:
In this scenario, the ip pim bsr-border command needs to be removed.
R7(config)#interface FastEthernet0/0
R7(config-if)#no ip pim bsr-border
Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble Ticket has been repaired using the same method of the initial fault verification.
R2#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.7.7 (?)
Uptime: 00:01:51, BSR Priority: 255, Hash mask length: 0
Expires: 00:01:18
This system is a candidate BSR
Candidate BSR address: 192.1.2.2, priority: 200, hash mask length: 0
R2 and R7 agree that R7 is the BSR, but R5 is reporting itself as the BSR in the topology. This verifies that the problem actually exists.
Step 2 - Fault Isolation:
In order to verify that RPF issues are not at fault, use the mtrace utility. Perform this check in both directions, first from R2 toward R5, and then in reverse.
R2#mtrace 192.1.2.2 192.1.7.7
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 192.1.7.7 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.7.7
-1 172.16.67.7 PIM [192.1.2.0/24]
-2 172.16.67.6 PIM [192.1.2.0/24]
-3 172.16.26.2 PIM [192.1.2.0/24]
-4 192.1.2.2
R5#mtrace 192.1.5.5 192.1.2.2
Type escape sequence to abort.
Mtrace from 192.1.5.5 to 192.1.2.2 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.2.2
-1 172.16.24.2 PIM [192.1.5.0/24]
-2 172.16.24.4 PIM [192.1.5.0/24]
-3 172.16.45.5 PIM [192.1.5.0/24]
-4 192.1.5.5
Next is the verification of the BSR messaging. Use the debug ip pim bsr command on R2, R4 and R5:
R2#debug ip pim bsr
PIM-BSR debugging is on
R2#
PIM-BSR(0): 192.1.7.7 bootstrap forwarded on Loopback0
PIM-BSR(0): 192.1.7.7 bootstrap forwarded on GigabitEthernet0/0
R2 is sending BSR announcements out the Gi0/0 interface directed to R4.
R5(config)#int f0/1
R5(config-if)#no ip pim version 1
Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble Ticket has been repaired using the same method used to verify the fault initially:
R2#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.7.7 (?)
Uptime: 00:36:11, BSR Priority: 255, Hash mask length: 0
Expires: 00:01:58
This system is a candidate BSR
Candidate BSR address: 192.1.2.2, priority: 200, hash mask length: 0
R7#show ip pim bsr-router
PIMv2 Bootstrap information
This system is the Bootstrap Router (BSR)
BSR address: 192.1.7.7 (?)
Uptime: 04:48:37, BSR Priority: 255, Hash mask length: 0
Next bootstrap message in 00:00:23
R5#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.7.7 (?)
Uptime: 00:02:06, BSR Priority: 255, Hash mask length: 0
Expires: 00:02:03
This system is a candidate BSR
Candidate BSR address: 192.1.5.5, priority: 250, hash mask length: 0
All three C-BSRs agree that R7 is the BSR.
R1#
R1 is not receiving the C-RP RP-set information from the BSR. This verifies that the problem actually exists.
Step 2 - Fault Isolation:
To ensure that BSR messages have made it to all PIM devices, use the mtrace utility. Make certain to perform this process from the C-RPs to the BSR.
R4#mtrace 192.1.4.4 192.1.7.7
Type escape sequence to abort.
Mtrace from 192.1.4.4 to 192.1.7.7 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.7.7
-1 172.16.67.7 PIM [192.1.4.0/24]
-2 172.16.67.6 PIM [192.1.4.0/24]
-3 172.16.26.2 PIM [192.1.4.0/24]
-4 172.16.24.4 PIM [192.1.4.0/24]
-5 192.1.4.4
There are no problems in the path from R4 to R7. Now repeat the test from R6 to R7:
R6#mtrace 192.1.6.6 192.1.7.7
Type escape sequence to abort.
Mtrace from 192.1.6.6 to 192.1.7.7 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.7.7
-1 172.16.67.7 PIM [192.1.6.0/24]
-2 172.16.67.6 PIM [192.1.6.0/24]
-3 192.1.6.6
This indicates that there are no RPF errors. Next, execute the debug ip pim bsr command on R1, R4, R5, R2, R6 and R7.
The issue is an RPF failure on R1 toward R5. This is best verified by examining the PIM-SM neighbors on R1:
R1#sh ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR
Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address
Prio/Mode
R1#
There is no neighbor relationship between R1 and R5. A show run interface FastEthernet0/0 command will reveal the issue.
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:00:15, expires: 00:02:13
RP 192.1.4.4 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 255, holdtime
150
Uptime: 00:00:15, expires: 00:02:12
R1 now has the complete C-RP RP-set information as expected.
Chapter 10: Multicast Source Discovery Protocol (MSDP)
In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and functionality of the Multicast Source Discovery Protocol (MSDP) are examined in great depth. Once the operational characteristics of this important protocol are detailed completely, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for the Multicast Source Discovery Protocol (MSDP).
The chapter begins with a thorough review of MSDP, and then quickly launches into an exhaustive analysis of the art of troubleshooting this multicast support protocol. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
With MSDP, RPs in different domains can exchange information. An RP can join the interdomain source tree for sources that are sending to groups for which it has receivers. When a last-hop router learns of a new source outside the PIM-SM domain (through the arrival of a multicast packet from the source down the shared tree), it then can send a join toward the source and join the interdomain source tree. If the RP has no shared tree for a particular group, or it has a shared tree whose outgoing interface list is null, it does not send a join to the source in another domain.
With MSDP, an RP in a PIM-SM domain maintains MSDP peering relationships with MSDP-enabled routers in other domains. This peering relationship occurs over a TCP connection (port 639). MSDP relies on BGP or multiprotocol BGP (MP-BGP) for interdomain operation.
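A minimal peering sketch between the two RPs used later in this chapter (R5 and R7, peering between the Loopback0 addresses shown in Figure 10-1; the command is mirrored on each peer):
! On R5, peer with R7's RP address, sourcing the TCP session from Loopback0
R5(config)#ip msdp peer 192.1.7.7 connect-source Loopback0
! On R7, the mirror-image configuration
R7(config)#ip msdp peer 192.1.5.5 connect-source Loopback0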
With Multicast Source Discovery Protocol, when a PIM designated router (DR) registers a source with its RP, the RP sends a Source-Active (SA) message to all of its MSDP peers. The DR sends the encapsulated data to the RP only once per source, when the source goes active.
The SA message identifies the source address, the group that the source is sending to, and the address of the RP. Each MSDP peer that receives the SA message floods the message to all of its peers downstream from the originator.
In some cases, an RP may receive a copy of an SA message from more than one MSDP peer. To prevent looping, the RP consults the BGP next-hop database to determine the next hop toward the originator of the SA message. That next-hop neighbor is the RPF-peer for the originator. SA messages that are received from the originator on any interface other than the interface to the RPF peer are dropped.
When an RP receives an SA message, it checks to see whether there are any members of the advertised groups in its domain by checking to see whether there are interfaces on the group's (*, G) outgoing interface list. If there are no group members, the RP does nothing. If there are group members, the RP sends an (S, G) join toward the source. As a result, a branch of the interdomain source tree is constructed across autonomous system boundaries to the RP.
As multicast packets arrive at the RP, they are then forwarded down its own shared tree to the group members in the RP's domain. The members' DRs then have the option of joining the shortest-path tree (SPT) to the source using standard PIM-SM procedures.
The originating RP continues to send periodic SA messages for the (S, G) state every 60 seconds for as long as the source is sending packets to the group. When an RP receives an SA message, it caches the SA message.
There are four basic MSDP message types, each encoded in their own Type, Length, and Value (TLV) data format. These messages are:
SA Messages
SA Request Messages
SA Response Messages
Keepalive Messages
SA messages are used to advertise active sources in a domain. In addition, these SA messages may contain the initial multicast data packet that was sent by the source. SA messages contain the IP address of the originating RP and one or more (S, G) pairs being advertised. In addition, the SA message may contain an encapsulated data packet.
SA request messages are used to request a list of active sources for a specific group. These messages are sent to an MSDP SA cache that maintains a list of active (S, G) pairs in its SA cache. Join latency can be reduced by using SA request messages to request the list of active sources for a group instead of having to wait up to 60 seconds for all active sources in the group to be readvertised by originating RPs.
SA response messages are sent by the MSDP peer in response to an SA request message. SA response messages contain the IP address of the originating RP and one or more (S, G) pairs of the active sources in the originating RP's domain that are stored in the cache.
Keepalive messages are sent every 60 seconds in order to keep the MSDP session active. If no keepalive messages or SA messages are received for 75 seconds, the MSDP session is reset.
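On IOS releases that support per-peer MSDP keepalive tuning, these timers can be adjusted; a sketch only (the peer address matches the examples later in this chapter, and the 30/60-second values are illustrative, with the hold time required to exceed the keepalive interval):
! Send keepalives every 30 seconds and reset the session after 60 seconds of silence
R5(config)#ip msdp keepalive 192.1.7.7 30 60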
This exchange of sources sending packets to multicast groups happens as a result of the MSDP peering relationship that takes place via the TCP session described previously. For this process to work properly there must be a PIM enabled path between the RPs, and the underlying routing protocol must contain enough information to support the creation of the necessary MSDP peering sessions.
Once this listing of active sources is exchanged, the receiving RP will use them to establish a source path between the two multicast domains to a specific group. This operation and behavior will be demonstrated using the topology provided in Figure 10-1. Observe that we have two statically assigned RPs, one RP for each of two multicast domains. Please observe that R1, R5 and R4 are using R5 as the RP for domain "A", and R6, R7 and R9 are using R7 as the RP for domain "B".
Figure 10-1: MSDP Lab Topology
These SA messages are forwarded away from the RP address using what is referred to as a peer-RPF flooding process. In this method of flooding, the multicast routing table is used to determine which peer would be used to reach the originating RP of a given SA message; this peer is called an MSDP "RPF Peer".
This process takes place by the router looking for a *,G entry in the multicast routing table for the group with any interface in the OIL. The fact that the OIL has any other value than "Null" implies that there is a host in the domain interested in the particular group.
In this situation the RP will trigger an S,G join message that will be sent toward the multicast source. This process emulates the exact mechanism used when a Join/Prune message is received that is addressed to the RP itself. This process is how the source-based tree is created to reach this domain.
Any data packets that subsequently arrive at the RP via this source-based tree will be forwarded down the shared tree inside the domain toward the receivers. At this point, if any leaf routers choose to join the source-based tree, they can do so using the standard PIM-SM mechanisms described in previous chapters.
With this in mind, it is useful to note that if an RP in a domain receives a PIM Join message for a new group (G), the RP should trigger a PIM Join/Prune message for each active S,G entry it learns from its MSDP peers via the SA messages. This process is oftentimes referred to as flood-and-join, a play on flood-and-prune. However, in flood-and-join, if an RP is not interested in a group it can ignore any SA messages for that group.
An
MSDP
speaker
caches
SA
messages.
This
caching
process
allows
MSDP
messages
to
be
stored
locally.
This
mechanism
reduces
join
latency
for
new
receivers
of
a
particular
multicast
group
of
an
originating
RP
which
has
an
existing
MSDP
(S,G)
state
for
that
group.
This
process
paces
the
replication
of
SA
messages
between
MSDP
peers.
An
additional
benefit
to
this
process
is
that
it
makes
diagnosis,
and
debugging
of
various
problems
easier.
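When troubleshooting, the locally cached SA state can be inspected directly on the RP; the commands below are shown without output, since the results depend entirely on the active sources at the time:
R7#show ip msdp sa-cache
R7#show ip msdp count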
Another issue that can affect the formation of MSDP peers is incomplete routing information rendering the IP addresses used during the peering process unreachable.
In the Common Issues with MSDP section, three primary types of problems were identified: Incorrect Peering Configuration, No PIM Enabled Path Between MSDP Peers, and MSDP Passwords and Filters. This section explores these three categories of failure by directing our attention to the commands necessary to identify that a problem exists.
There are three types of devices in this topology: Sources (R1), Hosts (R9), and MSDP peered Static RPs (R5 and R7).
Generate a multicast ping from R1 for the multicast group 224.1.1.1 with high repeat count:
Now that R1 is generating the multicast stream, we need to see if R5 is sending an SA message to its MSDP peer. This is accomplished via show ip msdp peer:
This output indicates that R5 is advertising an SA message for the S,G pair of 224.1.1.1, 172.16.15.1 to the peer located at 192.1.7.7. What is the status of the TCP session to this MSDP Peer?
This output indicates that the status of the connection is: Down. We also see that R5 is using the connection source 192.1.5.5. It would be worthwhile to see the output of the same command on R7:
R7#show ip msdp peer 192.1.5.5
MSDP Peer 192.1.5.5 (?), AS ?
Connection status:
State: Down, Resets: 0, Connection source: none configured
Uptime(Downtime): 00:06:55, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 00:06:55 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled
The status on this MSDP RP is also down, but observe that the connection source is not configured. This can be confirmed with show run:
R7#show run | inc msdp
ip msdp peer 192.1.5.5
This indicates that R7 is not using the correct source for the MSDP peering. It is corrected by adding the connect-source keyword:
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#ip msdp peer 192.1.5.5 connect-source loopback0
%MSDP-5-PEER_UPDOWN: Session to peer 192.1.5.5 going up
R7(config)#end
We see that R7 now has a peering with 192.1.5.5 (R5). Is R7 receiving the SA Message for the group 224.1.1.1, 172.16.15.1 now?
The SA Message has been learned. The question now is, "Is the S,G pair added to the multicast routing table of R7?"
This behavior is normal based on our discussion of the mechanisms used by MSDP. The S,G will only be added to the multicast routing table if a host joins the multicast group 224.1.1.1:
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.1.1.1
R9(config-if)#end
Generate a multicast ping from R1 for the multicast group 224.9.9.9 with high repeat count:
We see that the first packet actually succeeds, but all packets after that fail. Is R5 sending the SA Message? This is accomplished via show ip msdp peer:
This output indicates that R5 is advertising an SA message for the S,G pair of 224.9.9.9, 172.16.15.1 to the peer located at 192.1.7.7. What is the status of the TCP session to this MSDP Peer?
This output indicates that the status of the connection is: Up. We also see that R5 is using the connection source 192.1.5.5. It would be worthwhile to see the output of the same command on R7:
R7#show ip msdp peer 192.1.5.5
MSDP Peer 192.1.5.5 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.7.7)
Uptime(Downtime): 00:17:48, Messages sent/received: 17/21
Output messages discarded: 0
Connection and counters cleared 00:17:49 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 2
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled
The status on this MSDP RP is also: Up, and the connection source is correctly configured. Is R7 receiving the SA Message?
R7#show ip msdp peer 192.1.5.5 accepted-SAs
MSDP SA accepted from peer 192.1.5.5 (?)
Is the S,G entry making it into the multicast routing table on R7?
The S,G entry is not added to the multicast routing table of R7 because R9 is not a member of the group. This can be corrected via ip igmp join-group on R9:
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
This means that the packets are not making it from R5 to R7. This situation is most likely associated with a problem in the multicast routing and forwarding plane. This could be an RPF issue, an asynchronous routing issue, or a corrupt multicast routing table between the MSDP peers. All of these issues can be isolated via mtrace (used bidirectionally):
This output informs us that on R2, the interface with the IP address 172.16.26.2 is not enabled with ip pim sparse-mode, as evidenced by show run interface:
To correct this problem we will apply ip pim sparse-mode under the GigabitEthernet0/1 interface of R2:
R2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#interface GigabitEthernet0/1
R2(config-if)#ip pim sparse-mode
R2(config-if)#end
R2#
%PIM-5-NBRCHG: neighbor 172.16.26.6 UP on interface GigabitEthernet0/1
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.16.26.6 on interface
GigabitEthernet0/1
We see the PIM neighbor come up on GigabitEthernet0/1. Are the pings successful from R1?
The pings are successful thus demonstrating the issue has been corrected.
Generate a multicast ping from R1 for the multicast group 224.3.3.3 with high repeat count:
This output indicates that the status of the connection is: Up. We also see that R5 is using the connection source 192.1.5.5. It would be worthwhile to see the output of the same command on R7:
R7#show ip msdp peer 192.1.5.5
MSDP Peer 192.1.5.5 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.7.7)
Uptime(Downtime): 00:07:08, Messages sent/received: 8/9
Output messages discarded: 0
Connection and counters cleared 00:07:09 ago
SA Filtering:
Input (S,G) filter: everything, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
An MSDP sa-filter is blocking all inbound SA messages sourced from 192.1.5.5. To correct this issue, the filter should be removed:
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#no ip msdp sa-filter in 192.1.5.5
R7(config)#end
Has the pair been added to the multicast routing table of R7?
There is no member of this group. To correct this issue, have R9's interface FastEthernet0/1 join the group:
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.3.3.3
R9(config-if)#end
Has the S,G pair been added to the multicast routing table on R7 now?
show COMMAND: show ip msdp peer ip_address
This command displays detailed information about Multicast Source Discovery Protocol (MSDP) peers.
Where:
EXAMPLE OUTPUT:
R7#show ip msdp peer 192.1.5.5
MSDP Peer 192.1.5.5 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.7.7)
Uptime(Downtime): 00:07:19, Messages sent/received: 7/10
Output messages discarded: 0
Connection and counters cleared 00:07:28 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
This command, used with the advertised-SAs keyword, displays the (S,G) pairs advertised to the specified Multicast Source Discovery Protocol (MSDP) peer.
Where:
EXAMPLE OUTPUT:
R5#show ip msdp peer 192.1.7.7 advertised-SAs
MSDP SA advertised to peer 192.1.7.7 (?) from mroute table
R5#
debug COMMAND: debug ip msdp detail
EXAMPLE OUTPUT:
R5#debug ip msdp detail
MSDP Detail debugging is on
R5#
MSDP(0): Received 3-byte TCP segment from 192.1.7.7
MSDP(0): Append 3 bytes to 0-byte msg 12 from 192.1.7.7, qs 1
R5#
MSDP(0): Sent entire mroute table, mroute_cache_index = 0, Qlen = 0
MSDP(0): start_index = 0, sa_cache_index = 0, Qlen = 0
MSDP(0): Sent entire sa-cache, sa_cache_index = 0, Qlen = 0
R5#
MSDP(0): Received 3-byte TCP segment from 192.1.7.7
MSDP(0): Append 3 bytes to 0-byte msg 13 from 192.1.7.7, qs 1
R5#
The network topology used in this section is shown in Figure 10-5 below:
Trouble Ticket #1
Your supervisor has brought to your attention that the RP in Multicast Domain "A" is not forming an MSDP peering relationship with the RP in Multicast Domain "B". You have been instructed to correct the issue causing this problem. You must use the most secure method possible to accomplish this task.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that R7 is failing to accept SA Messages from R5 for the multicast group 224.1.1.1.
What is the status of the MSDP peering connection between R5 and R7?
The output clearly indicates that the connection status with the peer is down, thus verifying that the problem exists.
Step 2 - Fault Isolation:
The next course of action is to use the show ip msdp peer command on R7 and compare the results with those seen on R5.
R7#show ip msdp peer 192.1.5.5
MSDP Peer 192.1.5.5 (?), AS ?
Connection status:
State: Listen, Resets: 0, Connection source: Loopback0 (192.1.7.7)
We see that the MSDP peer connection with 192.1.7.7 is now up; additionally, we see a console message notifying us that:
Generate a multicast stream for the group 224.1.1.1 on R1 with a high repeat count:
R7#
R7 has no record of accepting the SA for 224.1.1.1, thus proving that the problem exists. R5 is sending the SA Messages. We can use show ip msdp peer on R7 to see what is happening to these messages:
Note that SA Filtering is taking place on R7. Specifically, an Input S,G filter has been applied using the extended access-list 100. What is the nature of the access-list being called?
This ACL is blocking any SA Messages for the group 224.1.1.1 sourced from any sender. This has isolated our fault.
Step 3 - Fault Remediation:
In this scenario, the ip access-list extended 100 command needs to be used to remove sequence number 10 from the access-list:
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#ip access-list extended 100
R7(config-ext-nacl)#no 10
R7(config-ext-nacl)#end
Chapter 11: Anycast-RP
In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and functionality of the Anycast-RP protocol are examined in great depth. Once the operational characteristics of this important protocol are detailed completely, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for the Anycast-RP protocol.
The chapter begins with a thorough review of Anycast-RP, and then quickly launches into an exhaustive analysis of the art of troubleshooting this multicast support protocol. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
Originally developed for interdomain multicast applications, Anycast-RP provides redundancy and load-sharing capabilities for the critical RP role within multicast. While the original intent was interdomain multicast, enterprises today typically use Anycast-RP for configuring a PIM-SM network to meet fault tolerance requirements within a single multicast domain.
With Anycast-RP, we configure two or more rendezvous points with the same IP address and 32-bit mask on loopback interfaces. We then configure all downstream routers in the multicast domain with this Anycast-RP IP address. IP routing automatically selects the closest RP for each source and receiver based on the IP routing protocol information. Careful placement of the Anycast-RP devices can help ensure adequate load balancing.
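As a minimal sketch of this arrangement (the shared address matches the 192.1.100.100 Anycast-RP address used later in this chapter, but the interface number, IGP, and static RP method shown here are illustrative assumptions rather than the lab's exact configuration):
! On each Anycast-RP router: an identical /32 on a loopback, enabled for PIM
interface Loopback100
 ip address 192.1.100.100 255.255.255.255
 ip pim sparse-mode
!
! Advertise the shared address into the IGP (OSPF shown as an example)
router ospf 1
 network 192.1.100.100 0.0.0.0 area 0
!
! On every router in the multicast domain (including the RPs)
ip pim rp-address 192.1.100.100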
In Anycast-RP, MSDP shares information between the redundant RPs. This is important because a source may register with one RP and receivers may join a different RP. When a source registers with one RP, an SA message will be sent to the other RPs informing them that there is an active source for a particular multicast group. The result is that each RP will know about the active sources in the area of the other RPs. If any of the RPs were to fail, IP routing would converge, and one of the remaining RPs would become the active RP. Receivers join the new RP and connectivity is maintained.
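A minimal sketch of the MSDP peering between two Anycast-RPs follows; the unique loopback addresses shown (192.1.4.4 and 192.1.6.6, the R4 and R6 loopbacks referenced later in this chapter) are used only as an illustration, and the originator-id line reflects a common deployment practice rather than a required element of the lab configuration:
! On R4 (unique address 192.1.4.4 on Loopback0); R6 mirrors this toward 192.1.4.4
ip msdp peer 192.1.6.6 connect-source Loopback0
ip msdp originator-id Loopback0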
The concept of Anycast-RP between multiple devices presents an attractive solution to deploying RPs in a multicast domain. In Anycast-RP, all RPs are configured with identical IP addresses, typically assigned to a loopback interface. This address is then advertised into the native IGP protocol employed in the routing environment. Often, these routes will be advertised as external in nature, in order to more readily allow routing metrics to be manipulated to obtain the desired routing behavior or RP selection.
This process of advertising these matching prefixes into the unicast routing domain allows devices to utilize the RP with the lowest IGP cost to build multicast trees. There will normally be several trees per group, and at least one per RP.
The issue in this situation is that, by default, RPs do not natively exchange information regarding PIM Registrations or PIM Joins for multicast groups. The solution to this intra-domain conundrum is the same as that used between RPs in inter-domain multicast deployments. Deploying MSDP between these devices will allow this exchange to take place via the TCP peering session formed between the RPs. Once configured, SA messages will notify the other RP when a source is active. This ensures that all RPs are aware of all active sources so that they can facilitate joining the SPT to receive a given multicast stream directly.
We will use Figure 11-1 to illustrate the operational process employed by Anycast-RP.
Figure 11-1: Anycast-RP
This process, as mentioned earlier in this section, is the same as that used in inter-domain multicast.
Step One - The first-hop router unicasts a PIM Register Message (containing an encapsulated multicast packet) to the closest RP, in this case R5. This process can be observed by generating a ping from R1 to a multicast group and observing the output of debug ip pim on R4:
R4#debug ip pim
PIM debugging is on
R4#
Now we will look at R4 to see the Register Message arrive from R5:
R4#
PIM(0): Received v2 Register on FastEthernet0/1 from 172.16.45.5
for 172.16.15.1, group 224.10.10.10
PIM(0): Check RP 192.1.100.100 into the (*, 224.10.10.10) entry
PIM(0): Send v2 Register-Stop to 172.16.45.5 for 172.16.15.1, group 224.10.10.10
Step Two - R4 will then advertise an MSDP SA Message to R6, notifying it that there is an active source (S) for the group (G). The first SA message contains the encapsulated multicast packet; future SA messages for this group will not carry this packet.
Step Three - If R6 receives the SA messages and it has a receiver for that particular group, it joins the SPT toward the source by sending an (S,G) join.
We can see if the SA Message arrives by using the show ip msdp peer command on R6:
We see that the SA message arrives, and as a result of R9 having joined the multicast group 224.10.10.10, we see the following Join Message go out with the Shortest Path Bit set:
R6#
PIM(0): Insert (172.16.15.1,224.10.10.10) join in nbr 172.16.26.2's queue
PIM(0): Building Join/Prune packet for nbr 172.16.26.2
PIM(0): Adding v2 (172.16.15.1/32, 224.10.10.10), S-bit Join
PIM(0): Send v2 join/prune to 172.16.26.2 (FastEthernet0/1)
Step Four - R4 will forward the multicast packet it receives in the SA message down the shared tree to the hosts.
Step Five - When the last-hop router receives the multicast packet, it joins the SPT unless it is configured not to do so (ip pim spt-threshold infinity).
In an MSDP environment the behavior is the same when the first encapsulated multicast packet arrives in the SA Message.
After receiving the SA message, the RP extracts the multicast data and sends it down the RPT to the DRs at the receiver side. The MSDP RP acts as a transfer station for all multicast packets. The whole process involves the following issues:
Multicast packets could be delivered via the MSDP peering relationship along a path that might not be the shortest path.
An increase in multicast traffic could add potential congestion on the RP, increasing the risk of failure.
To solve these issues, MSDP permits the normal PIM-SM process of allowing the DR at the receiver side to initiate the SPT switchover. After receiving the first multicast packet, the receiver-side DR initiates an SPT switchover process, as follows:
The receiver-side DR sends an (S,G) join message hop by hop toward the multicast source. When the join message reaches the source-side DR, all the routers on the path have installed the (S,G) entry in their forwarding tables, and thus an SPT branch is established.
When the multicast packets travel to the router where the RPT and the SPT deviate, that router drops the multicast packets received from the RPT and sends an RP-bit prune message hop by hop to the RP. After receiving this prune message, the RP sends a prune message toward the multicast source (assuming only one receiver exists). Thus, SPT switchover is completed.
Multicast data is then sent directly from the source to the receivers along the SPT.
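If the shared-tree delivery path is acceptable and switchover is not desired, the receiver-side DR can instead be told to stay on the RPT using the spt-threshold command mentioned in Step Five; a one-line sketch (the router name is illustrative):
R7(config)#ip pim spt-threshold infinity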
In Chapter 10: Multicast Source Discovery Protocol (MSDP), this text discussed the different configuration commands needed to enable MSDP peering between RPs in different multicast domains. This same process is the one used in intra-domain deployments. More often than not, there are issues associated with applying these commands and parameters. Most commonly, configuration or typographical errors are the cause of failures in this type of multicast deployment.
MSDP configuration is very much like BGP in that it relies on TCP session initiation to operate. This means that it is necessary to use the connect-source keyword to specify an interface other than the one used to directly reach the MSDP peer. These source interfaces must match on each MSDP-enabled RP.
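As a sketch of this symmetry, using the R5 and R7 loopback addresses seen earlier in this chapter (192.1.5.5 and 192.1.7.7): each side must peer with the exact address the other side sources its TCP session from.
! On R5
ip msdp peer 192.1.7.7 connect-source Loopback0
! On R7
ip msdp peer 192.1.5.5 connect-source Loopback0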
In order for MSDP to work, there must be a PIM-enabled path on all devices between the MSDP-peered RPs. This necessity is part of the successful operation of the protocol because MSDP does not provide a replacement for PIM. PIM is still required for the successful transfer of multicast packets, and is part of the data plane RPF check mechanism.
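Two quick checks for this condition are a hop-by-hop mtrace toward the remote peering address and a look at the PIM neighbor table on each transit router; the sketch below simply reuses the R5/R7 peering addresses from earlier in this chapter:
R5#mtrace 192.1.7.7
R5#show ip pim neighbor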
In order to maintain some level of secure deployment, MSDP is designed to leverage MD5 digests as part of its authentication process. These digests must match exactly and are easily misconfigured. Additionally, MSDP employs a number of filter mechanisms to block communications. Beware of issues where these have been deployed incorrectly or in situations where previous configurations have not been completely removed.
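Both mechanisms appear in the troubleshooting scenarios later in this book; a brief sketch of what to look for in the configuration (the peer addresses and password string mirror those scenarios and are illustrative here):
! MD5 authentication - the password must match on both peers
ip msdp password peer 192.1.6.6 CISCO
! An inbound SA filter with no list specified silently drops every SA from that peer
ip msdp sa-filter in 192.1.5.5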
The process explained above is intended to make you aware of the fact that the routers in your domain may select RPs that you do not expect them to choose, based on the dynamic nature of the routing protocols you employ on the network. Three such circumstances that are commonly encountered are:
More Than One Routing Protocol - Different administrative distances can have unexpected effects on the RP selection process.
Matching Metrics To Each Of The RPs - This will result in a router alternating between the RPs. (A less than favorable situation.)
Unequal Cost Load Balancing - This situation will introduce a complex weighted round robin selection process that can make troubleshooting exceedingly difficult.
The MSDP protocol will always forward Join/Prune messages to the appropriate RP, but if you do not know the identity of the RP in use, you will have issues troubleshooting any protocol faults. In the Anycast-RP Sample Troubleshooting Scenarios section that follows, troubleshooting these issues is demonstrated.
For each problem, the text demonstrates how to quickly and efficiently verify each symptom, isolate the cause, and remediate the issue.
In the Common Issues with Anycast-RP section, two primary types of problems were identified: MSDP Peering Issues and Unicast Routing Problems. This section explores these two categories of failure by directing our attention to the commands necessary to identify that a problem exists.
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: enabled
Observe that the output of this command indicates that the connection state is: Down. More often than not, a direct comparison between the two RPs in a domain will reveal the cause of this type of issue:
Note that R6 is running MD5 authentication and R4 is not. This can be verified via show run:
R4#sh run | inc msdp password
ip msdp password peer 192.1.6.6 CISCO
Repeating the show ip msdp peer command will now tell us that the connection is up:
In this scenario, we will generate a multicast stream for the group 224.9.9.9 on R1:
The ping is not successful. We need to determine if R4 is sending the SA Messages to its peer:
The SA messages are accepted by R6. Is the S,G entry added to R6's multicast routing table?
The entry is not added. Oddly enough, we do not even see a *,G for the group. Has R9 joined 224.9.9.9?
R7 is learning that R9 has joined the group and has also added the *,G entry, with the interface facing R9 in the OIL for this entry. The odd thing is that the RPF nbr is 0.0.0.0. We know from past experience that this indicates that the router either thinks that it is the RP or that the address configured for the RP is invalid. We know that R7 should not be the RP, but we still need to verify:
R7#show ip pim rp
Group: 224.9.9.9, RP: 192.1.100.100, next RP-reachable in 00:00:03
Group: 224.0.1.40, RP: 192.1.100.100, next RP-reachable in 00:00:02
We see that the RP is 192.1.100.100 which is correct. Can R7 reach this address?
R7#ping 192.1.100.100
We see that R7 thinks the prefix 192.1.100.100 is reachable via its Loopback100 interface. This means that the address resides on this router, as evidenced by show run:
This interface needs to be removed because it is causing an issue with the unicast reachability to the RP.
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#no interface loopback 100
% Not all config may be removed and may reappear after reactivating the logical-
interface/sub-interfaces
R7(config)#
%LINK-5-CHANGED: Interface Loopback100, changed state to administratively down
%LINEPROTO-5-UPDOWN: Line protocol on Interface Loopback100, changed state to down
R7(config)#end
show COMMAND: show ip msdp peer ip_address
This command displays detailed information about Multicast Source Discovery Protocol (MSDP) peers.
Where:
EXAMPLE OUTPUT:
R6#show ip msdp peer 192.1.5.5
MSDP peer 192.1.5.5 not found
R6#show ip msdp peer 192.1.4.4
MSDP Peer 192.1.4.4 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.6.6)
Uptime(Downtime): 01:17:18, Messages sent/received: 77/86
Output messages discarded: 0
Connection and counters cleared 01:17:49 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
This command, used with the advertised-SAs keyword, displays the (S,G) pairs advertised to the specified Multicast Source Discovery Protocol (MSDP) peer.
Where:
EXAMPLE OUTPUT:
R4#show ip msdp peer 192.1.6.6 advertised-SAs
MSDP SA advertised to peer 192.1.6.6 (?) from mroute table
R4#
debug COMMAND: debug ip msdp detail
EXAMPLE OUTPUT:
R4#debug ip msdp detail
MSDP Detail debugging is on
R4#
MSDP(0): Received 3-byte TCP segment from 192.1.6.6
MSDP(0): Append 3 bytes to 0-byte msg 92 from 192.1.6.6, qs 1
R4#
MSDP(0): Sent entire mroute table, mroute_cache_index = 0, Qlen = 0
MSDP(0): start_index = 0, sa_cache_index = 0, Qlen = 0
MSDP(0): Sent entire sa-cache, sa_cache_index = 0, Qlen = 0
R4#
MSDP(0): Received 3-byte TCP segment from 192.1.6.6
MSDP(0): Append 3 bytes to 0-byte msg 93 from 192.1.6.6, qs 1
R4#
The network topology used in this section is shown in Figure 11-6 below:
Figure 11-6: The Chapter Challenge Topology
Trouble Ticket #1
Your supervisor has brought to your attention that pings sourced from the FastEthernet0/0 interface of R1 for the group 224.9.9.9 are not reaching receivers on R9's VLAN79 segment. Using this group, you have been instructed to isolate and correct this issue.
The pings are not successful. This verifies that the problem actually exists.
Step 2 - Fault Isolation:
The next course of action is to verify that R4 is sending SA messages to R6.
R4#show ip msdp peer 192.1.6.6 advertised-SAs
MSDP SA advertised to peer 192.1.6.6 (?) from mroute table
The SA messages are being sent. Are they being accepted by the MSDP peer?
R6#show ip msdp peer 192.1.4.4 accepted-SAs
MSDP SA accepted from peer 192.1.4.4 (?)
R6 is accepting the SA messages. Is the S,G pair being added to the multicast routing table for the group (172.16.15.1, 224.9.9.9) on R6?
We know that we will not add the S,G entry unless we have a *,G for the particular group. This *,G is missing. Has R9 joined the group?
R9#show ip igmp interface Fa0/1
FastEthernet0/1 is up, line protocol is up
Internet address is 172.16.79.9/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
IGMP querier timeout is 120 seconds
IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is not set
IGMP activity: 2 joins, 0 leaves
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.79.9 (this system)
IGMP querying router is 172.16.79.7
Multicast groups joined by this system (number of users):
224.9.9.9(1) 224.0.1.40(1)
We see that R9 has joined the group 224.9.9.9; this IGMP version 2 join information should be sent to the IGMP querying router (172.16.79.7). This means that R7 should have the *,G entry:
R7#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group,
V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
Note that we see the *,G entry for the group. However, observe the RPF nbr of 0.0.0.0. This means the router thinks it is the RP or that the RP address is invalid. What RPF interface will be used to check the multicast traffic learned from this RP address?
R7#show ip rpf 192.1.100.100
RPF information for ? (192.1.100.100) failed, no route exists
This output tells us that no route exists. We need to take a closer look at the routing table of R7:
R7#show ip route 192.1.100.100
Routing entry for 192.1.100.0/24
Known via "connected", distance 0, metric 0 (connected, via interface)
Redistributing via eigrp 100
Routing Descriptor Blocks:
* directly connected, via Loopback100
Route metric is 0, traffic share count is 1
This tells us that the network 192.1.100.0/24 resides on this router on interface Loopback100, as evidenced by show run:
Chapter 12: Multiprotocol-BGP (MP-BGP) and Interdomain Multicast
In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and functionality of the Multiprotocol-BGP (MP-BGP) protocol are examined in great depth. Once the operational characteristics of this important protocol are detailed completely, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for Multiprotocol-BGP.
The chapter begins with a thorough review of Multiprotocol-BGP, and then quickly launches into an exhaustive analysis of the art of troubleshooting this multicast support protocol. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
Multiprotocol-BGP is actually quite simple. By extending the Network Layer Reachability Information (NLRI) that BGP can carry, the protocol is capable of carrying IP multicast routes. With MP-BGP, BGP ends up carrying two sets of routes: one set for unicast routing and one set for multicast routing. Protocol Independent Multicast (PIM) uses the routes associated with multicast routing to build its multicast data distribution trees, as described in previous chapters of this book.
Multiprotocol-BGP allows you to have a unicast routing topology different from the multicast routing topology, thus providing more control over the network and its resources. In this situation, both MP-BGP and MSDP will connect the individual PIM-SM domains. MP-BGP in this scenario provides a unique capability: it serves as a policy-based inter-domain routing protocol. This means that the MP-BGP protocol will be used for choosing the best paths through an IP internetwork.
As discussed in Chapter 10: MSDP, MSDP is the protocol that enables RPs from different domains to exchange information about active sources. Having spent an entire chapter covering the operation and troubleshooting of MSDP, we will not go into any detail regarding the capabilities or deployment of that protocol here. However, MP-BGP is new to us and will need to be discussed to obtain a better understanding of what capabilities it brings to the inter-domain multicast topology that we will be referencing in Figure 12-2.
Figure 12-2: IGMP Lab Topology
MP-BGP
The biggest advantage that MP-BGP brings to an inter-domain multicast deployment is the ability for a Service Provider to selectively determine what prefixes they will use for RPF checks in the multicast environment. We have discussed in many of the previous chapters the importance of the RPF mechanism that devices use to create multicast forwarding trees in a loop-free and efficient manner. This process determines what trees will be used to successfully route multicast packets between sources and receivers.
MP-BGP defines Multiprotocol Extensions for BGP version 4. As such, it is an extension of the BGP protocol itself that defines all the administrative mechanisms necessary to allow service providers and customers to independently manipulate their inter-domain routing environment. MP-BGP allows tools traditionally used in unicast BGP to now be employed for the benefit of inter-domain multicast needs. These tools include inter-AS mechanisms that filter and control routing, like route-maps. This means that any network running iBGP or eBGP protocols can now use MP-BGP to apply multiple policy control mechanisms traditionally used in BGP to specify routing and forwarding policies for multicast.
The two primary attributes that we will discuss in this chapter are MP_REACH_NLRI and MP_UNREACH_NLRI. These two attributes were introduced in BGP version 4 to create a simple way for the protocol to carry two categories of routing information: unicast routing and multicast routing. This means that the routes communicated via MP-BGP that are associated with multicast routing are the routes used for RPF checking at the inter-domain borders.
This extension to the traditional BGP version 4 protocol has one huge advantage: MP-BGP allows an inter-domain network to support non-congruent unicast and multicast topologies. Almost as importantly, when the unicast and multicast topologies are congruent, MP-BGP can support different policies for each. This provides unparalleled policy-based inter-domain scalability for our multicast and unicast routing protocols.
The operational deployment of MP-BGP for the purpose of allowing inter-domain multicast functionality follows a simple five-step approach. This approach provides not only an organized method for deploying the protocol, but also a controlled and optimized technique for isolating and troubleshooting issues when MP-BGP is used in unison with MSDP.
Step 1 - Configure MP-BGP to exchange unicast and multicast routing information
This process means that we will need to configure all BGP-speaking devices in the topology to utilize the MP-BGP multicast extensions. Specifically, we will need to use the multicast Address Family Identifier:
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#router bgp 154
R1(config-router)#address-family ipv4 multicast
R1(config-router-af)#neighbor 172.16.15.5 activate
R1(config-router-af)#end
R5#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#router bgp 154
R5(config-router)#address-family ipv4 multicast
R5(config-router-af)#neighbor 172.16.15.1 activate
R5(config-router-af)#neighbor 172.16.45.4 activate
R5(config-router-af)#neighbor 172.16.15.1 next-hop-self
R5(config-router-af)#neighbor 172.16.45.4 next-hop-self
R5(config-router-af)#neighbor 172.16.15.1 route-reflector-client
R5(config-router-af)#neighbor 172.16.45.4 route-reflector-client
R5(config-router-af)#end
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#router bgp 154
R4(config-router)# address-family ipv4
R4(config-router-af)# redistribute ospf 1
R4(config-router-af)#exit
R4(config-router)#address-family ipv4 multicast
R4(config-router-af)#neighbor 172.16.45.5 activate
R4(config-router-af)#neighbor 172.16.45.5 next-hop-self
R4(config-router-af)#redistribute ospf 1
R4(config-router-af)#neighbor 172.16.46.6 activate
R4(config-router-af)#end
Observe that R4 in our topology is the boundary device between AS154 and AS679. Additionally, this device is the RP for the AS154 domain. This means that R4 will require NLRI for the other RP located in AS679. To accomplish this, we will redistribute OSPF 1 into both the ipv4 unicast and multicast address families. This will propagate the reachability information from one AS to the other. Now we will repeat this process in AS679.
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#router bgp 679
R6(config-router)# address-family ipv4
R6(config-router-af)#redistribute eigrp 100
R6(config-router-af)#exit
R6(config-router)#address-family ipv4 multicast
R6(config-router-af)#neighbor 172.16.67.7 activate
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#router bgp 679
R7(config-router)#address-family ipv4 multicast
R7(config-router-af)#neighbor 172.16.67.6 activate
R7(config-router-af)#neighbor 172.16.67.6 next-hop-self
R7(config-router-af)#neighbor 172.16.67.6 route-reflector-client
R7(config-router-af)#neigh 172.16.79.9 activate
R7(config-router-af)#neigh 172.16.79.9 next-hop-self
R7(config-router-af)#neigh 172.16.79.9 route-reflector-client
R7(config-router-af)#end
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#router bgp 679
R9(config-router)#address-family ipv4 multicast
R9(config-router-af)#neighbor 172.16.79.7 activate
R9(config-router-af)#end
Once MP-BGP has been enabled throughout the domain, we need to verify that information is being exchanged for both AFIs. This is done by using show ip bgp ipv4 for each AFI:
R4#show ip bgp ipv4 multicast regexp ^679_
BGP table version is 11, local router ID is 192.1.100.100
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
We clearly see that R4 is learning about all the addresses in AS679; now we need to see if R6 is learning all four of the prefixes from AS154:
We see that the multicast AFI is working correctly, but we need to keep in mind that this is the multicast AFI that is used for RPF checks, not the unicast AFI that is used for reachability and routing. To verify that we have reachability between the two domains, we will perform tests at each of the most distant ends. First, we will use show ip bgp ipv4 unicast, and then we will use traceroute:
It is clear that the unicast AFI has learned all the information needed for reachability and forwarding. However, to be thorough, we will repeat this test from R9:
Configure peering sessions from the local RP to the RP in another AS using the commands referenced in the Technology Review section of this chapter. Since we have a BGP peering session with this MSDP peer, it is necessary to use the IP address used to form the eBGP peering relationship for MSDP as well.
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#ip msdp peer 172.16.46.6 remote-as 679
R4(config)#
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#ip msdp peer 172.16.46.4 remote-as 154
R6(config)#
%MSDP-5-PEER_UPDOWN: Session to peer 172.16.46.4 going up
R6(config)#end
This configuration can easily be verified using the show ip msdp peer command:
By default, all SA messages that are received will be forwarded to an MSDP peer. There is, however, an access-list argument. This argument is an extended access list that describes particular S,G pairs that are allowed to pass through the filter. Additionally, a route-map map-name keyword and argument can also be specified, which will allow filtering based on the match criteria specified in the route map. If these multiple criteria are true, a permit from the route map allows routes to pass through the filter. A deny statement will filter routes. If both keywords are used, all conditions must be true to pass or filter any specific S,G pairs in outgoing SA messages. If no keyword is specified, all S,G pairs will be filtered.
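A sketch of an outbound SA filter combining both arguments follows; the peer address, ACL number, and route-map name are illustrative, and the extended ACL lists the multicast source as the ACL source and the group as the ACL destination:
R4(config)#access-list 124 permit ip host 172.16.15.1 host 224.9.9.9
R4(config)#route-map MSDP-SA-OUT permit 10
R4(config-route-map)#match ip address 124
R4(config-route-map)#exit
R4(config)#ip msdp sa-filter out 172.16.46.6 list 124 route-map MSDP-SA-OUT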
The simplest method suggested to discover if MSDP peers are operating as anticipated is to use the show ip msdp peer command used previously. This command tells us if the sessions are up, but another useful command is show ip msdp sa-cache:
In the Common Issues with MP-BGP section, three primary types of problems were identified: Peer Rejects all MSDP SA Messages, Failure to Advertise the MSDP Peer Network, and Using the Incorrect Address to Form MSDP Peers. This section explores these three categories of failure by directing our attention to the commands necessary to identify that a problem exists.
Now we will check to see if R4 is sending an SA message toward its MSDP peer:
This output indicates that R4 is sending the SA messages; we next need to verify if they arrive at R6:
R6#
R6 is not accepting any of the SA messages from R4. The first method employed to ascertain why will be to look at the MSDP peering arrangement between R4 and R6. This is done with the show ip msdp peer command:
The output on R6 indicates that an input S,G filter has been applied that blocks "everything". Removing this filter will correct the issue we have encountered.
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
This situation can be verified by generating a ping to a test multicast address to see if SA messages are sent to a peer. We will do this by starting the ping from R1:
Now we will check to see if R4 is sending an SA message toward its MSDP peer:
This output indicates that R4 is sending the SA messages; we next need to verify if they arrive at R6:
R6 is accepting the SA messages. The next step is to determine if the S,G pair for the group we see in the SA message is added to the multicast routing table:
This output tells us that the S,G group is not added. Knowing that we have the *,G entry for the group, the S,G should have been added. Perhaps there is a filter applied on either R4 or R6:
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled
There are no filters applied, and the MSDP peering between the RPs is working as expected. The issue we are looking at is starting to look very much like an RPF failure. Knowing that the group 224.9.9.9 is being sourced from 172.16.15.1, we can verify the RPF status by using the show ip rpf command:
There is no RPF interface toward this source address. We know that we should be learning this prefix via MP-BGP, so we will look at the routing tables for the AFIs we have running in this topology. First, we know that RPF information is exchanged using the ipv4 multicast AFI, so we will look there first:
We only see the address originating in our own AS, as indicated by the Weight attribute of 32768. We know these routes are redistributed into the MP-BGP process because of the incomplete (?) Origin code. What about the unicast AFI?
R6#
There are no routes at all in the ipv4 unicast AFI. Are any prefixes being advertised to us from R4?
Nothing is being advertised from R4 for either of the AFIs. We can look at the BGP configuration via the show run command:
no auto-summary
no synchronization
exit-address-family
We see that OSPF routes are not being redistributed into the unicast AFI. This can be corrected via:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#router bgp 154
R4(config-router)#address-family ipv4 unicast
R4(config-router-af)#redistribute ospf 1
R4(config-router-af)#end
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled
This output informs us that the MSDP peering is down. We see that authentication is not configured, and we see that the MSDP peering addresses used were the Loopback0 interfaces, but no connection source was specified. This might initially look like our problem, but closer examination of the output tells us that we are using a remote AS. Knowing that MSDP must use the same IP address used to form the MP-BGP peering, we will need to change this configuration.
What address was used to form the MP-BGP peering session between R4 and R6? The frame relay point-to-point interface was used, so we will need to use those addresses for the MSDP peering:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#no ip msdp peer 192.1.6.6 remote-as 679
R4(config)#ip msdp peer 172.16.46.6 remote-as 679
R4(config)#end
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#no ip msdp peer 192.1.4.4 remote-as 154
R6(config)#ip msdp peer 172.16.46.4 remote-as 154
R6(config)#end
R6#
%MSDP-5-PEER_UPDOWN: Session to peer 172.16.46.4 going up
The console messages tell us the MSDP session has been formed. As a final test, are pings from R1 successful?
show COMMAND: show ip msdp peer ip_address
This command displays detailed information about Multicast Source Discovery Protocol (MSDP) peers.
Where:
EXAMPLE OUTPUT:
R6#show ip msdp peer 192.1.5.5
MSDP peer 192.1.5.5 not found
R6#show ip msdp peer 192.1.4.4
MSDP Peer 192.1.4.4 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.6.6)
Uptime(Downtime): 01:17:18, Messages sent/received: 77/86
Output messages discarded: 0
Connection and counters cleared 01:17:49 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
This command, used with the advertised-SAs keyword, displays the (S,G) pairs advertised to the specified Multicast Source Discovery Protocol (MSDP) peer.
Where:
EXAMPLE OUTPUT:
R4#show ip msdp peer 192.1.6.6 advertised-SAs
MSDP SA advertised to peer 192.1.6.6 (?) from mroute table
R4#
show COMMAND: show ip bgp ipv4 multicast summary
EXAMPLE OUTPUT:
R4#show ip bgp ipv4 multicast summary
BGP router identifier 192.1.4.4, local AS number 154
BGP table version is 12, main routing table version 12
9 network entries using 1188 bytes of memory
9 path entries using 432 bytes of memory
7/6 BGP path/bestpath attribute entries using 1176 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
Bitfield cache entries: current 2 (at peak 2) using 64 bytes of memory
show COMMAND: show ip bgp ipv4 multicast
This command displays detailed information about ipv4 multicast learned prefixes.
EXAMPLE OUTPUT:
R4#show ip bgp ipv4 multicast
BGP table version is 12, local router ID is 192.1.4.4
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
debug COMMAND: debug ip msdp detail
EXAMPLE OUTPUT:
R4#debug ip msdp detail
MSDP Detail debugging is on
R4#
MSDP(0): Received 3-byte TCP segment from 192.1.6.6
MSDP(0): Append 3 bytes to 0-byte msg 92 from 192.1.6.6, qs 1
R4#
MSDP(0): Sent entire mroute table, mroute_cache_index = 0, Qlen = 0
MSDP(0): start_index = 0, sa_cache_index = 0, Qlen = 0
MSDP(0): Sent entire sa-cache, sa_cache_index = 0, Qlen = 0
R4#
MSDP(0): Received 3-byte TCP segment from 192.1.6.6
MSDP(0): Append 3 bytes to 0-byte msg 93 from 192.1.6.6, qs 1
R4#
debug COMMAND: debug ip bgp ipv4 multicast
This command displays detailed information regarding bgp ipv4 multicast operations.
EXAMPLE OUTPUT:
R4#debug ip bgp ipv4 multicast
BGP debugging is on for address family: IPv4 Multicast
BGPNSF state: 172.16.46.6 went from nsf_not_active to nsf_not_active
BGP: 172.16.46.6 went from Established to Idle
%BGP-5-ADJCHANGE: neighbor 172.16.46.6 Down User reset
R4#
BGP: 172.16.46.6 closing
R4#
BGP: 172.16.46.6 went from Idle to Active
BGP: 172.16.46.6 open active, local address 172.16.46.4
BGP: 172.16.46.6 read request no-op
BGP: 172.16.46.6 went from Active to OpenSent
BGP: 172.16.46.6 sending OPEN, version 4, my as: 154, holdtime 180 seconds
BGP: 172.16.46.6 send message type 1, length (incl. header) 61
BGP: 172.16.46.6 rcv message type 1, length (excl. header) 42
BGP: 172.16.46.6 rcv OPEN, version 4, holdtime 180 seconds
BGP: 172.16.46.6 rcv OPEN w/ OPTION parameter len: 32
BGP: 172.16.46.6 rcvd OPEN w/ optional parameter type 2 (Capability) len 6
BGP: 172.16.46.6 OPEN has CAPABILITY code: 1, length 4
BGP: 172.16.46.6 OPEN has MP_EXT CAP for afi/safi: 1/1
BGP: 172.16.46.6 rcvd OPEN w/ optional parameter type 2 (Capability) len 6
BGP: 172.16.46.6 OPEN has CAPABILITY code: 1, length 4
BGP: 172.16.46.6 OPEN has MP_EXT CAP for afi/safi: 1/2
BGP: 172.16.46.6 rcvd OPEN w/ optional parameter type 2 (Capability) len 2
BGP: 172.16.46.6 OPEN has CAPABILITY code:
R4#128, length 0
BGP: 172.16.46.6 OPEN has ROUTE-REFRESH capability(old) for all address-families
BGP: 172.16.46.6 rcvd OPEN w/ optional parameter type 2 (Capability) len 2
BGP: 172.16.46.6 OPEN has CAPABILITY code: 2, length 0
BGP: 172.16.46.6 OPEN has ROUTE-REFRESH capability(new) for all address-families
BGP: 172.16.46.6 rcvd OPEN w/ optional parameter type 2 (Capability) len 6
BGP: 172.16.46.6 OPEN has CAPABILITY code: 65, length 4
BGP: 172.16.46.6 OPEN has 4-byte ASN CAP for: 679
BGP: 172.16.46.6 rcvd OPEN w/ remote AS 679, 4-byte remote AS 679
BGP: 172.16.46.6 went from OpenSent to OpenConfirm
BGP: 172.16.46.6 went from OpenConfirm to Established
%BGP-5-ADJCHANGE: neighbor 172.16.46.6 Up
R4#
The network topology used in this section is shown in Figure 12-6 below:
Figure 12-6: The Chapter Challenge Topology
Trouble Ticket #1
Your supervisor has brought to your attention that R4 and R6 are not successfully forming an MSDP peering relationship. Correct this issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that multicast pings from R1 to the group 224.9.9.9 do not reach test clients on R9's VLAN79 segment. Correct this issue.
The status of the peering relationship to 172.16.46.6 is Down. This verifies that the problem actually exists.
Step 2 - Fault Isolation:
Failure of MSDP peers to form is caused either by misconfiguration or by unicast routing failures. We will rule out misconfiguration by comparing the output of the show ip msdp peer command on R6:
R6#sh ip msdp peer
MSDP Peer 172.16.46.4 (?), AS 154 (configured AS)
Connection status:
State: Listen, Resets: 0, Connection source: none configured
Uptime(Downtime): 00:06:10, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 00:06:10 ago
SA Filtering:
The last line of the output on R6 says that MD5 authentication is enabled. Authentication is not enabled on R4; this has isolated our problem.
Step 3 - Fault Remediation:
In this scenario, the ip msdp password command needs to be applied to R4:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#ip msdp password peer 172.16.46.6 CISCO
R4(config)#end
R4#
%MSDP-5-PEER_UPDOWN: Session to peer 172.16.46.6 going up
The MSDP peering session status is: Up. The solution has successfully remediated the problem.
R6 accepts the SA messages. Under normal circumstances, R6 should add an S,G pair to its multicast routing table for this group if there is a *,G learned from an interested receiver. This can be verified with show ip mroute:
The *,G entry is in the table, but not the S,G for the source 172.16.15.1. This could be a reachability issue or an RPF issue. We will check the RPF status first.
R6#show ip rpf 172.16.15.1
RPF information for ? (172.16.15.1) failed, no route exists
This clearly tells us that we have an RPF issue. Remember that RPF checks in MP-BGP are performed against the contents of the ipv4 multicast AFI. We can look at the contents of this table via show ip bgp ipv4 multicast:
R6#show ip bgp ipv4 multicast
BGP table version is 53, local router ID is 192.1.6.6
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
This output reflects that we only see prefixes redistributed (?) into our AS on R6 (based on the weight value of 32768). Where are the routes learned from AS154? R4 is not advertising any ipv4 multicast prefixes; additionally, it is not advertising any ipv4 unicast routes either:
A show run will reveal that R4 is not redistributing ospf information into the address-family ipv4 unicast:
Chapter 13: Multicast Security and Advanced Features
This chapter of IPv4/6 Multicast Operation and Troubleshooting details the security features of Multicast technologies in great depth. This chapter also covers several advanced multicast features. It includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for these various features. The chapter begins with a thorough review of these various features, and then quickly launches into an exhaustive analysis of the art of troubleshooting. This important chapter concludes with sample troubleshooting scenarios and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
As in all other chapters in this text, we will utilize a single deployment environment to demonstrate the operation and troubleshooting of the mechanisms outlined in this chapter. This topology, illustrated in Figure 13-2, will allow us to deploy switch and router versions of different commands, as well as technologies that only work on Catalyst switches.
Figure 13-2: Basic Multicast Security and Advanced Features Topology
Using the topology outlined in Figure 13-2, this chapter will take an intuitive approach to describing the operation and troubleshooting issues associated with the following multicast security topics:
R8#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R8(config)#interface FastEthernet0/0
R8(config-if)#ip igmp join-group 224.9.9.9
R8(config-if)#end
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/0
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end
We can clearly see that both devices have joined the multicast group 224.9.9.9 by looking at the output of generating a ping from R5 for the group 224.9.9.9. We expect, based on the fact that this topology is running PIM-DM, that we should obtain echo replies from both R8 and R9:
R5#ping 224.9.9.9 r 10
CAT2(config-igmp-profile)#exit
CAT2(config)#interface FastEthernet0/9
CAT2(config-if)#ip igmp filter 1
CAT2(config-if)#end
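For reference, the profile applied above would have been defined just before this excerpt; a minimal sketch of the complete sequence (the profile number matches the excerpt, while the permitted group range is illustrative):
CAT2(config)#ip igmp profile 1
CAT2(config-igmp-profile)#permit
CAT2(config-igmp-profile)#range 224.9.9.9
CAT2(config-igmp-profile)#exit
CAT2(config)#interface FastEthernet0/9
CAT2(config-if)#ip igmp filter 1
CAT2(config-if)#end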
The most common issue associated with IGMP filtering is a failure to create the profile, or the misapplication of the ip igmp filter command. Issues associated with this process can best be isolated using debug ip igmp filter on the switch where the process has been applied:
IGMP Snooping is a method that actually snoops, or inspects, IGMP traffic on a switch. When enabled, a switch will watch for IGMP messages passed between a host and a router, and will add the necessary ports to its multicast table, ensuring that only the ports that require a given multicast stream actually receive it. IGMP Snooping suffers from one major drawback: it requires the switch to inspect all IGMP traffic, on top of its other responsibilities.
This inspection of the IGMP transmissions between the host and the router permits the device to keep track of multicast groups and member ports. When the switch receives an IGMP report from a host for a particular multicast group, the switch adds the host port number to the forwarding table entry; when it receives an IGMP Leave Group message from a host, it removes the host port from the table entry.
It also periodically deletes entries if it does not receive IGMP membership reports from the multicast clients.
On Catalyst 3560 switches this protocol is on by default, and it utilizes a concept known as IGMP Report Suppression. IGMP report suppression is used to forward only one IGMP report per multicast router query to multicast devices.
When IGMP report suppression is enabled (the default), the switch sends the first IGMP report from all hosts for a group to all the multicast routers. The switch does not send the remaining IGMP reports for the group to the multicast routers. This feature prevents duplicate reports from being sent to the multicast devices and, in effect, filters IGMP messages for the purpose of bandwidth and processor conservation.
There are no commands to activate this protocol, but it can be turned off via the no ip igmp snooping command.
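As a sketch (the switch name and VLAN number are illustrative), snooping can be disabled globally or per VLAN, and report suppression can be turned off independently:
CAT2(config)#no ip igmp snooping
CAT2(config)#no ip igmp snooping vlan 79
CAT2(config)#no ip igmp snooping report-suppression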
BSR-Border Filter
In situations where there are two or more multicast domains, it is often desirable to bound or scope multicast control plane protocols to their respective domains. This is accomplished with regard to BSR environments by applying the ip pim bsr-border command at the multicast domain boundaries. When this command is used on an interface, no PIM BSR messages will be allowed to enter or be sent out that interface.
This prevents BSR information from being exchanged between multicast inter-domain neighbors. Typically, this is to prevent the accidental election of an RP not found in a particular multicast domain.
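A one-interface sketch of this boundary follows; the router and interface names are illustrative rather than taken from the chapter topology:
R4(config)#interface GigabitEthernet0/1
R4(config-if)#ip pim bsr-border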
The most common issue associated with the use of this command is incorrect placement. This command can, in effect, split what should be an otherwise contiguous domain into two dysfunctional domains.
The second most common rationale for this feature is to create a multicast "stub routing environment." In stub routing, all routers, even those we do not manage, are still able to take part in the forwarding of multicast packets, but they must do so by exchanging IGMP Join and Leave packets with designated devices in our managed domain.
In this scenario, it is possible to conserve resources by reducing the number of PIM neighbor states to track. Typically, multicast stub networks use PIM-DM and therefore conserve resources on your RPs.
In our topology we create a multicast stub environment by preventing R1 from accepting PIM joins from R7 via the VLAN17 segment connecting them:
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface FastEthernet0/1
R1(config-if)#ip pim neighbor-filter 1
R1(config-if)#exit
R1(config)#access-list 1 deny 172.16.17.7
R1(config)#end
Keep in mind that this PIM relationship will have to expire before we see visible results:
R1#
%PIM-5-NBRCHG: neighbor 172.16.17.7 DOWN on interface FastEthernet0/1 DR
%PIM-5-DRCHG: DR change from neighbor 172.16.17.7 to 172.16.17.1 on interface
FastEthernet0/1
The issue now is that we do not have a peering relationship between R1 and R7.
R1(config)#interface FastEthernet0/1
R1(config-if)#no ip pim dense
R1(config-if)#no ip pim dense-mode
R1(config-if)#end
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface FastEthernet0/1
R7(config-if)#no ip pim dense-mode
R7(config-if)#end
This deployment facilitates the conversion of multicast traffic to some method of transfer that will be able to reach hosts across the now PIM-disabled connection. We will illustrate the issue by having R8 join the multicast group 224.9.9.9 and then testing from R5 using that group:
R8#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R8(config)#interface FastEthernet0/0
R8(config-if)#ip igmp join-group 224.9.9.9
R8(config-if)#end
The pings are unsuccessful. Nevertheless, we can utilize the ip multicast helper-map command to use broadcast traffic to overcome this configuration issue.
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface FastEthernet0/0
R1(config-if)# ip multicast helper-map 224.9.9.9 172.16.100.5 100
R1(config-if)#!
R1(config-if)#interface FastEthernet0/1
R1(config-if)# ip directed-broadcast
R1(config-if)#!
R1(config-if)#ip forward-protocol udp 5001
R1(config)#!
R1(config)#access-list 100 permit udp any any eq 5001
At this point, the modification to R1 is going to take any traffic sourced from the IP address 172.16.100.5 for the group 224.9.9.9 and translate it into UDP broadcast traffic destined to port 5001.
The next portion of the configuration takes place on R7, where the traffic is translated back to multicast so that it can be multicast forwarded the rest of the way toward the hosts.
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface FastEthernet0/1
R7(config-if)# ip multicast helper-map broadcast 239.1.1.1 100
R7(config-if)#!
R7(config-if)#ip forward-protocol udp 5001
R7(config)#!
R7(config)#access-list 100 permit udp any any eq 5001
This configuration can now be tested on R5 by repeating the multicast ping:
R5#ping 224.9.9.9 repeat 10
This configuration is working perfectly. In scenarios where the traffic can remain broadcast in nature, it is not necessary to translate it back to multicast. Common issues impacting the deployment of this solution include failure to enable directed broadcast, use of the wrong UDP port number, or incorrectly configured access-lists.
The ip multicast route-limit command limits the number of multicast routes that can be added to the multicast routing table. Here we apply a deliberately small limit of 2 on R1:
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#ip multicast route-limit 2
The most common issue that causes problems in the deployment of this configuration is the failure to take into account multicast groups, like 224.0.1.40, that all routers will join. However, the router will send messages explaining that an issue exists as long as the route-limit is exceeded:
R1#
%MROUTE-4-ROUTELIMIT_ATTEMPT: Attempt to exceed multicast route-limit of 2 -Process=
"IGMP Input", ipl= 0, pid= 250
Most times in Cisco IOS it is necessary to define something based on a source (a multicast source in this case) and a destination (multicast group(s)), starting with the source followed by the destination. The most common problem with this command stems from following that logic, because in multicast rate limiting the group is specified first. On R1 we will deploy the ip multicast rate-limit command for the group 224.9.9.9 coming from the source 172.16.100.5 to block this particular multicast flow:
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#access-list 1 permit 224.9.9.9
R1(config)#access-list 2 permit 172.16.100.5
R1(config)#interface FastEthernet0/0
R1(config-if)#ip multicast rate-limit in group-list 1 source-list 2 0
R1(config-if)#end
If the command has worked as anticipated, the ping will work for the same group from R2:
There are two common problems when deploying GRE tunnels. The first issue is that GRE tunnels do not maintain state information. This means that once a tunnel is configured it will show an up/up state. To prevent confusion, it is recommended to enable keepalive support under the tunnel with the keepalive command.
The next most common issue is known as a recursive lookup failure. This occurs when the source or destination of a tunnel is learned through the tunnel itself. To prevent this, it is advised not to advertise the tunnel interfaces into the running IGP, but this, in turn, makes it necessary to use a multicast static route to correct any RPF issues.
We will remove the PIM relationship between R1 and R7 once more, but this time we will create a GRE tunnel between R1 and R7.
R1(config)#interface FastEthernet0/1
R1(config-if)#no ip pim dense-mode
R1(config-if)#end
R7(config)#interface FastEthernet0/1
R7(config-if)#no ip pim dense-mode
R7(config-if)#end
R1(config)#interface tunnel 17
%LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel17, changed state to down
R1(config-if)#ip address 17.17.17.1 255.255.255.0
R1(config-if)#tunnel source lo 0
R1(config-if)#tunnel destination 7.7.7.7
R1(config-if)#keepalive 1 3
R1(config-if)#ip pim dense-mode
R1(config-if)#end
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface tunnel 17
R7(config-if)#ip address 17.17.17.7 255.255.255.0
R7(config-if)#tunnel source lo 0
R7(config-if)#tunnel destination 1.1.1.1
R7(config-if)#keepalive 1 3
R7(config-if)#ip pim dense-mode
R7(config-if)#end
%LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel17, changed state to up
We can look at the status of the tunnel and test that it works via pings:
R7#ping 17.17.17.1
This can be corrected by applying an ip mroute for the source 172.16.100.5 that specifies the Tunnel17 interface:
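A minimal sketch of such a static mroute, assuming R7 is the router failing the RPF check toward the source (the placement is an assumption for illustration):
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#ip mroute 172.16.100.5 255.255.255.255 Tunnel17
R7(config)#end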
Most notably, this demonstration illustrates the need to account for the fact that the GRE tunnel will not appear in the unicast routing table, and thus cannot be employed to reach any source.
Source-Specific Multicast
RFC 3569 describes source-specific multicast (SSM). It is a datagram delivery model that best supports audio and video-based one-to-many applications. SSM depends on Protocol Independent Multicast source-specific mode (PIM-SSM) and Internet Group Management Protocol Version 3 (IGMPv3) for its implementation. PIM-SSM is based on PIM sparse mode (PIM-SM). Review Chapter 4: Protocol Independent Multicast - Sparse Mode (PIM-SM) should you need more information on this important protocol.
A multicast network must maintain knowledge about which hosts in the network are actively sending multicast traffic. With source-specific multicast, this information is provided by receivers through the source addresses relayed to the last-hop routers using IGMPv3. In SSM, receivers must subscribe or unsubscribe to (S, G) channels to receive or not receive traffic from specific sources. The proposed standard approach for channel subscription signaling utilizes IGMP INCLUDE mode membership reports, which are supported only in IGMP Version 3.
SSM coexists with normal PIM and IGMP operations by applying the SSM delivery model to a configured subset of the IP multicast group address range. The Internet Assigned Numbers Authority (IANA) has reserved the address range from 232.0.0.0 through 232.255.255.255 for SSM applications and protocols.
When an SSM range is defined, an existing IP multicast receiver application will not receive any traffic when it tries to use addresses in the SSM range unless the application is modified to use explicit (S, G) channel subscription or is SSM-enabled through a URL Rendezvous Directory (URD). SSM can be deployed alone in a network without the full range of protocols that are required for interdomain PIM-SM. In other words, SSM does not require a rendezvous point (RP), so there is no need for an RP mechanism such as Auto-RP, MSDP, or bootstrap router (BSR).
Deploying SSM in a network that is already configured for PIM-SM simply requires that the last-hop routers be upgraded to a software image that supports SSM. Routers that are not last-hop routers only need to run PIM-SM in the SSM range. The SSM mode of operation is enabled by configuring the SSM range using the ip pim ssm global configuration command.
For groups within the SSM range, (S, G) channel subscriptions are accepted through IGMPv3 INCLUDE mode membership reports. PIM operations within the SSM range of addresses change to PIM-SSM, a mode derived from PIM-SM. In this mode, only PIM (S, G) Join and Prune messages are generated by the router. Incoming messages related to rendezvous point tree (RPT) operations are ignored or rejected, and incoming PIM register messages are immediately answered with Register-Stop messages. PIM-SSM is backward-compatible with PIM-SM unless a router is a last-hop router. Therefore, routers that are not last-hop routers can run PIM-SM for SSM groups (for example, if they do not yet support SSM). For groups within the SSM range, no MSDP Source-Active (SA) messages will be accepted, generated, or forwarded.
IGMPv3 is the third version of the IETF standards-track protocol in which hosts signal membership to last-hop routers of multicast groups. IGMPv3 introduces the ability for hosts to signal group membership that allows filtering capabilities with respect to sources. A host can signal either that it wants to receive traffic from all sources sending to a group except for some specific sources (a mode called EXCLUDE), or that it wants to receive traffic only from some specific sources sending to the group (a mode called INCLUDE). IGMPv3 can operate with both ISM and SSM. In ISM, both EXCLUDE and INCLUDE mode reports are accepted by the last-hop router. In SSM, only INCLUDE mode reports are accepted by the last-hop router.
In some environments it is necessary to adapt the typical PIM protocol deployment in such a fashion as to allow it to better support one-to-many packet exchange models. This adaptation takes the form of a special extension called Source Specific Multicast (SSM). This extension enables a receiver to select content directly from a specified source. This results in the creation of a source-based tree, thus bypassing the need for an RP.
In most multicast implementations, applications must "join" an IP multicast group, because traffic is distributed to all group members. If two applications with different sources and receivers use the same IP multicast group address, receivers of both applications will receive traffic from the senders of both applications. Even though the receivers, if programmed appropriately, can filter out the unwanted traffic using filters like those discussed previously, this situation generates large amounts of unwanted traffic.
However, in an SSM multicast network, the router closest to the receiver will be aware of a request coming from an application to join a particular multicast source by using the INCLUDE mode in IGMPv3. The multicast router then forwards the request directly to the source rather than sending the request to an RP. The source will send packets directly to the receiver using the shortest path. In SSM, routing of multicast traffic relies solely on source-based trees. This means that an RP is not required.
The ability for SSM to explicitly include and exclude particular sources allows for a limited amount of security. Traffic from a source to a group that is not explicitly listed on the include list will not be forwarded to uninterested receivers. SSM also solves the IP multicast address collision issues associated with one-to-many type applications. Routers running in SSM mode will route data streams based on the full (S, G) address. Assuming that a source has a unique IP address to send on the Internet, any (S, G) from this source also would be unique.
R1(config)#interface FastEthernet0/0
R1(config-if)#ip pim sparse-mode
R1(config-if)#interface FastEthernet0/1
R1(config-if)#ip pim sparse-mode
R1(config-if)#exit
R1(config)#ip pim ssm default
R1(config)#end
This command application tells the router that multicast groups in the range 232.0.0.0/8 will be multicast routed using source-specific multicast. Next, on the client, it is necessary to employ an IGMP version that supports SSM. In the case of R8, we will have its FastEthernet0/0 interface join the multicast group 232.9.9.9 specifically from the source located at 172.16.100.5 only:
R8#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R8(config)#ip pim ssm default
R8(config)#interface FastEthernet0/0
R8(config-if)#ip igmp join-group 232.9.9.9 source 172.16.100.5
R8(config-if)#ip igmp version 3
R8(config-if)#end
This means that R8 will accept the multicast stream for the group 232.9.9.9 from the source 172.16.100.5 and no other source.
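As a quick verification sketch (output is omitted here, and the exact entries will depend on the state of the lab), the resulting SSM channel state can be checked with commands such as:
R8#show ip igmp groups 232.9.9.9 detail
R8#show ip mroute 232.9.9.9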
The network topology used in this section is shown in Figure 13-3 below:
Trouble Ticket #1
Your supervisor has brought to your attention that R8 is not receiving any multicast traffic sent from R5 to the multicast address 224.9.9.9. Correct this issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that multicast traffic being sent to the group 224.99.99.99 is not being received by R9. Correct this issue.
R8#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R8(config)#interface FastEthernet0/0
R8(config-if)#ip igmp join-group 224.9.9.9
R8(config-if)#end
The mtrace output seems to indicate that there is no RPF fault. This means the next most logical step will be to look at the multicast routing tables of the devices in the network between the source and the destination devices:
R1#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
We see the (S, G) pair in the multicast routing table of R1; what about R7?
R7#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/0
R9(config-if)#ip igmp join-group 224.99.99.99
R9(config-if)#end
We see the (S, G) pair in the multicast routing table of R1; what about R7?
R7#show ip mroute 224.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
We see the (S, G) pair, but what about the table on R9?
R9#show ip mroute 224.99.99.99
Group 224.99.99.99 not found
The output says that R9 does not have any record of the group. Not long after executing the show command, the console reports the following message:
R9#
%MROUTE-4-ROUTELIMIT: Current count of 2 exceeds multicast route-limit of 1 -Process=
"<interrupt level>", ipl= 1
This output tells us that we have a route-limit parameter configured on R9, as evidenced by the output of show run:
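A minimal sketch of one possible repair, assuming the limit is not actually required on R9, is simply to remove it (or raise it to an appropriate value):
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#no ip multicast route-limit
R9(config)#end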
This chapter of IPv4/6 Multicast Operation and Troubleshooting examines IPv6 multicast in great depth. Once the operational characteristics of IPv6 multicast are detailed completely, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs.
The chapter begins with a thorough review of IPv6 multicast technologies, and then quickly launches into an exhaustive analysis of the art of troubleshooting. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.
You can quickly spot an IPv6 multicast address by examining the initial bit settings. A multicast address begins with the first 8 bits set to 1 (11111111). The corresponding IPv6 prefix notation is FF00::/8. Following the initial 8 bits, there are 4 bits (labeled 0RPT) which are flag fields. The high-order flag is reserved, and must be initialized to 0. If the R bit is set to 1, then the P and T bits must also be set to 1. This indicates there is an embedded Rendezvous Point (RP) address in the multicast address.
The next four bits are scope. The possible scope values are:
0 - reserved
1 - Interface-Local scope
2 - Link-Local scope
3 - reserved
4 - Admin-Local scope
5 - Site-Local scope
6 - (unassigned)
7 - (unassigned)
8 - Organization-Local scope
9 - (unassigned)
A - (unassigned)
B - (unassigned)
C - (unassigned)
D - (unassigned)
E - Global scope
F - reserved
The remaining 112 bits of the address make up the multicast Group ID. An example of an IPv6 multicast address would be all of the NTP servers on the Internet: FF0E:0:0:0:0:0:0:101. Keep in mind that just like in IPv4 multicast, there are many reserved addresses of link-local scope. Here are some examples:
FF02::1 - All Nodes
FF02::2 - All Routers
FF02::5 - All OSPFv3 Routers
FF02::A - All EIGRP Routers
FF02::D - All PIM Routers
A special, reserved IPv6 multicast address that is very important is the Solicited-Node multicast address:
FF02:0:0:0:0:1:FFXX:XXXX
A Solicited-Node multicast address is created automatically by the router. The router takes the low-order 24 bits of the IPv6 address (unicast or anycast) and appends those bits to the prefix FF02:0:0:0:0:1:FF00::/104. This results in a multicast address within the range FF02:0:0:0:0:1:FF00:0000 to FF02:0:0:0:0:1:FFFF:FFFF. These addresses are used by the IPv6 Neighbor Discovery (ND) protocol in order to provide a much more efficient address resolution protocol than the Address Resolution Protocol (ARP) of IPv4.
Just like the version 4 PIM-SM, PIMv2 for IPv6 utilizes concepts such as Designated Routers, Assert Messages, and Rendezvous Points. Reverse Path Forwarding (RPF) checks are performed against the underlying IPv6 routing database. Again, be sure to review Chapter 4 should any of these important concepts need a review.
Using the Multicast Listener Discovery (MLD) protocol, hosts can indicate they want to receive multicast transmissions for select groups. Routers (queriers) can control the flow of multicast in the network through the use of MLD. MLD uses Internet Control Message Protocol (ICMP) to carry its messages. All such messages are link-local in scope, and they all have the router alert option set. MLD uses three types of messages: Query, Report, and Done. The Done message is like the Leave message in IGMP version 2. It indicates a host no longer wants to receive the multicast transmission. This triggers a Query to check for any more receivers on the segment.
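As a minimal sketch of simulating a receiver on a router interface (the group FF0E::101 is simply the NTP example address used earlier in this chapter, chosen here for illustration only):
R9(config-if)#ipv6 mld join-group FF0E::101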
Static
BSR
Embedded RP
As you might guess, BSR is functionally very similar to its IPv4 counterpart as covered in Chapter 6: Bidirectional Protocol Independent Multicast (BIDIR-PIM).
IPv6 Embedded RP
The embedded RP concept of IPv6 multicast takes direct advantage of the enormous size of the IPv6 multicast address. This functionality also helps fill the gap left by the absence of the Multicast Source Discovery Protocol (MSDP) in IPv6 multicast. Embedded RP allows for the rendezvous point address information to be embedded directly in the multicast group address.
(Figure: IPv6 multicast topology. R1, R2, R4, R5, R6, R7, and R9 connect the Source and the Receiver across the prefixes 2001:1515::/64, 2001:4545::/64, 2001:4646::/64, 2001:6767::/64, and 2001:7979::/64.)
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#ipv6 unicast-routing
R1(config)#interface fa0/0
R1(config-if)#ipv6 address 2001:1515::1/64
R1(config-if)#no shutdown
R1(config-if)#
*Mar 1 00:03:32.627: %LINK-3-UPDOWN: Interface FastEthernet0/0, changed state to up
*Mar 1 00:03:33.627: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0,
changed state to up
R1(config-if)#do show ipv6 interface fa0/0
FastEthernet0/0 is up, line protocol is up
IPv6 is enabled, link-local address is FE80::20A:B8FF:FE1A:5030
No Virtual link-local address(es):
Global unicast address(es):
2001:1515::1, subnet is 2001:1515::/64
Joined group address(es):
FF02::1
FF02::2
FF02::A
FF02::D
FF02::16
FF02::1:FF00:1
FF02::1:FF1A:5030
Notice that because IPv6 routing capabilities are enabled for this device, one of the multicast groups joined is ALL ROUTERS for the link-local scope (FF02::2).
IPv6 multicast routing is enabled globally with the ipv6 multicast-routing command, which also enables PIM on the router's IPv6-enabled interfaces:
ipv6 multicast-routing
Should you need to disable this functionality on a particular interface, simply type no ipv6 pim under that interface as shown below:
R1(config)#ipv6 multicast-routing
R1(config)#exit
R1#
R1#show ipv6 pim interface
Interface PIM Nbr Hello DR
Count Intvl Prior
Loopback0 on 0 30 1
Address: FE80::20A:B8FF:FE1A:5030
DR : this system
Null0 off 0 30 1
Address: FE80::1
DR : not elected
VoIP-Null0 off 0 30 1
Address: ::
DR : not elected
FastEthernet0/0 on 1 30 1
Address: FE80::20A:B8FF:FE1A:5030
DR : FE80::20A:B8FF:FE2C:80E0
FastEthernet0/1 off 0 30 1
Address: ::
DR : not elected
Loopback10 on 0 30 1
Address: FE80::20A:B8FF:FE1A:5030
DR : this system
Tunnel0 off 0 30 1
Address: FE80::20A:B8FF:FE1A:5030
DR : not elected
R1#conf t
R1(config)#interface lo10
R1(config-if)#no ipv6 pim
R1(config-if)#end
R1#show ipv6 pim interface
Interface PIM Nbr Hello DR
Count Intvl Prior
Loopback0 on 0 30 1
Address: FE80::20A:B8FF:FE1A:5030
DR : this system
Null0 off 0 30 1
Address: FE80::1
DR : not elected
VoIP-Null0 off 0 30 1
Address: ::
DR : not elected
FastEthernet0/0 on 1 30 1
Address: FE80::20A:B8FF:FE1A:5030
DR : FE80::20A:B8FF:FE2C:80E0
FastEthernet0/1 off 0 30 1
Address: ::
DR : not elected
Loopback10 off 0 30 1
Address: FE80::20A:B8FF:FE1A:5030
DR : not elected
Tunnel0 off 0 30 1
Address: FE80::20A:B8FF:FE1A:5030
DR : not elected
R1#
Static assignment of the RP for the topology is simple. Use the following command:
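A minimal sketch of this static assignment (the RP address 2001:1515::1 is an assumed placeholder, not an address confirmed by the lab topology):
R1(config)#ipv6 pim rp-address 2001:1515::1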
A significant difference between PIM for IP version 4 and PIM for IP version 6 involves the source registration process. Immediately upon learning the RP, an IPv6 multicast Cisco router constructs a tunnel interface leading to the RP. The tunnel is also immediately enabled for IPv6 multicast. This tunnel is used for the duration of the registration process. After the completion of registration, multicast receivers switch to the most optimal path, which negates the use of the tunnel. Examine the automatic creation of the tunnel below:
In order to examine a list of currently active tunnels, Cisco provides the command show ipv6 pim tunnel.
Various MLD-related query parameters may be set as well on the Cisco router, once again following IGMP logic. The following commands are used:
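A minimal sketch of these interface-level commands (the timer values shown are placeholders chosen for illustration, not recommendations):
R1(config-if)#ipv6 mld query-interval 60
R1(config-if)#ipv6 mld query-max-response-time 20
R1(config-if)#ipv6 mld query-timeout 120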
You may also use an IPv6 access control list in order to control the groups joined by a host. In order to apply the ACL to the MLD configuration, use the command:
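A minimal sketch, assuming an IPv6 ACL named GROUP-FILTER has already been created to permit only the desired groups (the ACL name is an assumption for illustration):
R1(config-if)#ipv6 mld access-group GROUP-FILTER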
An interesting enhancement to the IPv6 version of BSR is the fact that you may statically configure the bootstrap router itself with a list of candidate RPs (C-RPs) using the command:
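A minimal sketch of such an announcement, using an assumed C-RP address of 2001:1515::1 (the address is a placeholder for illustration only):
R1(config)#ipv6 pim bsr announced rp 2001:1515::1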
Remarkably, this altogether eliminates the need for dynamic candidate RP announcements from other devices in the topology.
IPv6 Embedded RP
As described earlier, following the initial 8 bits of an IPv6 multicast address, there are 4 bits (labeled 0RPT) which are flag fields. The high-order flag is reserved, and must be initialized to 0. If the R bit is set to 1, then the P and T bits must also be set to 1. This indicates there is an embedded Rendezvous Point (RP) address in the multicast address.
With embedded RP, Cisco routers no longer need to rely on BSR or static configuration for the RP assignment; instead, senders and receivers in IPv6 multicast agree to embed the RP's IPv6 address in the IPv6 multicast group address itself.
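As a hedged illustration of how such a group address is constructed (the RP and group values below are assumptions chosen only for the example): an embedded-RP address uses the flag value 7 (0RPT = 0111), followed by the 4-bit scope, 4 reserved bits, a 4-bit RP interface ID (RIID), an 8-bit prefix length, the 64-bit RP network prefix, and a 32-bit group ID. For an RP at 2001:1515::1 (network prefix 2001:1515::/64, RIID of 1) and a global-scope group, the group address takes the form FF7E:140:2001:1515::<group-ID>, for example FF7E:140:2001:1515::9. A router extracts the embedded prefix and RIID from that group address and reconstructs the RP address 2001:1515::1.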