Guide to Auto-RP Configuration and Diagnostics

                                                            Liming Wei
                                                     (last updated: 4/2/99)
 

This guide covers the following topics:
 

  1. Why Auto-RP ? What is Auto-RP ?
  2. Recommended Auto-RP configuration.
  3. How do I know it works ? ("show" command)
  4. It does not work! Where do I start to figure out what's wrong ?
  5. Common errors in Auto-RP configurations.
  6. Differences in Auto-RP in 12.0 [negative group prefix]
     [How to force a subrange of group addresses into dense mode.]
  7. How to configure local RPs for an administratively scoped zone that
     is a subregion of my domain, which already has an RP for global groups ?
  8. A few frequently reported problems

It is assumed that readers are familiar with the basic PIM mechanisms.
 
 

1. Why Auto-RP ? What is Auto-RP ?

       Auto-RP automates the distribution of group-to-RP mappings in a network supporting sparse mode PIM. It supports the use of multiple RPs within a network to serve different group ranges, and allows configuration of redundant RPs for reliability. In comparison, manual configuration of RP information is prone to inconsistencies that can cause connectivity problems. In summary, the automatic distribution of group-to-RP mappings simplifies the configuration task and guarantees consistency.

In a nutshell, the Auto-RP mechanism operates on two basic components, the candidate RPs and the RP-mapping agents:

  1) Candidate RPs advertise their willingness to be an RP, and the group ranges they serve, in RP-announce messages sent to the well-known group 224.0.1.39 (CISCO-RP-ANNOUNCE).
  2) RP-mapping agents join group 224.0.1.39, resolve conflicts (for overlapping group ranges, the candidate RP with the highest IP address is selected), and advertise the resulting group-to-RP mappings in RP-discovery messages sent to the well-known group 224.0.1.40 (CISCO-RP-DISCOVERY). All PIM routers listen to 224.0.1.40 and use the mappings they receive.

With Auto-RP, multiple RPs can be used to serve different group ranges, or as hot backups of each other.

The Auto-RP messages are UDP-encapsulated and rely on the IP multicast routing mechanism for delivery. This raises the question of whether these two groups should themselves be in sparse mode or dense mode.

It is recommended that the two Auto-RP groups be run in dense mode --- by configuring all PIM interfaces in "sparse-dense" mode and not configuring any static RP covering these two addresses. With "sparse-dense" interface mode, a group is treated in dense mode (i.e. flood-and-prune mode) if there is no RP available for it. Once a static RP covering 224.0.1.39 and 224.0.1.40 is configured, these two groups will be turned into sparse mode. Be careful: such a static RP should either be configured on all PIM routers in the domain, or on none at all. See the section on "Common errors in Auto-RP configurations" for breakage information.

To avoid a circular dependency on Auto-RP for the support of the two Auto-RP groups, all PIM routers automatically "take out" groups 224.0.1.40 and 224.0.1.39 from any group ranges advertised via Auto-RP. To illustrate the circular dependency, consider a network configured entirely in "sparse" interface mode. The two Auto-RP groups must have an RP before Auto-RP packets can be delivered; however, the RP for the Auto-RP groups cannot be known until Auto-RP packets are delivered everywhere. Under "sparse-dense" interface mode, the same circular dependency exists, under slightly more complicated scenarios.

The candidate RPs can be configured to be RP-mapping agents at the same time. Doing so may simplify the administrative work, since all the Auto-RP configuration is then available from a single spot. A minimal sketch of such a combined configuration follows.
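
For example, a combined candidate RP and RP-mapping agent might be configured as follows (Loopback0, access-list 5 and the scope values are placeholders --- adjust them to your network):

   ip multicast-routing
   !
   interface Loopback0
    ip pim sparse-dense-mode
   !
   ! Candidate RP for the entire class-D space:
   access-list 5 permit 224.0.0.0 15.255.255.255
   ip pim send-rp-announce Loopback0 scope 15 group-list 5
   !
   ! RP-mapping agent on the same box:
   ip pim send-rp-discovery scope 15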

See the "recommended Auto-RP configuration" section for sample configurations.
 
 

2. Recommended Auto-RP Configuration

Here are the 5 basic components of the recommended configuration:

  1) Configure a multicast administrative boundary for 224.0.1.39 and 224.0.1.40 on the PIM domain boundary;
  2) Configure sparse-dense mode PIM interfaces throughout the network;
  3) Configure candidate RPs on routers with good connectivity;
  4) Configure RP-mapping agents on routers with good connectivity;
  5) Configure an "accept-rp" filter to only accept RPs advertised via Auto-RP.
The following topology drawing has the sample configurations annotated. Note that candidate RPs and RP-mapping agents can be co-located on the same router; for demonstration purposes, separate boxes are used in the example. The RPs and RP-mapping agents also carry all the configuration that the other PIM routers have.
 

      To neighbor PIM domain
             /|\
              | hssi0          Administrative boundary
  ---------------------------+ for 224.0.1.39, 224.0.1.40 and 239/8
  | PIM domain border routers|
  ---------------------------+ access-list 10 deny 224.0.1.39
      /                \       access-list 10 deny 224.0.1.40
     .                   .     access-list 10 deny 239.0.0.0 0.255.255.255
    .                     .    access-list 10 permit 224.0.0.0 15.255.255.255
    .                     .    interface hssi0
    .                     .      ip multicast boundary 10
    .                     .      ip multicast ttl-threshold 64
    |                     \
    |                       \........................
    |                                                \
  ---------------+                                    +--------------
  | Candidate RP1|---.............--------------------|Candidate RP2|
  ----------------                                    +-------------+
   | access-list 5 permit 224.0.0.0 15.255.255.255               |
   | ip pim send-rp-announce <ethernet3> scope 15 group-list 5   |
    \                                                           /
     \                --------------------                     /
      \..........---- | RP-mapping agents| ----------......----
                      --------------------
                        /             \  ip pim send-rp-discovery scope 63
       --- .........---                \
      /
   --------------  ip multicast-routing
   | PIM router |  interface <interface>
   --------------    ip pim sparse-dense-mode

                   ip pim accept-rp auto-rp
 

Note: the access-lists above spell out the class-D space. "permit any" should also work, but the author of this note likes the precision that makes things clear and unambiguous: it takes one glance at access-list 5 to know that it covers the entire IP multicast address space, as opposed to the entire IP address space implied by "permit any".
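
For comparison (the access-list numbers here are arbitrary):

   ! precise: exactly the class-D space
   access-list 5 permit 224.0.0.0 15.255.255.255
   ! also works, but says less
   access-list 6 permit any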

Refer to section "How do I know it works" to see if your configuration works.

Check the section "Common errors in Auto-RP configurations" to see if you are free of those.
 
 

3. How do I know it works ? ("show" command)

"show ip pim rp mapping"

Check the RP-mapping cache on all routers with the command "show ip pim rp mapping". If Auto-RP is working properly, the group-to-RP mapping cache should be consistent across the entire network --- meaning all routers should have the same content and roughly the same uptimes.

Here is a sample output with two group-to-RP mappings, annotated with explanations.
 

mt1-45a# show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)     <<<=== this box itself is an RP
This system is an RP-mapping agent <<<=== is RP-mapping agent too
Group(s) 239.1.2.0/24              <<<=== Check group address range
  RP 172.21.24.28 (?), v2v1        <<<=== Check advertised RP address
    Info source: 172.21.24.28 (?), via Auto-RP
         Uptime: 6d03h, expires: 00:02:43   <<<=== Uptime is long enough
Group(s) 239.1.3.0/24
  RP 172.21.24.9 (?), v2v1
    Info source: 172.21.24.9 (?), via Auto-RP
         Uptime: 6d03h, expires: 00:02:38

It is important to check the "uptime" of each (group, RP) entry. If the uptime is much shorter than the actual uptime of the RP or RP-mapping agent, something must be wrong. Check the configuration and the multicast forwarding path for 224.0.1.39 and 224.0.1.40.
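
For a quick first look at the forwarding state for the two groups, these commands (used throughout the next section) are the place to start:

   show ip mroute 224.0.1.39
   show ip mroute 224.0.1.40
   debug ip pim auto-rp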

If there are any inconsistencies, go to the section "It does not work" for diagnostics information. Also check the section "Common errors in Auto-RP configurations" to see if one of those exists in your network.
 
 

4. It does not work! Where do I start to figure out what's wrong ?

The general workflow for checking any Auto-RP related problems is to first make sure the RP-mapping agents have the correct RP-mapping cache, then check if other routers have the same RP-mapping cache.
 
  1) Check the RP-mapping agent(s) for the correct RP-mapping cache. Use the following flow diagram to perform diagnostics on the RP-mapping agent:
          +----------------------------------------+
          | Is the RP mapping cache displayed by   |
          | "show ip pim rp mapping"  complete and |
          | with correct uptimes ?                 |
          +----------------------------------------+
          / Yes                           \ No
        /                                   \
   ----------------    ---------------------------------------------+
   |Go to step #2)|    | Does "debug ip pim auto-rp" show reception |
   ----------------    | of "RP-announce packet" every 60 seconds   |
                       | from the candidate RP ?                    |
                       ---------------------------------------------+
                Yes   /                           | NO
                    /                             |
  -----------------------     -----------------------------------------
  | Another candidate RP |    | Does "show ip igmp group" indicate    |
  | must be advertising  |    | that the RP mapping agent joined the  |
  | for the same group   |    | group 224.0.1.39 ? I.e. the "Expires" |
  | range and is selected|    | column shows "never".                 |
  -----------------------     -----------------------------------------
         | No                   /No                        | Yes
  ---------------------------  /                           |
  | Report problem to TAC or| | ----------------------------------------
  | cs-ipmulticast@cisco.com| | | Check "show ip mr 224.0.1.39"        |
  --------------------------  / | Is it in the right forwarding mode ? |
                            /   | It should have the "L" flag set.     |
---------------------------     | The igmp-joined interface, if not on |
| Check  "show ip pim int"|     | the RPF interface, should be in the  |
| Make sure PIM and       |     | outgoing interface list and          |
| multicast-routing are   |     | in forwarding state.                 |
| enabled. Make sure      |     ----------------------------------------
| Interface not shutdown  |    /Yes                      | No
---------------------------   /                          |
                             /                           |
 -----------------------------------------               |
 | Check "debug ip mpacket 224.0.1.39"   |               |
 | See if packets are being dropped      |               |
 | e.g. due to RPF failure when received |               |
 | on wrong interface. Don't worry about |               |
 | mroute with Null olist --- as long as |               |
 | packets arrived on the RPF interface  |               |
 | the "L" flag marked entry will receive|               |
 | it.                                   |               |
 -----------------------------------------               |
                                                         |
          --------------------------------------------------------------
          |  Fix the 224.0.1.39 group.                                  |
          |  To see if the problem is local or caused by upstream,      |
          |  go to the RPF neighbor and check the state for 224.0.1.39  |
          |  there. In the following example "show ip mr state" the     |
          |  upstream neighbor is 172.21.24.20:                         |
          |                                                             |
          | (172.21.24.28, 224.0.1.39), 01:04:09/00:02:22, flags: CLT   |
          |   Incoming interface: Ethernet1, RPF nbr 172.21.24.20  ^    |
          |   Outgoing interface list:       ^^^^^^^^^^^^^^^^^^^^       |
          |     Ethernet2, Prune/Sparse-Dense, 01:04:10/00:02:21        |
          ---------------------------------------------------------------
 
 
  2) Check the "other" PIM routers for the RP-mapping cache, using the following flow diagram:
          +----------------------------------------+
          | Is the RP mapping cache displayed by   |
          | "show ip pim rp mapping"  complete and |
          | with correct uptimes ?                 |
          +----------------------------------------+
          / Yes                           \ No
        /                                   \
   ------------------  ---------------------------------------------+
   |Problem is local|  | Does "debug ip pim auto-rp" show reception |
   |and is not in   |  | of "RP-discovery packet" every 60 seconds  |
   |Auto-RP         |  | from the RP-mapping agent ?                |
   ------------------  ---------------------------------------------+
                     /Yes                        | NO
                    /                             |
  --------------------------   -----------------------------------------
  | Check if accept-rp     |   | Does "show ip igmp group" indicate    |
  | filter is correctly    |   | that the router joined the            |
   | configured, and if     |   | group 224.0.1.40 ? I.e. the "Expires" |
  |"clear ip mr *" fixes it|   | column shows "never".                 |
  -------------------------    -----------------------------------------
         | Yes                   /No                       | Yes
  ---------------------------  /                           |
  | Report problem to TAC or| | ----------------------------------------
  | cs-ipmulticast@cisco.com| | | Check "show ip mr 224.0.1.40"        |
  --------------------------  / | Is it in the right forwarding mode ? |
                            /   | It should have the "L" flag set.     |
---------------------------     | The igmp-joined interface, if not on |
| Check  "show ip pim int"|     | the RPF inteface, should be in the   |
| Make sure PIM and       |     | outgoing interface list and in       |
| multicast-routing are   |     | in forwarding state.                 |
| enabled. Make sure      |     ----------------------------------------
| Interface not shutdown  |    /Yes                      | No
---------------------------   /                          |
                             /                           |
 -----------------------------------------               |
 | Check "debug ip mpacket 224.0.1.40"   |               |
 | See if packets are being dropped      |               |
 | e.g. due to RPF failure when received |               |
 | on wrong interface. Don't worry about |               |
 | mroute with Null olist --- as long as |               |
 | packets arrived on the RPF interface  |               |
 | the "L" flag marked entry will receive|               |
 | it.                                   |               |
 -----------------------------------------               |
                                                         |
          --------------------------------------------------------------
          |  Fix the 224.0.1.40 group.                                  |
          |  To see if the problem is local or caused by upstream,      |
          |  go to the RPF neighbor and check the state for 224.0.1.40  |
          |  there. In the following example "show ip mr state" the     |
          |  (*,224.0.1.40) has "dense" "local" flags set correctly.    |
          |  The (S, 224.0.1.40) entry is also in correct forwarding    |
          |  state, and its upstream RPF neighbor is 172.21.24.20, as   |
          |  underlined by "^":                                         |
          |                                                             |
          | (*, 224.0.1.40), 2:36:03/00:00:00, RP 0.0.0.0, flags: DJCL  |
          |   Incoming interface: Null, RPF nbr 0.0.0.0           ^  ^  |
          |   Outgoing interface list:                                  |
          |     Ethernet1, Forward/Sparse-Dense, 2:36:03/00:00:00       |
          |     Serial0, Forward/Sparse-Dense, 2:36:03/00:00:00         |
          |     Ethernet2, Forward/Sparse-Dense, 2:36:03/00:00:00       |
          |                                                             |
          | (172.21.24.17, 224.0.1.40), 02:36:03/00:02:08, flags: CLT   |
          |   Incoming interface: Ethernet2, RPF nbr 172.21.24.12       |
          |   Outgoing interface list:       ^^^^^^^^^^^^^^^^^^^^       |
          |     Serial0, Forward/Sparse-Dense, 02:36:03/00:00:00        |
          |     Ethernet1, Prune/Sparse-Dense, 02:36:03/00:01:08        |
          ---------------------------------------------------------------
 
 
 
 

Sample output from command "debug ip pim auto-rp"

This command displays the transmission and reception of all Auto-RP messages. By default these messages are refreshed every 60 seconds.

The following is a sample debug trace on an RP-mapping agent:

00:01:08: Auto-RP: Build RP-Announce for 172.21.24.17, PIMv2/v1, ttl 8, ht 181
00:01:08: Auto-RP:  Build announce entry for (239.1.1.0/24)
00:01:08: Auto-RP: Send RP-Announce packet on Ethernet2
00:01:08: Auto-RP: Send RP-Announce packet on Ethernet1
00:01:08: Auto-RP: Received RP-announce, from 172.21.24.17, RP_cnt 1, ht 181
00:01:08: Auto-RP: Added with (239.1.1.0/24, RP:172.21.24.17), PIMv2 v1
00:01:08: Auto-RP: Build RP-Discovery packet
00:01:08: Auto-RP:  Build mapping (239.1.1.0/24, RP:172.21.24.17), PIMv2 v1,
00:01:08: Auto-RP:  Build mapping (239.1.2.0/24, RP:172.21.24.28), PIMv2 v1.
00:01:08: Auto-RP:  Build mapping (239.1.3.0/24, RP:172.21.24.9), PIMv2 v1.
00:01:08: Auto-RP: Send RP-discovery packet on Ethernet2 (3 RP entries)
00:01:08: Auto-RP: Send RP-discovery packet on Loopback0 (3 RP entries)
00:01:08: Auto-RP: Send RP-discovery packet on Ethernet1 (3 RP entries)
00:01:08: Auto-RP: Build RP-Announce for 172.21.24.11, PIMv2/v1, ttl 8, ht 181
00:01:08: Auto-RP:  Build announce entry for (239.1.1.0/24)
00:01:08: Auto-RP: Send RP-Announce packet on Ethernet2
00:01:08: Auto-RP: Send RP-Announce packet on Ethernet1
00:01:08: Auto-RP: Received RP-announce, from 172.21.24.11, RP_cnt 1, ht 181
00:01:08: Auto-RP: Added with (239.1.1.0/24, RP:172.21.24.11), PIMv2 v1
00:01:08: Auto-RP: Received RP-announce, from 172.21.24.17, RP_cnt 1, ht 181
00:01:08: Auto-RP: Update (239.1.1.0/24, RP:172.21.24.17), PIMv2 v1
00:01:08: Auto-RP: Received RP-discovery, from ourselves (171.69.10.13), ignored
00:01:08: Auto-RP: Received RP-discovery, from ourselves (172.21.24.17), ignored
 
 
 

5. Common errors in Auto-RP configurations.

     a) Inconsistent sparse or dense mode definitions for the two Auto-RP groups: 224.0.1.39 and 224.0.1.40;

    This is often caused by inconsistent configuration of static RPs across the network, e.g. some routers have "ip pim rp-address w.x.y.z" configured while others don't have this command. [Or even worse, some interfaces are in sparse-dense mode while others are in sparse mode. But that is easy to spot, so we won't say more about it.]
    The following example shows how an inconsistent static-RP configuration breaks the RP-discovery group 224.0.1.40, and how this can be safeguarded against by the use of the "ip pim accept-rp auto-rp" command. Assume all interfaces are in sparse-dense mode.
               -----------
               | a.b.c.d | "default RP" (statically configured
               -----------                on some routers)
            From the RP-mapping agent
                   |
                  \|/
               ------------------
               |      X         | no static RP
               ------------------
              /e0                \e1
            /                     \
           |                       |            "ip pim rp-address a.b.c.d <acl>"
       ---------                 ---------
       |  Y    | no static RP    |   Z   |  static RP is configured
       ---------                 ---------
          |                          |
       ---+-----                 ----+-----
 
    Solution:

   (1) Remove the "ip pim rp-address a.b.c.d <acl>" command from router "Z";
   (2) To protect against misconfigured downstream routers, configure "ip pim accept-rp auto-rp" on all routers. This command says "accept all RPs that are in the Auto-RP cache, and reject all that are not". Since 224.0.1.39 and 224.0.1.40 are never considered to be advertised via Auto-RP, joins to any RP for these two groups will be rejected. A configuration sketch follows.
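
A sketch of the fix, using the placeholder names from the drawing above:

   ! On router "Z" only -- remove the inconsistent static RP:
   no ip pim rp-address a.b.c.d
   !
   ! On every PIM router in the domain -- the safeguard:
   ip pim accept-rp auto-rp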
     
 

6. Differences in Auto-RP in 12.0 [negative group prefix].

In 12.0[X], a "negative prefix" capability was added to the Auto-RP code. When a deny clause appears in the access-list that defines a group address range for a candidate RP, that group prefix is called a "negative prefix". A "negative prefix" is the opposite of a "positive prefix" (i.e. one defined in a "permit" clause), in that it makes all groups in the range be treated in dense mode.

The following sketch shows a negative prefix for 239.255.255.0/24 (the access-list number and interface name are placeholders):
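
   access-list 6 deny   239.255.255.0 0.0.0.255
   access-list 6 permit 224.0.0.0 15.255.255.255
   ip pim send-rp-announce Ethernet0 scope 15 group-list 6

With this configuration the candidate RP advertises 239.255.255.0/24 as a negative prefix, so groups in that range are treated in dense mode throughout the network, while the rest of the class-D space is served by this RP in sparse mode.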
 

So, be careful not to configure gratuitous "deny" clauses in the access-list for the "ip pim send-rp-announce" command.
 
 

7. How to configure local RPs for an administratively scoped zone that is a subregion of my domain, which already has an RP for global groups ?

This task is accomplished by configuring RP(s) and RP-mapping agent(s) inside the admin-scoped zone, plus TTL boundaries that contain the local Auto-RP messages while still letting the global Auto-RP discovery packets through.

Because Auto-RP uses 224.0.1.39 and 224.0.1.40, which are not in the administratively scoped address range (239/8), the UDP-encapsulated Auto-RP messages need to be selectively passed through the administratively scoped boundaries. This is done by setting up ttl boundaries and configuring the different RP-mapping agents with different ttl scopes.

The following example illustrates a configuration of two RPs: "RP1" serving 224.0.0.0/4 for the entire PIM domain, and "RP2" serving 239.255.0.0/16 for an administratively scoped zone (a subregion of the domain) bordered by "Boundary1".
 

      PIM domain border, boundary for
      239/8, 224.0.1.39, 224.0.1.40
            .
            .
            |
"Global RP" |
+----------------------+access-list 1 permit 224.0.0.0 15.255.255.255
| "RP1" for 224.0.0.0/4|
| RP-mapping agents    |ip pim send-rp-announce RP1 scope 15 group-list 1
| for RP1              |ip pim send-rp-discovery scope 64
+----------------------+ip pim accept-rp auto-rp
           |
           .
           .
           .
Another _/   \
zone           \
                | ethernet2
 ------------------------- access-list 2 deny 239.255.0.0 0.0.255.255
 | "Boundary1"           | access-list 2 permit 224.0.0.0 15.255.255.255
 | Admin-scoped boundary | interface ethernet2
 | router for 239.255/16 |   ip pim sparse-dense-mode
 -------------------------   ip multicast ttl-threshold 16
          |                  ip multicast boundary 2
          .                ip pim accept-rp auto-rp
          .
          . ethernet4
 -------------------------  access-list 3 permit 239.255.0.0 0.0.255.255
 | "RP2" for local admin |
 | zone 239.255/16       |  ip pim send-rp-announce ethernet4 scope 15 group-list 3
 | RP-mapping agent      |  ip pim send-rp-discovery scope 15
 | only for RP2          |  ip pim accept-rp auto-rp
 -------------------------

In the above configuration:

  1) The global RP-mapping agent sends RP-discovery messages with "scope 64", so they cross the ttl-threshold of 16 at "Boundary1" and reach the routers inside the local zone;
  2) "RP2" and the local RP-mapping agent use "scope 15", so their RP-announce and RP-discovery messages cannot cross the ttl-threshold of 16 and stay inside the admin-scoped zone;
  3) The multicast boundary (access-list 2) keeps 239.255/16 traffic inside the zone, while letting the rest of the class-D space, including 224.0.1.39 and 224.0.1.40, pass through.

For this configuration to work, make sure there is no "leakage" in the administrative boundary or ttl boundary. If there is another path connecting the global region with the local admin-scoped zone on which no boundaries are set up, local Auto-RP packets may leak through and cause unexpected effects.
 
 

8. A few frequently reported problems

8.1 Missing group-to-RP-mapping entries in the Auto-RP cache

This is almost always caused by forwarding problems for the two Auto-RP groups, 224.0.1.39 and 224.0.1.40. Follow the steps outlined in the section "It does not work" to find where the RP-announce or RP-discovery packets are being lost.

8.2 Large numbers of AUTORP-5-MAPPING log messages

This is an important message that should not be neglected. Each time this message appears, it means an RP-mapping entry that had disappeared has just reappeared. In steady state, RP-mapping entries should be constantly up, indicated by long uptimes in "show ip pim rp mapping" output.

This problem should be treated in the same way as 8.1: it is almost certainly caused by forwarding problems for the two Auto-RP groups. Follow the steps outlined in the section "It does not work" to narrow the problem down to one of the two Auto-RP groups, 224.0.1.40 or 224.0.1.39. Then debug it as you would any other multicast group.

NOTE: There was a known issue (CSCdm20975) with the support of the Auto-RP groups on routers configured for MDS (Multicast Distributed Switching) --- "MDS + SPD == broken Auto-RP". It was first fixed in 12.0(4.0.4)S, 11.2(15)GS04. The workaround is to disable SPD, or MDS.
 

8.3 With multiple RP-mapping agents, the RP flip-flops on a DR

Check the output of "show ip pim rp mapping" on the DR router, and see if the mapping for the affected group range alternates between different RPs, with the "Info source" alternating between the RP-mapping agents. That would indicate the RP-mapping agents have inconsistent RP-mapping caches --- usually because one of them is missing RP-announce messages from a candidate RP. In that case, debug the forwarding of 224.0.1.39 toward that mapping agent as described in the section "It does not work".
 
Last Modifier: Liming Wei (lwei@cisco.com)
 Last modified: 4/2/99