Multicast Quick-Start Configuration Guide


IP multicasting is a bandwidth-conserving technology that reduces traffic by simultaneously delivering a single stream of information to thousands of corporate recipients and homes. Applications that take advantage of multicast include video conferencing, corporate communications, distance learning, and distribution of software, stock quotes, and news. This document discusses the basics of configuring multicast for various networking scenarios.

Dense Mode

Cisco recommends using Protocol Independent Multicast (PIM) sparse mode, particularly Auto-RP, wherever possible, especially for new deployments. However, if dense mode is desired, configure the global command ip multicast-routing and the interface command ip pim sparse-dense-mode on each interface that needs to process multicast traffic. The common requirement for all configurations in this document is to enable multicast routing globally and configure PIM on the interfaces. As of Cisco IOS® Software Release 11.1, the interface commands ip pim dense-mode and ip pim sparse-mode are combined into the single interface command ip pim sparse-dense-mode. In this mode, the interface is treated as dense mode if the group is in dense mode. If the group is in sparse mode (for example, if an RP is known), the interface is treated as sparse mode.

Note: In the examples throughout this document, the "Source" represents the source of multicast traffic and the "Receiver" represents the receiver of multicast traffic.

Router A Configuration
ip multicast-routing 

interface ethernet0 
ip address <address> <mask> 
ip pim sparse-dense-mode 

interface serial0 
ip address <address> <mask> 
ip pim sparse-dense-mode 

 
Router B Configuration
ip multicast-routing 

interface serial0 
ip address <address> <mask> 
ip pim sparse-dense-mode 

interface ethernet0 
ip address <address> <mask> 
ip pim sparse-dense-mode 

 

Sparse Mode with One RP

In this example, Router A is the RP, which is typically the router closest to the source. The RP knows that it is the RP because all the other routers point to it as their RP and subsequently send PIM register messages to it. You can configure multiple RPs, but there can be only one RP per specific group.


 
 
Router A Configuration
ip multicast-routing 


interface ethernet0 
ip address <address> <mask> 
ip pim sparse-dense-mode 

interface serial0 
ip address 1.1.1.1 255.255.255.0 
ip pim sparse-dense-mode 


 
Router B Configuration
ip multicast-routing 
ip pim rp-address 1.1.1.1 


interface serial0 
ip address <address> <mask> 
ip pim sparse-dense-mode 

interface ethernet0 
ip address <address> <mask> 
ip pim sparse-dense-mode 

 

Sparse Mode with Multiple RPs

In this example, Source-A is sending to 224.1.1.1, 224.1.1.2, and 224.1.1.3. Source-B is sending to 224.2.2.2, 224.2.2.3, and 224.2.2.4. You could have one router, either RP 1 or RP 2, be the RP for all groups, but if you want different RPs to handle different groups, you need to configure all routers with the groups that each RP serves. With this type of static RP configuration, all routers in the PIM domain must have the same ip pim rp-address <address> <acl> commands configured. You can also use Auto-RP, which is easier to configure, to achieve the same setup.


 
 
RP 1 Configuration
ip multicast-routing 
ip pim rp-address 2.2.2.2 3 

access-list 3 permit 224.2.2.2 
access-list 3 permit 224.2.2.3 
access-list 3 permit 224.2.2.4 

 
RP 2 Configuration
ip multicast-routing 
ip pim rp-address 1.1.1.1 2 


access-list 2 permit 224.1.1.1 
access-list 2 permit 224.1.1.2 
access-list 2 permit 224.1.1.3 

 
Configuration for Routers 3 and 4
ip multicast-routing 
ip pim rp-address 1.1.1.1 2 
ip pim rp-address 2.2.2.2 3
 

access-list 2 permit 224.1.1.1 
access-list 2 permit 224.1.1.2 
access-list 2 permit 224.1.1.3 
access-list 3 permit 224.2.2.2 
access-list 3 permit 224.2.2.3 
access-list 3 permit 224.2.2.4 

 

Auto-RP with One RP

With Auto-RP, you configure the RPs themselves to announce their availability as RPs and mapping agents. The RPs send their announcements to 224.0.1.39. The RP mapping agent listens for these announcement packets from the RPs, then sends RP-to-group mappings in discovery messages addressed to 224.0.1.40. These discovery messages are what the rest of the routers use to build their RP-to-group maps. You can use one RP that also serves as the mapping agent, or you can configure multiple RPs and multiple mapping agents for redundancy.


 
 
Router A Configuration
ip multicast-routing 
ip pim send-rp-announce ethernet0 scope 16 
ip pim send-rp-discovery scope 16 


interface ethernet0 
ip address <address> <mask> 
ip pim sparse-dense-mode 

interface serial0 
ip address <address> <mask> 
ip pim sparse-dense-mode 

 
Router B Configuration
ip multicast-routing 


interface ethernet0 
ip address <address> <mask> 
ip pim sparse-dense-mode 

interface serial0 
ip address <address> <mask> 
ip pim sparse-dense-mode 

 

Auto-RP with Multiple RPs

The access lists in this example allow each RP to serve as an RP only for the groups you want. If no access list is configured, an RP is available as an RP for all groups. If two RPs announce their availability for the same group(s), the mapping agent(s) resolve the conflict using the "highest IP address wins" rule.

To influence which router is the RP for a particular group when two RPs are announcing for that group, configure each router with a loopback interface. Place the higher IP address on the preferred RP, then use the loopback interface as the source of the announce packets; for example, ip pim send-rp-announce loopback0. When multiple mapping agents are used, they listen to each other's discovery packets; the mapping agent with the highest IP address wins and becomes the only forwarder of discovery messages to 224.0.1.40.
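For example, the preferred RP can source its announcements from a loopback interface. A minimal sketch (the address, scope, and group list are placeholders; the loopback also runs PIM so its address is a valid multicast source of the announcements):

interface loopback0 
ip address <highest address in the network> <mask> 
ip pim sparse-dense-mode 

ip pim send-rp-announce loopback0 scope 16 group-list 1 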


 
 
RP 1 Configuration
ip multicast-routing 
ip pim send-rp-announce ethernet0 scope 16 group-list 1 
ip pim send-rp-discovery scope 16 


access-list 1 permit 239.0.0.0 0.255.255.255 

 
RP 2 Configuration
ip multicast-routing 
ip pim send-rp-announce ethernet0 scope 16 group-list 1 
ip pim send-rp-discovery scope 16 


access-list 1 deny 239.0.0.0 0.255.255.255 
access-list 1 permit 224.0.0.0 15.255.255.255 

Additional details on Auto-RP can be found here: ftp://ftpeng.cisco.com/ipmulticast/autorp.html
 

DVMRP

Your Internet service provider (ISP) may suggest that you create a DVMRP tunnel to them in order to gain access to the multicast backbone in the Internet (mbone). The minimum commands to configure a DVMRP tunnel are shown below:
interface tunnel0  
ip unnumbered <any pim interface>  
tunnel source <address of source>  
tunnel destination <address of ISPs mrouted box>  
tunnel mode dvmrp  
ip pim sparse-dense-mode
Typically, the ISP has you tunnel to a UNIX machine running mrouted (DVMRP). If they instead have you tunnel to another Cisco router, omit the tunnel mode dvmrp command and use the default GRE tunnel mode.

If instead of simply receiving multicast packets, you want to generate multicast packets for others on the mbone to see, you need to advertise the source's subnets. If your multicast source host address is 131.108.1.1, you need to advertise the existence of that subnet to the mbone. By default, directly-connected networks are advertised with metric 1. If your source is not directly connected to the router with the DVMRP tunnel, configure the following under interface tunnel0:

ip dvmrp metric 1 list 3  
access-list 3 permit 131.108.1.0 0.0.0.255
Note: You must include an access list with the above command to prevent advertising the entire unicast routing table to the mbone.
If your setup is similar to the one shown below and you want to propagate DVMRP routes through the domain, configure ip dvmrp unicast-routing on the serial0 interfaces of Routers A and B. This forwards DVMRP routes to the PIM neighbors, which then maintain a DVMRP routing table used for Reverse Path Forwarding (RPF). DVMRP-learned routes take RPF precedence over all other protocols, except for directly connected routes.


 

MBGP

MBGP is a basic way to carry two sets of routes: one set for unicast routing and one set for multicast routing. MBGP provides the control necessary to decide where multicast packets are allowed to flow. The routes associated with multicast routing are used by PIM to build data distribution trees. MBGP provides the RPF path, not the creation of multicast state. PIM is still needed to forward the multicast packets.


 
 
Router A Configuration
ip multicast-routing 


interface loopback0 
ip pim sparse-dense-mode 
ip address 192.168.2.2 255.255.255.0 


interface serial0 
ip address 192.168.100.1 255.255.255.0 


interface serial1 
ip pim sparse-dense-mode 
ip address 192.168.200.1 255.255.255.0 


router bgp 123 
network 192.168.100.0 nlri unicast 
network 192.168.200.0 nlri multicast 
neighbor 192.168.1.1 remote-as 321 nlri unicast multicast 
neighbor 192.168.1.1 ebgp-multihop 255 
neighbor 192.168.1.1 update-source loopback0 
neighbor 192.168.1.1 route-map setNH out 


route-map setNH permit 10 
match nlri multicast 
set ip next-hop 192.168.200.1 


route-map setNH permit 20 

 
Router B Configuration
ip multicast-routing 


interface loopback0 
ip pim sparse-dense-mode 
ip address 192.168.1.1 255.255.255.0 


interface serial0 
ip address 192.168.100.2 255.255.255.0 


interface serial1 
ip pim sparse-dense-mode 
ip address 192.168.200.2 255.255.255.0 


router bgp 321 
network 192.168.100.0 nlri unicast 
network 192.168.200.0 nlri multicast 
neighbor 192.168.2.2 remote-as 123 nlri unicast multicast 
neighbor 192.168.2.2 ebgp-multihop 255 
neighbor 192.168.2.2 update-source loopback0 
neighbor 192.168.2.2 route-map setNH out 


route-map setNH permit 10 
match nlri multicast 
set ip next-hop 192.168.200.2 


route-map setNH permit 20 

If your unicast and multicast topologies are congruent (for example, they run over the same link), the primary configuration difference is in the nlri unicast multicast command. An example is shown below:

network 192.168.100.0 nlri unicast multicast
The benefit of having MBGP running in the case of congruent topologies is that even though the traffic is traversing the same paths, different policies can be applied to unicast BGP versus multicast BGP.

Additional details on MBGP can be found here: ftp://ftpeng.cisco.com/ipmulticast.html#MBGP
 

MSDP

MSDP connects multiple PIM-SM domains. Each PIM-SM domain uses its own independent RP(s) and does not have to depend on RPs in other domains. MSDP allows domains to discover multicast sources from other domains. If you are also BGP-peering with the MSDP peer, you should use the same IP address for MSDP as you do for BGP. When MSDP does peer RPF checks, it expects the MSDP peer address to be the same address that BGP/MBGP gives it when it performs a route table lookup on the RP in the SA message. You are not required, however, to run BGP/MBGP with the MSDP peer as long as there is a BGP/MBGP path between the MSDP peers. If there is no BGP/MBGP path, and if there is more than one MSDP peer, you must use the ip msdp default-peer command. In the example below, RP A is the RP for its domain and RP B is the RP for its domain.


 
 
Router A Configuration
ip multicast-routing 


ip pim send-rp-announce ethernet0 scope 16 
ip pim send-rp-discovery scope 16 


ip msdp peer 192.168.100.2 
ip msdp sa-request 192.168.100.2 


interface serial0 
ip address 192.168.100.1 255.255.255.0 
ip pim sparse-dense-mode 

 
Router B Configuration
ip multicast-routing 


ip pim send-rp-announce ethernet0 scope 16 group-list 1 
ip pim send-rp-discovery scope 16 


ip msdp peer 192.168.100.1 
ip msdp sa-request 192.168.100.1 


interface serial0 
ip address 192.168.100.2 255.255.255.0 
ip pim sparse-dense-mode 


 

Stub Multicast Routing

Stub Multicast Routing allows you to configure remote/stub routers as IGMP proxy agents. Instead of fully participating in PIM, these stub routers simply forward IGMP messages from the host(s) to the upstream multicast router.


 
Router 1 Configuration
ip multicast-routing 
int s0 
ip pim sparse-dense-mode 
ip pim neighbor-filter 1 


access-list 1 deny 140.1.1.1 

The ip pim neighbor-filter command is needed so that Router 1 does not recognize Router 2 as a PIM neighbor. If you configure Router 1 in sparse mode, the neighbor filter is unnecessary. Router 2 must not run in sparse mode. When in dense mode, the stub multicast sources are able to flood to the backbone routers.
 
Router 2 Configuration
ip multicast-routing 
int e0 
ip pim sparse-dense-mode 
ip igmp helper-address 140.1.1.2 


int s0 
ip pim sparse-dense-mode 

 

IGMP UDLR for Satellite Links

UDLR provides a method for forwarding multicast packets over a unidirectional satellite link to stub networks that have a back channel. This is similar to stub multicast routing. Without this feature, the uplink router would not be able to dynamically learn which IP multicast group addresses to forward over the unidirectional link, because the downlink router cannot send anything back.


 
 
Uplink-rtr Configuration
ip multicast-routing 


interface Ethernet0 
description Typical IP multicast enabled interface 
ip address 12.0.0.1 255.0.0.0 
ip pim sparse-dense-mode 


interface Ethernet1 
description Back channel which has connectivity to downlink-rtr 
ip address 11.0.0.1 255.0.0.0 
ip pim sparse-dense-mode 


interface Serial0 
description Unidirectional to downlink-rtr 
ip address 10.0.0.1 255.0.0.0 
ip pim sparse-dense-mode 
ip igmp unidirectional-link 
no keepalive 

 
Downlink-rtr Configuration
ip multicast-routing 


interface Ethernet0 
description Typical IP multicast enabled interface 
ip address 14.0.0.2 255.0.0.0 
ip pim sparse-dense-mode 
ip igmp helper-address udl serial0 


interface Ethernet1 
description Back channel which has connectivity to uplink-rtr 
ip address 13.0.0.2 255.0.0.0 
ip pim sparse-dense-mode 


interface Serial0 
description Unidirectional to uplink-rtr 
ip address 10.0.0.2 255.0.0.0 
ip pim sparse-dense-mode 
ip igmp unidirectional-link 
no keepalive 

 

PIMv2 BSR

If all routers in the network are running PIMv2, you can configure a BSR instead of Auto-RP. The two mechanisms are very similar. With BSR configuration, you configure candidate RPs (similar to RP-announce routers in Auto-RP) and candidate BSRs (similar to Auto-RP mapping agents). To configure a BSR, follow these steps:

Step 1: On the candidate BSRs configure:

ip pim bsr-candidate <interface> <hash-mask-len> <pref>
Where <interface> holds the candidate BSR's IP address. It is recommended (but not required) that <hash-mask-len> be the same across all candidate BSRs. The candidate BSR with the largest <pref> value is elected as the BSR for the domain.

Example command usage is shown below:

ip pim bsr-candidate ethernet0 30 4
The PIMv2 BSR collects candidate RP information and disseminates the RP-set information associated with each group prefix. To avoid a single point of failure, more than one router in a domain can be configured as a candidate BSR.

A BSR is elected among the candidate BSRs automatically, based on the preference values configured. The routers to serve as candidate BSRs should be well connected and be in the backbone of the network, as opposed to in the dialup area of the network.

Step 2: Configure candidate RP routers. The following example shows a candidate RP, on the interface ethernet0, for the entire admin-scope address range:
access-list 11 permit 239.0.0.0 0.255.255.255
ip pim rp-candidate ethernet0 group-list 11
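Putting both steps together, one router can act as both a candidate BSR and a candidate RP. A sketch combining the example commands above (the interface, hash mask length, preference, and group range are examples, not requirements):

ip multicast-routing 
ip pim bsr-candidate ethernet0 30 4 
ip pim rp-candidate ethernet0 group-list 11 

access-list 11 permit 239.0.0.0 0.255.255.255 

interface ethernet0 
ip address <address> <mask> 
ip pim sparse-dense-mode 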

CGMP

To configure CGMP, configure the following on the router interface facing the switch:
ip pim sparse-dense-mode 
ip cgmp
Then, configure the following on the switch:
set cgmp enable  

IGMP Snooping

IGMP snooping is available as of Catalyst 5000 software release 4.1 and requires a Supervisor III card. On the router, no configuration beyond PIM is necessary for IGMP snooping. A router is still necessary with IGMP snooping, however, to provide the IGMP querier function.

The example below demonstrates how to enable IGMP snooping on the switch:

Console> (enable) set igmp enable  
IGMP Snooping is enabled.  
CGMP is disabled.
If you try to enable IGMP but CGMP is already enabled, you will see the following:
Console> (enable) set igmp enable  
Disable CGMP to enable IGMP Snooping feature.

PGM

PGM is a reliable multicast transport protocol for applications that require ordered, duplicate-free, multicast data delivery from multiple sources to multiple receivers. PGM guarantees that a receiver in the group either receives all data packets from transmissions and retransmissions or is able to detect unrecoverable data packet loss.

There are no PGM global commands. PGM is configured per interface using the ip pgm command. Multicast routing must be enabled on the router along with PIM on the interface.
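Following the requirements above, a minimal router sketch might look like the one below (the interface and addresses are placeholders, and the exact ip pgm syntax can vary by IOS release):

ip multicast-routing 

interface ethernet0 
ip address <address> <mask> 
ip pim sparse-dense-mode 
ip pgm 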
 

MRM

MRM facilitates automated fault detection in a large multicast routing infrastructure. It is designed to alert a network administrator of multicast routing problems in close to real-time.

MRM has two components: MRM tester and MRM manager. MRM Tester is a sender and/or receiver.

MRM is available in Cisco IOS 12.0(5)T and later. Only the MRM testers and managers need to be running the MRM-supported IOS version.

Make sure that the "Multicast Forwarding Network" has no access lists or boundaries that deny the MRM data and control traffic. MRM test data are UDP/RTP packets addressed to the configured group address. MRM control traffic between the sender, receiver, and manager is addressed to the 224.0.1.111 group, which all three join.
 
 
Test Sender Configuration
interface Ethernet0 
  ip mrm test-sender 

 
Test Receiver Configuration
interface Ethernet0 
  ip mrm test-receiver 

 
Test Manager Configuration
ip mrm manager test1 
 manager e0 group 239.1.1.1 
 senders 1 
 receivers 2 sender-list 1 


 access-list 1 permit 10.1.1.2 
 access-list 2 permit 10.1.4.2 

Output from the show ip mrm manager command on Test Manager is shown below:

Test_Manager# show ip mrm manager 
   Manager:test1/10.1.2.2 is not running 
     Beacon interval/holdtime/ttl:60/86400/32 
     Group:239.1.1.1, UDP port test-packet/status-report:16384/65535 
     Test sender: 
       10.1.1.2 
     Test receiver: 
       10.1.4.2
Start the test using the command shown below. The test manager sends control messages to the test sender and test receiver as configured in the test parameters. The test receiver joins the group and monitors test packets sent from the test sender.
Test_Manager# mrm start test1 
 *Feb  4 10:29:51.798: IP MRM test test1 starts ......  
Test_Manager#
To display a status report for the test manager, enter the following command:
Test_Manager# show ip mrm status
IP MRM status report cache:  
Timestamp        Manager          Test Receiver   Pkt Loss/Dup (%)       Ehsr  
*Feb  4 14:12:46 10.1.2.2         10.1.4.2        1            (4%)      29  
*Feb  4 18:29:54 10.1.2.2         10.1.4.2        1            (4%)      15  
Test_Manager#
The output shows that the receiver sent two status reports (one line each) at the time stamps shown. Each report contains one packet loss during the interval window (default of one second). The "Ehsr" value shows the estimated next sequence number from the test sender. If the test receiver sees duplicate packets, it shows a negative number in the "Pkt Loss/Dup" column.

To stop the test, enter the following command:

Test_Manager# mrm stop test1 
*Feb  4 10:30:12.018: IP MRM test test1 stops  
Test_Manager#
While the test runs, the MRM sender sends RTP packets to the configured group address at the default interval of 200 ms. The receiver monitors (expects) the same packets at the same default interval. If the receiver detects packet loss within the default window interval of five seconds, it sends a report to the MRM manager. You can display the status report from the receiver with the show ip mrm status command on the manager.

For more detailed configuration coverage on any of the above features, refer to ftp://ftpeng.cisco.com/ipmulticast/.



Last updated March 2000.