A BRIEF HISTORY OF INTERNET MULTICAST
22 Jun 2000
John Zwiebel

An Italian student wanted to know what the MBONE was. This was my reply. It is not intended to be comprehensive, nor necessarily totally accurate. The people named are those I thought should be mentioned, but they certainly shouldn't be considered the only ones who had anything to do with making internet multicast a reality.

FWIW: The MBONE was an experiment that used DVMRP and DVMRP tunnels, generally between unix machines running mrouted, to prove that multicasting over the internet would work. It was very successful. However, DVMRP is a dense-mode protocol, which means data from any source has to be flooded to the boundaries of the domain to create state. That doesn't scale when the domain is the entire internet.

It was recognized that a sparse-mode protocol needed to be developed, though it wasn't (and IMHO still isn't) clear that a dense-mode protocol would not be useful. A group of folks began work on PIM. One of the goals was to not have to carry around network metrics (which is why it's called "Protocol Independent Multicast"); instead, PIM uses the unicast routing protocol to make its RPF (reverse path forwarding) decision (there's a rough sketch of that check below). PIM does both dense-mode and sparse-mode forwarding. Sparse-mode traffic is forwarded only to receivers that specifically subscribe to the data (i.e., if no one asks for it, it doesn't go anywhere).

However, PIM, like DVMRP, does not scale to the entire internet. The IETF is working on a protocol called BGMP which is supposed to establish a tree of domains, or a routing hierarchy. It seems too complicated to me to be useful, but I just have to make the stuff work, I don't have to like it. ;-)

Because of this complexity, and because BGP was being enhanced to carry routes for address families other than just IPv4, MBGP was developed. At one time this stood for "Multicast BGP", but in reality it was just the first address family introduced to BGP4+ (I could be wrong about it being first). Now MBGP is "Multiprotocol BGP". MBGP provides a way of separating the internet into multicast domains, which in turn means the DVMRP tunnels that made up the MBONE are no longer needed. Now multicast data is delivered natively over the internet. Sometimes this is still referred to as the MBONE, and that's fine, as long as it is clear that tunnels are no longer required.

John Meylor set up an AS (10888) to support this new paradigm. This AS consisted of routers in 3 locations connected via GRE tunnels, and it ran in dense-mode. Enterprises and ISPs that wanted to send multicast natively could peer at these three "MIX"s (Multicast Internet Exchanges) by placing their RP on the MIX. This was necessary because there was no way to announce to all domains which sources were sending to a group. Obviously this meant flooding data from every source on the internet into AS10888, which of course doesn't scale any better than the DVMRP tunnels (well, maybe slightly better; that could be debated).

So a protocol called MSDP (Multicast Source Discovery Protocol) was promulgated. It provides a way of connecting the PIM sparse-mode RPs of the various sparse-mode domains together, allowing each RP to announce which sources are sending to which multicast group. These announcements are called SAs, or Source-Active messages. AS10888 was dismantled. Now most multicast traffic is forwarded natively over the internet using a combination of PIM/MBGP/MSDP, though there are still many places that use DVMRP and some that use MOSPF.
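Just to make the RPF idea concrete, here is a rough Python sketch of the check a PIM router performs. The routing table, interface names and addresses are made up for illustration; a real router obviously does this in its forwarding code and handles far more cases than this.

    import ipaddress

    # Hypothetical unicast routing table: prefix -> interface used to reach it.
    UNICAST_ROUTES = {
        ipaddress.ip_network("10.1.0.0/16"): "eth0",
        ipaddress.ip_network("192.0.2.0/24"): "eth1",
        ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
    }

    def rpf_interface(source_ip: str) -> str:
        """Longest-prefix match toward the source, just as unicast routing would do."""
        source = ipaddress.ip_address(source_ip)
        matches = [net for net in UNICAST_ROUTES if source in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return UNICAST_ROUTES[best]

    def rpf_check(source_ip: str, arrival_interface: str) -> bool:
        """Accept a multicast packet only if it arrived on the interface the
        unicast table says leads back toward the source; otherwise drop it,
        which is what keeps multicast forwarding loop-free."""
        return rpf_interface(source_ip) == arrival_interface

    # A packet from 10.1.2.3 arriving on eth0 passes the check;
    # the same packet arriving on eth1 fails and would be dropped.
    assert rpf_check("10.1.2.3", "eth0")
    assert not rpf_check("10.1.2.3", "eth1")

The point of "protocol independence" is simply that the table consulted above can be built by any unicast routing protocol; PIM doesn't care which one.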
Which protocol you use depends on your specific network requirements, personal preference, and sometimes your vendor. I understand that there are 15 vendors offering PIM right now, but I'd be hard pressed to name them. There are now 12+ MIXs throughout the world. If you go to www.nanog.org and look at the slides presented at the last meeting (this month), you'll find some additional information.

There have been alternate proposals for how to deliver multicast packets over the internet, including CBT, SM (Simple Multicast) and EXPRESS. CBT wanted to use shared trees, so that all multicast traffic would be forwarded to a core router and then distributed down a common path rooted at that core. SM is more or less the same thing. EXPRESS started from the idea that most multicast traffic would come from a single source, like TV, and wanted to build source-only trees.

PIM has always used a shared tree to initially deliver data to receivers. The RP (rendezvous point) is known to all routers within a domain, so any router that finds out it has a directly connected member can join the shared tree. Once the RP delivers data down the shared tree, the last-hop router can choose to join the source-only tree. So PIM supports both the shared tree and the shortest-path tree (there's a sketch of that switchover below).

Unfortunately, the shared trees were unidirectional, meaning traffic flowed only from the RP down to the receivers. The first-hop router would "register", or tunnel, multicast packets to the RP for all the sources it served, which puts a huge load on the RP. So bi-directional PIM will soon be available, in which traffic can flow both up and down the shared tree. So far this only works for intradomain multicast, but since the RP becomes "virtual" (in bidir PIM the RP doesn't do anything and no longer has to be a physical box), the objection the ISPs had to depending on an RP in some other ISP may disappear. I'm hoping that some smart guy will propose a way of arbitrarily assigning an address to be used as this virtual RP, which could be placed anywhere. In this way, a web page could be set up that lets hosts tell their last-hop router where the "RP" (remember, it no longer exists) is located. This would meet the SM requirements that Radia Perlman and Tony Ballardie were championing. You can find more on the IETF web site in these documents:

    draft-kouvelas-pim-bidir-new-00.txt
    draft-ietf-pim-bidir-00.txt

Tom Pusateri generated quite a bit of excitement when he brought up the idea of single-source, or source-specific, multicast via PIM at the Dec 99 IETF. This took the ideas promulgated in EXPRESS and combined them with IGMPv3 (which allows a host to tell its router not only which multicast group it wants to listen to but also which specific sources it wants to hear from). Dave Cheriton and Hugh Holbrook had first suggested EXPRESS. Remember, EXPRESS wants to work with source trees, which PIM supports, so now there is going to be PIM-SSM, which you can read about in these IETF documents:

    draft-bhaskar-pim-ss-00.txt
    draft-bhattach-diot-pimso-00.txt
    draft-sandick-pimsm-ssmrules-00.txt

So PIM has become the "garbage man" for providing ways to forward multicast traffic. It lets you choose the forwarding model on a per-group basis, so you can run every type of forwarding at the same time while configuring only one protocol. Of course, if something is a "jack of all trades" it may not be the best for any given forwarding model, so I expect that new protocols will come along soon.
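To illustrate the shared-tree and shortest-path-tree behaviour described above, here is a rough Python sketch of what a last-hop router does. The class, the send_join helper, the RP address and the group address are all made up for illustration and aren't taken from any real implementation.

    from dataclasses import dataclass, field

    RP_ADDRESS = "203.0.113.1"   # assumed rendezvous point for the domain

    def send_join(tree_root: str, source: str, group: str) -> None:
        # Placeholder for sending a PIM Join toward tree_root.
        print(f"PIM Join ({source},{group}) sent toward {tree_root}")

    @dataclass
    class LastHopRouter:
        group: str
        joined_sources: set = field(default_factory=set)

        def member_appears(self) -> None:
            """A directly connected host joined the group (via IGMP), so join
            the shared tree: a (*,G) join sent toward the RP."""
            send_join(RP_ADDRESS, "*", self.group)

        def data_arrives(self, source: str) -> None:
            """The first packet from this source came down the shared tree; now
            switch to the shortest-path tree by joining (S,G) toward the
            source itself."""
            if source not in self.joined_sources:
                self.joined_sources.add(source)
                send_join(source, source, self.group)

    router = LastHopRouter(group="224.1.1.1")
    router.member_appears()               # (*,G) join toward the RP
    router.data_arrives("198.51.100.7")   # switch to the (S,G) shortest-path tree

In a real router the switchover is usually driven by a configurable traffic threshold, and once the (S,G) join succeeds the router also prunes that source off the shared tree so the data isn't delivered twice.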
One of the ideas that needs to be looked at is the way sources are handled. The original idea was that any host anywhere could start sending multicast traffic at any time, and it was the network's responsibility to handle it. This made things easy for the host but very difficult for the network. Over many bottles of beer, the conclusion always seems to be that some kind of CTS/RTS method of controlling which sources are allowed to send to which groups must be developed. The problem is that getting that functionality into hosts will be difficult. It always "seems" that the network has to do all the work. :-)

You might also look at:

    http://www.ipmulticast.com
    ftp://ftp-eng.cisco.com
    www.bitcasting.com
    www.marratech.com
    www.arcommunications.com

I find it quite funny that broadcast.com is now also mbone.org. I don't know how they did that. But they are committed to multicast and have set up their own MIX to provide multicast content. The last "Victoria's Secret" webcast was multicast, and this time it "worked". There are many other content providers also working on multicast. It is anticipated that, since SSM requires nothing to be upgraded other than the last-hop router, multicast will start taking off.

The Netaid concert was multicast in Oct '99. Unfortunately, I don't think anyone knew about the concert, but if you did, and had multicast access to an ethernet, you could have watched two channels of full-screen, "nearly as good as TV" video transmitted from the University of Oregon using IP/TV. UofO is also working on SSM at http://limestone.uoregon.edu/cgi-bin/ssm.cgi, and on Saturday 24 Jun the Prefontaine Classic will be shown for 24 hours (see the release below). I understand that there are going to be several places sourcing the Olympics this summer. Look for them. Note, Prefontaine is going to be MPEG-2, so buy more bandwidth. :-)

Here are some more sites you might find useful:

    http://www.talarian.com/
    www.merit.edu/~mbone/index/titles.html
    www.tascnets.com/mist/doc/mcpCompare.html
    www.cs.columbia.edu/~hgs/rtp/
    www.maoz.com/~maddogs/

--------------------------------------------------
From llynch@darkwing.uoregon.edu Thu Jun 22 11:26:55 2000
cc: shep@cisco.com
Subject: Multicast News Item

University Of Oregon To 'Multicast' Prefontaine Classic

GO ONLINE FOR FULL COVERAGE OF THE PREFONTAINE CLASSIC

While television viewers will be able to watch only one hour of the June 24 Prefontaine Classic, Internet viewers will be able to see the entire competition, and those served by multicast-capable networks can get full broadcast quality. The University of Oregon Computing Center will team up with Cisco Systems to make the event available at full broadcast quality over the Internet2 network. Internet2 is a collaborative project among universities, government and industry partners to develop advanced Internet applications that are not viable using today's Internet technology.

"In Oregon, institutions with Internet2 access include UO, OSU, PSU, EOU, SOU, OIT and WOU," says Joanne Hugi, director of the University Computing Center. "Individuals with direct access to these institutions' networks and the right hardware and software will see the entire event at full broadcast quality."

The UO will multicast ABC-TV's video and audio feed at true broadcast quality from the crack of the first starter's pistol to the final crossing of a finish line, using what's called MPEG-2 technology.
This technology takes advantage of the expanded broadband capabilities of the Internet2. UO Computing Center staff will broadcast the Prefontaine Classic through the Oregon GigaPoP, the state's access point to the powerful fiber-optic network linking all Internet2 participants. UO has operated the Oregon GigaPoP since January 1999.

"With this multicast and many other projects we are trying to extend the usefulness of the world's increasingly powerful computer networking capabilities," Hugi says. "It is a big effort and in this case the UO Computing Center is working closely with Cisco Systems and Nike to make this multicast possible."

For those who don't have the computing power to receive the hefty MPEG-2 stream of information, but who nonetheless have direct access to Internet2, UO Computing will also be sending out lower-bandwidth, "near broadcast quality" MPEG-1 and H.261 streams. Downloadable software and instructions for how to access the streaming multicast of the Prefontaine Classic are available at http://videolab.uoregon.edu/. Modem users with regular Internet access can watch a still lower-quality version of the track and field events in RealMedia format, accessible at http://www.nike.com/pre/.

"Right now seeing a track meet in MPEG-2 format is a fairly advanced use of networked computing, but things are moving forward so fast. In a few years we expect this sort of opportunity to be nothing out of the ordinary," Hugi says.

Web References:

    http://videolab.uoregon.edu/
    http://nike.com/pre/
    http://preclassic.com/