Because Token Ring supports only a small set of functional (multicast) MAC addresses, and just three of them are available for HSRP's use.
From the Cisco documentation:
HSRP Standby IP Address Communication on All Media Except Token Ring
Because host workstations are configured with their default gateway as the HSRP standby IP address, hosts must communicate with the MAC address that is associated with the HSRP standby IP address. This MAC address is a virtual MAC address that is composed of 0000.0c07.acXX where XX is the HSRP group number in hexadecimal, based on the respective interface. For example, HSRP group 1 uses the HSRP virtual MAC address of 0000.0c07.ac01. Hosts on the adjoining LAN segment use the normal Address Resolution Protocol (ARP) process in order to resolve the associated MAC addresses.
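The virtual MAC construction described above is purely mechanical: a fixed well-known prefix plus the group number as the last octet. A minimal sketch in Python (the function name is mine, not anything from Cisco's documentation):

```python
def hsrp_virtual_mac(group):
    """Build the HSRPv1 virtual MAC address for a given group number.

    The well-known prefix 0000.0c07.ac is fixed; the final octet is
    the group number rendered in hexadecimal (groups 0-255).
    """
    if not 0 <= group <= 255:
        raise ValueError("HSRPv1 group numbers are 0-255")
    return "0000.0c07.ac%02x" % group

print(hsrp_virtual_mac(1))   # 0000.0c07.ac01
```

So group 1 yields 0000.0c07.ac01, exactly as in the example from the documentation, and group 16 would yield 0000.0c07.ac10.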
HSRP Standby IP Address Communication on Token Ring Media
Token Ring interfaces use functional addresses for the HSRP MAC address. Functional addresses are the only general multicast mechanism available. There is a limited number of Token Ring functional addresses available, and many of these addresses are reserved for other functions. These three addresses are the only addresses available for use with HSRP:
c000.0001.0000 (group 0)
c000.0002.0000 (group 1)
c000.0004.0000 (group 2)
Therefore, you can configure only three HSRP groups on Token Ring interfaces, unless you configure the standby use-bia parameter.
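Note the pattern in the three functional addresses: each HSRP group owns one bit of the middle 16-bit word (0001, 0002, 0004), which is exactly why the group count tops out at three. A hedged sketch of that mapping (function name is mine for illustration):

```python
def token_ring_hsrp_functional_address(group):
    """Return the Token Ring functional address for HSRP groups 0-2.

    Group n sets bit n of the middle 16-bit word (1, 2, 4); with no
    further functional-address bits available to HSRP, only three
    groups can exist per Token Ring interface.
    """
    if group not in (0, 1, 2):
        raise ValueError("only HSRP groups 0-2 exist on Token Ring")
    return "c000.%04x.0000" % (1 << group)
```

Calling it with groups 0, 1 and 2 reproduces the three addresses listed above; anything else raises an error, mirroring the configuration limit.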
You may wish to refer to RFC 1469 – IP Multicast over Token-Ring Local Area Networks to fully understand functional addressing.
FCoTR lovers need not be concerned: this limitation barely matters, because multicast is not generally needed except for very simple endpoint discovery mechanisms. Since FCoTR networks are commonly dedicated storage networks, three multicast addresses are more than adequate to perform auto-configuration discovery requests on an FCoTR network.
As you might have noticed, everyone is talking about Data Centers lately, and all the new “revolutionary” networking technologies are targeted at this segment. The reason is simple: server virtualization (not to mention the vapor-word) will forever change the networking landscape, and networking engineers might get badly hurt if caught unprepared.
The traditional data center had a networking infrastructure (“us”) connecting servers (“them”) to the rest of the world. The servers occasionally had some weird connections to external disks (called SAN to differentiate it from LAN) that we tried to avoid as much as possible (after all, they were the responsibility of the storage team). The focus of networking and server engineers was widely different; Greg Ferro published one of the best illustrations of this conceptual gap.
Server virtualization and LAN/SAN convergence changed all that, and the talk has already shifted from what and why to how. All of a sudden, physical servers contain virtual bridges (oops, I have to say switches these days) and network and storage traffic are converging on the same Gigabit Ethernet infrastructure. The situation is quite similar to the early days of VoIP deployments, with a few significant differences:
- This time, the server teams are the “heroes”. Server virtualization is where most of the savings will be. You will just have to buy even more expensive boxes to support FCoE requirements.
- With converged LAN/SAN/server landscape, a few people will become redundant. In most IT organizations the server teams are larger than the networking teams. Guess what’s likely to happen and which team will be merged into another one.
- Many server engineers have traditionally viewed networks as an obstacle between them and the end users. Simplifying the network by (for example) reducing the whole data center to a single transparent L2 domain with routing at layer 2 would be a dream come true. Why do you think so many vendors tout the advantages of TRILL or its fabric equivalents?
Networking might still be the most important IT infrastructure, but that fact will not help you (or anyone else) when a huge data center with layer-2 protocol-independent brouting melts down (even an “I told you so” won’t be of much use).
Regardless of what your relationship with the server team is at the moment, you’re one of their best assets (although they might not know that yet). Deploying converged LAN/SAN infrastructure and designing the right mix of L2/L3 switching that will survive the unexpected failures is just the right job for you. However, to make yourself truly useful, you have to grasp the big picture, understand the impact (and relevance) of the emerging DC technologies, start speaking their lingo and start working with them (not provisioning switch ports for them) to help them solve their problems.
A lot of you are already there (and I know I’m preaching to a very large choir); if you’re still missing a few bits, you might consider registering for my Data Center 3.0 for Networking Engineers webinar.
It’s pretty well known that I am not a big believer in FibreChannel, or even worse, FibreChannel over Ethernet.
But J Michel Metz (from his blog: “Most recently J has used his skills as a Solutions Architect/Marketing Manager to espouse and promote Fibre Channel over Ethernet (FCoE), its promise and future in the data center, as well as promote over 20 joint QLogic/HP products with international routes-to-market as a focus.”) has been recruited by Cisco to go out into the community and “evangelize” the FCoE protocol. So, we have a blog post on Cisco’s Data Center blog that claims that FCoE standards are all packed up and ready to go.
Here is the link:
Will The Real FCoE Standards Please Stand Up
Read that. Yep, feels like misdirection, doesn’t it? Like a magic trick: “No, don’t look at my right hand, look at MY LEFT HAND.”
“Oh, NO they’re not”
While it’s true that the FibreChannel standards that make up the least significant part of the FibreChannel over Ethernet protocol have been complete for a long time, you can’t claim moral leadership on that basis, nor claim that, by extension, all the other standards are under control. Since the middle of 2009, Cisco has been claiming that the DCB Ethernet standards will soon be ready. Now it’s late 2010 and still no sign of completion.
So, we have a car and no wheels. Or gearbox. Just because Cisco chose to put out a range of products that use a version of FCoE doesn’t validate the technology. Neither does the half a billion dollars that Cisco has spent so far in technology and marketing. It’s certainly enough to create some buzz and force partners to do something about it.
The EtherealMind View
FCoE is clearly struggling. Cisco is the only vendor throwing full support into it. I believe that other vendors feel pressured by Cisco to add support. Whether indirectly, because of Cisco’s dominant market position, or directly, as Cisco uses partnership agreements to lock in technology adoption, really doesn’t matter. I say this because only Cisco’s partners such as EMC and NetApp are doing anything. CNAs are coming from two or three companies, but almost no one else. HP, IBM, Brocade? Yeah, it’s coming, they say. I’ve been saying similar things since April 2009 in The Case Against FibreChannel.
In the meantime, the momentum behind NFS and iSCSI as viable storage networking tools is growing. Next year, SAN booting over iSCSI is expected to be widely available, as will CNAs with high-speed iSCSI and NFS performance. Check Microsoft and Intel Push One Million iSCSI IOPS. There are reasons why Microsoft has not released a native FCoE client for Windows Server (creating a lossless network driver is difficult for Microsoft to achieve).
Don’t let Cisco roll out the astroturf and hide the fact that DCB standards are not here, and won’t be here anytime soon. Cisco wants you to use its proprietary, pre-standard technology today so it can claim some ‘leadership’.
And when the standards do come, you will need to forklift all of your Cisco kit out of your data center to get the new features. That’s the prize for Cisco, and that’s what drives this ‘marketing exercise’. FCoE has a place in just a few data centers of the future, but for the vast majority, don’t waste your money on it. Get proven, reliable storage networking using NFS and iSCSI, and use your existing network equipment. Plan for migration as your network grows, and DCB will provide you the tools to scale IP storage protocols.
Don’t listen to the paid marketing message from Cisco. Make your own decision.