Diverging Ethernet Switch Markets

I’ve been catching up on the Juniper QFabric announcement from yesterday. While I’m still attempting to digest the ramifications of the technology, and of the methodology that it brings to the Data Centre, let’s step back and take a wider look at what this means for the Ethernet switching marketplace.

Looking backwards

For the last 20 years, or even longer, there has been just one market for Ethernet switches. We had product lines that were used for every application: a single product segment where the same Ethernet switch served in all cases, and there wasn’t much difference in implementation.

Looking back, it has been clear for a while, and perhaps the launch of QFabric highlights the issue even more than before, that there are now three distinct markets for Ethernet switches: Data Centre, Campus LAN and Service Provider.

Vendors will have product lines for each market they wish to approach. For example, Cisco has the 6500 products for the campus LAN, the Nexus product lines for the data centre (although they are attempting to position Nexus as campus), and service provider variants of the same equipment that are apparently different.

Compare this strategy with Avaya, who are positioning the ERS8600 as Campus LAN and Service Provider only. They have elected not to develop any fabric-like capabilities, but to continue with SMLT for the campus and add Shortest Path Bridging for service providers. While they will claim that SPB can be used in the data centre, I judge this unlikely to be successful. Not because SPB doesn’t work, but because they are not selling directly to data centre customers, so they are unlikely to be taken seriously. (Why? It’s my guess that they probably don’t have the money to do the R&D for silicon to support TRILL. After all, IP Telephony doesn’t need Data Centres. Right?)

What Does Ethernet Market Segmentation Mean for Us?

I guess the most obvious one is that skills learned in the campus LAN are not necessarily transferable to the Data Centre. For example, the use of TRILL in the Data Centre is significantly different from using Rapid Spanning Tree in the campus LAN, as is the use of MLAG, whether implemented as stackables, fate-shared chassis, or traditional chassis technologies. Although some of these skills overlap, there are significant differences.

Secondly, our approach to sparing and servicing will change slightly as the products in each segment differentiate further over time. This will reduce productivity and increase support costs for the networking team in general. Product diversity increases costs (it certainly does not reduce them), and therefore the not-so-obvious outcome of “data centre consolidation” is that support costs will rise for most enterprises.

When choosing Ethernet switches, you’ll need to understand the differences between product families. For example, the Cisco N7K is data centre today and may never be destined for use in Campus LANs, where the C4500 could become the standard (the C4500 Sup9 has similar performance to the C6500 Sup720). For Juniper, the ERX becomes Campus, and the QFX products will become Data Centre focussed.

For many companies, this will create some confusion, since the Campus switches can, of course, be used for connecting servers and building data centres as we do today with Spanning Tree. However, you probably won’t get Equal Cost Multi Path (ECMP), since the Ethernet silicon that supports TRILL is only found in specific models.
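To make the ECMP point concrete, here is a minimal Python sketch, purely illustrative, of the difference in link usage: a TRILL-style fabric hashes each flow onto one of several equal-cost uplinks so that all of them carry traffic, while a Spanning Tree design forwards over a single unblocked uplink and leaves the rest idle. The uplink names and flow fields are invented for the example; this is not any vendor’s implementation.

```python
import hashlib

# Illustrative only: four equal-cost uplinks from a leaf switch towards the spine.
EQUAL_COST_UPLINKS = ["uplink-1", "uplink-2", "uplink-3", "uplink-4"]


def ecmp_pick(src_mac: str, dst_mac: str, src_port: int, dst_port: int) -> str:
    """Hash the flow identifiers and pick one of the equal-cost uplinks.

    With TRILL-style ECMP every uplink carries traffic; flows are spread
    across all of them while packets within a single flow stay in order.
    """
    key = f"{src_mac}-{dst_mac}-{src_port}-{dst_port}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return EQUAL_COST_UPLINKS[digest % len(EQUAL_COST_UPLINKS)]


def spanning_tree_pick() -> str:
    """With Spanning Tree, only the single unblocked uplink ever forwards."""
    return EQUAL_COST_UPLINKS[0]  # the other three links sit idle


if __name__ == "__main__":
    flows = [("aa:aa", "bb:bb", 40000 + i, 80) for i in range(8)]
    for flow in flows:
        print(flow, "->", ecmp_pick(*flow), "| STP would use:", spanning_tree_pick())
```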

The EtherealMind View

As always, “It Depends”, but the segmentation of the Ethernet switching market is fascinating. How far can we stretch the Ethernet standard to suit different purposes and outcomes? As customers, we need to understand the differences between the market segments, and how those differences drive the features in the products, so that we purchase the right product.

Pay attention, it’s getting interesting.

  • Jonathan

    Greg,

    Great site and podcast. In your post you allude to TRILL being better suited than SPBm as a Data Center protocol; I’m just curious why you believe that?

    Also, it seems that Avaya has been promoting SPBm as a Data Center technology (in addition to Campus LAN and Service Provider); now, whether they will be taken seriously is another issue.

    A great video at NANOG about the differences and similarities of TRILL and SPB:
    The Great Debate: TRILL Versus 802.1aq (SPB) – http://goo.gl/RexL2

    Great site and keep up the good work.

    • http://etherealmind.com Greg Ferro

      IMO, SPB is targeted at service providers. The technical demand to keep the Ethernet frame unmodified means tough limits on what can be achieved, and the requirement for out-of-band communication to expose state and keep information coherent is harsh. In short, SPB consumes resources to keep the control planes functional and forces certain technical tradeoffs. The gain is that not all equipment needs to be upgraded to be interoperable.

      SP / Telcos are unable to upgrade quickly (even though that is their job) and thus tend toward lazy / cheap options. That’s SPB.

      greg

      • Jonathan

        Thanks for the input, but you did not quite answer my question; I was wondering what advantages TRILL possesses that SPBm lacks. Yes, it is true that SPB was developed from SP/Telco technologies (PLSB/PBB) and is running in SP/Telco networks to achieve segmentation of services. But if one were to classify the sole purpose of all networks (LAN, MAN and Data Center), it would be simply to transport a packet from point A to point B (or multiple point Bs for multicast); it seems like TRILL and SPBm both perform that function through encapsulation of the Ethernet frame, utilizing IS-IS to form the path/route.

        I’m trying to see how TRILL answers the Data Center questions better than SPBm. It seems to me that many of the issues that plague large data centers and Data Center Bridging (i.e. flow control, congestion and enhanced QoS) are thorns in both TRILL and SPBm, and QFabric seems to be the only thing that answers them, but we won’t see until Q3 how and at what cost.

        The goal of both SPBm and TRILL is simplifying the network (initial configuration and ongoing provisioning of services); couldn’t that simplification be advantageous in the Data Center?

        Even though SPBm was originally an SP/Telco technology, in the past 5-7 years it has been extended to address the issues in Enterprise LAN, MAN and Data Centers… I would say that any application TRILL can answer, SPBm can answer as well… Would love to hear your thoughts…

        • Des

          TRILL & DCB work very well together.

          QFabric is using BGP for the control plane, and the rest is special sauce. Could be cool, but totally proprietary, which sucks.

          Over time SPB has incorporated more and more features like TRILL’s, so the differences are becoming fewer and fewer. I think the main difference now comes down to three things:

          1.) Loop arbitration (I prefer TRILL’s hop-count model)
          2.) Some calculation methods (not exposed to most users)
          3.) Support

          TRILL is supported by Cisco, Broadcom, Marvell, Fulcrum, Brocade, IBM, Oracle, Dell
          SPB is supported by ALU, Avaya, Juniper

          Supporting both are Huawei & HP

          But, full disclosure, I am biased. I participate in the TRILL WG.

          • http://etherealmind.com Greg Ferro

            Maybe you should make time to join us on a podcast at http://packetpushers.net and tell us about it. Let me know.

  • will

    don’t want to be a stickler, but I think you meant “The EX becomes Campus…,” not the ERX. The ERX is one of their Broadband Services routers.

    the QFabric stuff looks interesting, as does Brocade’s VCS solution… interesting times ahead, that’s for sure.

    love the blog and podcast – when’s the next one going to air?

  • http://www.jeremyfilliben.com Jeremy Filliben

    Well put! I’ve been thinking the same thing over the last couple of months, but I didn’t have enough clarity in my thoughts to write it down. There is also a potential third category of ultra low latency switching for specialized needs, although I expect a lot of overlap with the DC segment.

  • Pingback: Show 34 – Breaking the Three Layer Model – Packet Pushers
