FCoE isn’t a replacement for InfiniBand; it’s a cheaper copy that customers will buy

InfiniBand is something of a forgotten protocol these days, but many of the marketing features of FCoE are directly derived from InfiniBand concepts and architecture.

Paradigms of Data Center Virtualization

For example, this page is taken from “Paradigms of Data Center Virtualization” by Cisco at Networkers 2005. If you are following Cisco UCS it should be very familiar; it is perhaps surprising that the Unified Fabric idea is not new, and that either InfiniBand or 10 Gigabit Ethernet was going to be the networking fabric.


Unified Fabric Basics.

If you are reasonably new to Data Centres, you may not remember that Cisco had a moderately successful InfiniBand product, and in 2005 it was the centrepiece of Cisco’s marketing strategy for a Unified Fabric for connectivity in the Data Centre. As an aggregator of connections from the server down to an InfiniBand backbone, it was the Unified Fabric that was going to unite the Data Centre for Grid Computing (the precursor of what is now virtualization).


Unified Fabric Server Clusters

And you can see a case study of a pure InfiniBand backbone for a large-scale server cluster in this slide (from the same presentation).


So what changed?

So why didn’t Cisco continue with InfiniBand? I think it’s worth looking back at how we got FCoE instead and looking for lessons to be learned. I can only speculate on what happened.

Recap on Key InfiniBand Features

  1. InfiniBand is a high-speed, low-latency technology used to interconnect servers, storage and networks within the data centre.
  2. Standards based: the InfiniBand Trade Association (http://www.infinibandta.org) has been working successfully for more than ten years.
  3. Scalable interconnect speeds in multiples of 2.5 Gb/s; products are currently shipping at 40 Gb/s, and 120 Gb/s products have been announced.
  4. Low latency networking with delays around 20 microseconds (end-to-end), one thousand times less than the 20 milliseconds of a data centre Ethernet network.
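
The figures in points 3 and 4 fall out of InfiniBand’s lane-based design: a link aggregates 1, 4 or 12 lanes, and each generation doubles the per-lane rate. A minimal sketch of the arithmetic (the per-lane rates are the commonly quoted public figures, not taken from this article):

```python
# Commonly quoted per-lane signalling rates, in Gb/s (assumed figures,
# not stated in the article): each generation doubles the 2.5 Gb/s base.
LANE_RATE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}

def link_rate(generation: str, lanes: int) -> float:
    """Raw signalling rate of an InfiniBand link in Gb/s.

    Links aggregate 1, 4 or 12 lanes, so every link speed is a
    multiple of the 2.5 Gb/s SDR lane rate.
    """
    return LANE_RATE_GBPS[generation] * lanes

print(link_rate("QDR", 4))   # 40.0  -- the 40 Gb/s products shipping today
print(link_rate("QDR", 12))  # 120.0 -- the announced 120 Gb/s products

# The latency comparison in point 4: 20 ms vs 20 us is a factor of 1000.
ethernet_us = 20_000   # 20 milliseconds, expressed in microseconds
infiniband_us = 20
print(ethernet_us / infiniband_us)  # 1000.0
```

A 4x QDR link gives the 40 Gb/s shipping today; a 12x QDR link gives the announced 120 Gb/s.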

I also find it interesting that the silicon from QLogic and Mellanox has rapidly been repurposed for FCoE HBAs. There must be strong similarities in these products for this to happen.

Ethernet is easy to sell

People will buy and use what they know. In that sense, faster or ‘newer’ Ethernet is the easier decision. Offering customers what they perceive as a simple upgrade path is a much easier sale than one that requires teaching them new technologies. People tend towards laziness, and that is a probable cause.

Cheap always wins

Ethernet is cheap and not very good at a lot of things, but there is a lot of it about and plenty of existing technology. And don’t forget skills: lots of people understand Ethernet, and that has a price tag.

And history shows that cheap always wins.

Maybe Cisco wasn’t dominant

Cisco has a policy of being number one or two in any market. If they can’t do that, they will walk away or do something to kill that market. For example, in the early days of IPSec VPN, Cisco didn’t have a good story. They bought two or three companies before converging the code in the Cisco PIX (and later the ASA). They grew to number one by charging no license fee for their solution, thus blocking all their competitors from making a profit. The same strategy appears to be playing out today with SSL VPN, which now carries only a small license fee for most customers.

If Cisco couldn’t dominate the InfiniBand market, then perhaps Cisco spun out Nuova Systems (the startup that built FCoE and the silicon for the Nexus family) to give it a shot at collapsing that market.

FCoE cheaper but not better?

If you spend some time with InfiniBand, you realise that FCoE isn’t a world-beating technology innovation. FCoE is the VHS to InfiniBand’s Betamax. And you can see that many of the ideas the FCoE Unified Fabric promotes are the same as those promoted in the past. In that sense, FCoE isn’t new, just a rehash of old ideas mashed onto existing technologies.

Now, I’m sure that the FCoE and Cisco UCS strategy is working just fine for customers and it’s going to do well in the market. Given that Cisco has spent anything up to US$1 billion buying, manufacturing and marketing the product, it’s a guaranteed success. But it’s worth looking back on older technologies with regret, and cynically viewing these new developments to determine whether they are really the best solutions for our networks.

So far, I’m not so sure. We could have had better, but FCoE will probably work just fine for a while, until we need to scale and get faster than Ethernet can ever go. Maybe Ethernet will work for us in the future, but InfiniBand will still be there, waiting for us to reuse it. Just like all the other technologies that we keep reusing.

  • Interesting Perspective

    I think the most interesting thing in your article is the correlation between the two. Great way to bridge the gap.

    Reality is that FCoE is a great saving for companies. 10 GigE is getting cheaper but real estate in the DC isn’t. If I can reduce the number of switches I own by combining FC and Ethernet, great: less power, less cooling, less floor space = less $$$$$$.

    For myself, as an owner of several data centers, I want to get more computing power into every square inch of space; by merging the SAN and LAN into one switch, I can do that. And for that I say “Thanks Cisco, now get me native FC in a Nexus 7k damn it.”

    • http://etherealmind.com Greg Ferro

      What’s also interesting is that the successor to 10 Gigabit Ethernet isn’t moving very fast. The IEEE is still arguing over whether to have 40 Gb/s or 100 Gb/s Ethernet (which looks nothing like Ethernet at all, but that’s another matter) as part of its standards process. Although early-stage testing has been done on both, the IEEE is still nowhere on standards.

      Compare this with InfiniBand at 40 Gb/s today, and 120 Gb/s around the corner, with real, validated, in-use products.

      Sure, today’s servers can’t push 10 Gb/s off the bus, but pretty soon they will, and then some of the benefits of convergence will have been lost.

  • http://nigelpoulton.com Nigel Poulton

    Hi Greg,

    Interesting article. I see you are softening to the fact that FCoE actually does bring something to the modern data centre. Not perfect by a long shot, but certainly brings some value.

    As for being a rehash of old technologies and ideas….. I’m a firm believer that there are no new ideas, just old ideas reborn on faster and cheaper hardware. Cycles.

    On a similar point, I would venture that all technologies are interim and will eventually be superseded. Dare I say even IP will one day be superseded. But not by FCP 😉


  • Brice Goglin

    “Low latency networking with delays around 20 microseconds (end-to-end) is one thousand times less than 20 milliseconds for a data centre ethernet network.”

    I don’t know what you did with your Ethernet to get 20 milliseconds, but mine is about 50 microseconds and even 5-10 if I tune it properly.

    • http://etherealmind.com Greg Ferro

      Using ICMP isn’t reliable at that resolution, so you would need to support that statement with your measurement methods.

      Also, InfiniBand has a latency of less than 200 nanoseconds across a single switch; I increased the figure to allow for a typical configuration with a few adapters and switches. Ethernet cannot come close to that.

      BTW, lots of good information on your company’s website. Must remember to keep checking for new information.

      • Kevin

        ICMP may not have the resolution, but we get less than 1 ms across any portion of our DCs using IxChariot, which is considerably less than 20 ms. I agree that Ethernet isn’t as speedy as InfiniBand, but you are overstating the difference by over 2000%.

  • Brian

    Fact: 40 Gb/s InfiniBand costs less than 10 GigE today. Both continue to go down in price as competition heats up.

    Here is an example…

    • http://etherealmind.com Greg Ferro

      Yes. But convincing server and storage people to use new technology is almost impossible. They are so wedded to legacy concepts that it’s a wonder they get anything done.

      • Brian

        Check the Oracle web site – they are using InfiniBand. I don’t think you need to convince anyone any more; IB is going to places where only Ethernet went until now.

        • http://etherealmind.com Greg Ferro

          I hope so.

  • http://fcoe.ru Adam

    As I understand it, InfiniBand is not strictly a data transfer protocol; it sends data over IP with an MTU of 1492–2048 bytes. That is Cisco’s division…
    Brocade invented a new protocol for data transfer, FCoE, Fibre Channel over Ethernet.
    So FCoE was first implemented amid the battle between Brocade and Cisco, and the first implementations and ASICs are still bad.
    I publish new developments on my site at http://fcoe.ru. Welcome!
    In addition, the FC market is Brocade’s :-)