Nerdgasm: Arista 100GbE Just Blew Up My Data Centre Design

Today Arista announced the availability of large-scale 100 Gigabit Ethernet for the 7500 chassis. It looks like a serious change in the way network hardware is priced, and it will change the way you look at network hardware.

Arista is announcing four new line cards for its existing 7500 chassis:

  • 12 x 100GbE
  • 36 x 40GbE
  • 48 x 10GbE + 2 x 100GbE
  • 40 x 10GbE with extra SDN hot sauce and new larger hot dog

[Image: Arista 100G Launch 1]

A 12-port 100GbE line card!! Shipping?

Let's add some more porn to this: Arista has chosen to deliver each 100GbE port as 12 x 10GbE lanes, and you can use a cable adapter to break a 100GbE port out into 3 x 40GbE or 12 x 10GbE ports. You might buy the 12 x 100GbE module, use 4 x 100GbE ports today in a leaf-spine architecture, break out 2 x 100GbE ports as 6 x 40GbE, and STILL have 6 more 100GbE ports available for future connections.
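
The port budget is easy to sanity check. Here is a back-of-the-envelope sketch in Python; the port counts are just the example from the paragraph above, and the 1 x 100GbE = 3 x 40GbE or 12 x 10GbE breakout ratios are as described in the briefing:

```python
# Back-of-the-envelope port budget for one 12 x 100GbE line card.
# Assumption: each physical 100GbE port can run as 1 x 100GbE,
# 3 x 40GbE or 12 x 10GbE via a breakout cable, as described above.

TOTAL_PORTS = 12

as_native_100g = 4      # leaf-spine uplinks running at 100GbE today
as_40g_breakout = 2     # each of these ports broken out into 3 x 40GbE

spare_100g_ports = TOTAL_PORTS - as_native_100g - as_40g_breakout
forty_gig_interfaces = as_40g_breakout * 3

print(f"{as_native_100g} x 100GbE in use, "
      f"{forty_gig_interfaces} x 40GbE in use, "
      f"{spare_100g_ports} x 100GbE ports still spare")
# -> 4 x 100GbE in use, 6 x 40GbE in use, 6 x 100GbE ports still spare
```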

Get the idea?

Works in Existing Chassis, Supervisor Upgrades Optional

The new line cards work in the existing 7500 chassis, and upgrades to the Supervisors are optional. The newer Supervisors have more CPU, memory and so on, and may be required for larger networks. Since a 12 x 100GbE module usually means a larger network, it is a good bet you will need them, but you don't have to.

But here is the most stunning part of Arista's 100GbE announcement:

[Image: Arista 100G Launch 3]

A 100GbE port normally uses an MPO MMF cable with 12 cores to provide 10 x 10Gbps channels (I wrote about this in “Futures Review on 40 and 100 Gigabit Ethernet” back in 2010).

But here the 100GbE SR lasers are built straight into the line cards, for a cost of $120K per card.
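
That number deserves a quick sanity check. A rough sketch of the per-port arithmetic, assuming the $120K list price buys the whole 12-port card with the optics included (chassis, supervisors and support are extra):

```python
# Rough per-port economics of the 12 x 100GbE card.
# Assumption: the $120K list price covers the complete card, optics included.
CARD_LIST_PRICE_USD = 120_000
PORTS_PER_CARD = 12
GBPS_PER_PORT = 100

per_port = CARD_LIST_PRICE_USD / PORTS_PER_CARD                     # 10,000
per_gbps = CARD_LIST_PRICE_USD / (PORTS_PER_CARD * GBPS_PER_PORT)   # 100

print(f"List price per 100GbE port: ${per_port:,.0f}")   # $10,000
print(f"List price per Gbps:        ${per_gbps:,.0f}")   # $100
```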

The EtherealMind View

Who needs QoS? Just get MOAR bandwidth and speed.

I mean, seriously, 100 GIGABIT Ethernet FOR TEN THOUSAND bucks a port at LIST PRICE. Holy CR*P.

Disclosure

I have nothing to disclose in this article. I was briefed as a blogger. I will host a sponsored podcast with Arista that discusses this product in more detail in the next few weeks, but that was organised separately from this.

My full disclosure statement is here

Comments

  • http://twitter.com/JohnMartinIT John Martin

    Damn, that’s impressive. Combine that with storage-class memory at less than a dollar per GiB and stupid numbers of ARM cores at similarly silly prices, and infrastructure planning gets almost boring. Our new limitations seem to be time and imagination.

    For the record… if you’re talking to their marketing folks, could you ask them to do something about the tagline? “Software Defined Cloud Networks”… really? Why not throw “For Agile Big Data as a Service” on to the end of it.

  • will

    Greg, thanks for this update. I’m going to take a look at the feature sets for this chassis now (MLAG, MPLS, VPN, Service Modules, Trill, and all the other goodies I’d hope to see in a core).

    I’m surprised you made no mention of whether custom or COTS silicon is used in these new line cards. Do you happen to know?

    Is this topic even still relevant?

    Frankly, before your posts on this topic over the past few years I did not realize it was an industry item to keep an eye on, and after reading WR Koss’s recent post on the same topic I’m even more confused.

    • http://etherealmind.com Etherealmind

      Arista uses merchant silicon in all their gear to date. Not that it matters, except that merchant silicon tends to produce more reliable systems compared to Cisco because of the focus on software quality.

      I do not care what silicon is inside as long as I have enough information to understand the internal architecture and design around the weaknesses and strengths of that system.

      I believe (but I'm not sure) that the key silicon is the Broadcom 88650 MAC chip that does the fancy fabric and PHY stuff – more details at http://www.broadcom.com/collateral/pb/88650-PB200-non-nda.pdf

  • DavidKlebanov

    Hi Greg,

    Yes, 100Gb is exciting when done right… In this case the distance is limited to 150m over OM4 cable, because the transceivers are built in, so you can’t use anything but short-reach cables. How feasible is it to have a 150m distance limit in your DC Spine/Leaf architecture? I haven’t seen many customers who would be happy with such a short distance limitation.

    What happens if one transceiver goes bad? Well, they are not field-replaceable, so you either live with a failed transceiver, meaning you have fewer ports on the card and your per-port price goes up, or you replace the entire line card. Replacing an entire line card because of one failed transceiver… thank you, but no thank you!

    Lastly, Arista implements CRC32 for the packet buffer space and not ECC. Without ECC, troubleshooting traffic loss resulting from buffer corruption is very difficult, and I am not even talking about how you would correct one… This is not specific to the 100Gb linecard, but applies to the entire new linecard line-up.

    Thank you for reading,
    David
    @DavidKlebanov

    P.S. I work for Cisco, not that it matters…

    • http://etherealmind.com Etherealmind

      That’s rubbish. 150m is more than enough for all use cases. Modern data centres will be based on 100m cable runs from core to distribution.

      You will need to do some more digging in the Competitive Response paper and come back with something better.

      I’ll listen when I see the pricing of the Nexus 6000 when (if?) it ships.

      • DavidKlebanov

        1. Although the 7500E can certainly be used as a Core box, my understanding is that topologically it is positioned to be in the Spine layer of a Spine-Leaf architecture. With most Data Centers deploying TOR server cabling, 150m will need to be enough to reach each individual rack where the TOR Leaf switches are deployed (if you want to build a 100Gb fabric). It might or might not be sufficient… I have seen plenty of examples where a 150m distance will not cut it.

        2. Nexus 6004 is shipping today. As an established professional, I bet you can find Nexus 6004 pricing, if you wanted to.

        Also a piece of advice, being rude and obnoxious might draw more spotlight to your blog and to you personally, but there is something to be said about courteous professional behavior and respect for people you interact with. Take it for what it’s worth for you…

        • http://etherealmind.com Etherealmind

          Don’t get all tetchy. I’m well known for being direct and blunt so if you want to get precious now you’ll need to examine your troll-like approach.

          And, frankly, I’m tired of your cheap shots at the competition based on your own self-serving perspective as a Cisco employee. You’ve consistently run me down on Twitter and in comments on my blog. If you want to maintain that sort of attitude, I’m going to get more blunt and more direct.

          Your advice, so far, is worthless and I’ll treat you with the respect that you have earned so far.

      • Broadwing

        (Yeah, I know, ancient thread, but…) The 7500E integrated optics are good out to 300m, making the arguments against them sillier. We’ll be using them across a data center, concentrating 54+ interconnects into six ports, making cabling so much cleaner.

        And the Nexus 7700 lost to it during our eval on price, maturity and capability.

  • Nick Buraglio

    @DavidKlebanov

    I’d be comfortable deploying that even as part of a redundant campus core; the product is solid and performs well. They’re positioned for the DC, but that is simply marketing. The equipment is quite solid and reliable as well as very feature-rich, especially given that it is merchant silicon and that Arista is, in the grand scheme of things, a relatively new company.

    However, the tech is really irrelevant. The fact of the matter is that there is a large amount of *fantastic* competition in the networking space for the first time in a very long time, which gives operators choice, which in turn drives innovation and leads to better products.

    For far too long the large incumbent networking vendors have driven what we can do, how we do it and how much we pay, and that is now changing.

  • Juan Olivos Cordova

    I’m really dumb, but can someone tell me, in really easy English, how do I get 100Gb per sec? I wanted terabits, but Wikipedia said that will happen in a few years.