Intel: Arista uses our Merchant Silicon

Intel is crowing about their silicon being used in Arista’s 7124SX switch. Note that this is Intel’s switching silicon (from their purchase of Fulcrum Microsystems), not an FPGA. At Network Field Day, Arista was at pains to avoid discussing their merchant silicon vendors, claiming that they choose the best chipset available at any given time.

In short, the silicon doesn’t matter when you are a software vendor. That flies directly in the face of Cisco’s “we are a software company, not a hardware company” messaging, except when making their own silicon is a “core value”.

“Intel Ethernet switching technology is at the heart of this new design. Not only do these 10GbE switches provide extremely low latency, they can maintain this latency while providing advanced L2-L4 forwarding features along with features such as data center bridging (DCB) and server virtualization support.”

Application Switch Brings Processing Into the Network

  • Ryan Malayter (http://twitter.com/tatersolid)

    Of course, Arista uses BRCM silicon (Trident+) in their 7050 series of switches as well. Just like a server vendor sells both Intel and AMD servers.

    The BRCM Trident+ is currently the king of the merchant silicon heap (64x10GbE or up to 16x40GbE), but INTC’s new FM6000 (which you can find on their site via Google, though the product pages are clearly still in progress) looks to be interesting at 72x10GbE or 18x40GbE.

  • OmarSultan

    So, I think we actually like to think of ourselves as a “systems” company with solutions that draw on both hardware and software. You may, of course, choose to disagree. :)  At the end of the day, we think our customers benefit when we keep our options open on what silicon we use. If you look at the ToR space as an example, using our own ASICs allowed us to lead the market in terms of features and density, but as we have noted before, we will look at other options when it makes sense.

    Regards,

    Omar (@omarsultan)
    Cisco

    • Etherealmind (http://etherealmind.com)

      That’s true, and it is what Cisco is currently delivering. The problem is the mixed messaging that results: Cisco regularly claims to be a software company but acts more like a hardware company. Many people (myself included) are confused by this.

      • OmarSultan

        That’s why you have me to harass :)

        • Etherealmind (http://etherealmind.com)

          I’ll try to be nice about it :)

    • Guest

      Omar, you really are a trooper, repeating your employer’s party line… You are the most loyal deck-chair re-arranger on the Titanic…

      Can you please tell us about the most recent ToR that, in your opinion, “[used your] own ASICs to [allow you] to lead the market in terms of features and density”?

      • OmarSultan

        @Guest:
        Glad you asked. So, off the top of my head, with the Nexus 5500 (which is what I had in mind), we have 96 ports in a 2RU form factor, all of those ports can be unified if you want (supporting 10GbE, FCoE and FC), we have support for FEX (802.1BR), and we have support for FabricPath/TRILL in the chassis.

        Regards,

        Omar

        • Guest

          I would quibble with the “density” claim, if you really mean “density” (recall that “density” means “X per Y”, so “ports per rack-unit”), not port “count”. 96 ports in 2RU is 48 ports/RU. Any Trident+ based box (including your own Nexus 3064) gets 64 ports into 1RU, which, by my math, is 33% higher port density.

          And all those proprietary features come at a huge power cost: the 5596 draws 7W/port “typical” (11W max), whereas the best Trident+ boxes draw 2W/port “typical” (3.5W max). Have you ever held your hand behind a Nexus 5k at idle? It’s like a space heater. :) (The arithmetic is sketched after this thread.)

    • Anon23

      As someone who works in HFT, I can honestly say that Cisco’s current offerings have no place in a latency-sensitive trading path. Nor could I justify the exorbitant port cost for a 10G datacenter deployment, come to think of it.
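
For reference, here is a minimal sketch of the density and power arithmetic from the thread above. All figures are as quoted in the comments (vendor “typical” draws; not independently measured), and the switch labels are just descriptions, not official product names:

```python
# Sanity-check the ports-per-RU and watts-per-port claims from the comments.
# Figures are as quoted in the thread, not independently verified.

switches = {
    "Nexus 5596 (96 ports, 2RU)": {
        "ports": 96, "rack_units": 2, "watts_per_port_typical": 7.0,
    },
    "Trident+ 1RU box (64 ports)": {
        "ports": 64, "rack_units": 1, "watts_per_port_typical": 2.0,
    },
}

for name, s in switches.items():
    density = s["ports"] / s["rack_units"]               # ports per rack unit
    total_w = s["watts_per_port_typical"] * s["ports"]   # typical chassis draw
    print(f"{name}: {density:.0f} ports/RU, ~{total_w:.0f}W typical")

# Density delta: 64 ports/RU vs 48 ports/RU -> 64/48 - 1 = 33% more per RU.
print(f"Density advantage of the 1RU box: {64 / 48 - 1:.0%}")
```

Run as written, this prints 48 vs 64 ports/RU and a 33% density advantage for the 1RU Trident+ box, which matches the commenter’s math.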