Merchant Silicon Evolution, 40GbE arriving in 2015 & Impact on Data Centre Design

I’ve been reading a presentation from Sharkfest 2012 in which engineers from Microsoft present Demon – Microsoft’s Datacenter-scale Distributed Ethernet Monitoring appliance. The whole presentation is interesting, but this particular slide caught my attention:


This suggests that merchant silicon (from companies like Intel, Broadcom and Marvell) will reach 256 ports of 10GbE per chip by 2015. Now, it’s really hard to get hold of a roadmap for merchant silicon, as it’s tightly controlled and secretive, so this is the best I can find on merchant silicon futures.

What does this mean? Existing top-of-rack switches with 48 x 10GbE ports actually have 64 x 10GbE ports on the silicon; the other 16 x 10GbE ports are usually exposed as 4 x 40GbE uplinks.

These are likely to be replaced by 256 x 10GbE silicon, most likely configured as 64 x 40GbE. Each 40GbE interface can operate as either 4 x 10GbE or 1 x 40GbE because the 40GbE PHY is built from four 10Gb lanes.
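The port arithmetic above can be sketched in a few lines. The lane counts come from the article; the helper function and its name are just an illustrative device, not anything from a vendor datasheet.

```python
LANES_PER_40G = 4  # a 40GbE PHY is built from four 10Gb lanes

def layouts(total_10g_lanes, server_10g_ports):
    """Return (max 40GbE ports, 40GbE uplinks left after server-facing 10GbE)."""
    all_40g = total_10g_lanes // LANES_PER_40G
    uplinks = (total_10g_lanes - server_10g_ports) // LANES_PER_40G
    return all_40g, uplinks

# Today's 64-lane silicon: 48 x 10GbE server ports leave 4 x 40GbE uplinks
print(layouts(64, 48))    # (16, 4)

# 256-lane silicon: 64 x 40GbE in total, or 192 x 10GbE plus 16 x 40GbE uplinks
print(layouts(256, 192))  # (64, 16)
```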

In my opinion, a QSFP is highly likely to be cheaper than four 10GBase-SR SFP+ modules. Although efforts to develop short-range transceivers for top of rack are under way, I’m doubtful they will get much traction in the market.

The EtherealMind View

I figure that the following ideas become a bit clearer.

  1. 40 GbE will arrive for Top of Rack solutions in 2016
  2. Switches in the campus backbone and aggregation layers should be ready for replacement / upgrading in 2016 to support 40GbE
  3. Do not install any new cabling in your data centre or campus backbone without planning for 40GbE. 40GbE uses 8 fibre cores for multimode and one pair for single mode. The cable should be OM4, although OM3 will work over shorter distances. Provision the least amount of cable you can until new cabling solutions arrive.
  4. Spending money on expensive 10GbE switches will be wasted, as they are likely to be replaced in 2016 with 40GbE. Most server people are already deploying or asking for 4 x 10GbE per chassis, and it will probably be cheaper to use a 40GbE QSFP than four 10Gig SFP+ modules in two to three years’ time.
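Point 4 is a simple break-even comparison: one 40GbE QSFP against four 10GbE SFP+ modules. A minimal sketch, where the prices are hypothetical placeholders for illustration, not vendor quotes:

```python
def qsfp_wins(sfp_plus_price, qsfp_price):
    """True when one 40GbE QSFP costs less than four 10GbE SFP+ modules."""
    return qsfp_price < 4 * sfp_plus_price

# Illustrative only: at $300 per SFP+, any QSFP under $1200 wins on optics alone
print(qsfp_wins(300, 1000))  # True
print(qsfp_wins(300, 1500))  # False
```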

Sharkfest ’12 – 

I have nothing to disclose in this article. My full disclosure statement is here

  • Howard Marks

    While it will probably be cheaper to use 1 QSFP than 4 SFP+ connections, especially with Silicon Photonics or other CWDM technologies that can use one pair of fibres and multiple lambdas, the server guys will still want connections to two separate switches, making 4 SFP+s preferable. Frankly, I think the server guys asking for 4x10Gbps are delusional, as it’s a rare server that drives more than 10Gbps of traffic.

    • Etherealmind

      Agree – each server chassis will likely have 2 x 40GbE connections for redundancy in active standby for simplicity.

  • Vaibhav Katkade

    Will that mean a race to the bottom on cost for higher-speed, higher-density switches? And will the differentiation come from architecture/latency/buffering/services/scale?

    • Etherealmind

      It’s hard to be sure, but my current view is that we need to be replacing network equipment every three years (as we do with servers). To support this, network equipment must be half the price of what we have today.

      The value prop/differentiation will be delivered by the controller and the applications on the controller – not by the software on the device.

      Hardware build quality remains important, and the firmware becomes less valuable, though it remains vital.