Backplane Ethernet – the GBaseK Standard

A lesser-known standard is Backplane Ethernet. I wasn’t aware of it until I was researching Notes on Cables and Connectors for 40 and 100 Gigabit Ethernet. I spent some time over the weekend scratching an itch to have a look at it and why it exists. These are scratch notes and observations from a research session, not intended to be a canonical investigation. Interesting, though, because it’s valuable to understand that many network products are functionally all the same – only the software and the people are different.

Why?

There is a requirement for equipment manufacturers to be able to use off-the-shelf or merchant silicon to build high-speed networking backplanes for servers – in other words, something that looks similar to HP Virtual Connect but using Ethernet. (From an IEEE document for the Standard PAR)

  • A lot of blade servers need to move Ethernet across an internal backplane, i.e. between servers. Commodity Ethernet PHY layers are designed for cables and fibre, not backplane traces, so having a predefined standard means access to 10/40/100Gb commodity silicon that can operate reliably over internal backplanes.
  • A lot of server manufacturers do not have the resources to develop and build their own backplane technology, and access to off-the-shelf chipsets is desirable.

Points of Interest

  • Supports single-lane and multi-lane operation to match the 10/40/100GBase SERDES chipsets (see the lane-rate arithmetic sketch below).
  • Defined in 802.3ap (with 40/100 Gigabit backplane PHYs added in later amendments) and known as 10GBaseK, 40GBaseK, 100GBaseK in most documentation, where 10GBase-KX4 describes a four-lane PHY, 10GBase-KR a single-lane 10Gb PHY, and 1000Base-KX a single-lane gigabit PHY.
  • Options exist for auto-negotiation between KX, KX4 and KR.
  • IEEE did not specify a backplane design or physical characteristics (unlike the cabling specs for 1000BaseSX, 10GBaseLR etc.)
  • However, SERDES specifications drive backplane design in viable directions.
  • Backplane length varies according to the SERDES chosen. Molex documents suggest that lengths of 1 metre at 10GBase speeds are possible today.
  • Molex publishes specifications on the backplane connectors here with images of various reference backplane designs. The following photos show some similarity to the connectors on a Catalyst 6500 backplane.

10GbaseKR Backplane Connectors - MOLEX
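
To make the lane arithmetic above concrete, here is a minimal Python sketch that derives the effective data rate of each backplane PHY from its lane count, per-lane signalling rate and line coding. The per-lane baud rates and coding schemes are the commonly quoted figures for these PHYs rather than anything taken from the documents above.

```python
# Rough lane-rate arithmetic for the 802.3ap-family backplane PHYs.
# Figures are the commonly quoted per-lane signalling rates and line codings.

PHYS = {
    #  name            lanes  GBd/lane  coding efficiency
    "1000Base-KX": (1,  1.25,    8 / 10),   # 8b/10b, single lane
    "10GBase-KX4": (4,  3.125,   8 / 10),   # 8b/10b, 4 x 2.5 Gb/s
    "10GBase-KR":  (1, 10.3125, 64 / 66),   # 64b/66b, single serial lane
    "40GBase-KR4": (4, 10.3125, 64 / 66),   # 64b/66b, 4 x 10 Gb/s
}

for name, (lanes, baud, coding) in PHYS.items():
    data_rate = lanes * baud * coding  # Gb/s of usable Ethernet data
    print(f"{name:12s}: {lanes} lane(s) x {baud} GBd x {coding:.3f} "
          f"= {data_rate:.1f} Gb/s")
```

The point is simply that KX4 and KR arrive at the same 10 Gb/s by very different means – four slow lanes versus one fast serial lane – which is what drives the backplane trace design and reach figures mentioned above.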

  • Wikipedia – 100 Gigabit Ethernet talks about the 40GBase-KR4 four-lane 40 Gigabit Ethernet standard being designed for a 1 metre backplane.
  • Mellanox has a single-chip technology that combines support for CX4 / KR / SFI / XFI PHY interfaces. (Interpretation: SFI / XFI appear to be the electrical interfaces to SFP+ / XFP optical modules, i.e. the path to fibre-optic PHYs, but I’m not entirely sure.)

10GBASE-KR serial technology is becoming the popular PHY technology on the backplanes both on the blade server chassis as well as on the ATCA chassis. ConnectX EN integrates KR and provides a highly integrated, cost optimized and power optimized solution to both Blade server and ATCA applications.

Reference: Mellanox ConnectX 10GbE Whitepaper – PDF

  • Also note that the Mellanox whitepaper talks about full support for DCB, including PFC, ETS, 802.1Qau and DCBX, in the same chip. Logically, the chip could be used to provide full DCB support for a server backplane, including SR-IOV and FCoE (a rough ETS bandwidth-share sketch follows this list).
  • Some details here about a company in India that offers verification of 10GBase-KR silicon designs in Verilog.
  • Vitesse has some product information on their 10GBase chipsets.
  • Simclar shows a chassis and offers services that provide a chassis-based capability including the case, power supplies, thermal design and a 10GBaseKX backplane. Fascinating.
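
As a rough illustration of what ETS brings to a converged backplane link (the bandwidth-share part of the DCB feature set mentioned above), here is a small Python sketch of weighted bandwidth allocation across traffic classes on a 10 Gb/s link. The class names, weights and offered loads are invented for the example; the redistribution of unused bandwidth is the general idea behind ETS, not a model of any particular chip.

```python
# Illustrative ETS-style bandwidth sharing on a single 10 Gb/s backplane link.
# Traffic classes, weights and offered loads are invented for the example.

LINK_GBPS = 10.0

# (traffic class, ETS weight %, offered load in Gb/s)
CLASSES = [
    ("FCoE storage", 40, 3.0),
    ("LAN traffic",  50, 6.5),
    ("Management",   10, 0.2),
]

def ets_share(link, classes):
    """Give each class its weighted share, then hand spare capacity
    to classes that still have unserved traffic (work-conserving)."""
    alloc = {}
    spare = 0.0
    for name, weight, offered in classes:
        guaranteed = link * weight / 100.0
        used = min(offered, guaranteed)
        alloc[name] = used
        spare += guaranteed - used
    # Redistribute spare bandwidth to classes that still want more.
    for name, weight, offered in classes:
        want = offered - alloc[name]
        if want > 0 and spare > 0:
            extra = min(want, spare)
            alloc[name] += extra
            spare -= extra
    return alloc

for name, gbps in ets_share(LINK_GBPS, CLASSES).items():
    print(f"{name:13s}: {gbps:.1f} Gb/s")
```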


  • There are suggestions that many network appliance vendors use Backplane Ethernet to build their chassis. I’d be guessing, but companies that build low-cost blade-based solutions such as Fortinet, Crossbeam (??) and most DPI vendors such as Procera would be using off-the-shelf network processors on their line cards and Backplane Ethernet for connectivity.
  • I’m guessing you could use the same technology for switching, but it wouldn’t have much performance unless you had a backplane with a lot of 10GbE channels on it (see the capacity arithmetic below). At high density this would have a lot of crosstalk and would be challenging to design. Still, easier than designing your own chips from the ground up.
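
A quick back-of-the-envelope sketch (in Python, with invented numbers) of why switch-like performance needs a lot of 10GbE backplane channels: with a full mesh between line cards, the number of backplane links grows quadratically with slot count, which is where the routing density and crosstalk problems come from.

```python
# Back-of-the-envelope backplane capacity for a hypothetical chassis.
# Slot count and per-link speed are invented figures for illustration.

SLOTS = 8            # line-card slots in the chassis
LINK_GBPS = 10.0     # one 10GBase-KR channel per slot pair

# Full mesh: every slot has a direct backplane link to every other slot.
mesh_links = SLOTS * (SLOTS - 1) // 2
per_slot_bw = (SLOTS - 1) * LINK_GBPS     # bandwidth in/out of one slot
total_fabric_bw = mesh_links * LINK_GBPS  # aggregate backplane capacity

print(f"{SLOTS} slots, full mesh of 10GbE channels:")
print(f"  backplane links to route : {mesh_links}")
print(f"  bandwidth per slot       : {per_slot_bw:.0f} Gb/s")
print(f"  total fabric bandwidth   : {total_fabric_bw:.0f} Gb/s")
```

Doubling the slot count roughly quadruples the number of backplane channels to route, which is the crosstalk and layout problem hinted at above.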

Wrap Up

And that is probably what it’s about. Networking companies can buy merchant silicon to build line cards (using network processors), backplanes (using Backplane Ethernet) and standard Intel reference boards for device management and software. In a sense, although it’s more specialised, I’d speculate that building a chassis-based appliance means selecting a bunch of off-the-shelf components, testing and validating the built product, and writing some software to load into the NPs on the line cards – and you would have a network appliance.

Not easy, I’m sure. But it’s not like you are building everything from nothing. As Carl Sagan once said, “If you wish to make an apple pie from scratch, you must first invent the universe.” In this case, you have most of the ingredients available; you just need the money to buy them and make the pie.

  • Yumri

    Why would you need 100GBaseK? To my knowledge that is faster than the server internals would be, and if the server is not able to process the information then the internal parts of the server become the bottleneck instead of the network – which might be the goal – but if the bottleneck is the server, then what is the point of having the overly fast network?