Stumbled over “AgilePorts” feature in Arista products this week:
Arista’s AgilePorts technology enables the combination of four 10GbE SFP+ interfaces into a single 40GbE interface, leveraging the parallel lane technology present in the 40GBASE-CR4 and 40GBASE-SR4 standards. With AgilePorts, each 10GbE interface emulates one of the four parallel lanes, which are then driven by a 40GbE scheduler. Traditional Ethernet port-channel/link-aggregation techniques hash traffic over links on a per-flow basis using Layer 2/3/4 information, potentially resulting in uneven utilization between links. AgilePorts instead leverages 40GbE’s bit striping to ensure perfectly even load distribution across all four 10GbE lanes, achieving true 40Gbps line rate rather than a theoretical maximum based on per-flow hashes.
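To make the difference concrete, here is a minimal Python sketch (my illustration, not Arista’s implementation) contrasting per-flow hash-based link selection, which can leave links unevenly loaded, with bit striping, which splits every frame evenly across all four lanes:

```python
import hashlib

LINKS = 4  # four 10GbE lanes/members

def hash_link(flow):
    """Pick a member link from a 5-tuple, the way a LAG hash would."""
    digest = hashlib.md5(repr(flow).encode()).digest()
    return digest[0] % LINKS

# Ten hypothetical flows, each worth one unit of traffic:
flows = [("10.0.0.%d" % i, "10.1.0.1", 6, 1024 + i, 80) for i in range(10)]

# Per-flow hashing: depending on how the flows hash, some links can
# carry far more traffic than others, capping any one flow at 10Gbps.
hash_load = [0] * LINKS
for f in flows:
    hash_load[hash_link(f)] += 1

# Bit striping: every frame is spread across all lanes, so each lane
# carries exactly 1/4 of the bits regardless of the flow mix.
stripe_load = [len(flows) / LINKS] * LINKS  # perfectly even
```

The sketch is deliberately simplified, but it shows why a hash-based bundle only achieves 40Gbps in theory while a striped bundle achieves it by construction.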
I know that 40GbE runs 4 x 10Gbps lanes on a single cable, but I didn’t know it was possible to repurpose the multiplexing capability in this way by allocating each 10GbE lane to a separate cable.
So the end result looks something like this when connecting to a DWDM solution that uses 10GbE lambdas.
Repurposing the Features
If you are building an ECMP network, you might have a situation where you want 240Gbps between spine and leaf, but your Arista 7150S switch only has 4 x 40GbE ports.
Arista can repurpose AgilePorts to give a total of 6 x 40GbE, or 240Gbps of uplink: the four native 40GbE ports plus two more 40GbE interfaces built from 8 x 10GbE SFP+ ports.
Allocating 8 x 10GbE ports to uplinks is necessary when building ECMP network designs at low contention ratios. In this design, 40 x 10GbE ports remain available for servers, which gives 400:240 = 1.66:1 oversubscription from edge to uplink.
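As a quick sanity check, the arithmetic behind that ratio (port counts are taken from the design above; the exact leaf model is my assumption):

```python
# Bandwidth budget for the leaf described above.
edge_bw = 40 * 10          # Gbps: 40 x 10GbE server-facing ports
uplink_bw = (4 + 2) * 40   # Gbps: 4 native 40GbE + 2 AgilePorts (8 x 10GbE)

oversub = edge_bw / uplink_bw
print(f"{oversub:.2f}:1")  # prints 1.67:1 (the 1.66 above is the truncated value)
```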
In an ECMP network, this has less use than you might think. Of course, it also assumes a spine switch that has both 40GbE and 10GbE ports, like the Arista 7500 chassis. And it doesn’t necessarily prevent the use of low-cost 1RU switches like the 7050Q, because 40GbE can use QSFP breakouts.
It should also be noted that a 1:1 lossless fabric could be created by allocating/connecting only 24 x 10GbE ports at the access layer. This would reduce or remove the need to implement QoS, because there could never be congestion on the interfaces in the path.
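The 1:1 figure follows from the same uplink budget — 24 access ports at 10Gbps exactly matches the 240Gbps of uplink:

```python
uplink_bw = 6 * 40    # Gbps: the 6 x 40GbE uplinks described earlier
access_bw = 24 * 10   # Gbps: only 24 x 10GbE edge ports connected

# Edge bandwidth equals uplink bandwidth, so the fabric cannot be
# oversubscribed and congestion cannot occur on the path.
assert access_bw == uplink_bw
```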
The EtherealMind View
I would admit that bonding 4 x 10GbE DWDM connections into a single 40GbE Ethernet link for Data Centre Interconnect isn’t something that everyone needs to deliver, but when you do, this should be on your list of possible options. In the future, 10GbE WAN connections in data centres will become more common as optical networking equipment gets cheaper. There is also a trend for the optical edge to act as routers and MPLS PEs, which means you might be able to avoid paying big dollars for a DC-class router.
Repurposing the technology for ECMP spine connections is a neat trick for what was clearly intended to be a Data Centre Interconnect technology.
Read more in the Arista Technical Bulletin PDF file – AgilePorts over DWDM for long distance 40GbE