Is this the year of 10 Gigabit Ethernet in the LAN?

10Gb Ethernet will provide more bandwidth and speed for networking, but it hasn’t really grown the way that vendors expected. In my experience, 10GbE has some real problems that mean it will grow gradually and organically, rather than force a new round of investment in networking.

The Problems with 10Gb Ethernet


If you are using copper patch leads for 10GbE, you are going to need a lot of power. Standard 10GBASE-T copper can draw up to 45W per port (although 10GBASE-CX4 apparently uses about 4.5W per port).
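Those per-port figures compound quickly across a chassis. A minimal sketch of the arithmetic, assuming a hypothetical fully populated 48-port switch and using only the per-port figures quoted above:

```python
# Rough power-draw comparison for a hypothetical fully populated
# 48-port switch, using the per-port figures quoted above
# (45 W worst case for early 10GBASE-T, 4.5 W for 10GBASE-CX4).
# Illustrative arithmetic only, not vendor specifications.

PORTS = 48
WATTS_10GBASET = 45.0   # 10GBASE-T copper, worst-case figure from the text
WATTS_CX4 = 4.5         # 10GBASE-CX4 figure from the text

for name, watts in [("10GBASE-T", WATTS_10GBASET), ("10GBASE-CX4", WATTS_CX4)]:
    total_w = PORTS * watts
    # kWh per year if the switch runs 24x7
    kwh_year = total_w * 24 * 365 / 1000
    print(f"{name}: {total_w:.0f} W total, ~{kwh_year:.0f} kWh/year")
```

At those numbers a single copper line card is drawing kilowatts before you account for cooling, which is why the power problem shows up so early in 10GbE planning.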

The IEEE is working on Energy Efficient Ethernet (802.3az) technology that will allow links to auto-negotiate down to lower speeds or go to “sleep” during periods of inactivity, which will further reduce power consumption.


10GBASE-T copper uses 650 MHz of frequency spectrum and needs high-quality cabling to work reliably. This means that you need to properly test your existing cabling or replace it with Cat6A or better. With Cat6A (or even Cat6) you can run up to 100 metres, but the cable is physically much larger and you may not have the space in your computer room. With Cat5 or Cat5e the distance is much shorter depending on the quality of your cable, typically less than 40 metres, and the run would probably need testing for assured reliability.

10GBASE-SR uses multimode cabling, but the supported distance varies with the type of cabling, and certain combinations will require mode-conditioning patch leads.

The current list of 10GbE XENPAK / X2 interfaces from Cisco shows the confusion that the different types of cabling cause. For example, consider the following table showing the LAN options (I’ve removed the WAN units) and the variation in cabling types:

X2 Product ID / XENPAK Product ID (Transceiver Type, Wavelength, IEEE Standard): Maximum Distance / Cable Type

X2-10GB-LRM / XENPAK-10GB-LRM (10GBASE-LRM, 1310 nm serial, 802.3aq):
    220 m over multimode fiber

X2-10GB-SR / XENPAK-10GB-SR (10GBASE-SR, 850 nm serial, 802.3ae):
    26 m over 62.5-micron FDDI-grade multimode fiber
    33 m over 62.5-micron 200 MHz·km multimode fiber
    66 m over 50-micron 400 MHz·km multimode fiber
    82 m over 50-micron 500 MHz·km multimode fiber
    300 m over 50-micron 2000 MHz·km multimode fiber

X2-10GB-LR / XENPAK-10GB-LR+ (10GBASE-LR, 1310 nm serial, 802.3ae):
    10 km over single-mode fiber

X2-10GB-ER / XENPAK-10GB-ER+ (10GBASE-ER, 1550 nm serial, 802.3ae):
    40 km over single-mode fiber

X2-10GB-LX4 / XENPAK-10GB-LX4 (10GBASE-LX4, WWDM 1310 nm, 802.3ae):
    300 m over 62.5-micron FDDI-grade multimode fiber
    240 m over 50-micron 400 MHz·km multimode fiber
    300 m over 50-micron 500 MHz·km multimode fiber

X2-10GB-CX4 / XENPAK-10GB-CX4 (10GBASE-CX4, copper, 802.3ak):
    15 m over 8-pair 100-ohm InfiniBand cable
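The reach figures above can be reduced to a small lookup, which is roughly how you would sanity-check optic choices for a given cable run. This is a sketch: the fibre labels are simplified strings (not formal OM grades), and the distances are the rated figures from the table:

```python
# Sketch: encode the rated reach figures from the table above so a
# script can suggest which 10GbE PHY covers a given cable run.
# Distances in metres; fibre keys are simplified illustrative labels.

REACH = {
    "10GBASE-LRM": {"MMF": 220},
    "10GBASE-SR": {
        "62.5um FDDI-grade MMF": 26,
        "62.5um 200MHz-km MMF": 33,
        "50um 400MHz-km MMF": 66,
        "50um 500MHz-km MMF": 82,
        "50um 2000MHz-km MMF": 300,
    },
    "10GBASE-LR": {"SMF": 10_000},
    "10GBASE-ER": {"SMF": 40_000},
    "10GBASE-LX4": {
        "62.5um FDDI-grade MMF": 300,
        "50um 400MHz-km MMF": 240,
        "50um 500MHz-km MMF": 300,
    },
    "10GBASE-CX4": {"InfiniBand copper": 15},
}

def optics_for(distance_m, fiber):
    """Return the PHY types whose rated reach covers distance_m on fiber."""
    return sorted(
        phy for phy, runs in REACH.items()
        if runs.get(fiber, 0) >= distance_m
    )

# 200 m on legacy FDDI-grade fibre: only LX4 is rated for that run
print(optics_for(200, "62.5um FDDI-grade MMF"))
```

The point of the exercise is that the answer changes with every combination of distance and installed fibre grade, which is exactly the confusion the table illustrates.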

The impact of cabling

In a recent project to plan a refit of an existing data centre, 10GbE cabling was a major problem. Because of constraints in change control and risk management, we eventually decided to use 1Gb Ethernet: the time needed to get long enough change windows exceeded the length of the project.

And in other projects, the cost of recabling the fibre optics to meet the new requirements of 10GbE was prohibitive for smaller works. That is, we couldn’t just add a “patch of green” to an existing facility and extend the new switch as funds became available.

Which is weird, because it reminds me of the Token Ring / FDDI / Ethernet wars back in 1995 or so.

High Cost

If you take the time to build budgetary pricing around a Cisco Nexus 7000, you will quickly realise that Cisco’s 10GbE-capable switch is really expensive. I found that a typically configured Nexus 7018 with a good number of 10GbE ports and some 1GbE was around GBP£500K / USD$800K. Admittedly, this was a fully loaded model, but it forms the basis for a cost analysis against our existing Cat6500 choices. Frankly, I couldn’t convince anyone that this was a good idea.
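A budgetary exercise like this usually boils down to cost per unit of port capacity. A back-of-envelope sketch, where only the chassis price comes from the text above and the port counts are hypothetical placeholders:

```python
# Back-of-envelope per-Gbps cost for a chassis switch, the kind of
# sanity check used when comparing a Nexus-class build against an
# existing Cat6500 estate. Port counts below are hypothetical
# placeholders, not a real Nexus 7018 configuration.

chassis_cost_usd = 800_000   # fully loaded figure quoted above
ports_10g = 128              # hypothetical 10GbE port count
ports_1g = 192               # hypothetical 1GbE port count

capacity_gbps = ports_10g * 10 + ports_1g * 1
print(f"~${chassis_cost_usd / capacity_gbps:,.0f} per Gbps of port capacity")
```

Run the same arithmetic against the incumbent platform’s pricing and you have the start of the cost analysis; in my case the Nexus number never came out looking good.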

Sure, the Nexus 7000 is a good product (not a great product, in my opinion) and offers some 10GbE capability, but the lack of features and the high cost mean that 10GbE is still not part of our short-term strategy. I wonder how many other people have a similar problem?


Which Year was that?

It seems that every year is the year of 10 Gigabit Ethernet.


In March 2009 The Register posted an article, and Highbeam carried a similar story (note: the Highbeam piece is behind a registration wall).

And there’s this article on CNET (really), “10-Gigabit Ethernet comes alive”:

The market for 10-gigabit-per-second Ethernet switching got off to a slow start, but now that corporate customers are looking for more speed on their networks, the technology seems to be hitting its stride.

And the facts?

10GbE hasn’t really happened, has it? The standards took a long time to finish, and the prices have been very high for both the cabling and the switching equipment. Server manufacturers aren’t putting the chips on their motherboards because of high power consumption. But most importantly, almost no one needs the bandwidth except for certain niche applications.

My Prediction

There is no question that 10 Gigabit Ethernet is going to happen. Eventually. But there isn’t enough money or momentum to make 2010 a watershed year. There still isn’t enough demand for bandwidth in most parts of the network to require the upgrade, and CIOs are investing in Virtualisation this year, not Networking.

That said, areas that require long investment cycles may buy 10Gb Ethernet to preempt future upgrades (and thus downtime), and this will drive a surge in 10GbE purchases this year. For example, upgrades to data centres and storage networks (for those using iSCSI and FCoE) may include 10GbE switches and routers to build high-performance backbones while servers and edge switches stay connected at 1GbE. These are high-visibility, high-value purchases that will create a lot of marketing noise and management attention. The reality, however, is that 10GbE will be adopted at small scale, and will not be used in the distribution switches, the wiring closet, or the WAN. (Note that it’s different for Service Providers, who I expect will have a lot more interest in 10GbE for their WAN backbones and may actually make investments in them soon.)

I’m expecting 10GbE to see slow, progressive adoption over the next three years. It’s not an industry revolution, and not enough people need to increase bandwidth to drive rapid adoption. The only real use for 10GbE in the Enterprise is in Data Centres, where Storage and Virtualization/Blade Servers are driving adoption.

No one in the Enterprise cares.

  • Matt Simmons

    I agree that the use of 10GbE for computer networking is somewhat limited, but I wouldn’t call ethernet-based storage a niche market. 10GbE is the driving force behind people being convinced that 4Gb FC is pokey.

    10Gb iSCSI is a formidable option, particularly if the 10Gb switch ports are backward compatible to existing adapter speeds. Of course, even if that’s the case, the infiniband people still scoff. “Only 10Gb/s? Here’s a nickel. Buy yourself a REAL storage network”.

  • Greg Ferro

That’s typical cognitive dissonance. You THINK Storage is really important because you hear a lot about it. Storage is still a small business, still in its technology growth stage, and this means that a lot of money is spent on marketing. All technologies go through this in the early stages.

    A biggish storage network might have a couple of hundred ports. Compare this with a Data Centre that has thousands, or a campus that has tens of thousands and you should have some idea of scale.

I agree that iSCSI will feed heavily on the perception of improved throughput from 10GbE; however, it’s not a huge driver because of the cost of 10GbE switches and NICs.

And yes, InfiniBand is the REAL deal. Current 10Gb virtualisation technologies will take two or three years to catch up with the InfiniBand of today. But convincing people to buy InfiniBand… that’s hard.

  • Chris

    Very well put!

We have been using 10GbE (fiber) at the core and distribution layers for about a year now. We even went as far as running Cat6A to the desktop, because the historical network refresh cycles (particularly concerning cabling) in our organization have been extremely long.

It seems that 10GbE over copper is even less mature than 10GbE over fiber. We found that our network vendor (HP) has a cable length limitation of 15 meters when using CX4 cables. This would fit our current facility, but with expansion to a larger facility, distance concerns pushed us to stick with fiber connections for the time being.

  • Greg Ferro

It’s interesting that copper might nearly be over. The rising price of raw copper, and the falling price of fibre-optic manufacturing, mean that we are approaching cost parity in the data centre on a per-metre basis. The cost of terminating fibre is still higher than copper; those little plugs are hard to do.

    It’s too early to predict the end of the copper, but I can see some of the signs.

  • mrz

When building out a second cage last year in San Jose, it was actually more cost-effective to do 10GE cross-connects vs. 16 x 1GE (8 per switch) between two floors. That included optics, line cards and CRG’s cross-connect charges.

In designing the network for Mozilla’s new data center build, we opted for all 10GE (fiber) at the core and as downlinks to the access layers (Cisco’s 3120X CBS). We felt it simplified wiring and added some future-proofing to the design, and except for the optics themselves, it wasn’t significantly more expensive (and what difference there was erased itself with the reduction in cross-connects needed for the same capacity).

I also changed our IP transit requirements – at a minimum I require anyone wanting to sell me IP transit to deliver 2x10GE handoffs, and I only bought hardware to support 10GE handoffs (Mozilla hardly pushes more than 3-4Gbps across a number of peers). Turns out this too is cost-effective now.

    We looked at Cisco but only focused on the C6500 – Nexus didn’t make sense for the price. The core in Phoenix is all Juniper. Had I really wanted to shave costs, I would have looked at Force10.

    So to your point, 10GE is here for Mozilla.
