Is this the year of 10 Gigabit Ethernet in the LAN?


10Gb Ethernet will provide more bandwidth and speed for networking, but it hasn’t really grown the way that vendors expected. In my experience, 10GbE has some real problems that will make it grow gradually and organically, rather than force a new round of investment in networking.

The Problems with 10Gb Ethernet

Power

If you are using copper patch leads for 10GbE, you are going to need a lot of power. A standard copper port can use up to 45W (although 10GBaseCX-4 apparently uses 4.5W per port).
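To put those per-port figures in context, here is a rough back-of-envelope sketch of what a single line card could cost to run for a year. The 45W and 4.5W values are the ones quoted above; the port count, electricity price and PUE are illustrative assumptions only.

```python
# Back-of-envelope power cost for one 10GbE line card.
# Per-port wattage figures are from the article; everything else is assumed.

PORTS = 48                # assumed 48-port line card
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12      # assumed electricity price (GBP per kWh)
PUE = 2.0                 # assumed facility Power Usage Effectiveness (cooling overhead)

def annual_cost(watts_per_port: float) -> float:
    """Yearly electricity cost for one line card, including cooling overhead."""
    kilowatts = PORTS * watts_per_port / 1000.0
    return kilowatts * PUE * HOURS_PER_YEAR * PRICE_PER_KWH

print(f"10GBaseT   @ 45.0W/port: ~GBP {annual_cost(45.0):,.0f} per year")
print(f"10GBaseCX4 @  4.5W/port: ~GBP {annual_cost(4.5):,.0f} per year")
```

On those assumptions a copper 10GBaseT card costs roughly ten times as much to power and cool as a CX-4 card, which is why the per-port wattage matters at scale.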

The IEEE is working on Energy Efficient Ethernet (802.3az) technology that will allow links to auto-negotiate down to lower speeds or go to “sleep” during periods of inactivity, which will further reduce power consumption.

Cabling

10GBaseT copper uses 650MHz of frequency spectrum and needs high-quality cabling to work reliably. This means that you need to properly test your existing Cat5 or replace it with Cat6A or better. Cat6A (or even Cat6) cable is physically much larger (and you may not have the space in your computer room), but in this case you will have up to 100 metres of cable length. If you use Cat5 or Cat5e, the distance is much shorter, typically less than 40 metres depending on the quality of your cable, and it would probably need testing for assured reliability.
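As a rough planning aid, the reach figures above can be encoded into a simple check like the sketch below. The numbers are the ones quoted in this section, and they are no substitute for actually testing the installed cabling.

```python
# Minimal 10GBaseT cabling sanity check, using the reach figures quoted above.
# Treat the distances as planning guidance, not as a substitute for cable testing.

MAX_REACH_M = {
    "cat6a": 100,   # full 100 metre channel
    "cat6":  100,   # per the figures above, but physically much larger cable
    "cat5e": 40,    # quality dependent, needs testing for assured reliability
    "cat5":  40,    # quality dependent, needs testing for assured reliability
}

def link_ok(category: str, run_length_m: float) -> bool:
    """Return True if a 10GBaseT run of this length is plausible on this cable grade."""
    reach = MAX_REACH_M.get(category.lower())
    if reach is None:
        raise ValueError(f"unknown cable category: {category}")
    return run_length_m <= reach

print(link_ok("cat6a", 85))   # True  - within the 100 metre channel
print(link_ok("cat5e", 55))   # False - beyond ~40 metres, retest or recable
```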

10GBaseSR uses multimode cabling but has different cable lengths depending on the type of cabling, and certain combinations will require mode-conditioning patch leads.

The current list of 10Gb XENPAK / X2 interfaces from Cisco shows the confusion that the different types of cabling cause. For example, consider the following table showing the LAN options (I’ve removed the WAN units) and the variation in cabling types:

| X2 Product ID | XENPAK Product ID | Transceiver Type | Wavelength | IEEE Standard | Maximum Distance / Cable Type |
|---|---|---|---|---|---|
| X2-10GB-LRM | XENPAK-10GB-LRM | 10GBASE-LRM | 1310 nm serial | 802.3aq | 220m over multimode fiber |
| X2-10GB-SR | XENPAK-10GB-SR | 10GBASE-SR | 850 nm serial | 802.3ae | 26m over 62.5-micron FDDI grade multimode fiber; 33m over 62.5-micron 200 MHz x km multimode fiber; 66m over 50-micron 400 MHz x km multimode fiber; 82m over 50-micron 500 MHz x km multimode fiber; 300m over 50-micron 2000 MHz x km multimode fiber |
| X2-10GB-LR | XENPAK-10GB-LR+ | 10GBASE-LR | 1310 nm serial | 802.3ae | 10km over single-mode fiber |
| X2-10GB-ER | XENPAK-10GB-ER+ | 10GBASE-ER | 1550 nm serial | 802.3ae | 40km over single-mode fiber |
| X2-10GB-LX4 | XENPAK-10GB-LX4 | 10GBASE-LX4 | WWDM 1310 nm | 802.3ae | 300m over 62.5-micron FDDI grade multimode fiber; 240m over 50-micron 400 MHz x km multimode fiber; 300m over 50-micron 500 MHz x km multimode fiber |
| X2-10GB-CX4 | XENPAK-10GB-CX4 | 10GBASE-CX4 | Copper | 802.3ak | 15m over 8-pair 100-Ohm InfiniBand cable |
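To show how much of the selection problem is really just a lookup exercise, here is a small sketch that encodes the table above as data and picks an optic for a given fibre type and run length. The distances are copied from the table; the preference ordering (first match wins) is an assumption for illustration.

```python
# Encode the Cisco X2/XENPAK LAN options as data and pick an optic for a given
# fibre type and distance. Distances come from the table above; the ordering
# (shortest-reach options listed first) is an assumption.

OPTICS = [
    # (X2 product ID, fibre/cable type, maximum distance in metres)
    ("X2-10GB-CX4", "cx4-copper",          15),
    ("X2-10GB-SR",  "mmf-62.5um-fddi",     26),
    ("X2-10GB-SR",  "mmf-50um-2000mhz",   300),
    ("X2-10GB-LRM", "mmf-62.5um-fddi",    220),
    ("X2-10GB-LX4", "mmf-62.5um-fddi",    300),
    ("X2-10GB-LR",  "smf",              10000),
    ("X2-10GB-ER",  "smf",              40000),
]

def pick_optic(fibre: str, distance_m: float) -> str | None:
    """Return the first listed optic that reaches distance_m over this fibre type."""
    for product, media, reach in OPTICS:
        if media == fibre and distance_m <= reach:
            return product
    return None

print(pick_optic("mmf-62.5um-fddi", 150))   # X2-10GB-LRM on legacy FDDI-grade fibre
print(pick_optic("smf", 25000))             # X2-10GB-ER for a 25km single-mode run
```

Even with only the LAN options, getting the fibre type wrong means either a link that never comes up or an optic that costs far more than it needed to.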

The impact of cabling

In a recent project to plan a refit of an existing data centre, the 10GbE cabling requirement was a major problem. Because of constraints in change control and risk management, we eventually decided to use 1Gb Ethernet, because the time needed to obtain long change windows exceeded the length of the project.

And in other projects, the cost of recabling the fibre optics to meet the new requirements for 10GbE was prohibitive for smaller works. That is, we couldn’t just add a “patch of green” to an existing facility and extend the new switch as funds became available.

Which is weird, because it reminds me of the Token Ring / FDDI / Ethernet wars back in 1995 or so.

High Cost

If you take the time to build budgetary pricing around a Cisco Nexus 7000, you will quickly realise that Cisco’s 10GbE-capable switch is really expensive. I found that a typically configured Nexus 7018 with a good number of 10GbE ports and some 1GbE was around GBP£500K / USD$800K. Admittedly, this was a fully loaded model, but it forms the basis for a cost analysis against our existing Cat6500 choices. Frankly, I couldn’t convince anyone that this was a good idea.
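For what it’s worth, the cost analysis I mean is nothing more sophisticated than a blended per-port comparison like the sketch below. The GBP£500K figure is the budgetary number above; the port counts and the Cat6500 figure are illustrative assumptions, not quotes.

```python
# Blended per-port cost comparison. The Nexus 7018 chassis figure is the
# budgetary number from the article; port counts and the Catalyst 6500
# figure are illustrative assumptions.

def cost_per_port(chassis_cost: float, ports_10g: int, ports_1g: int) -> float:
    """Cost per 1GbE-equivalent port, counting each 10GbE port as ten 1GbE ports."""
    weighted_ports = ports_10g * 10 + ports_1g
    return chassis_cost / weighted_ports

nexus = cost_per_port(500_000, ports_10g=64, ports_1g=96)      # assumed configuration
cat6500 = cost_per_port(150_000, ports_10g=8, ports_1g=336)    # assumed configuration

print(f"Nexus 7018 : ~GBP {nexus:,.0f} per 1GbE-equivalent port")
print(f"Cat6500    : ~GBP {cat6500:,.0f} per 1GbE-equivalent port")
```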

Sure, the Nexus 7000 is a good product (not a great product, in my opinion) and offers some 10GbE capability, but the lack of features and high cost mean that 10GbE is still not part of our short-term strategy. I wonder how many other people have a similar problem?


Which year was that?

It seems that every year is the year of 10 Gigabit Ethernet.

2009

In March 2009 The Register posted an article

About Greg Ferro

Greg Ferro is a Network Engineer/Architect, mostly focussed on Data Centre, Security Infrastructure, and recently Virtualization. He has over 20 years in IT, working as a freelance consultant for a wide range of employers including Finance, Service Providers and Online Companies. He is CCIE #6920 and has a few ideas about the world, but not enough to really count.

He is a host on the Packet Pushers Podcast, a blogger at EtherealMind.com, and is on Twitter @etherealmind and Google Plus.

You can contact Greg via the site contact page.

  • http://www.standalone-sysadmin.com/blog Matt Simmons

    I agree that the use of 10GbE for computer networking is somewhat limited, but I wouldn’t call ethernet-based storage a niche market. 10GbE is the driving force behind people being convinced that 4Gb FC is pokey.

    10Gb iSCSI is a formidable option, particularly if the 10Gb switch ports are backward compatible with existing adapter speeds. Of course, even if that’s the case, the InfiniBand people still scoff: “Only 10Gb/s? Here’s a nickel. Buy yourself a REAL storage network”.

  • http://etherealmind.com Greg Ferro

    That’s typical cognitive dissonance. You THINK storage is really important because you hear a lot about it. Storage is still a small business, and still in its technology growth stage, which means that a lot of money is spent on marketing. All technologies go through this in the early stages.

    A biggish storage network might have a couple of hundred ports. Compare this with a Data Centre that has thousands, or a campus that has tens of thousands and you should have some idea of scale.

    I agree that iSCSI will feed heavily on the perception of improved throughput of 10GbE; however, it’s not a huge driver because of the cost of 10GbE switches and NICs.

    And yes, InfiniBand is the REAL deal. Current 10Gb virtualisation technologies will take two or three years to catch up with the InfiniBand of today. But convincing people to buy InfiniBand… that’s hard.

  • http://www.hiddenone.net Chris

    Very well put!

    We have been using 10GbE (fiber) at the core and distribution layers for about a year now. We even went as far as to run Cat6A to the desktop because the historical network refresh cycles (particularly concerning cabling) of our organization have been extremely long.

    It seems that 10GbE over copper is even less mature than 10GbE over fiber. We found that our network vendor (HP) has a cable length limitation of 15 meters when using CX4 cables. This would fit our current facility; however, with expansion to a larger facility, distance concerns pushed us to stick with fiber connections for the time being.

  • http://etherealmind.com Greg Ferro

    It’s interesting that copper might nearly be over. The rising price of raw copper, and the dropping price of fibre optic manufacturing, mean that we are approaching cost parity in the data centre on a per-metre basis. The cost of terminating fibre is still higher than copper; those little plugs are hard to do.

    It’s too early to predict the end of copper, but I can see some of the signs.

  • mrz

    When building out a second cage last year in San Jose, it was actually more cost effective to do 10GE cross-connects vs. 16 1GE (8 per switch) between two floors. That included optics, line cards and CRG’s cross-connect charges.

    In designing the network for Mozilla’s new data center build (http://blog.mozilla.com/mrz/2010/01/04/mozillas-new-phoenix-data-center/) we opted for all 10GE (fiber) at the core and as downlinks to the access layers (Cisco’s 3120X CBS). We felt it simplified wiring and added some future-proofing to the design, and except for the optics themselves it wasn’t significantly more expensive (and what difference there was erased itself with the reduction in cross-connects needed for the same capacity).

    I also changed our IP transit requirements – at a minimum I require anyone wanting to sell me IP transit to deliver 2x10GE handoffs, and only bought hardware to support 10GE handoffs (Mozilla hardly pushes more than 3-4Gbps across a number of peers). Turns out this too is cost effective now.

    We looked at Cisco but only focused on the C6500 – Nexus didn’t make sense for the price. The core in Phoenix is all Juniper. Had I really wanted to shave costs, I would have looked at Force10.

    So to your point, 10GE is here for Mozilla.

