Problems with Cat6A Cables in Data Center

I was reading a white paper by Panduit that claims that 10GBaseT is suitable for data centre use. I’ve been critical of Cat6A cable and believe that it is not suitable for the data centre.

The problems that I have with 10GBaseT and Cat6A cabling are:

  • high power consumption
  • large physical Cat6A cable size
  • poor mechanical properties of Cat6A copper
  • unreliability of copper in terms of Bit Error Rate (BER) and long-term electrical capability

Cable Size

It seems that Panduit has developed a special Cat6A cable with a small cross-section, which therefore uses less space in the cable tray. However, that space is normally required to reduce crosstalk between pairs, so Panduit would have had to develop some technology to manage this. Most likely that means a shield in the cable core, which in turn likely means an expensive cable.


Cat6A in cable tray

Power Consumption

Early versions of 10GBaseT used up to 25 watts of power per port. In the latest data sheets for the Cisco Nexus 2232TM switch, Cisco states 1W per port for 10GBaseT.

But what about the server adapters? The white paper is co-authored by Intel, Cisco and Panduit and states:

10GBASE-T power consumption has been rapidly dropping and now Intel’s third-generation 10GBASE-T adapter card, the dual-port Intel Ethernet Server Adapter X520-T2, which includes both Media Access Controller (MAC) and PHY, uses less than 10W per port.

So server adapters still burn a lot of power, but it is getting less over time. Accepted.

Mechanical and Electrical Problems

This leaves the problems of mechanical and electrical performance over time. The basic problem I have is that Cat6A is close to the limit of what can be achieved with copper in terms of electrical performance. It was originally believed that 10GBaseT wouldn’t even be possible.

The physical integrity of Cat6A cable is vital to its propagation performance. Unlike Cat5, Cat6A has little room for signal degradation, which can be caused by:

  • cable kinks, which cause signal reflections
  • over-insertion of the cable, which degrades connector performance
  • physical weakness of the cable at the RJ45 connectors

I’m less concerned about the physical integrity of solid-core copper cable in the horizontal, but I am concerned about the performance of multicore copper in the rack from switch to server, and panel to switch. A simple accident where the Cat6A cable is pinched in a door, or crushed by a tool, can cause intermittent network problems that are truly hard to detect.

The EtherealMind View

I remain convinced that Fibre cabling is the future for 10Gigabit Ethernet and continue to recommend against 10GBaseT as a general rule. This paper doesn’t inspire me to change that view.

The purchase cost of Cat6A is already high, the price of high-quality copper continues to appreciate, and the industry expects further price rises. Installing Cat6A requires much more care, testing and validation to ensure that it works, and this means even more installation cost (and a hidden cost at that). And remember, the underlying error rate of 10GBaseT is poor enough that FCoE cannot use it (a BER worse than 10^-8 is not suitable for the FibreChannel protocol).
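To put those error rates in perspective, here is a rough back-of-the-envelope sketch of the mean time between bit errors on a saturated 10Gbps link. The BER figures used are illustrative examples only, not measurements from any datasheet:

```python
# Mean time between bit errors on a saturated 10 Gbps link.
# BER figures below are illustrative examples, not vendor measurements.

LINE_RATE_BPS = 10e9  # 10 Gbps

def seconds_between_errors(ber: float) -> float:
    """Average seconds between single-bit errors at a given BER."""
    return 1.0 / (ber * LINE_RATE_BPS)

for label, ber in [("FibreChannel-grade link", 1e-12),
                   ("marginal copper link", 1e-8)]:
    print(f"{label} (BER {ber:.0e}): one error every "
          f"{seconds_between_errors(ber):g} seconds")
```

At 10^-12 a saturated link sees an error roughly every hundred seconds; at 10^-8 it sees around a hundred errors every second, which is why a degraded link matters so much more to a lossless protocol.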

I believe that customers should be advised to avoid 10GBaseT and implement OM3 fibre with 10GBaseSX throughout the network. This will improve reliability, and you will sleep better at night.

  • James

    A critical factor in selecting cable type is cost. Most data center users believe 10GbT is cheaper than fiber and that the price difference will grow wider over time.

    However, recent market data suggests that SFP+ copper, with end-to-end transceivers, is now below $30, and SFP+ fiber for longer distances is also dropping fast. I have heard below $120 with end-to-end transceivers in large-volume orders. Do you have any cost-comparison data?

  • David Bulanda

    I think that 10GbT has a place, but not for more than a few racks.

    I recently purchased several SFP+ optics, all of them SX, and I saw prices of $700 each for one brand and then less than $100 for the other. So at that low a $-per-port, I think you are better off with the fiber solution. Any idiot can weave a power cord through your copper network cables and induce interference.

  • Wes Felter

    Unfortunately, last I heard Intel is planning to shove 10GBASE-T LOM down the industry’s throat because it’s backwards compatible with 1000BASE-T and SFP+ is not backwards compatible.

  • Ryan Malayter

    Thank you for addressing this topic. 

    The problem as I see it comes about when you’re doing 10 GbE all the way to the server. A modern (or “cloud” or whatever term you like) datacenter is going to need at least 8 if not 16 or more 10GbE links per rack to minimize over-subscription. Ideally, you’d want 20x10GbE links from a 20-server rack for no over-subscription whatsoever. Times two if you’re running two 10GbE links to each server for redundancy (but that’s not typically done in a cloud-type deployment, where the redundancy should be in software).

    10GBASE-T or SFP+/twinax is about the same cost for in-rack connectivity. But if we want 16 uplinks per rack in a Clos (leaf-and-spine) architecture, and want to use 10GBASE-SR with vendor-supplied transceivers, that’s going to cost US$20,000-40,000 per cabinet above using 10GBASE-T.

    Another way to look at it is that with 10GBASE-SR, your interconnect is going to cost more than your servers when you factor in TOR switches, spine switches, transceiver, and fiber cabling costs. If your DC is big enough that you need to move to a three-stage Clos network, you’re looking at the network being twice as expensive as the servers it connects!
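As a sanity check on Ryan's per-cabinet figure, a quick sketch. The per-optic price is an assumed placeholder for a vendor-branded 10GBASE-SR transceiver, not a quote:

```python
# Uplink optics premium per cabinet for a leaf-and-spine build.
# The per-optic price below is an assumed placeholder, not a vendor quote.

UPLINKS_PER_CABINET = 16
OPTICS_PER_LINK = 2          # one 10GBASE-SR transceiver at each end of the link
PRICE_PER_SR_OPTIC = 1000    # assumed US$ price per vendor-branded optic

sr_premium = UPLINKS_PER_CABINET * OPTICS_PER_LINK * PRICE_PER_SR_OPTIC
print(f"SR optics cost per cabinet: US${sr_premium:,}")
```

With an assumed US$1,000 optic at each end of 16 uplinks, the cabinet lands inside the US$20,000-40,000 range quoted in the comment.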

    People “want” 10GBASE-T not because they love RJ-45 connectors, but because the RJ-45 connector is a commodity item and the costs are very low. If transceivers didn’t cost so much, datacenters probably would have switched entirely over to fiber 10 years ago with the GbE transition.

  • chrismarget


    I’m glad you drew a distinction between Cat6A structured cabling and use of Cat6A patching between server and TOR switch.  Your previous posts (or perhaps a comment on one) left me wondering where you stood on each.

    Cable size and termination cost are a non-issue. All of my customers either have already implemented TOR switching, or are headed that way. Nobody uses copper for switch interconnections at 1Gig, and it isn’t even an option at 10Gig (uplink ports are generally pluggable; 10GBASE-T appears only on access ports).

    The Panduit cable trough picture, while dramatic, is a joke. The only way any of my customers would have a trough full of Cat6A is if they were doing end-of-row or centralized 10Gb/s access switching. They’re not. I’m with you on recommending against this notion.

    I have no problem with Cat6A between the server and the TOR switch.  Physical size is a downside that will have to be managed inside the cabinet, but install time and cost are not.  These cables are pre-made inexpensive patch cords.

    The argument about physical damage risk doesn’t resonate with me either.  Fiber, TwinAx and Cat6A are all easily damaged, and when that occurs, the damage is not difficult to detect.  Monitoring tools counting CRC errors will raise the issue immediately.

    Finally, is it your recommendation to load a (USD List price) $10,000 N2K-C2232PP with $48,000 worth of SR optics, plus another $10,000 – $15,000 in optics for the server end?  The N2K-C2232TM lists for only $11,500 and doesn’t require any optics for server/switch interconnect.

    N2K-C2232PP + 32 x Cisco SR + 32 x Finisar SR could probably be had for around $35,000, and will likely require purchase of 32 server NICs.

    N2K-C2232TM can be had for under $6,000, and the server NICs will be built-in any day now.

    This is a tough sell.
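For what it's worth, the street-price tally in this comment works out as below. The figures are the comment's own rounded examples, dated and approximate, not current quotes:

```python
# Rough tally of the two ToR options described in the comment above.
# Street prices are the comment's own example figures, not current quotes.

fiber_tor_street = 35000   # N2K-C2232PP + 32 Cisco SR + 32 Finisar SR (approx.)
copper_tor_street = 6000   # N2K-C2232TM; no optics needed for server/switch links

premium = fiber_tor_street - copper_tor_street
print(f"Approximate fiber ToR premium per 32-port switch: US${premium:,}")
```

On those numbers the fiber option carries roughly a US$29,000 premium per switch, before server NICs are counted.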

    • Etherealmind

      I would use FETs for switch uplinks – they are really cheap.

      In terms of server connectivity, yes, I would recommend the use of SX, which is going to be pricey if you buy genuine. Most people do not need hundreds of 10GbE ports, so they could start small and scale up gradually while SFP+ prices come down.

      I guess the core of my argument is: don’t install a huge Cat6A backbone. Maybe use it for ToR applications where the cables can be easily replaced, but I’d prefer TwinAx/DAC copper for those connections because it is also much cheaper than either 10GBaseT or 10GBaseSX.

      Plus, I expect the failure rate of copper Cat6A to weigh heavily for many people. The BER is actually quite high, and that should really stop people from using 10GBaseT.

      • chrismarget

        Yup, that was the previous post that left me wondering :-)

        “I guess the core of my argument is don’t install a huge Cat6A backbone.” Definitely. If you really embrace the Nexus vision, you might not need to install *any* copper backbone.

        “for TOR applications …  TwinAx … are also much cheaper than … 10GBaseT”
        You lost me again.  TwinAx and 10GBASE-T are both low-cost options, but the 2232TM + patch cords is (marginally) *less* expensive than 2232PP + TwinAx.

        That margin will widen if/when 10GBASE-T LOM makes server interfaces free, but TwinAx requires you to purchase NICs.

        The BER angle is interesting, but I’m not convinced that it will inform purchasing decisions.  I don’t have any FCoE (out of thousands of 10gig ports), and I’m sure my customers would be better off working on their (generally sorry) QoS than worrying about their interface BER.  Time will tell.

  • Tristan Rhodes

    I want to learn more about this topic and the possible problems with 10GBaseT and FCoE. Unfortunately I haven’t found many references to this problem; in fact I found a few articles that say a BER of 10^-12 is fine:

    “Fortunately, both 1 GbE and 10 GbE have a BER requirement that matches that of FC — a 10-to-12 bit error rate (1 in 10^12 bit error rate).”

    “Bit Error Rate objective for 1 Gb and 10 Gb Ethernet is the same as for Fibre Channel (10^12)”

  • hitekalex

    Greg – 10GBase-SR for Datacenter server connectivity doesn’t stand a chance against 10GBase-T in the long run.  

    It’s a matter of simple economics, really. The servers are going to start shipping with 10GBase-T LOMs en masse in 2012. No one in their right mind is going to buy and install SFP+ NICs in servers when they get ‘good enough’ 10GBaseT LOM NICs “for free”.

    The server teams will expect the network teams to support 10GBaseT at the edge, because that’s what they get with their shiny new DL380 G8s (or whatever). FCoE who? In the game of IT infrastructure, “good enough” always wins. And 10GBaseT is just that – good enough.

    • Etherealmind

      I agree, sort of. I personally do not recommend 10GBaseT because it’s not reliable in use (the cable is vulnerable to intermittent faults when damaged).

      I can see the case for ToR use cases with cheap, non-critical servers, but not for backbone wiring or mission-critical requirements.

  • Pingback: A High Fibre Diet « The Data Center Overlords

  • lordbrayam

    Great post !

  • Pingback: Predicting What Will Be Big in 2012 – Part 1 — My EtherealMind

  • Pingback: Response:Noise to Signal Ratio. Bending Cat6 cable does causes problems — My EtherealMind

  • Pingback: Size Differences - Cat5 and Cat6 Cable Bundles — My EtherealMind

  • dealinfacts

    Why are people using OM3/OM4? Why not just install singlemode and be done with it? But yes, in large data centers, stick with fiber. The same can be said of office floors: establish a grid and distribute services with fiber to a switch, and short copper runs, less than 50 ft, to users within the grid.

    A lot less weight, heat and cable tray, and a lot more performance. You just have to get over the fact that you have distributed switches; that’s why it would only be useful for A-type offices.

    • Greg Ferro

      Single mode transceivers are ten times the price of multimode. The physical cable and installation costs are the same for MM or SM.

      And transceivers cost much, much more than the cabling plant over a ten-year life. Single mode is not an option.

    • Jorge

      Singlemode is way more expensive than multimode (OM3/OM4). While the cable itself is cheaper, the optics on the active equipment make it way too expensive.

      • dealinfacts

        At the beginning it was. Most GBICs at a gig can be had for sub-$300. I haven’t priced out a 10G GBIC, but I would contend that if you had installed a singlemode cable plant from the start, say 100 M transceivers, you would have recouped your investment in short order.

        I am watching to see how 40G/100G plays out. The amount of fiber needed to deliver either is ridiculous with OM3/OM4, not to mention distance issues and the overall cost of fiber. Corning just priced out a 48-fiber MTP-to-MTP OM4 trunk – 100 ft at a price of $4,500.

        Singlemode can deliver with a single pair, but the electronics are the devilish part of it. We’ll see who can pump out a singlemode 40/100G switch.