Response: Noise to Signal Ratio. Bending Cat6 cable does cause problems

In this blog post, Douglas Boom from Intel claims that bending Category 6 cable doesn’t cause failures of 10 Gigabit Ethernet. Wrong!

Noise to Signal Ratio:

First up, bending the cable. I was at a trade show and a gentleman came up and said to me, “That 10 Gigabit stuff is for the birds, you can’t even bend the cables without dropping link!” …SNIP….. “I think you’re mistaken, sir,” I said. He smiled, reached behind my demo and grabbed the cable. He bent it over on itself, making the straight run into an O shape. He pushed with his thumb and made it into more of an I shape. He let it go and walked over to the console screen, expecting error message after error message of disconnected link. To his shock (and my amusement), nothing happened. “Wow,” he mumbled. He took my card and left the show floor, all the while being heckled by his travel buddies. Unless you damage the cable, I’ve never seen a bent cable affect link. And I’m not gentle to my cables.


It’s possible that data carried over that cable might not be affected. But there is a significant likelihood that the cable will fail later, due to temperature variations or other physical events, e.g. jamming it in a door. Once mechanical trauma has occurred, the cable has degraded signal performance. If there are other signal losses, such as interference, bad connectors, or secondary mechanical trauma, then the cable might fail immediately. More likely it will fail at some point in the future, when no one remembers why the cable is failing.

Modern cell phones and modern 10 Gigabit BASE-T are both designed to use as little power as needed to reach the end station so you would have to be pretty close with a very powerful cell phone to put a ton of noise onto the wire. And we do test cell phone interference, but with good cables you shouldn’t see any issues. Use bad cables and things could get ugly. Good cables include shielding to protect signal wires from disturbers. Those DSPs can only filter out so much; an investment in quality cables is an investment in the quality of your data.

I completely disagree with the use of shielded cabling. It’s true that shielding can further improve noise rejection; however, current cabling designs use twisted pairs to reject most of this noise. And if this were a problem, most networks wouldn’t be working today. In my experience, shielded cabling costs far more than the risk profile justifies, and it typically causes problems due to faulty shield earthing.
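To put the noise discussion in perspective, the decibel arithmetic behind signal-to-noise ratio is simple. Here is a minimal sketch using illustrative power figures of my own (neither post quotes actual numbers):

```python
import math

def snr_db(signal_mw: float, noise_mw: float) -> float:
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_mw / noise_mw)

# Hypothetical figures: a 1 mW received signal against rising noise power.
# Each 10x increase in coupled noise costs 10 dB of SNR, which is the
# margin that twisted pairs (and, where fitted, shielding) defend.
for noise_mw in (0.0001, 0.001, 0.01):
    print(f"noise {noise_mw} mW -> SNR {snr_db(1.0, noise_mw):.1f} dB")
```

The point is that noise rejection is about preserving margin: a disturber that raises the noise floor tenfold eats 10 dB of headroom whether the cable is bent or not.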

The EtherealMind View

Cabling is hard. Take time to learn about it and understand physical signal propagation in a copper medium.

Cable shielding has been comprehensively proven useless over the last 20 years of network cabling. Don’t even consider it. It’s nothing but pain in so many different ways.

Damaged copper cores cause intermittent faults, and they are the very worst kind of faults.

I’ve talked about this before: Problems With Cat6A Cables in Data Center. Nothing has changed. 10GbE over copper isn’t the best idea for reliability.

Image Credit

  • MissinLnk

    Agreed, with a slight point.  In extremely noisy RF environments (like AM/FM/TV transmitters) or on outside runs where lightning can be expected, shielded CAT5/6 cabling shines.  However, you can frequently get away with unshielded in most noisy RF environments with careful planning of your cable layout.  Shielded cable has its uses, but it’s rare that a normal IT person would need it.
    If anyone ever questions whether a bend in a cable can cause issues, have them try the same experiment with an 80+ meter cable that’s run near a high-powered FM/TV transmitter.  (Assuming a spec with a 100 meter max length.)  Make the crimp right next to the transmitter for faster results.

  • Will

    Can you comment on some recent security requirements I’ve had in which shielded CAT6 must be used for ‘lower’ security systems that are cabled in the same trays?

    I’m told they think the lower security environment can ‘capture’ data if both cables were non-shielded.

    • Etherealmind

      In security it’s hard to say never, but that sounds like cuckoo thinking to me. Shielding would reduce that risk but wouldn’t prevent it. If you are concerned about an attack that requires that sort of commitment, you should be using fibre optic or, most likely, LinkSec to encrypt your data.

      Otherwise, sounds like someone is an idiot.

  • Jsicuran

    Wouldn’t this also depend on the copper type used in the cable? Solid core per pin, or stranded per pin? Usually the stranded type was for flexing in racks and crosstalk reduction, and the solid core for the long E/W or N/S runs.

    • Etherealmind

      The electrical performance of stranded (multicore) cable is much worse than solid core. The standard defines that no more than 10 metres of stranded cable should be used, i.e. no more than two patch leads, each no longer than 5 metres.

      The stranded construction causes a small impedance mismatch which can cause FEXT, so it’s not a good choice either.

      Finally, as the individual cores snap, break or wear, signal propagation begins to degrade.

      In my understanding, stranded cabling causes all sorts of problems and fails more often than solid core because of this degradation.


  • guydu

    I do not fully agree with the generic statement that shielded cable is not required. It depends on the application. If you are using the cable for timing, T-1 or other high line-power applications, then shielding is a must, because an unshielded cable will potentially interfere with other high-powered communication cable applications, and vice versa. For lower-powered applications such as Ethernet, I agree it is a waste of money unless you are running very close to an interference source, such as motors or fluorescent lights, and this cannot be avoided.
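A footnote on the stranded-versus-solid thread above: the patch-lead limits are easy to sanity-check in code. This sketch assumes the commonly cited 100 metre channel limit and a 20% attenuation derating for stranded cord; that derating figure is my reading of TIA-568, not something stated in the comments:

```python
def channel_ok(solid_m: float, patch_leads_m: list[float]) -> bool:
    """Rough check of a copper channel against the 100 m limit.

    Assumes at most 10 m of stranded patch cord in total, and derates
    stranded cord by 20% (each metre counts as 1.2 m of solid core)
    because stranded cable attenuates more.
    """
    stranded = sum(patch_leads_m)
    if stranded > 10:
        return False
    return solid_m + 1.2 * stranded <= 100

print(channel_ok(85, [5, 5]))     # 85 + 1.2 * 10 = 97 m
print(channel_ok(90, [5, 5]))     # 90 + 1.2 * 10 = 102 m, over budget
print(channel_ok(80, [5, 5, 5]))  # 15 m of stranded cord, over the cord limit
```

The derating is why "no more than 10 metres of stranded" is not just bureaucracy: those metres cost more loss budget than the same length of solid core.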