Musing: How many 10 Gigabit Ethernet ports do you really need?

I was working on a data centre design recently and ran some numbers on how many 10 Gigabit Ethernet ports actually needed to be deployed. I got a bit of a shock.

Points of Design Invariance

  • Let's start off by saying it's not a huge data centre.
  • Several hundred physical servers of various types, accumulated over the years.
  • Lots of them are old or legacy, and there isn't much VMware. Pretty normal for most organisations.

Invariant Factors

  • A typical HP C-Class or Cisco UCS blade chassis needs no more than 4 x 10GbE ports today.
  • Generally, you can't get more than three blade chassis into an existing rack because of
    1. power availability of less than 10kVA per rack (see the rough power sketch after this list).
    2. localised heat density – racks full of high-power servers generate more heat than the rack can disperse.
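As a rough sanity check on that power constraint, here's a minimal sketch. The figure of roughly 3 kVA for a half-populated blade chassis is my own assumption for illustration, not a vendor number; real draw varies with blade count and load.

```python
# Rough rack power sanity check (sketch only).
# ASSUMPTION: ~3 kVA per half-populated blade chassis; actual draw varies
# widely with blade count, CPU load and power supply configuration.
RACK_POWER_KVA = 10      # typical power available per rack (from the post)
CHASSIS_KVA = 3.0        # assumed draw per half-populated chassis

max_chassis = int(RACK_POWER_KVA // CHASSIS_KVA)
print(f"Chassis that fit in a {RACK_POWER_KVA} kVA rack: {max_chassis}")  # -> 3
```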

Design Thresholds

This leads to the conclusion that a row of ten racks needs no more than a dozen ports per rack, giving a worst-case design threshold of 120 x 10GbE ports for a ten-rack row with a standard power capacity of about 10kVA per rack.

  • Let's assume that you agree a density of three chassis per rack works (HP C-Class or Cisco UCS blade chassis, it doesn't much matter at this point).
  • Each chassis has eight blades (kind of low; up to sixteen is certainly possible).
  • That's 24 physical servers per rack, or 240 servers per row.
  • Let's assume that you run VMware/Hyper-V on all of those servers.
  • Let's say you run an average density of five guests per server. That's quite low too; you can easily run ten.
  • That's 1200 virtual servers.
  • Using just 120 x 10GbE ports (the arithmetic is sketched below).
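To make that arithmetic explicit, here's a back-of-the-envelope sketch using exactly the assumptions from the list above (nothing vendor-specific):

```python
# Back-of-the-envelope port and VM counts for a ten-rack row,
# using the assumptions from the bullet list above.
racks_per_row = 10
chassis_per_rack = 3
ports_per_chassis = 4      # 10GbE uplinks per blade chassis
blades_per_chassis = 8     # deliberately low; sixteen is possible
guests_per_blade = 5       # conservative consolidation ratio

ports_per_row = racks_per_row * chassis_per_rack * ports_per_chassis
physical_servers = racks_per_row * chassis_per_rack * blades_per_chassis
virtual_servers = physical_servers * guests_per_blade

print(f"10GbE ports per row: {ports_per_row}")     # -> 120
print(f"Physical servers   : {physical_servers}")  # -> 240
print(f"Virtual servers    : {virtual_servers}")   # -> 1200
```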

You can build a pretty good “Business as Usual” style fabric with just four 40-port 10GbE switches (see Ivan's The Data Center Fabric Architectures post for an explanation), using the excess Ethernet ports to provide some LACP bundles between the four units, plus some standards-based TRILL (yes, I'm looking at you, Brocade – calling your non-standard implementation that uses FSPF instead of IS-IS “TRILL” is very close to an outright lie).
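As a quick sketch of the port budget for that four-switch fabric: splitting the leftover ports evenly into inter-switch LACP bundles is my assumption for illustration, not a prescribed design.

```python
# Port budget for a small four-switch 10GbE fabric (sketch only).
switches = 4
ports_per_switch = 40
server_facing_ports = 120   # from the ten-rack row above

total_ports = switches * ports_per_switch          # 160
spare_ports = total_ports - server_facing_ports    # 40 left over
spare_per_switch = spare_ports // switches         # ~10 per switch for LACP bundles

print(f"Total 10GbE ports: {total_ports}")
print(f"Spare for inter-switch LACP bundles: {spare_ports} ({spare_per_switch} per switch)")
```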

  • This configuration doesn’t cost a whole lot if you don’t fuss with FCoE.
  • In fact, given that FCoE still isn't mainstream (i.e. for anyone outside Cisco's sphere of influence), you would be well advised to stick with boring (but relatively cheap) Fibre Channel, because you've probably paid for it anyway.

So, how many 10GbE ports does the world need? I'm not so sure, but it doesn't seem to be a big number. Shocking, isn't it? Compare this with how many switches you have in a ten-rack row today using 1GbE.

Food for thought.

  • http://rizzitech.blogspot.com Marco Rizzi

    Hi Greg,
    I agree with you, you can't have more than 3 blade chassis in a single rack – it would be too heavy, too…

    If I have understood correctly, your calculation for the “worst case” is:

    1 chassis = (4 x 10G ports) x 2 network modules = 8 x 10G ports
    1 rack = 3 x 1 chassis = 24 x 10G ports
    1 row = 10 x 1 rack = 240 x 10G ports

    Is that correct?

    in this case, how many switches do you recommend per row?
    3 x 96 ports?
    6 x 48 ports?
    two switch chassis?

    I’m just curious … :-)

    Marco

  • http://www.curtis-lamasters.com Curtis LaMasters

    If I’m planning a greenfield install (which rarely happens), I start off by using a 4 to 1 oversubscription for 1GbE ports. So for simple math, a single 10GbE port = 10x1GbE ports = 40 virtual servers per single 10GbE port. Obviously these numbers are for planning but I have yet to get bit in the butt with them…kinda like nobody ever got fired for buying Cisco. :)

    Also, the largest environment I have ever designed was 20 racks, so my scale is nowhere near what you guys would need.

  • Leo Song

    Well, Greg.

    Good luck! It's 120 x 10G ports on paper, but down the road it might be quite different :(

    Leo

    • http://etherealmind.com Greg Ferro

      Maybe, but that will be the next upgrade cycle – and we will have different hardware by then. High density 10GbE and 40GbE/100GbE backbones.

      I hope so!

  • Sam Stickland

    I’m curious, has anyone actually seen a physical to virtual migration where the number of physical servers actually ended up decreasing? It’s just that so far, I’ve never seen it.

    I’ve seen grand P-to-V plans that never came to pass. I’ve seen cases where most of the physical servers ended up hanging around anyway and eventually got upgraded or replaced (becoming VM hosts themselves). Or, once the VM deployment process is made easy or semi-automated, the number of VMs tends to shoot right up too. Or the desktop machines all get replaced with thin clients and the number of VDIs required in the data centre is suddenly several thousand.

    But I’ve never actually seen virtualisation make the datacentre smaller. Am I the odd one out?

    • http://blog.michaelfmcnamara.com Michael McNamara

      @Sam I migrated our Data Center last year and reduced our original cage from 70 racks to 30 racks all thanks to our virtualization efforts. The only item that grew (and continues to grow – sore subject) was SAN disk storage.

      @Greg we're currently achieving a ratio of greater than 20:1 utilizing HP C-Class enclosures, so you're being super conservative with your numbers. In your design thresholds I'm assuming you're trying to maintain a set min/max over-subscription of the network with respect to the VM guests and servers?

      Cheers!

      • http://etherealmind.com Greg Ferro

        I was trying to be conservative. Many companies are concerned about Fate Sharing with 20:1 ratios, so it's not common. It's also hard for people who haven't virtualised to comprehend that sort of overloading.

        Even with conservative numbers, it’s still very surprising though.

        greg

    • http://etherealmind.com Greg Ferro

      You make a good point. But even if your data centre doesn't get smaller, at least it doesn't get bigger as fast as it used to. And that also is a business success.

      The most common cause of “non-shrinkage” is not running high enough densities. We are still waiting for the storage industry to deliver better products before virtualisation will deliver a lot of the gains we expect.

      greg

  • tim

    Excellent point you make here. I'd suggest a chassis switch to give support for some iLO/OOB ports and some growth potential (more 10G ports, 40/100GbE support for inter-switch connectivity, etc.).

    If you run the numbers, flattening out the network like this has a dramatic impact on the networking equipment economics and it makes it really easy to cut costs while increasing resiliency and availability.