I was doing a Data Centre Design recently and ran some numbers on how many 10 Gigabit Ethernet ports actually need to be deployed. I got a bit of a realisation shock.
Points of Design Invariance
- Let's start off by saying it's not a huge data centre.
- Several hundred physical servers of various types, accumulated over the years.
- lots of them old/legacy, not a lot of VMware. Pretty normal for most organisations.
- A typical HP C-Class or Cisco UCS blade chassis needs no more than 4 x 10GbE ports today.
- generally, you can't get more than three blade chassis into an existing rack because of
- power availability of less than 10kVA per rack.
- localised heat density – racks full of high-power servers generate more heat than can be dispersed.
This leads to the conclusion that a row of ten racks needs no more than a dozen ports per rack, so the worst-case design threshold is 120 x 10GbE ports for a ten-rack row with a standard power capacity of about 10kVA per rack.
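To make that arithmetic explicit, here's a quick sketch in Python. The figures are the design assumptions above, not vendor limits:

```python
# Worst-case 10GbE port count for a ten-rack row,
# using the design assumptions above.
PORTS_PER_CHASSIS = 4   # a blade chassis needs no more than 4 x 10GbE today
CHASSIS_PER_RACK = 3    # limited by ~10kVA power and heat density
RACKS_PER_ROW = 10

def ports_per_row():
    return PORTS_PER_CHASSIS * CHASSIS_PER_RACK * RACKS_PER_ROW

print(ports_per_row())  # 120 x 10GbE ports for the whole row
```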
- Let's assume you agree that a density of three chassis per rack works (HP C-Class or Cisco UCS B-series, it doesn't much matter at this point)
- each chassis has eight blades (kind of low; up to sixteen is certainly possible).
- that's 24 physical servers per rack, or 240 servers per row.
- Let's assume that you run VMware/Hyper-V on all of those servers.
- let's say you run an average density of five guests per server. That's quite low too; you can easily run ten.
- that’s 1200 virtual servers.
- using just 120 10GbE ports.
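Putting those assumptions together as a sketch (the densities are the deliberately conservative figures above):

```python
# Server and VM counts for the ten-rack row, using the
# conservative densities assumed above.
CHASSIS_PER_RACK = 3
BLADES_PER_CHASSIS = 8
RACKS_PER_ROW = 10
GUESTS_PER_SERVER = 5
TEN_GBE_PORTS = 120  # from the port arithmetic earlier

servers = CHASSIS_PER_RACK * BLADES_PER_CHASSIS * RACKS_PER_ROW  # 240
vms = servers * GUESTS_PER_SERVER                                # 1200

print(servers, vms, vms / TEN_GBE_PORTS)
```

That works out to ten virtual servers riding on every single 10GbE port, and the guest density is the low estimate.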
You can build a pretty good "Business as Usual" style fabric with just four 40-port 10GbE switches (see Ivan's The Data Center Fabric architectures to explain this), using the excess Ethernet ports to provide some LACP bundles between the four units and some standards-based TRILL (yeah, I'm looking at you, Brocade – your non-standard TRILL implementation using FSPF instead of IS-IS is very close to an outright lie).
- This configuration doesn’t cost a whole lot if you don’t fuss with FCoE.
- In fact, given that FCoE still isn't mainstream (i.e. with anyone outside of Cisco's influence), you would be well advised to stick with boring (but relatively cheap) FibreChannel because you've probably paid for it anyway.
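For the switch budget itself, a rough sketch of where the ports go. The 40-port switch size and the 120 server-facing ports are from above; treating everything left over as inter-switch capacity is my assumption:

```python
# Port budget for a four-switch fabric of 40-port 10GbE switches.
SWITCHES = 4
PORTS_PER_SWITCH = 40
SERVER_FACING = 120  # worst-case row requirement from above

total = SWITCHES * PORTS_PER_SWITCH  # ports in the whole fabric
spare = total - SERVER_FACING        # left over for LACP bundles / TRILL links
print(total, spare)
```

Forty spare ports across four switches is plenty for the inter-switch bundles, which is why the fabric stays small and cheap.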
So, how many 10GbE ports does the world need? I'm not so sure. It doesn't seem to be a big number. Shocking, isn't it? Compare this with how many switches you have in a ten-rack row today using 1GbE.
Food for thought.