It’s common practice to place network switches at the top of the rack. At higher Ethernet speeds, where cabling costs are outsized, it can make sense to place the switch in the middle of the rack instead, but there are tradeoffs.
I received the following question from Gavan:
Something I recall from one of your podcasts a long time ago was the option of moving to “Middle of the rack” switches, and using cheaper AUI connectors instead (I think it was one of your earlier podcasts – I started listening after a HighScalability.com blog post).
I wonder if you ended up doing that anywhere?
The middle-of-rack solution is common when building “rack at a time” data centres (or, more likely, ten or more racks at a time). It enables the use of cheaper and more reliable passive copper (twinax DAC) cabling for 25G/50G. Active copper covers runs over 3 metres, but no more than 10 metres. Such customers have their racks assembled and tested off site before shipping to the DC. It’s unlikely that these racks will ever be changed or upgraded before decommissioning, as they operate on a ‘cloud SRE’ model.
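The distance rules above are simple enough to capture in a few lines. A minimal sketch, using the thresholds stated in this post (passive up to 3 metres, active up to 10 metres, fibre beyond that) — the function name and return labels are illustrative, not any vendor’s terminology:

```python
def cable_type(run_metres: float) -> str:
    """Pick a 25G/50G cable type from the run length.

    Thresholds follow the article: passive DAC to 3 m,
    active DAC from 3 m to 10 m, fibre/AOC beyond 10 m.
    """
    if run_metres <= 3:
        return "passive DAC"
    if run_metres <= 10:
        return "active DAC"
    return "fibre/AOC"

# Middle of rack keeps most server runs well under 3 m,
# so everything stays on the cheapest passive option.
print(cable_type(1.5))   # typical middle-of-rack run
print(cable_type(8))     # typical top-of-rack worst case in a tall rack row
```

The point of middle of rack is visible here: halving the worst-case run keeps every server cable on the passive side of the 3 metre boundary.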
A quick diagram shows how much shorter the cables will be.
Another dependency is your uplink cabling. If, for some reason, you are using coax to connect to the core, then top of rack might be the only position from which the switch can reach the spine switches.
Where you are building your data centre a server at a time, you might not know how many servers will end up in the rack.
When you are using chassis-based servers that occupy 4RU or 8RU, you can run into space allocation and weight problems. Putting heavy mass at the top of a rack can be a safety issue and may require floor spreaders.
In reality, very few companies fill their racks before reaching power/cooling limits, so space allocation is not a widespread problem. Yes, people worry about it, but it is an imaginary problem created by incompetent project managers or data centre managers. Another source of problems is “your budget, my budget”, where it’s OK to spend someone else’s budget, just not your own.
Over the last few years, the price of generic cables has dropped substantially:
A third party 1M 25G SFP28 Passive DAC Twinax is ~$40.
A 5M 25G SFP28 Passive DAC Twinax is ~$70.
A lot of people won’t buy third party, but in my experience the roughly 200% markup for branded cables holds true across vendors.
If you choose to buy branded SFP modules, the cost difference becomes quite large: roughly, the Cisco 1M version is $400 and the 5M version is $800. As a rule of thumb, you will need at least two cables per server and 15 servers per rack, so 30 x $400 = $12,000 per rack.
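The per-rack arithmetic can be laid out explicitly. A quick sketch using the rough prices quoted above (the figures are this post’s ballpark numbers, not vendor list prices):

```python
# Rough per-rack cabling cost, using the article's figures.
SERVERS_PER_RACK = 15
CABLES_PER_SERVER = 2
cables = SERVERS_PER_RACK * CABLES_PER_SERVER  # 30 cables per rack

# Approximate per-cable prices (USD) from the article.
prices = {
    "third-party 1M passive DAC": 40,
    "third-party 5M passive DAC": 70,
    "branded 1M passive DAC": 400,
    "branded 5M passive DAC": 800,
}

for name, unit_price in prices.items():
    print(f"{name}: ${cables * unit_price:,} per rack")
```

Run the loop and the gap is stark: a rack of short third-party cables is about $1,200, while the same rack in branded 5M cables is $24,000 — which is why shortening cable runs with a middle-of-rack switch matters most to people paying branded prices.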
Most people don’t do the numbers like this so it’s rarely considered.
The EtherealMind View
Take a walk around any co-lo facility and check how most racks have plenty of space. Middle of rack would save most people money with few downsides.
Of course, it’s ‘best practice’ to put switches at the top of rack, so I’m doubtful that anything will change.