I recently reviewed a data centre design for a mid-sized company; the whole thing fits into a closet. This prompts the question: do you really need a data centre anymore?
TL;DR – we can put enough CPU, storage and networking into a single rack to meet the infrastructure needs of most companies. Especially if your applications are cloud-architected.
I’ve heard from and spoken to a few companies that are moving services out of the public cloud because of the high recurring cost of cloud services. Because their applications were “cloud ready” or “cloud architected”, they achieved major infrastructure cost reductions by designing a “data centre” at closet scale.
When I first started my career in IT infrastructure in the mid-1990s, I was “installing” servers & networks into spare closets/broom cupboards in offices. Later it was a “spare” office, eventually converted with a raised floor and extra aircon.
While “big” companies with mainframes had already built data centres (at vast expense), there was a time in the late 1990s/early 2000s when building a data centre made sense. It was “accounting fashionable” to own real estate and boost the balance sheet with hard assets instead of “corporate goodwill”, so building a DC on a 20-year depreciation schedule was cost and tax effective. Naturally, once the idea took hold everyone started overdoing it, leading to the costly and vastly over-specified data centres of today.
Today, the money-fashion is to own nothing, rent everything, produce intangibles and focus on short-term returns, and thus the tide has turned against owning and operating a data centre.
For now, at least, it’s practical to consider returning to building closets instead of renting co-location space because of major changes in density, power, cooling and weight.
More Density
The overall trend in the enterprise is increasing utilisation and efficiency, and thus rising “density of utilisation” (I made that up). This is mostly because the existing generation of technology is highly inefficient: manual operations, one server/one app, zero or limited re-use of existing platforms. The last five years have seen some improvement, with a slow but steady migration to hypervisors to improve server utilisation, to flash storage and to overlay networks.
A single rack of equipment has enough compute and storage to drive the applications of a large company when using virtualization, automation and orchestration (a la software defined).
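To make that concrete, here is a back-of-envelope estimate of what one rack can hold. All the figures (rack units per server, cores, RAM, overcommit ratio) are my own illustrative assumptions, not vendor specifications – plug in your own numbers.

```python
# Back-of-envelope single-rack capacity estimate.
# Every figure below is an assumption for illustration only.

RACK_UNITS = 42
SWITCH_RU = 2                 # two 1RU top-of-rack switches
SERVER_RU = 2                 # a typical 2RU virtualization host

servers = (RACK_UNITS - SWITCH_RU) // SERVER_RU   # hosts that fit

CORES_PER_SERVER = 48
RAM_GB_PER_SERVER = 768
VCPU_OVERCOMMIT = 4           # common hypervisor overcommit ratio

vcpus = servers * CORES_PER_SERVER * VCPU_OVERCOMMIT
ram_gb = servers * RAM_GB_PER_SERVER

print(f"{servers} hosts, {vcpus} vCPUs, {ram_gb} GB RAM")
# → 20 hosts, 3840 vCPUs, 15360 GB RAM
```

Even with conservative assumptions, that is thousands of vCPUs and double-digit terabytes of RAM in one rack – more than most companies will ever consume.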
Less Power
If you can keep power consumption at a reasonable level then you don’t need a complex power infrastructure. A key driver for dedicated data centres was supporting the power infrastructure: diesel generators, fuel tanks, battery rooms and so on. But if you can keep the load under 20-40 kW then you can avoid all of that with battery backup. Modern battery systems require far less space and last longer.
Power failure? You can readily automate a power-down of non-critical assets to extend battery life and reach three nines of availability. If you have something that needs better (you probably don’t), then get half a rack in a colo where the power is someone else’s problem.
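The shed-load-on-battery idea can be sketched in a few lines. The battery capacity, tier names and per-tier loads below are all hypothetical numbers I invented for illustration; the point is the logic, not the values.

```python
# Sketch: estimate UPS runtime and shed non-critical load tiers
# until the remaining battery meets a runtime target.
# Capacity and loads are illustrative assumptions only.

BATTERY_WH = 40_000           # usable battery capacity, watt-hours

# load in watts by tier; tier1 is the only critical tier
LOADS_W = {"tier1_critical": 4_000, "tier2_apps": 8_000, "tier3_batch": 10_000}

def runtime_hours(active_tiers):
    """Hours of battery runtime at the combined load of the active tiers."""
    return BATTERY_WH / sum(LOADS_W[t] for t in active_tiers)

def shed_until(target_hours):
    """Drop tiers, least critical first, until runtime meets the target."""
    active = sorted(LOADS_W)          # tier1, tier2, tier3
    while runtime_hours(active) < target_hours and len(active) > 1:
        active.pop()                  # shed the least critical tier
    return active

print(shed_until(4.0))
# → ['tier1_critical']
```

With everything running the battery lasts under two hours; shedding the batch and app tiers stretches it to ten hours on the critical load alone.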
Less Cooling
Modern hardware doesn’t need to be refrigerated to 17°C. Running a closet at 30°C substantially reduces the cooling load, with a concomitant reduction in CapEx AND OpEx. While it’s a bit unpleasant to work in there, that’s also strong encouragement to implement automation.
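A rough sense of the saving: cooling power is roughly the IT load divided by the cooling system’s coefficient of performance (COP), and COP improves at warmer setpoints because economiser/free cooling runs more of the year. The COP values below are assumptions for illustration; real figures depend on your climate and equipment.

```python
# Rough cooling-energy comparison at two setpoints.
# COP values are illustrative assumptions, not measured data.

IT_LOAD_KW = 20                       # rack load, watts of heat to remove

def cooling_kw(cop):
    """Electrical power needed to remove IT_LOAD_KW of heat."""
    return IT_LOAD_KW / cop

chilled = cooling_kw(3.0)             # 17°C supply: assumed COP ~3
warm = cooling_kw(6.0)                # 30°C supply: assumed COP ~6

print(f"cooling: {chilled:.1f} kW at 17°C vs {warm:.1f} kW at 30°C")
```

Under these assumptions the warmer closet halves the cooling energy – the exact ratio will vary, but the direction won’t.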
Less Space & Weight
We reached peak space consumption around 2009, and virtualization has since solved this problem. The increase in CPU performance has reduced the number of servers needed. Networking doesn’t need chassis switches when you only have a handful of servers; a couple of 1RU switches will do (and, again, less power & cooling too).
Weight has been a major problem, especially for storage arrays with hundreds of disk drives. The transition to All Flash Arrays has reduced space and weight while increasing performance compared to disk arrays.
The EtherealMind View
As I said above, companies whose applications were “cloud ready” or “cloud architected” have achieved significant infrastructure cost reductions by moving services out of the public cloud and into a “data centre” at closet scale.
Their “cloud only” processes mean they already think small, minimalist and software-first, so the design process naturally produced a small-scale data centre. They already have dashboards and monitoring that can predict their consumption and resource needs.
Many companies can easily fund a few racks of equipment from cash flow. Importantly, it reduces the total overhead of the business.
NOTE: this assumes that you have the right skills in your organisation. Not technical skills (that’s easy to sort out) but management skills to comprehend how to deliver this. In my experience, competent managers in technology are exceedingly rare.