In the last three months, a series of marketing puff pieces has been circulating from Google, Yahoo and Facebook on how “amazing” their infrastructure is. If you haven’t read these articles, here is a sample:
Google boasts of data center meltdown antidote – Google talks about their distributed file system and data replication.
Chillerless Data Centre – Google talks about their data centre that has no cooling: they simply turn servers off when it gets too hot. (Clever, eh?)
High Performance at Massive Scale – webcast from a Facebook manager. (They later went on to criticise server manufacturers for not making the hardware that Facebook wants.)
All problems solved
Firstly, all of these articles talk about the problems that they have solved, not the problems they are experiencing now (except to acknowledge the obvious ones like power, cooling and upgrades). This is intended to make you think how clever and amazing these people are – which is, of course, marketing.
Contributions to Open Source
If it’s not about problems solved, it’s about contributions back to Open Source, such as memcached, the Linux kernel, MySQL and SQLite. Publicising your contributions to open source, after your entire business was founded, built, made possible and made profitable because of Open Source, always smacks of two-faced hypocrisy. Companies like Facebook and Google were only possible because they didn’t pay licensing fees for their software during their startup phases.
So excuse me if it sounds shallow when you tell us how great you are when you give back to open source. Really, you should be doing much, much more.
Stage Managed, Rehearsed … Permitted
What’s happening here is that some of the engineering folk have been let off the leash by the marketing department. Well, kind of. If you read these articles, you can see that they have been carefully vetted so that there is nothing confidential, nothing interesting, nothing that you couldn’t guess or deduce by giving it some thought. In fact, the lack of negative information is what makes me suspicious.
So are they really that amazing? Not really. When you build monofunctional systems it’s easy to get a good result. Facebook only does one thing – host a website. Google has only one software system to host, which they wrote themselves and have some of the smartest people in the world managing and operating. They are their own tech support operation.
They don’t have the same problems that I have. Contacting Cisco or VMware for support? Trying to fit Oracle onto an IBM AIX platform (because that’s what you already have)? Trying to cluster MSSQL because forty different applications that you don’t know anything about, and can’t find any information about, need high availability? It’s not the same problem, is it?
What can you learn?
I was recently in a discussion about Data Centre design, and someone raised the idea of redundant Data Centres, casually pointing out that Google had recently lectured about this and asking why we didn’t implement the same thing.
Let’s leave aside the technical challenges of data portability, replication, in-flight failover, rollback and backups and just focus on the attitude that this brings. I’m sitting across from a manager, who has read a stupid, shallow, marketing puff piece about data centre redundancy and actually thinks that we can do the same thing.
“Of course we can,” I said, “however, we would need to spend quite a bit more to do that.”
“How much more?” sayeth the Manager.
“Well, Google spends about $20 million on hardware alone for their data centre, plus at least $200 million in research and development to make the data centre architecture and the software architecture compatible, and it has taken about ten years of investment to get where they are today,” I said.
“What can we do for $2 million and six months?” sayeth the Manager.
“We can do what we are already doing,” I said. “Nothing that Facebook or Google does is relevant to normal Enterprise computing.”