In this post on GigaOm, Rackspace says it is planning to build its own servers based on the Open Compute standard:
Rackspace is contracting with Wistron and Quanta, two server manufacturers that also build boxes for Dell and HP. The catch is, the Dell and HP boxes aren’t optimized for Rackspace and because of the groundwork already laid by the Open Compute Project, Rackspace can now afford to tweak the existing designs offered by Open Compute for its own use.
That’s a significant boost for Open Compute (and OpenStack), since lower-cost servers help build better clouds.
And there is this from GigaOm on modularised servers: memory modules on one shelf, CPUs on another shelf, and so on. I think of this as an anti-blade-chassis design.
Facebook has contributed a consistent slot design for motherboards that will allow customers to use chips from any vendor. Until this point, if someone wanted to use AMD chips as opposed to Intel chips, they’d have to build a slightly different version of the server. With what Frankovsky called both “the equalizer” and the “group hug” slot, an IT user can pop in boards containing silicon from anyone. So far, companies including AMD, Intel, Calxeda and Applied Micro are committing to building products that will support that design.
The other innovation that’s worth noting on the Open Compute standard is that Intel plans to announce a super-fast networking connection based on fiber optics that will allow data to travel between the chips in a rack. This 100 gigabit Ethernet photonic connector is something Intel plans to announce later this year, and Frankovsky can’t wait to get it into production in Facebook’s data centers.
The EtherealMind View
Here is what I’m imagining: an Open Compute rack with blades of CPUs and memory instead of racks of routers. A typical Intel server can handle 10Gbps of routing easily, and up to 40Gbps. That’s a lot of WAN connections and Internet pipes. What about firewalls running on Intel servers? What about a VPN cluster?
And if I need a few extra, I’ll have SDN to automate the provisioning, management and operation. That could be a lot more efficient than big-arse routers with large price tags. Even if I can replace only 50% of the edge functions in my data centre with this technology, that’s a big improvement in networking hardware.
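To make the back-of-envelope maths concrete, here is a rough capacity sketch. The per-server throughput figures are my own assumptions based on the 10–40Gbps range above, not anyone's benchmark, and the headroom factor is a hypothetical planning choice:

```python
import math

def servers_needed(edge_gbps, per_server_gbps=10, headroom=0.25):
    """Estimate how many commodity x86 servers replace a given amount
    of edge routing capacity, keeping some spare headroom so no box
    runs flat out. All numbers are illustrative assumptions."""
    usable = per_server_gbps * (1 - headroom)  # usable Gbps per server
    return math.ceil(edge_gbps / usable)

# For 100Gbps of edge capacity:
print(servers_needed(100))                       # conservative, 10Gbps per server
print(servers_needed(100, per_server_gbps=40))   # optimistic, 40Gbps per server
```

Either way, a handful of cheap servers starts to look competitive with a chassis router, before you even count the SDN automation savings.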
Yes. I’m watching Open Compute standards pretty carefully.