OpenCompute Servers and the Impact on Networking

I’ve written about OpenCompute hardware standards a few times. Today has seen a few announcements that make me think networking could be about to change significantly.

In this post on GigaOm, Rackspace says it is planning to build its own servers based on the OpenCompute standard:

Rackspace is contracting with Wistron and Quanta, two server manufacturers that also build boxes for Dell and HP. The catch is, the Dell and HP boxes aren’t optimized for Rackspace and because of the groundwork already laid by the Open Compute Project, Rackspace can now afford to tweak the existing designs offered by Open Compute for its own use.

That’s a significant boost for OpenCompute (and OpenStack), as lower-cost servers help build better clouds.

And there is this from GigaOm, which talks about modularised servers: memory modules on one shelf, CPUs on another shelf, and so on. I think of this as an anti-blade-chassis design.

Facebook has contributed a consistent slot design for motherboards that will allow customers to use chips from any vendor. Until this point, if someone wanted to use AMD chips as opposed to Intel chips, they’d have to build a slightly different version of the server. With what Frankovsky called both “the equalizer” and the “group hug” slot, an IT user can pop in boards containing silicon from anyone. So far, companies including AMD, Intel, Calxeda and Applied Micro are committing to building products that will support that design.

The other innovation that’s worth noting on the Open Compute standard is that Intel plans to announce a super-fast networking connection based on fiber optics that will allow data to travel between the chips in a rack. This 100 gigabit Ethernet photonic connector is something Intel plans to announce later this year, and Frankovsky can’t wait to get it into production in Facebook’s data centers.


The EtherealMind View

Here is what I’m imagining. I have an OpenCompute rack with blades of CPUs and memory instead of racks of routers. A typical Intel server can easily handle 10Gbps of routing, and up to 40Gbps. That’s a lot of WAN connections and Internet pipes. What about firewalls running on Intel servers? What about a VPN cluster?
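
To make that concrete, here is a minimal sketch of what software routing on a commodity Linux server might look like, using the pyroute2 library to program the kernel routing table. The NIC name, addresses and prefixes are invented for illustration; it needs root, and it’s a sketch of the idea rather than a production configuration.

```python
# Sketch only: a commodity Linux server acting as a router.
# pyroute2 is a real netlink library; the NIC name and the
# addresses below are made up for illustration.
from pyroute2 import IPRoute

# Enable kernel IP forwarding so the box actually routes packets.
with open("/proc/sys/net/ipv4/ip_forward", "w") as f:
    f.write("1")

ipr = IPRoute()

# Find the (hypothetical) 10GbE NIC, bring it up and address it.
idx = ipr.link_lookup(ifname="eth1")[0]
ipr.link("set", index=idx, state="up")
ipr.addr("add", index=idx, address="192.0.2.1", prefixlen=24)

# Install a static route towards a WAN next-hop.
ipr.route("add", dst="198.51.100.0/24", gateway="192.0.2.254")

ipr.close()
```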

So what if I need a few extra? I’ll have SDN to automate the provisioning, management and operation. That could be a lot more efficient than using big-arse routers with large price tags. Even if I can only replace 50% of my edge functions in the data center with this technology, that’s a big improvement in networking hardware.
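
As a thought experiment, that provisioning could look something like the sketch below. The controller URL and API schema are entirely hypothetical (real SDN controllers each have their own APIs); the point is that adding edge capacity becomes an API call rather than a purchase order.

```python
# Hedged sketch: asking a (hypothetical) SDN controller to spin up
# software edge routers on spare OpenCompute nodes. The endpoint and
# payload fields are invented; only the requests library is real.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"  # hypothetical


def provision_edge_router(name: str, wan_gbps: int) -> str:
    """Request a software edge router instance and return its id."""
    resp = requests.post(
        f"{CONTROLLER}/routers",
        json={"name": name, "bandwidth_gbps": wan_gbps, "role": "edge"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["id"]


# Need a few extra? Loop, instead of racking another chassis.
for n in range(3):
    print(provision_edge_router(f"edge-{n}", wan_gbps=10))
```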

Yes. I’m watching Open Compute standards pretty carefully.

  • Cnacorrea

    And there goes all the babbling and ranting about software-based packet switching not being fast enough =)

  • http://twitter.com/BRCDbreams Brook Reams

    Greg,

    I think this disaggregation of the “server enclosure” will grow. As you know, I work for Brocade, so I was excited by our acquisition of Vyatta, whose software provides routing, VPN and firewalls running in VMs on commodity HW.

    IF (capitals) the “edge” becomes a rack of standard interconnects you can plug boards into, and you can then configure software stacks for compute, network, and storage (see EMC’s direction) to suit your purposes, that’s market disruption “at scale”. As you point out, Intel can create this disruption. They are clear about their intent to enter the networking and storage markets and disrupt them as part of their growth strategy.

    Interesting times.

  • http://twitter.com/dvorkinista mike dvorkin

    At some point, someone will build open software that leverages these wonderful technologies and brings them within reach of the mainstream enterprise. Open interfaces will enable automation, which will lead to real enterprise private clouds.