L3 routing to the host. An idea whose time has come, again.
Facebook has been telling people more about its in-house designed and developed whitebox Ethernet switching, based on Broadcom silicon (not sure whether it’s Trident2 or Arad), its own Linux distribution and OpenCompute standards.
TL;DR A recent project bought a low-cost network for the data centre. It cost less than one-third of the price of the market leader and half the cost of a well-known merchant silicon vendor. As a result, it is planned to last for two, maybe three years before it will be replaced. From this project I learned that “fast & cheap networking” could make a big impact on new data centre designs and business attitudes. Plus it was much more satisfying as a professional project. I’m now wondering – is networking too expensive?
Most people refer to the “Data Centre Network” as though it were a single network. In practice, data centres have a number of individual networks, each designed for a specific purpose and function. A typical data centre design has about five individual networks that connect together to form the “data centre network” – a distinction many people fail to recognise. I’ll define these networks and then look at the future of data centre networks with overlays. What seems clear, today, is that networking will provide different networks for different use cases and the customer will decide.
Overlay networking has been around for a year or so now and the ideas behind it are well established. It was about three or four weeks ago, while researching VTEP functionality in Dell and Arista switches, that I realised I could build manually configured VXLAN tunnels and get the same results as an EoMPLS cross-connect with almost zero effort. More importantly, I don’t have to pay for expensive hardware that has MPLS functions, or pay again for software licences to add MPLS features.
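As a rough illustration, a manually configured VXLAN tunnel of this kind might look like the following sketch, using Arista EOS-style syntax with head-end replication (flood lists) instead of any control plane. The interface names, VNI and IP addresses are invented for illustration only – check your platform’s documentation for the exact commands.

```
! Hypothetical VTEP A – addresses and VNI are examples, not from the post
interface Loopback0
   ip address 10.0.0.1/32
!
interface Vxlan1
   ! VTEP source address for the tunnel endpoint
   vxlan source-interface Loopback0
   ! Map local VLAN 10 into VXLAN segment 10010
   vxlan vlan 10 vni 10010
   ! Statically flood BUM traffic to the remote VTEP (the other end
   ! of the "cross-connect"), which mirrors this config with 10.0.0.1
   vxlan vlan 10 flood vtep 10.0.0.2
```

Two switches configured this way carry VLAN 10 between sites over plain IP, which is functionally the point-to-point Ethernet extension an EoMPLS x-connect provides, without MPLS-capable hardware or licences.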
In this blog post I’ll attempt to summarise Overlay Networking in a couple of paragraphs, as a reference for upcoming blog posts that discuss the nature of Tunnel Fabrics in physical network environments. It also has pictures.
I’ve been reading a presentation from Sharkfest 2012 in which engineers from Microsoft present their Demon – Datacenter Scale Distributed Ethernet Monitoring Appliance. The whole presentation is interesting, but this particular slide caught my attention:
I’ve written about OpenCompute hardware standards a few times. Today has seen a few announcements that make me think networking could be about to change significantly. According to this post on Gigaom, Rackspace is planning to build its own servers based on OpenCompute standards: Rackspace is contracting with Wistron and Quanta, two server manufacturers that also […]
Ivan Pepelnjak posts an outstanding summary of the myriad networking challenges in designing a dual data centre. Complete with cynical commentary and live-action diagrams, he explains the problem and some suggestions for the solution. Recommended for everyone! We have a network with two data centers (connected with a DCI link). How could we […]
I’ve talked a bit about the problems of using Category 6 copper cabling in the data centre. The sheer size and weight of the cable is a serious problem. Here are some photos comparing Category 5 and Category 6 cable bundles.