L3 routing to the host. An idea whose time has come, again.
Facebook has been telling people more about its in-house designed and developed whitebox Ethernet switch, based on Broadcom silicon (not sure whether it’s Trident2 or Arad), its own Linux distribution and OpenCompute standards.
TL;DR A recent project bought a low-cost network for the data centre. It cost less than one-third of the market leader’s price and half that of a well-known merchant silicon vendor. As a result, it is planned to last for two, maybe three years before being replaced. From this project I learned that “fast & cheap networking” could make a big impact on new data centre designs and business attitudes. Plus, it was a much more satisfying professional project. I’m now wondering – is networking too expensive?
Most people refer to the “Data Centre Network” as though it were a single network. In practice, data centres have a number of individual networks, each designed for a specific purpose and function. A typical data centre design has about five individual networks that connect together to form the “data centre network” – something many people fail to recognise. I’ll define these networks and then look at the future of data centre networks with overlays. What seems clear today is that networking will provide different networks for different use cases, and the customer will decide.
Overlay networking has been around for a year or so now and the ideas behind it are well established. It was about three or four weeks ago, while researching VTEP functionality in Dell and Arista switches, that I realised I could build manually configured VXLAN tunnels and get the same result as an EoMPLS x-connect with almost zero effort. More importantly, I don’t have to pay for expensive hardware that has MPLS functions, or pay again for software licenses to unlock MPLS features.
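To show just how little effort a manually configured VXLAN tunnel takes, here is a sketch using standard Linux `ip` tooling (the sort of tooling a whitebox switch running Linux exposes). The interface names, VNI and VTEP addresses below are illustrative assumptions of mine, not from any vendor configuration:

```shell
# Point-to-point VXLAN tunnel emulating an EoMPLS-style x-connect,
# with no MPLS hardware or licenses involved.
# Assumed: local VTEP 192.0.2.1, remote VTEP 192.0.2.2, uplink eth0,
# customer-facing port eth1, VNI 5000 (all illustrative).

# On host/switch A: create the VXLAN interface, unicast to the remote VTEP
ip link add vxlan5000 type vxlan id 5000 \
    local 192.0.2.1 remote 192.0.2.2 dstport 4789 dev eth0
ip link set vxlan5000 up

# Bridge the tunnel to the port carrying the customer traffic
ip link add br0 type bridge
ip link set br0 up
ip link set vxlan5000 master br0
ip link set eth1 master br0

# On host/switch B, configure the mirror image: swap local/remote addresses.
```

That is the whole “x-connect”: Ethernet frames arriving on eth1 are encapsulated in VXLAN (UDP port 4789) and carried over any plain IP underlay, which is exactly why the expensive MPLS feature set becomes optional.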