In the process of building cloud networks over the last 3 months, it has become clear that a lot of people have problems accepting that Overlay Networking is a viable technology. The current[1] version of the future in Software Defined Networking (SDN) in the Data Centre will build overlay networks using technologies such as VXLAN, NVGRE or NVO3 to create an abstracted network that is somewhat independent of the physical network.
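For the sceptics, the VXLAN encapsulation these overlays rely on is not exotic: it is an 8-byte header placed in front of the original Ethernet frame, carried inside an outer UDP/IP packet. Here is a minimal sketch of that header layout per RFC 7348; the function names are mine, and the outer IP/UDP headers that a real VTEP adds are deliberately omitted:

```python
import struct

VXLAN_PORT = 4789            # IANA-assigned UDP destination port for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Layout: 1 byte of flags (I bit set), 3 reserved bytes,
    a 24-bit VXLAN Network Identifier (VNI), 1 reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # "!B3xI" = flags byte, 3 zero pad bytes, then a 32-bit word
    # whose top 24 bits are the VNI and whose low byte is reserved.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)

def encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to a tenant's Ethernet frame.

    The sending VTEP would then wrap this in outer UDP/IP/Ethernet
    headers addressed to the remote VTEP (omitted for brevity).
    """
    return vxlan_header(vni) + inner_ethernet_frame
```

The receiving VTEP strips the outer headers, reads the 24-bit VNI to select the tenant network, and delivers the inner frame untouched; the 24-bit field allows over 16 million virtual networks on a single physical fabric, compared with 4096 VLANs.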
Toxic Technical Sludge
The current virtualized data centre model that passes for “best practice” uses what I consider to be a toxic sludge of technologies: firewalls with secure hypervisors for “virtual firewalls”, load balancers that run shared services, Ethernet switches with hypervisors that separate control planes into “virtual device contexts”, and MPLS in some areas to provide “virtual routing”. The DMZ might run Private VLANs to override the default Ethernet behaviour with non-standard modifications to the forwarding plane. It’s a never-ending mish-mash of hacks, workarounds and kludges.
Traffic paths in a virtual multi-tenant data centre can look like snail trails on a hot day. Paths cross and overlap with no sense or reason. The only controls are firewalls in key locations that become critical points of failure. Thousands or tens of thousands of firewall rules in a single location at the very heart of the data centre might be the only choice today, but it’s still a dumb choice.
And all of this is managed by a handful of people using a CLI complete with arcane syntax and very limited troubleshooting tools. In 2013, the best troubleshooting tool is still “ping” – I could weep at the lack of progress in the last twenty years.
Are You Sure You Haven’t Heard This Before ?
When people tell me that overlay networking is unproven, I’m always surprised. I can recall Data Link Switching (DLSW) in the late 1990s as an overlay network where SNA traffic was carried using an SNA-over-IP protocol. Aside from the complexity of SNA and IBM mainframes, it was a hugely successful technology.
And what about MPLS ? MPLS abstracts the network path from the physical network by inserting a tag between the Ethernet and IP headers[2], and the user doesn’t know, or care, that the IP data is traversing an ATM or E3 circuit, an SDH ring over a DWDM backbone, or even a Metro Ethernet using QinQ encapsulation. Neither MPLS nor QinQ is called “risky”, “faulty” or “unproven” any more.
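That “tag between the Ethernet and IP headers” is a 4-byte label stack entry, and stacking several of them is what lets MPLS nest one abstraction inside another. A minimal sketch of the RFC 3032 layout, with a function name and example label values of my own choosing:

```python
import struct

def mpls_label_entry(label: int, tc: int = 0,
                     bottom: bool = True, ttl: int = 64) -> bytes:
    """Pack one 4-byte MPLS label stack entry (RFC 3032).

    Bit layout of the 32-bit word:
    20-bit label | 3-bit traffic class | 1-bit bottom-of-stack | 8-bit TTL
    """
    if not 0 <= label < 2**20:
        raise ValueError("label must fit in 20 bits")
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

# A two-entry stack, e.g. a transport label carried above a VPN label;
# only the innermost entry sets the bottom-of-stack bit.
stack = (mpls_label_entry(1000, bottom=False) +
         mpls_label_entry(2000, bottom=True))
```

The bottom-of-stack bit is the whole trick: a provider can push its own transport label on top of a customer’s label without either party needing to know what the other’s network looks like.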
Are you sure you haven’t seen an overlay network before ?
Somewhat Independent
When I say “somewhat independent of the physical network”, I mean it. Abstraction does not mean that the underlying network becomes a commodity. The physical network is likely to become simpler to consume because the perception of value moves into the overlay, but this does not mean that the network is a “commodity”. Look at the telcos, where the use of MPLS has made massive growth possible because the physical network is decoupled from the forwarding plane.
This is why accusations of “commoditisation” of network hardware don’t make sense to me. It’s certainly true that I will no longer implement all of the fancy software features on my physical network devices, but that’s because those fancy software features are universally buggy, hard to use and have expensive licensing.
I still want a high quality, low latency, high bandwidth and reliable physical network that I can trust. In fact, I will probably use the network that I have today[3].
The Value of Overlays is Change
The value of an overlay network, as a technology, is the ability to implement change. Today, a data centre carries toxic technical debt rooted in the eventual-consistency algorithms of the Spanning Tree, OSPF and BGP protocols. Although MLAG provides some workarounds, it comes at the price of operational complexity for configuration and expensive upgrades.
An overlay network can be changed without consideration of the physical network.
The EtherealMind View
Back in the 1990s, DLSW provided an overlay network that transported SNA traffic over an IP network. In the early 2000s, MPLS was used to create an abstraction layer so that customer data was transported in an overlay network that was independent of, and abstracted from, the physical network. This allowed complex technologies like Frame Relay and ATM to be replaced with simpler and cheaper technologies like DWDM and Ethernet. Of course, the market responded by buying even more of everything.
Overlay Networking does not mean LESS networking, it means MORE networking. Instead of just one network in the data centre, there are now two entire and complete networks to work on. That’s double your money.
Even better, the overlay network can be configured by software with limited change risk, so that our customers, the business services, can be provisioned in less time and with less risk. And the physical network ? It gets a bit simpler, and with less toxic sludge it will be more stable. I expect to spend a lot less time in the data centre performing pointless code upgrades to core switches after spending 12 months fighting to get a change approval.
Yeah, bring on the overlay networks. I want me some of that.
[1] Subject to change at any time; objects in the mirror are larger than they appear with 20/20 hindsight. Sound advice given here: 95% sound, 5% advice.
[2] Penalty! Gross oversimplification.
[3] Because my existing network is massively overpriced and on a 5-year depreciation cycle. But mostly because who can be bothered to change what’s working anyway.
Other Posts in This Series
- Blessay: Overlay Networking, BFD And Integration with Physical Network (25th April 2014)
- Blessay: Overlay Networking Simplicity is Abstraction, Coupling and Integration (10th December 2013)
- Integrating Overlay Networking and the Physical Network (21st June 2013)
- Introduction to How Overlay Networking and Tunnel Fabrics Work (10th June 2013)
- Overlay Networking is More and Better while Ditching the Toxic Sludge. (7th June 2013)