In this blog post at VMware Communities: Security & Networking: Let’s get logical – the case for network virtualization, Alwyn Sequeira (VP of Networking, I think) talks about how OpenFlow could impact the way that networks are virtualized. When it comes to network virtualization, VMware could play a huge role in changing the shape of the industry. Alwyn says:
In summary, the rigidity and static nature of current network architectures stand in the way of the agility, flexibility and dynamic requirements of modern workloads. Network re-mapping becomes an ongoing, onerous task. A better approach is needed, one which separates the consumption of these network constructs from the underlying physical network. We need to un-tether VMs from the underlying physical network, much as we un-tethered OSes from the server hardware.
VMware has made several strategic moves to implement dynamic networking and overcome the static nature of networking: the vSwitch, the vDS, the Nexus 1000V (in partnership with Cisco) and vCloud External Networks (using MAC-in-MAC of all things), and has basically failed to deliver an overlay technology without either implementing technology in the network itself or going for vendor lock-in. Equally, VMware hasn’t been willing to engage with the networking vendors to develop technologies that would solve this problem – instead letting the IEEE waste time with VNtag / VEPA[1] / VEP combined with TRILL[2] / SPBB[3], and letting them argue amongst themselves while we wait for the moths to fly up when the next announcement comes from the IEEE 802.1 committee. VMware’s attempt at vCloud networking using MAC-in-MAC encapsulation seems to have failed (because of undirected broadcast and security concerns) and stalled, and they are getting set to have another attempt, this time using MAC-in-IP.
But what VMware really wants is a software-controlled network. Just like VAAI (vStorage APIs for Array Integration), they want to be able to reduce the network to a set of APIs.
Whoa. That’s a big thing. Having your network configured by software using APIs is kind of new. Or is it?
Lessons from Storage APIs
The lesson to be learned from the storage folks is that reducing client-facing storage functions to an API didn’t stop EMC or NetApp et al from selling storage. In fact, it grew the marketplace by making storage arrays easier to use and get business value from for the first time in 20 years. This reduced the friction to buying and implementing even more storage than ever before, and you can bet they are rubbing their hands over the increased dedupe and backup licenses on the arrays.
Although the customer may not place as much reliance on the firmware in the array boxen as previously, this doesn’t mean those features and functions aren’t required; only the front end that defines a certain subset of storage functions has been given away.
It will probably be the same for networking. OpenFlow does not mean that you won’t be configuring switches, routers and firewalls in the future – you will still do that, but some subset of functions will be abstracted into an API for remote configuration, management and administration.
What will the Network APIs look like?
So what will these functions be? As I currently see it, the most likely functions are:
- VLAN creation and port membership
- QoS policy
- Port activation and switch provisioning (for very large / cloudy networks)
- ACLs / security features
That’s about it. Most OpenFlow controller implementations are going to struggle to handle even these functions alone, and anything more complex will require multiple generations of software.
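To make the idea concrete, here is a sketch of what reducing those four functions to an API might look like. To be clear, this is entirely hypothetical – the `NetworkAPI` class and its methods are invented for illustration and don’t correspond to any real VMware or OpenFlow controller interface.

```python
# Hypothetical sketch: the four network functions above reduced to API calls.
# None of these classes or method names exist in any real product.

class NetworkAPI:
    """Imaginary controller client exposing a subset of switch functions."""

    def __init__(self):
        self.vlans = {}         # vlan_id -> set of member ports
        self.acls = []          # ordered list of ACL rules
        self.qos_policies = {}  # port -> policy name

    def create_vlan(self, vlan_id, ports):
        """VLAN creation and port membership."""
        self.vlans[vlan_id] = set(ports)

    def set_qos(self, port, policy):
        """Attach a QoS policy to a port."""
        self.qos_policies[port] = policy

    def provision_port(self, switch, port):
        """Port activation, for very large / cloudy networks."""
        print(f"enabling {switch}:{port}")

    def add_acl(self, src, dst, action):
        """ACLs / security features."""
        self.acls.append((src, dst, action))


# A VM provisioning workflow could then drive the network directly:
net = NetworkAPI()
net.create_vlan(100, ["eth1/1", "eth1/2"])
net.set_qos("eth1/1", "gold")
net.add_acl("10.0.0.0/24", "any", "permit")
```

The point is not the specific calls, but that a hypervisor could invoke them at VM deployment time instead of a human re-mapping the network by hand.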
A serious risk to the incumbent vendors (Juniper, Cisco et al) is that VMware implements a viable OpenFlow controller, begins to control the top layer of the network stack, and thus disintermediates them. I’m sure the vendors would rather VMware simply develop an API for VMware / Citrix Xen / Hyper-V to call, while they provide the “value”. I’ll bet the vendors will be fighting this very hard over the next two years or so.
The EtherealMind View
VMware and the other virtualization platforms are demanding virtual networking that is driven by software tools, and they aren’t likely to wait. The virtualization industry has a lot of push (and hot air) and some serious cash to make things happen. This is what I think drives the OpenFlow opportunity. The fact that people like Alwyn Sequeira are making noises tells you that it is serious, and Citrix already has a working implementation, as demonstrated at Interop.
OpenFlow might be all clamour and noise, but there is a logical line I can draw from the virtualization vendors to implementation – people like James Hamilton appear to be planning for it at Amazon. I’ve spoken to Facebook about it; they’ve got something going on. While these networks mean nothing to normal enterprises, they will drive exposure, and sometimes that’s all it takes for a technology to be adopted.
My view is that Software Defined Networking is well overdue. Networking has been constrained by protocols that take the discrete elements of a network and attempt to create a contiguous infrastructure from them. However, those same protocols that attempt to create coherence out of hundreds or thousands of devices are not flexible. Our current network protocols don’t readily adapt to conditions in the network like congestion or failure, dynamically shift loads according to predicted conditions (such as backups, for example), provide automated deployment, or accept dynamic software configuration from an external system.
And why not? A simple rules-based system for network configuration should be possible. A little intelligence around VLAN creation or ACLs, in a system that knows and can parse the configuration of devices, isn’t that hard to do compared to the business opportunity that it creates.
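How simple could such a rules-based system be? Here is a minimal sketch: a list of rules that each inspect an event from the virtualization platform and emit a configuration action. Everything here is assumed for illustration – the event format, the rule functions and the action strings are all invented, not any real product’s interface.

```python
# Minimal sketch of a rules-based network configuration engine.
# The event format and rule functions are hypothetical.

def vlan_rule(event, config):
    # When a VM attaches to a port, ensure its VLAN exists on that switch.
    if event["type"] == "vm_attached":
        config.setdefault(event["switch"], set()).add(event["vlan"])
        return f"create vlan {event['vlan']} on {event['switch']}"

def acl_rule(event, config):
    # When a VM is destroyed, its ACL entries should be cleaned up.
    if event["type"] == "vm_destroyed":
        return f"remove acls for {event['vm']}"

RULES = [vlan_rule, acl_rule]

def process(event, config):
    """Run every rule against the event; collect the config actions."""
    return [action for rule in RULES
            if (action := rule(event, config)) is not None]

config = {}
actions = process({"type": "vm_attached", "switch": "sw1", "vlan": 200}, config)
# actions now holds the provisioning command for sw1
```

A real system would push those actions to devices it can parse and configure, but the core loop – event in, rules evaluated, configuration out – really is this small.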
And that’s what it’s all about.
1. Another standard that the IEEE can’t get its act together on and get out the door. Most likely it is being held up by one of the vendors, but no one is saying anything. ↩
2. TRILL has a number of proprietary variants at this time, including Cisco’s FabricPath and Brocade’s VCS. ↩
3. SPBB is an incomplete IEEE standard that is taking far too long to ratify, but it is probably the long-term winner in the L2MP stakes because it works equally well for campus & MAN networks as it does for Data Centres. And scale wins in terms of producing mass-market acceptance. ↩