Software Defined Networking & OpenFlow – So Far and So Future

Over the weekend I published the latest Packet Pushers show about Software Defined Networking in the Priority Queue feed, which focussed on Cisco and how you can implement SDN in an EXISTING network with some of the key people from Cisco – PQ Show 015 – Cisco Hybrid Switching and OnePK – Sponsored.

As I was editing the show I realised that the first time we discussed OpenFlow was in May 2011 on Show 40 – Openflow – Upending the Network Industry – a show where we identified that the future of networking was going to change. I can remember clearly, during the discussion with Matt Davey from Indiana University, being struck by how obvious the idea was. It was obvious to me that once you grasped the technical concepts behind OpenFlow, any decent engineer could perceive the impact on networking. While OpenFlow made sense, it took another six months before Software Defined Networking became a thing.


The EtherealMind View

In the months ahead, you will be hearing a lot about OpenFlow and Software Defined Networking. That’s because the technology looks most likely to impact networking. I’m coming to the view that MPLS doesn’t work for solving virtual networking in the data centre. I’ve been researching VXLAN, and the operational problems of maintaining and securing bidirectional multicast trees, or even Any Source Multicast trees, are a serious concern. Overlay networks are not free.

And even if OpenFlow doesn’t work out, the Software Defined Networking part is what really excites me. If OpenFlow does nothing more than encourage software tools that replace the CLI for network configuration, then we will have really moved the industry forward. The idea of a software program that can reliably configure my network is just awesome.

I imagine a software controller that checks the input against a rule base and rejects incompatible configuration. I want to see a log of all changes made to the network, and who did them. And when. And why!
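To make that concrete, here is a minimal sketch of the idea in Python. Everything in it is hypothetical: the rule base, the field names and the operator are invented for illustration, not taken from any real controller.

```python
import datetime

# Sketch: every proposed change is checked against a rule base before it
# touches the network, and every accepted change is recorded with who,
# when, and why.

RULES = [
    # Each rule is a (description, check) pair; check returns True when
    # the proposed change is acceptable.
    ("MTU must be between 1500 and 9216",
     lambda change: 1500 <= change.get("mtu", 1500) <= 9216),
    ("Access VLANs must be in the approved range",
     lambda change: change.get("vlan", 100) in range(100, 200)),
]

audit_log = []

def apply_change(change, operator, reason):
    """Validate a proposed change; reject it if any rule fails."""
    for description, check in RULES:
        if not check(change):
            return f"REJECTED: {description}"
    audit_log.append({
        "change": change,
        "who": operator,
        "when": datetime.datetime.now().isoformat(),
        "why": reason,
    })
    return "ACCEPTED"

print(apply_change({"vlan": 150, "mtu": 9000}, "greg", "new storage VLAN"))
print(apply_change({"vlan": 999}, "greg", "oops"))
```

The point isn’t the ten lines of Python; it’s that the rejection and the audit trail happen in software, before a human ever touches a device.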

That’s what we need to move networking forward. We MUST reduce the cost of network operations by using software tools to improve reliability and speed. Configuring networks at the CLI was fun, and even practical for a few hundred devices. But the time for the CLI is over.

We must be able to reliably configure tens of thousands of configuration points across thousands of devices.

We need monitoring APIs to feedback quality data about performance and status.
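As a sketch of what that feedback loop might look like, assume a hypothetical monitoring API that returns structured counters instead of screen-scraped CLI output. The device names, field names and threshold below are all invented for illustration:

```python
# Canned poll results stand in for a real monitoring API call.

def poll_device(device):
    """Stand-in for a monitoring API returning structured interface stats."""
    canned = {
        "core1": {"eth0": {"in_errors": 0, "in_packets": 10_000}},
        "edge7": {"eth3": {"in_errors": 420, "in_packets": 10_000}},
    }
    return canned[device]

def error_rate(stats):
    """Fraction of received packets that were errored."""
    return stats["in_errors"] / max(stats["in_packets"], 1)

# Feed the data back into an operational decision: flag noisy interfaces.
alerts = []
for device in ("core1", "edge7"):
    for ifname, stats in poll_device(device).items():
        if error_rate(stats) > 0.01:  # more than 1% errors
            alerts.append((device, ifname))

print(alerts)
```

Once the data comes back as structured fields rather than text, this kind of automated triage is a loop and a threshold, not a parsing project.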

We must have a future that isn’t limited by the CLI. The CLI should enhance the software functionality, not constrain it.

But enough rambling. Let’s look back at the history of OpenFlow at EtherealMind and Packet Pushers.

Practical Introduction to OpenFlow

In October 2011, in Show 68 – Practical Introduction and Application of OpenFlow Networking, I delivered a one-hour video presentation, with Martin Casado from Nicira riding shotgun, on the fundamentals of OpenFlow. I wrote the presentation so that an engineer could get an introduction to the concepts and technologies.

This was written as preparation for the OpenFlow Symposium that we ran in San Jose in November 2011. If you are looking for an introduction to OpenFlow, this is a good place to start.

OpenFlow Symposium

The Tech Field Day OpenFlow Symposium was a big event. Before most people had realised what OpenFlow and SDN were about, we had people from Juniper, Cisco, Brocade, BigSwitch and NEC present their visions on OpenFlow. I recently re-watched some of these videos, and was surprised that very little had changed in the last year. The visions and challenges haven’t changed much.

You can find links and video recordings on the main page.

Understanding Change in an SDN World

In subsequent podcasts, we have talked a lot about how SDN will impact the industry.

October 2011 – Show 71 – OpenFlow, SDN, Controllers, VXLAN & Wishing for Fishes where we got a panel of people together after the OpenFlow Symposium to discuss the vendor positioning, messages and what the future might look like.

I made a conscious effort to stop discussing SDN/OpenFlow around this time. But the topic kept being raised by listeners and the audience. Ethan and I responded to listeners’ questions in May 2012 – Is SDN A TRILL Killer?

More recently, we have seen Cisco announce their OnePK programmable networking strategy, and Cisco has been sponsoring podcasts to share their message and intentions. In my view, Cisco’s strategy is about more than supporting OpenFlow; they are attempting to extend the available options to include every aspect of IOS. You can regard this as a good thing, or, more cynically, see them as protecting a legacy platform. The truth is probably somewhere in the middle… but we talked extensively about it with Omar Sultan from Cisco here:

Show 107 – Cisco Software Defined Networking Strategy With Omar Sultan – Sponsored

Richard Pruss is one of the key developers behind Cisco’s OnePK technology, and here he talks about the implementation and usage of their APIs – PQ Show 003 – Cisco onePK With Richard Pruss – Sponsored

This show was recorded directly after the announcement at Cisco Live in San Diego. Priority Queue Show 008 – Cisco and Network Programmability – Virtual Symposium

More recently, we recorded this show with David Ward, who is leading Cisco’s push into the Service Provider market based on programmable networking (or SDN/OpenFlow as we know it). This is a very high-level discussion and possibly the best explanation of how Cisco envisions the future of networking that I’ve found so far: Show 120 – The API Layer Cake With Dave Ward and Lauren Cooney of Cisco – Sponsored

Blog Posts at Packet Pushers

There are some blog posts over at Packet Pushers that are worth highlighting:

And I’ve written quite a bit on the topic of OpenFlow and SDN. You can find all the posts by tracking the tag OpenFlow, but these posts have been the most popular:



I have nothing to disclose in this article. My full disclosure statement is here

  • Sam Stickland

    Greg, Dino’s points regarding LISP on your recent show opened my eyes. In a LISP-enabled network that supported VPNs and service chaining (with LISP-enabled firewalls and load balancers), the underlying network could just be a simple L3 core, possibly even with DHCP for all NIC addresses.

    All the cleverness now happens by manipulating the mappings in the LISP database, rather than directly reprogramming or reconfiguring the network hardware. As Dino points out, reprogramming the network using OpenFlow means that the state needs to be reprogrammed in many, many places, but reprogramming the mapping database is a far simpler operation, and, as it’s at a higher level of abstraction, a potentially less risky operation.

    Personally, I think it could be quite a win to use LISP to implement IPv6. IPv6 by itself doesn’t add huge amounts of value, but IPv6 with such a LISP implementation would bring many, many benefits. And we could start implementing it now, before it needs to be production grade, without touching the existing IPv4 network infrastructure.

    (Thought experiment: Would it be too wasteful to assign an IPv6 EID not just to a server instance, but permanently to that server, so that after it’s decommissioned we never reuse the address. Even a /64 of EIDs is 2^64!)

  • Francesco Bonanno

    I think the most important advantage of SDN is not the ease of configuring lots of devices at the same time; for that I can use one of the myriad of software tools that can do that (using the CLI through SSH or Telnet, and scheduling all of the possible operations).

    I think SDN is great for the awareness of the entire network from the controller perspective, or the fact that I can use any x86 machine, automatic and implicit network mapping, etc… so I don’t think that someone who manages a lot of devices changes the configuration on single devices… the question is: are the benefits of SDN an actual need? I don’t speak about new companies, but existing companies that could migrate their existing, well-known and well-engineered infrastructure to SDN…

  • Matt Stenberg

    Greg, I definitely agree with your points about simplifying how we manage our networks. The days of knowing all of the tricks of the CLI seem to me to be numbered. If I could reliably devise a topology in a software system and then accurately deploy that topology to the active network – why isn’t that something I’d embrace?!

    Out of curiosity, are you developing any software development skills in preparation for the OpenFlow/SDN/who-knows-what future? Software development skills are something I’ve been thinking about improving lately, and it seems like the future could be bright for a networker-turned-developer.

    Any thoughts on this?

  • Tim Rider

    Agree with Sam’s perspective. OpenFlow in a large-scale production network is a pipe dream. Having a centralized controller remotely manipulate thousands of flows in a large complex network? This has been tried before. There are fundamental reasons, driven by the laws of physics, why distributed intelligence is the only way to reliably run scalable networks.

    And there are ways to abstract CLIs and automate provisioning of networks – XML/NETCONF among others. We don’t need centralized controllers for that.

    Let’s talk in 5 years and see how widely used OpenFlow will be in production grade physical networks. My bet is on “not very”.

    • Etherealmind

      I can understand your doubt; I was also concerned. But over the last 18 months I have become convinced that controller-based networking is the next generation of networking. Most SPs are already using controller-based networks to configure their MPLS overlays.

      In short, the ability of programmers to reliably develop distributed programs is now proven. Google and Facebook are examples of this. And we now have software like Hadoop & MongoDB to do the data processing. The last decade has seen dozens of successful distributed projects, both commercial and open source.

      While there are ways to abstract CLIs to perform automation, none of them are working. In the last decade, we have had zero adoption of those technologies. That’s because each vendor used a proprietary approach and refused to implement standards in this area.

      That’s part of the reason that OpenFlow & SDN will be successful in the Data Centre within two years, and in the SP Edge within, I think, about 4 years.

      • Tim Rider

        “Most SPs are already using controller-based networks to configure their MPLS overlays.” Can you link to some examples? By definition, MPLS networks require IGPs (OSPF, IS-IS) to build LSPs. I suspect the controllers in these environments don’t completely take over the control plane.

        “The ability of programmers to reliably develop distributed programs is now proven. Google, Facebook are examples of this.” I am not sure how that is supposed to alleviate my doubts? _Distributed_ intelligence is how traditional networking is done. OF/controller-based networking is not proven, and has never been done at any sort of scale. What Google has done with OF so far is a very narrow use case, and even their implementation supplements (rather than replaces) traditional routing protocols.

        “While there are ways to abstract CLIs to perform automation, none of them are working.” I wouldn’t say none are working; I would say none has become a broad multi-vendor standard. While I generally agree we need a centralized management plane framework with cross-vendor support, my point is that centralization of the control plane is not necessary to achieve automation. Leave the control plane distributed and focus on management. I don’t see how OpenFlow controllers are the answer to your management/automation pains.

        • Etherealmind

          I made that mistake for a while. The “virtual networking doesn’t care about physical networking” idea is a stage you go through before you realise that isn’t possible either.

          You can abstract or automate parts of the virtual networking, but ultimately the physical network needs programming to support the virtual layer. The initial requirement is configuration, but later it’s performance monitoring.

          You need better physical control than we have today. SNMP, closed XML APIs and variable CLIs don’t cut it.

          • Tim Rider

            Sure it’s possible. The Internet is a giant working proof of the “intelligent edge / dumb core” approach. Does the Core Internet infrastructure care about all the intricacies and complexities of edge applications? Of course it doesn’t, else the Core Internet could never scale to the degree that it has. You need to maintain a degree of de-coupling between the Core and the Edge, else you build a giant mesh of inter-dependencies that eventually gets crushed by its own weight.

            I do agree we need better management plane automation, but de-coupling the control plane via OF/centralized controllers is the wrong way to go about solving this. We will see in a few years :)

    • Sam Stickland

      It’s not that I don’t think OpenFlow can be implemented in a large scale production network. It’s just that I think building service chains and instances in LISP will cover off 90% of the use cases we seem to be considering OpenFlow for, and with considerably less state to push about too.

      With regards to configuration management, with a proper overlay network the underlay network configuration is static. Job done.