Is OpenFlow Open? I Ask – Compared to What?

I’ve always wondered whether the Open Networking Foundation is the correct caretaker of the standards process for Software Defined Networking. Recently, we’ve seen some questioning of the direction of the OpenFlow standards, and now that we are beginning to understand the concepts of controllers and Software Defined Networking, these are good questions to ask. It shows that we are starting to take the technology seriously.

Michael Schipp asks on the Packet Pushers blog – Is OpenFlow Losing Its Openness?:

…a comment from Nicira’s Casado that was written last spring, “you most likely will not have interoperability at the controller level (unless a standardized software platform was introduced).”

And (following a vibrant exchange on Twitter), Brad Hedlund asks whether Nicira’s Open vSwitch is open – Dodging open protocols with open software?

The story being that of open source networking software minimizing the role of network “protocols” and the diminishing role of standards bodies in building next generation networks.

He makes a good point – are companies using what look like open standards to produce closed or proprietary systems?

There are several lines of thinking that lead away from these points, so this post kind of wanders over several areas. Permit me to make a few points.

OpenFlow, Controllers & SDN

Let’s recap some basics, since OpenFlow/SDN is still new to many.

The purpose of a controller is to provide the following:

  1. A user interface for configuration of the network.
  2. A platform for applications that define the configuration of the network.
  3. Delivery of the configuration to the network – most likely using OpenFlow, but possibly SNMP, NETCONF or another protocol.
  4. A feedback loop and administration of the operational network.
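The division of labour in that list can be sketched in a few lines of entirely hypothetical Python. None of these class or method names correspond to any real controller API – it’s just a toy to show how the four roles fit together:

```python
# Toy model of the four controller roles above. All names are
# hypothetical -- no real controller exposes this API.

class ToyController:
    def __init__(self):
        self.apps = []          # role 2: platform for applications
        self.pushed = {}        # role 3: config delivered southbound
        self.stats = {}         # role 4: operational feedback

    def set_policy(self, switch, rule):
        # Role 1: the operator's configuration interface.
        self.push(switch, rule)

    def register_app(self, app):
        # Role 2: apps compute configuration; the controller hosts them.
        self.apps.append(app)

    def push(self, switch, rule):
        # Role 3: deliver config to the network. In a real system this
        # would be an OpenFlow flow-mod, a NETCONF edit-config, etc.
        self.pushed.setdefault(switch, []).append(rule)

    def report(self, switch, counters):
        # Role 4: the feedback loop from the running network.
        self.stats[switch] = counters
        for app in self.apps:
            app(self, switch, counters)


# A trivial "app" that reacts to feedback by installing a drop rule.
def drop_on_overload(ctl, switch, counters):
    if counters.get("pps", 0) > 1000:
        ctl.push(switch, "drop heavy-hitter")

ctl = ToyController()
ctl.register_app(drop_on_overload)
ctl.set_policy("sw1", "permit web traffic")   # operator config (role 1)
ctl.report("sw1", {"pps": 5000})              # feedback triggers the app
print(ctl.pushed["sw1"])   # operator rule plus the app's reaction
```

The point of the sketch is that configuration can arrive from a human (role 1) or from software (role 2), but both flow through the same southbound delivery path (role 3), steered by feedback (role 4).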


Anyone who has designed and managed a wireless LAN will be very familiar with this process.

Nicira and their “Open” Open vSwitch – Controllers aren’t new

From what I can read and understand, Nicira is using a controller-based network management system to manage a software switch that uses flow forwarding. This principle is similar to an SDN/OpenFlow design. It’s also conceptually similar to Embrane, and to most wireless LAN networks – and to lots of other network technology in history (such as the IBM 3745 FEP network). But that’s where the similarity ends (more on that here).

Nicira isn’t delivering a revolution here; they are filling a rather obvious niche in software networking. VMware & Microsoft haven’t delivered anything useful in their products, so there is a big space for an early mover.

Another point is that I don’t think the OpenFlow 1.x standards provide enough functionality or features to be used for software switching. I could be wrong here, but it’s likely that the ONF is moving more slowly than Nicira would like, so they are moving ahead of the ‘standard’.2

Open Networking Foundation

Personally, I think that the OpenFlow protocol and Software Defined Networking are a great idea. But I have strong concerns about the leadership of the Open Networking Foundation, since it’s effectively managed and owned by large, mostly American corporations – Google, Yahoo, Verizon et al. These companies have very narrow interests in the use of OpenFlow for their hyper-scale data centres, and it’s easy to question their motivations and whether their “leadership” will benefit the network community as a whole. The word “cabal” comes to mind, with all the negative connotations therein.

So far, my opinion is that there are few “signs of evil”, but only time will tell. OpenFlow is not an open protocol in the way that IETF, or even ANSI, standards are.1

State of OpenFlow

At this point, the standards process for defining OpenFlow is all hung up. Big companies want the things that they like, the university people are all sticking their noses in, and the incumbent vendors are struggling to either embrace or resist yet another network transition.

This means that the standards process will be … hard. There are no procedures for dispute resolution, open debate and so on, because the ONF is a new organisation with no established bureaucracy or structure.

The state of OpenFlow is not good.

Comparing OpenFlow to other Protocols

Finally, I want to compare OpenFlow to another industry-standard protocol – SNMP. It’s probably worth remembering that SNMP has some functional similarities to OpenFlow. SNMP defines a protocol for fetching data contained in the Management Information Base (MIB), whose structure is described using the Structure of Management Information (SMI).

The SNMP protocol is open, as are the MIB and the SMI data structures that define it, written in ASN.1 notation (hope I got that right).
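To make that separation concrete: the protocol operations (GET, GET-NEXT) are one thing, and the MIB data they walk is another. Here is a toy sketch in Python – the three OIDs are the real standard MIB-2 system-group identifiers, but everything else is purely illustrative and not a real SNMP implementation:

```python
# Toy illustration of the SNMP split: protocol operations on one side,
# MIB data on the other. Not a real SNMP implementation.

# A fragment of the standard MIB-2 system group (real OIDs, made-up values).
mib = {
    "1.3.6.1.2.1.1.1.0": "Linux router 3.2.0",   # sysDescr
    "1.3.6.1.2.1.1.3.0": 123456,                 # sysUpTime (timeticks)
    "1.3.6.1.2.1.1.5.0": "core-rtr-01",          # sysName
}

def snmp_get(oid):
    """GET: fetch exactly one object instance."""
    return mib.get(oid)

def snmp_getnext(oid):
    """GET-NEXT: the walk primitive -- return the next OID after the
    given one, which is how whole subtrees and tables are retrieved.
    (Real SNMP compares OID components numerically; simple string
    ordering happens to work for this small fixed dataset.)"""
    later = sorted(o for o in mib if o > oid)
    return (later[0], mib[later[0]]) if later else None

print(snmp_get("1.3.6.1.2.1.1.5.0"))        # -> core-rtr-01
print(snmp_getnext("1.3.6.1.2.1.1.1.0"))    # the object after sysDescr
```

Any vendor can implement the protocol side and publish MIBs – that part really is open. What you then *do* with the retrieved data is where the NMS vendors close the doors.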

However, network management software that uses SNMP is not open. Network Management Systems (NMS) are all proprietary and rarely, if ever, exchange data with other systems in a meaningful and open way. Have you ever attempted to integrate BMC Patrol with anything?

For all practical purposes, an NMS server is a network controller, but not for configuration data. An NMS is used as a controller for performance and fault data only. That’s a fine distinction, but an NMS is almost never used to configure the network, because SNMP isn’t able to handle that.
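The core of the problem (spelled out in RFC 3535) is that each SNMP SET stands alone: there is no way to group changes into a transaction, so a failure partway through a change leaves the device half-configured. A toy sketch of that failure mode, with made-up object names:

```python
# Why SNMP-style configuration is fragile: each SET is independent, so a
# failure mid-change leaves the device half-configured. RFC 3535 covers
# this in detail. Toy illustration only, not real SNMP.

device = {"mtu": 1500, "vlan": 10, "acl": "permit any"}

def snmp_set(oid, value):
    # Each SET stands alone -- no grouping, no commit, no rollback.
    if value == "bad":
        raise ValueError("SET failed")
    device[oid] = value

# A three-part configuration change that fails on the second step.
changes = [("mtu", 9000), ("vlan", "bad"), ("acl", "deny any")]
try:
    for oid, value in changes:
        snmp_set(oid, value)
except ValueError:
    pass  # no rollback available -- the first SET sticks

# The device is now inconsistent: mtu changed, the rest did not.
print(device)   # {'mtu': 9000, 'vlan': 10, 'acl': 'permit any'}
```

NETCONF was designed around exactly this gap – candidate configurations and commit/rollback – which is why the IETF walked away from SNMP for configuration.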

Define ‘Open’?

So when Michael Schipp asks Is OpenFlow Losing Its Openness?, I would have to question the concept of “open”. SNMP is open, but NMS are closed and their data is sequestered into closed silos.

Do we want controller-to-controller interoperability? Absolutely. But the organisation that could organise that does not appear to be ready for, or perhaps capable of, doing it.

The bigger, and much more challenging, topic is what sort of controller interoperability you want. At this stage of development, there are fewer than five viable controllers, of which only one is commercially available3. And the technical discussion continues about whether Active/Standby, Clustered or Hierarchical controller designs will be most suitable. It’s difficult to think about interoperability when you still haven’t settled on the best software architecture.

And with OpenFlow led by companies like Google/Yahoo for data centres, and Verizon/AT&T for service providers, will they even consider what enterprise networks want? At this stage, there are campus OpenFlow implementations at universities (such as Indiana University), but whether it’s suitable for corporate/enterprise use I can’t say.

The EtherealMind View

Let’s say that it’s too early to expect much from Software Defined Networking. A year ago, it was a good idea that showed some promise. The people who led its development have chosen to create a new body to define standards and guide development – fair enough, it’s their idea for now. It remains to be seen how that will work out.

My view on interoperability is that we will need usable, working systems before we can consider interoperation between controllers. It’s too early to cry “proprietary” when there are just four or five first-generation controllers available today.

It’s up to customers to ask the vendors (current or future), demand compatibility and drive interoperability. Don’t expect anyone else to do it. Keep writing, blogging and tweeting about it so that vendors know it matters. Standards are driven by what we ask for; otherwise vendors will take the easy path and deliver what’s easiest for them. And that’s proprietary, closed systems.

Update - 20130902

As a result of questions: the current state of SDN standards is now largely driven by the OpenDaylight project, because OpenDaylight has participation from nearly all of the vendors, who are contributing code to build a functional controller. Cisco and IBM have been contributing heavily, which has seen a number of odd standards added to SDN, like LISP and PCEP. Still, a working standard creates its own mass and attracts others to it. I’ll take standards from ODP, because I can use them and don’t have to put up with the foolishness of organisations like the IEEE.

The ONF will continue to define OpenFlow and the hardware standards that vendors need – at least, that’s how I see it.


  1. Although that’s a stretch, because ANSI exists for American standards, not everyone’s, and is well known for its “convenience”.
  2. The same argument comes up regularly with critics of Cisco’s proprietary-first, standards-later approach. Cisco tends to say it precedes standards because standards take time to complete. Sometimes you have to create the need for the standard to happen. I will accept the reverse arguments as also valid; I’m just making a point.
  3. NEC and ProgrammableFlow. It has hardware too.
About Greg Ferro

Greg Ferro is a Network Engineer/Architect, mostly focussed on Data Centre, Security Infrastructure and, recently, Virtualization. He has over 20 years in IT, working as a freelance consultant for a wide range of employers, including finance, service providers and online companies. He is CCIE #6920 and has a few ideas about the world, but not enough to really count.

He is a host on the Packet Pushers Podcast, blogs at EtherealMind.com, and is on Twitter as @etherealmind and on Google Plus.

You can contact Greg via the site contact page.

  • http://twitter.com/dlenrow David Lenrow

    Greg,
    I think that your analysis doesn’t consider the fact that the IETF is viewed by many as a hijacked institution used by the tier-one equipment manufacturers to filibuster, steal or subvert any innovation that threatens their margins and vendor lock (not necessarily my opinion or that of my employer, but something I hear frequently).

    Many see SDN and openflow as a force that will break down the “mainframe” business model that persists in the vertically integrated network industry and lead to a horizontal model with commodity hardware and value creation in software. This is a huge threat to the biggest incumbent box vendors who chair the working groups in IETF. The public markets won’t let them cannibalize their own margins, and a horizontal ecosystem will put their lives at risk.

    From my discussions with members of the ONF founding/Board-member organizations I understand that their original intent was to have a user-centric group retaining the user benefits of the SDN revolution, even though it will not be welcomed by the dons of the equipment industry mafia. One can argue that big DC operators are a cabal serving their own interests, but they and their customers certainly view that as preferable to serving the interests of their vendors. ONF was well intended, but there is certainly a chance that it will not achieve all of the goals of its founders.

    • http://etherealmind.com Etherealmind

      You make a good point. The IETF has its problems, as do the IEEE, ITU, ANSI and other standards bodies. And I understand the intention behind the ONF is to focus on the users.

      Still, I’m concerned that smaller networks may not get the focus they deserve. Once the fashion for cloud technology is over, and the pendulum swings back to local resources, will OF be suitable?

  • http://twitter.com/Vegaskid1973 Matt Thompson

    As somebody who has moved from 10 years in the Microsoft sysadmin world to (predominantly Cisco) networking over the last couple of years, I find this potential metamorphosis fascinating as the story unfolds.

    The ‘wouldn’t it be great’ scenario that we had hoped for a year ago seems further away now than it was then, but everybody who actually works on the kit knows that this is exactly what we need, as the infrastructure(s) that we look after get bigger and bigger.

    How will it end? Not sure it will, at least not in the next 2-3 years as the big players with their political and financial sway try to get the best outcome for their own ‘teams’. The way it is playing out today is basically pissing all over the concept of ‘open’.

    Perhaps we need a revolution…

  • http://twitter.com/DmitriKalintsev Dmitri Kalintsev

    > because SNMP isn’t able to handle that

    If you are saying that it isn’t possible to configure network equipment via SNMP, then I beg to differ, because it most definitely is.

    • http://etherealmind.com Greg Ferro

      While it’s possible to use SNMP for configuration, it’s not practical at large scale because the data format is limited and there are no transactions. Read RFC 3535 for the full explanation.

    • Matt

      I agree and I’m really confused as to why there is effort to create a new system when there is an existing system already present.

    • http://etherealmind.com Greg Ferro

      Actually, no it isn’t. The IETF agreed this in 2003 and has abandoned efforts to use SNMP for serious network configuration since then. It was thought that SNMP could do configuration, but the schema is not usable for modern hardware.

  • Rick Bauer, CompTIA ONF Rep

    In reply to those who feel that ONF is being hijacked, you don’t need SEAL Team 6 to rescue anyone. CompTIA, the largest body of vendor-neutral IT certified professionals in the world, is also a part of the work. Anyone wanting to contribute through their CompTIA certifications or company membership can contact me, and we will put you to work.

    When I worked in the storage industry, I saw the spec grow. I saw the larger companies dominate because they could afford to have engineers and participants on all the con calls, staying on top of things. But I also saw smaller companies and even non-connected free agents make significant contributions.

  • Peter Ashwood-smith

    Greg, very interesting, thanks … three points. First, the interface between the control plane and the forwarding plane is probably one of the more difficult parts of a switch or router; there is a large volume of ‘stuff’ that moves back and forth, so some serious thought about how to optimize that interface will be required in any open version of it. Second, since a controller can only control so many devices, controller-to-controller protocols are required … the logical choice for those protocols is IETF link state/BGP (at least as a start; otherwise, micro-flow-based route propagation?). The third thing that has always bothered me is the chicken-and-egg issue of who controls the forwarding between the controller and its devices, since clearly we can’t have hundreds of point-to-point links. That underlying control protocol is going to govern the performance of the entire system and has to be bullet-proof, because the last thing you want is to ‘brick’ your device by having the controller misconfigure the forwarding path to it and then getting permanently stuck.

  • Pingback: Is OpenFlow Losing Its Openness? Part Two.
