When using OpenDaylight (ODL), two open standards for configuration are OpenFlow & NETCONF. Which is the better choice? Is there an option for both? Here I consider some design ideas for practical use.
Preamble – OpenFlow & NETCONF Protocols
The two southbound SDN protocols that are best suited to enterprise networking are OpenFlow and NETCONF.
OpenFlow or flow networking scales well for use in the data centre network, as companies like BigSwitch and VMware NSX can demonstrate. OpenFlow is a good protocol for flow configuration because of its simplicity and direct control. 1 However, it is my opinion, at this time, that applications for OpenDaylight are not ready for the data centre when using OpenFlow, and this market needs some more time to mature.
NETCONF is a rough replacement for command-line configuration that can be accessed by external software using methods that are more standardised. 2 NETCONF has many limitations and quirks, but its ‘heart’ is a standardised XML implementation.
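To make that ‘standardised XML heart’ concrete, here is a minimal sketch of building a NETCONF `<edit-config>` RPC in Python using only the standard library. The `<interface>`/`<description>` payload inside `<config>` is illustrative only – a real device expects elements from a vendor or IETF YANG-defined namespace – but the outer RPC structure is the standard NETCONF 1.0 base.

```python
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_edit_config(interface: str, description: str) -> str:
    """Build a minimal NETCONF <edit-config> RPC as an XML string.

    The interface/description payload is a hypothetical example; the
    surrounding rpc/edit-config/target structure is standard NETCONF.
    """
    rpc = ET.Element(f"{{{NC_NS}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC_NS}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC_NS}}}target")
    ET.SubElement(target, f"{{{NC_NS}}}running")  # edit the running datastore
    config = ET.SubElement(edit, f"{{{NC_NS}}}config")
    iface = ET.SubElement(config, "interface")
    ET.SubElement(iface, "name").text = interface
    ET.SubElement(iface, "description").text = description
    return ET.tostring(rpc, encoding="unicode")

payload = build_edit_config("GigabitEthernet0/1", "DMZ uplink")
```

In practice this payload would be sent over an SSH session on port 830 by a NETCONF client library rather than assembled by hand, but the point stands: the envelope is the same XML for every vendor, which is what makes an external software ecosystem possible.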
Note: Protocols like BGP, LISP or PCEP are better suited to carrier networks and large-scale cloud providers. SNMP isn’t suitable for configuration.
Logical Structure of an Enterprise Data Centre Network
An Enterprise Data Centre has three high-level security zones implemented at the network layer, each representing a level of trust for the systems located in that zone. To create trust, security devices scan and inspect data or restrict the source. To further enforce security policy, these networks are usually built on separate physical devices, with totally separate physical switches for the DMZ and the Core/Internal network.
The DMZ has a high security posture and strong controls/restrictions on flows between the external and internal network. The design goal is to maximise control over the network, force traffic into strongly defined paths and enforce the use of security appliances. By comparison, the internal data centre LAN focusses on speed, stability and resilience. Some enterprises will implement logical or physical separation so as to further isolate traffic loads; the most common categorisation is production/non-production.
The rough diagram below shows the sort of complexity that develops as each service is isolated.
OpenFlow in the DMZ Network
The DMZ network is well suited to using OpenFlow. Traffic in the DMZ is tightly controlled, easily identified, and services are carefully managed. Configuration by OpenFlow in the DMZ requires very little modification to existing practices. In particular, OpenFlow can replace micro-segmentation technologies like VRF-Lite and PVLANs.
DMZ networks have a low port count but consume a large amount of design and operational time to maintain. Configuring complex protocols and features means a lot of time spent/wasted on menial preparation (change requests) instead of doing something more valuable for your employer.
Perhaps the best feature is network visibility for security audits. The ODL interface is a GUI and displays most of the relevant configuration information in a web browser. I’ll admit that the interface is kind of clunky, but I’m told it will get better soon. Also, virtual switching is big in the DMZ. People are getting used to the idea of using software appliances, and Open vSwitch can be integrated here.
I often find that many DMZ implementations use poor networking practices. It is common to implement complex features that should ‘improve security’ but also create serious operational problems. An overly simple example is the widespread use of Private VLANs to control access between hosts on a single VLAN, which causes problems for HA on security appliances that assume the VLAN is a single broadcast domain.
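To illustrate how OpenFlow could replace PVLAN-style isolation, here is a sketch in the style of OpenDaylight’s RESTCONF JSON flow model: an explicit permit flow between a host pair, with the expectation that a lower-priority catch-all drop rule isolates everything else. The flow IDs, addresses and the URL layout in the comment are assumptions for illustration, not a tested ODL configuration.

```python
import json

def dmz_allow_flow(flow_id: str, src: str, dst: str, out_port: str) -> dict:
    """Sketch of one OpenFlow rule in OpenDaylight RESTCONF JSON style.

    Permits IPv4 traffic from src to dst and forwards it out one port.
    Per-pair permit flows plus a low-priority drop give PVLAN-like
    isolation without PVLAN's broadcast-domain surprises.
    """
    return {
        "flow": [{
            "id": flow_id,
            "table_id": 0,
            "priority": 200,
            "match": {
                "ethernet-match": {"ethernet-type": {"type": 2048}},  # IPv4
                "ipv4-source": src,
                "ipv4-destination": dst,
            },
            "instructions": {"instruction": [{
                "order": 0,
                "apply-actions": {"action": [{
                    "order": 0,
                    "output-action": {"output-node-connector": out_port},
                }]},
            }]},
        }]
    }

body = json.dumps(dmz_allow_flow("dmz-web-1", "10.1.1.10/32", "10.1.2.20/32", "2"))
# A client would PUT this body to the controller, roughly (assumed layout):
#   http://controller:8181/restconf/config/opendaylight-inventory:nodes/
#     node/openflow:1/table/0/flow/dmz-web-1
```

The attraction over PVLANs is that every permitted host pair is an explicit, auditable flow entry visible from the controller, rather than implicit behaviour buried in per-switch port configuration.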
NETCONF for the Data Centre LAN
The Data Centre LAN has a primary need to reliably connect everything and be resilient in the event of failure. Performance is the secondary concern. Micro-management of flows in the data centre network is possible through an overlay network that has deep integration with servers/VMs or endpoints. Overlay networking requires enormous applications like VMware NSX or Cisco ACI to orchestrate the configuration of the tunnels. For many networks this is unnecessary, and a simpler process would meet their requirements.
NETCONF doesn’t require as much change to the existing design and operations methodology. It is somewhat of an exaggeration to say that NETCONF simply configures the CLI, but in some ways (and for some vendors) this is close enough.
It would be possible to use the NETCONF protocol to replace existing processes and simply automate them. For the data centre LAN and OpenDaylight, this makes a certain amount of sense for organisations that are not comfortable with more comprehensive change.
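Automating an existing process with NETCONF can be as unglamorous as templating one change across an inventory. The sketch below renders the same VLAN `<config>` fragment for every switch in a hypothetical device list (the device names, VLAN schema and element names are all assumptions); a NETCONF client such as ncclient would then send each fragment in an `<edit-config>` and commit it – the same change a human would once have typed at the CLI, per box.

```python
import string

# Hypothetical inventory; names are illustrative only.
DEVICES = ["dc1-leaf01", "dc1-leaf02", "dc1-leaf03"]

VLAN_TEMPLATE = string.Template(
    "<config>"
    "<vlans><vlan><id>$vlan_id</id><name>$vlan_name</name></vlan></vlans>"
    "</config>"
)

def render_change(vlan_id: int, vlan_name: str) -> dict:
    """Render one NETCONF <config> fragment per device in the inventory.

    Returns {device_name: xml_fragment}. Sending and committing the
    fragments is left to a NETCONF client library.
    """
    payload = VLAN_TEMPLATE.substitute(vlan_id=vlan_id, vlan_name=vlan_name)
    return {device: payload for device in DEVICES}

changes = render_change(120, "prod-web")
```

Nothing about the change process itself is new here – that is precisely the point for organisations easing into automation: the workflow stays familiar while the keystrokes disappear.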
The diagram shows OpenDaylight managing both the DMZ and Data Centre LAN. Although the networks are separate they should be unified in operation because flows traverse the network end-to-end.
A more distant future of SDN will eventually include firewall configurations and IDS/IPS rules as well as simpler route/switch configuration.
Device Support – Networking vendors have been slow to replace the CLI with open APIs, and their software development remains slow and buggy. We have to accept that it will take some time (months or, sadly, even years?) before NETCONF and OpenFlow support can be fully and reliably implemented in current-generation devices.
The EtherealMind View
OpenDaylight has more use cases than most people realise due to its modularity. Southbound protocols are diverse – PCEP, BGP, LISP and SNMP are open protocols, while vendors produce their own modules to support less open protocols.
While there are plenty of Ethernet switches with maturing OpenFlow support, ODL isn’t production ready, but you might want to start watching developments in this area. Also keep watching the market for SDN applications that integrate appliance configuration, such as load balancers & firewalls.
But mostly, stop thinking about having just one network. A data centre has always had many networks – DMZ, core, development, out-of-band, WAN and many others. It seems more likely to me that each network type will have its own SDN solution but still be integrated into, or part of, a single operations platform.
Food for thought, eh?
- Indirect configuration methods based on promise theory don’t make sense to me. I can comprehend the value of protocols like OpFlex but believe it will ultimately fail because a lack of precision or accuracy leads to poor implementation. For 20 years, vendors have been unable to reliably implement loosely defined standards protocols like IPsec, STP & OSPF. ↩
- Most networking vendors have offered proprietary XML interfaces, and most are of poor implementation quality. The data formats have been widely different, and this prevented a substantial software ecosystem from developing. Most people would prefer to use JSON over REST/HTTP, but we are probably stuck with NETCONF in its current form. Since Cisco acquired TAIL-F, who drove many of the NETCONF initiatives, it may be a while before innovation restarts. Innovation in a big company is hard, perhaps impossibly so. ↩