In SDN, the network isn’t about individual packets but about a series of flow states instantiated in devices, where UDP and TCP streams carry data between client and server. Each of these streams sets up a coarse form of state inside network devices as cache entries.
For switching, the internal cache is populated with the destination MAC address and an output interface action; for simple IP routing, the cache is populated with the destination IP address, the output interface and the next-hop IP address to reduce processing load. This cache is important to both hardware and virtual devices because it reduces forwarding delay. The flow cache entry is built from the first few packets in a flow between the client and server, because packets in a single session maintain the same state consistently.
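The flow cache idea can be sketched in a few lines of Python. This is a minimal illustration, not any real switch implementation: the first packet of a stream misses the cache and triggers a full (slow) lookup, and every later packet in the same session hits the cached entry. All names here are made up for the example.

```python
# Minimal sketch of a flow cache. The first packet triggers the slow-path
# lookup; the result is installed as a cache entry for the rest of the flow.

class FlowCache:
    def __init__(self, slow_path_lookup):
        self.entries = {}             # flow key -> forwarding action
        self.slow_path = slow_path_lookup

    def forward(self, key):
        action = self.entries.get(key)
        if action is None:                # cache miss: first packet of flow
            action = self.slow_path(key)  # full table lookup (expensive)
            self.entries[key] = action    # install flow cache entry
        return action                     # later packets hit this entry

# A switching cache keys on destination MAC; a routing cache would key on
# destination IP and also store the next-hop address alongside the interface.
mac_table = {"aa:bb:cc:dd:ee:ff": "eth1"}
cache = FlowCache(lambda mac: mac_table[mac])

print(cache.forward("aa:bb:cc:dd:ee:ff"))  # slow path, entry installed
print(cache.forward("aa:bb:cc:dd:ee:ff"))  # served from the cache
```

The same structure applies whether the lookup engine is TCAM in hardware or a software table in DRAM; only the cost of the miss and the hit changes.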
For a hardware device, these “flow cache entries” are held in specialised physical memory called TCAM (Ternary Content-Addressable Memory) that acts like a fast lookup database. For a virtual device, the database is held in the DRAM of the x86 host, with a software engine performing the lookups.
There are limits to scaling flow state. In hardware devices, the TCAM is complex, expensive and limited in size, so a hardware device can support only a limited number of MAC and IP addresses. Software devices can hold a very large number of flow states, bounded only by available DRAM, but lookup performance is slower.
Ways to Scale Flow Handling
From what I have seen in the market, there are two possible ways to scale Flow Networking past the limitations described above. They are:
- Coarse Flow Management
- Overlay Networking
Both of these options are focussed on reducing flow state in the physical network.
Coarse Flow Management is about reducing the number of flows that must be configured in devices. For OpenFlow networks that install flow entries directly to the device, it is about creating flow rules that aggregate large numbers of actual packet flows into a single entry. This is conceptually the same as an access list, where you could write 250 individual host rules or use a single /24 subnet rule to match all possible hosts.
SDN controllers calculate the flow tables and create “summary” flow entries in core network devices while keeping more explicit flow entries at the edge of the network.
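The /24 analogy above can be shown with nothing more than Python’s standard `ipaddress` module. The rule structure here is invented for illustration; the point is simply that one summary match covers everything the per-host entries cover.

```python
# Coarse flow management illustrated: 250 per-host entries (edge-style)
# versus one /24 summary entry (core-style). Rule format is made up.
import ipaddress

# Fine-grained: one flow entry per host, as an edge device might hold.
host_rules = [(ipaddress.ip_network(f"192.0.2.{i}/32"), "output:eth2")
              for i in range(1, 251)]

# Coarse: a single summary entry, as a core device might hold.
summary_rule = (ipaddress.ip_network("192.0.2.0/24"), "output:eth2")

def matches(rules, addr):
    """Return True if any rule's prefix matches the address."""
    addr = ipaddress.ip_address(addr)
    return any(addr in net for net, _ in rules)

# Every host matched by the 250 entries is matched by the one summary entry.
assert all(matches([summary_rule], str(net.network_address))
           for net, _ in host_rules)
print(len(host_rules), "host entries collapse to 1 summary entry")
```

The trade-off is the same as with access lists: the summary entry is cheap on TCAM but loses the ability to treat individual hosts differently.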
Overlay Networking is a method to scale flow state that uses encapsulation to reduce the number of flow entries in the core of the network and to simplify the controller’s calculations. At the edge of the network, the device uses flow entries to select the appropriate encapsulation action, so the network core only needs flow entries that operate on the tunnel itself.
This method moves complexity to the edge of the network, with a reduction in operational risk and flow state in the core. The edge device has a relatively small number of possible flow states because it needs to manage only the state of the endpoints connected there.
The Same Purpose, Different Methods
From my not-very-expert perspective, an overlay network is the simpler method and easier to operate, but it lacks the fine-grained control that direct flow manipulation offers. Direct flow manipulation requires advanced software methods to calculate the flow tables and takes longer to develop and test. As such, it is unpopular with vendors who operate in a market where time is money and all of that.
Some products use legacy routing protocols (BGP, IS-IS) to distribute tunnel state to endpoints, while other companies use direct flow table entries, roughly in the style of OpenFlow. Again, there is not much difference between these approaches except that interoperability is a major concern.
In the end, both approaches have the same outcome of reducing flow state in the network through a summarisation process. In some ways, tunnelling is like network address translation, where many endpoints appear as a single network object. Which is fine; whatever works.
Other Posts in This Series
- Overlay Networking as a Method to Scale Flow Networking and Handling (16th February 2015)