OpenFlow, HP Sentinel and Security SDN

I was talking with Dave Larsen, CTO for HP Networking, about SDN and future applications, and he described a security application/use case of OpenFlow/SDN for campus networks. HP Sentinel is an SDN security application that combines a reputation database, the HP VAN Controller and OpenFlow to build a campus security solution. Here is a quick overview of the process and how you can mix existing security technology with OpenFlow/SDN to provide a useful campus security tool.

HP Sentinel

HP Sentinel is an SDN application that monitors the flow creation process in the campus network. As a new flow is identified, it is compared against a reputation database of IP addresses and DNS names. If the lookup is positive, the traffic is dropped at the campus edge switch.

The product is currently in trial with selected customers, one of whom is a school that needs to protect its network from malware and misuse. In the first week, the school discovered three compromised computers that were hosting malware, even though other security tools were already deployed on the campus.



Here is how it works:

The campus ProCurve switches are configured to support OpenFlow with an HP VAN SDN controller. The controller is linked to an “SDN app” that uses the Tipping Point reputation database from its IPS product. This reputation database is built from the Reputation Digital Vaccine service.

Step 1: A workstation has an application that initiates an IP connection to a site.

Step 2: When the IP packet is received by the switch, the flow data does not match any entry in the switch’s existing flow table, since this is the first time the flow has been seen. The flow is punted to the controller.

Step 3: The controller sends the flow data to the HP Sentinel app. The VAN controller is configured to send flow data from specific switches to the reputation database.

Step 4: The flow is compared to the IP and DNS data in the reputation database. The DNS name is important since a single IP address can host multiple domains, or a very large network can be ‘hidden’ behind proxy servers.

Step 5: The flow passes or fails the check, and the result is signalled back to the VAN controller, which then pushes a pass/drop OpenFlow entry into the switch.

Step 6: Optional step – send an alert that a reputation match has been triggered.
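The six steps above amount to a simple reactive pipeline: unknown flow punted to the controller, reputation lookup, verdict pushed back to the edge switch. Here is a minimal sketch of that logic in Python. This is my illustration of the process, not HP’s code; the function names, the `EdgeSwitch` class and the reputation entries are all invented for the example.

```python
# Illustrative sketch of the Sentinel decision pipeline (Steps 1-6).
# The reputation data and all names here are invented for the example;
# the real product queries the Tipping Point Reputation Digital Vaccine.

BAD_IPS = {"203.0.113.66"}          # example-only reputation entries
BAD_DOMAINS = {"malware.example"}

class EdgeSwitch:
    """Minimal stand-in for an OpenFlow edge switch's flow table."""
    def __init__(self):
        self.flow_table = []

    def install_flow(self, match, action):
        self.flow_table.append((match, action))

def reputation_check(dst_ip, dns_name):
    """Step 4: compare the flow against IP and DNS reputation data."""
    return dst_ip in BAD_IPS or dns_name in BAD_DOMAINS

def alert(dst_ip, dns_name):
    """Step 6: optional alert when a reputation match is triggered."""
    print(f"reputation alert: {dns_name} ({dst_ip})")

def handle_packet_in(switch, dst_ip, dns_name):
    """Steps 2-5: unknown flow punted to the controller, verdict pushed back."""
    action = "drop" if reputation_check(dst_ip, dns_name) else "forward"
    # Step 5: push a pass/drop OpenFlow entry into the edge switch.
    switch.install_flow(match={"nw_dst": dst_ip}, action=action)
    if action == "drop":
        alert(dst_ip, dns_name)
    return action

sw = EdgeSwitch()
handle_packet_in(sw, "203.0.113.66", "malware.example")   # dropped at the edge
handle_packet_in(sw, "198.51.100.10", "news.example")     # forwarded normally
```

The key design point is that the verdict is cached as a flow entry on the edge switch itself, so only the first packet of a flow pays the lookup cost.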

Points of Note

Standards, and nothing but standards. The first point I noted is that this solution is entirely based on the OpenFlow standard. There are none of the custom protocols that some other vendor solutions use.

Filtering at the Edge: the blocking is performed at the edge of the campus network where the workstation connects to the Ethernet switch.

Will work for WiFi:  This solution will work for WiFi networks in the future once WiFi equipment is OpenFlow enabled.

Uses Existing Technology:  This solution uses the existing switches in the customer network since many HP switches that have shipped in the last few years are OpenFlow capable with a software update.

Tipping Point and software: The Tipping Point division might be moving ahead. HP’s Tipping Point division continues to focus on hardware products and, to my knowledge, hasn’t yet embraced virtualization in a meaningful form. This software-only solution might provide some hope that Tipping Point is beginning to embrace the cloud era.

The EtherealMind View

Many people do not understand how flexible and capable the OpenFlow protocol can be for a wide range of uses. A number of vendors are claiming that OpenFlow isn’t enough, yet I remain deeply cynical about the motives behind such views. OpenFlow continues to have the largest momentum and widest industry support for SDN in the data network. The ongoing development of OVSDB will provide new extensibility and capability, and address the other issues around device management.

For now, HP Sentinel is a clear demonstration of the capabilities of OpenFlow. The HP SDN story is stronger than many understand. Existing network switches are OpenFlow capable, and the HP VAN controller is stabilising and forging links with other divisions. The IMC division is providing strong linkages to manage VAN as part of normal operations.

HP has a strong SDN story, but it’s somewhat overshadowed by other companies making more noise, and the networking team is less prominent. Here’s hoping that they can improve this in the months ahead.


I was a guest at HP Discover in Barcelona as part of a social media outreach program.

  • SilentLennie

    My concerns with this have never been about the OpenFlow standard, but about scalability, latency and price.

    So my question is: how price competitive would such a solution be compared to other solutions, and does it scale to the datacenter with many, many more flows? And what about the latency?

    A scalable solution for the datacenter might be to run a Sentinel app (in a VM) on every hypervisor, where there is no network latency, especially if the reputation database is distributed. But if you are running many instances of that system, how is licensing handled?

  • Peter J. Welcher

    This sounds like process switching at flow initiation. OpenFlow already has that issue with putting the controller in the loop. Now HP plans to have all the switches also in effect feeding a single device / server running the Sentinel App. I’m very curious how this can possibly support modern high speed networks. What kind of performance can OpenFlow achieve, in terms of N x 1 Gbps or N x 10 Gbps ports supported at wire speed? Of course, any answer will be highly dependent on the number of new flows per second.

    • Etherealmind

      I asked Dave Larsen about this. My understanding is that because the flow management is done at the campus edge, it works fine. When you think about it, a 48-port switch has a maximum of 48 workstations. A typical HP switch can hold 4000 flow entries (more or less; details vary according to certain technical limits). So that is about 80 flow entries per host in a worst-case scenario. Consider that as 79 “drop” rules plus one “permit any” on each interface.

      You can extend the flow table by using certain techniques to maximise the CAM/memory use, so the real number is much larger. I’m also told that there is a small number of beta sites using it today, and they report no problems with performance.

      I guess it’s possible to rotate flow entries as well by ageing out the older table entries.
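      Ageing out older entries can be sketched as a least-recently-used table: when the table is full, the entry that hasn’t matched traffic for the longest time is evicted to make room. This is my illustration of the idea, not how the switch firmware actually manages its TCAM.

```python
from collections import OrderedDict

# Illustrative LRU flow table: when the table is full, the least recently
# matched entry is aged out to make room for a new flow. A sketch of the
# rotation idea only, not the switch's real TCAM management.
class FlowTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # flow match -> action

    def install(self, match, action):
        if match in self.entries:
            self.entries.move_to_end(match)      # refresh existing entry
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)     # age out the oldest entry
        self.entries[match] = action

    def lookup(self, match):
        action = self.entries.get(match)
        if action is not None:
            self.entries.move_to_end(match)      # every hit refreshes the age
        return action

table = FlowTable(capacity=3)
for dst in ("10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"):
    table.install(dst, "forward")
# "10.0.0.1" has been aged out to make room for "10.0.0.4"
```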

      Still think it’s a good use case though.

      • Peter J. Welcher

        One switch no problem. Many switches, problem?

        I’m reading “Campus Edge” as access switch. And wondering about 50-100 such switches (e.g. blades) all handing off new flows to the controller and Sentinel. Say with 2000 to 5000 people using web browsers that open separate flows for each web page component.

        FWIW, I’m also on the warpath about Internet border boxes with no way to monitor performance impact, queue depth, drops. And have encountered sites with IPS devices where they apparently hit the wall well before reaching the marketing numbers. I’ve been consulting and trying to t’shoot such situations, it’s a bit painful. That’s why this scenario throws up a bit of a red flag for me. It’d help if HP put out some performance specs for you to pass along. In general, that’s my biggest problem with OpenFlow right now, not seeing a lot of discussion of real hardware and actual performance. Admittedly, not been looking too hard for it, either.

        • Etherealmind

          SDN controller performance is not so much of an issue today. Commercial SDN controllers are commonly managing 1-5 million flows with a flow setup rate of 25K-50K per second. The limitation is physical memory to keep performance in good shape.
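          As a back-of-envelope sanity check on those figures (taking the quoted numbers at face value), the time to populate a full flow table from cold is just table size divided by setup rate:

```python
# Back-of-envelope check on the controller figures quoted above:
# at 25K-50K flow setups per second, how long to populate 1-5M flows?
for table_size in (1_000_000, 5_000_000):
    for setup_rate in (25_000, 50_000):
        seconds = table_size / setup_rate
        print(f"{table_size:,} flows at {setup_rate:,}/s -> {seconds:.0f} s")
```

So even a worst case of 5 million flows at 25K setups per second refills in a few minutes, which is why memory rather than setup rate tends to be the limit.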

          I’ll ask someone from HP to give me some links to their VAN controller performance, but my (sometimes faulty) recollection is that it also handles millions of flows.

          I’ve asked the performance questions and, based on the testing results and responses, controller performance isn’t an issue. As I said, this is still beta, but the performance bottleneck is the DNS reputation lookup, not the OpenFlow. In any case, the performance impact isn’t much different from an NGFW or IDS doing the same DNS firewalling functionality.

          • Peter J. Welcher

            I get the TCAM issue. In this case my concern is precisely the NGFW / IDS performance issue, because that’s where some seem to fall down, or have mystery performance lapses. (Garbage collection, temporary queue backlog, updating signature/reputation info?). I’ve seen the big numbers you cite, but haven’t seen/looked at what vendors cite for, say, a 20, 50, or 100 switch controller (hardware, performance, cost). Thanks for the replies!