OpenFlow might lower CapEx while SDN will increase OpEx

A lot of people have talked extensively about OpenFlow making significant changes to the networking business. In particular, many writers have focussed on the possibility that OpenFlow enables a choice of using low cost network equipment instead of the expensive networking equipment that we use today.

Well, that’s highly unlikely.

You need to understand that the networking business today is mostly a software business. Over the last ten years, networking vendors have moved steadily away from a focus on hardware and placed more and more emphasis on their software features. Indeed, in the fifteen years I’ve worked with Cisco, it has never wavered from its marketing pitch that it is a software company. And Juniper is constantly talking about the value of the Junos software and its interface.

It’s true that if you think back a decade or so, say to around 2000, networking vendors produced a lot of custom hardware. Technologies like BRI ISDN services, E1/T1 interfaces, ATM, Frame Relay, SMDS, Token Ring and FDDI are just a few of the physical products that are now gone. All that remains is Ethernet.

There are some remnants of the old carrier networks, mostly in carriers that are cash-strapped and unable to upgrade, and mostly in countries with large geographical areas or poor telecommunications laws, such as Australia (geography), India (legal) and the US (both). But for most people, the carriers deliver Ethernet services to the corporate office.

Billion-dollar companies don’t usually miss the obvious, and networking vendors have moved to enhance their software to provide customer value: improved routing protocols, new features, faster forwarding and so on. While software bugs are no less common, the software is exponentially more complex than that of 2000. The arrival of merchant silicon, while lamentable in many respects, reflects the business reality of that change.

It seems that many people are focused on the devices and not the software. The real value of OpenFlow is Software Defined Networking, and the complexity of that software, and the maintenance of it, is where the money is. The Controller and the Applications that build the OpenFlow entries are where the money is. Not in the devices.
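To make the controller/application split concrete, here is a minimal sketch of the idea. This is a toy model, not a real OpenFlow API: the `FlowEntry` fields, the policy shape and the function names are all invented for illustration. The point is that the commercially valuable logic is the translation from business intent to flow entries; the controller merely ships the result to the switches.

```python
# Toy model of an SDN "application" that derives flow-table entries
# from a business policy. Field names and structures are invented for
# illustration; real OpenFlow matches/actions are far richer.

from dataclasses import dataclass

@dataclass(frozen=True)
class FlowEntry:
    match_dst_port: int   # match traffic to this TCP port (0 = wildcard here)
    action: str           # "forward:<switch port>" or "drop"
    priority: int         # higher priority wins

def derive_flow_entries(policy: dict) -> list[FlowEntry]:
    """The 'application': turn intent into concrete flow entries.
    This translation is where the engineering effort (and money) sits."""
    entries = []
    for dst_port, decision in sorted(policy.items()):
        action = "drop" if decision == "deny" else f"forward:{decision}"
        entries.append(FlowEntry(dst_port, action, priority=100))
    # Default rule: drop anything the policy does not mention.
    entries.append(FlowEntry(0, "drop", priority=1))
    return entries

# Business intent: web traffic out switch port 2, SSH denied.
policy = {80: 2, 443: 2, 22: "deny"}
table = derive_flow_entries(policy)
for entry in table:
    print(entry)
```

The controller in this picture is just plumbing that installs `table` on the devices; swapping hardware underneath changes nothing about the application logic above.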


So it’s possible that Google and Yahoo will achieve CapEx savings in the short term by buying cheaper network hardware from Chinese ODMs. But they are also shifting that CapEx investment into an enormous software development and testing program to build controller applications. This means large teams devoted to developing a custom application that suits their business, as an Operating Expense. Does that sound like a cost saving? What’s the real cost over three years? Five? How many programmers and project managers does it take to build that software? What if they fail? What’s the financial cost?

The benefit, of course, is a new network that meets their EXACT requirements. In fact, hyperscale networks such as Google’s and Yahoo’s are one of a kind, and no vendor can really make products for them.

I don’t expect OpenFlow/SDN will be cheaper than the networking we have today over the long term. The advantage is that our networks will have new features that can be added dynamically, allowing us to bypass the draconian change restrictions imposed for fear of network problems.

That is the future of Software Defined Networking – better, dynamic, flexible and business-focussed networking. But probably not much cheaper in the long run.

  • Wes Felter

    I agree that if you buy an SDN fabric from a vendor it’s going to cost the same as non-SDN (just ask NEC what PFC sells for).

    But there is a wild card here: open source. Today you cannot buy a “bare” Cisco switch and install open source instead of NX-OS. You can’t buy a QFabric data plane and provide your own control plane. But thanks to SDN one day it may be possible to move the value into software and also make that software free. (I know, you’re asking “who’s going to pay to create that open source?” Maybe a server vendor who feels like blowing something up. Maybe an end customer (who doesn’t have specialized hyperscale requirements). Maybe even the “community”.)

    • Juan Lage

      Xorplus has been around for a while. The point isn’t whether open source is the solution for more affordable networking, because, as I said, open source options have been there for a while. The community building networking features will probably never be as large as other open source communities, which probably explains why Xorplus is as limited as it is. But the key point is that a switch is not a network. To make a network work you need a level of engineering beyond running one image on a piece of hardware with certain features. This is where networking vendors add value. SDN does not change this; it just provides a different way of building and (more importantly) managing a network. It is about making the network infrastructure manageable in a simpler way through well-known APIs to the network (or fabric, whatever) itself, as opposed to having APIs to manage independent devices. Sure, you may have SDN controllers that are open source; time will tell how successful they are, just as we’ve had Xorplus for a while. For OpenFlow you already have open source options for the controller.

    • Ryan

      Agreed. If openflow truly separates the control and data planes successfully, we will see commoditization of the data plane as the “value” network vendors provide moves to the controller. If that happens, then I tend to agree: someone will decide to blow things up by open-sourcing a top-notch controller once they feel they no longer derive competitive advantage from it. The benefits for carriers and web-scale players are simply too big for this not to happen in the long run.

      • Art Fewell

        Vendors will end up migrating toward the cloud development model, in which 80-90% of the code base (the part that is only designed to adhere to standards anyway) is open source, and then spend their time developing the top 10-20% of features that provide differentiation. To date, open innovation has already taken over much of the development chain, as most manufacturers focus on product definition and outsource most of the development. The next logical step will be to include open source code as part of the innovation/supply chain for major vendors’ core products. The more this happens, the more $$ can be taken out of the development process; there is a LOT of room to cut costs, but we need more companies to execute better to put the heat on Cisco. Their grasp on the industry is strong enough that the industry will probably continue in its current proprietary state, a state that even Cisco acknowledges is very unhealthy when they are at ONS (because they have to), while as soon as they are talking anywhere else they continue to find new and creative ways to promote lock-in and further damage the industry.

  • Dmitri Kalintsev

    > can be added dynamically and allow to bypass draconian change restrictions for fear of network problems

    …and when we hit a bug in the controller?

    • Colin

      Agreed. As much as we hate them, draconian change restrictions in the context of IT are basically an immune-system response to the injection of shitty software. I can’t see things getting any better, only worse, as complexity increases and software development practices don’t improve.

  • Brent Salisbury

    Great post. It has been pretty satisfying to see consumerism finally drive the conversations, and arguably the industry, for a change. As soon as SW decouples from HW with some semblance of standards-based adoption, it will be a runaway train of startups building wrappers on the new abstraction. We have it in a few niches like wireless today, with real operational savings from a consolidated management plane.

    You are spot on in your assessment of the potential, or more likely the lack thereof, for great hardware CapEx savings. The margins are pretty slim today from the manufacturers embracing off-the-shelf HW. Hell, Pronto is just a tad cheaper than HP at the edge, so white box will show big savings only for the hyperscale content and cloud providers. Oh, the chaos of change will be fun. FPGAs in switches serving up Facebook. Hmm, maybe that sucks, nm.

  • Julien Goodwin

    (I work in Google NetOps)

    So ignoring all the bits I’m NDA’d on…

    “But they are also shifting their CapEx investment into an enormous software development and testing program to make controller applications.”

    The developers may be new (or just moving from developing automation/monitoring/management tools for existing kit), but the testing probably already exists; any (sane) large network tests hardware and code very extensively before deploying it. You yourself have complained many times about Cisco’s QA.

    In late 2010 at $JOB[-1] I was trying to upgrade the firewalls, and Juniper shipped two straight versions that *would not boot (the forwarding plane)* on our hardware.

  • Pingback: Debating SDN, OpenFlow, and Cisco as a Software Company | Twilight in the Valley of the Nerds

  • Juan Lage

    Great post Greg. I see many people talking about the arrival of merchant silicon as the big thing in networking when, in fact, merchant silicon has always been there. If you look back five or six years, switches from Foundry, Enterasys, Extreme, Nortel and others were already built using silicon from Marvell or Broadcom. Why did those companies fail to capture significant market share? Many reasons, no doubt, but one was the lack of features and/or any software differentiation compared to IOS at the time. In the end, if you develop great software to control the cheap hardware, you’ll see the cost of the software reflected somewhere. Same thing for the cost of having a proper support structure, etc. If price reflects value, companies gain market share.

  • Adetola Oredope

    If I remember correctly, and please correct me if I am wrong, the whole idea of SDN/OpenFlow is to add value to both new and existing hardware. The main idea is to separate the control plane from the data plane, allowing custom applications to manage the control plane controllers, which then pass instructions to the underlying hardware. So you can decide to use an HP, Juniper, Cisco or a dodgy network element, as long as they understand the OpenFlow protocol. So if I am right (and I may be wrong), the whole idea is to have a set of network devices that understand the same language, i.e. OpenFlow. So in conclusion, OpenFlow is an optional extra, and from experience, optional extras are always expensive…

  • Ryan

    What happens after the big web shops get their controllers rock solid and decide to open source them? Big Switch has already put Floodlight out there. “Linux” is free and people still pay for Red Hat/SUSE, etc., but it still broke the back of high-priced commercial UNIX…

    • Etherealmind

      It’s not the controller that is the most important item here. It’s the application that derives the flow table that is most important. 

      • milliamp

        I don’t really understand the difference between them. I guess I assumed the applications would be part of the controller?

  • Pingback: OpenFlow doesn’t undermine Vendors even though it changes everything — My EtherealMind

  • Pingback: Bumper SDN/OpenFlow Roundup ONS2012 — My EtherealMind

  • milliamp

    My counterpoint to this is that there is already a handful of small companies and researchers using Linux routing distros on PC hardware who are limited in large part by having an x86 CPU do the heavy lifting. They are badly crippled by things like packets per second eating CPU cycles on that hardware, which forces them to vendors C and J, mostly, as soon as they require any sort of scale beyond 3 or 4 gig.

    Once some of the Linux routing distros (and Quagga) support openflow it will remove an enormous barrier that these small companies are running up against.

    Sure software is lacking now but there are just too many things that will be converging on openflow for it to stay that way.

    It makes sense in the data center and backbone as well, in companies large enough that the CapEx savings are high enough to justify the OpEx costs, and, as Julien pointed out, those companies have their own code test/QA teams already anyway.

    We are talking about many billions of dollars, and any company besides Cisco that already makes network software supporting a full protocol stack would be dumb not to rush to use that code to build an openflow controller.

    Things are hot enough right now that you could probably start a company tomorrow and get funding for it to launch a commercially supported Quagga + openflow platform.

    I suspect that over time many small and nimble companies will do things like build a competent network monitoring platform right into their controller or stick raw data in a standard database to reduce having a bunch of different monitoring platforms and scripts constantly pulling data from network equipment.

    I think Cisco has some really difficult decisions to make right now, because if they shun openflow and it ends up as big as I suspect it will be, they may end up worse for it in the long run. If they embrace the standard, they speed up its deployment.

  • Pingback: Networking Field Day 3: The Links

  • Guest

    All this is great, but I fear that there will be a lot of networking job losses in the valley, as SDN will commoditize most of the networking hardware. A company like Cisco would have to lose 50% of its employees. It may not happen right away, but a few years down the road this could be troublesome. With margins collapsing, it will be difficult for new companies to enter this space.

  • Sanjay Kapoor

    Great post. There is a lesson from history here that is worth a mention. The compute industry in the 1980s was vertically integrated: IBM was the dominant force, providing a vertically integrated mainframe that packaged the custom silicon, the OS and the applications. The value remained mostly with IBM. Then came the microprocessor and Windows/Linux, and value transferred equally to the microprocessor folks, the PC vendors, the OS vendors and the app vendors. In the 2000s came the hypervisor, squeezing the value out of the PC (system vendors), with the remaining value staying with the microprocessor and application vendors; i.e. value moved mostly to software.

    If history were to repeat itself… how might it play out in networking?

    The networking industry today is no different than the mainframe era of the 1980s. SDN is the start of the disaggregation process. Over the next 5 years, we will have a combination of SDN controllers and customized hardware. After that, advancements in merchant silicon will effectively squeeze any remaining value out of the systems. The VALUE will then move to SOFTWARE…

    The big question then is: will the incumbents find ways to monetize the SW that they have today, or will they let the upstarts capture that value? Another intriguing question is what the business model would be: monetize via the sale of software, or make the network software open source and monetize via offering a service (Red Hat for networking)? Time will tell…