Is OpenFlow Losing Its Openness?

Let's get started with some background.

About OpenFlow

The OpenFlow switching effort was launched in 2008 to evangelize and support OpenFlow. Although hosted at Stanford University, the stated goal is for OpenFlow to be owned by the community – for the betterment of research and innovation in networking.

The Stanford OpenFlow Team

First, let's look at what OpenFlow was meant to do in 2008. From the introduction to the OpenFlow whitepaper:


Today it has become extremely difficult to innovate in the computer networks that we use every day in our schools, businesses, and homes. Current implementations of mainstream network devices, such as Ethernet switches and IP routers, are typically closed platforms that cannot be easily modified or extended. The processing and routing of packets is restricted to the functionality supported by the vendor. And even if it were possible to reprogram network devices, network administrators would be loath to allow researchers and developers to disrupt their production network. We believe that because network users, network owners and network operators cannot easily add new functionality into their network, the rate of innovation—and improvement—is much lower than it should be.

Rather than leave network innovation to a relatively small number of equipment vendors, our goal in the OpenFlow project [19] is to enable a larger population of researchers and developers to evolve our networks, and so accelerate the deployment of improvements, and create a marketplace for ideas. Today, most new ideas proposed by the research community—however good—never make it past being described in a paper or conference. We hope that OpenFlow will go some way to reduce the number of lost opportunities.

Good, right? This might be where Greg Ferro gets his woody from.  There is much to like about this – even crave.

Now, let’s fast forward to 2012.

OpenFlow is not only alive and well, but coming at us full steam. But in what form?

The "Twilight in the Valley of the Nerds" blog post refers to a comment from Nicira's Martin Casado written last spring: "you most likely will not have interoperability at the controller level (unless a standardized software platform was introduced)."

This statement floored me! If an Ethernet vendor brings out something new, we expect it to be completely standards-based. If it's proprietary (like Juniper's QFabric), we slam them. Why do we slam them? Because we do not want vendor lock-in!

Okay, what is Casado's statement really saying? Let's see: if you take vendor X's OpenFlow controller, then you had better have ALL vendor X's controllers everywhere – full stop, end of story, thanks for coming, see you next upgrade.

So, I guess my questions are: is there an IETF, IEEE, or any other recognized standards body for OpenFlow other than the ONF? What about controller interoperability – is it part of any draft ONF OpenFlow version? Where is the "open" without that?

As far as I can see, the answers are “No,” “No,” and “Not much!”

Now that Stanford has given OpenFlow over to the world, and the Open Networking Foundation (ONF) has taken up OpenFlow as its own, let's look at who is on the board of the ONF.

  • Deutsche Telekom
  • Facebook
  • Google
  • Microsoft
  • NTT Communications
  • Verizon
  • Yahoo

Can anybody pick a name from that list that cannot do networking well, yet has more APIs than you can poke a huge stick at? Can anybody see that an OpenFlow controller might be coming to you *free* in a new operating system?

I do hope I am missing something here, as from what I can see so far, all we are doing is moving the vendor lock-in from the switch/router players to the OpenFlow players.

Please somebody tell me, “Sorry mate, you missed XYZ – this was addressed in ABC.”

Thanks for reading.  All comments/corrections/enlightenment welcome!


  1. Michiel says

    Hi Michael. Although I do agree that a standardized software platform for OpenFlow controllers might be a good thing, this is not the goal of the OpenFlow spec. OpenFlow opens up the control channel towards the data plane, which enables us to choose the software that we want to use to control them.

    The controller software itself is already interchangeable: Beacon, NOX, Floodlight, etc. are all different implementations that can each drive the same switches, but they aren't able to communicate with each other.

    So in short, OpenFlow standardizes the control channel vertically (switch – controller) but not horizontally (controller – controller).
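This vertical/horizontal distinction is visible right at the wire level: the spec fixes the message format between switch and controller, so any controller can talk to any compliant switch, while saying nothing about controller-to-controller exchange. As a minimal sketch of that standardized channel, here is the fixed 8-byte OpenFlow 1.0 header packed in Python (the constants come from the 1.0 spec; the helper function is just for illustration):

```python
import struct

OFP_VERSION_1_0 = 0x01   # wire version for OpenFlow 1.0
OFPT_HELLO = 0           # message type: HELLO (OpenFlow 1.0)

def ofp_header(msg_type: int, length: int, xid: int) -> bytes:
    """Pack the 8-byte OpenFlow header: version, type, length, xid,
    in network byte order as the spec mandates."""
    return struct.pack("!BBHI", OFP_VERSION_1_0, msg_type, length, xid)

# A HELLO message is just the bare header (length = 8).
hello = ofp_header(OFPT_HELLO, 8, xid=1)
print(hello.hex())  # -> 0100000800000001
```

A real controller would follow this HELLO with feature negotiation over the same TCP session; the point is that the header layout, not the controller software, is what the spec pins down.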

    — Michiel

  2. Lisa says

    Michiel’s last sentence is key. As was pointed out at the PacketPushers’ OpenFlow Symposium last October, OpenFlow will actually make many things more complicated in the near-to-medium term: it may be useful for attacking more arcane problems for which existing solutions would be hopelessly complex, but what may be gained in technical simplicity and directness is offset (which is not to say lost) by needing to manage an ecosystem of vendors and/or larger app dev teams to make it all work.

    Greg made the comment a couple of weeks ago (as part of his buggy IOS rant) that OpenFlow would foster network mgmt that actually works, and it would save time/money now being wasted on fixing IOS bugs. I demurred for the reason above, suggesting that costs will simply move to different areas: app dev, supplier mgmt, and possibly SIs, as opposed to NetOps. (And there’s nothing to prevent buggy software being put out at the controller level instead!)

    Now, I do think (hope?) that interest and traction in OpenFlow and SDN more generally may help foster more focus on software (quality, flexibility, openness in the API sense, general modernity) within networking vendors, both at the OS and management levels. It’s a potentially attractive alternative that may be a forcing function for “getting the house in order”. But it’s unlikely to entirely obviate the turnkey network device you’re buying (even OpenFlow-enabled switches still possess all the standard switch mechanisms as well) any time in the near future. And there’s no reason for it: there’s plenty that switches & routers already do very well and very efficiently, and reinventing it all in a different software space would be pointless. Curt Beckmann touches on some of this “separation of duties” discussion in his latest blog post.

    • Michael says

      Hi Lisa,

      I do like to finish on a high point (nothing untoward meant). I agree with you on the added complexity it will bring.

      I also bloody well hope vendors clean house as a wake-up call. In fact, I would say we have already started to see that happen: Brocade has just opened up the ADX (its ADC) for scripting, and Arista has done the same via Python, adding a great deal of extra value to both.

      These are not the only ones starting to open up networking equipment either, and I would like to see NETCONF embraced more by all vendors.

  3. Montygoat says

    Regarding your comments on the proprietary protocol in Juniper's QFabric: would you likewise blame the traditional modular chassis, with its line cards plugging into the backplane using a proprietary protocol?
