What’s Next for OpenFlow and Open Source?

One year ago, the Open Networking Foundation was setting a blistering pace of standards development. Though its process is closed to outsiders, word had leaked out that not only was it on track to produce version 1.3 of the OpenFlow standard, but that 1.4 would follow before the end of 2012.

At some point that plan changed; although 1.3 came out in May, and was eventually followed by 1.3.1 in September, the next version is still under development and under wraps. It’s been said that the rate of change in the standard was proving too challenging for vendors, and that the moving target was causing too many of them to skip implementation of the intervening versions. With the goalposts constantly receding into the distance, products would never be released; they’d always be in development for the next version.

We still don’t know what is planned for the new version, or even whether it will be numbered 1.4, and we certainly don’t know when it will arrive. But an interesting piece of news came to light in a very informative discussion with Curt Beckmann of Brocade during Networking Field Day 5. Asked about the OpenFlow development process, Curt said that this time the ONF Board has put a new requirement in place: working code must be developed before the standard is released to the outside world.

Although the ONF and IETF clearly have vastly different models for standards development, borrowing the IETF’s requirement for actual implementations prior to standardization seems like something the OpenFlow community could benefit from. The errata that have been published for versions 1.0 and 1.3 have demonstrated the value of actually trying to build code and hardware to the standards. As more implementations are tested against each other, the rough edges, incompletely specified functions, and simple oversights have become clear. However, there may be one consequence of this decision that won’t be as beneficial to the wider community.

As of today, there are only a few implementations of the current OpenFlow 1.3 standard:

  • LINC – soft switch with 1.2 and 1.3, and OF-Config: Infoblox and Erlang Solutions
  • Ryu – controller with 1.0, 1.2 and 1.3, OF-Config in progress: OSRG/NTT Laboratories
  • CPqD – controller (NOX) and soft switches with 1.2 and 1.3: Fundação CPqD
  • Open vSwitch – soft switch with partial 1.2 and 1.3: Nicira
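For the Open vSwitch entry in that list, 1.3 support has to be opted into explicitly per bridge; a minimal sketch (assuming a bridge named br0 already exists, and an OVS build recent enough to have the protocols column) looks like:

```shell
# Sketch: opting an Open vSwitch bridge into OpenFlow 1.3. The bridge
# name "br0" is an assumption; OVS 1.2/1.3 support was still partial
# at the time of writing.
ovs-vsctl set bridge br0 protocols=OpenFlow10,OpenFlow13

# Query the bridge using the 1.3 wire format; without -O, ovs-ofctl
# defaults to OpenFlow 1.0.
ovs-ofctl -O OpenFlow13 dump-flows br0
```

Listing both OpenFlow10 and OpenFlow13 lets older 1.0-only controllers keep working while the newer protocol is negotiated where both sides support it.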

As you can see from that list, some of these projects are supported by ONF members; by the rules of the organization their employees are able to participate in the standards process and will have access to the newest version before it is public. In principle, then, they could be part of the implementation process. But the implementation of those projects isn’t a solo effort. Contributions can and do come from many directions – sometimes even including coders who work for competing companies, but who are able to cooperate in order to move the larger project forward.

Right now these FOSS projects are the only available implementations of OpenFlow 1.3; commercial products are in the pipeline but won’t be available for some time (with the exact schedules subject to NDA, as usual).

The question that interests me is how these projects will handle the new process for the next version of the OpenFlow standard. Will they be able to incorporate the new features and capabilities before the standard is released, and be part of the process of developing interoperable implementations? It’s hard to imagine, given the ONF’s predilection for secrecy around its internal processes. Perhaps a new model will emerge, at least for projects sponsored by ONF members, in which a private development process splits away from the public one and rejoins it later, once the ONF allows? Or will these projects have to wait, and as a result lose ground to the commercial, closed-source implementations? Either of the latter outcomes would be a serious blow to the role of FOSS in the OpenFlow world, and a sad consequence of the ONF’s new requirements.

Bill Owens

Bill has had his hands in networks since 2400 baud was fast, but lately he thinks that things like DNS, IPv6 and OpenFlow are more fun. During the day he helps take care of a statewide optical/IP network. You can find him on Twitter as @owens_bill and lurking around lots of different network-related mailing lists.
  • Mike Fratto

    Yeah, having running code first does make sense and it might serve as a reference architecture that others can conform to. Standards are great and I bet the people developing them make every effort to produce well defined standards, but without running code, a reference architecture, and agreed upon conformance testing, the value of standards is greatly diminished. SIP is my example du jour.

  • Curt Beckmann

    The “working code before standard” requirement will ultimately be a very good thing, but having gone 3 versions on OF-Switch (plus 2 on OF-Config) without it means we’re playing a bit of catch-up. So OFS1.4 and OFC1.2 will pay the price, then OF1.5/OFC1.3 (or whatever the next versions are) will have a lighter load. Plus, as Mike’s comment suggests, the OFS1.4 and OFC1.2 specs will be better for having sorted out more practical questions arising from the working code efforts.

  • SilentLennie

    “With the goalposts constantly receding into the distance, products would never be released; they’d always be in development for the next version.”
    Sounds just like anything else in this technology sector. :-)