What is the value proposition of Standards in the age of Open Source?

I’ve been thinking about this question quite a bit over the last year [0], and interestingly a debate over just this issue has recently erupted in the blogosphere (and elsewhere). Vidya Narayanan, who reignited the discussion with her blog post “Why I Quit Writing Internet Standards” [1], calls for a “radical restructuring” of the IETF, IEEE and what she calls “SDN organizations” (left unnamed). Among other issues, she cites the speed of creating standards, leadership (and leadership selection processes), and consensus-based decision making as candidate areas for revamping. Other blogs defend the status quo, and the IETF in particular [2].

As an aside, I agree with much of what Narayanan is saying. However, she doesn’t seem to see open source as an alternative to traditional standardization; suffice it to say that I do. In any event, what seems to be missing in this discussion is a careful analysis of the relationship between standards and open source, where the strengths of each lie, and what synergies, if any, can be found between the two approaches. While this is a huge issue that deserves wider community consideration, I’ll focus here on one particular “impedance mismatch”: the IETF requirement for two or more interoperable implementations.

Before diving into this, a bit of history might be instructive. When I started in the IETF (circa 1990), implementation was of crucial importance because, among other reasons, it was important to show that the protocols we designed were actually efficiently implementable. The IETF was very much about implementation in those days, and working groups frequently resembled what we might recognize today as code reviews (although the code was generally handwritten on transparencies). For example, during the IPNG process each of the contenders had one or more implementers whose primary responsibility was to show that the proposal could in fact be implemented in an efficient and scalable manner. Later, Dave Clark’s now famous IETF maxim [3] codified not only the importance of code but also of consensus as a decision-making process (I’ll just note here that consensus works more efficiently where there are common overarching goals, a key characteristic of the IETF community in those days). By the way, one of the ways we learned in the early days (and still do today) was by doing implementations; there are few better ways to learn about a protocol or system than to implement it. In any event, running code was key to our culture, and standardization was a back-end process. Perhaps somewhat paradoxically, today’s open source movement captures the very best of the spirit of Clark’s famous statement.

As the Internet matured and became commercialized, many protocol implementations became “closed source” and we saw the emergence of commercial, vertically integrated embedded systems (in this case, routers). Running code was still important (of course), but the code itself was hidden. As a result, much of our focus was naturally on wire protocols (as those were the only thing we could observe and/or measure). We needed two or more interoperable implementations for several reasons. In particular, the existence of multiple interoperable implementations provided two important data points: first, it told us that the protocol in question was sufficiently specified that we could create independent, interoperable implementations, and second, that the wire protocol (again, all we could really see) was correctly implemented. Standards and the resultant interoperable implementations created the possibility of multi-vendor networks, which not only drove various economic factors but also gave us bug/supply chain/… independence (in theory, anyway). All goodness.
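To make concrete what the “sufficiently specified” test actually catches, here is a minimal, entirely hypothetical sketch (a toy framing format, not any real IETF protocol): two independent implementations each read an ambiguous length field differently, and running one implementation’s decoder against the other’s encoder surfaces the ambiguity immediately. This is exactly the kind of signal that two interoperable implementations were meant to provide.

```python
# Hypothetical toy framing format, for illustration only.
# Ambiguous spec: "the first two bytes carry the length" -- but the length of what?
import struct

def encode_a(payload: bytes) -> bytes:
    # Implementation A reads the spec as: length counts the payload only.
    return struct.pack("!H", len(payload)) + payload

def encode_b(payload: bytes) -> bytes:
    # Implementation B reads the spec as: length includes the 2-byte header.
    return struct.pack("!H", len(payload) + 2) + payload

def decode_a(frame: bytes) -> bytes:
    # Implementation A's decoder enforces its own reading of the spec.
    (length,) = struct.unpack("!H", frame[:2])
    payload = frame[2:]
    if length != len(payload):
        raise ValueError(f"length mismatch: header says {length}, payload is {len(payload)} bytes")
    return payload

if __name__ == "__main__":
    msg = b"hello"
    assert decode_a(encode_a(msg)) == msg   # A interoperates with itself
    try:
        decode_a(encode_b(msg))             # cross-implementation test
    except ValueError as err:
        print("interop failure exposes the ambiguous spec:", err)
```

Each implementation passes its own tests; only the cross-implementation run reveals that the specification was incomplete.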

Note that independent code bases (or “code diversity”, as it is frequently informally called) can be a very good thing; all you have to say these days is SSL. So this is one place where open source projects could potentially suffer. On the other hand, when source code is available and there is a sufficiently vibrant community around that code base, new kinds of verification, debugging and tooling come into play. In theory, more eyes should mean more scrutiny, which in turn should mean more robust code. Why this didn’t happen with SSL/Heartbleed is an excellent case study and has been widely discussed in the popular press. In summary, while no code base is immune from bugs and other mis-implementation, the transparency of the open source development methodology is reason to be optimistic.

That said, what is the meaning of the “two independent implementations” requirement in the open source world of today? That is a bit harder to pin down. As I mentioned above, this requirement derives from at least two concerns: first, during a time when implementations were closed, proprietary, vertically integrated systems, all we could see was what was on the wire (so of course we wanted multiple implementations, for all of the reasons mentioned above). Second, we all have the desire to avoid the kind of catastrophes that can derive from monocultures (wherever they arise). Open source development addresses these concerns in a very different way. In particular, since everyone is using the same code base (modulo non-upstreamed modifications), what is “on the wire” is largely a non-issue from the interoperability perspective; but that, in any case, is not the primary issue. Rather, the impedance mismatch is around what multiple implementations even means in an open source setting. For example, if I require two interoperable implementations in an open source setting, what interoperates with OpenStack? Does the question even make sense? It doesn’t seem to. Things just work differently in open source, as it is the code base, and not an external specification, that is being developed by the community. In fact, in open source, documentation is in many cases auto-generated from the code itself. Clearly this is an area where the open source community and the IETF can work together to better understand each other’s community and culture.
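As a small illustration of documentation being derived from the code itself (the module and function below are hypothetical, not from any real project), standard tooling such as pydoc or Sphinx autodoc can render reference documentation directly from docstrings, so the code base and its “specification” evolve together rather than as separate artifacts.

```python
"""toy_client: a hypothetical module used only to illustrate docs-from-code.

Running `python -m pydoc toy_client` (or pointing Sphinx autodoc at this module)
produces reference documentation straight from these docstrings.
"""


def connect(endpoint: str, timeout: float = 5.0) -> None:
    """Open a connection to *endpoint*, giving up after *timeout* seconds.

    The body is a stub; only the signature and docstring matter here,
    because they are what the documentation tooling extracts.
    """
    raise NotImplementedError
```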

A final thought on all of this. Today’s technology artifacts, as well as the engineering systems that we use to build them, are all evolving at an incredible, accelerating rate. The open source development methodology is not only a primary driver of this acceleration but is also uniquely adapted to take advantage of it (by the way, this is a great example of autocatalysis [4]). The IETF standards process, on the other hand, suffers from a well-understood and frequently discussed structural inability to act quickly. This acceleration is what fundamentally characterizes our industry today, and it is the inability of the IETF to respond to this trend that is at the core of many of the concerns voiced in any number of fora (along, of course, with the discussions we’ve had over governance, leadership, and decision making; note that in some sense these are all facets of the same problem). Narayanan astutely sums up the fundamental challenge for the IETF and standards processes in general: “Unless these standards organizations make radical shifts towards practicality, their relevance will soon be questionable.” I couldn’t agree more.

A few references

[0] http://www.sdncentral.com/education/david-meyer-reflections-opendaylight-open-source-project-brocade/2014/03/

[1] http://gigaom.com/2014/04/12/why-i-quit-writing-internet-standards

[2] http://packetpushers.net/internet-standards-still-important/

[3] http://groups.csail.mit.edu/ana/People/DDC/future_ietf_92.pdf

[4] http://en.wikipedia.org/wiki/Autocatalysis

David Meyer
David Meyer is currently CTO and Chief Scientist at Brocade Communications, where he works on future directions for Internet technologies. Prior to joining Brocade, he was a Distinguished Engineer at Cisco Systems, where he also worked as a developer, architect, and visionary on future directions for Internet technologies. He is currently the chair of the Technical Steering Committee of the OpenDaylight Project. He has been a member of the Internet Architecture Board (IAB) of the IETF (www.ietf.org) and the chair/co-chair of many working groups. He is also active in the operator community, where he has been a long-standing member of the NANOG (www.nanog.org) program committee (and program committee chair from 2008-2011). He is also active in other standards organizations such as ETSI, ATIS, ANSI T1X1, the Open Networking Foundation, and the ITU-T. Mr. Meyer is also currently Director of the Advanced Network Technology Center at the University of Oregon, where he is also a Senior Research Scientist in the Department of Computer Science. One of his major projects at the University of Oregon is routeviews (see www.routeviews.org). Prior to joining Cisco, he served as Senior Scientist, Chief Technologist and Director of IP Technology Development at Sprint.
  • riw777

    Dave — while I think the IETF is broken, I don’t think “open source,” is going to solve the problem. I know the OpenStack community feels really tight and strong right now, but so did the IETF when it first began. What the IETF is suffering right now, OpenStack (and all open source projects) will suffer. In fact, looking around at the OpenStack situation, it’s been said (within my earshot) that it’s a mess — too many egos in the room, too many people adding stuff for their own personal gain, etc.

    So I honestly don’t think the situation is as bleak on the standards front as you paint it, nor as rosy on the open source front.

    To answer your specific question, “what is the meaning of having two interoperable implementations in the open source world,” think about the problem a different way than what you’ve posted above. I look around and find hundreds of abandoned open source projects. Someone, somewhere, put their data, their operations, etc., on those projects, and lost. They lost time and money. So the question might not be the same, but there are still questions.

    For instance, rather than, “are there two implementations,” maybe a question worth asking might be, “how many versions of open source northbound APIs do we need?” Or, “how many plugins developed for this project will interoperate when installed at the same time?” Or… “Will the project be around in five years? Or will I be replacing it with yet another open source project, while this one collapses because it’s not new/shiny/cool, or the code has gotten so unwieldy it can no longer be maintained, or the startup that founded it has made a couple of tens of millions and moved on to something else?”

    My point is this: I think there is a place for open source. OTOH, I’m not convinced that open source, without open standards, is going to work. Sure, if you want you can just invent another method to carry this or that piece of information. But what happens when everyone invents “just another way to carry this or that data”? How are a bunch of different open source projects, each carrying the same data in a bunch of different ways, any different from all those tunneling protocols the IETF has abandoned? Is it really true that every open source project from now until forever is going to use the same data formats just because the community chooses to?

    Sorry, but I don’t think so.

    Open standards and open source are both necessary. Neither is sufficient on its own.

    It’s not time to abandon the IETF, it’s time to fix it.

    Just my 2c.
