I’ve been thinking about this question quite a bit over the last year, and interestingly, a debate over just this issue has recently erupted in the blogosphere (and elsewhere). Vidya Narayanan, who reignited the discussion with her blog “Why I Quit Writing Internet Standards”, calls for a “radical restructuring” of the IETF, the IEEE, and what she calls “SDN organizations” (left unnamed). Among other issues, she cites the speed of creating standards, leadership (and leadership selection processes), and consensus-based decision-making processes as candidate areas for revamping. Other blogs defend the status quo, and the IETF in particular.
As an aside, I agree with much of what Narayanan is saying. However, she doesn’t seem to see open source as an alternative to traditional standardization; suffice it to say that I do. In any event, what seems to be missing in this discussion is a careful analysis of the relationship between standards and open source, where the strengths of each lie, and what synergies, if any, can be found between the two approaches. While this is a huge issue that deserves wider community consideration, I’ll focus here on one particular “impedance mismatch”: the IETF requirement for two or more interoperable implementations.
Before diving into this, a bit of history might be instructive. When I started in the IETF (circa 1990), implementation was of crucial importance because, among other reasons, it was important to show that the protocols we designed were actually efficiently implementable. The IETF was very much about implementation in those days, and working groups frequently resembled what we might recognize today as code reviews (although the code was generally hand written on transparencies). For example, during the IPng process each of the contenders had one or more implementers whose primary responsibility was to show that the proposal could in fact be implemented in an efficient and scalable manner. Later, Dave Clark’s now famous IETF maxim (“We reject: kings, presidents and voting. We believe in: rough consensus and running code.”) codified not only the importance of code but also of consensus as a decision-making process (I’ll just note here that consensus works more efficiently where there are common overarching goals, a key characteristic of the IETF community in those days). By the way, one of the ways we learned in the early days (and still do today) was by doing implementations; there are few better ways to learn about a protocol or system than to implement it. In any event, running code was key to our culture, and standardization was a back-end process. Perhaps somewhat paradoxically, today’s open source movement captures the very best of the spirit of Clark’s famous statement.
As the Internet matured and became commercialized, many protocol implementations became “closed source” and we saw the emergence of commercial, vertically integrated embedded systems (in this case, routers). Running code was still important (of course), but the code itself was hidden. As a result, much of our focus was naturally on wire protocols (as that was the only thing we could observe and/or measure). We needed two or more interoperable implementations for several reasons. In particular, the existence of multiple interoperable implementations provided two important data points: first, it told us that the protocol in question was sufficiently specified that we could create independent, interoperable implementations, and second, that the wire protocol (again, all we could really see) was correctly implemented. Standards and the resultant interoperable implementations created the possibility of multi-vendor networks, which were key not only to driving various economic factors but also gave us bug/supply chain/… independence (in theory anyway). All goodness.
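To make the interoperability idea concrete, here is a minimal sketch of what an interop check looks like in spirit. This is an entirely hypothetical toy wire format (a one-byte type, a two-byte big-endian length, then the payload), not any real IETF protocol: two independently written encoders are cross-checked by decoding one implementation’s bytes and comparing against the other’s, which is essentially what multiple interoperable implementations demonstrated at the wire level.

```python
import struct

# Implementation A: packs the header with the struct module.
def encode_a(msg_type: int, payload: bytes) -> bytes:
    return struct.pack("!BH", msg_type, len(payload)) + payload

# Implementation B: written independently, builds the header byte by byte.
def encode_b(msg_type: int, payload: bytes) -> bytes:
    length = len(payload)
    return bytes([msg_type, length >> 8, length & 0xFF]) + payload

# A decoder; in a real interop test each side would have its own.
def decode(wire: bytes):
    msg_type, length = struct.unpack("!BH", wire[:3])
    return msg_type, wire[3:3 + length]

# Cross-implementation check: both produce identical bytes on the wire,
# and the decoder recovers the original message from either.
wire_a = encode_a(1, b"hello")
wire_b = encode_b(1, b"hello")
assert wire_a == wire_b
assert decode(wire_a) == (1, b"hello")
```

If the specification were ambiguous (say, about byte order of the length field), the two implementations would disagree here; that disagreement is precisely the signal the two-implementation requirement was designed to surface.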
Note that independent code bases (or “code diversity”, as it is frequently informally called) can be a very good thing; all you have to say these days is SSL. So this is one place where open source projects could potentially suffer. On the other hand, when source code is available and there is a sufficiently vibrant community around that code base, new kinds of verification, debugging, and tooling come into play. In theory, more eyes should mean more scrutiny, which in turn should mean more robust code. Why this didn’t happen with SSL/Heartbleed is an excellent case study and has been widely discussed in the popular press. In summary, while no code base is immune from bugs and other mis-implementations, the transparency of the open source development methodology is reason to be optimistic.
That said, what is the meaning of the “two independent implementations” requirement in the open source world of today? That is a bit harder to understand. As I mentioned above, this requirement derives from at least two concerns: first, during a time when implementations were closed, proprietary, vertically integrated systems, all we could see was what was on the wire (so of course we wanted multiple implementations, for all of the reasons mentioned above). Second, we all want to avoid the kind of catastrophes that can arise from monocultures (wherever they arise). Open source development addresses these concerns in a very different way. In particular, since everyone is using the same code base (modulo non-upstreamed modifications), what is “on the wire” is largely a non-issue from the interoperability perspective; but interoperability is not the primary issue. Rather, the impedance mismatch is around what multiple implementations even means in an open source setting. For example, if I require two interoperable implementations in an open source setting, what interoperates with OpenStack? Does the question even make sense? It doesn’t seem to. Things just work differently in open source, as it is the code base, and not an external specification, that is being developed by the community. In fact, in open source, documentation is in many cases auto-generated from the code itself. Clearly this is an area where the open source community and the IETF can work together to better understand each other’s community and culture.
A final thought on all of this. Today’s technology artifacts, as well as the engineering systems that we use to build them, are all evolving at an incredible, accelerating rate. The open source development methodology is not only a primary driver of this acceleration but is also uniquely adapted to take advantage of it (by the way, this is a great example of autocatalysis). The IETF standards process, on the other hand, suffers from a well-understood and frequently discussed structural inability to act quickly. This acceleration is what fundamentally characterizes our industry today, and it is the inability of the IETF to respond to this trend that is at the core of many of the concerns voiced in any number of fora (along, of course, with the discussions we’ve had over governance, leadership, and decision making; note that in some sense these are all facets of the same problem). Narayanan astutely sums up the fundamental challenge for the IETF and standards processes in general: “Unless these standards organizations make radical shifts towards practicality, their relevance will soon be questionable.” I couldn’t agree more.