We’ve talked a little about the structure of the IETF, and the process a draft follows when moving from submission to draft to RFC… The perennial question is, though — why does it take so long? Or, perhaps — why is the IETF so broken? Let me begin here: the IETF is a human organization. That means, if you’ve not sorted it yet, that it’s made up of humans. Humans who want to keep their jobs, make a name for themselves, pad their resumes, “win,” and all the rest. So before I delve into the various problems that exist in the IETF, remember that they’re likely the same problems you face on a daily basis in any organization you work in. In fact, they infect open source, large companies, small companies, and even companies of precisely one. All of that said, let’s look at a few ways in which the IETF process seems, from my perspective, to go awry.
The Flood. Sometimes, when a new area of work opens up, because everyone wants their name on a draft, a ton of people jump in, trying to find someplace they can get a draft in edgewise. This causes a “flood” of drafts — such a large amount of information that anyone who is even remotely human simply can’t read it all. Or at least anyone who actually has a job and/or a life beyond reading IETF drafts.
Sometimes the flood isn’t intentional — someone has a great idea and works with a small team of people to get the work started with a solid set of drafts that unintentionally turn into a flood. Either way, the flood of new work slows the process down, having the opposite of its intended effect.
Other times, the flood is the intentional effect of a set of authors who spend a good deal of time building a wide array of drafts and releasing them very quickly over a short period of time — in order to reduce the possibility that someone else is going to “muscle in” on their territory (technology).
The result of the flood is often a set of complex, difficult-to-read standards documents that aren’t always well thought out. In the rush to cover every corner case, the law of unintended consequences isn’t given enough consideration, and all sorts of things creep into drafts that are never implemented, cause problems down the road, or make it possible to implement the same technology in multiple ways, creating lots of interoperability problems.
The Funding Slap Down. It might seem odd, but funding actually has a lot to do with which protocols or ideas finally pass through the IETF process and on to RFC-hood. Two specific illustrations might help to clarify. First, the IETF is based on rough consensus and running code. If someone invests in running code, then, it’s often the case that their standard will win. If you’re a vendor, then, you can go down the path of coding it first so your version of the standard wins — whether or not it’s the best technical solution. Second, say you’re a government agency and you’d like a particular standard to win in the IETF. Throw some research dollars at the right universities, and promise the right set of vendors the right amount of money (“we’ll buy your product if you do it this way”), and you can steer the standard down a specific path.
Now, in theory, none of this ever happens at the IETF. The IETF is a purely technical body made up of cyborgs rather than humans, each of which is perfectly fair and cannot be influenced by things like money, fame, or power. But theories involving perfect (or perfectible) people are much like unicorns.
Next time we’ll continue the conversation with a few more places the IETF process can go wrong, then we’ll take a more serious look at the workload of the IETF, and why things take so long even when everything goes right.