There’s been a lot of talk of late on the performance of centralized network controllers (such an odd thing to say when you think about it, but there it is). Ethan recently had a post up on the topic of scaling and SDNs that overlaps with this topic, and SDN Testing ran some interesting tests on a few controllers that are worth looking at. But as useful as pondering the theoretical picture is, and as useful as individual tests are, there’s a middle ground that often needs to be filled and isn’t. The middle ground?
How should we go about testing these controllers? Or rather, what sorts of tests should be run, what should be tested, and how should those things be tested? Without this sort of framework, each vendor can create a set of tests that favors their product over all others, so you just end up with a confusing array of benchmarks and charts that don’t really tell you much of anything. The networking press will, of course, offer some solution here, but the press itself isn’t always the best group of folks to design and build such tests. It’s not a matter of intent; it’s just a matter of expertise.
Who, then, should you turn to for such frameworks? A potentially surprising answer: The IETF. Yes, that one. In fact, the IETF has several working groups that focus on building frameworks for the measurement of networking systems, protocols, and implementations. Since the process for creating such a framework, within the context of the IETF, is rough consensus and running code, the resulting frameworks tend to be of higher than average quality.
In the realm of SDN controllers, the IETF has a draft currently on the table worth looking at; specifically Benchmarking Methodology for SDN Controller Performance (draft-bhuvan-bmwg-sdn-controller-benchmark-meth). This draft, current work within the Benchmarking Methodology Working Group (bmwg), makes for some interesting reading — even if you’re not interested in actually testing the performance of any particular SDN controller. Benchmarking drafts not only cover how to test things, they also cover what should be tested, and why. For instance, some of the test cases taken from the draft are:
- Measure the time taken by the SDN controller to discover the network topology (nodes and links), expressed in milliseconds.
- Measure the time taken by the SDN controller to process an asynchronous message, expressed in milliseconds.
- Measure the maximum number of asynchronous messages (session aliveness check messages, new flow arrival notification messages, etc.) a controller can process within the test duration, expressed in messages processed per second.
Each of these test cases is backed up with a full description of what to measure, where to measure it, how often to measure it, and other details.
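To make the first test case above a little more concrete, here is a minimal sketch of what a topology discovery measurement might look like in practice: poll the controller’s view of the topology until it reports the expected number of nodes, and report the elapsed time in milliseconds. The `poll_topology` callable is a hypothetical stand-in for whatever API your particular controller exposes — the draft itself specifies the methodology, not any vendor’s interface.

```python
import time

def measure_discovery_time(poll_topology, expected_nodes,
                           timeout_s=10.0, interval_s=0.01):
    """Return the milliseconds until poll_topology() reports at least
    expected_nodes discovered, or None if the timeout expires first."""
    start = time.monotonic()
    deadline = start + timeout_s
    while time.monotonic() < deadline:
        if poll_topology() >= expected_nodes:
            # Elapsed time in milliseconds, per the draft's units
            return (time.monotonic() - start) * 1000.0
        time.sleep(interval_s)
    return None  # discovery did not complete within the timeout

if __name__ == "__main__":
    # Stand-in "controller" that discovers one more node per poll
    discovered = []
    def fake_poll():
        discovered.append(1)
        return len(discovered)
    elapsed_ms = measure_discovery_time(fake_poll, expected_nodes=5)
    print(f"topology discovered in {elapsed_ms:.1f} ms")
```

A real test following the draft would also control how often the measurement is repeated and how results are reported, but the shape — start a clock, watch the controller converge on the full topology, stop the clock — is the core of it.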
This one draft should lead you farther into the work of the BMWG — everything from OSPF convergence to BGP convergence to general network convergence is covered in various drafts. Each set of measurements normally comes as a pair of drafts: one describing the measurements themselves, and another describing their importance and background, along with any other application notes.
The best part, in the case of the SDN controller draft, is that it’s still a work in progress. This means you can actually send comments to the authors, listed right there in the draft itself, and make a difference in the larger community.