The fundamental promise of OpenFlow – that any switches and any controllers can be used to build a network – was perhaps always a fantasy. Every protocol definition has corner cases unanticipated by its authors; OpenFlow is no exception. And to make it implementable across as wide a range of hardware and software systems as possible, the OpenFlow 1.0 standard has a significant number of optional features; even those that are required can be implemented in different ways on the underlying hardware, with very different results.
This article begins to examine some of the different ways that OpenFlow is implemented. I'm going to use five different switches as examples:

- Open vSwitch
- the Pica8
- the NEC PF5820
- the MLX
- the HP
The heart of OpenFlow, the 12-tuple of fields against which each packet is matched to determine how it will be handled, is the first place where implementations begin to differ. The standard does not make any of those fields optional, and indeed all production OpenFlow switches support them. The problem arises when one considers what ‘support’ really means.
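For reference, the twelve fields defined by OpenFlow 1.0 can be sketched as a simple structure; the field names follow the spec's `ofp_match`, and `None` here stands in for a wildcarded field:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Match:
    """The 12-tuple an OpenFlow 1.0 switch matches each packet against.
    A field left as None is wildcarded (not matched)."""
    in_port: Optional[int] = None       # ingress switch port
    dl_src: Optional[str] = None        # Ethernet source address
    dl_dst: Optional[str] = None        # Ethernet destination address
    dl_vlan: Optional[int] = None       # VLAN ID
    dl_vlan_pcp: Optional[int] = None   # VLAN priority
    dl_type: Optional[int] = None       # Ethertype (e.g. 0x0800 for IP)
    nw_tos: Optional[int] = None        # IP ToS bits
    nw_proto: Optional[int] = None      # IP protocol
    nw_src: Optional[str] = None        # IP source address
    nw_dst: Optional[str] = None        # IP destination address
    tp_src: Optional[int] = None        # TCP/UDP source port
    tp_dst: Optional[int] = None        # TCP/UDP destination port

    def fields_matched(self):
        """Names of the fields this flow entry actually matches on."""
        return {k for k, v in vars(self).items() if v is not None}
```

A flow that only cares about a VLAN and a port, for instance, is `Match(dl_vlan=10, in_port=3)`, leaving the other ten fields wildcarded.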
For a purely virtual implementation like Open vSwitch, there is no possibility of hardware acceleration of the matching process; every flow is handled by the system CPU, as is everything else, and the expectation is that performance will be determined by CPU power. Hardware switches are another story. In their natural mode of operation, they can move packets between all of their ports at full line rate, so it’s a reasonable assumption that they can do the same when used with OpenFlow. Sadly, that’s not the case.
Every hardware switch has a finite amount of TCAM, critical for implementing line-speed forwarding, and it can hold only a finite number of flows. A typical switch supports on the order of a thousand 12-tuple flows; among our examples the number ranges from 750 on the NEC PF5820 to 4000 on the MLX. But there's a twist. The PF5820 and some other switches can handle a vastly larger number of flows if those flows match only on the Layer 2 fields; in that case the TCAM will support more than 80,000 entries. A controller could easily exceed the 750-flow limit unless it's aware of the situation and can use Layer 2 matching for some traffic. Once the limit is reached, the switch might refuse to accept more flows, or it might try to fail more gracefully and process them in software; of course, it's anyone's guess which way the controller wanted the flows to be handled, with a 50-50 chance of getting it right.
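The defensive logic a controller needs here can be sketched roughly as follows. The limits are the ones quoted above; the class, its method names, and the fallback policy are invented for illustration, not any real controller's API:

```python
# Hypothetical sketch: a controller-side planner that tracks how many
# 12-tuple flows a switch model can hold in TCAM and degrades to a
# Layer-2-only match when a full 12-tuple flow would no longer fit.

FULL_TUPLE_LIMIT = {          # flows with all 12 fields available
    "NEC PF5820": 750,
    "MLX": 4000,
}
L2_ONLY_LIMIT = {             # flows matching only Layer 2 fields
    "NEC PF5820": 80000,
}

class FlowPlanner:
    def __init__(self, model):
        self.model = model
        self.full_flows = 0
        self.l2_flows = 0

    def plan(self, needs_l3_match):
        """Decide whether the next flow can use a full 12-tuple match,
        must degrade to an L2-only match, or falls out of hardware."""
        if self.full_flows < FULL_TUPLE_LIMIT.get(self.model, 0):
            self.full_flows += 1
            return "12-tuple"
        if not needs_l3_match and self.l2_flows < L2_ONLY_LIMIT.get(self.model, 0):
            self.l2_flows += 1
            return "l2-only"
        return "software"     # or rejected outright, depending on the switch
```

On a PF5820 this planner hands out 750 full-tuple flows and then, for traffic that doesn't need Layer 3 matching, keeps going with L2-only entries instead of tipping the switch into its undefined overflow behavior.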
And it gets worse – for some hardware, the number of flows is only one kind of limitation. It’s important to keep in mind that most switches weren’t designed with anything like OpenFlow in mind, especially when their interface ASICs were laid out. The chips do a fine job of switching, and frequently handle basic Layer 3 functions as well, but OpenFlow asks for a great deal more. The Pica8 and NEC both support 12-tuple flows in hardware. The MLX can handle all 12 matches, but not all at once; each port has to be preconfigured in either Layer 2 or Layer 3 mode, which determines which fields are active. The HP has the most complex story of all. First, the rules are slightly different depending on the chip’s generation (HP calls them v1 and v2). Considering just the v2 rules, for the sake of some simplicity, we find that a flow will be in hardware if it matches on:
- VLAN ID, VLAN priority and/or input port
- VLAN ID, VLAN priority, input port and/or any of the IP and Layer 4 fields – but only if the Ethertype is IP (0x800)
- VLAN ID, VLAN priority, input port and/or source/destination Ethernet addresses – but only if the Ethertype is not IP
Other combinations will happily be accepted, but they’ll be handled in software. That’s not so bad for some traffic, but if a controller chooses to push a file transfer or a video stream by using a flow match that doesn’t fit one of the three hardware categories, the switch stops being a wire-rate Gigabit Ethernet performer and becomes a 1 Mbps chokepoint. That limit is configurable by adjusting the packets per second that the CPU is allowed to handle, but increasing it much beyond the default of 1000 runs the risk of overwhelming the processor.
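Encoded as a predicate, the three v2 rules above look roughly like this. Field names follow OpenFlow 1.0's `ofp_match`; how the switch treats a wildcarded Ethertype in the third case is my assumption, since the rule only says "not IP":

```python
# Sketch of the HP v2 hardware-match rules as described above. Takes
# the set of field names a flow matches on, plus the Ethertype value
# if matched (None when wildcarded); returns True if the flow should
# land in the hardware path.

BASE  = {"dl_vlan", "dl_vlan_pcp", "in_port"}
IP_L4 = {"nw_src", "nw_dst", "nw_proto", "nw_tos", "tp_src", "tp_dst"}
ETH   = {"dl_src", "dl_dst"}
IP_ETHERTYPE = 0x0800

def in_hardware(fields, dl_type=None):
    extras = fields - BASE - {"dl_type"}
    if not extras:                       # rule 1: VLAN/port fields only
        return True
    if extras <= IP_L4:                  # rule 2: plus IP/L4 fields...
        return dl_type == IP_ETHERTYPE   # ...but only if Ethertype is IP
    if extras <= ETH:                    # rule 3: plus Ethernet addresses...
        # ...but only if the Ethertype is (assumed: explicitly) not IP
        return dl_type is not None and dl_type != IP_ETHERTYPE
    return False                         # anything else: software path
```

The dangerous case from the paragraph above is the last one: a match that mixes, say, an Ethernet source with an IP destination satisfies none of the three rules, and the flow quietly drops to the CPU.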
None of these limitations is fatal; meaningful work can be done with any of these switches, provided the controller is aware of them and chooses its flow rules accordingly. Sadly, the OpenFlow standard doesn't provide any mechanism for the switch to communicate this nuance of its capabilities. There is a way for the controller to ask each switch what it can do, but since the standard requires all of the match fields to be supported, they aren't included in the response. And there's no way for the switch to say what it can do in hardware versus software, what combinations of fields are available, or how many flows it can support.
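The capability information a switch can volunteer is just a coarse bitmap in its features reply; decoding it makes the gap obvious. The bit values below are from the OpenFlow 1.0 `ofp_capabilities` enum:

```python
# The ofp_capabilities bits an OpenFlow 1.0 switch can report in its
# features reply. Note what is absent: nothing about TCAM size, flow
# capacity, or which field combinations stay in hardware.
OFP_CAPABILITIES = {
    1 << 0: "OFPC_FLOW_STATS",
    1 << 1: "OFPC_TABLE_STATS",
    1 << 2: "OFPC_PORT_STATS",
    1 << 3: "OFPC_STP",
    1 << 5: "OFPC_IP_REASM",
    1 << 6: "OFPC_QUEUE_STATS",
    1 << 7: "OFPC_ARP_MATCH_IP",
}

def decode_capabilities(bitmap):
    """Names of the capability bits set in a features-reply bitmap."""
    return sorted(name for bit, name in OFP_CAPABILITIES.items()
                  if bitmap & bit)
```

A switch reporting `0x07`, for example, is advertising flow, table, and port statistics, and that's essentially the whole story the protocol lets it tell.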
It’s obvious that the controller’s job becomes much more difficult when these quirks and customizations are considered; the value of a standard mechanism for programming flows on a switch is substantially eroded. And unfortunately the situation only gets worse as we look deeper into the protocol, but that’s a matter for the next installment.