OpenFlow State of the Union: Reflections on the OpenFlow Symposium

Listening to the OpenFlow Symposium panel speakers today (enough brain power in the room to blow the roof off the hotel), I took away a few interesting points. At least, I took away my spin on a few interesting points.

  1. What’s the status of OpenFlow as a usable protocol? Well, folks ARE using it. But it would be most fair to categorize OpenFlow as immature, emerging, and still trying to sort out what direction it’s heading in. The OpenFlow 1.0 spec is what production implementations are running on, not 1.1. The 1.1 spec seems to be a non-starter for vendors due to issues that will be resolved in OpenFlow 1.2. Speakers today characterized the 1.0 spec as simplistic: you can still do interesting things with it, but the functionality is limited. OpenFlow 1.2, 1.3, and 1.4 are being worked on. OpenFlow 2.0 is in the “we’re waiting for customer input to know where we’re going with it” state.
  2. What can I do with OpenFlow? Hmm. This was one of the fuzzier areas today. Based on the spec, a controller programs the flow table of an OpenFlow switch to forward traffic in a way that you specify, based on certain criteria in the frame or packet (see the first sketch after this list). The general direction folks were heading was related to traffic steering, usually at the access layer. Organizations with specific, well-known-to-them, generally static, and predictable compute environments have also leveraged OpenFlow to do specific things. More ideas were thrown around than concrete “this is exactly what we’re doing” examples. I believe the OpenFlow magic will come when we see vendors writing clever applications that inform the controller in novel ways. I believe this will manifest itself most keenly in the areas of network virtualization, software defined networking (a term that somewhat means whatever you want it to mean at the moment), and automation. But I also believe that OpenFlow is only one piece of these rather large puzzles.
  3. Is OpenFlow going to disrupt the networking industry? The answers to this question from the panel ranged from “it already has” to “not really.” Some vendors felt that those ignoring OpenFlow would be made irrelevant in the marketplace before too long. Others thought it unlikely to change the market landscape. From my perspective, the answer to this question of market disruption is most closely tied to commoditized switches. Specifically, if de-coupling the control plane from the switching hardware allows OpenFlow-capable network switches to be produced dirt-cheap compared to their integrated cousins, then that could be disruptive. Maybe. And the reason I say “maybe” is that it depends a great deal on what you think you can do with your OpenFlow controller(s) and cheap switches, and how (and when) that might drive your purchasing decisions. That said, do you think you’re going to replace your entire data center with OpenFlow? Probably not, even if you could…which you can’t. (Next question.)
  4. Assuming I wanted to, can I replace my entire data center with OpenFlow? Not today. Not tomorrow. And my guess is, probably not ever. Limited functionality in the OpenFlow spec is one reason (at least today; admittedly, new specs are coming), but a more pressing reason is a scaling challenge related to the forwarding of microflows, which I believe means devices like load-balancers are not going to be replaced anytime soon. What’s a microflow? Well, if a regular old data flow could be characterized as a forwarding decision made by destination IP address, then a microflow is when you start to make forwarding decisions on more granular frame or packet characteristics. In a straightforward routing application, there aren’t a lot of flows to be considered. You can have a whole lot of conversations heading for a /24 netblock that would all be considered a single flow, because they’re all forwarded based on a single criterion defined by a single entry in the flow table. But when you get more granular, say forwarding non-contiguous IP blocks with sundry destination ports (typical in large load-balancer environments), then you’re dealing with microflows. That’s harder to scale. The scaling issue is within the OpenFlow switch itself. When the switch sees a packet that does not match any entry in its flow table, it punts that packet to the CPU. The switch CPU in turn forwards that packet to the OpenFlow controller. My understanding is that the switch CPU is where the bottleneck is: NEC suggested to me that an OpenFlow switch CPU can effectively punt about 1,000 unknown flows per second (see the second sketch after this list). The controller, probably running on a multi-core Intel CPU, is actually NOT the bottleneck, although punting to the switch CPU, forwarding to the controller, and the controller making a decision about the flow and writing the flow entry into the switch flow table all add latency (in the tens of microseconds). I’m not going to get into other large-scale concerns that OpenFlow faces relating to controller-to-switch communication integrity and controller hierarchy (as in, there is no such thing today), but there are other concerns to be sure.
  5. What is the OpenFlow killer app? I feel that the killer app is that you can now add features to your network that you couldn’t get your favorite networking vendor to add for you. If OpenFlow is a unicorn, you get to make it cry whatever tears you’re looking for. That’s it. That’s the killer app, at least insofar as I could see today. It’s not that there’s some specific, well-defined, universally recognized networking problem that OpenFlow solves. It’s more that OpenFlow enables network programmability via a predictable interface. Therefore, if there’s some unique problem you happen to have that traditional networking architectures can’t resolve for you due to technical, practical, or financial limitations, then OpenFlow might just provide a liberating answer.
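
To make the flow-table idea in point 2 concrete, here’s a minimal sketch of the match/action model at the heart of OpenFlow 1.0. To be clear, this is not any real controller’s API; every name, address, and port number in it is invented for illustration. The point is the division of labor: the controller installs entries, and the switch forwards each packet according to the highest-priority entry it matches.

```python
# Minimal, hypothetical sketch of an OpenFlow-style flow table.
# Not a real controller API -- all names here are invented.
import ipaddress

flow_table = []  # entries: (priority, match, action); higher priority wins

def install_flow(priority, match, action):
    """What a controller does: write an entry into the switch's flow table."""
    flow_table.append((priority, match, action))
    flow_table.sort(key=lambda entry: -entry[0])

def matches(packet, match):
    """Does this packet satisfy every criterion in the entry's match?"""
    for field, wanted in match.items():
        if field == "dst_ip":
            # Prefix match on destination IP, like a routing-style flow.
            if ipaddress.ip_address(packet["dst_ip"]) not in ipaddress.ip_network(wanted):
                return False
        elif packet.get(field) != wanted:
            # Exact match on any other header field (ports, MACs, VLAN, ...).
            return False
    return True

def forward(packet):
    """What the switch does per packet: first matching entry decides."""
    for _priority, match, action in flow_table:
        if matches(packet, match):
            return action
    return "punt to controller"  # table miss

# Traffic steering, the use case the panel kept coming back to: web traffic
# for 10.1.0.0/16 goes out port 2, everything else for that block out port 1.
install_flow(200, {"dst_ip": "10.1.0.0/16", "tcp_dst": 80}, "output:2")
install_flow(100, {"dst_ip": "10.1.0.0/16"}, "output:1")

print(forward({"dst_ip": "10.1.4.9", "tcp_dst": 80}))   # -> output:2
print(forward({"dst_ip": "10.1.4.9", "tcp_dst": 22}))   # -> output:1
print(forward({"dst_ip": "192.0.2.7", "tcp_dst": 80}))  # -> punt to controller
```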
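
And to put rough numbers on the microflow scaling concern in point 4, here’s a back-of-the-envelope sketch. The only sourced figure is the ~1,000 punts per second NEC quoted for a switch CPU; the connection rate is an assumption I’ve picked purely for illustration.

```python
# Back-of-the-envelope arithmetic for the microflow scaling problem.
SWITCH_PUNT_CAPACITY = 1_000       # flows/sec a switch CPU can punt (NEC's figure)
NEW_CONNECTIONS_PER_SEC = 50_000   # assumed load-balancer-style connection rate

# Destination-prefix forwarding: one pre-installed /24 entry covers every
# conversation headed to that netblock, so new connections cause no misses.
coarse_misses_per_sec = 0

# Microflow forwarding: the decision depends on the full 5-tuple, so every
# new connection is a table miss that must be punted to the controller.
micro_misses_per_sec = NEW_CONNECTIONS_PER_SEC

for label, misses in [("coarse /24 flows", coarse_misses_per_sec),
                      ("per-connection microflows", micro_misses_per_sec)]:
    load = misses / SWITCH_PUNT_CAPACITY
    verdict = "within budget" if load <= 1 else f"overloaded {load:.0f}x"
    print(f"{label}: {misses} misses/sec -> {verdict}")
```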

More to come on this line of thought. NEC is presenting at Tech Field Day, and I’m keen to hear more about how their shipping OpenFlow solution might bear on the enterprise, the corner of networking that I call home.

What do you think? Did I get something wrong? No worries – I’m happy to correct anything I got wrong or context I’m missing. Just let me know in the comments.

Ethan Banks
Ethan Banks, CCIE #20655, has been managing networks for higher ed, government, financials and high tech since 1995. Ethan co-hosts the Packet Pushers Podcast, which has seen over 2M downloads and reaches over 10K listeners. With whatever time is left, Ethan writes for fun & profit, studies for certifications, and enjoys science fiction. @ecbanks
  • http://twitter.com/mzawirski Michal Zawirski

    Nice summary!

    On #4 – I think we’re more likely to first see an SDN-like DC routing/switching infrastructure, with load balancers, firewalls, and other “middleboxes” still using the classic architecture. It seems that OF already has (or will have shortly) the right set of primitives to program a DC switching fabric.

    Speaking of “middleboxes,” microflows are only one of the issues. If the current performance with the controller “inline” is ~1,000 flows/s, there’s a huge performance gap to be filled… even pure software load balancers (when running on decent servers) can handle hundreds of thousands of requests per second.

    Another problem is that packet flows through L4+ devices are completely out of scope of the current OF spec, and unless OF “bloats,” I see no way of handling complex higher-layer behaviors (in terms of “matches” and “actions”), e.g. keeping track of TCP session state, HTTP request analysis and proxying (for L7 load balancing), not to mention SSL decryption, which often comes into play. Middleboxes do not have a clear separation between the control and forwarding planes, which does not make them good SDN players.

  • Pingback: OpenFlow Symposium in San Jose – Networks as Application Stacks | Ethernet Fabric

  • Pingback: The Killer App For OpenFlow and SDN? Security. | Rational Survivability

  • http://keepingitclassless.net/ Matt Oswalt

    Yeah, I had a lot of trouble with the question of in-line vs. out-of-band communication between the controllers and switches. It would be interesting to see how that develops. Your idea of a “killer app” is a good one, because it’s true that it allows us to develop our own features, or at the very least go to a 3rd party for them.

    I’d be interested to see how OpenFlow is applied to virtualization and the myriad problems brought on by it. Today VXLAN…tomorrow, “VOFLAN”

    • http://packetattack.org Ethan Banks

      Can’t wait to write about what NEC is selling based on OpenFlow. I have to think through my post and review the audio, but what they’re doing is so cool it almost made me cry. I had a serious, visceral, emotional reaction to the demo.

      • http://cdplayer.livejournal.com/ Dmitri Kalintsev

        Would the potential post be here, on this blog?

        • http://packetattack.org Ethan Banks

          Yes. At the moment, the vast majority of my writing is here. It will take a bit to re-digest the NEC demo and compose a post. My schedule is crammed for the next few days.

          • http://twitter.com/cloudtoad Derick Winkworth

            You see why I was so in love with Big Switch when I left there? You are having the same reaction I had… It’s pretty amazing. I want to manage my data center with this…

          • http://packetattack.org Ethan Banks

            Whether I want to manage my data center with OF or not is an open-ended question for me, at least today. In principle, though, yes, I’m right there with you. But I need to see a robust application/controller combination to be confident that *I’m* not the application, i.e. coding flow tables by hand, which to a degree was the NEC ProgrammableFlow demo. And obviously I realize that no one’s big plan for a commercial, large-scale OF network is for nerds to sit there coding all flows by hand. I’m hoping to connect with one of the crew from Big Switch and see a demo, even just webinar-style, so that I can get the flavor.

          • http://twitter.com/cloudtoad Derick Winkworth

            I agree. Thus the skepticism. But the potential. I mean, it’s mind-blowing. Visually managing multi-tenant virtual topologies. It seems like this is the way it’s supposed to be.


  • Rob Sherwood

    Hi Ethan,

    Great article, but one thing I would like to point out re: your point #4. What you’re describing is what OpenFlow folks call “reactive flow setup,” that is, dynamically inserting rules when the packets actually show up. While there are a number of interesting applications that use reactive flow setup, there are obvious performance and scaling limitations involved, and it’s certainly not OpenFlow’s only “mode” (for lack of a better term).

    In fact, most large installations that I’m aware of are using the opposite strategy, “pro-active” flow setup, which is exactly what routing protocols do today: learn forwarding paths through some out-of-band means (e.g., a routing protocol, external DB, static policy, etc.), insert the corresponding forwarding rules before a packet arrives, and then forward packets at full line rate without intervention from the controller. In other words, OpenFlow doesn’t preclude you from doing what people are already doing today, so there are no inherent scaling limits to OpenFlow, just (potentially) to the reactive flow-setup mode.

    The other point is that the granularity of a “flow” in OpenFlow is not hard-coded by the protocol, but decided by the controller. It can be as small as what’s traditionally considered a microflow (e.g., a fixed 5-tuple), as broad as a destination CIDR prefix, or even a rule that matches everything. Most critically, the controller can specify multiple flow granularities at the same time, so you could imagine a controller that mostly used CIDR prefix-style flows, but had a few higher-priority, more precise flows (e.g., a fully specified 5-tuple) to direct elephant flows, as shown in Juniper’s bandwidth calendar demonstration.
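
    To make that concrete, here’s a toy pro-actively installed table mixing both granularities. The priorities, addresses, and port numbers are all invented for illustration:

    ```python
    # A toy flow table installed *before* any traffic arrives ("pro-active"
    # flow setup). All values here are invented for illustration.
    proactive_table = [
        # (priority, match, action) -- highest priority wins
        (300, {"src_ip": "10.2.2.2", "dst_ip": "10.9.9.9", "proto": "tcp",
               "src_port": 5001, "dst_port": 5001},    # one elephant flow...
         "output:uplink2"),                            # ...pinned to its own path
        (100, {"dst_ip_prefix": "10.9.9.0/24"},        # CIDR-style flow, like a
         "output:uplink1"),                            # routing protocol entry
        (1, {}, "drop"),                               # match-everything backstop
    ]
    # Every packet matches one of these at line rate; nothing is ever punted
    # to the controller, exactly as with conventional routing today.
    ```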

    Does that make sense?

    Thanks for the interest in OpenFlow – I hear the event was a big success!

    • http://packetattack.org Ethan Banks

      Rob, it does make sense. In conversations I’ve had subsequent to this post, the point was made to me more than once that in a typical OpenFlow deployment, as many envision it today, microflows aren’t going to spend much time being punted or occupying the controller’s decision-making time, for exactly the reason you cite: no one is likely to architect a flow table that would force so much controller reactivity.

      While I think we agree that my scaling point is valid (were an OF network programmed in such a way), I’ll say that at this point it’s an architectural concern to be aware of, but largely moot for many (most?) potential OF applications. I will be writing about NEC’s ProgrammableFlow demonstration to the Tech Field Day delegation, and will raise this issue there to bring better balance.

      The symposium came off well. I felt there was good, honest dialogue among all the participants, and I believe most observers walked away with a more fully-formed notion of OpenFlow’s potential. Certainly I did.

  • Pingback: Software Defined Networks and Internet Data Centers | Ethernet Fabric

  • Pingback: » OpenFlow and SDN from the Symposiom FryGuy's Blog
