In a comment on my “Say SDN Again” post, Fred asked the following.
You had an article on TechTarget back in 2013 that distilled SDN down to four uses:
– central control
– network virtualization
Do these categories still hold in your opinion?
Digging back through the archives, I found the source document for the piece Fred refers to — an SDN Buyer’s Guide for 2013. I can’t find a live link to it anymore, but re-reading what was top of mind for me three years ago was interesting.
TL;DR. The categories somewhat hold, but times have changed. 2013 was a time of experimentation, rough code, political posturing, and early dog food tasting. A lot of it didn’t taste good, so as an industry, we’ve picked and chosen the bits that have made sense.
We’ve not seen centralization of the control plane become commonplace. Scale has proven to be too difficult to truly centralize all network control plane functions, with grumpy old network engineers rightly exclaiming things like, “I told you so,” “Well, duh,” and “Neener-neener.”
Perhaps the greatest indicator of this is the stagnation of OpenFlow development and the lack of interest on the part of both customers and vendors in continuing down the OpenFlow path. Which isn’t to say OpenFlow doesn’t have products and adherents, but the love just isn’t there for most.
Centralized control seems to have its greatest use in what I think of as application-specific programming. That is, there’s a distributed control plane handling most things, and then a central controller with an application handling specific things. These are manifested as point solutions that layer on top of existing network infrastructure.
To me, this differs from the vision of throwing away distributed control plane protocols, replacing them with a centralized brain. Still, centralized control has its uses. I believe the maturing of OpenDaylight and the possibility of a vendor agnostic application ecosystem built around the platform can result in interesting networking tools over time.
The rise of the ONOS controller is another example of central control still being a valid concept. ONOS was built with scale in mind, handling large numbers of operations per second across a scale-out controller architecture. Service providers are leveraging ONOS with applications like CORD (central office re-architected as a data center) layered on top.
Orchestration, automation, and programmability.
As I look back over the raw text from three years ago, I highlighted orchestration as opposed to automation. It’s hard to discuss orchestration by itself, though. I see automation as a tool that orchestration platforms use to get their jobs done. Programmability goes hand-in-hand with orchestration and automation. Orchestration needs automation to bring an IT stack to life. But without programmability, automation has nothing to act on.
Great strides have been made in these areas. For example, one fresh idea is defining network state that encompasses all network devices. This is subtly distinct from configuration-centric thinking where individual devices get some configuration pushed to them. Yes, network devices still have configurations, but the engineer gets to take a step back from individual devices, focusing instead on how the entire network should behave, providing that desired state to a software tool, and allowing the tool to enforce the desired state programmatically.
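To make the desired-state idea concrete, here’s a minimal Python sketch. The device model, the state keys, and the `reconcile()` helper are invented for illustration — they don’t correspond to any particular tool’s API — but the shape of the loop is the point: describe the whole network once, then let software compute and push only the differences.

```python
# Hypothetical sketch of desired-state enforcement (not a real tool's API).

def diff_state(desired, actual):
    """Return the settings whose desired value differs from, or is absent in, actual."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

def reconcile(desired, device):
    """Push only what is needed to bring the device to the desired state."""
    changes = diff_state(desired, device["state"])
    device["state"].update(changes)  # stand-in for an API call to the device
    return changes

# Desired state for the whole network, expressed per device.
desired_network = {
    "leaf1": {"vlan_100": "present", "mtu": 9216},
    "leaf2": {"vlan_100": "present", "mtu": 9216},
}

# What the devices actually look like right now.
devices = {
    "leaf1": {"state": {"vlan_100": "present", "mtu": 1500}},
    "leaf2": {"state": {"mtu": 9216}},
}

for name, dev in devices.items():
    applied = reconcile(desired_network[name], dev)
    print(name, applied)  # only the drift gets pushed
```

The engineer’s artifact is `desired_network`; the per-device churn is the tool’s problem.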
APIs and abstraction layers continue to be developed and matured. The OpenConfig project is releasing network models that allow network devices from a variety of vendors to be accessed via a common interface. The IETF is defining common models as well. YANG, YAML, and telemetry are perhaps not as pervasive as SNMP, but soon will be.
All of this means that the network will, over time, be as accessible, modelable, and testable as any other part of the IT stack. This won’t happen overnight, but I do think it will continue to happen as snowflakes melt.
Multi-tenancy continues to find new and interesting use cases in the enterprise, often driven by security. VXLAN is well-known now, and is the dominant choice of encapsulation type for virtual networks running over a layer 3 fabric.
This model of network virtualization perhaps isn’t commonplace, as in you don’t see it deployed ubiquitously in the average enterprise, but it’s common enough to have reference architectures, hardware support, and even the emergence of BGP EVPN as a VXLAN control plane.
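Part of why VXLAN has won out is that the encapsulation itself is simple. As a sketch — not a full encapsulation stack — the eight-byte VXLAN header defined in RFC 7348 can be built in a few lines of Python:

```python
# The VXLAN header per RFC 7348: one flags byte (the "I" bit marks the VNI
# as valid), three reserved bytes, a 24-bit VNI, and one more reserved byte.
import struct

VXLAN_PORT = 4789      # IANA-assigned UDP port for VXLAN
FLAG_VNI_VALID = 0x08  # the "I" flag

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a 24-bit VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # word 1: flags byte, then three reserved zero bytes
    # word 2: VNI in the top three bytes, one reserved zero byte
    return struct.pack("!II", FLAG_VNI_VALID << 24, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Recover the VNI from a VXLAN header."""
    return struct.unpack("!II", header)[1] >> 8
```

The 24-bit VNI is the whole trick: around 16 million tenant segments instead of the 4,094 usable VLAN IDs, riding over any layer 3 fabric as ordinary UDP.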
Closely related to network virtualization is microsegmentation, where virtual machines or containers are isolated from each other in accordance with a central security policy. This policy is enforced via a central piece of software. This space is active right now, with startups and open source projects coming to the fore to solve this problem.
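As a toy illustration of the central-policy model — the labels and rule format here are invented, not any product’s schema — the core idea is default deny, with a single policy table deciding which workload-to-workload flows are permitted:

```python
# Toy microsegmentation model: workloads carry labels, one central table
# decides who may talk to whom. Purely illustrative names and tiers.

workloads = {
    "web-1": {"tier": "web"},
    "app-1": {"tier": "app"},
    "db-1":  {"tier": "db"},
}

# Default deny; only listed (source tier, destination tier) pairs may talk.
allowed = {("web", "app"), ("app", "db")}

def permitted(src: str, dst: str) -> bool:
    """Evaluate the central policy for a flow between two workloads."""
    return (workloads[src]["tier"], workloads[dst]["tier"]) in allowed

print(permitted("web-1", "app-1"))  # True
print(permitted("web-1", "db-1"))   # False: web may not reach the database directly
```

The enforcement points live next to the workloads; what makes it microsegmentation is that the policy itself lives in one place.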
Also related to network virtualization is network functions virtualization, where routers and middleboxes (firewalls, load balancers) are run as virtual instances on a hypervisor and traffic routed through them via a service chain. NFV is usually handled in a centralized way, where an orchestrator can spin up or down a virtual network function (VNF) and program a forwarding path, typically between VXLAN endpoints, to build the service chain.
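A service chain can be thought of as an ordered list of functions that every packet traverses. The sketch below uses hypothetical VNF stand-ins — a firewall that drops telnet and a round-robin load balancer — rather than any real orchestrator’s API:

```python
# Toy NFV service chain: each VNF is a function that transforms or drops a
# packet; the "orchestrator" has simply programmed their order.
import itertools

def firewall(packet):
    """Stand-in firewall policy: drop telnet (TCP/23), pass everything else."""
    return None if packet.get("port") == 23 else packet

backends = itertools.cycle(["10.0.0.1", "10.0.0.2"])

def load_balancer(packet):
    """Pin the packet to the next backend in round-robin order."""
    packet["backend"] = next(backends)
    return packet

# The service chain the orchestrator has built.
chain = [firewall, load_balancer]

def run_chain(packet, chain):
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:  # a VNF dropped the packet
            return None
    return packet

web = run_chain({"src": "h1", "port": 80}, chain)     # forwarded, gets a backend
telnet = run_chain({"src": "h2", "port": 23}, chain)  # dropped by the firewall
```

In a real deployment the "functions" are virtual machines or containers and the "list" is a programmed forwarding path between VXLAN endpoints, but the mental model is the same.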
Do these categories still hold?
Overall, these categories still hold up. We have problems of isolating different network segments from others. We have problems of management and operations, where it’s increasingly difficult to manage all of our devices and device contexts, whether physical or virtual. We have problems of staff, where technical talent is hard to come by and harder to retain.
Much of this points back to a core problem of scale. In the long term, networking will no longer be about standing up a new VLAN, injecting a new route into the routing domain, building a new VPN tunnel, or updating a firewall policy. Even in smaller networks, this mindset will become untenable, as the sheer number of objects to be successfully manipulated when making a network change will be out of the grasp of humans.
New technology is automating the task of network configuration away. I believe that these categories are a reflection of that — a complex IT world with complex layers of functionality that must all work together.
We’re seeing the building blocks of this new IT world now, much of which will be abstracted away from us. We’ll still need to understand how it’s supposed to work. The building blocks won’t especially change. Ethernet and IP are here to stay. But the way we interface with those layers will continue to change.
Is that SDN? Sure. You can call it that if you want to. The marketers do.