The Impact of Software Defined X on a Networking Organization

Introduction

We’re in one of the most exciting times in data networking. While I’m sure we’re all sick of vendors co-opting technologies in their infancy, there is a lot of good work going on to change the fundamentals of moving data (I shudder to call this a paradigm shift; I’ll save that term for life-ending controversies such as Galileo and the whole heliocentric debacle).

A Little History of OS virtualization and Networking

While operating system virtualization has a history nearly as long as the solid-state computer [http://www.everythingvm.com/content/history-virtualization], the main focus here is x86 virtualization, which got its start in the late 1990s when a project at Stanford University led to the incorporation of VMware. The technology has matured well past the initial benefits of simple physical machine consolidation and better utilization through statistical multiplexing. The current state of affairs for OS virtualization is a highly automated, massively scalable system capable of dynamically creating thousands of ephemeral, on-demand machines to meet elastic demand.

Sadly, during this time there has been little to no comparable innovation in networking to parallel what has been happening in compute. Most innovation in data center networking has consisted of speed improvements and yet more overlay technologies that give the appearance of virtualization while still maintaining discrete, independent switches, routers, load balancers, firewalls, and so on. For an excellent overview of how poorly this complexity is serving us, see Ivan Pepelnjak’s The Need for Overlay Virtual Networks.

When VMware purchased SDN startup Nicira, Nicira was working on an SDN controller: the brains and orchestration behind a software defined network. The value of integrating advanced network control into VMware’s existing network stack is obvious, but the company recently announced NSX, which will offer cross-platform control of the networking stacks in other virtualization environments. My first thought on hearing about the Nicira acquisition was that VMware would use OpenFlow to control physical network elements. While this hasn’t happened yet, it is an obvious step for a company that has publicly stated it wants to be in networking.
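
To make the controller idea concrete, here is a minimal sketch of centralized control written against the open-source Ryu framework and OpenFlow 1.3 (an illustrative stand-in, not VMware’s or Nicira’s product). When a switch connects, the controller installs a table-miss rule so that any traffic the switch cannot match is punted to software, where the “brains” decide what to do:

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class MinimalController(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_connect(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Table-miss entry: anything the switch cannot match is sent
            # to the controller, where software decides its fate.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                              ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))

The particular framework doesn’t matter; the point is the split. Forwarding stays in the switch, while policy lives in software that can be versioned, tested, and changed without touching hardware.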

Networking hardware has traditionally been a closed ecosystem of specialized hardware (Application-Specific Integrated Circuits, or ASICs) and closed, proprietary operating systems. A service provider or enterprise network was built from hundreds of these discrete networking elements. Large shops attempted automation through home-grown scripts, but there was very little industry-wide success in providing a vendor-agnostic provisioning and management system. Aside from some basic scripting, NetOps (network operations) never pursued automation to the same degree the systems world did.
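
As a sketch of what that home-grown scripting typically looked like, here is the screen-scraping approach in Python using the open-source Netmiko library (the hostnames and credentials are hypothetical): log in to each box in turn and replay CLI commands, with no abstraction above the individual device.

    from netmiko import ConnectHandler

    # Hypothetical inventory; in practice, often a hand-maintained text file.
    DEVICES = ["sw-core-1.example.net", "sw-core-2.example.net"]

    for host in DEVICES:
        conn = ConnectHandler(
            device_type="cisco_ios",   # assumes an IOS-style CLI
            host=host,
            username="netops",
            password="********",
        )
        # Replay the same CLI commands on every switch, one session at a time.
        output = conn.send_config_set(["vlan 42", "name web-tier"])
        print(host)
        print(output)
        conn.disconnect()

Every vendor, and sometimes every OS version, needs its own command set, which is exactly why this approach never grew into an industry-wide, vendor-agnostic solution.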

Networking hardware is becoming more commoditized, with the key silicon (the packet-forwarding ASICs and network processors) made by a few merchant vendors such as Broadcom and EZchip, allowing for a more regularized interface to the packet-forwarding intelligence.

The current inflexible infrastructure presents applications with an opaque, black-box communications service. But applications demand, and should receive, more than generic, best-effort packet delivery. We have the power, and will soon have the capability, to provide an intelligent, scalable, programmable network that supports bi-directional, real-time feedback to optimize performance and user experience.

Impacts

Virtualization continues to grow in both scale and scope, and the infrastructure that supports applications needs to advance to meet these changing needs. Mass automation and on-demand, elastic compute can only happen if the underlying infrastructure is itself elastic and programmable. With data centers becoming more virtual than physical, it only makes sense for the network to change in support.
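
To illustrate what “programmable” means in practice, here is a minimal sketch that requests a virtual network through an API, the same motion an orchestrator uses to request a VM. It assumes an OpenStack Neutron endpoint; the URL and token are hypothetical:

    import requests

    NEUTRON = "https://cloud.example.net:9696"  # hypothetical endpoint
    TOKEN = "..."                               # Keystone token, obtained separately

    # Ask for a network segment the same way an orchestrator asks for a VM.
    resp = requests.post(
        NEUTRON + "/v2.0/networks",
        headers={"X-Auth-Token": TOKEN},
        json={"network": {"name": "app-tier", "admin_state_up": True}},
    )
    resp.raise_for_status()
    print("created network", resp.json()["network"]["id"])

The segment appears in seconds, can be torn down just as quickly, and never requires a human to log in to a switch.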

Specifically, when most workloads in a data center run under the supervision of a virtualization platform, the role of the physical network needs to be questioned. VMware is taking the approach that the network should become a function of the virtualized server infrastructure, and so the term Software Defined Data Center (SDDC) was born. The term is becoming more common; I’ve always thought of data centers as becoming walk-in computers, and nearly everything inside this giant machine is automated except the networking.

In fact, not only does VMware have this vision, but so does Intel, approaching it from a hardware perspective. Intel’s goal is to “explode” the computer and separate the component compute, memory, storage, and I/O into discrete data center building blocks. If these visions are fully realized, communication within the data center becomes akin to intra-machine, or even intra-process, communication in today’s environment.

One of the biggest, and potentially most painful, changes SDN/SDDC will bring to an IT shop is the impact on the organization itself. Who will control the data center network, and what skills will be needed?

IP networking has steamrolled several technologies: time-division multiplexing (TDM voice and data transport such as SONET), ATM, Frame Relay, and others. As network engineers, we need to recognize that the rapid pace of data center virtualization is having a profound impact on networking. We can either embrace this change and adapt, or stand in the corner and troubleshoot STP for the next 20 years. If past technology transitions have taught us anything, it is to adapt or become irrelevant. Anyone care to troubleshoot the SPIDs on an ISDN BRI?

Conclusion

The networking industry has finally caught on to the virtualization trend. This fact, coupled with the increasing power of general-purpose processors and the commoditization of previously proprietary hardware, is leading to tighter integration between compute and networking in the data center. We can either prepare for and embrace this change, or attempt to ignore and resist it (obviously at our own peril).

Andrew Gallo

Senior Information Systems Engineer
Andrew Gallo is a Washington, DC-based Senior Information Systems Engineer and Network Architect, responsible for the design and implementation of the enterprise network for a large university. His areas of specialization include the university’s wide area connections (including a 150-kilometer DWDM ring), multicampus routing policy, and business continuity planning for two online data centers. Andrew started during the internet upswing of the mid-to-late 90s, installing and terminating fiber. As his career progressed, he gained experience with technologies from FDDI to ATM and all speeds of Ethernet, including a recent deployment of several metro area 100Gbps circuits. Beyond data networks, Andrew has experience in traditional TDM voice, VoIP, and real-time unified collaboration technologies. His areas of interest include optical transport, network virtualization and software defined networking, and network science and graph theory.