Many folks who find their way to Packet Pushers are familiar with Cisco IOS and/or JUNOS. You are no doubt aware that Cisco has a feature in IOS known as Policy-Based Routing (PBR); Juniper has an equivalent known as Filter-Based Forwarding (FBF). Both allow a network admin to define policies that choose next-hops for packets as they ingress an interface on a router. The selection criteria can be based on some combination of packet header fields: source/destination IP address, source/destination TCP/UDP port, the ToS byte, and so on.
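As a quick refresher, a minimal IOS PBR sketch looks like this (the ACL number, route-map name, interface, and addresses here are all hypothetical, purely for illustration):

```
! Hypothetical PBR example: steer web traffic from 10.1.1.0/24
! to a specific next-hop instead of following the routing table.
access-list 101 permit tcp 10.1.1.0 0.0.0.255 any eq 80
!
route-map WEB-POLICY permit 10
 match ip address 101
 set ip next-hop 192.0.2.1
!
interface GigabitEthernet0/1
 ip policy route-map WEB-POLICY
```

Packets arriving on Gi0/1 that match ACL 101 are forwarded to 192.0.2.1 regardless of what the RIB says; everything else is routed normally.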
What if I told you that you could “virtualize” your network in a secure, audit-compliant fashion by disabling all dynamic routing and removing all static routes on your network? After doing this you would *only* use PBR/FBF to forward all packets. To help automate the configuration of the ACLs and the policies, a tool could be created that automatically builds and applies them along a chosen path in the network. Real-time monitoring of link utilization and NetFlow data would let this tool choose paths optimally, ensuring efficient bandwidth utilization and good application performance.
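The JUNOS side of this thought experiment would be FBF: a firewall filter that shunts matching traffic into a forwarding-type routing instance. A rough sketch (filter, instance, and addresses are hypothetical; a complete configuration would also need a rib-group to share interface routes with the instance):

```
/* Hypothetical FBF sketch: matching traffic is forwarded
   according to the WEB-FBF instance, not inet.0. */
firewall {
    family inet {
        filter STEER-WEB {
            term web {
                from {
                    source-address { 10.1.1.0/24; }
                    protocol tcp;
                    destination-port 80;
                }
                then routing-instance WEB-FBF;
            }
            term default {
                then accept;
            }
        }
    }
}
routing-instances {
    WEB-FBF {
        instance-type forwarding;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop 192.0.2.1;
            }
        }
    }
}
```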
So far so good. Except I don’t think that anyone would agree we have virtualized the network. The entire network still falls under a single control and forwarding domain. Just because we effectively hand-coded the forwarding entries of the router doesn’t make the network virtualized or secure. So it is with basic OpenFlow, which fundamentally works in the same fashion as the hypothetical network I have described above. The basic premise of OpenFlow is that a controller remotely programs the forwarding plane of network nodes. The controller does this based on policies configured by an administrator through a GUI, or even policies built by applications themselves through an API. The primary difference here is that OpenFlow can also match on L2 headers.
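To make the parallel concrete, here is roughly what the same kind of policy looks like as OpenFlow flow entries, expressed with Open vSwitch’s `ovs-ofctl` tool (the bridge name, MAC address, and port numbers are hypothetical). Note the first entry matching on a source MAC, something PBR/FBF cannot do:

```
# Hypothetical flow entries. The first matches an L2 field
# (source MAC); the second matches L3/L4, PBR-style.
ovs-ofctl add-flow br0 "priority=100,dl_src=00:11:22:33:44:55,actions=output:2"
ovs-ofctl add-flow br0 "priority=90,tcp,nw_src=10.1.1.0/24,tp_dst=80,actions=output:3"
```

Conceptually this is still the same exercise: an external entity hand-programming forwarding entries into a single control and forwarding domain.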
In the world of OpenFlow there is a thing known as FlowVisor. It effectively divides an OpenFlow-compliant node into multiple “slices” or domains. This sounds good to my network-engineering ears. Each slice can be managed by a different controller and has ports and resources assigned to it. OpenFlow v1.1 now supports Q-in-Q (802.1Q-in-Q, or 802.1ad) tagging as well as MPLS tagging. Sadly, FlowVisor is not v1.1-compliant, and it appears a great deal of work would be needed to make it so.
Why is that important? Why should products or technologies that claim to deliver “virtual-networking” actually support things like Q-in-Q, MPLS, or MAC-in-MAC (802.1ah)?
(A) As a CCIE-SP and JNCIE-M, when I think of audit-compliant multi-tenancy in the network I think of logically delineated control and forwarding: (1) VRFs, VSIs, and S-VLANs with their own control and forwarding configuration. (2) Differentiation on the wire by VLAN outer tags, MPLS tags, or outer MAC addresses. Why is this separation valid for Service-Providers in providing multi-tenancy, but not in the data center? Goose, Gander. As it turns out, contrary to the claim that networking has been stagnant for 20 years, many brilliant engineers have figured out exactly how to virtualize the network infrastructure at layer-2 and layer-3.
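That logical delineation is cheap to express in existing configs. A minimal IOS VRF sketch (tenant name, RD/RT values, and interface are hypothetical) gives a tenant its own routing table, its own forwarding table, and an MPLS-label-differentiated identity on the wire:

```
! Hypothetical VRF sketch: per-tenant control and forwarding
! state, differentiated on the wire via MPLS labels.
ip vrf TENANT-A
 rd 65000:100
 route-target export 65000:100
 route-target import 65000:100
!
interface GigabitEthernet0/2
 ip vrf forwarding TENANT-A
 ip address 172.16.1.1 255.255.255.0
```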
(B) A large number of medium-to-large organizations, and certainly Service-Providers, already use these technologies to virtualize their network plumbing. There are certainly pros and cons to each of these technologies but there are many existing networks that use them prolifically. At some point OpenFlow must intersect with these existing networks. As companies place different security zones, different customers, or just different networks into VRFs, VSIs, or S-VLANs, it makes sense that we would want OpenFlow to natively integrate with them.
I certainly wouldn’t look upon PBR or FBF as virtualization technologies, and I imagine not many network engineers have ever thought of them that way. Extending this functionality to Layer-2 doesn’t make it any different. A long time ago, network engineers figured out that CRD (Controlled Route Distribution) was not the way to provide VPN service on a network. Controlled-FIB schemes are no different.
Until OpenFlow has a FlowVisor scheme that can interface to external networks using existing virtual-networking technologies, I’m afraid OpenFlow does not qualify as “Virtual-Networking.” OpenFlow needs a method of partitioning its control and forwarding planes (the controllers and the nodes) into slices or domains while also differentiating between them on the wire using existing virtual-network technologies.