Virtualizing Network Services, Part 1: The Beginning


The Purpose of the “Network”

In this series of articles I intend to walk through a set of diagrams, with a narrative, that will lead us to a “cloud” and “automation” ready network design.  Before we get started, we first have to answer a fundamental question:  What does the network do?

The purpose of the network is to transport packets between the hosts that are variously attached throughout it.  At least, that is what we’ve always assumed at the most fundamental level.  In reality most networks do something to the packets in transit between the hosts.  Various types of appliances perform ancillary operations on this traffic:  acceleration, translation, inspection, filtration, load-balancing, etc.

It is typically the case that the network engineering function within an organization tends to the design, deployment, and maintenance of the devices that perform these functions.  Therefore, it is safe to say that the purpose of the network is more than just simple transport.

Figure 1: An abstract representation of network functions

The figure above is a simplified view of the network service.  On either end we have the client “networks,” N1 and N2.  For the purpose of this diagram these could be a single host each, a small switched network with several hosts, or the entire Internet.  What is being illustrated is that the network is providing transit between N1 and N2.

The E1 and E2 blocks denote the edge of the network.  These blocks provide access to the network for N1 and N2.  They represent switched or routed functionality only.  The “S” blocks represent a series of operations or services being performed on traffic passing between the client networks.  For instance, S1 could be an Intrusion Prevention device.  Any of the previously mentioned ancillary operations could be represented by the “S” blocks.

It is important to know that these blocks do not represent physical boxes; they represent some specific functionality.  For instance, NAT is different from connection-tracking.  The two may be performed on the same box, but they are really two different functions.  Filtering is different from both of these, and so would potentially be a third function.  Again, all three could be performed on the same physical unit, but it is equally possible that two or more physical units perform these operations.

Before we move on, take a minute to think about all the various specific functions that could happen to a packet along the path through the network:  Cryptographic functions (SSL, IPSec, etc), Acceleration functions (TCP optimization, compression), TCP MSS adjustment, QoS marking, port-mirroring, etc.

Topology Constraint

Traditional network designs suffer from “topology constraint.”  When an appliance is placed into the network to perform ancillary operations the hosts or networks sitting behind this appliance are bound to it physically or logically (i.e., by VLAN).   This has a number of unfortunate side effects such as:

(1) Rigidity: For instance, in the case of appliance clusters, if a cluster fails it’s not trivial to install, configure, and re-route traffic through an additional pair of firewalls.  It takes time if they are not already in place.

(2) Fate-sharing: Multiple applications could be impacted by an event in the network such as a DDoS attack or a configuration mistake.  This can lead to further rigidity in the form of strict change controls and overall inflexibility in the provisioning of new network services.

(3) Immobility: Virtual machines have a restricted range of motion with respect to the hosts they can reside on.  Any potential host must also be bound to the same appliances.  This means, for practical purposes, that a virtual machine’s redundancy is limited to the same data center, or potentially even to a sub-section of a data center.

(4) L3 Insertion:  Inserting a new appliance into the path is risky as it involves topological changes at layer-2 or layer-3.  The provisioning time for new services can be lengthy because of this.

In our design, we will attempt to alleviate these issues by binding hosts or networks to required services through policy.  Specifically, we will be using MPLS and VRF import/export policies to achieve this.  On a Cisco IOS platform this ultimately means we will be using route-maps; on a Juniper JUNOS device we will be using policy-statements.
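As a rough illustration of what “binding through policy” looks like on each platform, the sketch below shows a hypothetical VRF whose import behavior is controlled by a route-map (IOS) or a policy-statement (JUNOS) rather than by physical topology.  All names, route distinguishers, and route-target values here are invented for illustration:

```
! Cisco IOS: an import map refines which routes enter the VRF
ip vrf SERVICE-A
 rd 65000:100
 route-target export 65000:101
 route-target import 65000:102
 import map SERVICE-A-IMPORT
!
route-map SERVICE-A-IMPORT permit 10
 match extcommunity 1
!
ip extcommunity-list 1 permit rt 65000:102

# Juniper JUNOS: the same intent expressed as a policy-statement
policy-options {
    policy-statement SERVICE-A-IMPORT {
        term accept-service {
            from community SERVICE-A-RT;
            then accept;
        }
        term reject-rest {
            then reject;
        }
    }
    community SERVICE-A-RT members target:65000:102;
}
routing-instances {
    SERVICE-A {
        instance-type vrf;
        route-distinguisher 65000:100;
        vrf-import SERVICE-A-IMPORT;
    }
}
```

Either way, which routes (and therefore which services) a host is bound to becomes a matter of editing policy, not re-cabling or re-VLANing the network.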

The Intermediate VPN

Figure 2: An Intermediate VPN using asymmetric route-targets


Figure 2 depicts an “Intermediate VPN” called “Green.”  Intermediate VPNs are MPLS L3VPNs that provide transit between the functional blocks in the network path as depicted in Figure 1 (the “E” and “S” blocks).  Four VRFs in the Green VPN are shown here.  These VRFs can be divided into two types:  “Tn0” and “Tn1.”  The meaning of these terms is explained below, but for now just know that the two types use asymmetric route-targets relative to each other.  This means, as depicted above, that the route-targets being exported by the Tn0 VRFs are the ones being imported by the Tn1 VRFs.  A different route-target is used in the opposite direction.  The effect of this is that traffic will flow vertically or diagonally (as shown) but never horizontally (i.e., from “Tn0” to “Tn0” or from “Tn1” to “Tn1”).
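To make the asymmetry concrete, here is a minimal sketch of how the two VRF types might be configured on a Cisco IOS PE.  The route-target values are invented; the key point is that each type exports exactly what the other type imports:

```
! Tn0 VRFs: export one RT, import the other
ip vrf GREEN-TN0
 rd 65000:10
 route-target export 65000:1   ! imported only by Tn1 VRFs
 route-target import 65000:2   ! exported only by Tn1 VRFs
!
! Tn1 VRFs: the mirror image
ip vrf GREEN-TN1
 rd 65000:11
 route-target export 65000:2
 route-target import 65000:1
```

Because a Tn0 VRF never imports 65000:1 (its own export), no routes are exchanged Tn0-to-Tn0, and likewise for Tn1 — hence traffic can only flow between the two types, never within one.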

A Simple Network Service Path

Now let’s combine Figures 1 and 2 above into a third figure:

Figure 3: An MPLS-based virtual network service path


Figure 3 depicts a network services path between two hosts.  Two intermediate VPNs are depicted, “Green” and “Blue.”  The Green VPN provides transit between “Host A” and the top side of the firewalls.  The Blue VPN provides transit between the bottom side of the firewalls and “Host B.”  We can align the abstract notation in Figure 1 to this new service path as shown.  The N1 and N2 networks are the hosts and the ethernet segments connecting the hosts to the edge devices.  The E1 block includes the MPLS PERs connected to Host A’s ethernet segment.  The E2 block, similarly, includes the MPLS PERs connected to Host B’s ethernet segment.

Now let’s take a closer look at the S1 block.  Note that it includes the firewalls and the PERs directly connected to either side of the firewalls.  The Green Tn0 VRFs are on one side of the firewall, while the Blue Tn1 VRFs are on the other side.  The “T” in the “Tn0” and “Tn1” terms stands for “trust.”  The “n” is some number corresponding to a trust level or firewall “tier.”  The zero and the one represent different security zones.
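One way to picture the S1 block in configuration terms: a PER adjacent to a firewall holds a Green Tn0 VRF facing one side of the firewall and a Blue Tn1 VRF facing the other, so the firewall stitches the two intermediate VPNs together at layer 3.  Whether both VRFs live on one PER or on separate PERs on either side of the firewall is a deployment choice; the sketch below assumes a single PER for brevity, and all names, RT values, and addressing are invented:

```
! Hypothetical PER adjacent to a firewall in the S1 block
ip vrf GREEN-TN0
 rd 65000:10
 route-target export 65000:1
 route-target import 65000:2
!
ip vrf BLUE-TN1
 rd 65000:21
 route-target export 65000:4
 route-target import 65000:3
!
interface GigabitEthernet0/1
 description Green side of the firewall
 ip vrf forwarding GREEN-TN0
 ip address 10.0.1.1 255.255.255.0
!
interface GigabitEthernet0/2
 description Blue side of the firewall
 ip vrf forwarding BLUE-TN1
 ip address 10.0.2.1 255.255.255.0
```

The firewall itself needs no MPLS awareness in this sketch; it simply routes between its two directly connected segments.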

NOTE: While this explains how these terms came to be, you should consider them just general terms used to differentiate between the two types of VRFs in an intermediate VPN, i.e., the two sides of an appliance.  In subsequent parts of this series you will see other appliances between these VRFs, such as load-balancers and IPSs.

Lastly, it is likely that we will want traffic to flow symmetrically through one firewall or the other.  The red and orange lines depict route propagation.  Specifically, the red lines depict routes being exported from the VRFs on either side of the left firewall.  These routes are being exported to the E1 and E2 MPLS PERs with a local-preference of 2000.  The orange lines depict the same routes being exported from the VRFs on either side of the right firewall.  These routes have a local-preference of 1000.  The net effect is that traffic from Host A entering either of the Green Tn1 VRFs (the E1 block) will flow towards the firewall on the left by default.  Similarly, in the opposite direction, traffic coming from “Host B” will prefer the same firewall.
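A sketch of one way to express this preference on Cisco IOS, using an export map on the firewall-adjacent VRFs to set LOCAL_PREF (which is carried across iBGP within the AS).  All names and values are illustrative, and platform support for setting local-preference in a VRF export map varies — an import map on the receiving E1/E2 PERs matching a per-firewall community is a common alternative:

```
! On the PER whose VRFs face the LEFT firewall
route-map SET-LP-LEFT permit 10
 set local-preference 2000
!
ip vrf GREEN-TN0-LEFT
 rd 65000:12
 route-target export 65000:1
 export map SET-LP-LEFT

! On the PER whose VRFs face the RIGHT firewall
route-map SET-LP-RIGHT permit 10
 set local-preference 1000
!
ip vrf GREEN-TN0-RIGHT
 rd 65000:13
 route-target export 65000:1
 export map SET-LP-RIGHT
```

With the higher local-preference winning BGP best-path selection, the E1 and E2 PERs steer traffic through the left firewall in both directions; if its routes are withdrawn, the 1000-preference routes via the right firewall take over automatically.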


Well, this was a hefty article.  In Part 2 we will insert additional appliances into the path by creating additional intermediate VPNs.  This will lead us to the very exciting, ominous-sounding Packet-Cyclotron-of-Death… it’s coming for you.  It will wait until you are asleep.


  1. says

    I think I see where you are going with this but I’m not 100% sure at this point.  Needless to say, you have my attention.  Anxious to see what comes next and where this concept can be applied. 
    Thanks – Jon

  2. Nixisfun says

    From your diagram it would appear that the firewalls are not participating in the MPLS vpn. Wouldn’t it make sense to have them part of the MPLS network? Or are they just going to run dot1q trunks between the green and blue zones?

  3. says

    So how do the network service elements in your model communicate liveness & healthiness to the surrounding forwarding elements, and how are they provisioned/deprovisioned? This has always struck me as the big problem to be solved in the ‘network service’ space; Cisco “Service Insertion Architecture” plays in this space (as does openflow).
    (For a bonus point, what’s the difference between a network service and an application, and why’s there a difference?)

  4. Hannes Adollarson says


    Great post!

    Would be interesting to know how far out towards the host you push the VPLS/VPN instance in your designs since the gear could be quite pricy.

    Longing for the next part.

