This post is sponsored by 6WIND, who will be appearing on the Packet Pushers Priority Queue podcast around October 30, 2015. This post is an introduction to what we’re going to cover in that show.
A modern networking trend is to take functions that used to reside in specialty appliances and spin them up instead on generic x86-based hardware. This is broadly known as network functions virtualization (NFV), and is typically seen in the context of software defined networking (SDN). The two often go hand-in-hand, as software might decide it needs a virtualized network function (VNF) to deliver some larger IT service. Orchestration systems such as OpenStack also might call upon NFV to create the networking portion of some service they offer to their cloud consumers.
Practically speaking, this means that we’re seeing the transition from specialty firewalls, routers, and load balancers to virtual flavors of these devices. Network engineers of the future are less likely to rack specialty appliances, and more likely to manage these network functions on a hypervisor host or bare metal x86-based server. That’s because x86-based servers are cheap, easily replaceable, and in a better position to scale than dedicated, specialized networking hardware.
While NFV addresses the functional question of how to easily (and presumably cheaply) instantiate routers, firewalls, and so on, it does not directly address the issue of performance. Delivering the required performance is an exercise somewhat left up to the architect, and it's not immediately obvious how to get the performance levels out of x86-based NFV that we are used to from dedicated networking appliances.
Once upon a time, an architect would do one of two things to achieve required performance levels:
- Buy a bigger box.
- Buy more boxes.
And while we can do that in the NFV world of x86 (buy a box with more cores or rack more x86 metal), the end result is not exactly the same. Just as we’re going from dedicated custom ASICs designed to execute specific network functions very quickly to general-purpose Intel x86 CPUs, we’re also going from tightly coupled hardware/software networking stacks to, in most cases, a general-purpose Linux kernel that is loaded down with features.
In this new world of general-purpose CPUs and a Linux kernel not dedicated to networking functionality, how do we drive an x86 server to fill a 10Gbps NIC? What about two 10Gbps NICs? What about 25Gbps or 40Gbps NICs?
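To get a sense of the scale of the problem, consider the classic line-rate arithmetic: with minimum-size 64-byte Ethernet frames, each frame also costs 20 bytes on the wire (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap), so a single 10Gbps port demands roughly 14.88 million packets per second. A quick sketch of that math, using the NIC speeds mentioned above:

```python
# Back-of-envelope: packets per second required to saturate a NIC
# with minimum-size (64-byte) Ethernet frames. Each frame costs an
# extra 20 bytes on the wire (preamble + SFD + inter-frame gap).

def line_rate_pps(link_bps, frame_bytes=64, overhead_bytes=20):
    bits_per_frame = (frame_bytes + overhead_bytes) * 8
    return link_bps / bits_per_frame

for gbps in (10, 25, 40):
    print(f"{gbps}G: {line_rate_pps(gbps * 10**9) / 1e6:.2f} Mpps")
# 10G works out to about 14.88 Mpps, 25G to about 37.20 Mpps,
# and 40G to about 59.52 Mpps -- per port.
```

At those rates, a vanilla kernel networking path (interrupts, context switches, per-packet allocations) runs out of CPU budget long before the NIC runs out of bandwidth, which is the gap the rest of this post is about.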
6WIND, a Packet Pushers sponsor, has been in the business of dealing with these sorts of issues for some time. Its original market was telcos and other organizations that would use 6WIND’s packet processing software, 6WINDGate, as OEMs to increase performance for their own networking products.
6WIND was in the background, making the acceleration software that other companies could use to make the most of their standard servers with multi-core CPUs and Linux.
However, 6WIND has recently taken a further step, offering its own VNFs and virtual networking acceleration software packages to end users like you and me. That means 6WIND now offers a full routing and encryption stack for bare metal and virtual deployments, as well as a virtual acceleration platform for x86.
6WIND Virtual Accelerator takes over virtual switching, offloading that function from the Linux kernel and dramatically improving network throughput. In addition to Open vSwitch (OVS) and the Linux bridge, it accelerates features such as VXLAN, VRFs, and NAT.
6WIND’s Virtual Accelerator runs as a process inside of Linux hypervisors and is transparent to the other systems running on a hypervisor host. That means Virtual Accelerator does its job without requiring a significant change to other running applications or to orchestration systems such as OpenStack. Virtual Accelerator supports any VNF with Virtio drivers.
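Because Virtual Accelerator supports any VNF with Virtio drivers, a guest typically attaches through a standard virtio-net interface backed by a vhost-user socket on the host. As an illustration only (the socket path and MAC address below are placeholders, and the 6WIND-specific integration details are not shown), a libvirt guest definition might carry something like:

```xml
<!-- Illustrative libvirt domain XML fragment: a virtio-net NIC
     attached to a host-side virtual switch via a vhost-user socket.
     The path and MAC address are placeholders. -->
<interface type='vhostuser'>
  <mac address='52:54:00:3b:83:1a'/>
  <source type='unix' path='/var/run/vhost-user/vhost0' mode='client'/>
  <model type='virtio'/>
</interface>
```

vhost-user ports generally also require the guest's memory to be hugepage-backed and shared with the host-side switch. The guest itself just sees an ordinary virtio NIC, which is what keeps the VNF portable and the acceleration transparent.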
In a hypervisor environment, Virtual Accelerator can also accelerate 6WIND’s two VNF offerings, Turbo Router and Turbo IPsec, in addition to third-party VNFs. However, 6WIND’s Turbo Appliances can also run on bare metal without Virtual Accelerator.
Turbo Router, as the name implies, is a software-based L3 router. Turbo Router offers functions such as Ethernet bridging, VLANs, link aggregation, VRFs, GRE and IPinIP tunneling, stateful filtering (like a firewall performs), IPv6 support, RIP, OSPF, BGP, BFD, and VRRP. A Turbo Router can be managed via an SSH CLI, and allows operators to run Linux tools like iproute2, iptables, tcpdump, and traceroute.
Turbo IPsec is essentially Turbo Router with encryption capabilities added: IPsec for IPv4 and IPv6, IKEv1 and IKEv2, and VPN monitoring.
The Turbo Appliances leverage DPDK and support a variety of NICs in 1G, 10G, and 40G speeds from Intel, Mellanox, and Emulex.
In summary, 6WIND Virtual Accelerator is a software package that runs on x86, accessing the hardware directly and working around the bottlenecks that other virtual switches suffer from, while preserving the virtual machine portability that can be lost with hypervisor pass-through or SR-IOV. With the Turbo Appliances, 6WIND offers a fully featured routing and encryption packet forwarding engine that runs on x86. Turbo Router goes head to head with other virtual routing appliances such as Brocade's vRouter and Cisco's CSR 1000V, while Turbo IPsec offers an alternative to traditional hardware-based IPsec gateways.
To hear more of the details about 6WIND, stay tuned to the Packet Pushers Priority Queue podcast channel. 6WIND sponsored a full show, and we’ve already recorded it. Scheduled to publish during the last week of October 2015, we discuss with 6WIND more details about how the Virtual Accelerator and Turbo Appliances actually work, going over topics such as…
- The x86 performance challenge – why x86 environments struggle to fill Ethernet NIC pipes
- The difference between kernel space and user space in a Linux environment, and why it matters
- The differences among pass-through, SR-IOV, and Open vSwitch (OVS) with DPDK
- The impact of installing Virtual Accelerator, Turbo Router, and Turbo IPsec into your environment, whether in a VM or bare metal
- Actual performance numbers
Before listening to the podcast, you might like to read the following.