Dear Cisco and Juniper:
It's been a good run, Cisco. Thank you for the CCIE. Thank you, Juniper, for the JNCIE. I learned a lot about networking because of you. But you are irrelevant now.
Right now, I can buy a 64-core server with 768GB of RAM from HP for a mere $57k. That includes 8TB of storage, by the way. Since Google has graciously contributed Receive Packet Steering and Receive Flow Steering (RPS/RFS) to the Linux kernel, multi-core packet processing is available to network applications. Combine this with offloading SSL and deep-packet inspection to GPUs and you have a platform that is far less expensive, and potentially far more scalable, than dedicated vendor silicon. That is, what little vendor silicon is left in network gear these days, since Broadcom and the like seem to be in every vendor's equipment. Tell me what happens when we put a Broadcom chip right on the server bus and OpenFlow becomes a device driver?
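If you haven't played with RPS/RFS yet, it's nothing exotic, just a few kernel knobs. A minimal sketch, assuming eth0 on a 16-core box (the interface name and CPU mask are placeholders for your own hardware):

    # spread rx queue 0 of eth0 across CPUs 0-15 (hex bitmask ffff)
    echo ffff > /sys/class/net/eth0/queues/rx-0/rps_cpus

    # enable RFS: size the global socket-flow table, then this queue's share of it
    echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
    echo 2048 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt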
Cisco, Juniper… listen to me: your days are numbered.
No wonder Google rolls their own. I'm about to roll my own too. My company delivers IP applications and services to thousands of customers' private networks. We need lots of customizable NAT. Not throughput. Not even concurrent sessions. We need lots of *configured* NAT. Thousands of rules. 50k right now, as a matter of fact, in just one spot of our network. Yet Cisco's ASR platform supports just 16k configured static NATs. Their solution? Buy more ASRs. You'll need an RP2 with 16GB of RAM. Probably an ESP-40. A pair for redundancy. So we are talking six of those. Juniper is no better. An SRX5800 (fully loaded, this is basically a supercomputer) supports only 8k configured static NATs. A Cisco 7206VXR supports 16k. Nobody wants to jumble up their rule base trying to spread these out across three or four kinds of NAT. Simple 1:1 bidirectional mappings are the way to go, and the best you offer is 8k or 16k of them.
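For reference, a 1:1 bidirectional static mapping is two lines of iptables. The addresses here are illustrative:

    # map public 203.0.113.10 to internal 10.0.0.10 in both directions
    iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.0.0.10
    iptables -t nat -A POSTROUTING -s 10.0.0.10 -j SNAT --to-source 203.0.113.10

Now picture 50k of those pairs. That's my problem, and it doesn't strike me as an exotic one.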
You know what happened in the server world when companies started using VMs? They found, in the end, that they had far more VMs than they ever had physical servers. It's easy to launch another VM. Network virtualization is no different. With the virtualization of network functionality comes an explosion in the number of configured virtual elements. Scale isn't just about throughput, concurrent flows, or the size of route tables. It's about the ability to support tens of thousands or even hundreds of thousands of configured elements. This is going to happen even in companies smaller than Google. You should have seen this coming when the carriers started using software to manage MPLS. You should have seen this coming when Google started rolling their own. You just should have seen this coming.
So here is a far less expensive option for my NAT problem: a big server running Linux and iptables in containers. It could just as well be FreeBSD running PF in jails. It will probably be more scalable, with some tuning. I'll admit that ALG support is awful in both iptables and PF, but how hard would that be to fix? Thanks to NetPDL we have a way to describe protocol data units in XML. A general-purpose proxy could be built that reads in NetPDL descriptions. Redirect traffic on specific ports to this proxy, and the proxy in turn offloads DPI to a GPU. Adding support for another protocol is as simple as writing an XML description in NetPDL and handing it to the proxy process. That, or build that functionality right into PF or iptables. Guess what: there are already libraries (and source code) available to help us. Imagine not having to wait for a vendor's ALG support or fixes. It brings a tear to my eye.
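And no, configuring 50k mappings doesn't mean running the iptables binary 50k times. Here's a sketch of the bulk path, assuming a hypothetical mappings.csv of public,private address pairs:

    # build one big *nat table and load it atomically, without flushing what's there
    { echo '*nat'
      while IFS=, read -r pub priv; do
        echo "-A PREROUTING -d $pub -j DNAT --to-destination $priv"
        echo "-A POSTROUTING -s $priv -j SNAT --to-source $pub"
      done < mappings.csv
      echo 'COMMIT'
    } | iptables-restore --noflush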
Suricata already has CUDA support. It's only a matter of time before Snort has it too. Either way, IPS will be open source and running on commodity hardware soon enough. Should I even mention how obvious it would be to offload SSL, or any other kind of encryption, to GPUs?
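In fact, inline IPS on Linux is already a two-command affair. A sketch, assuming Suricata is installed with its stock configuration:

    # push forwarded traffic into userspace queue 0
    iptables -I FORWARD -j NFQUEUE --queue-num 0

    # run Suricata inline against that queue
    suricata -c /etc/suricata/suricata.yaml -q 0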
Lastly, there are piles of APIs and libraries available with open-source tools that people can use to make real progress on the usability of network functions. No one is going to be hand-configuring iptables rules. Soon applications will describe what they need in API calls, and those functions will be created in the network inside containers that can logically exist anywhere. Need a firewall? Launch an LXC container with iptables. Need an IPS? Launch Suricata in an LXC container. Need a virtual router or virtual switch? Quagga or Linux bridging in an LXC container. All those containers will be attached to Nicira's Open vSwitch in the host (if switch hardware isn't already integrated right into the server). All of these elements will be arranged and configured by API calls, not an army of router-monkeys. Containers (or jails) will be monitored for CPU utilization and automatically moved between servers to maximize resource utilization, thanks to OpenFlow and the already very capable state-sync libraries available for Apache (oh yeah, that will be containerized too) and iptables. Packet capture in this network will be cake: we can run tshark in any jail or container.
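Squint and you can do all of this by hand today; the API calls just automate it. A rough sketch with the current LXC and Open vSwitch tooling (the container name is made up, and I'm hand-waving the veth plumbing between container and host):

    lxc-create -n fw1 -t ubuntu        # "need a firewall?" -- build the container
    lxc-start -n fw1 -d                # launch it in the background
    lxc-attach -n fw1 -- iptables -t nat -A PREROUTING \
        -d 203.0.113.10 -j DNAT --to-destination 10.0.0.10

    ovs-vsctl add-br br0               # Open vSwitch in the host
    ovs-vsctl add-port br0 veth-fw1    # plug the container's veth into it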
But hey, there might be a market for you in making really fast, dumb OpenFlow switches to interconnect the servers housing all these great functions. If you get in on the ground floor now, you might be able to squeeze a little extra margin out of it before you have to start competing with Casio.