While attending Discover 2016 as a guest of HPE, I sat in on a briefing that included a discussion of the growth of 25GbE & 100GbE networking in the datacenter. My internal reaction was, “What growth? 25GbE & 100GbE are relatively new specifications, driven by a small consortium of very large customers with huge amounts of data. Most of the market is barely using its 10GbE & 40GbE capacity.”
Even so, HPE took 25GbE & 100GbE growth as gospel, making the case — for a 25GbE access layer especially — in a couple of ways.
1. Changing from 10GbE to 25GbE on hosts means that compute footprint could be reduced. According to HPE’s logic, more network capacity means more work that can be done on a given host, thus reducing the number of hosts required.
2. Changing from 40GbE to 100GbE for spine uplinks means maintaining the same oversubscription ratio as fabrics previously deployed at 10GbE & 40GbE. This assumes the same number of host-facing and spine-facing ports on the leaf switch, but you get the idea.
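The oversubscription point is easiest to see with concrete numbers. Here's a minimal sketch, assuming a hypothetical leaf switch with 48 host-facing ports and 6 spine-facing uplinks (illustrative port counts, not any specific product):

```python
# Hypothetical leaf: 48 host-facing ports, 6 spine uplinks.
# Port counts are illustrative assumptions, not a specific switch model.

def oversubscription(host_ports, host_gbps, uplink_ports, uplink_gbps):
    """Ratio of host-facing capacity to spine-facing capacity."""
    return (host_ports * host_gbps) / (uplink_ports * uplink_gbps)

# 10GbE access with 40GbE uplinks: 480 Gbps down vs. 240 Gbps up
print(oversubscription(48, 10, 6, 40))   # 2.0, i.e. 2:1

# 25GbE access with 100GbE uplinks: 1200 Gbps down vs. 600 Gbps up
print(oversubscription(48, 25, 6, 100))  # 2.0, i.e. the same 2:1
```

Both generations land at 2:1, which is HPE's point: stepping the access layer from 10GbE to 25GbE and the uplinks from 40GbE to 100GbE preserves the fabric's oversubscription ratio.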
My objection to this logic was that the network interface card is rarely the bottleneck on the average enterprise host. A server is likely to run out of CPU or memory before it will burn through 10Gbps of network capacity, at least at a sustained rate. In some cases, this is true before a server even burns through 1Gbps of network capacity, let alone 10Gbps.
Considering my experience of the underutilized NIC in the average enterprise server, how does HPE’s logic add up? I put the question to them, and they offered a reasonable answer: containerization. As environments migrate from heavy VMs to comparatively light containers, compute work will be executed more efficiently. More processes can be run on the same host in the same CPU and memory footprint. And those additional processes will drive additional networking throughput.
Hmm. While the mid-market enterprise is unlikely to make 2016 the year of containers, HPE has a point as we peer down the road. Containers will move the performance bottleneck. Even today, HPE is seeing a few customers move rapidly into the container world. According to HPE, those customers are reducing the number of hosts they are using while at the same time increasing the amount of network data coming out of each host.
The view from the hot aisle.
If you’re standing up new data center infrastructure, investing in 25GbE & 100GbE is largely an ROI exercise. Can your applications run in containers? Do those applications run at large enough scale that your container load on a single host will consume more than 10Gbps of network capacity?
The simple math is to compute what it will cost to step up to 25GbE & 100GbE network infrastructure, including servers architected to fill the capacity of 25GbE NICs. Then see whether the savings from fewer servers, racks, and reduced power and cooling sufficiently offset the cost of the pricier network infrastructure.
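That comparison can be sketched as back-of-envelope math. Every figure below is a made-up placeholder to show the shape of the calculation; substitute your own vendor quotes and facilities costs:

```python
# Back-of-envelope ROI comparison. All dollar figures and server
# counts are invented placeholders, not real pricing.

def total_cost(servers, server_cost, port_cost, annual_opex, years):
    """Capex (server + switch port) plus per-server opex over the horizon."""
    capex = servers * (server_cost + port_cost)
    opex = servers * annual_opex * years
    return capex + opex

# Scenario A: 100 hosts on 10GbE NICs with a 10/40GbE fabric
cost_10g = total_cost(servers=100, server_cost=8000, port_cost=300,
                      annual_opex=1200, years=3)

# Scenario B: containers consolidate onto 70 beefier hosts
# with 25GbE NICs and a pricier 25/100GbE fabric
cost_25g = total_cost(servers=70, server_cost=11000, port_cost=700,
                      annual_opex=1200, years=3)

print(cost_10g, cost_25g)
```

Under these invented numbers the smaller 25GbE fleet comes out ahead; with different consolidation ratios or port pricing it easily goes the other way, which is exactly why the exercise is worth running with your own data.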
If the numbers don’t make sense, perhaps just wait. It’s still early days for 25GbE & 100GbE. The costs will come down over time. On the other hand, if you have the discretionary funds available to upgrade network infrastructure without an ROI requirement, it’s not a bad time to consider 25GbE & 100GbE capable network gear. The need will come eventually.