This article is part 2 of a series on the Aruba 8400 chassis switch, launched in August 2017. See the links section at the bottom of this article for the other articles in the series.
A major reason you might buy a chassis switch is investment protection. A chassis switch, when emptied of line cards and power supplies, is a carefully constructed bit of sheet metal designed to be bolted into a network rack and left there for many years. As the years roll on, modules can be slid into and out of the fancy sheet metal to provide new networking capabilities.
The downside of a chassis switch is the inherent complexity such a piece of hardware implies. As chassis switches often support a high number of links and might play pivotal roles in a network architecture, they are often viewed as devices which can never fail, and should run without interruption under all scenarios, including component failures.
Thus, chassis switches usually feature redundant supervisor engines, power supplies, and fabric cards. The chassis will feature a midplane that all of these components plug into along with Ethernet line cards. Each component will feature electrical traces that support monitoring by the CPU in the system. This approach unifies the chassis into a single system that is, in fact, made up of many discrete modules plugged together.
For those familiar with the sort of chassis switch hardware I've broadly described here, the 8400 will seem similar to other chassis designs you’ve worked with in the past.
What Drove The 8400 Hardware Decisions?
The Aruba 8400 hardware was engineered in-house by Aruba, reportedly by the ProCurve team. Why did they make the engineering decisions that they did? Several specific goals drove them.
Campus Core. The 8400 is a campus switch, particularly targeted at the campus core providing layer two and three Ethernet services.
Availability. Another element of Aruba’s focus was availability. Aruba has decided to label the 8400 as having “carrier class” availability. I imagine that’s meant to imply that carriers have the highest uptime requirements, and that their gear never goes down. I believe that good network design goes further than clever hardware design in network availability, so I don’t get too excited about “carrier class” myself.
Service Life. Length of service life was another key design element. The intent is for the chassis to survive three generations of hardware. In other words, the chassis is expected to last through three generations of supervisor engines, line cards, etc. Thus, Aruba designed the 8400 for modularity with plenty of power and cooling.
Queueing And Buffering. Another stated goal was for best-in-class queueing and buffering. Aruba defined this to mean no head-of-line blocking, virtual output queues, as well as deep buffers. I found this to be a little odd for a campus core switch discussion, because it implies Aruba thinks the 8400 is going to be seeing a lot of traffic coming through the box–so much so, that crossbar contention will be a problem that needs to be addressed aggressively.
While there are always outliers, campus traffic patterns just don’t run that hot in my experience. Yes, elimination of HOL blocking with VOQs is table stakes for a chassis switch design, but deep buffering is a different conversation often had in the context of handling microbursts in a data center. And even then, I don’t believe that deep buffering is the right answer in all cases. Sometimes, TCP throughput is better off if packets are dropped using a well thought-out QoS scheme or updated congestion control mechanism.
In summary, best-in-class queueing and buffering sounds good on paper and marketing departments love features to highlight. However, I don’t think this specific hardware goal of Aruba’s is the one that’s going to tip the scales one way or the other for most campuses.
Aruba 8400 Chassis Hardware Highlights
As we dove into the 8400 hardware at the launch day event, many details bubbled to the surface. I’ll cover them in no particular order here.
The Ethernet chip used in the line cards is not an ASIC of Aruba’s design. On the other hand, the ASIC isn’t simply merchant silicon, either. Rather, the ASIC is a combination of technologies, where Aruba has added its own peripheral magic around the ASIC it chose.
Aruba did not disclose who the ASIC vendor was, which I found odd. There was no obvious reason I could think of for them to be secretive about such a detail. There just aren’t that many Ethernet chip providers out there, and eventually the industry will suss it out. I got a look at a physical line card, but the card was so covered up with metal shields and heatsinks that no clues as to the ASIC manufacturer were forthcoming.
The ASIC driver is Aruba’s own. Rather than use SAI or another ASIC abstraction layer, Aruba wanted a layer that would take full advantage of the silicon capabilities. Abstraction layers like SAI have the advantage of a common interface, but tend to reduce ASIC programming to the lowest common denominator, leaving features on the table. And if you think about it, why would Aruba want to use something like SAI, when SAI is more attractive on platforms where multiple operating systems might be leveraged? The 8400 is not a whitebox switch where the SAI architecture perhaps makes more sense.
Fully loaded, an Aruba 8400 chassis weighs in at a hefty 240 pounds. The chassis is 8RU. There are 4 power supplies in the front of the chassis up top, with the power connectors at the back. There is room for 8 line cards and 2 management cards in the front, where the cards are loaded vertically. The back of the chassis presents three massive fan trays, each with 6 fan modules.
The 3 fabric card slots are found behind the fan trays, but in a way that they can be accessed non-disruptively–very clever packaging by Aruba.
Assuming all 3 fabric cards are running, the chassis is non-blocking. That is, there is enough switching capacity among 3 fabric cards to handle all of the front panel port capacity. If a fabric card fails, some capacity is lost, and the chassis might no longer be non-blocking, depending on how loaded with line cards and ports the chassis is.
The 8 line cards are connected in a Clos fabric to the 3 fabric cards via a direct-connect midplane. A Clos fabric here means that each line card has a direct connection to every fabric card, similar to how each leaf switch has a direct connection to every spine switch in the leaf-spine data center switching topology that’s been all the rage for a few years now.
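That wiring pattern can be sketched in a few lines of code. This is a conceptual model only, using the slot counts from the article (8 line card slots, 3 fabric cards); the naming scheme is my own illustration, not Aruba's.

```python
# Conceptual sketch of the 8400's internal Clos wiring: every line card
# slot has a direct midplane connection to every fabric card, just as
# every leaf connects to every spine in a leaf-spine topology.
# Slot counts come from the article; the names are illustrative.

LINE_CARD_SLOTS = 8
FABRIC_CARD_SLOTS = 3

def clos_links(line_cards: int, fabrics: int) -> list[tuple[str, str]]:
    """Enumerate the direct line-card-to-fabric-card connections."""
    return [(f"lc{lc}", f"fab{f}")
            for lc in range(1, line_cards + 1)
            for f in range(1, fabrics + 1)]

links = clos_links(LINE_CARD_SLOTS, FABRIC_CARD_SLOTS)
print(len(links))  # 8 line cards x 3 fabric cards = 24 direct paths
```

The point of the full mesh is that no line card ever depends on a single fabric card: lose one fabric card, and every line card still has two paths into the crossbar.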
Forwarding through the fabric modules isn’t necessarily on a per-frame or per-packet basis. The 8400’s crossbar fabric architecture includes a feature where a large packet can be fragmented after ingress from the front panel, the fragments distributed across the fabric, and the large packet reassembled before switch egress. This results in more even traffic distribution across the internal switch fabric, reducing contention and fabric hot spots.
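To make the fragment-and-reassemble idea concrete, here is a toy sketch of the general technique. This is emphatically not Aruba's implementation; the 64-byte cell size and the round-robin spraying policy are my assumptions for demonstration, with only the 3-fabric-card count taken from the article.

```python
# Illustrative sketch of how a crossbar fabric can fragment a large packet
# into fixed-size cells, spray the cells evenly across all fabric cards,
# and reassemble the packet at egress. Cell size and the round-robin
# policy are assumptions; only the fabric card count comes from the 8400.

CELL_SIZE = 64      # bytes per fabric cell (assumed)
FABRIC_CARDS = 3    # the 8400's fabric card count

def fragment(packet: bytes, cell_size: int = CELL_SIZE) -> list[bytes]:
    """Chop a packet into cells at ingress."""
    return [packet[i:i + cell_size] for i in range(0, len(packet), cell_size)]

def spray(cells: list[bytes], fabrics: int = FABRIC_CARDS) -> list[list]:
    """Round-robin each cell onto a fabric card, tagged with a sequence number."""
    lanes = [[] for _ in range(fabrics)]
    for seq, cell in enumerate(cells):
        lanes[seq % fabrics].append((seq, cell))
    return lanes

def reassemble(lanes: list[list]) -> bytes:
    """Merge cells back into the original packet by sequence number at egress."""
    tagged = sorted(cell for lane in lanes for cell in lane)
    return b"".join(cell for _, cell in tagged)

packet = bytes(1500)  # a full-size Ethernet payload
lanes = spray(fragment(packet))
assert reassemble(lanes) == packet  # egress sees the original packet
```

Because every large packet is smeared across all fabric cards, no single fabric link carries a disproportionate share of an elephant flow, which is exactly the hot-spot avoidance the paragraph above describes.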
Speeds And Feeds
For those who need to know just how fast the 8400 is, I’ll summarize by saying, “It’s fast enough.” The chassis is good for 19.2Tbps of throughput or 1.2Tbps per slot according to the 8400 data sheet. If that math is a little confusing, I believe Aruba means 19.2Tbps total for the chassis if you consider both ingress and egress traffic, a common marketing metric. 1.2Tbps per slot times 8 slots equals 9.6Tbps in one direction. If you multiply 9.6Tbps times 2, you get the claimed 19.2Tbps.
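The arithmetic above is simple enough to sanity-check in a couple of lines, using only the numbers from the data sheet:

```python
# Reproducing the data-sheet arithmetic: 1.2 Tbps per slot across 8 line
# card slots gives the one-way capacity; the marketing convention counts
# each bit twice (once on ingress, once on egress) to reach 19.2 Tbps.

TBPS_PER_SLOT = 1.2
LINE_CARD_SLOTS = 8

one_way = TBPS_PER_SLOT * LINE_CARD_SLOTS  # 9.6 Tbps in one direction
full_duplex = one_way * 2                  # 19.2 Tbps, the headline number
print(one_way, full_duplex)
```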
Read through the rest of the Aruba 8400 data sheet for a detailed list of protocols, standards, and other features supported by the 8400 that I might not highlight in this series.
Aruba describes the control plane as centralized, but the data plane as distributed. In other words, the control plane exists as software running on the management cards, while forwarding tables are programmed on each line card so that they can forward traffic independently.
The hardware chassis midplane connects not only the front line cards to the fabric cards, but also carries control-plane and management-plane traffic via dedicated electrical traces. Every component in the Aruba 8400 chassis is managed.
All of the cards and modules in the 8400 are designed for in-service serviceability, even the fabric cards so cleverly tucked in behind the fan trays and modules.
The control-plane and management CPU is an Intel Xeon-family 4-core @ 2.0GHz, with 32GB of RAM. Onboard storage consists of a 120GB SSD, although Aruba cautions against overusing it, to reduce the risk of wearing it out before the anticipated 10-year life span of the box.
If you’re wondering what overusing an SSD might mean, consider that the 8400 gives you the ability to do things such as save a tcpdump file right on the switch. In other words, if you treat the switch like a workstation, there’s a small risk you’ll burn through the read/write lifecycle of the onboard SSD before the end of the chassis service life. Therefore, be a little smart about how you use all that power born from the 8400’s flexibility.
8400 Platform Security
Aruba opted to not trust the hardware supply chain. This is not an unusual step for a vendor that wishes to sell their hardware to customers with strong security requirements. In recent years, the supply chain has suffered from rootkit-infested components being installed into network devices.
Therefore, every hardware component in an 8400 is validated as authentic as the system boots. Invalid components will prevent the 8400 from booting up. The idea here is to use a Trusted Platform Module (TPM) to prevent cards shipped with a rootkit from coming online and disrupting the system.
Engineers responsible for the 8400 should take the time to understand how TPM is implemented. The core of the system is certificate-based SSL. There is a secure root of trust from system startup to the running OS. This means that you end up with an 8400 where both the hardware and code loading can be completely trusted, as certificates establish a chain of authenticity.
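To illustrate why a root of trust works, here is a toy sketch of TPM-style measured boot, where each stage is hashed into a running measurement before it executes. This is a conceptual illustration only, not Aruba's implementation: the stage names and the use of SHA-256 are my assumptions.

```python
# Conceptual sketch of a boot-time chain of trust, in the spirit of TPM
# measured boot. NOT Aruba's implementation; stage names and hash choice
# are illustrative assumptions.
import hashlib

def extend(measurement: bytes, previous: bytes = b"\x00" * 32) -> bytes:
    """Extend the running measurement, PCR-style: new = H(old || H(stage))."""
    return hashlib.sha256(previous + hashlib.sha256(measurement).digest()).digest()

# Each boot stage is measured before it runs. Tampering with any stage
# changes every subsequent measurement, so comparing the final value
# against a signed "known good" value detects modification anywhere
# in the chain.
pcr = b"\x00" * 32
for stage in [b"bootloader", b"os-kernel", b"switch-firmware"]:
    pcr = extend(stage, pcr)

tampered = b"\x00" * 32
for stage in [b"bootloader", b"evil-kernel", b"switch-firmware"]:
    tampered = extend(stage, tampered)

assert pcr != tampered  # any modified stage breaks the chain
```

The certificate chain the 8400 uses serves the same purpose: each link vouches for the next, so a failure anywhere in the chain is detectable at boot.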
To reiterate, if certificate validation fails for whatever reason, the switch won’t boot. However, there is a recovery mechanism—the box isn’t a brick when certificate validation fails. The 8400 just won’t be forwarding traffic while the recovery process is ongoing.
Coming Up Next
In the next article in this short series, I’ll review the highlights of the ArubaOS-CX network operating system.
- Aruba Picks A Fight In The Campus Core With Its New 8400 Switch
- The Aruba 8400 Chassis Switch. Yes, But Why?
- The Aruba 8400 Hardware Highlights
- The Aruba 8400 ArubaOS-CX Network Operating System
- The Aruba 8400 Integrated Network Analytics & Automated Root Cause Analysis
This article underwent a technical review by Aruba Networks to ensure accuracy, which I appreciate. I sat for an entire day during the launch event hosted by Tech Field Day listening to several hours of presentation on this complex platform. I like to be sure I got it right.