I’m working through a cabling layout for a pair of Nexus 5596UPs that will be used as access layer switches. Some of the ports will uplink to 10GbE hosts directly. Some of the ports will feed FEXen that I’m positioning top-of-rack (ToR). An issue that comes up for me is that in a typical switch design, some number of front-panel ports are grouped together, and each group is serviced by a common ASIC on the board. That’s why when you lose a port on the front panel of a line card, there’s a good chance you don’t lose just one. You probably lose the entire port group, as the failure is often the ASIC, not the port itself. While this is admittedly a very rare failure, if a dying ASIC takes out a group of Ethernet ports on the front panel, you’ll discover quickly that it’s a bad idea to place redundant uplinks, switch interconnects, redundant servers, etc. on consecutive front-panel ports (i.e. right next to each other).
With that as a backdrop, I have a few goals in mind when planning what ports on a specific switch or linecard I’ll use for what purpose:
- Spread the heaviest traffic sources across ASICs. Specifically, I look at the anticipated load coming in from applications that will be serviced by the port, and spread uplinks around to avoid ASIC congestion. In other words, I try to avoid all the heavy-hitters being serviced by the same ASIC. In the case of the 5596UP, this is largely a non-issue, as the internal fabric is non-blocking. Historically, this was a much greater concern, as I was almost always working with ToR switches, interswitch links, line cards, or chassis backplanes with some amount of oversubscription.
- Spread redundant links across ASICs. The idea here is that if an ASIC fails and takes the port group it supports with it, you’ll only lose one link of a redundant set…not the whole set.
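The second goal above can be sanity-checked programmatically. Here’s a minimal sketch, assuming (and it is only an assumption until verified on the actual hardware) that consecutive blocks of 8 front-panel ports share one ASIC, so ports 1–8 land on group 0, ports 9–16 on group 1, and so on. The `plan` dictionary and its names are hypothetical examples, not anything from the switch itself:

```python
PORTS_PER_ASIC = 8  # assumption; confirm with "show hardware internal carmel all-ports"

def asic_group(port: int) -> int:
    """Return the ASIC port-group index for a 1-based front-panel port."""
    return (port - 1) // PORTS_PER_ASIC

def shared_asic_conflicts(redundant_sets: dict) -> list:
    """Return names of redundant sets with two or more members on the same ASIC."""
    conflicts = []
    for name, ports in redundant_sets.items():
        groups = [asic_group(p) for p in ports]
        if len(set(groups)) < len(groups):  # duplicate group => shared failure domain
            conflicts.append(name)
    return conflicts

# Hypothetical cabling plan: redundant set name -> front-panel ports used.
plan = {
    "server-A": [1, 2],          # both ports on ASIC group 0 -- single failure domain
    "uplinks":  [1, 9, 17, 25],  # one port per ASIC group -- survives an ASIC loss
}
print(shared_asic_conflicts(plan))  # -> ['server-A']
```

The point is just to make the failure domain explicit: any redundant set whose members collapse into one ASIC group defeats the purpose of the redundancy.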
Here are a couple of shots (click them to enlarge) that I lifted from the BRKARC-3452 presentation from Cisco Live US 2012, titled “Cisco Nexus 5000/5500 and 2000 Switch Architecture”. These are Cisco’s graphics, not mine, but they are public (not confidential or NDA). You can download these presentations from the CiscoLive365.com site at no cost, so I don’t think they’ll mind that I’m using them here.
This first graphic gives you the general idea. The front-panel port maps to a “Unified Port Controller” – the ASIC. All UPCs connect to the “Unified Crossbar Fabric” – where all the traffic is being switched. My understanding from the presentation bullet points is that all traffic going through the 5K series goes through the UCF.
This second slide goes into more detail about the mapping of the physical ports on the front of the Nexus to the UPC inside. The big takeaway for me was the “show hardware internal carmel all-ports” command, which lists the port-to-ASIC mapping.
BRKARC-3452 has a lot more information regarding how the Nexus 5K series shoves packets around inside of it, what happens to malformed frames, how counters are incremented, the order of operations, and much more. It’s worth your time to head up to CiscoLive365.com to download the presentation if this sort of switch architecture nerdery is of interest to you.
In short, I’ve gathered that 8 ports map to an ASIC, so I plan to spread my uplinks accordingly. Let’s say I have a quad uplink, and let’s further assume ports 1-8 map to one ASIC, 9-16 map to a second, 17-24 map to a third, and 25-32 map to a fourth. (Which *is* an assumption until I actually poke around a bit more.) In my quad uplink, I would connect uplink 1 to port 1, uplink 2 to port 9, uplink 3 to port 17, and uplink 4 to port 25 – unless I find some reason upon further reading that convinces me this is a bad idea.
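That port-picking pattern generalizes to any uplink count: take the first port of each successive ASIC group. A tiny sketch, again resting on the unverified 8-ports-per-ASIC assumption:

```python
def uplink_ports(n_uplinks: int, ports_per_asic: int = 8) -> list:
    """First front-panel port of each successive ASIC group: 1, 9, 17, 25, ...
    ports_per_asic=8 is an assumption about the 5596UP, not a verified fact."""
    return [group * ports_per_asic + 1 for group in range(n_uplinks)]

print(uplink_ports(4))  # -> [1, 9, 17, 25]
```

If the real mapping turns out to be something other than contiguous blocks of 8, only the `ports_per_asic` assumption (or the indexing logic) needs to change.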
Overkill? Paranoia? Pointless? Maybe, but it’s easy enough to do in a brand-new deployment, and offers me a little extra peace of mind. What about those of you that have already done your Nexus 5K cabling layouts? Was ASIC connectivity a part of your thought process? Or did you worry about other things? What other considerations weighed upon your mind?
(Yes, dual 5500s with vPC is very much a part of my design, just not mentioned here. We have a podcast on vPC coming up within the next few weeks. In this case, I’m focused specifically on redundant uplinks that are going into the same physical switch…there’s an assumption that the same number of redundant ports would be uplinked into the vPC twin.)
Cisco Nexus 5548P Switch Architecture (at cisco.com)
BRKARC-3452 Cisco Nexus 5000/5500 and 2000 Switch Architecture (at CiscoLive365.com)