Cisco Nexus 2000: A Love/Hate Relationship

My feelings toward the Nexus 2000 Fabric Extender (FEX) are hardly a secret. The myriad design choices and platform limitations present engineers with some rather difficult decisions, and I've seen a handful of engineers reverse course on a design after running into those limitations mid-deployment. That is why I harbor such harsh feelings for the FEX.

One of the issues plaguing FEX design discussions is a misunderstanding of what a FEX truly is. The one thing you have to remember is that the FEX, in and of itself, is not a switch. It is the quintessential line card for the Nexus 5000, and its uplink ports should be thought of purely as switching fabric. Since FEXs do not switch traffic locally, all traffic is sent upstream across the "fabric" to the Nexus 5000/7000, which provides centralized forwarding and policy enforcement. Yes, this means that even host-to-host traffic between two ports on the same FEX in the same VLAN will flow through the parent switch. Thinking of FEXs as line cards may help you reconsider some of your design choices and leave more features available to you.
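To make the line-card analogy concrete, here is a minimal sketch of how a FEX is provisioned on a parent Nexus 5000. The FEX number, description, and interface numbers are all illustrative, not from a real deployment: the fabric uplinks are placed in fex-fabric mode and bundled, and the FEX is associated with the bundle.

    ! Sketch only - FEX number, description, and ports are hypothetical
    feature fex
    fex 100
      description "rack-1 top-of-rack FEX"
    interface port-channel100
      switchport mode fex-fabric
      fex associate 100
    interface ethernet1/1-2
      switchport mode fex-fabric
      fex associate 100
      channel-group 100

Once the FEX comes online, its host ports appear on the parent as Ethernet100/1/x and are configured there, just like ports in a remote line-card slot.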

Design Choices

Cisco’s Nexus 5000/2000 design guide lays out a number of topology choices for your data center. Most everyone I know uses the double-sided vPC (virtual port channel) configuration, also known as “criss-cross applesauce” in some circles, between their Nexus 7000s and 5000s, so we will focus on those topologies. The first is dubbed the single-homed or straight-through configuration. This topology supports a vPC to a host from (2) FEXs, FCoE (on the 2232), and static pinning (for more information on static pinning, please see the Nexus 5000/2000 Design Guide linked above). Its main advantage is that you’re able to build redundant teaming configurations from the servers to the fabric extenders.

Single-Homed FEX
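As a rough sketch of the single-homed host vPC (port, VLAN, and vPC numbers are hypothetical): each FEX is fabric-attached to only one 5K, and the two host-facing port-channels are tied together with a matching vpc number on each peer.

    ! N5K-A - FEX 100 is single-homed to this switch
    interface ethernet100/1/1
      switchport access vlan 10
      channel-group 10 mode active
    interface port-channel10
      vpc 10

    ! N5K-B mirrors this with its own FEX (e.g. ethernet101/1/1) and vpc 10

The server then runs a port-channel with one NIC to each FEX.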

The second configuration has been dubbed the dual-homed or active-active configuration. In this scenario, each FEX is dual-connected to both 5Ks via a vPC. This configuration has a couple of downsides. First, the port configuration needs to match on both 5Ks. Also, you’re no longer able to configure a port-channel to the host device from two different fabric extenders. You are, however, still able to do either active-standby NIC teaming from servers to two different fabric extenders, or configure a port-channel that terminates on a single FEX (not on the 2148). You also lose the ability to do FCoE and static pinning.

Dual-Homed FEX
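In contrast, a dual-homed FEX is defined identically on both 5K peers, with the fabric port-channel itself carrying the vPC; this is also why the host-port configuration has to match on both switches. A sketch, with all numbers again illustrative:

    ! Identical configuration on both N5K vPC peers
    interface port-channel100
      switchport mode fex-fabric
      fex associate 100
      vpc 100
    interface ethernet1/1
      switchport mode fex-fabric
      fex associate 100
      channel-group 100

    ! Host ports must be configured the same on both peers
    interface ethernet100/1/1
      switchport access vlan 10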

Other Caveats

  • The 2148 is not capable of doing a standard port-channel using 2 or more ports on the FEX. This is apparently a hardware limitation in the ASICs used on the 2148.
  • A port-channel with (2) 2148s (also called Single Homed Fabric Extender vPC Topology) is limited to 2 ports. Again, see above.
  • The 2148 only supports 1000BaseT… The 2248 has of course brought us 10/100/1000, and the 2232 has 1/10 Gb port options.
  • The 2148 also runs BPDU Guard… permanently. Recent code updates have allowed us to disable BPDU Guard on the 2248, but no such luck on the 2148 line.
  • FEXs cannot be used as a SPAN destination port… a rather handy feature if you happen to have Nexus 5010s, which only support 10Gb.
  • No distributed forwarding on the FEX itself… unlike what a modern line card would offer.

Why am I still deploying FEX?

Well, there are obviously a number of issues I see with the architecture, but overall there are a few key points that I feel continue to make it worthwhile to consider.

  • Simplifies network management by consolidating your edge network, minimizing the number of touch points required when provisioning new services.
  • Simplifies your network topology, not only by reducing your spanning-tree domain, but also by allowing you to take full advantage of uplink bandwidth using vPCs.
  • Flexibility: from the same management point, I can manage multiple switches with a variety of connectivity options.


With all of the gotchas and caveats, making a design choice can be difficult. But if you get in on the ground level and do your research, the Nexus 2000 can be a great asset to your data center. While I’ve certainly had my share of frustrations with them, in the long run I have learned my lessons, and I even plan to deploy more in the future. As for those of you looking for advice on a new infrastructure build-out: for most situations, I’m recommending to my customers and colleagues that they consider deploying Nexus 2248s in a single-homed configuration. This provides the best balance of redundancy and available features.


  1. David Knill says

    Great writeup. One minor nit… the 2248 is 100/1000, not 10/100/1000. It’s bound to burn someone at some point.

        • Weylin Piegorsch says

          To be pedantic: the PHY is 10/100/1000; you can see it in a “show interface capability” output. It’s the NX-OS that isn’t using the 10BaseT capability.

          cumm111-0b05es58# sh inv fex 104 | i "104 CHASSIS"
          NAME: "FEX 104 CHASSIS", DESCR: "N2K-C2224TP-1GE CHASSIS"
          cumm111-0b05es58# sh inv | i Chassis.*55
          NAME: "Chassis", DESCR: "Nexus 5596 Chassis"
          cumm111-0b05es58# sh ver | i system:
            system:    version 7.0(5)N1(1)
          cumm111-0b05es58# sh int e104/1/1 cap | i Speed
            Speed: 10,100,1000,auto

  2. says

    Nice writeup :)

    I think I heard that the inability to dual-home FEXs to both a server (active-active) and the 5Ks at the same time was a software issue that has been rectified in the latest NX-OS releases. I didn’t pay much attention to it, as the source isn’t exactly reliable. Could you please comment on that?


    • says

      As Minhua mentioned above, in 5.1.3 Cisco has added a feature called Enhanced vPC which allows port-channel connectivity to dual-homed FEXes. I have not tested this feature, so I can’t comment on how “green” it is, or whether it’s near production-ready.

    • Brandon says

      This feature is only available on Nexus 5500-class devices, and isn’t currently supported on the Nexus 5010 or 5020.

  3. Minhua Zhu says

    Latest NX-OS 5.1.3 supports host vPC in a dual-homed FEX configuration, now labeled ‘EvPC’. I was hoping for EHvPC.

    • says

      Unfortunately this article was written before EvPC was released, and I have yet to do any testing with EvPC myself. I’d be interested to hear any of your experiences with it though.

  4. Anonymous says

    Nice writeup, Tony. You mention that BPDU Guard can now be disabled in some code on the 2248s? I knew this was the case, and I’ve just checked the latest release notes for both the 2K and the 5K respectively, but can’t see anything mentioned.

    Is this something that is yet to be released to us mere mortals?


    • says

      This is again my fault. IIRC NX-OS 5.0 at least allows you to turn on BPDU Filter on FEX ports (2248 at least) — I was originally told by a colleague that you could disable BPDU guard in a recent version, but apparently he was mistaken.

      My bad for not researching that more before throwing it in the article at the last second.

  5. David Chayer says

    Nice post Tony.  Do either of these designs change if you deploy a Nexus 5548 or 5596 as the L3 layer instead of 7Ks?

    • says

      No. They remain the same, but there are L3 performance limits in the NX5548 that you need to be careful about. The NX5K was designed as a L2 product and the L3 capabilities were retro-fitted and they aren’t seamless. Approach L3 on the NX5K with caution.

        • Kevin Dorrell says

          Tony, thank you for this writeup; it is proving very helpful. Could you (or Greg) point me to some experience (blog, etc.) about these L3 limitations on the 5548?
          I am deploying a couple of 5548s in each of my 2 data centres, and I was figuring on using them for the inter-VLAN routing. It has got to be better than the Cat4500-on-a-1Gig-stick that I am using at the moment.

  6. says

    Of course another advantage of the single-homed configuration is that it allows you to use the full number of FEXes supported by the upstream switch (24 at L2, 8 at L3).  If you do FEX vPC then you’ve basically halved the number of FEXes you can connect to your N5K.  Probably not an issue at L2 for most people (24 FEXes would mean some oversubscription somewhere), but potentially more of a problem at L3.

  7. Jason Costomiris says

    Tony, nice post.  Do you find that your customers have concerns about lack of local switching on the FEX (since you did mention it as a caveat)?  I can see that as a huge downer in the financial services space, where latency is king.  Also, has anyone ever seen any latency metrics on the FEX line, either 2248 or 2232?

    • says

      I’ve been wanting to do some L2 latency comparisons with the 5K/2K FEX in various configurations and compare them with the 6500 and 7000 lines, but due to a lack of lab hardware, I’ve been unable to run any long-term tests.

    • says

      If you are concerned about latency then the Nexus 5000/Nexus 2000 are not good solutions – look at the Nexus 3000 which is Cisco’s low latency switch. Note that the Nexus 3000 is a Broadcom Trident chipset so doesn’t support FEX, vPC or most Cisco Ethernet proprietary features. Since most other vendors use the same chipset, it’s usually better to consider non-Cisco products for low-latency applications.

  8. says

    Great post Tony. Any thoughts on the 7k/2k setups? I’m getting ready to have my first exposure to such a setup and will definitely be taking notes. Lots of them. :)

  9. says

    “FEXs cannot be used as a SPAN destination port… a rather handy feature
    if you happen to have Nexus 5010s, which only support 10Gb.”

    You can still use the first 8 ports as 1-gig ports using any one-gig SFPs. I use GLC-T= or SFP-T= and reserve port 5 or 8 for 1-gig use. The catch is that you need to define speed 1000 under the interface configuration.
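    For example (interface and VLAN numbers are illustrative):

      interface ethernet1/5
        speed 1000
        switchport access vlan 10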

  10. Kevin Dorrell says

    I am interested to know whether the 2248TP sends out BPDUs, i.e. is bpdufilter active? I have always hated the Cisco Catalyst “best practice” of putting bpdufilter and bpduguard on access ports. It seems to be inviting someone to bridge two ports using a non-STP WallMart switch. OK, even with bpdufilter, the ports do leak a few BPDUs as they become active, and that should trigger bpduguard, but there is still scope for bringing down the network with a non-STP switch. That is why, on a Catalyst, I always have bpduguard on access ports, but I avoid using bpdufilter.
    Now, what about the 2248TP? I know that it has bpduguard enforced by default. But someone connected two ports of an HP blade rack to two FEX ports, and the network melted down. I figure that if the HP Virtual Connect was bridging the two ports, and the 2248TP was sending BPDUs, then bpduguard should have cut the loop. But it didn’t. Any comments?

    • Herbmeier says

      Hello Kevin,
      what version of HP virtual connect was in there?
      I am just looking into the possibility that VC can create a network loop.

    • Brandon says

      All fabric extender ports are forced to be STP edge ports with bpduguard enabled; if a FEX port receives a BPDU, it is put into errdisable.

  11. Spike says

    Just a note: the 5010 can support 1 Gb, but only on the first 8 ports; the 5020, on the first 16; and on the 55xx, any port can run at 1 Gb or 10 Gb.
