Use A Cisco Nexus 5K As A Core Instead of a 7K? Isn’t That…Crazy?

One of the challenges of dropping a Cisco Nexus 7K as a core switch into a small or even mid-sized enterprise IT shop is the cost. Have you priced a Nexus 7K? It’s a shocking capex number if you’re a smaller shop, and the ongoing opex for support isn’t cheap either. If you’re on a budget and don’t require the number of 10GbE ports that a Nexus 7K will allow you to scale to, there are many less expensive options. If you’re committed to Cisco as your networking vendor, one such option is a Nexus 5548UP or 5596UP chassis with the L3 option. There are limitations and design considerations for sure, but it’s not as crazy as it sounds once you start thinking it through. Intrigued by this notion? I cover the topic in a lot more detail in a two-part article over on SearchNetworking, hosted by TechTarget.
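
For the curious, “the L3 option” means adding the Layer 3 module and routing license to the 5548UP or 5596UP, then enabling the routing features you need in NX-OS. A minimal sketch of what that looks like (the VLANs, addresses, and OSPF process ID are illustrative, not taken from the articles):

  ! Nexus 5548UP/5596UP with the L3 module: enable routing features (sketch)
  feature interface-vlan          ! SVIs
  feature hsrp                    ! first-hop redundancy toward the access layer
  feature ospf                    ! or eigrp/bgp, per the routing design
  !
  interface Vlan10
    no shutdown
    ip address 10.10.10.2/24
    ip router ospf 1 area 0.0.0.0
    hsrp 10
      ip 10.10.10.1
  !
  router ospf 1
    router-id 10.255.0.1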

Links

  1. Cisco Nexus 5500: A viable core switch for the midsized enterprise? (part 1 – link to TechTarget, register wall)
  2. Implementing Cisco Nexus 5500 as a core switch: Design considerations (part 2 – link to TechTarget, register wall)
Ethan Banks
Ethan Banks, CCIE #20655, has been managing networks for higher ed, government, financials and high tech since 1995. Ethan co-hosts the Packet Pushers Podcast, which has seen over 2M downloads and reaches over 10K listeners. With whatever time is left, Ethan writes for fun & profit, studies for certifications, and enjoys science fiction. @ecbanks
  • Mskeider

    Thanks for the post, Ethan. Just want to comment that there is no policy-based routing in the Nexus 55XX with L3 — unless that has recently changed.

  • Robert Harper

    CAPEX is just one factor to consider.
    -Will the 5k support ISSU with layer 3 enabled?
    -Does the 5k have the chutzpah to enable snmp interface polling and netflow?
    -Will it be easier to manage than a 7k (read eventual vdc for the datacenter and campus distro)?
    -Will personnel costs be lower (see above: collapsed infrastructure via vdc)? Less equipment to manage.

    One of my co-workers came up with the idea to collapse our 4 VSS cat6500s (DMZ and CORE) into 2 Nexus 7Ks. If we have the $$ we may jump at it.

    Bob

    • Ethan Banks

      Agreed that there’s LOTS of other factors to consider, including everything you stated. There’s pros and cons to the solution, and money, while a big factor, is certainly not the only one.

      – No 5K ISSU support with the L3 module as of NX-OS Release 5.1(3)N2(1). So you’d have to plan an upgrade by manually migrating traffic over to the other core switch (one way to drain traffic is sketched at the end of this comment). Dual 5500s are assumed in my scenario.
      – SNMP interface polling. I haven’t looked at the MIB, but it seems sort of crazy that such rudimentary functionality would be missing. I’m going to Cisco tomorrow or Friday, and I’ll ask, since Google isn’t turning up much on that point. I’ll also ask about NetFlow, although for me personally, I don’t have a data collector that could keep up with that much data.

      – I’m assuming the goal is to throw down multiple environments onto the same set of physical hardware using VDCs, thereby reducing the number of devices you have to manage. That’s not a requirement of mine, so I don’t know off the top of my head whether VDC is on the 5500 roadmap. If it is, I’m sure it will be a licensed add-on. I’ll ask at Cisco.

      Personnel cost won’t factor into things much for a small shop, in my opinion. 2 5500s or 2 7Ks – either way, no change in staffing.
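
      Here’s roughly what I mean by manually migrating traffic: a sketch of draining one 5500 ahead of a disruptive upgrade. The group numbers and priority values are illustrative, and it assumes the peer 5500 is left at a higher priority with preempt configured.

        ! On the 5500 about to be upgraded: lower HSRP priority so the peer takes over (sketch)
        interface Vlan10
          hsrp 10
            priority 90        ! peer stays at something higher, e.g. 110, with preempt
        ! Repeat per SVI/group, confirm the peer is active, then upgrade this box.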

      • Robert Harper

        For SNMP, I was just wondering if the CPU is up to snuff for baseline gathering (frequent polls) given the amount of load adding L3 will place on the box. I know that (depending on the device) crawling SNMP MIBs can tax a router/switch. I just priced a 5596 with mostly 10G SR optics at ~$140k.
        As always, the answer is ‘it depends’. Hey, that’s a good show title.

        • Ethan Banks

          I wouldn’t hesitate to do regular SNMP polling on all the interfaces of a 5596. The L3 functionality is going to add a nominal load to the control plane, but I don’t see it mattering significantly. Today, I poll all interfaces on a 2-minute cycle on a pair of 6509s with Sup720s loaded with mostly 48-port line cards. No CPU impact, and that’s with a fully encrypted SNMPv3 session. I’ve only ever flatlined a Cisco CPU with SNMP when doing a full walk from high up in the tree… that’s not a very nice thing to do. But hitting specific OIDs to poll interface octets and error counts? Shouldn’t be a big deal, even with L3 control plane load.
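
          For what it’s worth, the SNMPv3 side of that is simple to stand up on NX-OS. A minimal sketch, with a placeholder user name and passphrases:

            ! Read-only SNMPv3 user with authentication and encryption (placeholder credentials)
            snmp-server user nms-poller network-operator auth sha AuthPass123 priv aes-128 PrivPass456
            ! The NMS then polls specific columns, e.g. ifHCInOctets (1.3.6.1.2.1.31.1.1.1.6)
            ! and ifInErrors (1.3.6.1.2.1.2.2.1.14), rather than walking the whole tree.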

  • Merrill Hammond

    In part 2, you comment that there’s no 40Gb option… Seems like Cisco could make a 4-port 40-gig module pretty easily, since the bandwidth to the slot would be the same.

    • Ethan Banks

      Without knowing what the backplane connector and internal chipset might support, I’ll agree that from a bandwidth perspective, it’s plausible. I’m skeptical that there’d be that much demand for 40GbE on this platform, though – at least not from the market I’m targeting with this approach. OTOH, I could see a 40GbE module being driven by the scenario where the 5596 is being used as an access layer switch, and someone wants 40GbE to carry that aggregated traffic north-south. But then again, you’re getting into a whole different design game, and I really think you’d position something like the Nexus 3016 or 3064 into those spots. Tough call. At this point, I just don’t see it.

  • Eric Helm

    Seems to me that this makes a good case to look at other manufacturer’s gear such as Brocade, HP, Force10, etc. Staff will already have to be trained on or learn NX-OS, so learning a different system wouldn’t be a barrier. One just might be able to get a switch with all the features of a 7K for costs closer to a 5K. Then there are no design trade-offs… What do you think?

    • Ethan Banks

      I focused on Cisco in this case because so many shops (especially small/mid enterprise) are averse to anything that doesn’t say Cisco on the front. I personally would be evaluating the viability of building out a small Brocade VCS fabric if I had a choice. It’s time to start moving away from the traditional core/dist/access layer models. That’s a design that presumes north-south traffic, when we’re seeing a lot more east-west these days, and I don’t see that changing anytime soon. Therefore, I don’t think a big chassis switch sitting at the middle of the data center has to be the answer. With new L2 multipathing options that are out there (particularly TRILL), it starts making sense to look at meshes of smaller density switches instead. Now, if you NEED the big port density…then you need what you need, (in which case using the 5596 as a core switch was never really a consideration). But I think there’s a lot more shops that will get along great with a smaller port count – and smaller switches – as they move into virtualization and the 10GbE world. The issue is then a functional and topological one, which solutions like VCS can address. Juniper QFabric also addresses it, but in a very different (and expensive) way, targeted at a very different sort of customer. HP has no story there as yet, aside from MLAG with IRF, and while IRF can compete with Cisco VSS and vPC, it’s not really comparable to the design options you get with TRILL. Anyway…long answer to a simple question.

      The short answer is, YES, go look at non-Cisco vendors if you can. You can get more functionality for less money, but know what you’re really getting. Ask lots of questions, and never trust the sales team. Don’t accept “yes, it can do that” as the answer. Make ‘em prove every feature before buying, especially the fancy new ones.

      • Eric Helm

        Thanks for the reply, Ethan… Brocade VCS is indeed very interesting for smaller shops. However, there’s no L3 at the moment, unless it’s been recently announced. I’ve also heard rumblings of a “baby” QFabric on the way that would be great for the 95% of shops that can’t afford and don’t need the scale of the current iteration.

        I agree completely with “never trust the sales team”. I see/hear marketing-speak often and frequently assist my clients with bake-offs so we can assess the solution in the proper context and avoid surprises later on.

        • Ethan Banks

          LOL. So, the context of my VCS comment was after sitting through 4 hours of Brocade discussions. I redesigned the main data center in my head about three times as I was listening. Flatter, mostly. But L3 never came up in the conversation. In fact, one of the questions I meant to ask the guy was how to best optimize north-south flows. I never got there, because I got bogged down in a conversation about TRILL and TTL. Then you point out that there’s no L3 in a VCS fabric switch. So, I pull up a VCS reference doc, and whaddya know? There’s routers hanging off the edge of the fabric cloud. http://www.brocade.com/downloads/documents/white_papers/Brocade_VDX_VCX_Use_Cases_WP.pdf

          • Eric Helm

            I may be wrong, but I believe that is how QFabric wants to add L3 to the fabric as well, by hanging an EX or MX switch off the fabric… Although I’ve heard through the grapevine that there may be a QFabric-enabled card coming out for the MX switches.

            In a pure L2 data center fabric, do you think it’s a big deal to have external routers handling north/south traffic when east/west is the bigger concern from a speed, redundancy, and ease-of-management standpoint?

          • Ethan Banks

            No, I think you hit the new design paradigm nail right on the head. If east-west becomes the bulk of the traffic, then a well-designed L2 topology becomes more important than the performance of a core switch having to throw traffic between VLANs. You need the plumbing into the core switch from the upper layer switches, but the core switch doesn’t have to do ALL the heavy lifting in the form of raw packet forwarding performance.

            But obviously, success would depend a great deal on which servers are nailed up in which IP subnets. For instance, IP storage mounts separated by an L3 boundary, or backups happening across VLANs, would end up working the core a lot harder.

          • Gibbon

            I think you may be mistaken here; QFabric has its own L3 capability:
            http://www.juniper.net/techpubs/en_US/junos11.3/topics/concept/layer-3-summary-qfx3000.html

        • Jamie Burch

          I’ve also been interested in the Brocade VCS, but I cannot get past the fact that it has a limit of 24 switches in the fabric and auto-discovers all fabric-mode switches within the L2 domain. That puts a pretty hard stop on scalability, in my opinion.

  • Norgs

    I have a bunch of 7Ks and a bigger bunch of 5Ks (5020 and 5548UP) in an edu in Australia. Right now my 5Ks have 5.1(3)N1(1a) on them, and they are all L2 switches.

    There is NO WAY I would trust a 5K as my core.  

    The features on the 5K might be enough of what you need for your small/mid business, but… will those features work?
    They might in this version, but no doubt some feature you use will stop working when you do an upgrade. Or maybe you will upgrade to fix a bug with a feature you use, and it just won’t work on the next version.

    Then, if you are migrating from IOS devices to these NX-OS devices, there are the features that are not available on NX-OS. That’s probably a story for another post.

    • Ethan Banks

      I agree there’s not feature parity between IOS-SX and NX-OS, which I pointed out in the articles I linked to at TT. So, it’s a game of knowing what you need, and making sure the 5K can do it. The fundamentals are definitely there, and for many shops would work just fine.

      As far as NX-OS bugs go, that’s a tough call. Do you reject a platform because Cisco’s quality control continues to be problematic? Poor QC is a systemic poison in their development process, but I don’t think that problem is isolated to Cisco alone. Are any of the vendors doing code releases well? I’ve been burned so often and for so long by so many vendors that I just assume “fail” out of the gate for any software upgrade I do, but hope for the best.

  • Anton Aksola

    One option to consider would be to use the new 1U 4500X as an L3 edge, maybe in combination with a Nexus 5K (L2 only). You get the benefits of IOS-XE plus all the traditional features of classic IOS. As an added bonus, you can also get 40G support.

    • Ethan Banks

       Ah, yes. The 4500X comes up again. I need to review that one…

    • Bob McCouch

      I’ve been looking at the same for a customer for whom a 7K core is unreasonable and whose older 6500E would require extensive/expensive upgrades to provide decent 10G aggregation density. I’m looking at 4500X cores for 10G downlinks to the access layer and to provide L3 routing, with 5K/2K combos providing all the data center port density. Existing access switches would just migrate to the 4500X in this collapsed-core design.

  • Seyda

    Hi Ethan,
    Ask Cisco about MPLS VPN on the 7Ks. I dislike VRF-lite configurations.

  • Jon

    Hi Ethan,
    I think the bottom line at this point is that it’s a matter of ‘it depends’, right? There are several functions/features of the 5K that take it off my list of core devices.

    1 – The Layer 3 module is a standalone module currently. I wouldn’t consider a 6K a core switch without redundant sups, so a 5K with a single point of failure doesn’t make the cut (granted, you could run HSRP between two 5Ks). Not only that, but it’s basically a router on a stick within the 5K. So you have something like ~150 gig of throughput to the card, but if you hash funny you could end up with a 10 gig connection to your layer 3 forwarding engine.

    2 – The port buffers are too small. We looked at using one for backup distribution and found that the port buffers were even smaller than what we were used to seeing on a 6K line card.

    Otherwise, the switch is appealing. A 5596 with close to a 2-terabit backplane in 2RU is pretty neat.

    Thanks – Jon

    • Ethan Banks

      Oh yes – “it depends” plays hugely here. My argument in this scenario has been for smaller shops that need to consider price/performance because the 7K in pretty much any configuration is a budget-buster. There are plenty of shops where a 5596UP will be good enough…and many where it will not.

      I’ve never tried to position the 5596 as a big data center core switch here, although a lot of the objections I’m getting to the idea seem to be from folks with requirements that imply full-on data center needs with lots of pods.

      It’s a straightforward matter to configure a pair of 5596s in a vPC configuration with HSRP. I can’t imagine someone wanting to place a single 5596 in any configuration other than access-layer ToR, and even then only in certain situations. None of the Cisco config guides suggest such a thing…all their guides point towards vPC with MEC plumbing to everything upstream. So, I think 2 5596s configured thusly is just fine.
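
      A minimal sketch of that dual-5596 arrangement (one switch shown; the VLANs, addresses, and port-channel numbers are illustrative):

        feature vpc
        feature hsrp
        feature interface-vlan
        !
        vpc domain 10
          peer-keepalive destination 192.0.2.2 source 192.0.2.1
        !
        interface port-channel1
          switchport mode trunk
          vpc peer-link                ! vPC peer-link between the two 5596s
        !
        interface port-channel20
          switchport mode trunk
          vpc 20                       ! MEC down to an access or aggregation switch
        !
        interface Vlan100
          no shutdown
          ip address 10.100.0.2/24
          hsrp 100
            ip 10.100.0.1              ! gateway address the rest of the network points at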

      I have mixed feelings about dual sups. I can build the case both for (it usually works, reduces packet loss during IOS events) and against (cost, doesn’t always failover properly, introduces other issues). I have no issue with a pair of single sup devices, assuming the environment can tolerate a nominal hit if one of them goes down.

  • Stevenjwilliams83

    We have just moved to a 5596, and we feel that the L3 module is not a true L3 engine. It looks to us like it doesn’t support routing protocols, which is a huge disadvantage in a core switch. I don’t think the 6500 has been replaced with anything but a 7K in the core at this point. I think 6500s are still very widely used, and I don’t see many companies moving away from them anytime soon.

  • greg padden

    Also, you get a max of 28K MAC addresses. Doesn’t work for us.

  • chris stand

    The 5672 and 56128 are a better choice now.