Show 38 – Comparing Data Centre Fabrics from Juniper, Brocade and Cisco

Kurt Bales has a customer who wants to buy a new Data Centre Network, and the three main networking vendors (Juniper, Cisco and Brocade) have pitched to him and the customer. Kurt then contacted the Pushers and said: "This would make a great podcast to talk about how it looks, works and the reality of the so-called 'Data Centre Fabric' networks, plus I've got some questions that I'd like some second opinions on."

So we rounded up Ivan from IOS Hints and Greg from EtherealMind to record a fast, furious and focussed look at the state of play with the three data centre fabrics today. Lots of speculation, wild guesses and deep diving followed. I learned heaps.

Topics that we covered:

* Data centre fabric design, TRILL, Borg vs Big Brother approaches
* FCoE, iSCSI, NFS, routing
* Juniper: QFX3500, QFabric, ERX, ICN
* Brocade: VDX, VCS, FSPF
* Cisco: Nexus 7000, Nexus 5000

Name: Ivan Pepelnjak

Web: http://blog.ioshints.info Twitter: @ioshints

Name: Kurt Bales

Web: http://www.network-janitor.net Twitter: @networkjanitor

Name: Greg Ferro

Web: http://www.etherealmind.com Twitter: @etherealmind

Links and Posts

This post is where Ivan looks into TRILL and STP interaction at the edge of the L2 network. IOSHints TRILL/FABRIC PATH – STP INTEGRATION

Here is the post where Ivan outlines the Borg / Big Brother architectures. THE DATA CENTER FABRIC ARCHITECTURES

Brad Hedlund’s post on Inverse Virtualisation – it seems Cisco might be forgetting that there is more than one way to do it, and Brad talks about the other ways to do it while pointing out that the way HE does it is best. Keep up the good work Brad and stay “on message” for Cisco – keep pulling for the team.

Greg’s post at EtherealMind.com on controller-based networks might be worth contrasting with the move away from controller-based networks in wireless (AeroHive, HP): ‘Big Boner’ APs and Wireless LAN Controllers


Feedback

Follow the Packet Pushers on Twitter (@packetpushers | Greg @etherealmind | Tom Hollingsworth), and send your queries & comments about the show to [email protected].  We want to hear from you!

Subscribe in iTunes and RSS

You can subscribe to Packet Pushers in iTunes by clicking on the logo here.

Media Player and MP3 Download

You can subscribe to the RSS feed or head over to the Packet Pushers website to download the MP3 file directly from the blog post for that episode.


Greg Ferro
Greg Ferro is a Network Engineer/Architect, mostly focussed on Data Centre, Security Infrastructure, and recently Virtualization. He has over 20 years in IT, working as a freelance consultant for a wide range of employers, including finance, service providers and online companies. He is CCIE#6920 and has a few ideas about the world, but not enough to really count. He is a host on the Packet Pushers Podcast, blogger at EtherealMind.com and on Twitter @etherealmind and Google Plus.


  • John
    • http://etherealmind.com Greg Ferro

      Fixed, thank you.

  • Jonathan Hurtt

    Great show again, but no talk about 802.1aq/Shortest Path Bridging in the Data Centre context… With HP joining Avaya, Alcatel-Lucent and Huawei in supporting SPB, will Packet Pushers record a show on a ratified standards-based solution that fits in the Data Centre? Or if you don’t think 802.1aq/SPB could answer the needs of a Data Centre, I would like to hear why…

    Also maybe a show about whether you see these Data Centre Fabrics extending to the Campus/Core and out to the closet. Seems like a way to simplify the network (also allow for multi-tenant networks, and with MACinMAC transport in the core and DC, would it help with migration to IPv6… just a thought)… can’t wait for the show on OpenFlow…

    Let it be known that Avaya's Implementation of SPBm (IEEE 802.1aq with IETF Draft enhancements) allows for dual homing into the cloud.

    (Full Disclosure: I work for Avaya, but this comment is not an official Avaya Comment)

    • http://etherealmind.com Greg Ferro

      My response is here.

      Respect for disclosure. Thank you.

      • Jonathan Hurtt

        Thank you sir.

        • http://etherealmind.com Greg Ferro

          If anyone from Avaya marketing wants to explain why it's better, and why they have a good story, then by all means get in contact. Sometimes all that's needed is a discussion and new information to change my point of view. I'm a customer, and I don't always have all the information.

          • Jonathan Hurtt

            Completely understood… Just to be clear, if you look at SPB and TRILL at a technical level (not Layer 8-10), they are very similar, and some could say SPB is a more viable option for many enterprises for the sole fact they can, as you say, "hack the existing silicon"…

            If you want to take a look at a whitepaper Avaya has written to compare and contrast TRILL vs SPB, this is a good start for seeing the differences between SPB and TRILL.

            Once again, Avaya uses SPBm (not SPBv) and also has IETF draft extensions that allow some features enterprises require today. This information is covered in the document.

            Link to "Compare and Contrast SPB and TRILL": http://goo.gl/ZBMT0

  • Carlos

    I got the impression from Omar Sultan's words @ runt packet of 12/7/10 that FabricPath was equivalent to TRILL at the data plane. Now I gather that the frame format is not the same?
    Omar did say that the hardware was OK to run either… and implementing TRILL once the control plane was settled would be done in software, right?

  • Michal

    Hello Gentlemen,
    Interesting talk, and BTW, speaking about different new layer 2 c-plane trends, I think this draft can be very complementary to that discussion:
    http://tools.ietf.org/html/draft-raggarwa-sajassi

    Cheers,
    Michal

  • Sam Stickland

    I had a presentation from Brocade last year on their fabric. IIRC, they said they could balance single flows over multiple links because their fabric was aware of both the load and latency of each link. I'm sorry I can't remember more details, but I do remember them giving an example of it being aware that two links between two switches may have different cable lengths, and it takes that into account when load balancing.

  • http://www.asi.com.au Michael Schipp

    Brocade ISL

    Single flow, load balancing from switch to switch = ISL (Inter Switch Link).

    Nothing new to FC world. VERY new to Ethernet.

    Frame-based load balancing – cool.

    Saw a demo of one (1) flow of FCoE traffic spread over two TRILL paths (one of 3 10GbE links and one of 1 10GbE) – links dropped one at a time and load balancing works. Not like a LAG.

    Brocade VDX supports a current max of 12 switches in a fabric – max of 60 × 12 10GbE ports in a single fabric (720 10GbE ports).

    That is shipping today.

    Now scuttlebutt:
    * Single management due ~ October 2011
    * Layer 3 will be added ??? Q1 2012 (guess, but Brocade has stated that it is coming)

    VDX also offers POD (Ports on Demand) licenses – new to Ethernet – e.g. buy the 60-port switch and pay for 40 ports, then upgrade/license the remaining 20 ports in 10-port licenses = pay as you grow.

    IS-IS TRILL date?????????
    As stated in the PODCAST, two switches can form a fabric (with no license) – up to 12 switches can form a fabric (with license – not stated).

    Anyway a good way to spend an hour :)

    P.S. ASI Solutions is a Brocade Elite Partner and an Arista Networks Partner. I do not work for a vendor; these are MY views only.

    Thanks
    Michael.

  • Pingback: Brocade Virtual Cluster Switching revisited « Mes 2 cents

  • Pingback: Cisco’s new data center fabric | The IT Manager (ITMGR.org)

  • iris

    Great podcast