Show 51 – Juniper QFabric

The much-anticipated show about Juniper QFabric has finally arrived. Although we had some recording problems, I've done my best to reduce the impact; apologies in advance for the poor audio quality.

Juniper QFabric is a different approach to building a switched network core. It uses what I call an "expanded backplane" concept to create a One Layer data centre switch fabric, extending the backplane from a single switch out to a multi-switch design.

The topics we looked into are:

  • QFabric – The elevator pitch
  • Where is the switching done?
  • Where is the routing done?
  • How does the fabric scale?
  • How does the switch interconnect perform? Where are the performance bottlenecks?
  • Linear pricing models.
  • Handling FC/FCoE.
  • The 10,000-foot view of the Clos architecture in the interconnect switch that facilitates any-to-any connectivity.
  • Security considerations.

Product information can be found on the QFabric Marketing Launch page.

Related

Greg has a number of posts:

Hosts

Tom Hollingsworth Web: http://networkingnerd.wordpress.com | Twitter: @NetworkingNerd

Ethan Banks Web: http://packetattack.org | Twitter: @ecbanks

and last, and the very least:

Greg Ferro Web: http://etherealmind.com | Twitter: @etherealmind

Feedback

Follow the Packet Pushers on Twitter (@packetpushers | Greg @etherealmind | Tom @NetworkingNerd), and send your queries & comments about the show to [email protected]. We want to hear from you!

Subscribe in iTunes and RSS

You can subscribe to Packet Pushers in iTunes by clicking on the logo here.

Media Player and MP3 Download

You can subscribe to the RSS feed or head over to the Packet Pushers website to download the MP3 file directly from the blog post for that episode.

 

Greg Ferro
Greg Ferro is a Network Engineer/Architect, mostly focussed on Data Centre, Security Infrastructure and, more recently, Virtualization. He has over 20 years in IT with a wide range of employers, working as a freelance consultant in finance, service providers and online companies. He is CCIE #6920 and has a few ideas about the world, but not enough to really count. He is a host on the Packet Pushers Podcast, blogs at EtherealMind.com, and is on Twitter @etherealmind and Google Plus.

  • http://twitter.com/ccie15672 ccie15672

    Anticipated indeed.  Awesomeness +1 for PP…

  • http://twitter.com/maschipp Michael Schipp

    OK, cut and paste sucks on this one. Sorry, too lazy to fix it up; the content is still there :)

    Some thoughts.

    QFabric

    Where does the Director sit? E.g. there is no room in the interconnect rack, so let's say Rack 5.

    Layout: 9 racks in total; 8 edge racks with a 48 x 10G switch each, plus a rack holding Interconnect1 and Interconnect2.

    9 racks
    8 x 48 = 384 10G ports
    Edge ports oversubscribed: 480G into 80G = 6:1 (a worked sketch of this arithmetic follows Abner's reply below)
    Usable edge ports: 384 x 10G @ 6:1 oversubscribed

     

    Brocade VCS

    Layout: 8 racks, each with a 60 x 10G switch.

    8 racks
    8 x 60 = 480 10G ports
    Edge ports oversubscribed: 480G into 120G = 3.2:1
    Usable edge ports: 384 x 10G @ 3.2:1 oversubscribed

     

    Now, what if we want no oversubscription?

    8 racks
    8 x 60 = 480 10G ports
    Edge ports non-oversubscribed: 300G into 300G = 1:1
    Usable edge ports: 240 x 10G @ 1:1, non-oversubscribed

     

    Current max scale for Brocade VCS is 12 switches, i.e. 60 x 12 = 720 10G ports.

    Brocade uses ISLs for frame-based load balancing. What does Juniper do on its 40G links? I do not know. Is it flow-based?

     

    Brocade interconnects, or ISLs as they call them, are multiple 10G ports; think Twinax or SR optics vs. 40G optics. I think I can guess which is lower in price.

    With Brocade, you can start your patch of green at 2 x 24G switches.

    Juniper: I am guessing the price of the two interconnects alone will mean you might just need the 500 10G ports to pay for it.

     

    Brocade VCS has been out since December 2010, vs Juniper still coming.

    Juniper needs the interconnect(s) and director(s); Brocade does not.

    Brocade has an out-of-band management port; Juniper does not.

    • http://twitter.com/abnerg Abner Germanow

      Hi Michael,

      Thanks for taking the time to take a closer look at Juniper. I’m not going to dissect your post, and I’m not sure I understand the formatting of rack by rack, but a few notes as you seem to be comparing apples to oranges:

      1. For scale in the 10's or 100's of ports where you want a pool of compute to seamlessly move VMs around, Juniper has been shipping Virtual Chassis for 3 years. For QFabric, you are correct that 500 10G ports is a big place to start for most enterprise datacenters, but there are an increasing number of datacenters that need much more than that. That 500+ number will come down; envision a blazingly fast interconnect in a much smaller form factor. Also, you will find the rack space comparisons will change significantly as you scale out to 1000s of ports. See this post for how one of our customers uses Virtual Chassis for his servers: http://www.myteneo.net/blog/-/blogs/how-i-use-juniper-4200-for-servers

      2. If you are getting started today, you can use the QFX 3500 (shipping since 1Q11) as a TOR, connected to an existing core Ethernet switch: Cisco, Juniper, Foundry/Brocade, it doesn't matter. Then as you scale up, you can move off that legacy core to the interconnect and director (2U; you need at least 2 of them, but can add more as you scale into the 1000's of ports). See this paper for more info: http://www.juniper.net/us/en/local/pdf/whitepapers/2000387-en.pdf

      3. With the QFX 3500, you are looking at 48 x 10G ports and 4 x 40G uplinks into the interconnect: 480/120G = 4:1 oversubscription fully loaded. Or you can run at line rate if you don't fully populate the QFX 3500. If you use the QFX 3500 just as a standalone TOR, it's a 63-port 10G switch.
      More here: http://www.juniper.net/us/en/products-services/switching/qfx-series/

      Thanks for taking the time to compare what you know to new and alternative architectures. There are a lot of changes and options for network professionals to evaluate.

      Cheers,
      Abner @ Juniper
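
    For reference, here is a minimal sketch of the oversubscription arithmetic used in this thread. It assumes the figures quoted above by Michael and Abner (48 x 10G edge ports per switch, with 80G or 120G of uplink towards the interconnect); these are the commenters' illustrative numbers, not vendor-verified specifications.

        # Oversubscription ratio = edge-facing bandwidth / uplink bandwidth, per switch.
        def oversubscription(edge_ports: int, edge_gbps: int, uplink_gbps: int) -> float:
            return (edge_ports * edge_gbps) / uplink_gbps

        # Michael's QFabric example: 48 x 10G edge into 2 x 40G uplinks (one per interconnect).
        print(oversubscription(48, 10, 80))    # 6.0 -> 6:1

        # Abner's QFX 3500 figure: 48 x 10G edge into 120G of uplink.
        print(oversubscription(48, 10, 120))   # 4.0 -> 4:1

        # "Line rate if you don't fully populate": with 120G of uplink,
        # at most 120 / 10 = 12 edge ports can run at 1:1.
        print(120 // 10)                       # 12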

  • http://twitter.com/chubirka Michele Chubirka

    This show was awesome! I sent it to my Juniper SE and he said it was great.  Packetpushers makes me seem so much smarter at work, I’m almost afraid to share it with my co-workers ;-). Also can’t wait to use the expression “unicorn tears” in a meeting.

  • http://twitter.com/TheParadiso Paul Paradiso

    Another great show, Greg! I love the patch of green in the brownfield. Definitely going to use that. When you get to a ‘best of show’ milestone, be sure to put that in there. And you can’t forget the clean air bum bugle quote! :)

  • Marek_0564

    What about FWs? Most server-to-server communication goes through at least one layer of FW. Is the 3500 a stateful FW as well? I'm not sure I understand how firewalling will be done with this architecture.
