Show 34 – Breaking the Three Layer Model


Tom Hollingsworth | Twitter: @NetworkingNerd

Brandon Carroll | Twitter: @brandoncarroll

John McManus | Twitter: @_johnmcmanus_

Topic 1 Juniper QFabric Announcement

Is this the point where we start to re-think the three layers of network architecture? Will future generations laugh at core, distribution, and access designs? What are the implications for future designs?

Cisco's response is kind of hysterical.

Greg's post about the diverging Ethernet switch markets.

Greg's post on controller-based networks.

Topic 2 – Who’s scared of multivendor networking?

Does anyone remember those days? There seems to be a drive toward proprietary technology around the Data Centre these days (QFabric, Unified Fabric), and this does not help with multivendor environments.

Topic 3 – What impact does cloud computing have on network design for the network design engineer?

I see a network as a bunch of cuboids linked together that allow communication between endpoints; I see no fluffiness here.

Topic 4 – The Last of the Bogons / Bogon begone

RT @_JohnMcManus_: BOGON updates tonight, feeling all sentimental as this should really be the last time :( bye bye BOGONS
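As a send-off for the bogons, here is a minimal sketch of what a bogon filter checks. The prefixes below are only a small illustrative sample (RFC 1918, loopback, link-local); a real filter would use the full, maintained bogon list, which is exactly what was being retired.

```python
# Illustrative bogon check using a small sample of well-known
# unroutable prefixes; a production filter would load the full list.
import ipaddress

BOGON_PREFIXES = [
    ipaddress.ip_network(p)
    for p in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
              "127.0.0.0/8", "169.254.0.0/16", "0.0.0.0/8")
]

def is_bogon(addr: str) -> bool:
    """Return True if addr falls inside any listed bogon prefix."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BOGON_PREFIXES)

print(is_bogon("10.1.2.3"))   # True: RFC 1918 space
print(is_bogon("8.8.8.8"))    # False: allocated, routable space
```

The point of the update, of course, is that with IPv4 space fully allocated, the unallocated-space portion of such lists no longer needs regular maintenance.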

Topic 5 – Backplanes and Frames – User Question

From: Travis Marlow [email protected]

Message: Does a tagged frame stay tagged when it enters a switch and traverses the backplane? I have always wondered this and haven’t found the answer in all of my studies.
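For reference, the 802.1Q tag is four bytes inserted after the source MAC: a TPID of 0x8100 followed by the TCI (3-bit priority, 1-bit DEI, 12-bit VLAN ID). Whether that exact tag survives the trip across the backplane is vendor-specific; switches typically map the VLAN into an internal header on ingress and re-tag on egress trunk ports. The sketch below (a hand-built frame, not captured data) shows where the tag sits on the wire:

```python
# Parse a raw Ethernet frame and detect an 802.1Q tag (TPID 0x8100).
import struct

def parse_ethernet(frame: bytes):
    """Return VLAN/priority info if the frame carries an 802.1Q tag."""
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype == 0x8100:           # 802.1Q tag present
        (tci,) = struct.unpack("!H", frame[14:16])
        vlan_id = tci & 0x0FFF        # low 12 bits: VLAN ID
        pcp = tci >> 13               # top 3 bits: priority
        (inner_type,) = struct.unpack("!H", frame[16:18])
        return {"vlan": vlan_id, "pcp": pcp, "ethertype": inner_type}
    return {"vlan": None, "ethertype": ethertype}

# Hand-built tagged frame: VLAN 100, priority 5, inner EtherType IPv4.
frame = (b"\xff" * 6 + b"\x00" * 6 +
         b"\x81\x00" + struct.pack("!H", (5 << 13) | 100) + b"\x08\x00")
print(parse_ethernet(frame))  # {'vlan': 100, 'pcp': 5, 'ethertype': 2048}
```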

Cheesy Bogon Jokes

A Bogon walks into a bar. The Bartender says… You’re not supposed to be here!

A Bogon walks into a bar. The Bartender says….. I thought you were extinct !

A Bogon walks into a bar and says to the Bartender… Take me to your leader!


Follow the Packet Pushers on Twitter (@packetpushers | Greg @etherealmind | Tom @NetworkingNerd), and send your queries & comments about the show to [email protected]. We want to hear from you!

Subscribe in iTunes and RSS

You can subscribe to Packet Pushers in iTunes by clicking on the logo here.

Media Player and MP3 Download

You can subscribe to the RSS feed or head over to the Packet Pushers website to download the MP3 file directly from the blog post for that episode.

Greg Ferro
Greg Ferro is a Network Engineer/Architect, mostly focussed on Data Centre, Security Infrastructure, and more recently Virtualization. He has over 20 years in IT with a wide range of employers, working as a freelance consultant in Finance, Service Providers and Online Companies. He is CCIE #6920 and has a few ideas about the world, but not enough to really count. He is a host on the Packet Pushers Podcast, a blogger, and is on Twitter @etherealmind and Google Plus.


  • rscheff

    I find it funny that everyone is overlooking the real roots of all this multipath technology – SecureFast by Cabletron from the dark age (1996-2002).

    Fully meshed L2 (with L3 lookup) networks already with 100 Mbit / 1GE switches back then, even documented in IETF RFCs.

    However, for the DC, the real question is:

    How does that address Incast? (It doesn't by itself; you can only throw more (usable/useful) bandwidth at the problem, pushing it out. Building fat-tree designs requires intelligent ECMP, or a separate control fabric which oversees the whole network.)

    How does that address Latency? How does it address Bufferbloat? Is this compatible with ECN marking (Broadcom chips can do this even when in L2 forwarding mode), and QCN (802.1Qau)?

    For the DC, you want your Queues to be nearly empty (vendors appear to be building DC switches with many GB of buffer RAM instead…), while maintaining high throughput – which requires some proper control feedback (ECN – TCP, QCN – L2/L3 flows classified in the MAC layer).
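[Ed.: the "intelligent ECMP" mentioned above usually means flow-hash path selection. A minimal sketch, with made-up addresses: hashing the 5-tuple keeps every packet of a flow on one path (avoiding reordering) while spreading distinct flows across the available links.]

```python
# Flow-hash ECMP sketch: deterministic path choice from the 5-tuple.
import hashlib

def ecmp_pick(src_ip, dst_ip, proto, src_port, dst_port, num_paths):
    """Pick a path index deterministically from the flow 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Same flow always maps to the same path; different flows spread out.
a = ecmp_pick("10.0.0.1", "10.0.0.2", 6, 49152, 80, 4)
b = ecmp_pick("10.0.0.1", "10.0.0.2", 6, 49152, 80, 4)
print(a == b)  # True: deterministic per flow
```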


  • Mark

    You mentioned all the pitfalls with proprietary technology around the Data Center (QFabric, FabricPath, L2MP, etc.). Are the market or technology forces strong enough to make a customer commit to proprietary technology? What are they?

    • Greg Ferro

      Mark, as you are a Cisco employee it is courtesy to declare your interest before you ask questions.

      For a fee, I would be pleased to offer marketing advice. Don't hesitate to get in contact.

  • Fernando

    I just listened to the show this morning, great stuff as usual! I am truly thankful that you all take the time to create this series. I have reviewed it on iTunes and have mentioned to others as well.

    I do have a couple of comments (sorry if they run too long):

    – I know we all 'hate' NAT and can't wait for IPv6's end-to-end model. Count me as naive, but I can't imagine that we'll reach some "nirvana" where this will be broadly true in corporate networks. NAT, clunky though it may be, is here to stay. Reason for that would be 'security' in the sense of not disclosing more information than needed.
    I know "port-scanning" an IPv6 subnet is not viable, but there will be organizations – several of them, I think – that will not be comfortable with details of their network topology being freely available. Using a crude but possibly useful analogy, remember how Woodward & Bernstein used the Committee to Re-elect the President's phone directory during the Watergate investigations.
    Still on the phone analogy, any call that you make to a call-center is routed through a gateway right? Or sending paper mail, how about the use of PO Boxes? Or money transfers (Bank A doesn't have full visibility of Bank B accounts, but uses a clearing house instead)?
    I think that as a society we realize there is a time and place to end-to-end connections but also for mediated/proxied/controlled ones. Network-wise it will be the same. Be it dual-stack load balancers, firewalls with some sort of NAT66, whatever, but it will be there.
    I humbly suggest that arguing that NAT will "make life harder" or will "break some protocols" will be a weak argument when compared to how the business wants to run.

    – On the topic of proprietary versus standards, I too prefer to recommend standards whenever possible. That being said, we should recognize that standards-based solutions lack at least two things that are often critical in business:
    – timeliness. A proprietary solution will usually be available *well* before a *fully-equivalent* standards-based one. Look at corporate messaging: how long was it before the standards-based solutions evolved to the point of providing something "equivalent" to Notes or Microsoft Exchange? Some might argue "never".
    – turn-key, likely easier to deploy. In this day and age of "do more with less", professionals just don't have the cycles. The choice between deploying "fire-and-forget" EIGRP in x hours or "open" OSPF in 2x hours might not be an option given business constraints.

    If we don't address these two areas and indicate clearly how the issues can be "mitigated" and also the BUSINESS benefits of the standards-based solution, that solution will be vulnerable to those who oppose it. Imagine your typical proprietary vendor account executive golfing with your CIO, "did you know that using our HyperNewSuperCloudFabric you can get your project deployed faster and easier than the way your people are doing it now? They should look into it…"

    Again, I like and recommend standards-based wherever possible.


    (Disclosure: I work for a vendor – Crossbeam – but I'm offering the comments as a fellow professional. If my opinions seem to favor my employer, you have it backwards: I work where I do because of the opinions that I have. Post hoc ergo propter hoc.)