Show 35 – Media Markup – A Garden of Switches


This is the Packet Pushers Media Mungle, a new format where we gather people from the technology media to sit down around the virtual workbench and look back at events with a beady media eye. We'll take a closer look at what's been happening and discuss it in a little more detail. We've got a list of topics to discuss, and here's who's who in the zoo today.


Mike Fratto, Editor, Network Computing | Twitter: @mfratto

Shamus McGillicuddy, Director of News and Features | Twitter: @shamustt

Matters At Hand

Juniper QFabric and its impact on the market

Cisco SecureX strategy – is there a plan? First it was Borderless Networks, now it's SecureX. Along the way we've lost a lot of products and seen a lot of delays.

Any news from Brocade? Nah, still nothing.

There is also the Huawei Symantec and Force10 deal, which could be interesting. It breathes some life into Force10.


Follow the Packet Pushers on Twitter (@packetpushers | Greg @etherealmind), and send your queries & comments about the show to [email protected]. We want to hear from you!

Subscribe in iTunes and RSS

You can subscribe to Packet Pushers in iTunes by clicking on the logo here.

Media Player and MP3 Download


You can subscribe to the RSS feed or head over to the Packet Pushers website to download the MP3 file directly from the blog post for each episode.

Greg Ferro
Greg Ferro is a Network Engineer/Architect, mostly focussed on Data Centre, Security Infrastructure, and recently Virtualization. He has over 20 years in IT, working as a freelance consultant for a wide range of employers including Finance, Service Providers and Online Companies. He is CCIE #6920 and has a few ideas about the world, but not enough to really count. He is a host on the Packet Pushers Podcast, a blogger, and on Twitter @etherealmind and Google Plus.


  • Dan

    Greg, regarding HP's FlexFabric, why do you so dislike it?

    If I want 10G to the HP blade servers, the server guys won't hear of 256 cables in the rack. (4 NICs per blade running ESX * 16 blades per chassis * 4 chassis in a rack = 256.)

    With Flex Fabric I can:
    1. Have fewer uplinks from each chassis (2 or 4 10G + 2 or 4 FC 8G). Total 32 per rack!
    2. I can use vNICs to give the ESX servers their two public and two private ports.
    3. I don't need to buy HBAs. G7 servers come with CNAs for "free" -> this alone saves tons of money.
    4. FCoE terminates at the right place: the first hop, less than one meter :)
    5. It's not really a switch. It's almost like a vSwitch! Very little to configure. No spanning tree! After two years with more than 50 ESX servers I've had zero problems with vSwitches.
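Dan's cable arithmetic above can be sketched as a quick back-of-the-envelope calculation. All figures are the example numbers from his comment (taking the high end of the "2 or 4" uplink counts); the variable names are mine:

```python
# Dan's example figures: per-NIC cabling vs. FlexFabric chassis uplinks
nics_per_blade = 4        # NICs per blade running ESX
blades_per_chassis = 16
chassis_per_rack = 4

# Direct cabling: one cable per NIC leaves the rack
direct_cables = nics_per_blade * blades_per_chassis * chassis_per_rack
print(direct_cables)      # 256 cables per rack

# FlexFabric: only the chassis uplinks leave the rack
uplinks_10g_per_chassis = 4   # "2 or 4" 10G uplinks, high end
uplinks_fc_per_chassis = 4    # "2 or 4" 8G FC uplinks, high end
flexfabric_cables = (uplinks_10g_per_chassis + uplinks_fc_per_chassis) * chassis_per_rack
print(flexfabric_cables)  # 32 cables per rack
```

An eight-fold reduction in rack cabling, on these assumptions, which is the heart of Dan's argument.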

    I checked the Flex Fabric switches in my lab, they just work.

    I don't see any other options. Do you have any suggestion?

    • Greg Ferro

      As a designer, it's a crappy mess of confusing SKUs and technologies. This part only works when the sun is in Uranus, and that part is needed when cross dependency Y meets the Sign of the Lion in Mars.

      – it's feature deficient.
      – software patches every other week.
      – can't find the software patches.
      – the scaling strategy is abysmal. And expensive, and requires licenses that fail when things change.
      – the documentation is trash. Nothing explains how it all goes together.
      – you can't find the documentation on how it works.

      That'll do for starters.


      • Dan

        I agree about it being feature deficient. But for me the features are enough.

        I found the documentation rather OK, especially after reading the "VC for Cisco admins" guide and the cookbooks with to-the-point examples.

        I understand that a new product is a little unstable at first, and if it's still not stable I don't want to be there. I'll check it out, this is important!

        What about the scaling strategy? What do you mean by that?

        Can you think of other options for 10G connectivity for HP servers which do not involve 256 cables per rack (not including the FC cables)?

  • nick

    Hi guys, just a quick note to say keep up the excellent work. I'm an avid listener of your shows, which have helped me in a number of discussions, debates and decisions! I am spreading the word on a daily basis….

    On a recent show, there was a discussion around the use of proprietary protocols within designs, and the pros/cons. I also recall a discussion a number of months ago where it was mentioned (allegedly) that Cisco may be ceasing development of EIGRP within IOS. I can't find any information on this and was wondering whether it is indeed the case or hearsay… this will assist in a number of key strategic decisions.