Show 88 – Server Internals and Network Performance

Daniel Bowers and I met at VMworld 2010 and have had an ongoing discussion about server architectures and how they impact network performance. I convinced him to come on the show and talk broadly about what goes on inside a server, focusing mostly on how server performance affects network performance. I wouldn't call this a deep dive; it's more of an overview of some ideas to keep at the top of your mind.

This show was recorded on 4th October 2011. It’s taken a while to find a slot where we can publish this show – we’ve got too much to talk about.

  • PCI Express bus connections can support 10GbE.
  • PCI Express is a point-to-point connection.
  • Memory performance affects network performance.
  • You may get better performance with fewer memory modules, depending on the type of memory bus in use.
  • Physical slots in the chassis have different properties.
  • Servers don't make good switches.
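To put the first bullet in perspective, here is a rough back-of-the-envelope sketch of PCIe usable bandwidth. The per-lane transfer rates and 8b/10b encoding overhead come from the PCIe 1.x/2.0 generations current at the time of this show; real throughput will be somewhat lower once protocol overhead is counted.

```python
def pcie_usable_gbps(lanes, gt_per_s=5.0, encoding_efficiency=0.8):
    """Approximate usable PCIe bandwidth per direction, in Gb/s.

    PCIe 1.x and 2.0 use 8b/10b line encoding, so only 80% of the
    raw transfer rate (GT/s) carries data. Defaults assume PCIe 2.0
    (5 GT/s per lane).
    """
    return lanes * gt_per_s * encoding_efficiency

# A PCIe 2.0 x8 slot: 8 lanes * 5 GT/s * 0.8 = 32 Gb/s,
# comfortably enough for a dual-port 10GbE NIC.
print(pcie_usable_gbps(8))

# A PCIe 1.x x4 slot: 4 lanes * 2.5 GT/s * 0.8 = 8 Gb/s,
# which cannot sustain 10GbE at line rate.
print(pcie_usable_gbps(4, gt_per_s=2.5))
```

This is also why the "physical slots have different properties" bullet matters: two mechanically identical slots may be wired with different lane counts or generations, giving very different ceilings for the same NIC.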


Daniel Bowers is a server design engineer and marketer who analyzes server architectures and performance for the IT research firm Ideas International. He's also a primary representative at SPEC and TPC. Follow him on Twitter, or read his blog.

Show Links

Not All Servers Are Alike (With DNA)



Greg Ferro
Greg Ferro is a Network Engineer/Architect, mostly focused on Data Centre, Security Infrastructure, and recently Virtualization. He has over 20 years in IT with a wide range of employers, working as a freelance consultant in Finance, Service Providers and Online Companies. He is CCIE #6920 and has a few ideas about the world, but not enough to really count. He is a host on the Packet Pushers Podcast, a blogger, and on Twitter @etherealmind and Google Plus.


  • Derick Winkworth

    Another one I would have liked to have been on…  ARGH.  So many opinions.  So few outlets for forcing them on other people!

  • Dan

    I am not sure that when Daniel was talking about software switching, he was talking about vSwitch/NX1K…

    • Daniel Bowers

      I was talking about software-based switching on x86 where traffic enters or leaves the physical host.  I wasn’t specifically talking about the 1000V or any vDS.

      Traffic that stayed on the same physical host (that is, VM to VM without leaving a single 1000V VEM) wouldn’t see bottlenecks due to the PCIe bus, and shouldn’t require as much memory bandwidth. 

      I haven’t seen any 1000V performance and throughput numbers involving more than 1 physical 10GbE port, but that would be enlightening to see.

  • Walter Gibbons

    Outstanding topic for a podcast. It was very enlightening to see issues that lay beyond the NIC. Great work guys! Keep it up.

    • Etherealmind

      Networking connects to servers, so I thought it would be worth talking about that. Thanks for saying nice things!