Show 27 – Layer 2 Data Centre Interconnection

Scott Lowe and I planned to have a discussion around L2 Data Centre Interconnection at VMworld 2010 in Copenhagen, but we could not meet up. So we recorded this podcast instead, to start talking about some of the issues, technologies and solutions around L2 DCI. I’m not sure that we have all the technology or knowledge in place, so keep watching for more discussions in the future. We have only scratched the surface. If anyone else wants to discuss the topic, please get in contact, as more would be better.

Because the recording lasted for an hour and a half, we split the show into two parts. This is Part 1 – Layer 2 Data Centre Interconnection, where we talk about the problems and challenges. Next week we go into more esoteric topics: vCloud networking, OTV, VEPA and network appliances as virtual machines.

Guests

  • Scott Lowe – http://blog.scottlowe.org
  • Ivan Pepelnjak – http://blog.ioshints.info (@ioshints)

Layer 2 Data Centre Interconnect – open discussion

  • The challenges of extending an L2 network between data centre sites.
  • Outlining the threats of large Layer 2 spaces in terms of network vulnerability.
  • A quick review of vMotion traffic and its requirements.
  • Reviewing the impact of fault domains, and the L2 VLAN space as an uncontrollable fault domain.
  • It’s all about the application.
  • Latency, latency, and the impact on vMotion switchover.
  • Greg’s article on the Traffic Trombone and Ivan’s extensions to it.
  • It’s worth remembering that Network Load Balancers can be more effective than using vMotion in certain use cases.
  • vMotion is not for unplanned outages or DR. It’s more for planned outages or possibly workload balancing.
  • Debating whether we are able to QoS L2 traffic flowing between data centres, thus answering the question of whether we can guarantee levels of service for multiple hosts in a VLAN.
  • Some humour on Pseudowire over MPLS over GRE over IP. But it’s actually real (see the sketch after this list).
  • A review of the F5 EtherIP technology and whether it’s relevant to the solution.
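
For those wondering, Pseudowire over MPLS over GRE over IP really can be built today. Here is a minimal IOS sketch of an EoMPLS pseudowire riding a GRE tunnel between two DC edge routers; all addresses, interface names and the VC ID are hypothetical, and platform support varies:

    ! GRE tunnel between the DC edge routers, with MPLS enabled over it
    interface Tunnel0
     ip address 172.16.0.1 255.255.255.252
     tunnel source Loopback0
     tunnel destination 192.0.2.2
     mpls ip
    !
    ! Attachment circuit: the port being extended between sites.
    ! The xconnect targets the remote PE loopback (reachable via the
    ! tunnel) with VC ID 100.
    interface GigabitEthernet0/1
     xconnect 198.51.100.2 100 encapsulation mpls

The result is Ethernet inside MPLS inside GRE inside IP, which is exactly the joke in the show.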

Feedback

Follow the Packet Pushers on Twitter (@packetpushers | Greg @etherealmind | Ethan @ecbanks), and send your queries & comments about the show to [email protected].  We want to hear from you!

Subscribe in iTunes and RSS

You can subscribe to Packet Pushers in iTunes.

Media Player and MP3 Download


You can subscribe to the RSS feed or head over to the Packet Pushers website to download the MP3 file directly from the blog post for that episode. Also, subscription options for Zune, Boxee and a range of other podcatchers are listed on the website.

Greg Ferro
Greg Ferro is a Network Engineer/Architect, mostly focussed on Data Centre and Security Infrastructure, and more recently Virtualization. He has over 20 years in IT, working as a freelance consultant for a wide range of employers including Finance, Service Providers and Online Companies. He is CCIE #6920 and has a few ideas about the world, but not enough to really count. He is a host on the Packet Pushers Podcast, blogs at EtherealMind.com, and is on Twitter @etherealmind and Google Plus.


  • Ron Fuller

    Hi guys, great podcast about a technology near and dear to me, Data Center Interconnect. A few things regarding the conversation about Nexus 7000 capabilities that I'd like to clarify. Disclaimer: I work for Cisco.

    1 – MPLS is coming in the very near future (early 2011) and will NOT require a hardware swap
    2 – QoS on the Nexus 7000 is very robust regarding CoS marking and queuing, so I would love to better understand the scenario Ivan mentioned that the N7K couldn't handle.

    Can't wait to hear the rest of the conversation because the question Scott posed in the teaser is one I am asked on a regular basis.

    Thanks for the podcast and see you in the "ether"

  • http://blog.michaelfmcnamara.com Michael McNamara

    I enjoyed listening to you guys today… so much in common it's truly scary.

    I did want to comment about the discussion around enterprise-critical applications and the role that VMware can play. As I believe Greg mentioned, it's only after the product and/or solution has been purchased and the contract signed that many organizations think to involve IT. At that point we usually need to take some half-baked application and/or solution and make it enterprise-ready and highly available. As you guys know, VMware provides a lot of options in this respect. So much so now that we really only worry about the applications that live outside of our VMware environment.

    I look forward to hearing the rest of the show next week.

    Cheers!

  • http://www.mplsvpn.info shivlu jain

    awesome deliverables…

  • Nicolae Matau

    Hi,
    First I want to say: great podcasts, keep up the good work!
    Regarding this ‘hot’ subject of vMotion and the requirement to extend VM interface VLANs between 2 DCs:
    There is a small ‘hack’ which can possibly sidestep this problem, keeping the 2 DC layer 2 topologies separated and the moved VM running: ‘ip mobile arp’.
    This ‘ip mobile arp’ will create /32 routes (which can be redistributed into the aggregation/core IGP) for the moved VM in the foreign DC. Of course this means that you will have a /32 route for every VM which is in the ‘wrong’ datacenter, which doesn’t scale too well, but it works. (I never said it is a good solution!)
    As far as I know, this command is not supported on Nexus 7k. It works for 7200 routers for example.
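
    A minimal sketch of the relevant configuration (the addresses, interface and ACL are just examples):

        ! On the 'foreign' DC router: watch for hosts that belong elsewhere
        interface GigabitEthernet0/1
         ip address 10.2.0.1 255.255.255.0
         ip mobile arp access-group 10
        !
        ! Only hosts from the home DC subnet get mobile /32 routes
        access-list 10 permit 10.1.0.0 0.0.0.255
        !
        ! Inject the mobile /32 routes into the IGP
        router ospf 1
         redistribute mobile subnets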

    best regards,
    Nick
    CCIE #21821

    • Nicolae Matau

      forgot: it is supported on the 6500 too, starting from 12.2(33)SRA and 12.2SX.

    • Julio

      I always wondered why this feature was rarely brought up. I can certainly see this helping with the traffic trombone issue.

      I wanted to use this option a while back at a previous job — but it required proxy arp to be enabled. Unfortunately, that wasn't possible in our environment.

  • Ray Lucas

    Nice show guys. Loved some of those L2 failure scenarios you described :)

    Regarding your talk about QoS with L2 traffic. As Ivan indicated, the best place for the marking to occur – be that IP DSCP bits or 802.1Q CoS bits – would be in the server (the VM guest in the context of the discussion, I think). I've only seen that happen once.

    On Cisco switches for sure – and I imagine most other vendors – if you can write an ACL to identify traffic, you can re-mark either the DSCP bits or the 802.1Q CoS bits as it enters the switch. If you had a 1000v as your vSwitch, I suspect you could also do the marking there. Once either of those fields is marked, all the Cisco switches I'm aware of (all the way down to 2960s) will allow you to affect the drop probability of a packet based on either of those fields on L2 interfaces. Not as nice as shaping on a real router, but better than nothing.
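
    For illustration, ACL-driven re-marking at switch ingress looks something like this on a Catalyst (a sketch only: the ACL, class/policy names and DSCP value are hypothetical, and you'd want to verify the port your vSphere version uses for vMotion):

        ! Identify vMotion traffic (vMotion uses TCP 8000 by default)
        ip access-list extended VMOTION-TRAFFIC
         permit tcp any any eq 8000
        !
        class-map match-all VMOTION-CLASS
         match access-group name VMOTION-TRAFFIC
        !
        ! Re-mark matching traffic as it enters the switch
        policy-map MARK-INGRESS
         class VMOTION-CLASS
          set dscp af41
        !
        interface GigabitEthernet1/0/1
         service-policy input MARK-INGRESS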

    There are some switch platforms – Enhanced Services ports on 3750MEs, 4500s with Sup6s, the new ME models – which do support more advanced HQoS. I haven't looked at Nexus so can't comment there I'm afraid.

    Apologies if I've completely misunderstood the context of that part of the discussion and have just written something completely off topic!

    Cheers,
    Ray

    • Tony Brown

      Thanks Ray, you clarified a few things I was going to follow up on myself!

      Interesting point about marking in the server. I wouldn't generally trust markings from a server in case it gets compromised, but I'm not sure of the risks with VMware and 1000V.

      Great show guys! Look forward to part 2!

  • Mudasir Abbas

    It was a great discussion, guys. I missed the 2nd session and I cannot find it. Is there any way I can get it?

    Cheers,

  • Etherealmind

    It's a challenging problem, marking packets. You want to mark them at the server switch, which is usually the vSwitch, because the VMs are dynamic. Marking on the core switch means that you need very comprehensive QoS marking plans that apply to every port in the DC, whereas a policy that marks on the server port can be quite specific and easy to troubleshoot.

    The Cisco Nexus 1000v helps to address some of these issues, but it's an expensive solution and hard to sell to management, who barely understand virtualisation, much less the deep competency and operational issues therein.
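
    As a sketch of what marking at the vSwitch could look like on the 1000v (names are hypothetical; MARK-VM-TRAFFIC would be a normal NX-OS MQC policy that sets DSCP or CoS):

        ! Every VM attached to this port-profile inherits the marking policy
        port-profile type vethernet VM-DATA
         switchport mode access
         switchport access vlan 100
         service-policy type qos input MARK-VM-TRAFFIC
         no shutdown
         state enabled

    Because the policy lives in the port-profile, it stays with the VM when it moves, which is exactly the property you want here.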


  • http://www.infineta.com Haseeb Budhani

    Hi Greg,

    [Shameless plug] I'd like to comment on the brief conversation around WAN accelerator performance that you, Ivan and the other gent on the show had. We here at Infineta are building the industry's first 10Gbps WAN acceleration solution. Our system, which is in Beta now, can carry out highly efficient data reduction (using our own flavor of data-in-motion deduplication and standards-based compression) and TCP optimization at 10Gbps rates while incurring only low 10s of microseconds of latency.

    Our initial focus is on high-speed storage replication and big-data transfers over high-speed WANs, and we are finding a lot of enterprises expressing interest. We have also carried out some initial testing with VMDK transfers and with VMotion traffic, and have found that our solution is able to reduce the footprint of this type of traffic very aggressively.

    We'll be happy to share a bit more if there is interest.

    Thanks!

    — Haseeb.
