Heavy Networking 566: Inside Intel’s Strategy To Unlock Data Center Performance (Sponsored)

Ethan Banks

Drew Conry-Murray

Greg Ferro


Intel has a substantial position in networking silicon and in technologies that accelerate network functions. You might be familiar with the chip maker’s 2019 acquisition of Barefoot Networks, which brought Barefoot’s programmable switch ASIC, Tofino, into Intel’s stable. Intel is also a significant contributor to the P4 language for programming ASICs.

There’s also Intel’s investment in DPDK, which optimizes x86 CPUs for network use cases; plus a product line that spans NICs, SmartNICs, FPGAs, and more.

On today’s podcast, sponsored by Intel, we dive into Intel’s portfolio to understand how it unlocks the compute power of your data center. Our guest is Mike Zeile, Data Center Group Vice President and General Manager of End-to-End Network Applications at Intel.

We discuss:

  • The widespread adoption of the P4 language
  • Extending P4 from switches to SmartNICs, appliances, and software pipelines
  • Intel’s progression with the Tofino ASIC
  • Leveraging SmartNICs for acceleration
  • Using eBPF and XDP for integrated telemetry (a minimal sketch follows this list)
  • A glimpse into the future of silicon photonics
  • More
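
To make the telemetry item above a little more concrete: the sketch below is not from the episode and is not Intel code. It is a minimal, hypothetical XDP program (the map name proto_counts and the interface eth0 are placeholders of our own) that counts IPv4 packets per IP protocol number in a per-CPU BPF map that a user-space tool could poll.

```c
/*
 * Hypothetical sketch only -- not from the episode.
 * A minimal XDP program that counts IPv4 packets per IP protocol number
 * in a per-CPU BPF map, which a user-space tool can read as telemetry.
 *
 * Build:  clang -O2 -g -target bpf -c xdp_proto_count.c -o xdp_proto_count.o
 * Attach: ip link set dev eth0 xdpgeneric obj xdp_proto_count.o sec xdp
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 256);              /* one counter per IP protocol number */
    __type(key, __u32);
    __type(value, __u64);
} proto_counts SEC(".maps");

SEC("xdp")
int count_protocols(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Bounds checks keep the BPF verifier happy before touching headers. */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;

    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;                   /* this sketch counts IPv4 only */

    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end)
        return XDP_PASS;

    __u32 key = iph->protocol;             /* e.g. 6 = TCP, 17 = UDP */
    __u64 *count = bpf_map_lookup_elem(&proto_counts, &key);
    if (count)
        (*count)++;                        /* per-CPU map, so no atomics needed */

    return XDP_PASS;                       /* observe only; never drop traffic */
}

char _license[] SEC("license") = "GPL";
```

Something like `bpftool map dump name proto_counts` (or any libbpf-based agent) can then read the counters out of the kernel, which is the general shape of the eBPF/XDP telemetry idea listed above.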

Show Links:

Better Connectivity Means Better Experiences – Intel

Comments: 5

  1. Ron

    The only method to run SD-WAN at speed and scale is to use Intel. Intel Ethernet controllers combined with Intel processors are my hardware of choice!

  2. Ben Smith

    So many false claims; I’m not sure whether that’s due to ignorance or on purpose.

    InfiniBand is not complex to set up. It works out of the box, and it is more cost effective than Ethernet.

    Ethernet performance will not be on par with InfiniBand. InfiniBand scales much better and delivers much lower latency, RDMA, and compute acceleration.

    Intel is indeed a great place for networking. You can check with the NetEffect, Fulcrum, QLogic, and Cray Aries teams. Oh, and Omni-Path. How do you end a networking product? Give it to Intel and count to five.

    • Greg Ferro

      I believe I said that InfiniBand has a role, but that Ethernet will win in the end. Yes, InfiniBand has better technical performance, but it’s not as scalable and it’s much more expensive than RoCE/MPI over an IP fabric.

      I have worked with an HPC site and replaced InfiniBand with an IP fabric. The budget was reduced by 90%, it has scaled to 2,000 servers so far, and speeds run up to 400G. It’s possible, but it’s not for every HPC site; many feel that IB fits their needs best, though I wonder whether they have done a full evaluation.

      • Ben Smith

        InfiniBand is actually more cost effective than Ethernet when you compare the same data speeds. Ethernet is more expensive; you pay for a server in every Ethernet switch, for example, which you do not with InfiniBand. If you want to pay more and get less, Ethernet is your answer.

        I’m not sure how you compare scalability, but while you can run a single application across a large InfiniBand supercomputer, you cannot do that with Ethernet.

        If you have worked with a site to move them from higher performance to lower performance with a bigger price tag, they should be happy. Many others are doing the opposite.

        “Ethernet will win in the end” means nothing. The Ethernet of today is definitely not the Ethernet of 10 years ago, so is this just because it is called Ethernet? “No one ever got fired for buying Cisco” is yet another similar claim. You could have mentioned that to Intel, or to the large public cloud vendors.

  3. Cory C.

    WRT silicon photonics: if the lasers are built into the ASIC, will manufacturers be able to mix and match laser types (ZR, LR, ER, SR), or will there be an optical-to-optical transceiver or some such? What about media count per interface? (400GBASE has 1-, 4-, 8-, and 16-strand varieties.)
