This week, the Packet Pushers talk about storage network design mostly in the context of converged infrastructure. Guests J Metz, Chris Wahl, and Russ White do all the heavy lifting of those storage-related packets from one end of the data center to the other.
When traditional network engineers think about designing for storage, there are a few things that might pop into their heads. Let’s discuss the importance of each.
- Switches with larger buffers.
- Jumbo frames.
- Dedicated Ethernet switches (i.e. nothing on them but storage traffic).
- Packet loss intolerance.
- 10GbE interfaces.
- Is QoS for storage traffic a thing?
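To put a number on the jumbo-frames item above, here is a back-of-the-envelope sketch (my own arithmetic, not figures from the episode) of how much per-packet header overhead drops when moving an iSCSI workload from a standard 1500-byte MTU to a 9000-byte jumbo MTU, assuming typical Ethernet + IP + TCP header sizes:

```python
# Rough payload efficiency of standard vs. jumbo frames for TCP/IP storage.
# Header sizes are the common baseline values (no options, no VLAN tag).
ETH_OVERHEAD = 18   # Ethernet header + FCS, bytes
IP_HDR = 20         # IPv4 header, bytes
TCP_HDR = 20        # TCP header, bytes

def payload_efficiency(mtu: int) -> float:
    """Fraction of each on-wire frame that is actual storage payload."""
    payload = mtu - IP_HDR - TCP_HDR
    return payload / (mtu + ETH_OVERHEAD)

print(f"1500-byte MTU: {payload_efficiency(1500):.1%} payload")
print(f"9000-byte MTU: {payload_efficiency(9000):.1%} payload")
```

The gain is real but modest (a few percent of throughput); the bigger operational cost is that every hop in the path must agree on the larger MTU, which is part of why jumbo frames are a design decision rather than a default.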
When considering IP storage, what are reasonable bandwidth and delay characteristics of the network? In other words, how far apart can servers be from their storage? And how much pipe should they have?
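To frame the distance question, here is a small sketch (back-of-the-envelope math, not figures from the episode) of how fiber distance alone adds round-trip latency, assuming roughly 5 microseconds of one-way propagation delay per kilometer of fiber:

```python
# Propagation-delay budget for IP storage over distance.
# Assumes ~5 us of one-way delay per km of fiber (light travels at
# roughly 200,000 km/s in glass); real paths add serialization and
# queuing delay on top of this floor.
US_PER_KM = 5.0

def rtt_us(km: float) -> float:
    """Round-trip propagation delay in microseconds for a fiber run."""
    return 2 * km * US_PER_KM

# A synchronous write cannot complete until the acknowledgment returns,
# so distance adds directly to every write's latency.
for km in (1, 10, 100):
    print(f"{km:>4} km -> {rtt_us(km):>6.0f} us added per synchronous write")
```

At 100 km the fiber alone adds a millisecond per round trip, which is why synchronous storage traffic tends to keep servers and arrays close together.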
While IP storage is fairly common, FCoE is less so. What is the future of FCoE?
- How is designing for an FCoE network different from designing for IP storage?
- Has FCoE multihop improved customer adoption?
Often, centralized storage is designed in a vacuum, with little input from the networking team. What should storage engineers be asking networking engineers, and vice versa?
- How critical are fast-failover mechanisms when it comes to storage traffic?
- Network engineers think in terms of link aggregation groups, but iSCSI has features like MPIO that can be used to achieve link load balancing. How important is it for network engineers to understand how storage traffic is balanced?
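The LAG-versus-MPIO distinction in the last point can be illustrated with a toy simulation (a hypothetical sketch; the names and hash function are illustrative, not any vendor's algorithm): a LAG hashes each flow's addresses and ports, so a single iSCSI session stays pinned to one member link, while MPIO round-robins I/O across independent paths and uses all links even between one host and one target:

```python
# Toy comparison: LAG flow hashing vs. iSCSI MPIO round-robin.
LINKS = 2

def lag_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """A LAG picks one member link per flow by hashing the flow tuple,
    so one iSCSI session (a single TCP flow) always lands on one link."""
    return hash((src_ip, dst_ip, src_port, dst_port)) % LINKS

def mpio_links(num_ios: int, paths: int = LINKS) -> list[int]:
    """MPIO distributes I/O requests across independent paths,
    here with a simple round-robin policy."""
    return [i % paths for i in range(num_ios)]

# One iSCSI flow through a LAG: every packet takes the same link.
flow = {lag_link("10.0.0.1", "10.0.0.2", 51000, 3260) for _ in range(8)}
print("LAG links used by one flow:", flow)   # a single link

# The same workload under MPIO: I/Os alternate across both paths.
print("MPIO links for 8 I/Os:", mpio_links(8))
```

This is why a host with two NICs in a LAG can still see only one link's worth of storage throughput, while MPIO can fill both, and why it matters that network and storage engineers understand which mechanism is actually balancing the traffic.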
More from guest Chris Wahl:
- Blog: WahlNetwork.com
- Twitter: @ChrisWahl
- Book: Networking for VMware Administrators
- Pluralsight Videos: Author Page for Chris Wahl
- VUPaaS: Website