A transcript of the audio that you can play above follows.
Welcome to Briefings In Brief, an audio digest of IT news and information from the Packet Pushers, including vendor briefings, industry research, and commentary. I’m Ethan Banks, it’s June 5th, 2018, and here’s what’s happening.
I had a briefing with DriveScale on May 31st. Who’s DriveScale? DriveScale is a storage company whose main product is software to centrally manage and share disk for clustered applications and big data environments. File under buzzword “software composable infrastructure.”
A central DriveScale chassis filled with disk is attached to the network in the same rack as the compute that needs it. DriveScale software makes sure that all compute nodes in the rack have as much storage as they need, but does away with the stranded-capacity waste that direct-attached storage sometimes creates in these environments.
In this briefing, DriveScale announced Software Composable Infrastructure for Flash, which they claim is a market first. Flash storage is all about high bandwidth and low latency, and the latest NVMe flash drives have ferocious network-filling capability. If you think serving storage over Ethernet is a performance compromise, think again.
Quoting the official DriveScale press release,
“DriveScale’s SCI for Flash is available as software, capable of being deployed on a variety of hardware systems. These include DriveScale’s own Composable Flash System and Western Digital’s Ultrastar Serv24-HA. The system combines the speed of up to 24 dual-ported NVMe™ drives, four 100 Gbit Ethernet ports and a dual-server architecture to deliver IT operations high performance and high availability flash storage in a composable system.”
These systems are sometimes known as EBOFs, or Ethernet-attached Bunch of Flash. DriveScale described their EBOF to me as containing 2 controllers, each with 2x100G Ethernet ports. That’s 400G of pipe from the ToR switch to the EBOF. The controllers handle converting Ethernet to NVMe. DriveScale sells the empty chassis, allowing their channel partners to populate the chassis with NVMe drives of their choice. Again, remember that DriveScale is primarily a software company.
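To put those numbers in perspective, here’s a back-of-the-envelope sketch. The drive and port counts come from the briefing; the per-drive throughput figure of roughly 3 GB/s is my own assumption for a recent NVMe drive, not a DriveScale spec:

```python
# Back-of-the-envelope: can 24 NVMe drives fill a 400 Gbit/s pipe?
# Chassis figures are from the briefing; per-drive throughput is assumed.

DRIVES = 24
DRIVE_GBITS = 3 * 8           # ~3 GB/s per NVMe drive ~= 24 Gbit/s (assumed)
NETWORK_GBITS = 2 * 2 * 100   # 2 controllers x 2 x 100G Ethernet ports

aggregate = DRIVES * DRIVE_GBITS
print(f"Aggregate drive bandwidth: {aggregate} Gbit/s")
print(f"Network pipe to the ToR:   {NETWORK_GBITS} Gbit/s")
print(f"Drives oversubscribe the network {aggregate / NETWORK_GBITS:.2f}x")
```

Even with conservative per-drive numbers, the drives collectively outrun the 400G uplink, which is exactly why a single 100G channel is no longer a bottleneck worth worrying about.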
In case your mind snapped at the thought of a bunch of disks being able to fill 100Gbps Ethernet channels, have a listen to the Datanauts podcast episode 111, where we talk with storage expert J Metz about NVMe and its network impact.
To connect to the EBOFs, two communications protocols are supported. Most environments are going to use iSCSI. RDMA over Converged Ethernet (RoCE) is also supported, but obviously you need an RDMA-capable fabric for that, something Mellanox might have sold you. RoCE is the “ultimate performance” option. Looking down the road, NVMe-over-TCP is probably the final answer according to DriveScale, but NVMe-over-TCP is not standardized yet.
One of the drivers for this solution is cost optimization. Flash is expensive, still as much as 10x the cost of spinning rust. DriveScale SCI allows flash to be sliced, so that, in effect, partial drives can be presented to compute nodes, rather than consuming an entire drive as happens when the drives are directly attached.
Other nifty capabilities include on-the-fly recomposability, keeping in step with the ephemeral container world. Public key authentication is centrally managed and used both to encrypt drives and to connect drives to compute nodes. Central PKI management means that your encrypted drive data can move around as demanded by applications and still be decrypted.
Predictive analytics are also part of the solution, helping you to determine what data should go where to optimize storage system performance.
DriveScale was also just awarded the Cool Vendor designation in the cloud infrastructure area by Gartner, which underscores the flexibility of DriveScale’s approach, a central requirement of cloud environments.
DriveScale is happy to take your order for their SCI for Flash today, with order fulfillment expected starting July 15, 2018.