While cruising to another data center in our Galaxy-class starship, we've hit a storage scaling issue. There's no way to squeeze any more performance out of our antimatter storage array! The chief engineer suggests that we perform a saucer separation to decouple capacity from performance, letting us scale the two requirements individually. Release the docking latches, and let's hope this maneuver can save us!
This show focuses on building data center storage that decouples performance from capacity, such as with PernixData's FVP product. The idea is to understand why you would do this, what the benefits are, and where this technology is going. This approach is often referred to as server-side caching or acceleration. Joining the Datanauts for this show is Satyam Vaghani, co-founder and CTO of PernixData. Follow Satyam on Twitter @SatyamVaghani.
Some of our brilliant and insightful questions…
- Decoupled storage involves using flash or memory in the compute node and capacity drives in a storage array. Let’s go deeper into this architecture and why it was created.
- How does decoupled storage differ from the hyper-converged architectures we discussed in show #1?
- At a high level, what does the path of an IO look like when we introduce an acceleration layer within the compute node? (For one simplified view, see the sketch after this list.)
- Let’s go to the virtual whiteboard and build a data center powered by decoupled storage. What does it look like?
- How do people introduce acceleration technologies into the data center? I'm thinking about day 1 operations: installing, configuring, and beginning to leverage acceleration.
- What sort of scale is possible using decoupled storage architectures? Is this meant for a single cluster of compute nodes, multiple clusters, or perhaps even across data centers?
- We talked about hybrid and all-flash options for storage arrays in Datanauts 005. Do these approaches negate the need for a decoupled storage solution?
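To make the IO path question above concrete, here is a minimal Python sketch of one possible flow, assuming a write-through policy. The `FlashCache` and `BackingArray` names are hypothetical stand-ins, not PernixData's actual FVP implementation; a real product also layers on fault tolerance, clustering, and write-back modes.

```python
# Hypothetical sketch of the IO path through a host-side acceleration
# layer. All names here are illustrative assumptions, not any vendor's
# actual API.

class BackingArray:
    """Stands in for the capacity tier (the storage array)."""
    def __init__(self):
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba, b"\x00" * 512)

    def write(self, lba, data):
        self.blocks[lba] = data


class FlashCache:
    """Stands in for the performance tier on the compute node
    (server-side flash or RAM), using a write-through policy."""
    def __init__(self, array):
        self.array = array
        self.cache = {}  # lba -> data; eviction elided for brevity

    def read(self, lba):
        # Cache hit: serve from local flash/RAM, no array round trip.
        if lba in self.cache:
            return self.cache[lba]
        # Cache miss: fetch from the array, then populate the cache.
        data = self.array.read(lba)
        self.cache[lba] = data
        return data

    def write(self, lba, data):
        # Write-through: acknowledge only after the array persists the
        # block, but keep a local copy so later reads are accelerated.
        self.array.write(lba, data)
        self.cache[lba] = data


array = BackingArray()
cache = FlashCache(array)
cache.write(42, b"hello" + b"\x00" * 507)
assert cache.read(42).startswith(b"hello")  # served from the cache
```

The point of the sketch is the decoupling itself: performance lives in the compute node's cache tier, capacity lives in the array, and each can be scaled without touching the other.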