CloudPhysics wants to put analytics tools in the hands of IT to help them optimize the data center by combining cloud-based storage and analytics software. CloudPhysics presented at Tech Field Day this June.
The company uses a virtual appliance, called the Observer, to gather highly granular data from VMware vCenter environments. Those data are shipped to EC2-based storage and poured into a data lake, where CloudPhysics’ software correlates and analyzes the information. Customers can run ad hoc queries against the data as well as create alerts and reports.
At present, CloudPhysics only runs on VMware, but the company does plan to expand to other platforms.
Because the Observer is plugged into vCenter, it gets more information than just virtual machine statistics. CloudPhysics can see the physical hosts, how they’re clustered, and how they access the network and data stores.
It can also capture configuration details, including virtual CPU and virtual RAM data, server capacity, storage I/O characteristics, and more.
Observer captures data every 20 seconds, which provides a detailed picture of actual resource utilization, both in real time (or near real time) and via historical views (hourly, daily, monthly, etc.). Customers can also compare metrics such as peak utilization vs. median utilization, and see whether configurations match actual resource use.
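To make the peak-vs-median comparison concrete, here is a minimal sketch of the kind of rollup described above. The sample values are invented for illustration; CloudPhysics' actual aggregation pipeline is not public.

```python
import statistics

# Hypothetical 20-second CPU utilization samples (percent) for one host.
# At 20-second granularity, a full hour would be 180 samples; this list is
# truncated for brevity.
samples = [35, 40, 38, 90, 42, 37, 41, 88, 39, 36]

peak = max(samples)
median = statistics.median(samples)

# A large gap between peak and median suggests bursty load: sizing to the
# median would starve bursts, while sizing to the peak wastes capacity.
print(f"peak: {peak}%, median: {median}%")
```

The point of fine-grained sampling is that coarse averages hide exactly this kind of burstiness.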
With this information at hand, customers can then optimize the data center to meet workload requirements. By right-sizing resources, customers can make more informed decisions about balancing cost and performance.
CloudPhysics also positions its product as a way to estimate the costs of moving an application or workload to the public cloud, because you have actual resource usage measurements. Other use cases include capacity planning and troubleshooting.
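The cloud cost-estimation use case boils down to matching measured peak demand against a price list. The sketch below shows the general idea with hypothetical instance types and prices (not real AWS figures, and not CloudPhysics' actual model).

```python
# Illustrative only: estimate a monthly public-cloud bill from measured
# usage. Instance names and hourly prices below are assumed for the example.
INSTANCE_TYPES = [
    # (name, vCPUs, RAM GB, hourly price in USD), sorted smallest first
    ("small",  2,  4, 0.05),
    ("medium", 4,  8, 0.10),
    ("large",  8, 16, 0.20),
]

HOURS_PER_MONTH = 730

def estimate_monthly_cost(peak_vcpus: float, peak_ram_gb: float) -> tuple[str, float]:
    """Pick the cheapest instance type that covers measured peak demand."""
    for name, vcpus, ram, price in INSTANCE_TYPES:
        if vcpus >= peak_vcpus and ram >= peak_ram_gb:
            return name, price * HOURS_PER_MONTH
    raise ValueError("peak demand exceeds largest instance type")

# A VM provisioned with 8 vCPUs but measured peaking at 3.2 vCPUs and 6 GB
# of RAM fits a "medium" instance: sizing from actual usage, not config.
print(estimate_monthly_cost(3.2, 6))  # → ('medium', 73.0)
```

The value of measurement-based estimates is visible in the example: a quote based on the VM's configured 8 vCPUs would have priced the workload at the "large" tier, roughly double the cost.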
CloudPhysics also lets you compare your metrics and benchmark your utilization against a global—and anonymized—data set. This is a neat feature if for no other reason than to serve as a sanity check: for instance, to see if you’re over- or under-provisioning compared to your peers.
The service is available for businesses via the Premium Edition. A Partner Edition targets MSPs and resellers. There’s also a free edition with limited functionality that lets potential customers get a taste of the service.
Licensing for the Premium Edition is based on the number of hosts under management. The Partner Edition is sold per seat, with an aggregate number of users. CloudPhysics did not provide specifics on per-host or per-seat pricing.
A potential sticking point regarding price is how storage costs will get passed along to customers. EC2 is cheap, but it’s not free. CloudPhysics says it keeps customer data indefinitely, so it seems worthwhile to inquire about this aspect of the service cost.
One of the issues that a few Tech Field Day delegates had with another presenter was that its product simply pulled information from one system into another (Nutanix to Microsoft SCOM). The delegates wanted to see more value-add via correlation and analytics.
While CloudPhysics also builds its service around gathering and moving information from one system to another, correlation and analytics are what the platform is all about.
Of course, CloudPhysics can only be as good as the data it collects. As mentioned, it’s presently VMware-only. That’s a sensible first choice, but there are lots of other data sources this service could tap. In particular, it could do a lot more around networking, which is an essential element for capacity planning and performance monitoring/optimization.
Last but not least, you have to be comfortable exporting intimate data center details into the cloud. As more organizations adopt SaaS and IaaS, I think this has become a rather faint objection, but due diligence requires you to check that CloudPhysics’ security controls and anonymization are up to par.
The TFD site has five videos of the CloudPhysics presentation, and I recommend taking a look to get more details.