Application lifecycle management is a challenge for the cloud. While we’re used to managing applications in our own data centers, deploying, operating, and optimizing applications in cloud environments is a different animal. For example, making proper use of resources becomes crucial, because over-provisioning is no longer a valid answer. Excess CPU, RAM, and other resources result in a costly bill.
I recently caught a technical briefing from Nirmata, addressing this problem. Nirmata works with private, public, hybrid, and multi-cloud providers, with enough insight to deploy differently across different regions. Think of Nirmata as a unified interface that makes it easy for developers to stand up microservice-based applications across many different cloud environments.
Nirmata makes the point that there will always be developers and that there will always be operations. They don’t subscribe to the trendy theory that there is a super-tech who will be able to perform both of these roles. Nirmata interacts with both dev and ops. Visibility is a key component of their platform.
Developers focus on application definition in an agnostic manner. Operations focuses on governance and policies, which Nirmata facilitates via contracts.
For those trying to wrap their brain around the use case, Nirmata cited a school district needing to manage WordPress sites for ten schools. Nirmata enabled them to containerize the infrastructure and deploy it into the Microsoft Azure cloud, using Nirmata’s single interface to manage the details for both the developer and operations teams.
However, don’t mistake Nirmata as being just for the small potatoes stuff. They went on to cite a large company with a multitude of applications needing to be managed. Their point was that their tool is useful across a wide range of scenarios.
The Nirmata platform has three major prongs.
- Application delivery and management.
- Application-centric infrastructure (read “intent”).
- Scheduling via Kubernetes.
Nirmata describes itself as a SaaS-based control plane. In other words, the brains of the application management live in the cloud. For instance, imagine a blueprint for an application deployment. That blueprint is defined in, and lives in, Nirmata’s tooling in the public cloud. The blueprint is used, and the application is deployed. Once deployed, the application needs to be monitored; its performance will change over time as demands on it change. That information gathering and parsing happens in the cloud as well.
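To make the blueprint idea concrete, here is a minimal sketch of what an application blueprint might look like, modeled as plain data. All field names and values are hypothetical for illustration; this is not Nirmata’s actual schema.

```python
# Hypothetical application blueprint, sketched as plain data.
# Field names are illustrative only, not Nirmata's actual schema.
blueprint = {
    "application": "storefront",
    "services": [
        {"name": "web", "image": "storefront/web:1.4", "replicas": 3},
        {"name": "api", "image": "storefront/api:1.4", "replicas": 2},
        {"name": "db",  "image": "postgres:15",        "replicas": 1},
    ],
    # The control plane, not the blueprint, decides where this runs.
    "placement": {"cloud": "any", "policy": "production"},
}

def total_replicas(bp):
    """Sum the desired replica counts across all services."""
    return sum(svc["replicas"] for svc in bp["services"])

print(total_replicas(blueprint))  # prints 6
```

The key design point is that the blueprint describes *what* the application needs, while placement and scaling decisions stay with the control plane, which is what lets the same definition deploy to different clouds.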
Who is going to be a Nirmata customer?
Nirmata is going after at least two kinds of companies.
- Application builders whose products are consumed by the public.
- Enterprises deploying applications for internal consumption.
The first target is obvious: app builders need to be able to deploy new versions of their product quickly, and containers are a piece of that puzzle. However, enterprises are seeking agility and flexibility as well. As enterprises migrate their application deployment model from the traditional “schedule an outage, swing traffic, upgrade, test, swing traffic back” model to something less disruptive, containers and rolling upgrades become attractive. The catch is that managing these environments is complex, especially in multi-cloud environments.
Nirmata under the hood.
Kubernetes (K8s) is the most popular scheduling tool for containers, used by 64% of respondents in a recent survey. But the K8s interface isn’t really its strength. It’s just not easy to use, especially for new adopters. Nirmata claims to bring ease of use to K8s, highlighting three major things they bring to the container orchestration party.
- Simplified app delivery and management.
- Automated Kubernetes management.
- Elastic management of cluster hosts.
Ah ha! Taking a step back then, Nirmata is “orchestrating the orchestrator.” Let’s walk through the high level steps you would take using Nirmata to provision infrastructure resources.
- Onboard your cloud providers. These are clouds that Nirmata can work with to deploy containers.
- Create host groups. These hosts are used to deliver and deploy applications. Nirmata does not perform patching or maintenance of these hosts, as there are other tools on the market such as Ansible and Chef that can do this. However, Nirmata can integrate with these tools.
- Create clusters. Clusters are collections of host groups, and a K8s construct.
- Create policies. Operations teams use these. Policies are key in Nirmata because they decouple applications from infrastructure. They control infrastructure behavior and, among other things, security. For example, a security policy can govern who can deploy apps on a given cluster or host group. Policies also impact upgrades and other operational tasks.
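The hierarchy in the steps above can be sketched in a few lines: cloud providers contain host groups, clusters collect host groups, and policies decide who may deploy where. Everything here is an illustrative model, with hypothetical names, not Nirmata’s actual object model or API.

```python
# Illustrative model of the onboarding hierarchy: providers -> host
# groups -> clusters, with policies gating deployment. Hypothetical only.
from dataclasses import dataclass

@dataclass
class HostGroup:
    name: str
    provider: str        # e.g. "azure" or "aws", onboarded earlier

@dataclass
class Cluster:
    name: str
    host_groups: list    # clusters are collections of host groups

@dataclass
class Policy:
    cluster: str
    allowed_teams: set   # who may deploy apps on this cluster

def can_deploy(policies, team, cluster_name):
    """Return True if any policy grants this team access to the cluster."""
    return any(p.cluster == cluster_name and team in p.allowed_teams
               for p in policies)

prod = Cluster("prod", [HostGroup("east-hosts", "azure")])
policies = [Policy("prod", {"ops", "release-eng"})]

print(can_deploy(policies, "ops", "prod"))      # prints True
print(can_deploy(policies, "interns", "prod"))  # prints False
```

The point of the model is the decoupling: applications reference a cluster by name, and the policy layer, not the application, determines which infrastructure and which people are involved.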
Nirmata described another process of spinning up a dev/test environment. Imagine a CI/CD pipeline where Jenkins calls out to Nirmata, requesting new infrastructure be stood up. Nirmata deploys the dev/test infrastructure, and now containers can be deployed into the newly provisioned dev environment where testing can proceed.
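The CI/CD hand-off described above can be sketched as a simple flow. In reality Jenkins would call out to a control-plane API; here the call is simulated with a stub function so the sequence is visible. All function names are hypothetical.

```python
# Sketch of the CI/CD hand-off: the pipeline asks the control plane
# for a dev/test environment, then proceeds once it is ready.
# provision_dev_environment is a stand-in for a real API call.
def provision_dev_environment(pipeline_id):
    """Simulated control-plane call returning a fresh dev/test env handle."""
    return {"env": f"dev-{pipeline_id}", "status": "ready"}

def run_pipeline(pipeline_id):
    env = provision_dev_environment(pipeline_id)
    if env["status"] != "ready":
        raise RuntimeError("environment failed to provision")
    # Containers would now be deployed into env["env"] and tests run.
    return f"tests ran in {env['env']}"

print(run_pipeline(42))  # prints: tests ran in dev-42
```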
What else does Nirmata do?
Recall that a major Nirmata feature is environment optimization. This is all governed by policy: based on performance and other constraints, different environments are stood up. Dev environments might have different performance requirements than UAT or production, and Nirmata can be configured to know the difference.
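A per-environment policy lookup along these lines is easy to picture. The tiers and resource values below are invented for illustration and are not Nirmata’s policy format; the idea is simply that the same application lands differently depending on which tier’s policy applies.

```python
# Illustrative per-environment policies: same app, different footprint
# per tier. Values are hypothetical, not Nirmata's actual format.
ENV_POLICIES = {
    "dev":  {"cpu_millicores": 250,  "memory_mb": 512,  "replicas": 1},
    "uat":  {"cpu_millicores": 500,  "memory_mb": 1024, "replicas": 2},
    "prod": {"cpu_millicores": 1000, "memory_mb": 2048, "replicas": 3},
}

def resolve_resources(environment):
    """Look up the resource policy for an environment tier."""
    try:
        return ENV_POLICIES[environment]
    except KeyError:
        raise ValueError(f"no policy defined for {environment!r}")

print(resolve_resources("dev")["replicas"])   # prints 1
print(resolve_resources("prod")["replicas"])  # prints 3
```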
Today, optimization is within a single cloud provider, but optimizing in the context of multiple clouds to choose from is coming. Also, optimization is performance focused today, but other metrics, such as cost, are on the roadmap.
Operationally, Nirmata shows operators the state of the environment in real time. Nirmata gives you history and can display major ephemeral infrastructure exception events, such as container SIGKILLs and logged errors. This makes it possible to find out why a container died, which is helpful to developers needing to understand what went wrong in order to update their code.
The data gravity problem is being worked on by Nirmata as well. Today, snapshotting a workload and moving it is functional. However, Nirmata is working on cross-cloud storage migrations. To them, a storage migration is just an integration event, so it’s something they expect to be able to handle.
The view from the hot aisle.
The complexity of a solution like Nirmata is massive, as is true with any orchestration product. Complexity brings fragility, and I couldn’t help but wonder just how fragile the canned API integrations between Nirmata and several public and private cloud environments might be.
A key Nirmata message is about speed and agility. I agree that Nirmata abstracts away many of the fussy deployment details you have to know if you’re standing up AWS or Azure resources directly. I also agree that the most common complaint about Kubernetes is that it’s a bit hard to use, especially for neophytes.
Therefore, Nirmata is clearly filling a need. If you put Nirmata in the middle of your infrastructure management, you have a single interface and policy manager that allows you to throw resources at it, and let it figure out the rest. But what happens when the integrations break down? What happens to the promise of speed and agility when AWS changes an API, and all of a sudden your AWS public clouds are unmanageable, at least temporarily?
Another concern is that of dependency. If your organization goes all-in with Nirmata, there are operational wins, to be sure. You get a single application deployment interface across both development and operations groups. However, now you’re tied to Nirmata. Is that a trade-off you’re willing to make? I don’t have a big problem with vendor lock-in, especially for organizations that are moving toward shorter refresh cycles.
I suspect buying into Nirmata means re-tooling operational processes, and processes are enormously hard to change, particularly in larger environments with lots of human contributors. But still, Nirmata is managing infrastructure that remains manageable by other tools. If you backed away from Nirmata at some point, you could still manage your infrastructure. You aren’t locked out if you disengage from Nirmata’s comforting embrace.
Tools like Nirmata might also help reduce other kinds of vendor lock-in, for example being tied into unique AWS processes. Since various clouds are abstracted away by Nirmata, you end up with flexibility to move between environments without necessarily having to be an expert on each individual environment. In theory, you could move workloads pretty easily without screwing it up.
We’re going to see many of these “orchestrator of orchestrators” products, especially as ease of use and speed of application deployment become critical to businesses. Nirmata is in the right place at the right time, especially for leading edge companies moving into containers.