In the beginning, our applications ran inside our data centers. Our users came to our apps via our data centers. We knew roughly who our users were, and how we would let them in. We were in control, because the app lived THERE, and the users were HERE, THERE, and IN THIS ONE OTHER PLACE. And that was it.
Control felt good. We liked it. We even bragged about it to our friends in rigorously detailed Visio diagrams and lunch ‘n’ learn whiteboard adventures, the bracing smell of markers mingling with our manager-bought pizza and Diet Coke.
And so we managed our perimeter firewalls, adding layer seven inspections when the hackers got smarter and faster. We could define acceptable behavior for our users and apps, because everything stayed where it was supposed to stay. After all, we installed the app on the server, or the cluster if it was a big one. Or behind the load-balancer if we needed to be fancy. But by the trident of Neptune, we knew our app, and we understood our app.
So it was that we trusted the perimeter firewall to keep us safe, because our policy rules were mighty in enforcing what we knew. Besides, our service contracts were expensive. And mostly, things worked out. Mostly. Yes, we had some problems inside the perimeter. But even inside, things were mostly okay, covered by endpoint protection or a firewall between environments. Ah, the good old days.
The modern data center insists that you know nothing, at least not about applications. No longer does the app live THERE, on that server, in that rack. Now, it lives on that server, in that rack…and also on these other servers in these other racks, at least sometimes. Oh, and part of it lives in the public cloud, because reasons.
And thus, our applications are now THERE, THERE, and THERE, but sometimes OVER HERE, and fairly often UP THERE. And our applications aren’t even applications. They are a rag-tag bunch of motley attack surfaces (sorry, microservices) cobbled together into this sprawling monster of location unpredictability.
What have we done?
We’ve changed infrastructure. We used to carefully plan a series of hardware configuration events, and execute them lovingly to deliver monolithic applications in over-provisioned comfort.
Now, we’re turning infrastructure into an ocean that developers throw applications into. Yep. Infrastructure is an ocean of CPUs, fans, memory, storage, and networking. Apps get thrown in there, and they float on the ocean of metal, buoyed up by abstraction layers and, presumably, actual capacity.
How do we secure the apps in the ocean?
One answer is certain: not via perimeter firewalls. After all, what perimeter are we really talking about? As soon as an application is spread all over, the concept of a perimeter gets hazy.
Besides, if we loosely define “perimeter” as the barrier between the big, bad Internet and our precious stuff, we’re only talking about 20% of attacks. The other 80% of the attacks our networks experience come from within, according to Alan S. Cohen, Chief Commercial Officer at Illumio.
Microsegmentation tackles the challenge of securing applications spread all over. Generally, microsegmentation looks like:
- A policy language that supports abstract groupings of objects — not just IPs and ports.
- A central security policy written in this language describing who can talk to whom.
- A policy computation engine that creates policies on a host-by-host basis.
- A policy installer that puts the policy in place on the hosts.
- A visualization engine that shows flows between network endpoints.
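To make the first two bullets concrete, here is a minimal sketch of what a label-based policy might look like. Everything in it — the rule format, the label names, the `matches` helper — is invented for illustration; it is not any vendor’s actual policy syntax. The point is simply that rules reference abstract groupings rather than IPs and ports alone:

```python
# Illustrative only: a made-up, label-based policy model.
# Rules select workloads by abstract labels, not by address.
policy = [
    # who can talk to whom, expressed as label selectors
    {"from": {"role": "web"}, "to": {"role": "app"},      "port": 8080},
    {"from": {"role": "app"}, "to": {"role": "database"}, "port": 5432},
]

def matches(labels, selector):
    """True if a workload's labels satisfy every key in the selector."""
    return all(labels.get(k) == v for k, v in selector.items())

# A workload is described by its labels, not its IP.
web_server = {"role": "web"}
app_server = {"role": "app"}

allowed = any(
    matches(web_server, rule["from"]) and matches(app_server, rule["to"])
    for rule in policy
)
print(allowed)  # web tier is permitted to reach the app tier
```

Because the policy never mentions an address, a workload can move, scale out, or respawn elsewhere and the same rules still apply — which is exactly the property an ocean of infrastructure demands.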
This is a tricky problem to solve, as the applications floating around on our infrastructure ocean are sometimes ephemeral. A container running a microservice can come online, take requests for minutes or even seconds, and then disappear again.
By the same token, there are workloads in the public cloud, or spread across multiple public clouds. Each element of the infrastructure ocean presents different implementation details. Placing standalone firewalls, even virtual ones, in just the right places everywhere is hard.
What Illumio has done to address these challenges is place the filtering right out at the edge. Rather than traffic leaving the workload and being steered around to a security device, the filtering happens right at the workload itself.
There are two key elements of the Illumio Adaptive Security Platform.
- Virtual Enforcement Node (VEN). The VEN is an agent that runs on Linux or Windows workloads. The VEN observes traffic flows to and from the workload, and it programs the local iptables or Windows filtering tools to enforce policy.
- Policy Compute Engine (PCE). The PCE manages all VEN-equipped workloads. The PCE gathers all traffic information from the VENs, and thus understands who is talking to whom. This information is used by the Illumination flow visualization tool. The PCE also translates high-level policy into specific rules for individual VEN-equipped workloads. The PCE can also determine that two workloads should be communicating via IPsec, and assist in managing the Windows or Linux IPsec implementations.
The VEN makes it possible for Illumio to manage Windows and Linux workloads no matter where they are. Managing the local iptables or Windows filtering means that enforcement happens right at the application edge, and not through some centralized transit point that traffic must be engineered to pass through.
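As a toy sketch of what “enforcement at the edge” means in practice: a per-host agent receives its computed rules and renders them into local firewall commands, with no traffic detour through a central chokepoint. The rule format and command shapes below are assumptions for this example — a real agent like the VEN is far more sophisticated — but the iptables idiom (default-deny, then explicit accepts) is standard:

```python
# Illustrative sketch of edge enforcement: turn a per-host rule
# list into iptables commands run locally on the workload. The
# rule dictionary format is invented; it is not Illumio's output.
def render_iptables(rules):
    cmds = ["iptables -P INPUT DROP"]  # default-deny inbound traffic
    for r in rules:
        # one explicit ACCEPT per allowed source/port pair
        cmds.append(
            "iptables -A INPUT -p tcp -s {src} --dport {port} -j ACCEPT"
            .format(src=r["src"], port=r["port"])
        )
    return cmds

# Rules this particular host received from the central engine.
host_rules = [{"src": "10.0.1.0/24", "port": 8080}]
for cmd in render_iptables(host_rules):
    print(cmd)
```

Since the filtering lives in the host’s own kernel, the allowed path is the shortest one: workload to workload, no hairpin through a firewall appliance.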
The PCE makes it possible for Illumio to centrally model all endpoints in the system as a graph and calculate just what rules are required to enforce an organization’s desired security posture. The PCE also allows security to be expressed in abstract ways. Illumio describes its RAEL (pronounced “rail”) policy model as four-dimensional.
- Roles are arbitrarily assigned. A typical organization might use web, app, and database tiers to describe roles.
- Applications represent a specific application class, independent of the environment that application is operating in.
- Environments are the various places an app might live — think dev, UAT, QA, and prod.
- Locations can be whatever the operator likes, whether that’s a rack and pod, data center, or geographical scheme.
The PCE uses RAEL policy as described by the operator, computes appropriate individual policies, and sends either entire configs or diffs down to individual VENs to enforce that policy.
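The computation step can be sketched roughly like this. Everything here — the label keys, the rule shape, the output format — is an assumption made for illustration, not Illumio’s internal representation; it only shows how one abstract rule over RAEL-style dimensions fans out into concrete, per-workload allow lists:

```python
# Illustrative sketch: given workloads labeled on RAEL-style
# dimensions and one abstract rule, compute the concrete peer
# list each workload must enforce. Names and formats are invented.
workloads = {
    "10.0.1.5": {"role": "web", "app": "store", "env": "prod", "loc": "us-east"},
    "10.0.2.7": {"role": "app", "app": "store", "env": "prod", "loc": "us-east"},
    "10.0.2.9": {"role": "app", "app": "store", "env": "dev",  "loc": "us-west"},
}

# Abstract rule: prod web may reach prod app on 8080, regardless of IP.
rule = {"from": {"role": "web", "env": "prod"},
        "to":   {"role": "app", "env": "prod"},
        "port": 8080}

def selected(labels, selector):
    """True if the workload's labels match every dimension in the selector."""
    return all(labels.get(k) == v for k, v in selector.items())

# Per destination workload, compute which sources it must accept.
per_host = {}
for dst_ip, dst in workloads.items():
    if selected(dst, rule["to"]):
        srcs = [ip for ip, w in workloads.items() if selected(w, rule["from"])]
        per_host[dst_ip] = {"allow_from": srcs, "port": rule["port"]}

print(per_host)
# Only the prod app host (10.0.2.7) receives a rule; the dev
# app host matches on role but not environment, so it gets none.
```

Note that the dev workload is excluded automatically: one abstract rule, evaluated against labels, yields different concrete policy on every host.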
And what of systems that can’t have VEN installed? As long as there’s a VEN on at least one side of the conversation, Illumio can manage it — Illumio checks traffic at both ingress and egress. However, Illumio cannot see or manage workloads where there is no VEN on either side of the conversation.
The view from the hot aisle.
Illumio has a lot going for it. There is no dependency on greenfield. The vast majority of workloads in environments I’ve worked in have been either Linux or Windows. The architecture solves the microsegmentation problem effectively, and according to Illumio, scalably. Other than host operating systems that can handle VEN, there isn’t much Illumio is asking of an infrastructure.
For heterogeneous environments that just don’t know what they might look like tomorrow, Illumio is a security solution that doesn’t lock its users into an architecture. In a world where most vendors, and sometimes even open source projects, are looking for lock-in, finding a solution that will work with you no matter what direction you want to go in is refreshing.
If I were looking for a security solution that could work with me no matter what infrastructure I choose, Illumio would be on my bake-off list.