Thin Slicing a Black Swan: The Search for the Unknown Unknowns

Over the last two weeks I’ve had an ongoing conversation with Derick Winkworth regarding the colossal (and largely unmanageable) amount of data gathered in information security. I even brazenly cornered Josh Corman, one of the founders of the Rugged Software movement, at AppSec DC to discuss my thoughts. But let me quote a famous American poet, Donald Rumsfeld, on the subject:

[T]here are known knowns; there are things we know we know.
We also know there are known unknowns; that is to say we know there are some things we do not know.
But there are also unknown unknowns – there are things we do not know we don’t know.

The difficulty is that we think we can solve the problem of the unknowns, namely intrusions, with more data. But that hasn’t been the case so far. It seems that the more information we have, the less we know. A recent article from National Defense about the shortcomings of data gathering in the intelligence community seems to confirm this.

“Military drone operators amass untold amounts of data that never is fully analyzed because it is simply too much.”  -  Michael W. Isherwood, a defense analyst and former Air Force fighter pilot.

As infosec professionals we are swimming in prodigious amounts of data, but it isn’t making us better at our jobs; if anything, it seems to be making us worse. Verizon’s 2012 Data Breach Investigations Report found that, across organizations, an external party discovers 92% of breaches. Trustwave’s number was slightly lower, at 84%. Then there’s the losing battle of malware detection, with cries of “Anti-virus is dead” echoing through the virtual hallways of SANS. We continue to desperately grasp at that straw of, “If we only had some more data.” Unfortunately, because of the increasing number of unknowns, we’re frequently subjected to the collapse of security’s “No Silent Failure” rule that Dan Geer warns us about.

Vendors will tell you that the solution is to collect and correlate even more data by purchasing one of their nice shiny SIEMs. So enterprises throw big, heaping wads of cash at a product, including a metric ton of professional services for the deployment. What they wind up with is a white elephant that either never gets fully implemented after the money runs out or isn’t much better at weeding out false alarms than the home-grown alerting scripts they were using in the first place. Ah, but we’re told we need to break down silos and make security everyone’s business in the enterprise. Agreed, but have you ever tried to navigate the complex politics of most IT organizations in order to get centralized log correlation configured? I can tell you from personal experience, it’s easier to negotiate peace in the Middle East. Even if you do manage to get some kind of unified logging, there never seems to be enough time to manage the care and feeding of the system. The frequent answer is to request more staff to look at a dashboard or manually parse through piles of data. Security teams are desperate to find that Hadoop Hero to slay the dragon of Big Data. Is the security industry really so inept, or are we looking at the problem in the wrong way?

Bayesian probability applied to security analysis will tell you that if you get enough data and apply the right models, you can predict the likelihood of events, i.e. intrusions. But isn’t that what we’re already doing, without much success? What if the issue isn’t that we don’t have enough data, but that we’re collecting too much? In Nassim Taleb’s books, Fooled by Randomness and The Black Swan, he challenges the human mind’s tendency to find causality and underestimate randomness in the world. He introduces the concept of the “Black Swan event,” aka the unknown unknown: a high-impact event that can’t be predicted from the data that came before it. He was taking aim at the financial industry, where he used to be a hedge fund manager and trader, but I think it’s also applicable to the security industry. *
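
To see why the “enough data plus the right model” promise so often disappoints, here’s a toy base-rate calculation in Python. The numbers are entirely made up for illustration, but the shape of the result is the familiar one: when real intrusions are rare among the events you inspect, even a very accurate detector buries you in false alarms.

```python
# Toy Bayes' theorem illustration with made-up numbers: even a "good"
# detector produces mostly false alarms when real intrusions are rare.

p_intrusion = 0.001              # assumed prior: 1 in 1,000 inspected events is malicious
p_alert_given_intrusion = 0.99   # assumed detection rate (true positive rate)
p_alert_given_benign = 0.01      # assumed false positive rate

# P(alert) = P(alert|intrusion)P(intrusion) + P(alert|benign)P(benign)
p_alert = (p_alert_given_intrusion * p_intrusion
           + p_alert_given_benign * (1 - p_intrusion))

# P(intrusion|alert) via Bayes' theorem
p_intrusion_given_alert = p_alert_given_intrusion * p_intrusion / p_alert

print(f"P(intrusion | alert) = {p_intrusion_given_alert:.2%}")
# Roughly 9% -- about nine out of ten alerts are noise, no matter how big the pile gets.
```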

In Malcolm Gladwell’s book, Blink, he provides examples of a concept called Thin-slicing.

“Thin-slicing” refers to the ability of our unconscious to find patterns in situations and behavior based on very narrow slices of experience.[1]

One of the most compelling examples of this is a medical case study from Cook County Hospital’s struggle with identifying patients in danger of an imminent heart attack. The public hospital’s emergency room was overwhelmed by patients and had limited funding, so the staff needed a more efficient way of determining who was at immediate risk in order to conserve hospital resources. Lee Goldman, a cardiologist, had created a protocol based upon an algorithm developed in partnership with mathematicians. After two years of using Goldman’s decision tree, the hospital staff was 70% more effective at recognizing patients at risk.[2] In this example, they actually used less information and it made them more successful.
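
The whole idea fits in a few lines of code. This is only a loose, hypothetical sketch of a Goldman-style decision tree; the factor names and thresholds below are invented for illustration, not the actual Cook County criteria. The point is that the decision rests on a handful of high-signal inputs rather than the entire patient chart.

```python
# Hypothetical sketch of a Goldman-style decision tree: a handful of
# high-signal factors instead of the whole chart. Factors and thresholds
# are invented for illustration only.

def triage(ecg_abnormal: bool, unstable_pain: bool,
           fluid_in_lungs: bool, systolic_bp: int) -> str:
    risk_factors = sum([unstable_pain, fluid_in_lungs, systolic_bp < 100])
    if ecg_abnormal and risk_factors >= 2:
        return "intensive care"
    if ecg_abnormal or risk_factors >= 1:
        return "monitored bed"
    return "regular bed"

print(triage(ecg_abnormal=True, unstable_pain=False,
             fluid_in_lungs=True, systolic_bp=95))   # -> "intensive care"
```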

Is it possible to use Thin-slicing in security while still avoiding the dreaded Silent Failure? I can confirm from personal experience that just because you have more data and it screams at you more often, that doesn’t mean you’ll be more inclined to pay attention to it. Please raise your hand if you’ve ignored the copious false-positive alerts that go to your email or cell phone, only to find you actually missed something important because there was too much information to process. Maybe Thin-slicing techniques applied to the right data could make a difference, because I think it’s obvious we can’t continue in this current direction.
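
As a rough illustration of what that might look like in practice (the signal names, confidence threshold, and the slice itself are all hypothetical, not from any product): instead of forwarding every event to a human, page someone only for a narrow slice of high-confidence signals and roll everything else into a digest for later review.

```python
# Hypothetical thin-slicing of alert data: page a human only for a narrow
# slice of high-confidence signals; everything else goes to a daily digest.
# Signal names and thresholds are invented for illustration.

HIGH_SIGNAL = {"outbound_c2_beacon", "new_admin_account", "mass_file_encryption"}

def route(alert: dict) -> str:
    """Decide whether an alert is worth a human's immediate attention."""
    if alert["signal"] in HIGH_SIGNAL and alert["confidence"] >= 0.9:
        return "page_on_call"
    return "daily_digest"

alerts = [
    {"signal": "port_scan", "confidence": 0.99},
    {"signal": "new_admin_account", "confidence": 0.95},
]
for a in alerts:
    print(a["signal"], "->", route(a))
# port_scan -> daily_digest
# new_admin_account -> page_on_call
```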

Maybe the real evolution in the security industry will come when we realize that we can’t quantify or fight all the unknowns. What we can do is create strong infrastructures that minimize technical debt by building secure applications and protocols from the start, then add the equivalent of air bags to our architecture for when the inevitable intrusion occurs. We could also focus more on the things we can control, like the human factor, because even though compromises originate with humans, people are also your best intrusion detection mechanism.

*Disclaimer: incredible oversimplification of some very complex ideas.

1. Gladwell, Malcolm (2007). Blink. Back Bay Books, Little, Brown and Company (first published 2005), p. 23.

2. Gladwell, Malcolm, Blink, p. 135.

Mrs. Y
Mrs. Y is a recovering Unix engineer working in network security. Also the host of Healthy Paranoia and official nerd hunter. She likes long walks in hubsites, traveling to security conferences and spending time in the Bat Cave. Sincerely believes that every problem can be solved with a "for" loop. When not blogging or podcasting, can be found using up her 15 minutes in the Twittersphere or Google+ as @MrsYisWhy.
  • EinarAleksejev

It’s not about the amount of time or data; it’s about just not doing the work.

  • TFrenk

There’s nothing wrong with using a SIEM. But most SIEMs, unlike Secnology for example, cannot handle Big Data anyway. It’s hard to slice when you’re missing data. So once you have all the data & the right tools to manage it, then you definitely need a security expert who will do the slicing, make the calls & set up the rules.