2007-05-25

Four interdependent reasons why Information Security fails

In a fit of inspiration on the metro this morning - fueled by frustration at work over why we increasingly have to build our own tools to detect advanced threats - I realized that four overlapping assumptions, and the tradeoffs they impose, shared by both the InfoSec field as a whole and the manufacturers supporting it, are creating artificial barriers to success in network defense (NetD). In this essay, I outline those problems, how they connect, and why they're preventing progress in tools and techniques that are so desperately needed.

Tradeoff 1: Inline Analysis versus Offline Analysis
Assumption: Analysis must be done inline.
Many tools and approaches rely on, or are limited by, the ability of the solution to function at the speed of the technology being "protected." Email cannot be delayed by minutes or hours for processing. Packets must be delivered with near-zero added latency. This is because analysis is being done inline: the delivery of the message, the execution of the process, the delivery of the packet is blocked until the requisite analysis is complete. It doesn't have to be this way. Why not analyze packets offline? The other three assumptions, if accepted, form the premises that necessitate inline analysis.
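
To make the alternative concrete, here's a minimal sketch of offline analysis, assuming the scapy library: a passive tap (tcpdump on a SPAN port, say) writes traffic to disk, and this loop runs behind it, adding zero latency to delivery. The filename and one-string "signature" are placeholders, not a real detection set.

```python
# A minimal sketch of offline analysis, assuming the scapy library.
# A passive tap writes traffic to disk; this loop runs behind it,
# so nothing on the wire waits for the analysis.
from scapy.all import PcapReader, IP, TCP, Raw

SIGNATURE = b"cmd.exe"  # placeholder for a real signature set

def analyze_offline(pcap_path):
    # PcapReader streams from disk, so large captures need not fit in memory.
    with PcapReader(pcap_path) as packets:
        for pkt in packets:
            if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
                if SIGNATURE in bytes(pkt[Raw].load):
                    # Alert an analyst; delivery already happened.
                    print(f"hit: {pkt[IP].src} -> {pkt[IP].dst}")

if __name__ == "__main__":
    analyze_offline("capture.pcap")  # filename is a placeholder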

Tradeoff 2: Protection versus Detection
Assumption: Detection is insufficient
This assumption isn't as explicit as the others, though we all read Gartner's 2003 prediction that IDS would be dead within a few years (and hopefully you laughed as heartily as I did). Many in the IDS/IPS community acknowledge that such devices will never run in purely-prevention mode. That's good. But why aren't we incorporating this mentality into other tools? Anti-virus, for example, only looks for things that are known to be bad. Sure, you've got heuristics, but you can't granularly define "block this, allow that and log, ignore this other stuff." Worse, all we have to go on when an action is blocked is some meaningless name that, when referenced on the vendor's website, yields a vague and often useless explanation. There is no granularity, and the tool is geared exclusively toward protection, with detection as an afterthought. Allow blocking and logging based on behavior. Give administrators the ability to say "log all files matching this md5deep sequence 80% or more" and "block if this other md5deep sequence is matched 90% or more." Firewall rule strategy is another example. Why do we only give advice on blocking and allowing? Why doesn't our technique set include "identify regions you don't do business with, and log that activity with a higher priority" (assume for a second that you can give firewall logs a priority)? Blocking and allowing is often the only strategy ever discussed.
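
Here's what that granularity might look like. This is only a sketch: md5deep-style fuzzy matching is approximated with difflib from the Python standard library purely for illustration, and the rule set is hypothetical.

```python
# A sketch of the "log at 80%, block at 90%" granularity described above.
# difflib stands in for md5deep-style piecewise matching; a real tool
# would use fuzzy hashes. The rule below is hypothetical.
import difflib

RULES = [
    {"name": "dropper-family-A",
     "ref": b"...bytes of a known-bad sample...",   # placeholder
     "log_at": 0.80,
     "block_at": 0.90},
]

def evaluate(sample: bytes):
    """Return the actions the administrator's policy dictates."""
    actions = []
    for rule in RULES:
        score = difflib.SequenceMatcher(None, rule["ref"], sample).ratio()
        if score >= rule["block_at"]:
            actions.append(("BLOCK", rule["name"], score))
        elif score >= rule["log_at"]:
            actions.append(("LOG", rule["name"], score))
        # Below log_at: ignore - exactly the knob today's AV doesn't expose.
    return actions
```

The point isn't the similarity metric; it's that the administrator, not the vendor, decides where "log" ends and "block" begins.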

The perceived protection necessity is driven by inline, general analysis and the 100% solution. If those premises are accepted, this one follows necessarily.

Tradeoff 3: Complete Solutions versus Incomplete Solutions
Assumption: Incomplete solutions are not worth pursuing
This assumption is a bit more subtle, but it's prevalent throughout our industry. Consumers want, and vendors deliver, complete solutions. Consumers do not want incomplete solutions. Both of these statements are unfortunately true - true because the consumer perceives that an incomplete solution cannot produce value. The consumer here is you, me, or often our bosses - heck, much of the InfoSec field. The problem with this mindset is that it's wrong. Incomplete solutions can deliver significant value. If a solution doesn't scale, that doesn't make it a worthless solution. It simply means the solution needs to be selectively applied where it can provide the most value. An example: a system that opens every Word document an enterprise receives in a VM, analyzes the content, and reports on the results - what files are written, and so on. Computationally expensive? You bet it is! Can a large enterprise do this for all inbound Word documents, to find that elusive zero-day exploit? Probably not. Can a large enterprise apply this to all email from countries and regions with which it does not do business? Probably. Can a large business limit this further to, say, "@competitor.com" email addresses as sources? You see where I'm going.
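
A sketch of that triage logic, to show how cheap the "incomplete" part is. The domain lists are placeholders, and detonate_in_vm() is a stub standing in for "open the document in a throwaway VM and record what it does."

```python
# A sketch of selectively applying the expensive analysis. Domain lists
# are placeholders; detonate_in_vm() is a stub for the VM detonation.
NO_BUSINESS_TLDS = (".example",)        # regions you don't trade with (placeholder)
WATCH_DOMAINS = {"competitor.com"}      # sources worth the extra cost (placeholder)

def should_detonate(sender: str, filename: str) -> bool:
    if not filename.lower().endswith((".doc", ".docx")):
        return False
    domain = sender.rsplit("@", 1)[-1].lower()
    return domain in WATCH_DOMAINS or domain.endswith(NO_BUSINESS_TLDS)

def detonate_in_vm(data: bytes) -> None:
    # Stub: a real version opens the file in a VM and diffs the filesystem,
    # process list, and network activity afterward.
    print(f"detonating {len(data)} bytes")

if __name__ == "__main__":
    if should_detonate("alice@competitor.com", "q2-report.doc"):
        detonate_in_vm(b"%fake doc bytes%")
```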

Incomplete solutions are often viewed with skepticism for a variety of reasons, notably the implication that the technology is immature. This isn't always the case. Some problems simply have no known solution in polynomial time - the cost of computing a solution grows exponentially with the size of the problem space, so no implementation will ever scale to the full input. And even if a tool is immature, as long as the approach is valid, a positive result is a positive result. A Word document that, when opened, drops something into C:\WINNT\system32 is suspicious, never mind the fact that you only analyzed 5% of all Word documents. If that 5% represents 80% of your malicious email, you've done a good job.

I've heard IDS vendors refuse to do, say, base64 decoding, because "what about base32? Or any of the other weird one-offs?" This is the same mindset. A complete solution is not always necessary - but it becomes more necessary if you insist on inline, general prevention.
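
For what it's worth, the "incomplete" base64 decoding those vendors refused fits in a few lines. A sketch, with a deliberately naive regex for base64 runs and a placeholder signature:

```python
# A sketch of deliberately incomplete decoding: find plausible base64 runs,
# decode them, and re-run the signature over the result. base32 and the
# other one-offs are simply not handled - and that's fine.
import base64
import re

B64_RUN = re.compile(rb"[A-Za-z0-9+/]{20,}={0,2}")  # naive on purpose

def decoded_views(payload: bytes):
    yield payload                      # the raw payload itself
    for match in B64_RUN.finditer(payload):
        try:
            yield base64.b64decode(match.group(), validate=True)
        except Exception:
            continue                   # not valid base64; skip it

def matches(payload: bytes, signature: bytes) -> bool:
    return any(signature in view for view in decoded_views(payload))

if __name__ == "__main__":
    blob = b"junk " + base64.b64encode(b"GET /cmd.exe HTTP/1.0") + b" junk"
    print(matches(blob, b"cmd.exe"))   # True: found only after decoding
```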

Tradeoff 4: General versus Specific
Assumption: All tools must solve all problems
Kudos to the folks who developed Nessus, Nmap, Tripwire, (until recently) Snort/Sourcefire, and anyone else who built and continues to hone tools focused on a specific task. That is why these tools are the best in the industry: their authors and development teams weren't distracted by the untold other problems elsewhere in Information Security. Unfortunately, this focus is becoming more and more the exception. Managers want tools that solve all of their problems for them, and to support their bottom line, every security vendor out there is happy to oblige. The problem is that these tools are then designed to provide some value to everyone, which leaves an enormous gap: the rest of the problems each organization faces. Every industry, every region, every company has nuanced security challenges. When every tool solves the same few problems, there are no tools left to combine into a custom solution to the remaining ones - like when you built that giant blue battleship with a million Legos when you were 12, only to have your dumb Labrador walk through it and destroy it.

We have the general problems licked. It's the specific threats that are now most effective, seriously threatening bottom lines and even national security. Why are there 30 approaches to anomaly-based IDSs, and none for mining and clustering firewall data? SIM data? Why are there no pcap-based systems to monitor network-aware applications like FTP and DNS? Why are tools for analyzing full packet captures and netflow so nascent from a security perspective? I'll tell you why: these are tools that need to be selected and deployed by skilled analysts, put together to form a custom security system. They're specific, tactical tools that need to be implemented with care, and the industry fails to recognize them as absolutely essential in filling the large gaps left by the general tools. A general tool can't solve all of anyone's problems. We need more specific tools - toolsets, even - to enable enterprises to customize their defenses. But before that happens, the assumption that every tool must do everything needs to be banished from our industry. I'm tired of my AV trying to be a HIPS, and my firewall trying to analyze layer 7 packets. But this all ties into prevention over detection, 100% solutions, ... and now I'm a broken record.
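
As a taste of how low the bar is for one of these missing specific tools, here's a sketch that mines firewall deny logs for heavy hitters worth an analyst's attention. The CSV column names are assumptions; adapt them to whatever your firewall exports.

```python
# A sketch of mining firewall logs for analyst attention rather than just
# blocking/allowing. The CSV columns ("action", "src_ip", "dst_port") are
# assumptions - adapt to your firewall's export format.
import csv
from collections import Counter

def cluster_denies(log_path, top=20):
    by_src_port = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["action"] == "DENY":
                by_src_port[(row["src_ip"], row["dst_port"])] += 1
    # Surface the heaviest hitters, highest priority first
    for (src, port), count in by_src_port.most_common(top):
        print(f"{count:6d}  {src} -> :{port}")

if __name__ == "__main__":
    cluster_denies("firewall.csv")  # filename is a placeholder
```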


Delivery of an infected email is currently considered a failure of the system, because detection isn't good enough and we can't analyze every email in the depth we'd like. If you buy the premises, this is logical. I don't. All of these assumptions are incorrect, and in my opinion they persist because advanced analysis is not assumed as a follow-up step to detection. Again, the "Complete Solution" rears its ugly head. No technology provides a complete solution. To provide analysts with the data they need to make decisions that, today, cannot be made by a computer, we must leave the above assumptions behind and broaden our analytical scope. In the meantime, you'll notice the strong interdependence of the assumptions: if one or two can be broken, the others will fall with ease (reminds me a bit of NP-Complete theory).

These are not the only problems or challenges the industry faces today, but many of our industry's failures are symptomatic of them. The sooner the industry - and our government - acknowledges the true security solution space and leverages all of it, the sooner we will be able to address advanced threats appropriately.
