A Market-based Approach to Predicting Compromises

This is an idea I've been noodling over and shopping around to some people in the industry for a month or two now, and I think I'm ready to at least suggest it intelligently here and see what others think.

I was reading a Scientific American article on the University of Iowa's long-running experiment in using prediction markets to forecast the outcome of presidential elections, and I thought: why not try a similar model to forecast data breaches and security compromises at publicly-traded companies?

As the article notes, prediction markets have been applied to a variety of problem sets. Their implementations have ranged from the mundane to the contentious (worth a read), but their true predictive power is difficult to prove and the subject of long-running debate. It certainly seems that causality wasn't on the drawing board when they were created - the article even acknowledges, "Economic theoreticians have yet to understand precisely why this novel means of forecasting elections should work better than well-tested social science methods," and that uncertainty extends to other uses of prediction markets as well. But hey, these are economists and business folks we're talking about, so we'll let it slide. One thing that is certainly true is that a prediction market is an effective mechanism for aggregating knowledge. Those with the most knowledge are likely to invest the most, which means the state of the market represents the experts' best guesses about a difficult-to-measure situation.
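To make the aggregation mechanism concrete, here is a minimal sketch (my own illustration, not from the article) of Hanson's logarithmic market scoring rule (LMSR), a common automated market maker for prediction markets. As informed traders buy shares in an outcome such as "breach within a year," its price rises, and the price can be read as a consensus probability. The function names and the liquidity parameter `b` are my choices for illustration.

```python
import math

def lmsr_cost(q, b=100.0):
    """Market maker's cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Implied probabilities for each outcome, given share counts q."""
    exps = [math.exp(qi / b) for qi in q]
    total = sum(exps)
    return [e / total for e in exps]

def buy(q, outcome, shares, b=100.0):
    """Price a trade: the cost of moving from q to q + shares on one outcome."""
    q_new = list(q)
    q_new[outcome] += shares
    return q_new, lmsr_cost(q_new, b) - lmsr_cost(q, b)

# An even market on ["breach within a year", "no breach"].
q = [0.0, 0.0]
print(lmsr_prices(q))  # [0.5, 0.5]

# An informed trader buys 50 "breach" shares; the implied
# probability of a breach rises accordingly.
q, cost = buy(q, 0, 50.0)
print(round(lmsr_prices(q)[0], 3))  # 0.622
```

The point of the sketch is that no central expert sets the price: it emerges from whoever is willing to put money behind their knowledge.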

So what does this mean in terms of the market's utility? Like a financial market, a security market could improve confidence in decision-making by consumers and businesses alike, without requiring them to be experts in the industry. The value of companies on the exchange would represent their relative and "absolute" (I use that term loosely) data security posture. While this is unlikely to be a key decision point in any but the most specific cases, it supplements decision-making based on other criteria, and could serve as leverage in large deals and acquisitions. Do we want to invest in this company that deals almost exclusively in personal data? Do I want to open an account with this bank? You see where I'm going here.

Naturally, this model isn't without its problems, the first and most difficult of which is at the heart of many security challenges: how does one know when a security compromise occurs? Underlying this question are problems of definition, disclosure, and internal measurement. The solution is a robust set of market rules, driven by breach disclosure and data protection laws. Can these rules be broken? Of course, and while breaking the rules of market participation would undermine confidence in the market, this is a balance that financial markets strike successfully, with robust oversight complementing the rules of the market.

Market manipulation manifests a little differently than it does in financial markets. If one knows of the potential for a security breach, one could invest accordingly, cause the breach, and profit handsomely. The fundamental difference is control - in large financial markets, it's more difficult for one person or group to bet money on an outcome knowing that they can, with some likelihood, create that outcome. So the parallels to insider trading in financial markets are clear, but incomplete. That notwithstanding, while some mitigations may differ in nature between the two markets, this problem shouldn't be a show-stopper for the market's success, as it can be mitigated through rigorous oversight and enforcement.

I don't see this as a panacea, but rather as a knowledge aggregator and magnifier. Whether it would be useful, or even accurate, I cannot say - nor do I believe anyone could. IANAE (I am not an economist), nor have I ever seriously studied prediction markets, so it's quite possible this proposal reveals my naivety by overlooking some serious faults. If a "real" economist were to give the idea a preliminary thumbs up, or at least not laugh themselves to tears at the thought, I think further study would be an interesting endeavor. At the very least, I think applying economic models to security problems holds a great deal of promise, and it is already being considered by others out there, although I haven't been able to find anyone considering this particular approach.

Update 5/27 08:51
It comes as no surprise to learn that this isn't the first time such a market-based approach to security problems has been proposed (thanks for the link, Richard). You'll find this an interesting and more general read on pretty much the same topic.

Update 6/10 20:30
Adam, and readers from Emergent Chaos, provided some good feedback on this idea. Even though the general response is that this wouldn't be a supportable approach, I appreciate the input! This helps me focus my research intentions on the most promising theories and technologies.


Strategic warfare in cyberspace

The USAF is considering building its own botnet. This is a really dumb idea. Richard Bejtlich has a good blog post which discusses many of the obvious problems here, but there are other reasons not to do this; foremost, that the USAF would be ignoring the advice of one of its own experts and pioneers in the strategic use of information warfare.

First, isn't this approach really nothing more than effective central management of computer resources? I think that is a great idea. If the USAF has to use a buzzword and a cool twist to convince base commanders to buy into the central management of all of their computers, then so be it.

However, if the true purpose is to build an offensive strategic capability, then I fear we're in trouble. For the remainder of this entry, I will quote liberally from Col. Gregory Rattray's (8th AF, retired) seminal book, Strategic Warfare in Cyberspace; the argument against such an approach can be made almost entirely in his own words.

In focusing on offense, strategic warfare theorists generally have been influenced by a belief that new technologies will allow attackers to get through and attack key centers of gravity. These theorists assume that adversaries subjected to such attacks have significant vulnerabilities. Strategic warfare theories assume that offensive strikes will prove capable of inflicting sufficient punishment on civilian targets or enough damage on infrastructures supporting military operations to influence adversaries and thereby achieve coercive or deterrent objectives. (p. 98)

What adversary of the US exists that could be coerced by such a force? What national "center of gravity" exists in cyberspace outside of the US? To date, we're the only ones vulnerable to such an attack. And not only that: as we saw during the Cold War with the Soviet Union, developing this capability will only encourage our adversaries to develop the same capability - which could be used far more effectively against us than vice versa. To wit:

Enabling Conditions for Waging Strategic Warfare
3. Prospects for effective retaliation and escalation are minimized. Actors initiating strategic warfare need to assess an opponent's likely reactions to a strategic attack and possible courses of action after an attack has been sustained. Such attackers must also assess, prior to initiating attacks, their own vulnerabilities to strategic attack and their adversary's capability to retaliate. The efficacy of an actor's threat or use of attacks will depend on its vulnerability to retaliation both in kind and through other military and nonmilitary means. (p. 99-100, emphasis mine)

And even assuming that all of these preconditions are met - that the military invests significant resources in its defensive posture to an attack in kind - even then, the efficacy of this strategy is dubious at best:

The use of force is not simply a linear exercise in orchestrating one's own forces and unleashing them with certain effect against the enemy. Adversaries will attempt to anticipate each other's actions and minimize their detrimental effects. The likely course of an opponent's actions can only be guessed at, however, not determined with any certainty. As eloquently developed by Edward Luttwak, strategy is governed by an interactive logic rather than a linear logic. (p. 78)

Prior to 6 August 1945, the strategic bombing campaigns of World War II had opened up a new battlefield for conflict based on attrition. These campaigns were neither quick nor decisive. Those assessing the potential for waging strategic information warfare have so far paid little attention to the possibility that its actual use may well confront similar hurdles in terms of requirements for lengthy campaigns and lack of decisiveness. (p. 84, emphasis mine)

Does anyone still think this is a good idea or wise investment of resources?


Are we legislating blaming the victim?

Ladies and gentlemen, I present to you HR5983, Homeland Security Network Defense and Accountability Act of 2008. From the bill, describing a proposed requirement of the DHS IG in its report to congress:

"describing the effectiveness of the testing protocols developed under subsection (c) in reducing successful exploitations of the Department’s information infrastructure."

I really fear this is another case of blaming the victim. Can more be done to raise the bar for attackers? Of course. I'll be the first to throw stones at DHS for having very, very shoddy security and doing zilch to help out the rest of us. But it occurs to me that asking DHS officials to prevent compromises is in some ways akin to giving women a bottle of mace and asking them to stop getting assaulted. The analogy is harsh, but it drives home my point. We'd never do the latter, so why is the former an approach from which we expect results?

The real problem is the high ROI for attackers and insurmountable odds facing computer network defenders. There isn't, nor has there been, any real political consequence attached to getting "caught." Until decision makers in the executive branch show a willingness to address this gap, we will only see limited improvements no matter how strongly worded a bill is. And, to that end, it is our job as experts in the field to communicate this problem to the public, with the hope that it will flow up in the democratic way the US's founding fathers intended.
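The ROI argument above can be put in rough numbers. The figures below are purely illustrative (my own, not from the bill or any study): the point is that as long as the probability of any real consequence for an attacker stays near zero, the attacker's expected value remains positive no matter how much defenders spend on testing protocols, and only raising that probability flips the sign.

```python
def attacker_expected_value(gain, cost, p_consequence, penalty):
    """Expected payoff to an attacker: the gain if unpunished, minus the
    cost of the attack, minus the expected penalty if consequences land."""
    return (1 - p_consequence) * gain - cost - p_consequence * penalty

# Hypothetical numbers: cheap attack, valuable data, effectively
# no political or legal consequence for getting "caught".
print(attacker_expected_value(gain=500_000, cost=10_000,
                              p_consequence=0.001, penalty=1_000_000))
# Strongly positive: the attack pays.

# The same attack when consequences are likely:
print(attacker_expected_value(gain=500_000, cost=10_000,
                              p_consequence=0.5, penalty=1_000_000))
# Negative: the attack no longer pays.
```

Defensive testing protocols nudge `cost` upward at the margin; attaching real consequences moves `p_consequence`, which is the term that actually dominates.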


Measuring the Effectiveness of Bulk Data Collection

While decompressing from a brutal day of studying for a crypto final, I came across an article on BBC which argues that "huge investment in closed-circuit TV technology has failed to cut UK crime." My first thought was, did they really expect it to?

A lot has been made by the media and bloggers of the efforts in London to deploy thousands of CCTV cameras, much of it concerning the civil liberties of British citizens. I'm going to set those concerns aside for now and focus on more objective measurements (not because these issues are unimportant, but because they aren't relevant to my point here).

To sell or design a widespread CCTV system on the notion that the specter of Big Brother will somehow keep the citizenry well-behaved is so tragically Orwellian that I don't think it warrants another mention. However it was sold to the public or government, and regardless of these silly claims, measuring its success in terms of crime reduction obscures the real investigative benefit of such a system: as a forensic tool.

To bring this into an area where I have more expertise, I think of CCTV in the same way that I think of full packet capture on an important network segment. How much sense would it make to have an analyst sit and watch every packet, every flow, every session that blows by this sensor? How much would I expect detection of malicious activity to increase? Not at all. Even if an analyst could keep up with the data rate of the sensor (which is the case with CCTV), so few events with prima facie investigative meaning occur within the span of human attention that I would expect the results to be negligible. However, in the context of a known attack, suddenly benign or minute details become significant. That white van that leaves a parking spot one minute after a robbery a block away now has some meaning. That weird base64-encoded comment in the HTML is now of concern.

Active monitoring of these dragnet systems is ludicrous. If some correlative system can be built to reduce the data - and that's a big if - then some limited monitoring might make sense, but we are nowhere close to having such a technique, so the argument is moot.

The bigger story is that only 3% of London's street robberies are being solved by security cameras. This is certainly concerning, but street robbery is only one slice of crime. How do these tools assist with other crimes? The information in that article is limited. I would like to see a comprehensive study of the forensic use of this tool by London police - perhaps one exists that I haven't seen. Both the police and the media should focus their attention on this aspect of the system - for critique, improvement, and measuring success. That's what we'll be doing as we build a full packet capture system at work, and how we'll measure its success.


Nothing that hasn't already been said...

...but it bears repeating.

Security has its limits. Thanks to Igor for the help on this entry.

Image on the right is directly from the Ohio state patrol website. Image on the left source unknown; Soviet Russia 1950's ("Be sharp sighted and vigilant").

Windows XP DEP in action

For a bit of comic relief, I'll share with you this error I got at work not too long ago: