2008-11-25

What security can learn from the recent financial crisis

In the most recent Scientific American Perspectives, the editors lament the state of our economic system and place a great deal of the blame for it on software models. In their words, risk management models should serve only as aids, not substitutes, for the human factor. While this is certainly not the only example of the perils of algorithms replacing analysts, it is perhaps the most poignant.

In the security industry, software vendors and managers have been pushing hard for years to supplant analysts with software -- the theory being that automated software can do just as good a job, and, after all, labor in this day and age is expensive. The danger, of course, is that the security field is far less mature than the study of capitalism. Finance at least suffered from algorithms dangerously repurposed from unrelated fields of physics and mathematics; our industry employs algorithms that never had a connection to any causal relationship in the first place. Indeed, the end state of "security" has been elusive even in the most anecdotal of terms; we are a long way from quantitative methods to define the risk management that is our job. Yet software vendors are happy to hand-wave their way through a sale, providing what amounts to a false sense of security riding on principles far enough from empirically proven that they are better described as "faith" than "science," even though they are presented as the latter. Management, without the requisite technical skill set or the trust in subordinates who can identify the b.s., is too often eager to buy into the hype.

The information security industry as a whole would be wise to learn from this painful lesson in economics: technologies are tools, to be used by skilled analysts to digest large, complicated data sets and produce actionable intelligence. Analysts should drive the tools; the tools should not drive the analysts. Otherwise, you find yourself dangerously reliant on inflexible tools incapable of identifying larger systemic problems, and the only way a problem gets identified is the collapse of the entire system -- in our case, a catastrophic compromise of security.

2008-11-14

Mirror's Edge

This sounds like a really cool game (warning: annoying flash and sound inside) with an intriguing plot:

"Once this city used to pulse with energy; dirty and dangerous, but alive and wonderful. Now it is something else. The changes came slowly at first. Most did not realize or did not care, and accepted them. They chose a comfortable life. Some didn't... they became our clients.

"In a city where information is heavily monitored, agile couriers called runners transport sensitive data away from prying eyes. In this seemingly utopian paradise, a crime has been committed, your sister has been framed and now you are being hunted."

And besides, that's a really freaking cool tat. Even if it is anime-ink. I wonder if the key to beating the game is realizing the effectiveness of hiding in plain sight. I doubt it. That would be a pretty crappy video game. It might make for a pretty sweet aspect, though. I'm not a huge gamer, but I do have myself an Xbox 360. I may have to check this out.

The game is an extreme example, but nevertheless a potent reminder that hiding data isn't always bad... a notion utterly lost on many in the general public. Any awareness is good awareness.

2008-11-12

Solve the right problem with NAC


NAC is an important technology. It's neat. It's cool. But it's expensive. And, while many Cisco or networking zealots may argue to the contrary, it's not always necessary.

NAC prevents unauthorized computers from participating in a network. This is good for environments your IT staff doesn't control but must still permit a certain level of access to. VPNs, of course, are one of the most common examples. They also happen to be one of the simplest use cases for most administrators.
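At its core, the admission decision NAC makes reduces to a policy check at connect time: is this a known asset, and does it meet our posture requirements? A toy sketch of that logic (the inventory, posture fields, and function names here are illustrative, not any vendor's actual API):

```python
# Toy sketch of a NAC-style admission decision. An endpoint joins the
# production network only if it is a known corporate asset AND its
# reported posture satisfies policy. All names/fields are hypothetical.

KNOWN_ASSETS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}  # corporate MAC inventory
REQUIRED_POSTURE = {"av_running": True, "patched": True}    # minimum health policy

def admit(mac, posture):
    """Return True if the endpoint may join the production network."""
    if mac not in KNOWN_ASSETS:
        return False  # unknown hardware: send to quarantine/guest VLAN
    # every required posture attribute must match policy exactly
    return all(posture.get(key) == value for key, value in REQUIRED_POSTURE.items())

# A healthy corporate laptop is admitted; an unknown device is not.
print(admit("00:1a:2b:3c:4d:5e", {"av_running": True, "patched": True}))
print(admit("ff:ff:ff:ff:ff:ff", {"av_running": True, "patched": True}))
```

Notice that the hard part isn't the check itself -- it's maintaining the asset inventory and posture policy behind it, which is exactly the software-management discipline discussed below.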

However, in corporate environments where assets are owned by the same entity that controls the network, NAC shouldn't be a replacement for good software management. With a few notable exceptions, if you can implement NAC, you can typically implement good software management on your endpoints.

NAC is also not an appropriate binary access control mechanism in most cases. If the primary goal is to restrict network access to computers you own, that is a site security problem. Naturally, if you have contractors or customers who require access to your network, there is a role for NAC to play. The right answer here is to define your security requirements in general terms, articulating for each one whether it is an IT concern or a physical security concern.

Use NAC. But do it with a clear understanding of your goals, and apply it just like you would any other technology: where it's appropriate. IT solutions are slick, but they're not always the best option available.

Image from http://download.101com.com/wa-mcv/spo/images/april7/monitor.gif

2008-11-06

Why the Obama-McCain Hack may be bigger than you think

A recent Newsweek article revealing that both US presidential campaigns were compromised by 'a foreign entity or source' is getting a lot of attention. The article ominously quotes the FBI: "You have a problem way bigger than what you understand." Boy, they aren't kidding. Let me draw a parallel, since the connection is far from obvious.

You have probably read news reports about defense-related data on unclassified networks being targeted by actors that appear to operate from abroad. Working professionals in the defense infosec industry understand the logic from an adversary's perspective: target technology while it is being developed on unclassified networks (where it must live, by necessity, for collaboration), because once the military receives the technology, these details become harder to get as some are classified or more closely held. There is an asymmetry between information sourced at contractors, which tends to be unclassified, and the very same type of information sourced within the government, which tends to be classified. This is one of the not-secret, but not-widely-known, dirty little truths about our classification system.

Here, we see the same tactic applied to a wholly different kind of information. Policy decisions being made by the Obama and McCain camps during election season are likely to translate into official US Government policy once one of them is elected, at least insofar as election promises are upheld. Some of those details will eventually be held close to the vest, and almost certainly classified. But while policies are under development in a not-yet-elected campaign office, they are unclassified, and their custodians (campaign workers) are unqualified or uninterested in protecting them -- except possibly from the other candidate. This is a brilliant application of the same tactic adversaries use to acquire military technology, perfectly timed for the only period in which such an attack could compromise the confidentiality of future policy stances. The parallel may have significant implications; what they are depends on the viewpoint of the reader, but the alignment is, at the least, quite interesting.

If there is a silver lining here, it's that Barack Obama's office now has a first-hand understanding of just how severely questions of information security and electronic espionage can impact national security. Let's hope they remember that when deciding on IT and government-wide security strategies for the next four years.