Email Authentication Frameworks: Truthiness

A few weeks ago, my boss asked for my opinion on an article by Dan Kaplan of SC Magazine titled Keeping A Secret, published 3/9/2008 (yes, a while ago). The article discusses the larger problem of authenticating email senders, and specifically the TSCP (Transglobal Secure Collaboration Program) framework. It was a great opportunity to step back and contemplate the fundamental concerns and drawbacks of authenticating email. I'm sharing my sanitized thoughts here for the consumption of others, as I think these issues are shared amongst security practitioners everywhere - whether it's called TSCP, TEOS [pdf] (Microsoft's Trusted Email "Open" Standard), or something else.

First, a brief bit about TSCP. From their website, TSCP "engenders a common framework for secure collaboration and sharing of sensitive information in international defence and aerospace programs." It is a partnership, not so much an organization or industry trade group. The group has released secure email specifications [pdf] designed to help address the identity management problems inherent in email, in part as an implementation of Homeland Security Presidential Directive 12 (HSPD-12), Policy for a Common Identification Standard for Federal Employees and Contractors.

Enough boring govvie crap, though, let's get on to an analysis of the article and some critical thinking about the claims of the proponents of this and other related systems.

The two sources Kaplan uses to set the tone of this article are Northrop Grumman's Keith Ward, who frames the problem of email authentication, and Amit Yoran (NetWitness CEO & former Bush administration cybersecurity chief), who acts as a professional opinion source on TSCP. Keith does a good job of boiling down the problem we face with targeted, forged emails, and to a certain extent how they've impacted the DoD and its contractors. However, the extent to which TSCP - and indeed any email authentication framework - addresses this problem is greatly exaggerated by Yoran. He even claims the standard "helps remove entire categories of problems that plague us like spear phishing." This is simply not true. The article goes on to cheerlead TSCP as addressing everything from green initiatives to terrorism - weak claims that are clearly hyperbole.

TSCP will give recipients a higher level of confidence that the sender of an email from a participating member is authentic. The meat of the article really centers on Yoran's quote above; however, there are two fundamental problems with the assertion that an email authentication framework (let's assume TSCP is flawlessly implemented) will solve whole categories of problems like spear phishing:

1. It is inconceivable that there will be any situation where all email correspondence for an account holder will be subject to this framework. Wherever there is professional correspondence, there is opportunity for spear phishing. Even where there is casual correspondence, that opportunity exists. To wit, I have seen targeted email campaigns that spoof personal correspondents as senders (scary, huh?). Any broadcast emails that come from a shared or anonymous address will not fit into such a framework. These are common, especially for government contract announcements (BAAs), mailing lists, etc.

2. The security of the system presupposes that all credentials are secure. If any credentials are compromised, this trust system fails, and phishing is not only possible using the compromised credentials, but it stands to be far more effective as the sender is "trusted." The framework provides a quick and effective response in such situations - revoking the credentials - that isn't available in classic email correspondence, but in the interim all other participants are exposed. To that end, the approach suffers from a painful paradox: the larger the system, the more useful it is and the more participation will grow. But as the system grows larger, the likelihood that some credentials will be compromised at any given time grows with it, putting us right back at square one.
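The scaling paradox in the second point can be made concrete with a little arithmetic. Assuming (hypothetically) that each participant's credential carries some small, independent probability of compromise in a given year, the chance that at least one credential somewhere in the system is compromised climbs quickly with membership:

```python
# Illustrative sketch only: p_per_credential is a hypothetical annual
# per-credential compromise probability, not a measured figure.

def p_any_compromised(n_participants: int, p_per_credential: float) -> float:
    """Probability that at least one of n independent credentials
    is compromised: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_per_credential) ** n_participants

# Even a one-in-ten-thousand annual compromise rate makes a compromise
# somewhere in the system a near-certainty at large scale.
for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} participants: {p_any_compromised(n, 0.0001):.1%}")
```

At 10,000 participants the odds of at least one live compromised credential already exceed 60% under these assumptions, which is the "square one" the paragraph describes: the framework's value and its exposure grow together.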

All of this isn't to say that TSCP or similar frameworks are so fatally flawed as to be useless. Such systems do raise the bar for adversaries, making some of their approaches less tractable. Expectations should be tempered, however, and investments should reflect the benefits such frameworks actually deliver in real-world implementations. Users should also realize that strange behavior is strange behavior, even within a trusted framework.

For a long time I have been working on an entry covering identity management more broadly (and philosophically); stay tuned, maybe I'll finish it one day.


Someone's finally listening

When a hospital computer gets compromised, a person's health records are at risk of theft. When a bank is compromised, people stand to lose money through fraud.

When defense department computers are compromised, information about the tactics and technologies used to defend our country can be lost. For years, those of us at major defense contractors have been jumping up and down, waving our hands, trying to tell the US Government that we have a major problem: compromises of unclassified systems that have the potential to impact national security. And let there be no mistake: regardless of your feelings on the subject, the lines between the networks and staff of the DoD and the defense industrial base are blurred. A compromise of one likely means a compromise of the other, and vice versa. We sit next to each other in operations centers. We build next-generation technology side-by-side.

It seems that, along with the injection of billions of dollars from a presidential directive, someone is finally starting to pay attention. Naturally, this is being presented as their idea, but whatever - the important point is that it gets addressed.

A choice quote:
The government needs the "best and brightest" from Silicon Valley and elsewhere in the private sector to work on creating an advanced warning system to prevent such cyberattacks.

The best & brightest in the DIB have been trying to help the government for years. If this means they will finally start listening (as an institution - to date collaboration has been at more of a professional than organizational level), then I welcome the change. If this means DHS will begin looking for a silver bullet to every security problem, or engaging in more security theater like that which we see at airports, then I am loath to think what this could mean. I can only imagine FTP over IP being outlawed because an adversary stole sensitive military technology from a compromised system via that protocol. Laughable, yes, but this is a direct parallel to the approach taken in matters of airport security. We need something more than theater and throwing money at snake oil.

The important question now is: can the DHS, which has failed at many of its most important tasks over its six years of existence (see also: Katrina), and the NSA, still notorious amongst the intel community for being unwilling to share data, accomplish this task? Let's hope so.


Economics and the Security Cold War

The current state of the computer security threat landscape, it has been said, is a new cold war. Regardless of how deeply this analogy holds, I feel lessons can be learned from it. Let's accept the cold war metaphor as an axiom for the moment.

It is widely agreed that the cold war between the United States and Soviet Union was decided by economics - quite simply, the US outspent the USSR. In an effort to keep up with American defense spending, the Soviets sent their economy into collapse. If we follow this lesson through our analogy, the problem of security boils down to one of economics, not complete protection. Slowly, decision makers are accepting the truth that no computer system or network can be perfectly secured. Thus, the goal of computer security becomes making the cost of compromise higher than some alternative. In a necessary divergence from the 20th-century cold war - one that makes the economics of computer security more difficult - we must understand that there is no terminal state. There is no Soviet Union to collapse and relax the obligation of net defenders. There will always be some entity with a computer and an ambiguous moral compass.

Economic efficiency therefore becomes the ultimate goal of security: not just to defend, but to defend in the cheapest possible way, so that the most robust defenses can be erected and compromising a network becomes too expensive to warrant investment as the adversary weighs its options. Ideally, this makes pursuing the same ends through legal and moral means the more cost-effective choice. Most likely, though, it just moves the problem to another entity or an altogether different domain.

Understanding the threat landscape of the environment to be defended, in this paradigm, is paramount. Adversaries looking to save money by sharing games, videos, or music (classically referred to as warez) can be deterred quickly and cheaply, since their payoff is capped at the roughly $25 cost of a DVD. Quite a bit more effort (money) is necessary to outspend the likes of scammers and organized crime syndicates. Once espionage - nation-states attempting to achieve multibillion-dollar generational jumps in their military technology - comes into the picture, it's easy to see that the costs become staggering.
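The reasoning above is a break-even comparison at heart: a rational adversary invests only when the expected payoff exceeds the cost of the attack, so the defender's job is to push that cost above the payoff. A minimal sketch, using entirely hypothetical dollar figures chosen only to show the orders of magnitude:

```python
# All figures are hypothetical placeholders for illustration;
# real attacker costs and payoffs vary enormously.

def attack_is_rational(attacker_cost: float, expected_payoff: float) -> bool:
    """An economically motivated adversary invests only when the
    expected payoff exceeds the cost of mounting the attack."""
    return expected_payoff > attacker_cost

scenarios = {
    "warez (one DVD)":        (500.0,        25.0),            # payoff capped by media cost
    "organized-crime fraud":  (50_000.0,     250_000.0),
    "nation-state espionage": (10_000_000.0, 5_000_000_000.0),  # generational R&D leap
}

for name, (cost, payoff) in scenarios.items():
    verdict = "worth it" if attack_is_rational(cost, payoff) else "not worth it"
    print(f"{name:<24} cost=${cost:>13,.0f}  payoff=${payoff:>15,.0f}  -> {verdict}")
```

The warez case falls out of rationality with even modest defensive spending, while the espionage payoff dwarfs almost any plausible defense budget - which is exactly why threat-appropriate strategies matter.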

Why, then, are we not advocating threat-appropriate strategies for different industries? The defense industrial base and DoD are starting to diverge from the rest of the world in this respect, but they are the exception. Our collective mindset needs to change, and we need to begin by educating other security professionals. Computer security defense intelligence is needed in every industry, to map the security needs of an organization to the economics of its adversaries. This is how security is achieved.

On Blaming The User

I've written previously on how blaming users is a flawed approach to security. Recently, in an interview with EDUCAUSE, Bruce Schneier opined:

Users are going to pick up their knowledge from their experiences. You can try to teach them stuff explicitly, but it's not going to stick in the same way that experiences do, and unfortunately, the experiences often don't match our reality, whether it's an experience of fear, an experience of an attack, or an experience of no attacks. Rather than focus on what can we do to educate users, we need to focus on building security that doesn't require educated users. That will be much more resilient, because while there are some educated users, there are a lot of noneducated users.... For example, my mother is never going to be a security maven—not because she's stupid but because it's not her area of expertise. And we can't expect it to be. If I say, "Look, Mom, you didn't know enough to do this and that, and you deserve to get hacked," I think that's blaming the victim....
(Emphasis mine)

Users aren't going to act securely. It's worth reiterating this message until the security industry finally decides to "get it" and start accepting responsibility for security problems, rather than passing the buck.