2010-11-22

Let's Enable Cloud Computing

I've been thinking a lot about "cloud computing" over the past few months, and I keep coming back to the same conclusion every time: the InfoSec community is inhibiting IT innovation by throwing up weak, largely unsubstantiated concerns over the security risks of "cloud computing." Overall, our industry's reaction smacks of "fear of the unknown." [1]

After some research[2][3][4][others], I've found that most security-related arguments against cloud computing qualitatively fall into one of the following risks, in no particular order:
  1. Context-hopping. A compromise of one virtual environment may facilitate access to another virtual environment. This is a technical risk.
  2. Supervisory control. A compromise in a virtual environment may lead to an "escape" from that environment to the supervisory process that controls it and other environments. Together with #1, these are also called "VM Escapes." This is a technical risk.
  3. Inferential data loss. Others could make inferences about your environment by inspecting their own (resources available, etc.). This is a technical risk.
  4. Change management. Virtual environments can be changed rapidly, meaning a possible loss of control. This is a procedural risk.
  5. Role confusion. Virtual environments, being controlled by different actors at different layers, may lead to confusion about important task execution (think: backups). This is a procedural risk.
  6. Forensics. Virtual environments may complicate or limit forensic investigations and e-discovery. This is a technical risk.
  7. *Control. In outsourced situations, loss of control of the underlying hardware and supervisory process externalizes certain risk-introducing actions like misconfigurations. It may also inhibit validation of controls at lower levels of the software or hardware, and it means outsiders have administrative access to the underlying environment. This is an implementation risk.
  8. *Data location. In a virtual environment, the location of data at any given point is uncertain, with possible legal or export control implications. This is an implementation risk.
  9. *Privacy. In outsourced scenarios, another entity dictates the conditions and depth of law enforcement cooperation. This is an implementation risk.
  10. *Continuity. Infrastructure hosted on another company's servers is at risk if that company folds or experiences other stability issues. This is an implementation risk.
I've marked the risks exclusive to outsourced cloud services with an asterisk.

Let's focus on those risks that impact all implementations of cloud computing; that is, items 1-6. To be blunt, the only risk that deserves special attention is [6] Forensics, because of the loss of the often-invaluable unallocated space on a disk or in memory. Every single one of the technical risks [1]-[3] is already accepted by organizations at the network layer: this includes VLANs, MPLS tagging, and other network abstractions we have been using for years. I've yet to hear an argument as to why we should treat virtualization on the host any differently than we do on the network for these risks. Procedural risks [4] and [5] already exist in production environments, and should already be managed by established processes and organizational responsibility. If these are issues for cloud computing, they're issues for the broader IT organization. If nothing else, they are neither unique to nor limited to the cloud.

Looking at the other half of our risks, again we see risks that are either already accepted or not specific to cloud computing, with the exception of privacy and possibly data location. Organizations with that concern, however, can work with their provider to manage the privacy risk, and I'm not convinced that data location is a real problem - after all, packets are routinely routed around the world irrespective of the export status of their content. In any case, it's likely that this is easily addressed as well. [7] and [10] are already accepted risks at the network layer for any organization with a WAN managed by an ISP.

In contrast, I'm going to provide a few reasons cloud computing could actually help security, if properly implemented.
  1. Intrusion detection. The supervisory process is a place where all network and host activity can be monitored from a single vantage point. This holds great promise for intrusion detection and behavioral analysis by exposing far more data than could be afforded previously (a rough sketch follows below).
  2. Compliance monitoring. User activity could easily be monitored across multiple systems and applications. Restrictions on where data resides could similarly be implemented across systems easily (think: DRM).
  3. Availability (yes, it is a security concern). Redundancy and rapid recovery become far more affordable.
That's just off the top of my head. Of course, with some careful thought and collaboration with virtual machine vendors, other opportunities are likely to arise. However, if our industry takes a "no" stance, in spite of the lack of any appreciable risk increase, we will be cut out of this evolution and lose valuable opportunities to turn cloud computing into a benefit rather than a cost from a security perspective.
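To make the intrusion detection point concrete, here is a minimal sketch of what monitoring from the supervisory vantage point might look like, written against the libvirt Python bindings for a local QEMU/KVM host. The connection URI, the polling interval, and the crude CPU-spike heuristic are all illustrative assumptions on my part, not a real detection method; the point is simply that the hypervisor can observe every guest without installing an agent inside any of them.

  # A rough sketch (assumptions noted above): poll each guest's cumulative
  # CPU time from the hypervisor and flag sudden spikes. A real system would
  # look at memory, disk, and network counters the same way.
  import time
  import libvirt  # pip install libvirt-python

  POLL_SECONDS = 10
  SPIKE_THRESHOLD = 0.90  # flag a guest using >90% of one vCPU over the interval

  def snapshot(conn):
      # Map each domain (guest) name to its cumulative CPU time in nanoseconds.
      return {dom.name(): dom.info()[4] for dom in conn.listAllDomains()}

  def main():
      conn = libvirt.openReadOnly('qemu:///system')  # hypothetical local KVM host
      if conn is None:
          raise SystemExit('could not connect to the hypervisor')
      prev = snapshot(conn)
      while True:
          time.sleep(POLL_SECONDS)
          curr = snapshot(conn)
          for name, cpu_ns in curr.items():
              usage = (cpu_ns - prev.get(name, cpu_ns)) / 1e9 / POLL_SECONDS
              if usage > SPIKE_THRESHOLD:
                  print('[alert] %s: %.0f%% CPU over the last %d seconds'
                        % (name, usage * 100, POLL_SECONDS))
          prev = curr

  if __name__ == '__main__':
      main()

The same vantage point exposes per-guest memory, disk, and network counters, which is exactly the kind of cross-environment visibility that is expensive to assemble from agents running inside each host.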

I find it appropriate that the iconic security object is a firewall, because this is how most security professionals think. The classic InfoSec mindset is that of a gateway: a veto-holding, non-voting member of the IT community. The correct role, in my opinion, is as an active participant in technical innovation, architecture, and the engineering process, making sure requirements are met in a way that balances risk with cost - not eliminating risk at extraordinary cost. Compliance and auditing are my key suspects in holding us back from this goal, but that's an argument I'll save for another day.

References
  1. CNET - Risks outweigh rewards, according to most professionals: http://news.cnet.com/8301-1001_3-20001921-92.html
  2. Lenny Zeltser's blog: http://blog.zeltser.com/post/1525310925/top-ten-cloud-security-risks
  3. InfoWorld, quoting Gartner: http://www.infoworld.com/d/security-central/gartner-seven-cloud-computing-security-risks-853
  4. NYTimes op-ed by Jonathan Zittrain: http://www.nytimes.com/2009/07/20/opinion/20zittrain.html?_r=1

2010-10-01

Why there shouldn't be a dot-secure

A few days ago, Cyberwar Chief Gen. Alexander proposed building a separate, secure network for the nation's critical infrastructure. The idea has since been widely derided by security specialists, but I wanted to throw in a few comments of my own.

Separation is an effective control in theory. But one chronic problem our industry suffers from is "ivory tower" syndrome, where decisions are divorced from operational reality. This proposal is an example.

SIPRNet is an example of where separation has effectively mitigated risk. The DoD's network is largely isolated and, as a result, avoids much of the risk that internet-connected networks experience. Notice how I said "mitigated," not "prevented." Security is about risk management, not risk elimination.

The problem with separation comes in the form of exceptions and enforcement. The more exceptions and the less enforcement, the less effective the separation, and the less risk it mitigates. The diminishing role of firewalls as an effective security device is a stark example of this.

Think of this in terms of "meatspace": the Great Wall of China, the Berlin Wall, the Maginot Line - all were colossal failures at their stated goals. Additionally, the massive investment of resources in their construction and maintenance detracted from other, more effective strategies, amplifying their detrimental impact. Yet island nations such as Britain, with a complete water barrier, have enjoyed the security benefits of that isolation throughout their history.

The general's proposal is a fool's errand. I would say the same about an isolation regime covering only the defense industrial base and the DoD, given the interconnectedness and overlap of those networks. What he proposes is a geometrically larger problem, with corresponding increases in the need for exceptions and the difficulty of enforcement. In my opinion, the resulting risk mitigation could not possibly justify the exceptional cost of such an approach. That amount of money would go much further toward mitigating risk if invested in broadly adopted and linked authentication mechanisms, secure DNS, counterintelligence, and cross-industry, threat-focused network defense.

2010-07-07

Why my Twitter Feed is Hilarious

...or, the yes-huh, nuh-uh of "cyberwar":

2010-06-06

Security Academia: Stop Using Worthless Data

I have a new litmus test that I use to help me vet the many intrusion detection related academic papers that come across my desk. I call it the "relevant data test." If your approach does not study relevant data, I will not read it. You may indeed have found a new way to leverage Hidden Markov Models in some neat heuristic, layered approach. I do not care. Novel or precise as your approach may be, the applicability of it is predicated upon the relevancy of your data. You may as well have found a new way to model the spotting of a banana as it ripens, if your data has nothing to do with intrusions in 2010.

It's time to wake up, folks. A 10-year-old data set for intrusion detection is utterly worthless, as your conclusions will be if you use it. I will never again read further than "benchmark KDD '99 intrusion data set." There is no faster way to communicate to an informed audience that you just don't understand intrusions than by analyzing data that is this old. Such attacks are generations behind those that modern network defenders face today. Understand this: you are solving the problems exemplified by your data set. If your data is 11 years old, so is your problem, and your solution is only as effective as that problem is relevant. Few, if any, attacks from 1999 are relevant today.

Make no mistake about it, I understand the researcher's lament! There is no modern pre-classified data set like those relics of careers gone by. Finding a good corpus is excruciatingly difficult. But in legitimate, scientific, empirical studies, this is absolutely no excuse for using irrelevant data. In fact, without first establishing the relevancy of ANY data set, even those used in the past, one's findings fall apart.

To pick but one example, in the last two issues of IEEE Transactions on Dependable and Secure Computing, two of the three IDS-related articles based their findings on data sets that are 7 or more years old. This is emblematic of why so much research is ignored by industry, and that which isn't often falls flat in practice. If I were an editor of that periodical, which I have been reading for quite some time, I would have rejected nearly every intrusion detection paper submitted in the last 3 years outright on this basis alone.

The data commonly considered the "gold standard" by academics has not been relevant for at least half a decade. Research done in that period whose findings relied on 2001 and prior data is not in any way conclusive, in my professional opinion.

2010-04-28

Spy Museum opens FUD exhibit

It is really bothersome to see a museum as popular and, until recently, as esteemed as the Spy Museum open an exhibit pandering to fear. In the two-sentence description, a "cyber attack" is compared to Pearl Harbor, immediately discrediting anything that might be contained therein. Disturbingly, this analogy is made by Richard Clarke, someone with serious pull in matters of national policy. Such ludicrous hyperbole may make the museum some serious coin, but it sets back understanding of real-life CNA and CNE issues, the balance between them, and their practical use in modern society and warfare. The result will be misplaced priorities among the decision-makers these visitors vote for, poorly invested research and defense dollars, and, if left unchecked, economic, military, and intelligence disadvantages on the world stage. Like the CNN-broadcast "Cyber Shockwave," the only thing missing from this exhibit is an F-35, Bruce Willis, and the "I'm a Mac" guy.

An exhibit headline, visible on the museum's website, reads "If cyber spies break America's security codes, could power lines turn into battle lines?" A better question is "who is the curator, a 16-year-old World-of-Warcraft gamer?" On second thought, even a pizza-faced teen would probably know this doesn't make one bit of sense.

Update
A description of the phear. Sadly, it's recommended as something to do. And believe.
It’s a frightening thought—and an exhibit that, for better or worse, is designed to imbue its viewers with the reality of that fear as well as educate them. This is the kind of thinking that led to an extra gift, tucked into the Spy Museum’s Field Guide to Asymmetrical Warfare and passed out at the reception: a flash drive.

(Emphasis my own)

2009-12-31

TL;DNT: Academia and industry are both failing

(Too long, did not tweet) I think this is more applicable to my personal blog on industry and academia anyway.

On the cusp of 2010, the state of information security in our society can only be described as a mess. I've come to the conclusion that my career path will now and forever be an effort to bring more of the science of computing to security in practice (severely lacking now), and more of the reality of security to academia (also severely lacking now). This is at the heart of our mess, and it will also be the solution to it. Few to no tenure-track professors at accredited universities have real-world experience.

Academic papers are written around decade-old problems, using decade-old data sets, demonstrating a decade-old mindset and an ignorance of the volatility of security in practice. There are few models - even fewer that are relevant - and little agreement on terminology as fundamental as risk, threat, and vulnerability.

Industry makes risk decisions with scant or no objective data, builds models on subjective criteria, suffers from physics envy, and is often totally incapable of performing analysis that adheres to the scientific method. In some cases, industry still fails to recognize that security is risk management, as evidenced by the all-too-common requests for ROI to justify security spending. I've seen nearly every word in the English language prefixed with "cyber-" in the last 24 months, simply because it's a buzzword. It's so overused that I cringe the few times I have to say it, and the hype risks an overcorrection in the coming years that will back-burner the issues at hand, or water them down with gimmicks and sales pitches, to the point where serious concerns in need of resolution are met with the eye-rolling more appropriately reserved for notions such as "cyber Katrina" or "cyber 9/11."

The US now has a "cyber security czar," virtually ensuring the failure of public policy just as we've seen with most other "czars" (how's that war on drugs going?). Policymakers don't realize that electronic espionage is just as serious as, if not more serious than, traditional methods of espionage. No agreement has been reached on how conflicts (espionage and outright aggression) escalate beyond the internet into the real world, despite their having very serious real-world implications in and of themselves. We are not holding to account other countries that tacitly or explicitly permit attacks against our critical infrastructure, ensuring those attacks continue because there is no risk attached to them. Open dialogue is taking place, but only on the most greatly exaggerated, dated, or unlikely risks, reducing national information security strategy to the same level of effectiveness as airline security.

I normally don't like rants without solutions, so for that I apologize. Maybe I'm just in a bad mood. At the risk of reducing all these problems to one oversimplified solution, I strongly feel that bringing academia and industry closer together in how to approach information security issues is the only way to begin to fix most of these problems.

2009-12-17

A song for the season

Enjoy. Thanks to my coworker Roger for the assist.
On the 12th day of Christmas, my CIRT did find for me...
12 users clicking
11 hackers hacking
10 sites cross-scripting
9 drives receiving
8 gigs a-taken
7 widgets stolen
6 passwords broken
5 forged emails,
4 PDFs,
3 Word docs,
2 hyperlinks,
... and a hole in Adobe's new Player