Let's Enable Cloud Computing

I've been thinking a lot about "cloud computing" over the past few months, and I keep coming back to the same conclusion every time: the InfoSec community is inhibiting IT innovation by throwing up weak, largely unsubstantiated concerns over the security risks of "cloud computing." Overall, our industry's reaction smacks of "fear of the unknown." [1]

After some research[2][3][4][others], I've found that most security-related arguments against cloud computing qualitatively fall into one of the following risks, in no particular order:
  1. Context-hopping. A compromise of one virtual environment may facilitate access to another virtual environment. This is a technical risk.
  2. Supervisory control. A compromise in a virtual environment may lead to an "escape" from that environment to the supervisory process that controls it and other environments. Together with #1, these are also called "VM Escapes." This is a technical risk.
  3. Inferential data loss. Others could make inferences about your environment by inspecting their own (resources available, etc.). This is a technical risk.
  4. Change management. Virtual environments can be changed rapidly, meaning a possible loss of control. This is a procedural risk.
  5. Role confusion. Virtual environments, being controlled by different actors at different layers, may lead to confusion about important task execution (think: backups). This is a procedural risk.
  6. Forensics. Virtual environments may complicate or limit forensic investigations and e-discovery. This is a technical risk.
  7. *Control. In outsourced situations, loss of control of the underlying hardware and supervisory process externalizes certain risk-introducing actions like misconfigurations. It also may inhibit validation of controls at lower levels of the software or hardware, and outsiders have administrative access to the underlying environment. This is an implementation risk.
  8. *Data location. In a virtual environment, the location of data at any given point is uncertain, with possible legal or export control implications. This is an implementation risk.
  9. *Privacy. In outsourced scenarios, another entity dictates the conditions and depth of law enforcement cooperation. This is an implementation risk.
  10. *Continuity. Hosting infrastructure on a company's servers could be at risk if the company folds or experiences other stability issues. This is an implementation risk.
I've marked the risks exclusive to outsourced cloud services with an asterisk.
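Of the technical risks, [3] inferential data loss is probably the least familiar, so here is a deliberately simplified sketch of the idea. Everything in it is hypothetical and illustrative - no real hypervisor or cloud API is modeled - but it shows how a tenant could recover a neighbor's busy/idle pattern purely from timing its own workload on a shared resource:

```python
# Toy model of risk [3], inferential data loss: a co-tenant inferring a
# neighbor's activity from contention on a shared resource. The numbers
# and the linear latency model are illustrative assumptions, not drawn
# from any real virtualization platform.

def observed_latency(base_ms, neighbor_load):
    """Latency our tenant measures for its own fixed workload; it rises
    when a co-tenant consumes the shared resource (load in [0, 1])."""
    return base_ms * (1 + neighbor_load)

def infer_neighbor_busy(samples, base_ms, threshold=1.5):
    """Infer co-tenant activity purely from our own measurements:
    any sample well above baseline suggests a busy neighbor."""
    return [lat > base_ms * threshold for lat in samples]

# The neighbor's (secret) activity pattern over six intervals.
secret_pattern = [0.0, 0.0, 1.0, 1.0, 0.0, 1.0]
samples = [observed_latency(10.0, load) for load in secret_pattern]

inferred = infer_neighbor_busy(samples, base_ms=10.0)
print(inferred)  # recovers the busy/idle pattern with no direct access
```

The point is not that this exact attack works anywhere; it is that the inference requires nothing beyond measurements the attacker is entitled to make of their own environment - which is why it is the same class of risk we already accept with shared network infrastructure.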

Let's focus on those risks that impact all implementations of cloud computing; that is, items 1-6. To be blunt, the only risk that deserves special attention is [6] Forensics, because of the loss of the often-invaluable unallocated space on a disk or in memory. Every single one of the technical risks [1]-[3] is already accepted by organizations at the network layer: this includes VLANs, MPLS tagging, and other network abstractions we have been using for years. I've yet to hear an argument as to why we should treat virtualization on the host any differently than we do on the network for these risks. Procedural risks [4] and [5] already exist in production environments, and should already be managed by established processes and organizational responsibility. If these are issues for cloud computing, they're issues for the broader IT organization. At the very least, they are neither unique nor limited to the cloud.

Looking at the other half of our risks, again we see risks either already accepted or not specific to cloud computing, with the exception of privacy and possibly data location. Organizations that have this concern, however, can easily work with their provider to manage the privacy risk, and I'm not convinced that the data location issue is a problem - after all, packets are routinely routed around the world irrespective of the export status of their content. In any case, it's likely that this is easily addressed as well. [7] and [10] are already an accepted risk at the network layer by any organization with a WAN managed by an ISP.

In contrast, I'm going to provide a few reasons cloud computing could actually help security, if properly implemented.
  1. Intrusion detection. The supervisory process is a place where all network and host activity can be monitored from a single vantage point. This holds great promise for intrusion detection and behavioral analysis by exposing far more data than could be afforded previously.
  2. Compliance monitoring. User activity could easily be monitored across multiple systems and applications. Restrictions on where data resides could similarly be implemented across systems easily (think: DRM).
  3. Availability (yes, it is a security concern). Redundancy and rapid recovery become far more affordable.
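To make benefit 1 concrete: because the supervisory layer can observe every guest, cross-VM anomaly detection collapses into a single-vantage-point problem. The sketch below is a hypothetical, stdlib-only illustration - the VM names, the counter, and the z-score heuristic are all assumptions, and a real deployment would pull equivalent counters from the hypervisor's management API:

```python
# Sketch of benefit 1: flagging a misbehaving guest by comparing each
# VM's outbound connection count against the whole population visible
# from the supervisory layer. Names and numbers are hypothetical.
from statistics import mean, pstdev

def flag_outliers(conn_counts, z=1.5):
    """Return the VMs whose outbound connection count deviates sharply
    (more than z population standard deviations) above the mean."""
    mu = mean(conn_counts.values())
    sigma = pstdev(conn_counts.values()) or 1.0  # avoid divide-by-zero
    return {vm for vm, n in conn_counts.items() if (n - mu) / sigma > z}

# Per-VM outbound connection counts collected at the supervisory layer.
counts = {"web-01": 40, "web-02": 38, "db-01": 12,
          "web-03": 41, "mail-01": 950}
print(flag_outliers(counts))  # {'mail-01'}
```

No agent inside any guest is required, and a compromised guest cannot tamper with the measurement - which is exactly the property host-based detection has always lacked.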
That's just off the top of my head. Of course, with some careful thought and collaboration with virtual machine vendors, other opportunities are likely to arise. However, if our industry takes a "no" stance, in spite of the lack of any appreciable risk increase, we will be cut out of this evolution and lose valuable opportunities to turn cloud computing into a benefit rather than a cost from a security perspective.

I find it appropriate that the iconic security object is a firewall, because this is how most security professionals think. Classic InfoSec mindset is as a gateway; a veto-holding non-voting member of the IT community. The correct role, in my opinion, is as an active participant in technical innovation, architecture, and the engineering process, making sure requirements are met in a way that balances risk with cost - not eliminating risk at extraordinary cost. Compliance and auditing are my key suspects in holding us back from this goal, but that's an argument I'll save for another day.

  1. C|Net - Risks outweigh rewards according to most professionals: http://news.cnet.com/8301-1001_3-20001921-92.html
  2. Lenny Zeltser's blog: http://blog.zeltser.com/post/1525310925/top-ten-cloud-security-risks
  3. Infoworld, quoting Gartner: http://www.infoworld.com/d/security-central/gartner-seven-cloud-computing-security-risks-853
  4. NYTimes Op-Ed by Jonathan Zittrain: http://www.nytimes.com/2009/07/20/opinion/20zittrain.html?_r=1


Why there shouldn't be a dot-secure

A few days ago, Cyberwar Chief Gen. Alexander proposed building a separate, secure network for the nation's critical infrastructure. By now, this has been widely derided by many security specialists, but I wanted to throw my hat in the ring with a few comments.

Separation is an effective control in theory. But one chronic problem our industry suffers from is "ivory tower" syndrome: decisions divorced from operational reality. This proposal is an example.

SIPRnet is an example of where separation has effectively mitigated risk. The DoD's classified network is largely isolated and, as a result, has mitigated much of the risk that internet-connected networks experience. Notice how I said "mitigated," not "prevented." Security is about risk management, not risk elimination.

The problem with separation comes in the form of exceptions and enforcement. The more exceptions, and the less enforcement, the less effective the separation, and the less risk mitigation. The diminishing role of firewalls as an effective security device is a stark example of this.

Think of this in terms of "meatspace": the Great Wall of China, the Berlin Wall, the Maginot Line - all were colossal failures for their stated goals. Additionally, the massive investment of resources for construction and maintenance detracted from other, more effective strategies, amplifying their detrimental impact. Yet an island nation such as Britain, with its complete water barrier, has enjoyed the security benefits of isolation throughout its history.

The general's proposal is a fool's errand. I would say the same about an isolation regime only for the defense industrial base and the DoD, given the interconnectedness and overlap of those networks. What he proposes is a geometrically larger problem, with corresponding increases in the need for exceptions and difficulty of enforcement. The exceptional cost of such an approach could not possibly justify the resultant risk mitigation, in my opinion. That amount of money would go much further in mitigating risk by investing in broadly adopted and linked authentication mechanisms, secure DNS, counterintelligence, and cross-industry, threat-focused network defense.


Why my Twitter Feed is Hilarious

...or, the yes-huh, nuh-uh of "cyberwar":


Security Academia: Stop Using Worthless Data

I have a new litmus test that I use to help me vet the many intrusion detection related academic papers that come across my desk. I call it the "relevant data test." If your approach does not study relevant data, I will not read it. You may indeed have found a new way to leverage Hidden Markov Models in some neat heuristic, layered approach. I do not care. Novel or precise as your approach may be, the applicability of it is predicated upon the relevancy of your data. You may as well have found a new way to model the spotting of a banana as it ripens, if your data has nothing to do with intrusions in 2010.

It's time to wake up, folks. A 10-year-old data set for intrusion detection is utterly worthless, as your conclusions will be if you use it. I will never again read further than "benchmark KDD '99 intrusion data set." There is no faster way to communicate to an informed audience that you just don't understand intrusions than by analyzing data that is this old. Such attacks are generations behind those that modern network defenders face today. Understand this: you are solving the problems exemplified by your data set. If your data is 11 years old, so is your problem, and your solution is only as effective as that problem is relevant. Few, if any, attacks from 1999 are relevant today.

Make no mistake about it, I understand the researcher's lament! There is no modern pre-classified data set like those relics of careers gone by. Finding a good corpus is excruciatingly difficult. But in legitimate, scientific, empirical studies, this is absolutely no excuse for using irrelevant data. In fact, without first establishing the relevancy of ANY data set, even those used in the past, one's findings fall apart.

To pick but one example, in the last two issues of IEEE Transactions on Dependable and Secure Computing, two of the three IDS-related articles based their findings on data sets that are 7 or more years old. This is emblematic of why so much research is ignored by industry, and that which isn't often falls flat in practice. If I were an editor of that periodical, which I have been reading for quite some time, I would have rejected nearly every intrusion detection paper submitted in the last 3 years outright on this basis alone.

The data commonly considered the "gold standard" by academics has not been relevant for at least half a decade. Research done in that period whose findings relied on 2001 and prior data is not in any way conclusive, in my professional opinion.


Spy Museum opens FUD exhibit

It is really bothersome to see a museum as popular and, until recently, esteemed as the Spy Museum open an exhibit pandering to fear. In the two-sentence description, a "cyber attack" is compared to Pearl Harbor, immediately discrediting anything that might be contained therein. Disturbingly, this analogy is made by Richard Clarke, someone with serious pull in matters of national policy. Such ludicrous hyperbole may make the museum some serious coin, but it sets back understanding of real-life CNA and CNE issues, the balance between them, and their practical use in modern society and warfare. The result will be misplaced priorities among the decision-makers these visitors vote for, poorly invested research and defense dollars, and, if left unchecked, economic, military, and intelligence disadvantages on the world stage. Like the CNN-broadcast "Cyber Shockwave," the only thing missing from this exhibit is an F-35, Bruce Willis, and the "I'm a Mac" guy.

An exhibit headline, visible on the museum's website, reads "If cyber spies break America's security codes, could power lines turn into battle lines?" A better question is "who is the curator, a 16-year-old World-of-Warcraft gamer?" On second thought, even a pizza-faced teen would probably know this doesn't make one bit of sense.

A description of the phear. Sadly, it's recommended as something to do. And believe.
It’s a frightening thought—and an exhibit that, for better or worse, is designed to imbue its viewers with the reality of that fear as well as educate them. This is the kind of thinking that led to an extra gift, tucked into the Spy Museum’s Field Guide to Asymmetrical Warfare and passed out at the reception: a flash drive.

(Emphasis my own)