2008-12-26

The best foreword I've (yet) read

Being of a scientific and engineering mind, I love me some empirical data. This is why it's a crying shame that I've taken so long to get around to Andrew Jaquith's Security Metrics [Addison-Wesley, 2007]. I have owned the book for a year, and have only now completed the foreword by Daniel E. Geer, Jr. Sc.D.

This is the best foreword I've read to date. It alone has changed how I think about metrics that measure security. If you never own this book or read it to completion, read the foreword. At only 4 pages, it is a concise and fundamental articulation of how to think about quantitatively measuring security. If you haven't read it, stop by a bookstore and check it out when you can spare 5 minutes. You'll be happy you did.

2008-12-08

EWD on Information Security

Last week, Slashdot featured EWD1036-11 (a handwritten manuscript by Edsger W. Dijkstra) titled On the cruelty of really teaching computer science. Besides being fantastic reading for any computer scientist, this 1988 essay finds Dijkstra inadvertently making several points that are especially salient to the security field and worth pointing out here.
[Lines of code] is a very costly measuring unit because it encourages the writing of insipid code, but today I am less interested in how foolish a unit it is from even a pure business point of view. My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.
Could it be that our software development process is fundamentally flawed? That vulnerabilities are merely an artifact, or symptom, of a problem that transcends all software engineering? In his essay, Dijkstra insists upon building code guided by formal mathematical proof, as such code is correct by design. Does this sound familiar? Perhaps like "secure by design?" This is a grave and pessimistic evaluation of the state of software development that still holds a great deal of merit two decades after it was written. Today, we see Dijkstra's diagnosis painfully manifested as viruses, worms, hackers, computer network exploitation, and the resultant loss of intellectual property.

Later, Dijkstra enumerates the objections to his proposed approach of pairing development with formal mathematical proof. Again intersecting the security discipline, he writes:
the business community, which, having been sold to the idea that computers would make life easier, is mentally unprepared to accept that they only solve the easier problems at the price of creating much harder ones.
And thus, on December 2, 1988 - almost exactly twenty years ago to the day as I write this - Edsger W. Dijkstra defines the source of computer security problems by reiterating the "law" of unintended consequences. Accepting this axiom, security practitioners focus on identifying the harder problems resulting from "easy," mathematically imprecise, logically dubious solutions upon which the bulk of our computing infrastructure operates. I feel very strongly that this one statement scopes our discipline better than any other that has yet been made - so strongly that it is worth re-evaluating what information security is.

Security is the identification and mitigation of the unintended consequences of computer system use that result in the compromise of the confidentiality, integrity, or availability of said system or its constituent data.

2008-12-06

Life Imitating Computers: The Evolution of Human Thinking

I have been working on this short essay for a long time, sorting out my thoughts on the issue and trying to convey what I'm thinking in a clear and concise manner. I sincerely hope that you enjoy it, find it insightful, and do not think of me as a ranting lunatic after reading it.

For thousands of years, mankind has relied on oral history to pass along anecdotes, stories of our history, lessons learned, and any other bit of collective knowledge that societies felt necessary to preserve in order to facilitate the survival of the species - explicitly or otherwise. It is the recognition of this benefit that has largely enabled humans to thrive in societies which wisely chose the knowledge to pass along, and has led to the creation of such constructs as "conventional wisdom," "wives' tales," fables, stories, and even religion. While it was initially feared as challenging this status quo of knowledge transfer, Gutenberg's invention of the printing press around 1439 was an amplification of these constructs, an argument reinforced by the first book to be pressed - the Bible - and proven correct over time. This invention was the mother of all evolutionary inventions in man's history up to that time.

While the pairing of the printing press and widespread literacy opened the door of knowledge to many more of our species, the spread of and access to this information was still spotty and slow. It had been, and still was, necessary for mankind to keep in our collective heads, for daily use, much of the knowledge needed to process information and analyze various aspects of our own lives, surroundings, and society. This was the driving need for the continuity of our legacy constructs: while we could gain knowledge and share it far more easily, to leverage it in a practical sense we had to be able to keep that information in our heads. We had evolved through natural selection to easily store knowledge in terms of these constructs, and thus our conventional mechanisms for knowledge transfer between generations survived, and even thrived, under this new regime of recordation.

Computer systems also have a problem of information access, which various components have been developed to address. ENIAC, and early computers like it, had to store the information being processed in the "processor" itself. They had only one type of memory - essentially, a flip-flop - and this single mechanism had to serve for every type of data. This limited computation to whatever could be crammed into that expensive memory. Later, the concept of a slower "core" memory unit was developed. Data that had to be operated upon immediately still had to exist in registers (memory) on the processor itself, while data that did not could be swapped out to the slower, larger "core" memory. Modern computers have many levels of memory, from registers that operate at the speed of the processor, to multi-layer on-chip cache from which the registers are populated, to RAM which holds necessary but less-immediately accessed data, to disk which holds infrequently accessed data. Along with evolutions in mechanisms for storing data have come evolutions in how to most effectively leverage them, including predictive algorithms for caching and swapping data from the slower to the faster storage devices to minimize execution delays due to memory access.
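To make the analogy concrete, here's a minimal, purely illustrative sketch in Python of the least-recently-used policy that underlies many of these caching and swapping schemes (the class and the toy "disk" contents are my own invention for this post, nothing more):

from collections import OrderedDict

class LRUCache:
    """A tiny two-tier memory hierarchy: a fixed-size fast tier backed by a
    slower store, evicting the least recently used item when space runs out."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity            # size of the fast tier (registers/cache/RAM)
        self.backing_store = backing_store  # the slow tier (core memory/disk)
        self.cache = OrderedDict()

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # a hit: mark as most recently used
            return self.cache[key]
        value = self.backing_store[key]     # a miss: fetch from the slow tier
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict ("forget") the least recently used item
        self.cache[key] = value
        return value

# Frequently used items stay close at hand; the rest get swapped out.
disk = {"patch_schedule": "...", "dns_runbook": "...", "old_meeting_notes": "..."}
fast = LRUCache(capacity=2, backing_store=disk)
fast.get("dns_runbook")   # miss: loaded from the slow tier
fast.get("dns_runbook")   # hit: served from the fast tier

Keep that eviction rule in mind; the human analog I describe next works much the same way.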

Like the development of slower, larger memory to support data computation in our modern computers, we have collectively invented this revolutionary tool known as the internet. As the ready availability of data to mankind increases, we are forced to rely less and less on our conventional (less accurate) mental constructs, just as computers came to keep only ever-smaller portions of the data and instructions being processed at ready access to the CPU. As a result of all of this, in the case of computers as well as mankind, the set of available information has increased exponentially. When performing tasks, we now have a wealth of available information that doesn't have to be at the tip of our fingers, or on the top of our brain, in order to be processed in a reasonable period of time. We read things on the internet, perform research in a few minutes, and - if necessary - remember it to perform a task more quickly the next time. We may "swap out", or forget, something that we previously needed on a regular basis with confidence that if we need it again later, we will be able to find it. This is a rudimentary memory management algorithm, adapted to human nature.

All of this raises some important questions that mankind needs to reckon with in the not-too-distant future. How might this revolution in the very essence of our thinking change our constructs? In what ways will fictional literature be impacted? Will we still tell our children stories? How will religion survive? Can computer memory management techniques be adapted by psychologists to train humans to more effectively leverage new tools like the internet? Is this evolution leaving us vulnerable should we somehow "lose" this tool through war or a regression in civilization like that which followed the fall of the Roman Empire? These questions will be answered, implicitly or explicitly, in coming generations. How we answer these questions and resolve the inevitable conflict between question and answer will shape no less than the future of our species. It is essential that we recognize the existence and significance of these questions now, if we have hope of answering them as a civilized society, rather than through war or the deterioration of our hard-won civilization.

Research that recognizes how technology is fundamentally changing ourselves and our society is now being highlighted by mainstream media outlets. Recently, USA Today published an article that discusses technology's impact on our social interactions. Closer to the point I make above is this article discussing how surfing the internet alters how one thinks. The latter seems to imply that this model of cognition will be more efficient than our legacy constructs, suggesting that those who are able to leverage it will be ahead of others intellectually and socially in future generations.

2008-12-02

The Importance of Vocabulary

A brief essay I wrote for the SANS Computer Forensics, Investigation, and Response blog on language - let's see if they post it :-).

Over the past few days, a discussion has been forming on the GCFA mailing list regarding the use of the word evidence. Specifically, how appropriate is it to call a hard drive (or more logical construct such as a file) "evidence" when it may turn out that the object will serve no purpose in conclusively resolving an investigation? Is it evidence, or is another word more apropos?

Reading the dialogue reminded me once again of the importance of vocabulary, particularly in technical fields where clear, precise communication is an operational imperative rather than merely a creative expression or embellishment. While it may seem academic, mutual agreement on the use of these critical terms serves as the basis for communication in computer forensics. The more clearly defined our language is, the more effective and efficient our communications will be. Even in the first person, definitions carry great significance, influencing no less than the very way that we think. As George Orwell said, if thought corrupts language, language can also corrupt thought. The importance of this feedback loop cannot be overstated - clarity in language fosters a deeper clarity of thought.

Our fields of study, still largely in their infancy with respect to other scientific fields, face a significant challenge in the disambiguation of terminology. Various leading texts provide differing and sometimes conflicting word definitions & usage - even for basics such as what an 'incident' is. Media coverage of security compromises often overlooks the significant differences between CNA ("taking out the DNS infrastructure") and CNE ("industrial espionage"). Our vendors are not exactly helping the situation either - as a high-profile example, see Microsoft's Threat Modeling, which is really risk modeling. It is easy to see that we, as professionals in our young field, wield great power in shaping the future through contributions to our common language where it is still unclear or improperly used. I encourage readers to participate in these discussions whenever they arise. Diversity of opinion and vigorous dialogue are necessary to solve these foundational problems and mature our industry.

As to the definition of the word evidence, I'll leave that to a better discussion forum than a blog.

2008-11-25

What security can learn from the recent financial crisis

In the most recent Scientific American Perspectives, the editors lament the state of our economic system and place a great deal of blame for it on software models. In their words, risk management models should serve only as aids, not substitutes, for the human factor. While this is certainly not the only example of the perils of algorithms replacing analysts, it is perhaps the most poignant.

In the security industry, software vendors and managers have been pushing hard for years to supplant analysts with software -- the theory is that automated software can do just as good a job, and after all, labor in this day and age is expensive. The danger, of course, is that the security field is far less mature than the study of capitalism. Instead of dangerously repurposing algorithms originally designed for unrelated fields of physics and mathematics, though, our industry employs algorithms that never had a connection to any causal relationship in the first place. Indeed, the end state of "security" has been elusive even in the most anecdotal of terms; we are a long way away from quantitative methods to define the risk management that is our job. Yet software vendors are happy to hand-wave their way through a sale in an effort to provide what amounts to a false sense of security riding on principles that are often far enough from empirically proven that they are better described as "faith" than "science," even though they are presented as the latter. Management, without the requisite technical skill set or the trust of their subordinates to identify the b.s., is too often eager to buy into the hype.

The information security industry as a whole would be wise to learn from this painful lesson in economics: technologies are tools, to be used by skilled analysts to digest large and complicated data sets and produce actionable intelligence. Analysts should drive the tools; the tools should not drive the analysts. Otherwise, you find yourself dangerously reliant on inflexible tools incapable of identifying the larger systemic problems, and the only means of identifying a problem becomes the collapse of the entire system - in our case, a catastrophic compromise of security.

2008-11-14

Mirror's Edge

This sounds like a really cool game (warning: annoying flash and sound inside) with an intriguing plot:

"Once this city used to pulse with energy; dirty and dangerous, but alive and wonderful. Now it is something else. The changes came slowly at first. Most did not realize or did not care, and accepted them. They chose a comfortable life. Some didn't... they became our clients.

"In a city where information is heavily monitored, agile couriers called runners transport sensitive data away from prying eyes. In this seemingly utopian paradise, a crime has been committed, your sister has been framed and now you are being hunted."

And besides, that's a really freaking cool tat. Even if it is anime-ink. I wonder if the key to beating the game is realizing the effectiveness of hiding in plain sight. I doubt it. That would be a pretty crappy video game. It might make for a pretty sweet aspect, though. I'm not a huge gamer, but I do have myself an Xbox 360. I may have to check this out.

The game is an extreme example, but nevertheless a potent reminder that hiding data isn't always bad... a notion utterly lost on many in the general public. Any awareness is good awareness.

2008-11-12

Solve the right problem with NAC


NAC is an important technology. It's neat. It's cool. But it's expensive. And, while many Cisco or networking zealots may argue to the contrary, it's not always necessary.

NAC prevents unauthorized computers from participating in a network. This is good for environments that your IT staff doesn't control but must still permit a certain level of access to. VPNs, of course, are one of the most common examples. They also happen to be one of the simplest use cases for most administrators.

However, in corporate environments where assets are owned by the same entity that controls the network, NAC shouldn't be a replacement for good software management. With a few notable exceptions, if you can implement NAC, you can typically implement good software management on your endpoints.

NAC is also not an appropriate binary access control mechanism in most cases. If the primary goal is to restrict network access to computers you own, this is a site security problem. Naturally, if you have contractors or customers who require access to your network, there is a role for NAC to play. The right answer here is to define your security requirements in general terms, articulating, for each aspect, the decision point between an IT concern and a physical security concern.

Use NAC. But do it with a clear understanding of your goals, and apply it just like you would any other technology: where it's appropriate. IT solutions are slick, but they're not always the best option available.

Image from http://download.101com.com/wa-mcv/spo/images/april7/monitor.gif

2008-11-06

Why the Obama-McCain Hack may be bigger than you think

A recent Newsweek article revealing that both US presidential campaigns were compromised by 'a foreign entity or source' is getting a lot of attention. The article ominously quotes the FBI: "You have a problem way bigger than what you understand." Boy, they aren't kidding. Let me explain a parallel to you, since the connection is far from obvious.

You have probably read news reports about defense-related data on unclassified networks being targeted by actors that seem to be abroad. Working professionals in the defense infosec industry understand the logic from the perspective of an adversary: target technology while it is being developed on unclassified networks - a necessity for collaboration - because once the military receives the technology, these details become harder to get as some are classified or more closely held. There is an asymmetry between information sourced at contractors (which tends to be unclassified) and the very same type of information sourced within the government (which tends to be classified). This is one of the not-secret, but not-widely-known dirty little truths about our classification system.

Here, we see the same tactic with a wholly different kind of information. Policy decisions being made by the Obama and McCain camps during election season are likely to translate into official US Government policy once one of them is elected, at least insofar as election promises are upheld. Some of these details are likely going to be held close to the vest, and almost certainly classified. Naturally, while policies are under development in a not-yet-elected campaign office, they are unclassified, with custodians (campaign workers) unqualified or uninterested in protecting them - except possibly from the other candidate. This is a brilliant application of the same tactics available to adversaries for acquiring military technology, perfectly timed for the only period in which such an attack could successfully compromise the confidentiality of future policy stances. This parallel may have significant implications; exactly what those are depends on the viewpoint of the reader, but the alignment is no less than 'quite interesting.'

If there is a silver lining here, it's that Barack Obama's office now has a first-hand understanding of just how severely questions of information security and electronic espionage have the potential to impact national security. Let's hope they remember that when deciding on IT and government-wide security strategies for the next 4 years.

2008-10-24

In Case of Vulnerability, Do Not Discard Brain

I saw this while reading Bruce Schneier's blog last night and felt it entirely appropriate to the recent reaction of the security community to flaws in major pieces of software.

First, there was Dan Kaminsky's DNS flaw. EVERYBODY PANIC! Not only was the community in an uproar about how this could be the end of the Internet as we know it, but adoration of Kaminsky was rampant, with some claiming he even changed the future of internet security by being the umpteenth person to practice responsible disclosure. The flaw was serious. Swift reaction by administrators was in fact necessary to stymie widespread problems. But the panic induced and the irrational aspects of the response are not much different from US citizens immediately surrendering their civil liberties post-9/11, somehow thinking this would prevent another terrorist attack. The one unusual example set by Dan Kaminsky was his rational approach to a serious vulnerability. That, my friends, is what is lacking in our community today.

Case in point: the most recent Microsoft RPC vulnerability and corresponding out-of-cycle patch: MS08-067. Should we be concerned about this? Absolutely. Does PoC code exist? Yes - and we know our Antivirus vendors won't detect it because they feel proof-of-concept code is insignificant rubbish. Oh, woe is the security analyst! Even the venerable SANS Internet Storm Center is in a tizzy:

It is expected that with the release of the update, much more of the hacker community will become aware of how to exploit this and create a major worm outbreak or botnet activity.

Look, a swift response is necessary, and for those responsible for software patching this is most certainly an all-hands-on-deck scenario. I maintain, though, that this is mostly a concern for home users. In light of the Nimdas, Code Reds, Slammers, and Blasters of the past, companies have built and honed their software patching infrastructure - especially with respect to Microsoft products. And once our Anti-Virus masters deem the proof-of-concept code "in the wild", when their job becomes easy, I'm sure we'll get detection for our AV products. Distribution of virus definitions is also a mostly-solved problem for the enterprise. The only folks who need to worry are those who work in environments where management has decided that these infrastructure components are not important, and therefore problems still exist despite a litany of products available to address them... and Mom and Dad, of course.

The security community needs to make sure the appropriate urgency is communicated to individuals responsible for infrastructure components, and keep its ear to the ground, but this is not a time for panic. Panic leads to irrational decision-making, like slamming out patches without adequate testing on mission-critical systems, and to reduced focus on sophisticated adversaries in favor of these broader issues which, in the end, will most likely have a smaller impact in terms of net loss if handled with grace.

In case of vulnerability, do not discard brain.

2008-10-17

Antivirus is failing; long live antivirus

From the most recent SANS Newsbites:

--Security Suite Vendors Question Secunia Study
(October 15, 2008)
Makers of antivirus products and security suites are calling into question the validity of a recent study from Secunia. The study tested a dozen security suites against "300 exploits targeting vulnerabilities in various high-end, high-profile programs" and found the highest scoring suite caught just 64 of the 300 exploits. Some of the companies whose products were tested say that just one aspect of their products was examined. Others whose products were not included called the study a publicity stunt.
http://www.darkreading.com/document.asp?doc_id=166027&f_src=drdaily
http://www.theregister.co.uk/2008/10/15/secunia_tests_backlash/
[Editor's Note (Skoudis): Designing a thorough and fair test regimen is quite difficult, and running the suite of tests against increasingly complex products is very time consuming and expensive. Matt Carpenter and I did this in 2007 for seven endpoint security products, and it consumed two months of our time. Whenever you see a test report of security products, make sure you look carefully at the description of the test methodology and testbed to determine what they measured and how. No test suite is perfect, but some better reflect operational environments than others.]


I took a look at Secunia's test methodology. They cover a broad range of exploits used by sophisticated adversaries in modern highly-targeted attacks. Their results for particular malicious files & attack types I've seen reflect my own experiences at a large enterprise CIRT, defending against highly-targeted attacks designed for the explicit purpose of compromising proprietary information. Not surprisingly, their resulting detection rate reflects my experiences as well. While the proportions used by Secunia may not have fairly reflected the universe of malware that's "in the wild" today, I don't care. There's no point in comparing detection rates for Blaster, Slammer, and other previously-solved problems. What I care about are the serious threats; the malware that's being used against carefully-selected targets, and that's working. The malware that only has to change by less than 5% (as measured by fuzzy hashing a la ssdeep - see the short sketch at the end of this post) to evade detection by leading vendors. That's where the adversaries' focus is today, it's where we need anti-virus the most, and it's where anti-virus is failing us. Naturally, their conclusion is spot-on:

These results clearly show that the major security vendors do not focus on vulnerabilities. Instead, they have a much more traditional approach, which leaves their customers exposed to new malware exploiting vulnerabilities.

Kudos to Secunia for standing up to the industry.

Most of the anti-virus vendors are fighting hard to maintain a status quo which no longer reflects reality. If you'll recall, they lashed out against Skoudis and Carpenter when their tests led to similar conclusions about the state of the AV industry almost exactly a year ago. They would be better off putting their resources into product engineering to address 21st-century threats than into marketing and PR.
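A side note on the fuzzy-hashing point above: here's the kind of comparison I mean, as a minimal sketch in Python assuming the python-ssdeep bindings are installed (the file names are hypothetical). Two samples that differ by only a few percent still score as near-duplicates under ssdeep, even when their cryptographic hashes - and too often their AV signature matches - do not.

import ssdeep  # python-ssdeep bindings around the ssdeep fuzzy-hashing library

# Fuzzy-hash a sample we already know is malicious and one that just arrived.
known_bad = ssdeep.hash_from_file("dropper_seen_last_month.doc")
new_sample = ssdeep.hash_from_file("dropper_seen_today.doc")

# compare() returns a 0-100 similarity score. A high score means the new
# sample is only a slight variation of the known one, even though its MD5/SHA1
# are completely different and signature-based AV may miss it entirely.
score = ssdeep.compare(known_bad, new_sample)
print("fuzzy-hash similarity:", score)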

2008-10-13

Airplane epiphanies

I have the strangest epiphanies on airplanes. And I fly a lot for work. The convergence of these realities means I have many strange epiphanies that I need to sort through and figure out which are worthy of a second thought and which aren't. I wish I knew why - it must be something about the combination of the effects of altitude, boredom, the random musings of my iPod's "shuffle" feature, and the occasional overpriced adult beverage. But I digress...

My iPod randomly happened across Stevie Wonder's Sir Duke this evening, which naturally made me reminisce about how much I used to love his music. I went on to listen to the many other brilliant recordings of his that I had. I then browsed around and found my purchase of Michael Jackson's Thriller, made on the recent 25th anniversary of the release of the best-selling album of all time. I thought to myself how much I loved both of these artists back in the 80's, only to forsake them for nearly a decade as uncool or otherwise irrelevant to contemporary rock. Oh, how I would've lamented my future had 21-year-old me seen 30-year-old me lip-syncing Part-Time Lover with the zeal of a teenager on a flight to Las Vegas. But today I look at these artists, and their work, with a sense of greater perspective. Yes, there are cheesy elements to these songs that often relegate them to the bowels of dentists' offices, but the important components that made them great in the first place - the groove, the feel, that were all fresh and new then and now serve as the basis for so many other hit songs - those elements are still there and worthy of study. Listening again, I could only shake my head that I had ever thought these important components had been overlooked by myself or others, and feel guilty for having let such obviously timeless elements be forgotten, even if temporarily. And yet they were, and now the same thing is happening to music produced in the 90's.

Where could I possibly be going with this? On the eve of delivering a presentation that will call the classic incident response model 'irrelevant,' I see similar veins of amnesia in the security community. We started off with email viruses, and "evolved" to large-scale worms with the dawn of the new millennium. In 2003, if you had asked any one of us about a Word document with a macro that drops code, we would've laughed in your face at your ignorance and failure to evolve with the rest of the world. Yet that very mechanism is how malicious code is being delivered today, with adversaries exploiting the KISS principle like we never would've guessed. Email attachments that compromise systems - what could be more elegantly simple? The bad guys remember how Stevie Wonder's groove totally drove Superstition, or how the unique combination of rhythm and timbre absolutely set Michael Jackson's hits a whole level above anything else at the time. They know how to take these key elements and build new art with them. I've seen macro viruses prove incredibly effective as recently as 2007, when married with highly-effective social engineering that convinces users to bypass the very mechanisms there to protect them from that danger.

Today, many scoff at the Blaster and Slammer worms of 2001-2003 as bygones of a past era. They are no longer the key focus of our adversaries, and we must evolve along with them (make no mistake about it - the bad guys, not the good guys, drive this industry). But in our haste to move forward, we must remember the elemental components, the groove, of the internet worms of the past, or we'll be destined to suffer from them again.

2008-10-03

Fostering the Multidisciplinary Analyst

In my years in information security, I've come to appreciate my liberal-arts undergraduate degree in ways I never thought I would. This has driven me to increasingly read up on ostensibly unrelated subjects in science and engineering. At the very least, it has been interesting. And at times, it has lent insight into new ways of solving problems that I otherwise would not have likely thought of. It's been this drive to broaden my technical horizons that has made me a huge fan of Scientific American over the past year. I've become an avid reader. If you don't have your own source of broader knowledge, I would encourage you strongly to find one. It has been the catalyst that has allowed me to take my career to that always-desired "next level."

On a related note, I have two specific recommendations. In the most recent SciAm (Vol 299, Num 4) Perspectives, editor Matt Collins writes Questions for Would-be Presidents. If you're planning on voting in the US this fall, as any responsible citizen should, this will be an interesting one-page read for you. Make no mistake about it: while issues of science are rarely if ever discussed in national media, the questions Matt poses are the type that will drive the country's innovation and inevitably determine our place on the globe 10, 20, and 50 years from now.

The second recommendation I have is the entire Vol 299, Num 3. The featured articles in this issue focus on the area of security and privacy, and include the best single article on encryption I've ever read, How to Keep Secrets Safe, by Anna Lysyanskaya. I'm quoting liberally from it in a revision of an encryption class I teach at the company I work for, and I'm certain that anyone finding this blog of interest will enjoy it.

2008-10-02

Ex-Tip registered on Sourceforge

I've (finally) registered version 0.1 of Ex-Tip on Sourceforge. Naturally, if you have any questions or would like to contribute, by all means contact me. I'm also accepting any modules you may develop that work well with the framework.

Also, I'd like to thank SANS fellow Rob Lee for inviting me to contribute to the SANS Forensics blog. He and the current contributors have a good thing going here, and I look forward to contributing. Naturally, I will continue to write here as well.

2008-09-22

Shameless plug: SANS Forensics/IR Summit

I will be participating in a Defense Industrial Base / Law Enforcement / Dept of Defense panel at the SANS WhatWorks Summit in Forensics and Incident Response. The topic, broadly, will be "How are government agencies and contractors responding to large scale intrusions successfully?" Even if you don't do work with or for the government, I would encourage you to attend if you happen to be at the summit. The DoD, and by extension their contractors, see the bleeding edge of new offensive techniques, often years before other commercial sectors. Law enforcement organizations, naturally, become involved and bear witness to the same. If you're interested in how large organizations defend themselves against and respond to attacks that you will likely be seeing in the future, this will hopefully be a good session to attend.

The panel spots will be filled by decision-makers and technical staff alike, from large DIB contractors to DC3 to the FBI.

2008-09-18

Identity theft victim no. 52,000,001

No matter what lengths you go to, sometimes it's impossible to prevent identity theft. Countrywide recently disclosed that 2 million of its mortgage customers may have had their identities stolen - one of whom was likely me.

Now, I've always been very paranoid about who does and doesn't get what from me, with the perhaps-naive hope that this would at least mitigate the risk. I consider myself to be well educated on the topic. But in the back of my mind, I always knew I was at risk - after all, I worked at a financial institution for years. I saw just how secure it was, and by proxy the data of its customers.

When companies such as these - whose data helps define our identities - can't secure their systems, absolutely anyone can be a victim. This is why stronger legislation and repercussions are necessary for violations: they are the only thing that will force companies' hands in taking seriously these issues, against which the public is utterly defenseless.

2008-09-03

Over-visualization fun

I'm a very big - nay, a huge - t-shirt fan. I'll admit, I even subscribe to a t-shirt blog. If I attended meetings, it'd be an illness.

Threadless is a tee site I'm particularly fond of. While browsing their seemingly bottomless vault of shirts for sale, I came across this one. It hit home for a number of reasons.

Over the past few weeks I've struggled with the problem of visualizing a massive amount of data relating to some security incidents. This has proven a worthy endeavor not only in illustrating causality that isn't apparent in the raw data itself, but also in communicating various parts of the "story" to management, letting them draw their own conclusions. In the coming weeks I'll hopefully get to writing about a couple of techniques (no data, naturally) that have been particularly helpful.

In a number of cases, the approaches I've taken have failed, mostly due to "over-dimensionality": trying to cram too many variables into the diagram. What resulted was cool, but required far too much explanation - much like the visualization in this picture. The data itself in this case is likely meaningless, but it's a good example of what can result when analysts are overly ambitious in attempting to communicate findings. It's easy to do. When we understand all of the data we have, thanks to many long hours of study and analysis, we feel every detail is important because we understand its contribution. But in telling the story, guiding readers to a conclusion, or illustrating causality, many times it is necessary to gloss over detail that can be spoken to or revealed if additional questions arise.
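To illustrate keeping the dimensionality down, here's a purely hypothetical sketch (Python with matplotlib; the numbers are made up for this post) that plots a single variable - alert volume over time - and deliberately stops there:

import matplotlib.pyplot as plt

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
alerts = [12, 9, 41, 15, 11]  # hypothetical alert counts per day

# One variable against time: the mid-week spike tells the story on its own.
# The temptation is to also encode source, severity, protocol, and analyst on
# the same axes; that detail is better left for the follow-up questions.
plt.bar(days, alerts)
plt.ylabel("Alerts triaged")
plt.title("Alert volume by day")
plt.show()

Boring? Absolutely. But it answers the one question being asked, and everything else can be pulled up when someone asks.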

I've found that studying Tufte's literature has been a great help in improving my visualization skills over the course of this calendar year, and while I appreciated this skill before, I now realize how critical it is to this profession. I'd encourage everyone in InfoSec to find a way to sharpen their skills in data visualization. It will pay dividends in your career that you didn't expect.

With special thanks to my boss for initially inspiring me to investigate this topic more thoroughly.

2008-08-27

Quantum physics, in layman's terms

Tonight, thanks to cracked.com, I read perhaps the most concise and most hilarious description of quantum physics that will ever be penned. From the article The 5 Scientific Experiments Most Likely to End The World:

To grossly simplify it, on a scale smaller than atoms, the quantum level, everything suddenly turns into a goddamn circus. Quantum physics is to regular everyday physics as a David Lynch film is to a mainstream blockbuster. We're talking particles popping in and out of existence, being in two places at the same time, and generally acting like assholes.
Look at that particle. What an asshole.

And, for what it's worth, I've been following the LHC experiment mentioned in the article with eager anticipation over the last 8 months or so. If you're unfamiliar with it, I recommend you read up. This may be the physics gold mine of a generation, and even if it isn't, it's a scientific and engineering achievement without parallel in many disciplines, from physics to electrical engineering to computer science.

2008-08-25

Obama on security

The next president will play the most pivotal role thus far in shaping the United States' position in the information security domain, and will arguably make decisions that will lead to victory or defeat in the country's first major conflict involving significant "cyber operations," to use the military vernacular. Recently, the USAF stood up, and subsequently stood down, an entire command to address the problem of military operations in the "cyber domain" (I will continue to put "cyber" in quotes because I feel it's a worthlessly ambiguous term, but I acknowledge its widespread use in the military). Clearly, leadership is needed from all levels of policymakers in the US.

That's why I was very happy to read the following excerpts from Barack Obama's Summit on Confronting New Threats at Purdue University in July:

we can - and must - strengthen our cyber defenses in the 21st century.
...
We know that cyber-espionage and common crime is already on the rise. And yet while countries like China have been quick to recognize this change, for the last eight years we have been dragging our feet.

To quote a famous hip-hop artist, and many others, "talk is cheap." I sincerely hope that this is on Barack Obama's and John McCain's radars as an issue in need of attention early in the next presidency. Bruce Schneier recently articulated what he felt were the most important security-related issues for the next president. I feel one was left out: a comprehensive platform for dealing with electronic espionage and a well-articulated plan for response to, and consequences for, those actions. It sounds like this is starting to make it onto the radars of at least some of our near-future policymakers. It's about time.

Image courtesy echosphere.net.

2008-07-29

Apple gets an F

Apple needs to start paying half as much attention to security as they're paying to design. Weeks after every other major vendor released a patch for what we at work affectionately call the Kaminsky DNS Flaw, and months after being informed of the problem, Apple still hasn't patched its implementation.

I never had a soft-and-fuzzy feeling about Apple's commitment to patching, but for them to sit on a serious, ubiquitous flaw while their competitors react responsibly for once shows in no uncertain terms that their priorities lie elsewhere.

Microsoft is a great pioneer in doing things wrong - from a security perspective, anyway. You'd think Apple would do everything it could to differentiate itself and win more market share. You'd think...

2008-07-23

FlyClear passes privacy audit

In a recent press release, Verified Identity Pass, Inc. - commonly known to US air travelers as FlyClear - announced they had passed a four-month-long audit of adherence to their own privacy commitments. This is a rare good-news story that acknowledges the significant concerns raised by privacy groups such as EPIC. To what extent their own stated privacy commitments address those concerns I will leave to the advocates, but an important disclaimer from the audit report was left out of the press release.

...the projection of any conclusions, based on our findings, to future periods is subject to the risk that the validity of such conclusions may be altered because of changes made to the system or controls, the failure to make needed changes to the system or controls, or a deterioration in the degree of effectiveness of the controls.

I wouldn't even point this out if we were talking about anything but a government-sponsored program/company: periodic auditing is absolutely essential to ensure ongoing confidence in the program. The more consecutive audits passed, the greater public confidence grows. I haven't signed up for the program in part because I was concerned about the privacy of my data. This helps offset my reluctance. The effectiveness of the entire program, of course, is another topic altogether.

2008-07-16

Dan Kaminsky is NOT a hero

Before I launch into my rant about all the swirl that's resulted from Dan Kaminsky's recent disclosure of a DNS flaw, I want to make one thing clear: While I do not know him nor have I worked with him, I nevertheless hold Dan's skills in high regard and respect him as a professional. The DNS flaw behind this is indeed serious. Nothing I'm about to say should be seen as a reflection on him or his work, but rather the sometimes-OCD InfoSec community and online media outlets.

Yesterday I read a column by Robert Vamosi, linked off of C|Net, that made me vomit a little bit in my mouth. His comments on Kaminsky would make the reader think that the man just saved the entire world+dog for today and the rest of time from certain doom from some three-headed unstoppable eating machine with minty fresh breath but a bad, bad attitude. Heck, he may just be the second coming. Oh man, that means I'm going to hell for not capitalizing He. Allow me to quote from the article titled - no kidding - The man who changed internet security:

There have been other multiparty patch releases, but never has there been one on such a massive scale.

What he [...] did over the last few months was not only responsible but extraordinary.

all future vulnerability disclosures could benefit from his example.

With the DNS flaw, Kaminsky was in a very weird position. What he found wrong [...] wasn't just within one vendor's product, it cut across various products

He has changed Internet security, and done so for the better of us all.

This is a great amalgamation of all of the idolatry directed at Dan, all in one column. To categorize all of this, many people - professionals in the field (self-proclaimed or otherwise) - seem to be under any combination of the following false impressions:
  1. The scope of this issue is without precedent. This is simply not true. Especially in the late 90's and early 2000's, as attackers began seriously exploring computer vulnerabilities, there were a number of widespread service implementation problems - or problems affecting a hugely critical piece of software (think: BIND, before many people used MS's DNS server). A recent example is the 2007 vulnerability in the implementation of BGP by every major router manufacturer, which could lead to a spoofed denial-of-service and ZOMG TAKE DOWN THE WHOLE INNERWEBS!
  2. Having to coordinate patches between vendors is unusual. While no doubt most vulnerabilities impact only a single vendor, it's also not uncommon to find a second vendor, perhaps borrowing from the same segment of code (I'm looking at you Unix), that is also vulnerable. For an easy example, see (1), or many vulnerabilities found in open source/GPL code over the years.
  3. This vulnerability is new and completely unexpected. While we won't know for sure until this is discussed at BlackHat, there is evidence suggesting this isn't true. People have pointed out that similar techniques to poison DNS have already been discussed. We can certainly say the severity of the exploit seems new, but beyond that, any responsible discussion on the topic needs to wait until all the facts are in front of the public for peer review. I wouldn't say this is patently false, but I would say to anyone making this assertion, "not so fast there..."
  4. Responsible disclosure is somehow novel, invented, or revolutionized by Dan Kaminsky. These people either have had their heads in the sand since 2000 or so, when the debate between full and responsible disclosure first erupted on BugTraq, or they never understood what the term meant. At the time of the writing of this entry, a Google search for "responsible vulnerability disclosure" returned "about" 287,000 pages.
To quote his recent blog entry, he's been "the beneficiary of what can only be described as 'redonkulous amounts of press'." To wit, there is plenty of good press discussing the vulnerability and how to fix it - that's obviously not what I'm talking about. Dan's a great professional; I hate to see fanboys like this surface and cheapen - rather than reinforce - his m4d sk1lz.

To Dan: Kudos. To all the fanboys and fangirls: Please to be redirecting your significant energy and time to something a little more productive.

2008-07-12

In case you missed it...

In the most recent SANS NewsBites, editor Brian Honan points readers to a great skit on identity theft by British sketch comedians Mitchell & Webb. Hilarious, concise, and satirical - just what you'd expect from British humo(u)r. Worth the 1:55 if you have it to spare.