2008-12-08

EWD on Information Security

Last week, Slashdot featured EWD1036-11, a handwritten manuscript by Edsger W. Dijkstra titled On the cruelty of really teaching computer science. Besides being fantastic reading for any computer scientist, this 1988 essay inadvertently makes some points that are especially salient to the security field, and those are worth pointing out here.
[Lines of code] is a very costly measuring unit because it encourages the writing of insipid code, but today I am less interested in how foolish a unit it is from even a pure business point of view. My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.
Could it be that our software development process is fundamentally flawed? That vulnerabilities are merely an artifact, or symptom, of a problem that transcends all software engineering? In his essay, Dijkstra insists upon building code guided by formal mathematical proof, as such code is correct by design. Does this sound familiar? Perhaps like "secure by design"? It is a grave and pessimistic evaluation of the state of software development, and it still holds a great deal of merit two decades after it was written. Today, we see Dijkstra's diagnosis painfully manifested as viruses, worms, hackers, computer network exploitation, and the resultant loss of intellectual property.
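To make "correct by design" concrete, here is a minimal sketch of the idea in Lean 4 (my choice of illustration, not Dijkstra's, and the exact lemma names may vary by Lean version): the function is shipped together with a machine-checked proof of one of its properties, established for all inputs rather than sampled by tests.

```lean
-- A toy illustration of code guided by formal proof: list reversal.
def rev : List α → List α
  | []      => []
  | x :: xs => rev xs ++ [x]

-- Proved once, mechanically, for every possible list:
-- reversal preserves length.
theorem rev_length (xs : List α) : (rev xs).length = xs.length := by
  induction xs with
  | nil => rfl
  | cons x xs ih => simp [rev, ih]
```

The point is not that every program can be developed this way, but that the property is part of the artifact itself: if the proof checks, no test case can ever contradict it.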

Later, Dijkstra enumerates opposition to his proposed approach to development paired with formal mathematical proof. Again intersecting the security discipline, he writes:
the business community, which, having been sold to the idea that computers would make life easier, is mentally unprepared to accept that they only solve the easier problems at the price of creating much harder ones.
And thus, on December 2, 1988 - almost twenty years to the day before I write this - Edsger W. Dijkstra identified the source of computer security problems by reiterating the "law" of unintended consequences. Accepting this axiom, security practitioners focus on identifying the harder problems resulting from the "easy," mathematically imprecise, logically dubious solutions upon which the bulk of our computing infrastructure operates. I feel very strongly that this one statement scopes our discipline better than any other that has yet been made - so strongly that it is worth re-evaluating what information security is.

Security is the identification and mitigation of the unintended consequences of computer system use that result in the compromise of the confidentiality, integrity, or availability of said system or its constituent data.

2 comments:

Dean Jackson said...

It was an excellent read, and he's a genius, if only for search algorithms and leading the charge to stop using GOTO statements.

That said, we're talking about a man who didn't own or regularly use a computer until he needed one for email and web browsing. He didn't spend any time in the corporate world, and there's a significant cost/benefit analysis that you'd have to do before trying to implement his work.

For example, mathematically provable code is *not* easy to write. It's often easier in languages that aren't commonly used; Lisp and ML come to mind. It requires a level of rigor that many programmers simply aren't capable of, either.

From a code-generation standpoint, I think provable code is useful for applications that *must* be perfect, but outside of that, it's a very questionable business move.

Although admittedly, there's a much stronger argument for it on the security side of the fence.

Michael Cloppert said...

Dean,

Absolutely agreed that there are practical limitations to contend with, but I think viewing unproven code as a fall-back position or lower-quality product is a good start. Just like anything in security, the ideal is unreachable, but this gives us a very concise, understandable point to aim at.

Also remember that in this sense we are speaking of the science of computers. Theoretical computer science can be as far removed from practical development as theoretical physics is from applied physics. But the theory shapes the understanding of practitioners and of future applications, and it is at that low a level that I feel Dijkstra is correct in suggesting we re-evaluate the science.

For a sub-discipline that's made provable algorithms work, see: cryptography. Cryptographers can quickly point to just where their algorithms are and aren't provable. They know and understand the risks at a level rarely matched by other practitioners of computer science. No, all of our cryptographic systems aren't proven completely correct or sound, but going through the exercise of proof reveals where they may fail.