2007-12-26

Would you like to play a game?

On a lighter note, I recently made available another paper on my website. Titled Scalable CLA Strategies in Snafu, it is a simple investigation of collective learning automata strategies in the old 8-bit game Snafu, realized in film as the Light Cycle scene in the movie Tron. Enjoy!

2007-12-14

2008 DoD Cybercrime Convention

I will be lecturing again at the 2008 DoD Cybercrime Convention in St. Louis, MO. Last year, I spoke about advanced attacks from the front line. This year, I will be discussing tactical tool development supporting incident response from both a theoretical and practical perspective. Abstract, FTA:

Highly-motivated, advanced attackers have been successful in adapting their techniques to avoid traditional defensive and analytical tools. Anti-virus, firewalls, and IDS’s are no longer effective countermeasures to these adversaries, forcing analysts to quickly develop specific tools to combat certain aspects of an attacker's M.O. In this presentation, various new & emerging tools developed by analysts that have been successful in helping combat these threats are discussed.

2007-11-24

Proper customer email correspondence

Despite the expenditure of a great deal of effort, users are still ill-prepared for email-borne threats. Much of this is due to the mixed messages users receive. We tell users to not click on links in email to strange websites, then send them surveys from third-party companies they've never heard of and encourage them to participate. We tell users to not open attachments they're not expecting, then send out broadcast messages to many recipients with a PDF containing the information they need to read. When I say "we," I don't mean security analysts, but rather employers, service providers, vendors, etc. It's no wonder users still have no idea when they can and can't click on a link, or open an email or attachment.

I get my car insurance from Progressive. Yesterday, I received the following email. This is the type of action that is needed to maintain user diligence and to continue leveraging email as an effective communication mechanism.

======================================================================
Important changes are coming soon to your Progressive e-mails.
======================================================================

Dear MICHAEL CLOPPERT:

We're writing to let you know about some important changes to your
Progressive e-mails to ensure that you continue to receive and
recognize them.

Please note these key changes in your e-mails over the next few
months:

- E-mails will be sent from a new address:
customerservice@email.progressive.com

Please add this e-mail address to your address book or approved
senders to ensure that our e-mails reach you.

- Links in the e-mail will point to re.progressive.com instead of
re.progressivedirect.com.

2007-11-23

An Open Letter to SANS

I have been a strong proponent of SANS and GIAC for many years. Their training is, quite simply, the best available in many of the sub-disciplines within Information Security. Their staff represent the best of the best in the industry. I am a member of the SANS advisory board, and while I have no financial stake in the success of the organization, I feel the continued health of SANS is vital to the Information Security discipline. It is for that reason that I have become concerned about some of the decisions made by SANS over the past few years. Beginning with the decision to separate the practical from certification, and continuing through to the introduction of their Master's degree, I see decisions increasingly being made solely around financial considerations.

In August, Stephen Northcutt asked the advisory board for our thoughts on discontinuing an unprofitable certification. I am posting the bulk of my response below, as it articulates many of my concerns with SANS. It is my hope that by voicing my opinion, positive direction can be maintained in the organization and, by consequence, the industry as a whole.

This cuts right to a core issue about SANS that I have been meaning to bring to the attention of the advisory board & leadership for some time, which is this: SANS needs to decide if its primary mission is to make money, or to educate. Many decisions I've seen from the leadership at SANS in the past few years seem to indicate that it is the former. I hope, for the sake of the integrity of the organization, that this tendency can be changed. It would be rather naive of me to think that this note would begin to turn the ship, but I hope it can raise awareness of the issue. I can say with absolute certainty that it has been noticed by professionals and decision-makers outside of SANS (some of whom I respect greatly); this is a real risk.

Bringing this more to the point, I believe that the value of certifications should not be measured solely by their profitability. SANS needs to remain in good financial standing, no doubt, but costs can be reclaimed elsewhere. Other untapped profit opportunities (corporate sponsorship, linking employers with job hunters, etc.) are out there. Universities face the very same trade-offs. In recent years, a debate has grown about the cost and value of technical degrees versus liberal arts degrees. Merely charging more for some degrees than others was highly controversial for universities; dropping less profitable, more technical degrees would be considered unconscionable. If SANS wants to operate at a similar level, I feel it must adopt this sort of mindset.

If [this certification] is judged to be valuable as an educational tool to the Information Security community at large, and it can reasonably be afforded by SANS, it should be kept. Otherwise, you needlessly sacrifice education for a larger bottom line, which advances a financial rather than educational mission. If we feel [this certification] in its current instantiation is a bad way to vet the top of the InfoSec talent pool, then it's a different problem we're talking about and financial concerns shouldn't really play a part in our discussions - the shortcomings should be addressed and a new approach tried before the life of this certification is prematurely cut short.

2007-11-11

Overhaul Anti-Virus Products NOW

It's been a few weeks since the below story appeared in SANS NewsBites, but I wanted to point it out to the community. The story, and the subsequent NewsBites editor comments, speak volumes not only about the challenges with Anti-Virus that we're currently experiencing, but also about the attitude of the established Anti-Virus industry towards anyone not already part of their collective. I've lamented the state of the anti-virus industry in the past, but this particular problem is the most dire for their industry - and the rest of us. The industry's rebuff of Ed Skoudis and Tom Liston (both highly-respected and recognized security professionals), discussed in the comments below, echoes attitudes I've found amongst individual "antivirus researchers" with whom I've worked - some even as peers and coworkers. I think the root of the problem is that antivirus companies and contributors have developed their own self-serving, self-congratulating circle that espouses "group think" and rejects constructive criticism from anyone not a part of this clique. Further, they do not see themselves as security analysts and companies. Malware has become woven into the fabric of the security challenges facing entities in the 21st century, and at this point the two can scarcely be separated in many cases. It's time these companies and contributors began seeing themselves as part of the larger security industry, not simply a clique that sits at the "cool kids" table at lunch.

Enjoy:
TOP OF THE NEWS
-- Overhaul AntiVirus Product Testing Now
(October 10, 2007)
Momentum is growing for an overhaul of the way anti-virus products are tested. Presently, tests focus on signature-based malware detection and have changed little over the last 10 years. However, anti-virus technology is changing to meet the demands created by new malware, and the organizations believe that tests need to be reformulated to reflect that change. The tests don't examine the effectiveness of behavioral anomaly malware detection, a technique that is proving valuable in catching malware that spreads quickly. Proposals for new testing methods and procedures will be presented in November at the Association of AntiVirus Asia Researchers 2007 Conference in Seoul, Korea.
http://www.theregister.co.uk/2007/10/10/av_tests_revamp/print.html
[Editor's Note (Skoudis): Given the change in threat, with new malware introduced every hour, this is a good development. My colleagues and I have been pushing for something like this for about 2 years now. I made a call very much like this at a talk during the Anti-Spyware Coalition meeting in February 2006. My colleague Tom Liston and I released a tool called Spycar a year and a half ago to test the behavior-based claims of anti-malware vendors, and discovered that most were either not doing any behavior-based detection at all or had severely broken behavior-based functionality. Many of the anti-malware vendors publicly scoffed at our testing, while a few said that our results were interesting. Just last month, my colleague Matt Carpenter and I tested a bunch of products again, and found that the behavior-based detection capabilities of all the major vendors were severely lacking. I'm hopeful that this new initiative will help improve the state of behavior-based detection.
(Liston): Saying we were "scoffed at" is Ed's way of being polite. Essentially, they told us that we were amateurs who didn't know what we were talking about, even _after_ we had discovered and disclosed serious flaws in the behavior-based detection offered by several major vendors. With increasing numbers of malware being released and the rise of targeted malcode attacks, behavior-based detection is going to become anti-malware's front line defense. It's nice to finally see this reality being addressed by AV vendors' testing methods, and, of course, it's also nice to be vindicated.
(Northcutt): The anti-virus industry has served us well. Without them, computing might well have failed, but they are a tad stuck in their ways and malware grows ever more complex. I hope that projects like Spycar, which hold vendors responsible for something beyond signature detection, continue to prosper.]

2007-10-28

User Education is NOT (necessarily) the Answer

In the past few years, user education has been all the rage in the security industry. Today, we are quick to point out that one of the biggest computer vulnerabilities is actually not in the computer at all, but rather the mound of carbon and water exerting force normal to the surface of the keyboard. Unfortunately, this externalization of the security problem has become an excuse for the shortcomings of IT and information security just as frequently as it is the actual cause of compromise.

While the computer industry largely neglected user education until very recently, education is not the panacea that we are making it out to be. Anytime you hear about computer security failures, the response from "security experts" is always "patch and educate your users." This is important, but such a response trivializes the underlying complexities of computer systems and the persistence of the advanced and skilled adversary. Take the following example from Forbes discussing alleged security breaches at military contractors, which quotes Alan Paller, director of SANS:

'More important than the elusive identity of hackers is the question of how to keep them at bay. Paller recommends that corporate security offices teach employees to be on the lookout for fraudulent e-mails. Companies could "inoculate" staff by occasionally spoofing phishing e-mails themselves and then alerting their victims, Paller suggests.'

It's a shame that someone as highly visible and regarded as Alan Paller would presumably take the opportunity to get a sound bite before using his contacts to understand the facts, if any, behind the article. Regardless, this is a perfect example of what I'm talking about. User education can only go so far, and is unlikely to thwart dedicated attackers. To follow this example through: what if the attacker in question includes a signature in the email with legitimate contact information? What if the name in the From: line is someone the target knows? This information can be trivially forged, but it can also be just as trivially collected. Have you ever scrutinized emails that are "from" someone with whom you work, with their valid signature at the bottom, containing a Word document that seems to be topically relevant? Then why would your users? This goes further: adversaries can - and have - compromised real accounts which they then use to spread infected documents. So in some cases even legitimate email can't be trusted.

The bottom line is that user education is important. We all know it's important. But let's make sure this is the answer when it needs to be, and not given as a response action to any and every notion of computer compromise. Doing so will inevitably lead to an undermining of the industry's credibility if it isn't tempered.

Recommended tool: Bloglines

In the past, I had relied on my web browser to track RSS feeds for me. A few weeks ago, I began using Bloglines based on the recommendation of both my roommate and coworker. It has changed my blog-reading life. Some benefits:
  • Blackberry & mobile browser support
  • Optional content display (title only, summary, full) when viewing feeds
  • Folder-based categorization system showing number of unread entries for each feed and folder
  • Unseen entries for a feed displayed in a frame next to your feed listing (order adjustable)
  • Ability to flag entries as persistent (shown whenever a feed is viewed, even if no longer new)
I'm able to more efficiently keep tabs on all of my security related websites, and search what other people are reading - but be careful! If you don't want others to see blogs you're reading, be sure to mark them "Private" when subscribing to the feed.

2007-08-20

Follow Up: Principle of Most Privilege

SANS ISC handler John Bambenek has an interesting diary entry discussing what he calls the "Principle of Most Privilege" - the design of security devices to identify only that which is known to be absolutely bad. It is what I refer to as Tradeoff 3: Complete Solutions versus Incomplete Solutions in my previous blog entry. I like his terminology better, as it is far more concise. It's interesting that some feedback I'd received from Sourcefire was part of the inspiration for my lamenting, just as it was for John's. Hopefully, as more and more credible analysts bring these points to vendors, they will begin to listen and address the shortcomings of the paradigms behind their product offerings and business models.

2007-05-25

Four interdependent reasons why Information Security fails

In a fit of inspiration on the metro this morning, extending to frustration at work as to why we're increasingly having to build our own tools to detect advanced threats, I realized that four overlapping assumptions and associated tradeoffs by both the InfoSec field as a whole, and the manufacturers supporting it, are creating artificial barriers to success in network defense (NetD). In this essay, I outline those problems, how they connect, and why they're preventing progress in tools and techniques that are so desperately needed.

Tradeoff 1: Inline Analysis versus Offline Analysis
Assumption: Analysis must be done inline.
Many tools and approaches rely on, or are limited by, the ability of the solution to function at the speed of the technology being "protected." Email cannot be delayed by minutes or hours for processing. Packets must be delivered with near-zero delay in transmit time. This is because analysis is being done inline. The delivery of the message, the execution of the process, the delivery of the packet, is blocked until the requisite analysis is complete. It doesn't have to be this way. Why not analyze packets offline? Our three other assumptions form the premise that, if accepted, necessitate inline analysis.
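
To make the alternative concrete, here is a minimal sketch of offline analysis in Python: the inline path hands a copy of each message to a worker queue and delivers immediately, while expensive checks happen on their own clock. The `deliver` and `analyst_worker` names, and the "suspicious" substring check, are illustrative stand-ins, not a real system.

```python
import queue
import threading

analysis_queue = queue.Queue()
findings = []

def deliver(message):
    """Inline path: hand off a copy without blocking, then deliver."""
    analysis_queue.put(message)   # non-blocking handoff to offline analysis
    # ...actual delivery to the recipient happens here, undelayed...

def analyst_worker():
    """Offline path: expensive checks run here, minutes later if need be."""
    while True:
        msg = analysis_queue.get()
        if msg is None:           # sentinel: shut down the worker
            break
        if "suspicious" in msg:   # stand-in for deep (slow) analysis
            findings.append(msg)

worker = threading.Thread(target=analyst_worker)
worker.start()
deliver("routine status report")
deliver("suspicious attachment inside")
analysis_queue.put(None)          # tell the worker to finish up
worker.join()
```

Nothing waits on the analysis to complete before delivery; the cost is that detection arrives after the fact, which is exactly the trade the remaining assumptions refuse to make.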

Tradeoff 2: Protection versus Detection
Assumption: Detection is insufficient
This isn't as explicit an assumption as the others, although we all read Gartner's 2003 prediction that IDS would be dead in a few years (and hopefully you laughed as heartily as I did). Many in the IDS/IPS community acknowledge that such devices will never run in purely-prevention mode. That's good. But why aren't we incorporating this mentality into other tools? Anti-virus, for example, always looks for things that are known to be bad. Sure, you've got heuristics, but you can't granularly define "block this, allow that and log, ignore this other stuff." Not only that, all we have to go on when an action is blocked is some meaningless name that, when referenced on the vendor's website, gives a vague and often useless explanation. Allow blocking and logging based on behavior. Give administrators the ability to say "log all files matching this md5deep sequence 80% or more," or "block if this other md5deep sequence is matched 90% or more." As it stands, there is no granularity, and the tool is geared exclusively toward protection, with detection as an afterthought. Firewall rule strategy is another example. Why do we only give advice on blocking and allowing? Why doesn't our technique set include "identify regions you don't do business with, and log that activity with a higher priority" (assume for a second that you can give firewall logs a priority)? Blocking and allowing is often the only strategy ever discussed.
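
As a sketch of the granularity argued for above, assuming a fuzzy-hash comparison in the spirit of md5deep's piecewise hashing that yields a 0-100 match score - the `similarity` function here is a toy stand-in, not a real implementation:

```python
def similarity(sample_hash, known_bad_hash):
    """Toy stand-in for a fuzzy-hash comparison: percentage of
    positions where the two hash strings agree (real tools compute
    something far smarter, but also return a 0-100 score)."""
    matches = sum(a == b for a, b in zip(sample_hash, known_bad_hash))
    return int(100 * matches / max(len(sample_hash), len(known_bad_hash)))

def policy(sample_hash, known_bad_hash):
    """Tiered response: detect at one threshold, protect at a higher one."""
    score = similarity(sample_hash, known_bad_hash)
    if score >= 90:
        return "block"   # near-certain match: prevent
    if score >= 80:
        return "log"     # probable match: detect, let an analyst decide
    return "allow"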

The perceived protection necessity is driven by inline, general analysis and the 100% solution. If those premises are accepted, this one follows necessarily.

Tradeoff 3: Complete Solutions versus Incomplete Solutions
Assumption: Incomplete solutions are not worth pursuing
This assumption is a bit more subtle, but it's prevalent throughout our industry. Consumers want, and vendors deliver, complete solutions. Consumers do not want incomplete solutions. Both of these statements are unfortunately true - true because the consumer perceives an incomplete solution cannot produce value. The consumer here is you, me, or often our bosses - heck, much of the InfoSec field. The problem with this mindset is that it's wrong. Incomplete solutions can deliver significant value. If a solution doesn't scale, that doesn't make it a worthless solution. It simply means the solution needs to be selectively applied where it can provide the most value. An example: a system that opens every Word document an enterprise receives in a VM, analyzes the content, and reports on what files are written, etc., as a result. Computationally expensive? You bet it is! Can a large enterprise do this for all Word documents inbound, to find that elusive Zero-day exploit? Probably not. Can a large enterprise apply this to all email from countries and regions with which it does not have business? Probably. Can a large business limit this further to, say, "@competitor.com" email addresses as sources? You see where I'm going.
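
A sketch of that selective application, with made-up criteria (`NO_BUSINESS_TLDS` and `WATCHED_SENDERS` are hypothetical placeholders for an organization's own risk profile): only mail meeting high-risk criteria gets routed to the expensive VM analysis.

```python
# Hypothetical triage policy: route only high-risk Word attachments
# to the computationally expensive VM-based analysis.
NO_BUSINESS_TLDS = {".xx", ".yy"}        # regions we don't do business with
WATCHED_SENDERS = {"competitor.com"}     # source domains of particular interest

def needs_deep_analysis(sender, attachment):
    """Return True if this attachment is worth the VM's time."""
    if not attachment.lower().endswith(".doc"):
        return False
    domain = sender.rsplit("@", 1)[-1]
    if domain in WATCHED_SENDERS:
        return True
    return any(domain.endswith(tld) for tld in NO_BUSINESS_TLDS)
```

An incomplete filter, by design: it will miss Word documents from everywhere else, and that's fine, because the slice it does cover is the slice most likely to matter.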

Incomplete solutions are often viewed with skepticism for a variety of reasons, notably the implication that the technology is immature. This isn't always the case. Some problems simply have no known solution in polynomial time - the cost of computing a solution grows exponentially with the size of the problem space, making scalability impossible. And even if a tool is immature, as long as the approach is valid, a positive result is a positive result. A Word document that, when opened, drops something into C:\WINNT\system32 is suspicious, never mind the fact that you only analyzed 5% of all Word documents. If that 5% represents 80% of your malicious email, you've done a good job.

I've heard IDS vendors refuse to do, say, base64 decoding, because "what about base32? or any of the other weird one-offs?" This is the same thing. A complete solution is not always necessary - but it becomes more necessary if you insist on inline, general prevention.

Tradeoff 4: General versus Specific
Assumption: All tools must solve all problems
Kudos to the folks who developed Nessus, Nmap, Tripwire, (until recently) Snort/Sourcefire, and anyone else who built and continued to hone tools focused on a specific task. That is why these tools are the best in the industry: their authors and development teams weren't distracted by the untold other aspects of Information Security that have problems needing to be solved. Unfortunately, this is becoming more and more the exception. Managers want tools that solve all of their problems for them, and to support their bottom line, every security vendor out there is happy to oblige. The problem is that these tools are then designed to provide some value to everyone. That leaves an enormous gap: the rest of the problems each organization faces. Every industry, every region, every company has nuanced security challenges. When every tool solves a few problems, there are no tools left to combine into a custom solution to the remaining problems - like when you built that giant blue battleship with a million Legos when you were 12, only to have your dumb Labrador walk through it and destroy it.

We have the general problems licked. It's the specific threats that are now most effective, seriously threatening bottom lines and even national security. Why are there 30 approaches to anomaly-based IDSes, and no approaches for mining and clustering firewall data? SIM data? Why are there no pcap-based systems to monitor network-aware applications like FTP and DNS? Why are tools for analyzing full packet captures and netflow so nascent from a security perspective? I'll tell you why: these are tools that need to be selected and deployed by skilled analysts, put together to form a custom security system. They're specific, tactical tools that need to be implemented with care, and the industry fails to recognize them as absolutely essential in filling the large gaps left by the general tools. A general tool can't solve all of anyone's problems. We need more specific tools - toolsets, even - to enable enterprises to customize their defenses. But before this happens, the assumption that all tools must do everything needs to be banished from our industry. I'm tired of my AV trying to be a HIPS, and my firewall trying to analyze layer 7 packets. But this all ties into prevention over detection, 100% solutions, ... and now I'm a broken record.
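
For illustration, even trivial mining of firewall data pays off. A toy sketch, assuming denied connections have already been reduced to (region, action) records - a made-up simplification of real firewall logs:

```python
# Minimal firewall-log mining: cluster denied connections by source
# region and surface the noisiest ones for analyst attention.
from collections import Counter

def top_denied_regions(log_entries, n=3):
    """Count DENY events per region; return the n most frequent."""
    denies = Counter(region for region, action in log_entries
                     if action == "DENY")
    return denies.most_common(n)

# Illustrative sample data, not real logs.
log = [("CN", "DENY"), ("CN", "DENY"), ("US", "ALLOW"),
       ("RU", "DENY"), ("CN", "ALLOW"), ("RU", "DENY"), ("BR", "DENY")]
```

Twenty lines of specific, tactical tooling; no vendor required.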


Delivery of an email that's infected is currently considered a failure of the system because detection isn't good enough, and we can't analyze every email in the depth we'd like. If you buy the premises, this is logical. I don't. All of these assumptions are incorrect, and in my opinion, are influenced by the fact that advanced analysis is not assumed as a follow-up step to detection. Again, the "Complete Solution" rears its ugly head. No technology provides a complete solution. In order to provide analysts with the data they need to make decisions that, today, cannot be made by a computer, we must leave the above assumptions behind and broaden our analytical scope. But for the meantime, you'll notice the strong interdependence of the assumptions. If one or two can be broken, the others will fall with ease (reminds me a bit of NP-Complete theory).

These are not the only problems or challenges the industry faces today, but many of the failures of our industry are symptomatic of them. The sooner the industry - and our government - acknowledges the true security solution space, and leverages all areas of it, the sooner we will be able to appropriately address the advanced threats.

2007-04-09

Throwback

A friend just sent me this link. I recommend you read it immediately and thank me later. Incredible on so many levels, and surprisingly applicable to the modern computer.

http://davidguy.brinkster.net/computer/

How can I argue against IE?

A friend of mine often asks me questions about Information Security that make great blog entries, so thanks again, Mike.

The question was, in essence: I have someone above me in the management chain that wants to switch to IE. How can I convince him this is a bad decision?

My response:
As far as the Microsoft schtick is concerned, there are a number of ways to approach this.

First, the because-experts-said-so approach:
SANS, a group of widely-respected security analysts, recommends using an "alternate" web browser. This recommendation can be found in many places on their site, http://www.sans.org (alternately http://isc.sans.org). SecurityFocus (http://www.securityfocus.com/), also a respected InfoSec site, likely has many resources and "experts" supporting this notion as well. I'd consider myself an expert in the field, and I emphatically endorse this recommendation.

Second, the hard data approach:
Internet Explorer was vulnerable for 284 of the 365 days in 2006. This study is cited everywhere, but by the sound of it, mainstream news is the way to go with this guy, so here's a pseudo-omnibus article in the Washington Post (which is the original source, I believe):
http://blog.washingtonpost.com/securityfix/2007/01/internet_explorer_unsafe_for_2.html
Firefox's number in this study was 9. I can't say I've seen the data, or that it's been peer reviewed, and it *is* mainstream media we're talking about, but data is data.

Third, the anecdotal approach:
You can find many citations for evidence that attackers are now focusing on applications, rather than operating systems. There is also plenty of data to show that the preponderance of these attacks are against Microsoft products, because of their market penetration and therefore large target space. From a security perspective, by choosing to go with Microsoft applications, you are intentionally putting your computers into the most frequently targeted space of computing assets on the Internet. There's a cost-benefit that needs to be considered here, but the expected benefit had better be pretty high, because the cost in terms of security will be severe.

Fourth, the control approach:
Firefox is extensible, and offers many extensions that improve security. Like the "noscript" plugin, that lets users select which sites can and can't execute javascript. Or the ASN lookup plugin, that looks up the ASN of the site you're visiting to make sure it's actually the company the user thinks he/she is visiting. The list goes on...

Microsoft pundits will rebut the data approach with their own FUD, but I can assure you there are no security experts endorsing IE, there is no counter-argument to the anecdotal approach, and IE simply is not extensible like Firefox. The four together should make a compelling argument.

This horse has been long-dead as far as most InfoSec professionals are concerned, but making an argument for the "right decision" isn't always straightforward.

2007-04-02

A bit of help on the MS .ANI exploit...

Recently, I was tasked with looking into this a bit more closely at work, as not many details were readily available on this exploit. Below are the findings, for what they're worth. Some sanitization has occurred, of course, but this is probably good data for the community as a whole so I've decided to share it.
MS Security Advisory 935423: Vulnerability in Windows Animated Cursor Handling
http://www.microsoft.com/technet/security/advisory/935423.mspx

McAfee Links:
http://vil.nai.com/vil/content/v_141860.htm
http://vil.nai.com/vil/content/v_vul28505.htm

A few days old, but ISC is at Yellow:
http://isc.sans.org/diary.html?storyid=2542&rss

Other ISC entries related:
http://isc.sans.org/diary.html?storyid=2555&rss
http://isc.sans.org/diary.html?storyid=2551&rss

MS will be releasing a patch for this vulnerability tomorrow (4/3/2007):
http://blogs.technet.com/msrc/archive/2007/04/01/latest-on-security-update-for-microsoft-security-advisory-935423.aspx

Information on the ANI file format (unofficial):
http://www.wotsit.org/download.asp?f=ani&sc=228832732

Text from that link:
<--------- SNIP --------->
From robertjh@awod.com Fri Aug 30 19:18:26 1996
To: "'paul@wotsit.demon.co.uk'"
Subject: ANI (Windows95 Animated Cursor File Format)
Date: Thu, 29 Aug 1996 21:52:01 -0400

ANI (Windows95 Animated Cursor File Format)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is a paraphrase of the format. It is essentially just a RIFF file
with extensions... (view this monospaced)
This info basically comes from the MMDK (Multimedia DevKit). I don't
have it in front of me, so I'm going backwards from a VB program I wrote
to decode .ANI files.

"RIFF" {Length of File}
"ACON"
"LIST" {Length of List}
"INAM" {Length of Title} {Data}
"IART" {Length of Author} {Data}
"fram"
"icon" {Length of Icon} {Data} ; 1st in list
...
"icon" {Length of Icon} {Data} ; Last in list (1 to cFrames)
"anih" {Length of ANI header (36 bytes)} {Data} ; (see ANI Header TypeDef )
"rate" {Length of rate block} {Data} ; ea. rate is a long (length is
1 to cSteps)
"seq " {Length of sequence block} {Data} ; ea. seq is a long (length is 1
to cSteps)

-END-

- Any of the blocks ("ACON", "anih", "rate", or "seq ") can appear in any
order. I've never seen "rate" or "seq " appear before "anih", though. You
need the cSteps value from "anih" to read "rate" and "seq ". The order I
usually see the frames is: "RIFF", "ACON", "LIST", "INAM", "IART", "anih",
"rate", "seq ", "LIST", "ICON". You can see the "LIST" tag is repeated and
the "ICON" tag is repeated once for every embedded icon. The data pulled
from the "ICON" tag is always in the standard 766-byte .ico file format.

- All {Length of...} are 4byte DWORDs.

- ANI Header TypeDef:

struct tagANIHeader {
DWORD cbSizeOf; // Num bytes in AniHeader (36 bytes)
DWORD cFrames; // Number of unique Icons in this cursor
DWORD cSteps; // Number of Blits before the animation cycles
DWORD cx, cy; // reserved, must be zero.
DWORD cBitCount, cPlanes; // reserved, must be zero.
DWORD JifRate; // Default Jiffies (1/60th of a second) if rate chunk not
present.
DWORD flags; // Animation Flag (see AF_ constants)
} ANIHeader;

#define AF_ICON = 0x0001L // Windows format icon/cursor animation


R. James Houghtaling
<--------- SNIP --------->
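
The anih structure quoted above maps directly onto a fixed-size little-endian unpack: nine DWORDs, 36 bytes. A quick Python sketch of my own (not part of the original format document):

```python
# Parse the 36-byte anih structure described in the format notes:
# nine little-endian DWORDs.
import struct

ANIH_FIELDS = ("cbSizeOf", "cFrames", "cSteps", "cx", "cy",
               "cBitCount", "cPlanes", "JifRate", "flags")

def parse_anih(data):
    """Unpack the first 36 bytes of anih data into named fields."""
    values = struct.unpack("<9I", data[:36])
    return dict(zip(ANIH_FIELDS, values))
```

This is the decoding used in the analysis of the specimen headers that follows.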

As I was unable to locate any specifics about this exploit, I was left to
observe known-bad files collected by CIRT and compare them to the
Snort/Sourcefire signature, on the assumption that it is to at least some
degree correct and will catch a subset of bad ANI files.

The Snort signature is as follows (my comments inserted in-line):

alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any \
(msg:"WEB-CLIENT Microsoft ANI file parsing overflow"; \
flow:established,from_server; \
content:"RIFF"; nocase; \ <- Magic # for ANI files, a subset of RIFF files
content:"anih"; nocase; \ <- ANI header indicator
byte_test:4,>,36,0,relative,little; \ <- DWORD cbSizeOf
reference:cve,2004-1049; classtype:attempted-user; sid:3079; rev:3;)

The primary indicator here is the trigger on cbSizeOf > 36. The other content
matches simply identify the data stream as containing an ANI file and
position the pointer accordingly. cbSizeOf is the size of the ANI header
structure. We can infer from this that, if the signature is correct, the size
of the ANI header structure is a trigger for the overflow, and that trigger is
a value greater than 36.

I then analyzed two specimens from :
=== 5597p.jpg ===
As noted in the ticket, this is not a .jpg. It is a compliant ANI file. The
file has a proper RIFF header, followed by two ANI headers:

==== 5597p.jpg ANI Header 1 ====
Beginning at offset 0x0C of the file, we see the following
(Note: all values little-endian)
61 6E 69 68 ["anih" header marker]
24 00 00 00 [cbSizeOf = 0x24 = 36]
24 00 00 00 [cFrames = 0x24 = 36]
FF FF 00 00 [cSteps = 0xFFFF = 65535]
09 00 00 00 [cx must be 0, is 0x09]
00 00 00 00 [cy must be 0, is 0x0]
00 00 00 00 [cBitCount must be 0, is 0x0]
00 00 00 00 [cPlanes must be 0, is 0x0]
00 00 00 00 [JifRate = 0x0]
04 00 00 00 [flags = 0x04 = ?]
Observe that one reserved value (cx) is 0x09 when it should be 0x0, and that
cbSizeOf = 36. The Snort signature would not trigger an alert, as cbSizeOf
must be > 36 for an alert to fire. Once the byte comparison is performed here,
no additional processing of the packet will occur.

==== 5597p.jpg ANI Header 2 ====
Beginning at offset 0x50 of the file, we see the following:
(Note: all values little-endian)
61 6E 69 68 ["anih" header marker]
52 00 00 00 [cbSizeOf = 0x52 = 82]
30 31 32 33 [cFrames = 0x33323130 = "0123" text]
30 31 32 33 [cSteps = 0x33323130 = "0123" text]
[...similar bogus data for remaining 66 bytes...]
Here, cbSizeOf = 82. The Snort signature would fire if this header were
first, but it is not. Thus, no alert fires.

The conclusion here is either:
[1] This file does not execute the exploit correctly and was not responsible
for compromise of the victim; or
[2] The snort signature will only catch the subset of files where the ANI
header triggering the overflow appears before any other ANI headers in the file.

=== 2720p.jpg ===
This file's headers, RIFF and ANI, are identical to those of 5597p.jpg. As a
result the analysis yields the same observations, and the same conclusions:
[1] This file does not execute the exploit correctly and was not responsible
for compromise of the victim; or
[2] The snort signature will only catch the subset of files where the ANI
header triggering the overflow appears before any other ANI headers in the file.

As it is expected that one of the previous two files compromised the victim
machine in , my inclination is to believe that [2] is the correct
take-away from this analysis. This is not directly supported by any analysis
or evidence, however.

This appears to be a good candidate vulnerability for a compiled rule, as each
ANI section header could be iteratively analyzed for offending content. I
will begin to create this as time permits.
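
For illustration, the iterative check such a compiled rule would need can be sketched in a few lines of Python (the real rule would be compiled C, and this simplification ignores RIFF chunk structure, simply scanning for every "anih" marker rather than only the first):

```python
# Flag a file if ANY anih header - not just the first - carries a
# size field greater than 36, mirroring the signature's byte_test
# on the DWORD immediately following the "anih" marker.
import struct

def suspicious_anih(data):
    """Scan every "anih" marker; return True if any size DWORD > 36."""
    pos = data.find(b"anih")
    while pos != -1:
        size_field = data[pos + 4 : pos + 8]
        if len(size_field) == 4:
            (cb_size,) = struct.unpack("<I", size_field)
            if cb_size > 36:
                return True
        pos = data.find(b"anih", pos + 4)
    return False
```

Against the specimens above, this check would fire on the second, malformed header that the content-match signature never reaches.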

Thanks to REDACTED, here is another analysis of the vulnerability that
draws a similar conclusion (limited effect of IDS) but for different,
less-technical reasons that could generally be applied to all IDS rules:
http://erratasec.blogspot.com/2007/04/ani-0day-vs-intrusion-detection.html

2007-01-16

Administrative: Server outage, etc.

Well, I had a server outage right in the middle of the insanity of the holidays, and have yet to be able to fully recover. Surprising as it may seem, I do not have spare hardware sitting around for such a problem, or the free cycles to correct it at this time.

In any case, the new home of this blog will be http://blog.cloppert.org, rather than the old cloppert.org/blog site. Hosting with Google gives me the reliability I need to develop this into a more professional and reliable source of information (and opinion). While I typically avoid product recommendations, this is what I'll be using until I get my home web services installed. Please update bookmarks & RSS feeds accordingly; this virtual location will not change again, even though it may occasionally migrate from a hosted server to a home server.

Thanks, and sorry for any inconvenience.