WMF 0-day exploit & developing Snort sigs

Two days ago, the presence of a Windows Metafile / graphics processing engine vulnerability made itself known in the form of an exploit, first identified on the Full-Disclosure mailing list. If you're not already familiar with this, I recommend you read up on it now. This has the potential to cause a lot of problems, even though its current implementation is "only" the distribution of spyware. For starters, you can check out the ISC's announcement and Microsoft's advisory.

Since part of my job is protecting a large company from junk like this, I immediately began searching for work-arounds. Having tested (and failed with) all of the work-arounds mentioned on public websites, I turned to my IDS. "At least," I figured, "I can see when my systems are getting compromised." Unfortunately, all of the signatures I'd found for the WMF exploit seemed either a bit arbitrary, overly specific to this one exploit, or generally not well-formed. I therefore took it upon myself to develop one of our own. Here, I share the process I used and the resulting signature. It was a good exercise in proper signature building that I hadn't been through in a while. If you develop IDS signatures for your organization, this may be worth a read. Oh, and one more thing: this is your fair warning that my [pre] tags may make some of the literal text a bit difficult to read. I tried to work around this, but there are still formatting problems - my bad.

So, my approach was to identify any WMF files that contained what looked like an overflow or exploit, rather than write the signature to the exploit I currently had in-hand. My first task was to get a handle on what WMF files looked like: their file structure, etc. I found a great sourceforge article describing this here.

From that article:

The standard Windows metafile header is 18 bytes in length and is
structured as follows:

typedef struct _WindowsMetaHeader
{
    WORD  FileType;      /* Type of metafile (0=memory, 1=disk) */
    WORD  HeaderSize;    /* Size of header in WORDs (always 9) */
    WORD  Version;       /* Version of Microsoft Windows used */
    DWORD FileSize;      /* Total size of the metafile in WORDs */
    WORD  NumOfObjects;  /* Number of objects in the file */
    DWORD MaxRecordSize; /* The size of largest record in WORDs */
    WORD  NumOfParams;   /* Not Used (always 0) */
} WMFHEAD;

FileType contains a value which indicates the location of the metafile
data. A value of 0 indicates that the metafile is stored in memory,
while a 1 indicates that it is stored on disk.

HeaderSize contains the size of the metafile header in 16-bit WORDs.
This value is always 9.

Version stores the version number of Microsoft Windows that created the
metafile. This value is always read in hexadecimal format. For example,
in a metafile created by Windows 3.0 and 3.1, this item would have the
value 0x0300.

FileSize specifies the total size of the metafile in 16-bit WORDs.

NumOfObjects specifies the number of objects that are in the metafile.

MaxRecordSize specifies the size of the largest record in the metafile
in WORDs.

NumOfParams is not used and is set to a value of 0.

The first 18 bytes of the captured exploit file (don't try this at home, kids!) are:

[clopperm@orion wmf_vuln]$ hexdump -n 18 xpl.wmf
0000000 0001 0009 0300 1f52 0000 0006 003d 0000
0000010 0000

Remember, hexdump is displaying these as little-endian 16-bit words, so the bytes within each word appear swapped relative to their order in the file. Decoded, this means we see...

WMFHEAD our_file = {
    WORD FileType = 1;        /* won't change */
    WORD HeaderSize = 9;      /* always 9 - won't change */
    WORD Version = 0x0300;    /* doesn't matter, but 0x0300 = Windows 3.0/3.1 */
    DWORD FileSize = 8018;    /* 8018 WORDs = 16036 bytes; checks out, but may change */
    WORD NumOfObjects = 6;    /* 6 objects in file, may change */
    DWORD MaxRecordSize = 61; /* 61-WORD max record size; not sure how this impacts the sig either */
    WORD NumOfParams = 0;     /* always 0 */
};

So, we can say the typical WMF header looks like this:

0001 0009 ???? ???? ???? 0006 ???? ???? 0000
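As a sanity check on that pattern, the header fields can be pulled apart with a short script. This is just an illustrative sketch (the function names are my own, not from any WMF library); it mirrors the fixed words in the pattern above, including NumOfObjects=6 as the signature does, even though that field could legitimately vary between files:

```python
import struct

# 18-byte standard WMF header, little-endian:
# FileType, HeaderSize, Version, FileSize, NumOfObjects, MaxRecordSize, NumOfParams
WMF_HEADER = struct.Struct("<HHHIHIH")

FIELDS = ("FileType", "HeaderSize", "Version", "FileSize",
          "NumOfObjects", "MaxRecordSize", "NumOfParams")

def parse_wmf_header(data):
    """Unpack the first 18 bytes of a WMF file into a field dict."""
    return dict(zip(FIELDS, WMF_HEADER.unpack(data[:18])))

def matches_wmf_pattern(data):
    """Mirror the fixed words in the byte pattern above."""
    if len(data) < 18:
        return False
    h = parse_wmf_header(data)
    return (h["FileType"] == 1          # 0001: disk metafile
            and h["HeaderSize"] == 9    # 0009: always 9 WORDs
            and h["NumOfObjects"] == 6  # 0006: pinned as in the signature
            and h["NumOfParams"] == 0)  # 0000: always 0
```

Running `parse_wmf_header` on the 18 bytes from the hexdump above recovers exactly the values in the filled-in struct (FileSize=8018, MaxRecordSize=61, and so on).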

Translating this into a Snort signature (remember, the words above are byte-swapped; on the wire the bytes appear in their original file order, so we swap each pair back), we get something like this:

alert tcp any any -> any any (msg: "INFORMATIONAL - Windows Metafile (WMF) detected"; content: "|01 00 09 00|"; content: "|06 00|"; distance:6; within:2; content: "|00 00|"; distance:4; within:2; sid:2000002; rev:1; )

Testing this signature on our exploit, it fires properly. MS Office comes with a large number of WMF files. I tested this signature's effectiveness by copying all of the files in the clear between two servers and validating that this identification signature fired on them, as well.

The WMF files in MS Office appear to be Aldus placeable metafiles, which means they have an additional header at the beginning of the file (see the sourceforge article). Since we didn't specify a depth, the signature fired anyway. This is good. While it would be nice to be able to specify an initial search depth to match the first pattern, HTML & such appearing before the file make this difficult to guess.
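When inspecting sample files by hand (rather than over the wire), it's convenient to normalize placeable metafiles first. The 22-byte Aldus placeable header begins with the magic DWORD 0x9AC6CDD7; here's a minimal sketch to skip it (function name is my own):

```python
# Aldus placeable metafiles prepend a 22-byte header whose first DWORD is
# the magic key 0x9AC6CDD7, stored little-endian on disk.
PLACEABLE_KEY = b"\xd7\xcd\xc6\x9a"
PLACEABLE_HEADER_LEN = 22

def strip_placeable_header(data):
    """Return the standard WMF portion, skipping any placeable header."""
    if data[:4] == PLACEABLE_KEY:
        return data[PLACEABLE_HEADER_LEN:]
    return data
```

With the placeable header stripped, the remaining bytes should begin with the 18-byte standard header discussed above.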

We know the Intel NOP opcode is 0x90, so a long run of these bytes identifies a NOP slide. In the exploit we have in hand, the NOP slide runs from offset 0x426 to offset 0x15b0, but without more detailed knowledge of the vulnerability (and to make this more generic, covering ALL WMF overflows), we'll need to search the rest of the packet. We'll require 32 consecutive NOPs (an arbitrary threshold) to trigger. Our new signature looks like this:

alert tcp any any -> any any (msg: "SHELLCODE - Windows Metafile (WMF) with NOP Slide"; content: "|01 00 09 00|"; content: "|06 00|"; distance:6; within:2; content: "|00 00|"; distance:4; within:2; content: "|90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90|"; distance:0; sid:2000002; rev:1; )

Testing this against our exploit, we have a winner! This should detect this and any other WMF file that includes a NOP slide. To confirm we didn't just create a noisy signature, I verified that it didn't fire on traffic containing legitimate WMF file copies, the same traffic we used previously to validate the part of the rule that detects the WMF file format.
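For offline testing, such as sweeping a directory of WMF samples before deploying the rule, the same two checks are easy to prototype in a few lines. A sketch using the exact byte offsets the signature matches (the 32-byte threshold and function names are my own choices, not part of the rule):

```python
def has_nop_slide(data, threshold=32):
    """True if data contains `threshold` or more consecutive 0x90 (NOP) bytes."""
    return b"\x90" * threshold in data

def suspicious_wmf(data):
    """Header matches the WMF pattern AND the body contains a NOP slide."""
    return (len(data) >= 18
            and data[0:4] == b"\x01\x00\x09\x00"   # FileType=1, HeaderSize=9
            and data[10:12] == b"\x06\x00"          # NumOfObjects, as in the sig
            and data[16:18] == b"\x00\x00"          # NumOfParams
            and has_nop_slide(data))
```

A benign Office metafile should fail the `has_nop_slide` check, while the captured exploit passes both, matching what the signature does on the wire.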

The "any" keywords for ports & IPs are dangerous. This is the one thing we'll tailor specifically to this exploit, so we don't over-burden the IDS:

alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg: "SHELLCODE - Windows Metafile (WMF) with NOP Slide"; flow: established,from_server; content: "|01 00 09 00|"; content: "|06 00|"; distance:6; within:2; content: "|00 00|"; distance:4; within:2; content: "|90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90|"; distance:0; classtype:attempted-user; reference:url,www.frsirt.com/english/advisories/2005/3086; sid:2000002; rev:1; )

This will catch any HTTP download of a WMF file with a NOP slide in it. You'll note that I also added the classtype and URL references. Of course, this was tested against the exploit flying over the wire, as well as legitimate WMF file copies, and passed with flying colors. Now, all we need to worry about is the attack vector changing; we don't have to worry about the specifics of the exploit!

Hope this is informative to everyone. It was yesterday's afternoon project for me :-)


InfoSec Podcast

As of yesterday morning, I've become a huge fan and experienced consumer of Podcasts. Okay, so I've listened to about 10, but really, who's counting?

In any case, a co-worker recommended the CyberSpeak podcast, hosted by former federal agents Bret Padre and Ovie Carroll. Listening to the previous two installments, I have to say this is an enjoyable 'cast with two guys who obviously know their stuff. While I'd already read about most of the news discussed, it was great to hear it discussed in an informal setting by two professionals. The perspective offered by witnessing such a dialogue is very informative, and helped me digest the news in a different way than simply reading about the stories on my favorite InfoSec news sites. I highly recommend listening in, especially if you don't have the time at work to discuss these types of events with other "birds of a feather;" or, as may be the case for some, there are no BoF's where you work. The tidbits of peripheral information they offer are often insightful, and not the kinds of things you pick up from simply reading the news. Their personalities also translate what could be boring, inane material into something that's genuinely fun to listen to -- well, for people in the industry, anyway (I wouldn't recommend it to someone like my father, the accountant).

(Entry updated 12/30 with the preferred URL for the podcast; thanks, Bret & Ovie!)


Some Casual Reading and 2006 Security Forecast

Interesting Articles
While I don't intend for this blog to become YANLS (Yet Another News Linking Site), I've come across a number of articles at work recently that have been very informative on a variety of subjects. Reader beware: these range in technical complexity from the most basic to the very complex.

I've been preaching to friends, family, co-workers, and anyone who will listen about the good, yes, good things that legislation like Sarbanes-Oxley, GLBA, and HIPAA have brought to the world. Most notably, pushing real security controls & technologies into companies previously too cheap to heed the advice of their security analysts. CSO Online recently published an article titled How To Love Sarbanes-Oxley, written by a security manager at Kennametal, that gets into the details of some of the good effects of the legislation, from a security perspective.

In the category of "blogging about blogging," we find the Security Monkey blog titled A day in the life of a security investigator. This is an entertaining read that illustrates some of the trials of being a security analyst. It's well-written, and a good read if you have some time on your hands. Writing about the specifics of my work is an employment gray area that I painstakingly avoid, but if I were to do so, it would often read much like this. For those of you who listen to podcasts, this may be of interest to you as well.

Finally, I ran across a very detailed and informative overview of both recovering data deleted off of magnetic media, as well as how to delete this data so that it's difficult to recover. The paper, from the University of Auckland, is titled Secure Deletion of Data from Magnetic and Solid-State Memory.

2006 Security Forecast
It's going to rain. But then again, since about 1999/2000, the security outlook for any given year has looked about like the weather forecast for Seattle: Rain, with a chance of more rain. I actually just wanted to note a few specifics in this section:
  1. Security configuration management software, also known as "Enterprise Configuration Management Software" and "Enterprise Risk Management Software" will get a lot of attention. This is software that will aggregate the configuration data from your network control devices (firewalls, routers, etc.) as well as your vulnerability assessment software. Based on this information, it can give you a view of your overall security risk (one host has a higher risk than another, because a vulnerability is not mitigated with a firewall rule), analyze access between nodes & sites, and evaluate the impact of network changes on the risk assumed by your organization. Skybox is, to my knowledge, the only COTS product that can deliver this. I will be writing about this software in more detail in the near future, hopefully.
  2. We will see more effective use of IM and P2P software to spread malware. These are the two vectors that offer the greatest amount of targets through the exploitation of a single technology, and to date have not been effectively attacked. Specifically, this makes for a great introduction into an otherwise-secure corporate network, since many users circumvent strong firewall egress controls by connecting to these services via weak HTTP proxy controls. A hybrid AIM / Microsoft vulnerability-du-jour attack could be especially damaging.
  3. Personal networking sites like Myspace and Friendster allow users to post HTML content without going to the pain of setting up their own website, DNS, etc. This is also a great way to launch phishing attacks, as well as host malicious code to be downloaded by bots, exploit the IE vulnerability du jour, and a variety of other bad things with minimal accountability. This has been rare to date, if it's happened at all, but the potential of these sites will soon be discovered by attackers. If administrators aren't proactive in filtering the content of their users' pages, this could mean trouble for the rest of us.


Tuning for Cryptographic Hashing Algorithms

The main reason I haven't written in a while is that I've been working on a paper on performance tuning for cryptographic hashing algorithms, which is finally done. This paper investigates processor configurations optimal for the RIPEMD-160, SHA-1, and MD5 algorithms. It provides detailed information on CPU design, cache organization, and compiler optimizations for each of the algorithms, and draws some conclusions on the cryptographic hashing application domain as a whole. The data is also available on my website. This information would be helpful to an engineer designing an FPGA for hardware-accelerated hashing, and could also be of use in choosing a general-purpose CPU that will be performing intensive cryptographic hashing. There is certainly more to explore than what the paper covers, but it gives a good overview of the demands placed on hardware by these algorithms and opportunities for better efficiency.
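The paper's measurements were made natively, but you can get a rough feel for the relative cost of these three algorithms from any scripting environment. A quick sketch using Python's hashlib (this measures your local OpenSSL build, not the hardware-level behavior the paper analyzes; note RIPEMD-160 is missing from some OpenSSL builds, hence the guard):

```python
import hashlib
import time

def throughput_mbps(name, size=1 << 20, reps=16):
    """Approximate hashing throughput in MB/s: `reps` passes over `size` bytes."""
    buf = b"\x00" * size
    start = time.perf_counter()
    for _ in range(reps):
        hashlib.new(name, buf).digest()
    elapsed = time.perf_counter() - start
    return (size * reps) / elapsed / 1e6

for name in ("md5", "sha1", "ripemd160"):
    try:
        print(f"{name:10s} {throughput_mbps(name):8.1f} MB/s")
    except ValueError:
        print(f"{name:10s} not available in this OpenSSL build")
```

On most systems you'll see MD5 come out fastest and RIPEMD-160 slowest, consistent with their relative per-block workloads.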


The Inter-Species Computer Virus

There's been an enormous amount of focus on the recent Sony DRM fiasco across the security industry recently, so rather than reiterate what's already been said, I figured I'd point out something that's been simmering in the back of my mind for a number of months now: what I call the "inter-species computer virus."

I see viruses as being on the cusp of a revolution. While it's true we've seen viruses move from computer to PDA in the past (I wish I could find a source, but at the moment I can't), this has so far failed to cause any major outbreaks. However, as the typical handheld device becomes increasingly complex and internetworked, the potential damage from such a worm has grown significantly in the past few years. This threat is especially pronounced with cellular devices. Recently, a virus was written that tried - unsuccessfully - to make the jump from a cellphone to a PC. The spread method was rudimentary, using a smart card as the carrier mechanism for the code. But it foreshadows more virulent threats to come. Consider the different propagation vectors that are now available between some devices and "ordinary" computers that never used to exist:
  • Conventional PDAs. Data is transferable via (1) memory chip, as with the aforementioned virus, (2) bluetooth, and (3) USB connection. Targets could include other PDAs or PCs. Most of these devices are also infrared-enabled, but I'd consider that threat minimal due to limitations of the technology.
  • Cellular Phones. Resembling more and more of a computer as the months progress, potential device-to-computer data paths include (1) memory chip, (2) SMS text messaging to other devices like blackberries, (3) instant messaging, and (4) bluetooth. And that's not beginning to cover the functionality available in devices like the Treo.
  • Blackberry devices. These are the ultimate virus propagation devices. They communicate with other devices via (1) internet access on the cellular network, (2) bluetooth, (3) SMS to other portable devices, (4) instant messaging through a third-party app, (5) email, and (6) USB connection. That's no less than SIX different vectors an infected Blackberry device could leverage to attack other devices like cell phones or ordinary computer systems.
  • IP Phones. These don't represent a bridge between computers and mobile devices yet, but the phone on your desk is now a device that potentially touches the same data network as your computer, and the same phone network as your cell.
As we have seen in the past, if you build it, the h4X0rZ will come. There's no need to claim the sky is falling, and there's no sense in irrational policies based on speculation of what could happen. But the industry, and most importantly users, need to be aware of the implications that all of these shiny objects bring with them. I see this kind of a problem as more of an inevitability than some science-fiction concoction. It's up to us, and device manufacturers, to be proactive in addressing this. Hopefully, with strong user awareness and an industry prepared to deal with such threats, the damage will be minimal.


Follow-up to Insecure Code Accountability

My last post discussed former Cybersecurity Chief Howard Schmidt's proposal to hold software developers accountable for insecure code. I stated that Mr. Schmidt exhibited a fundamental misunderstanding of how software development works. On Sunday, Bruce Schneier took a different approach by discussing the economics of software purchasing & development, and how these realities mean such an approach wouldn't work. An interesting read.


And this guy's an "expert"?

On Tuesday, former White House "cybersecurity advisor" Howard Schmidt suggested that developers should be held personally accountable for software flaws. "We need individual accountability from developers for end-to-end solutions," he is quoted as saying.

It is scary that someone who held such an influential position in politics regarding information security so clearly lacks a fundamental understanding of the process by which software is released. Accountability is a major problem right now in the software industry, but blaming the coders is a terrible approach. When I did some work as an engineer, it quickly became apparent to me how easily management decisions (and even promises made by marketing/PR departments) could compromise the quality of my work. Unreasonable or constantly changing priorities and timelines can easily degrade the quality of any employee's work, whether in the software industry or otherwise. And poorly-implemented software development life-cycle models, which are often controlled by a programmer's employer, can also allow buggy code to make it to final release.

There are so many factors, and people, involved in software development that it's unreasonable to hold individuals accountable. Only by holding companies that develop software accountable will we begin to see an increase in software quality.


10/6/2005: A Dark Day for Security

The sky is not falling. The apocalypse is not near. Symantec & McAfee have not merged to corner the anti-virus market. But on this day, we see dark clouds on the horizon for the InfoSec industry.

First, Checkpoint, overlord of the software running a sizeable percentage of the world's firewalls, announced it was buying up Sourcefire, maintainer of the wildly popular and industry-leading open source Snort IDS. Within a matter of hours, news broke that Tenable Network Security, maintainer of the wildly popular and venerable Nessus vulnerability scanner, would no longer release its software under the GPL beginning with the next major release due to a "loophole" allowing its competitors to copy off of Tenable's work.

Business and economic arguments aside, these are ominous developments. For years, Snort and Nessus have both been considered the baseline to which other COTS products in their respective fields have been compared. Their open development and liberal use licenses are a big part of what made them so popular and well-known. Of course, they stand on the merits of their technology alone. But it's been the ease of access to these products that has made them so pervasive.

Some of the implications of these announcements are obvious. One that may not stand out as much, and is worthy of special note, is their impact on small and medium-sized businesses. These companies are often the bane of security analysts, as their low-budget IT shops can't afford good security, or haven't yet realized its importance. Nessus and Snort are a staple for security-conscious IT staff in those situations, working on minimal IT budgets where vulnerability assessment scanners and IDSs are scraped together using spare parts and old equipment. Even so, thanks to Snort and Nessus, these administrators can run VA scans and intrusion detection tools that provide the same quality of results as a security team in a company with a budget orders of magnitude larger. Today's news seriously jeopardizes these capabilities down the road, putting small and medium-sized companies in an even worse spot. Less security for these companies means more zombies, more warez sites, more worms, and generally bad news for everyone.

For Nessus, this is the nail in the coffin. While Tenable plans to keep supporting version 2.0, it's only a matter of time before this development tree is EOL. Hopefully, the new license isn't too restrictive, and will facilitate continuous development and acceptance amongst individual information security professionals. Tenable thinks that it has gotten very little from the open-source community in return for its GPL'd software. While this may be true, it overlooks the fact that the open nature of the GPL is what allowed Nessus to become so prevalent in the first place.

The story for Snort is not nearly so bad. It's almost certain that Checkpoint's short-term intention is to integrate Snort into its Checkpoint NG firewall software (or whatever its next major release will be called) to create a combined IDS/IPS/firewall product offering. While this is happening, I can see Checkpoint leaving Sourcefire's product alone. What concerns me is what happens to Snort after that. Once Snort is integrated with Checkpoint's closed-source firewall software, its license is certain to change; otherwise, the entire product would have to be GPL'd. The new integrated product is where Checkpoint is likely to focus its development. This means one of two things: either the new features and technologies implemented in the IDS/IPS/firewall software will be brought into the Snort/Sourcefire product offering, meaning a license change away from the GPL, or, even worse, no further development on the stand-alone Snort will be done.

The story for these two security stalwarts is far from over, and many events could transpire that make this a non-event. But for now, the future of these two products is up in the air. And for this security professional, that is a very scary thing.


DDoS: Everything you could possibly need to know

This is more of a reference than it is a blog. I recently found a link to the most comprehensive all-in-one resource for information relating to Distributed Denial of Service (or DDoS) attacks that I have ever seen. The page seems to be well maintained (as of the publishing of this blog) by its author, Dave Dittrich. Mr. Dittrich has been involved in a number of infosec research projects, including the Honeynet Project, teaches at the University of Washington, and has done extensive work with DDoS tools and in related research.

This DDoS resource goldmine is broken down into sections including:
- Related literature
- Analysis and talks on attack tools
- Defensive tools
- Advisories
- Mitigation information
- Legal implications
- Related research
- News articles
...just to name a few. If you've ever been interested in doing any work or research in this field, this page is a great starting point.


A light at the end of the tunnel

Nearly a year ago, I wrote about the need for a standard malware nomenclature. Around the same time, I also commented on the need for an information security clearinghouse, possibly run by the DHS. It seems someone was listening to the pleas from the security community: today, C|Net reports that US-CERT (run by DHS) will be getting into the business of naming malware by acting as the public face of the Common Malware Enumeration Initiative, designed by a number of government entities as well as the much-respected MITRE. By running this through the government, the politics of inter-company nomenclature are completely circumvented. Each company can keep their own nomenclature, and map to the CME ID through their products and websites. One major issue that isn't clearly addressed, however, is how variants will be handled by CME. The state of malware being what it is today, this is the biggest point of confusion in battling outbreaks. New viruses aren't nearly as common as variants of old, tried-and-true formulas. Without a way to clearly address variants, this system may be much less effective. Its potential at this point, however, is great.

This announcement, along with other recent developments at US-CERT such as the revealing of the National Vulnerability Database (NVD), is positioning the site to become a critical juncture for the information security community. It appears the DHS, in at least one small respect, is starting to show some positive progress in its mission. I've personally met the gentleman responsible for the creation of the NVD at NIST, as well as some others involved with US-CERT, and have been very pleased with what I've seen. This is something to keep a close eye on in the coming 6-12 months, as it may soon be bookmarked as your browser's home page.

The only concern I have thus far is how quickly and completely US-CERT disseminates information to the public. There is much more to US-CERT than meets the eye; it is also a powerful tool for inter-agency communication and data sharing within the US Government. If the movement of information from the protected side to the public side is kept open, this may end up being a key cog in fighting the good fight for analysts in the years to come.


Marcus Ranum on the "Six Dumbest Ideas in Computer Security"

While I don't want to become a blog that blogs about other blogs, one particular piece by Marcus Ranum is noteworthy: The Six Dumbest Ideas in Computer Security. Marcus Ranum is one of the oldest names in Information Security; he has been involved in designing a number of groundbreaking tools and is currently the CEO of NFR. Suffice it to say his comments carry some weight in this industry. Many of his opinions in this paper are valid, but there are a few I disagree with. Regardless, this is a piece that is worth reading closely.


New trends in malware

First off, it has been over two months. A busy, busy summer has unfortunately made me put this project on the back burner. I'm hoping to reverse that trend in the coming weeks as I attempt to work an update into my Monday morning routine.

Administrative notes aside, this week saw two important revelations in malware. The first has the broadest implications, and is merely a foreshadowing of darker days ahead. F-Secure reported that the Commwarrior.B virus took out nearly all of a Scandinavian company's cell phones a week ago Wednesday, according to C|Net. This is just months after a WDSGlobal expert claimed that the threat is overblown, citing internal data that such viruses accounted for only 0.0036% of all of that company's support calls. What's important here is the difference between the current threat and the future threat. While the current state of affairs is such that these infections are relatively rare, the atmosphere is as ripe as it could be for a major, major problem in the not-too-distant future, and necessitates that security professionals begin thinking about what to do when that time comes. Features are being added to mobile phones at a blistering pace, making them behave more and more like portable computers than simple telephony devices. Want proof? Many believe that Apple is set to release iPod-like phones with a major phone manufacturer any day now. As these new features are rapidly added, history shows us that security takes a back seat to features and shortening time-to-production. Hopefully, history will not repeat itself here.

A second uncelebrated, but important piece of security-related news in the past week was the linking of an individual suspected of authoring the Zotob worm to a credit card fraud ring. For over a year, security experts have been warning that the identity theft and malware underworlds were colliding. Recently, the public has finally begun to see that in cases like the CardSystems ID theft. This marks the first major malware outbreak, to my knowledge, that has been linked by law enforcement authorities to an identity theft ring. Moreover, the suspect, Farid Essebar, is also believed to have had a hand in 20 other pieces of malware. This could be the groundbreaking case that offers the public a rare glimpse of the collision of two underground groups, and is worth following.


The future direction of data breaches

A few weeks ago, SANS' Internet Storm Center handler Marcus H. Sachs requested input from the community on what we feel will be the new trends in Information Security in the coming months. He was kind enough to quote an excerpt from my response:

Mike was maintaining a positive outlook when he wrote, "For years, organizations have been spending a lot of money on poorly-implemented or half-baked security solutions so they can check a box on an audit finding. At the same time, auditors have been providing findings of such poor quality that the information is nearly useless to their customers. I believe some of the recent high-profile identity theft cases will bring this to light, and hopefully improve auditing practices and force the hand of large organizations to *properly* implement security technologies."

I've been increasingly convinced of this in the weeks since I wrote that. However, there is another side to that coin: in order for bad but audit-compliant practices to be exposed, some colossal failures must happen. This week, we saw the first, with the theft of 40 million (yes, a 40 with 6 zeros after it) credit card numbers improperly kept by CardSystems Solutions. They were out of compliance with Visa and MasterCard standards, but had "recently" passed an audit (I believe the date they mentioned was 2003).

I have seen the Visa and MasterCard audits first-hand, and I have seen what "compliance" means. It means dusting everything off, putting together some reports to show that your organization was recently following the standards & practices agreed upon, panicking for a few weeks until the auditors leave, and then letting everything fall apart again shortly thereafter. Granted, my experience was only at one company, but the company was only "compliant" while the auditors were there. They were able to get away with it, and I suspect CardSystems Solutions did the exact same thing. Auditors are far too lenient on corporations, rubber-stamping compliance checks because the company has a "plan in place to implement" any security requirements not currently met. Plans that are surreptitiously ignored until the next go-round.

Less visible to consumers, and more visible to organizations, are the details of audit findings. These are littered with more false positives (or, audit findings that are incorrect) than actionable information. The data is poorly presented, voluminous, and very difficult to manage, with the exception of executive-level reports, of course. IT shops sink vast resources into addressing or proving invalid the specifics of these low-quality audit findings, resources that would be far better spent on bigger-picture projects like implementing technologies that will enable audit compliance or improve security in the future.

The injustice in this process, and its subsequent discovery and correction, is that consumers will have to suffer before their financial lives are improved.


How much do you trust your users?

In 2000, the Computer Security Institute and US FBI released an influential study [PDF] that showed 80% of attacks originate from inside an organization. This oft-cited and long-outdated report highlighted a real problem that had previously been ignored by many organizations. It gave security analysts the evidence they needed to convince IT managers that the "candy bar" security model - networks with a hard and crunchy exterior, but a soft & chewy interior - didn't work.

A few days ago, another study was released that could provide further impetus to improve information security policies across the board. This more relevant data is buried in a report by the US Secret Service and Carnegie Mellon's CERT which concluded that "insider revenge is often behind cyberattacks." The somewhat-alarmist conclusions and sound bites highlighted in the report obscure some very interesting statistics, particularly that "57% of the attacks were carried out by systems administrators, while 33% were caused by privileged users."

This study was rather limited and still a little dated, involving 49 cases of insider attacks between 1996 and 2002. But its results still speak volumes: 90% of all attacks in the study were performed by users with higher-than-normal privileges. If this isn't enough to take the wind out of the sails of those who still believe a firewall is adequate protection, then those people are beyond the realm of rational thought.

The lessons here are twofold:
  1. The number of people who get administrative or elevated privileges should be kept to an absolute minimum, and
  2. Those who have elevated privileges should be the most carefully watched.
All of us must place a certain amount of trust in users; the business lines of any organization cannot function otherwise. Even in our personal lives, we take such risks. I let friends use my computer, and have an unprivileged account set up specifically for this purpose. Sometimes I get lazy, and let them do things like surf the web with my user account logged in. One time, I found that someone had gone through my personal email. No one is immune to this. Given that human fallibility is at the root of the problem, the only solution is what I mention above: restrict and monitor.

Hopefully, some inquisitive minds with the necessary time and funding will be able to perform a similar, broader study on attacks by users with elevated privileges. A study with this specific focus would grab the attention of a broader audience, including IT decision makers, and further raise the bar on internal, layered security.


CISSP Practice Exams: Buyer beware

As I've said in earlier posts, in the interests of being unbiased, I try to avoid commenting on products, services, or companies directly unless it applies to a specific point. This article will be an exception.

Earlier today, I received an email forward from a coworker of mine who is studying for the CISSP. On Saturday, he spent around $100 on a practice exam produced by Boson, and his experience is worthy of note. Sean is an experienced, skilled, and knowledgeable IT professional whose opinions I respect greatly, and while I have never used the product he refers to, I have no reason to doubt the validity of his complaints. The entire contents of his email are as follows:
---------------------------- Original Message ----------------------------
Subject: CISSP practice exam a huge let-down
From: "Sean Wilkerson" <sean@xxxxx.com>
Date: Sat, May 21, 2005 4:06 pm
To: support@boson.com

Boson Support,
I am in the midst of studying for my CISSP exam, which I am due to take
in three weeks. In preparation for this exam, I have taken numerous
practice exams, including those offered directly by ISC2 (the
organization who makes, and hosts the CISSP). I did some side-by-side
analysis of your exam features vs. cram-session's and decided to go with
yours. This morning, I purchased the three exam pack which you feature
for the CISSP, and then took exam one of the series. I was miserably let
down by the content, grammar, and structure of your exam, which I found
to be counterproductive, and distracting. After about 20-30 questions
in, I realized that the problems with your exam were not limited to the
rare case of a bad question, but were throughout the entire question-set.
As this point, I started taking notes as to the major complaints I have
with your product, which I would like to share with you here.

- The UI continually messes up, by not showing entire question. To see
the entire question, you have to frequently adjust the size of the
application window. Even if the window is maximized, you still don't see
it all, without adjusting the window slightly, which results in the rest
of the text suddenly appearing. This is a glitch, flat out. I am using
Windows XP SP2, which should be supported. Being a security
professional, I also have my system entirely patched (with the latest MS
patches), firewalled, anti-virus protected, and have NO malware
installed on my system as detectable by any of the several tools I use.
- I find your questions to be vague and confusing. They are not clear or
specific (as the real CISSP questions are). I have done enough research
for the CISSP to know, what types of questions to expect on the exam and
what you provide is not it. I found that your questions were not even
remotely similar to the sorts of questions I will see on the exam. The
fill-in-the-blank non-sense, the questions about vendor specifics (see
below), the failure to use the actual terms you were describing in the
question, were all symptoms of this problem. This issue is exacerbated
by the bad grammar, (see below).
- Incredibly poor grammar throughout the test in both the questions and
answers, though mostly the questions. Lots of simple mechanics
mistakes. Extremely poor editing. This is INCREDIBLY distracting. I
found myself mentally correcting the exam's grammar, rather than
concentrating on the content. This is not a failure of me, the test
taker, but of the test content provider and editing staff.
- The CISSP uses the same format for every question. Specifically, there
is a question, with four multiple choice answers, to which the test-taker
should choose the one (read 1) choice which best answers the question.
Your test had questions with anywhere from four to six possible answers.
Furthermore, many of the questions required more than one answer. If the
intention is to prepare a customer for the CISSP, than this is
- There are too many questions on proprietary software and OS
platforms. The CISSP is software and OS agnostic, so a well-written
practice exam should be as well. Being intimately familiar with MS
Windows, for example, is not a requirement of either being a CISSP or a
security professional, and should therefore not be on a practice test
designed to prepare one for the CISSP exam.

I am not usually the one who speaks out, or complains about trivial
things, but I feel this is non-trivial. I the $99.48 I paid for these
this morning was a waste of money. Additionally, the time I spent this
morning both taking your exam, and writing this e-mail, has done nothing
to help me prepare for, or pass the CISSP exam, but has instead giving
you critical feedback which will *hopefully* help you to improve your

I have already un-installed your software from my computer, and plan to
never use it again, as I see no benefit.

Please get back to me soon and explain how you will honor the quality of
your product and customer service.

Sean Wilkerson
I'd like to thank Sean for his permission to reproduce this informative email. I know how thoroughly and meticulously he researches everything, so I'm certain that from all the information available publicly, this looked like a good exam. The only recommendation I can make to avoid this situation is to talk with people who have taken both the practice exam and the CISSP itself before spending money on any practice exam.


Malware analysis and the long days of May

The long period without an update is not without good cause; I am currently finishing a paper on an XML framework for intrusion detection signatures. The paper is technically finished, but not yet polished to the point where I would feel comfortable letting the world+dog read it. So, for anyone concerned I had been redirected to /dev/null, fear not.

The Internet Storm Center handler's diary for today has an excellent write-up on malware analysis by Tom Liston, one of information security's more colorful personalities. I first became familiar with Tom's work during the CodeRed outbreak in 2001, when he developed a little piece of bandwidth-saving code that later grew into the LaBrea Tarpit. If you have ever wondered what sort of work and insight go into basic (yes, I said basic) malware analysis, this article offers a peek into the field. As you might expect, the work done here is merely the tip of the iceberg when it comes to analyzing malicious code. Tom doesn't even go into the details of the shellcode delivered by stack-overflow exploits, assembly-level code tracing, or the packet captures often analyzed when reverse engineering malware.


Vulnerability Assessment: A Component-Based Design using CORBA

First, I apologize for the sporadic outages over the past week. A vortex of hardware and ISP problems combined to cause horrible connectivity problems. The ISP issues seem to have been resolved, and new hardware is on order.

Last night I finished a paper titled Vulnerability Assessment: A Component-Based Design using CORBA. This paper discusses a different approach to designing VA tools with the goal of improving reliability and efficiency. Current VA tools are configured and designed in a way that makes them rather inflexible, leading to efficiency and management problems. By designing the system around a component architecture - CORBA, in this case - it can be made more flexible and can handle connectivity issues more gracefully. Also available online is a presentation of the paper's main topics that I gave a few weeks ago at GWU. The paper is not intended to be a complete solution to the problems mentioned, but rather an introduction to the problems and a framework for a solution. Comments are welcome.


What is the Common Criteria?

I received the following via email from a friend today, and thought it warranted some attention:
know anything about this?

Provided in the link is a set of "Common Criteria" tools developed by Apple. Common Criteria (itself shorthand for a longer formal name) is a buzzword that many people in IT have heard, but a topic few have much exposure to. For that reason, I figured I would provide a brief outline and resources for further research.

The "Common Criteria for Information Technology Security Evaluation" is essentially a set of guidelines that can be applied to computers for certification as "secure." I put that word in quotes for a reason: every good analyst knows that meeting certain predefined and broad guidelines doesn't guarantee system security. Additional monitoring and analysis by security analysts should be performed in order to evaluate the unique security issues that apply to each system. However, these guidelines can play an important role in assessing risk within an organization. The CC guidelines are tied closely to "Certification and Accreditation" (often called "CnA") of mission-critical US government systems. Performing CnA's is required by the Federal Information Security Management Act (FISMA) of 2002, and is one item that every government agency is graded on each fiscal year. The formula provided by FISMA employing CC isn't perfect, but it's a step in the right direction.

I haven't used the tools identified in the link, so I'm speculating here, but it appears that this tool will evaluate the system against the Common Criteria established by NIST, the NSA, and a host of foreign government bodies.

Analyzing and implementing the Common Criteria can itself be a career, and I haven't even scratched the surface here. However, I recommend security professionals familiarize themselves with Common Criteria Project concepts, where they are used, and what their goals and implications are. This is information that will come in handy, in a practical or theoretical sense, at some point in many InfoSec career paths.


On Vulnerability Assessment, and Internet Reconnaissance

Today I will be discussing two completely unrelated topics, both involving very recent events.

Vulnerability Assessment
Yesterday, I was privileged to have a project of mine presented as part of the SANS WhatWorks series. Bill Geimer, my boss and the manager of the contract, presented along with the other engineer involved, Brent Duckworth. The presentation was an excellent outline of some of the challenges in implementing an enterprise-class vulnerability assessment/management system from a high level, as well as highlighting how to run such a project smoothly and properly. This was certainly one of the most successful InfoSec projects I've been involved with, and I was happy to see it highlighted to a global audience. By the close of the webinar, over 800 attendees had connected. I was pleased to see so many individuals interested in our work.

The presentation is still available online. If you're interested in effectively using vulnerability assessment tools in an enterprise or business environment, I highly recommend you listen to it. I will post a much more technical entry on designing effective vulnerability assessment tools once my research on Component-based Design of Vulnerability Assessment Tools with CORBA is complete in a few weeks. I would also be more than happy to answer any questions regarding the project here, but the reader should understand that some specific questions may reveal sensitive information and will be deferred.

Internet Reconnaissance: TCP/1025
Moving on to a more serious and technical subject, one network that I monitor has seen an enormous increase in TCP/1025 scans. The network saw nearly 2.6 million requests for this service over the 24-hour period yesterday, from 10,820 unique sources, compared to just a few thousand in previous weeks. According to IANA, this port is reserved for "network blackjack," but I doubt 10,820 people suddenly got the internet gambling+hacking bug in the same day. This was mentioned yesterday in the Internet Storm Center's diary. If anyone has any helpful information on this, please contact the handlers at the ISC so this information can get compiled and analyzed quickly. This is the kind of activity that can precede (and has in the past) a huge attack that affects everyone.


Hardware Fingerprinting: Good but not quite Great

Last week, UCSD PhD student Tadayoshi Kohno and CAIDA associates Andre Broido and KC Claffy published a paper detailing the identification of unique pieces of hardware on a network. The basic assumption of the paper, titled Remote physical device fingerprinting, is that the system clock keeping time on every networked device is inaccurate in a unique way, slowly creeping ahead or behind time at a predictable rate. Mr. Kohno makes compelling arguments that this time skew can be identified across multiple hops, long distances, and high-latency links, even if the system in question is using NTP to synchronize its clock with a more accurate (presumably atomic) clock.
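The paper's core idea can be caricatured in a few lines: collect pairs of (local receive time, remote TCP timestamp), fit a line through them, and read the remote clock's skew off the slope's deviation from the advertised tick rate. The sketch below is purely illustrative, with made-up names and synthetic data; it is not the authors' implementation:

```python
# Hedged sketch of clock-skew estimation from TCP timestamps. Assumes the
# remote stack ticks its timestamp clock at a known rate (100 Hz here).
def estimate_skew_ppm(samples, hz=100.0):
    """samples: list of (local_recv_time_s, remote_tcp_timestamp_ticks)."""
    n = len(samples)
    t0, v0 = samples[0]
    xs = [t - t0 for t, _ in samples]           # elapsed local time (s)
    ys = [(v - v0) / hz for _, v in samples]    # elapsed remote time (s)
    # ordinary least-squares slope of remote time vs. local time
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return (slope - 1.0) * 1e6                  # skew in parts per million

# Synthetic hour of samples from a remote clock running 50 ppm fast:
samples = [(t, int(t * 100 * 1.00005)) for t in range(0, 3600, 60)]
print(f"estimated skew ~ {estimate_skew_ppm(samples):.1f} ppm")
```

Because the fit is over elapsed time rather than absolute time, a constant offset between the two clocks drops out, which is part of why the technique survives long distances and high latency.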

This influential and original research is just the kind of infusion of fresh ideas that the information security field needs; however, it isn't quite the remarkable feat that many have heralded it as. One of the fundamental assumptions of the research is that each system's clock skew is unique, but no data is provided, nor references cited, to back this claim. I would be interested to see follow-up research showing exactly how unique a clock skew can be. Even if a given clock skew were duplicated only once in every million devices, the sheer size of the Internet means that this method alone could not uniquely identify systems. Furthermore, kernel modifications or system-level tools can defeat this process. These two observations do not diminish the value of the research, but they are important points that some supporters seem to have glossed over.

This research will inevitably be added to the toolchest of reconnaissance techniques employed by COTS vulnerability assessment scanners, as well as open-source tools such as Fyodor's nmap. While it won't be used alone, when combined with other data collected by these tools, it will serve as the centerpiece of a set of data that will accurately identify a unique system anywhere on the Internet.


Pity the Consumer

Reports are now surfacing that, following a number of recent security problems at T-Mobile, including the Paris Hilton fiasco, sales of T-Mobile's SideKick are going through the roof.

Excuse me?

Immediately after reading this, during a trip to the men's room, I found someone's PDA. My curious side insisted I at least turn it on long enough to see if there was any password protection or encryption software: there was not. I quickly sent out an APB, and of course left the device off and hidden to protect the owner's privacy, but not all PDA owners are so fortunate.

These two stories illustrate a profound misunderstanding by the general public of what personal security means. Many seem to believe that as long as their social security card, mother's maiden name, and wallet remain out of reach of thieves, they are safe. Nothing could be further from the truth. Personal data on a PDA, or even more so on a service whose security you cannot control (like T-Mobile's SideKick), can be all an attacker needs to steal or sell a person's identity. Clearly, a lot of education needs to be done here, before more companies profit from poor security procedures, practices, and technologies that put their customers at risk.


Secure Online Banking and Another take on Microsoft and Malware

In this article, I have a few comments on two recent Information Security-related news items.

In a previous article titled "Microsoft's role in the battle against spyware," I commented on the irony of Microsoft profiting from a problem they played a big role in creating. It seems I'm not alone in this sentiment. Gartner's Neil MacDonald, speaking at the RSA conference this week, noted "Microsoft's overriding goal should be to eliminate the need for AV and AS products, not simply to enter the market with look-alike products at lower prices." Many of his comments were absolutely on-point, especially those demanding spyware/adware solutions from Anti-Virus companies as part of their current product offerings, not a separate product. TechWeb has a good summary of his speech that I recommend. While I don't always agree with Gartner, particularly after their irresponsible comments declaring IDS dead, what Mr. MacDonald says here is exactly what the industry needs to hear.

In another recent bit of news, a businessman has filed a lawsuit against Bank of America, claiming $90,000 (US) was wired out of his account without his permission. After my experience in a Fortune 500 financial institution, this surprises me only because it hasn't already happened. In our organization, there were impressively-tight security controls from a system and network perspective, but the information security team had little to no say in what went on at the application level, particularly for line-of-business applications developed in-house. Due to the perceived role of the information security team, warnings about shortcomings in online banking applications (such as authentication by card number and PIN alone) fell on deaf ears. It's a shame that things like this have to happen before financial institutions begin taking threats to their customers' accounts seriously. Hopefully other corporations will learn from this mistake so other online banking customers don't fall victim.


An amusing commentary on the state of malware

One of the restrooms at a location where I work has a cheap magazine rack. On occasion, people will leave behind sections of the Washington Post or, more commonly, the free version given to Metro riders called the Washington Post Express. Today, I noticed something different. There was a magazine-sized publication printed on newsprint-style paper in the rack, opened to a page and folded back on itself so as to only show one of the pages. The page contents were divided into three vertical columns, filled with plain-text advertisements for a variety of educational lectures each a few paragraphs in length. At the top, and the center of the page, I saw the following (copied verbatim):

10 ways to avoid the 60,000 Viruses on the Internet
Here is the most valuable computer course you could ever take! Avoid the drive-by download! You'll walk away with a ten point checklist of FREE ways to avoid the 60,000 viruses on the internet. Learn where to test your system for vulnerabilities. Discover which spyware detector is actually spyware! Override your default settings to make your system safe. Find out what the biggest mistake people make with their computer. Why a free firewall is better than Microsoft's and plenty of time for Q&A with "The Computer Guy."
(followed by text about the lecturer)

Knowing nothing about the individual giving this lecture, I have no reason to doubt that the information provided may be very useful to the average computer user. However, I wasn't sure what was more amusing: the text of the advertisement, that the class is $25 for "Nonmenbers" or $15 for "Members" yet you're provided with a "free" checklist, or that the ad was right next to a headline bellowing Astral Travel: How to Induce Out-of-Body Experiences offered by the same organization.

This advertisement was in a magazine from "First Class Inc., a non-profit 'adult-ed' center." This harkens back to my previous call for a comprehensive information security clearinghouse. While the information provided in this lecture may be great, there is an equal chance that it could be bad advice; this organization is hardly an accredited university. The government provides services that serve the public's interest in many aspects: the FDA, the CDC, etc. Why not Information Security as well?


Communications, Privacy Laws, and Security

As far back as 1997, I can remember Voice-over-IP, or VoIP, being called the "next big thing." Today, it seems the prophecies are finally coming true. Unfortunately, the widespread adoption of this technology stands to throw into complete disarray the boundaries of privacy laws intended to protect citizens, and the remediation could have a significant impact on the security industry.

Confusion over the application of the Federal Wiretap Act of 1968 has already arisen with regard to Instant Messaging, and this is a good starting point for a discussion on privacy in a digital environment like the Internet. If I am chatting on AIM from my home computer, sending personal messages to a friend who is at work, the conversation may be recorded. In fact, there is an emerging niche market of products designed specifically for such a purpose. The argument for such monitoring goes like this: every organization has a right (and sometimes an obligation) to monitor the use of its computers and networks. There are many reasons for this, not the least of which is making sure sensitive information is not leaked. If someone happens to be chatting up a storm on IM and personal information gets logged, well, too bad. That individual knows the rules. On the other hand, as the user at home, I have no intention of my message being seen by anyone other than the recipient, and I have no way of knowing that my friend is on a network that might be monitored. On its face, mine seems to be the kind of situation the Wiretap Act was designed for; however, there is little to no precedent either way. And unlike email, which already has a strange judicial precedent, the technology is not store-and-forward, so the one existing ruling regarding Internet communications cannot be applied. Now, I should know that IM conversations are easily read by third parties, but the difficulty of intercepting a conversation has nothing to do with its legality.

These privacy and legal concerns are quickly being realized by adopters of VoIP, except now the technology impacted completely mimics the type of technology the Wiretap Act was meant to protect: voice communications. Every time packets of VoIP data are sent over the Internet, they are most likely being analyzed by packet loggers, IDS's, and a variety of other network monitoring gear. The privacy of this data is entirely in the hands of the people who configured the devices, and the logging of this data falls into the same huge gray area as our IM conversation above. Furthermore, it would be easy to build products to monitor this data in a comprehensive manner, as with the IM conversation recorders above. After all, why not? It's the same communication paradigm: packets of communication data being sent in TCP packets over an IP network. The only difference here is that a person's voice, not fingers, generated the message.
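To see how low the bar is for such monitoring, note that most VoIP media rides on RTP, whose fixed header begins with a 2-bit version field set to 2. A naive classifier (purely illustrative; real monitoring products do far more, such as tracking the call-signaling traffic) needs only a line of logic:

```python
# Hedged sketch: flag UDP payloads that look like RTP, the transport
# under most VoIP media. Checks only the 12-byte minimum header length
# and the version bits (== 2) in the first octet - trivially spoofable,
# but enough to show how easily VoIP packets can be singled out.
def looks_like_rtp(udp_payload: bytes) -> bool:
    if len(udp_payload) < 12:           # minimum RTP fixed header size
        return False
    return (udp_payload[0] >> 6) == 2   # top two bits: RTP version

print(looks_like_rtp(bytes([0x80, 0x00]) + bytes(10)))  # True
print(looks_like_rtp(b"GET / HTTP/1.0\r\n"))            # False
```

That a dozen bytes of header suffice to pick a voice call out of a packet stream is exactly why the legal gray area matters: the technical distinction between "data" and "voice" on an IP network is nearly nonexistent.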

What we have here is quite a conundrum. It's obvious that the current ambiguity with respect to privacy laws cannot last. Lines will be drawn, whether they be in the form of legislation or judicial precedent, and there is a good chance it will make the job of information security analysts considerably more difficult.

I believe that privacy laws are an important part of our democracy in the United States. That being said, security and privacy are often at odds with each other, and some would argue that this is even a zero-sum game: if you gain security, you lose privacy, and vice versa. Consider what would happen to the job of security analysts if it were determined that neither IM nor VoIP conversations may be monitored. Intrusion detection systems would need to ignore such traffic. However, this leaves a significant gap through which an attacker could penetrate a network, as vulnerabilities are found in the associated protocols or their implementations. As an analyst, I cannot both monitor for malicious traffic and protect peoples' privacy! Any false positive that alarms on normal communication, or any attack whose capture also sweeps up benign traffic, would expose me or my organization to lawsuits. The contrary ruling would be just as concerning, as it would be a significant blow to privacy laws in the United States.

The only way to prevent this worst-case scenario is to make sure those who draw the lines in the sand, those who make the laws and set judicial precedent, make exceptions for legitimate and necessary monitoring of network traffic. It is equally important that these exceptions are well-defined, and do not create the potential for loopholes or abuse. In the interim, we must rely on the software and hardware vendors to assist in any way they can. A method for adding legal disclaimers on all IM's entering and leaving a monitored network would be a good place to start. Something similar for VoIP would be very difficult, given the backward-compatibility with POTS systems, but even a brief 2-second "this call may be monitored by networking devices" would work. Of course, there is currently no incentive for companies to install such devices, should they exist. The problem is a complex one, and watching the solution develop in time will be just as exciting as it will be scary.


Administrative Note: Errant Post

It has been brought to my attention (thank you, Kevin) that a draft of an upcoming entry was posted for a brief period of time last night. Often, I write my entries over a period of days, a few minutes or hours at a time. It appears that I accidentally posted last night's "doodlings" instead of saving them. I apologize for the inconvenience. That should teach me to blog after midnight on a work night :-)

Hopefully, the previously-posted doodlings will be translated into rational thought by the end of this evening.


Project Management and Information Security

Many information security-related problems facing large organizations stem from the fact that security was not considered in the ground-up planning that went into many existing IT solutions. IT executives are now beginning to realize this mistake and are challenged with providing a band-aid for poorly-implemented systems already in use, as well as with finding ways to avoid such mistakes in the future. They realize that the only preventive solution is at the project management level; however, their solutions often become just another facet of an implementation process already rife with holes. While an improvement, the problem is still not fully addressed. These implementation issues, which become security issues, can be traced to the very organization of many IT departments, and the only true resolution is a complete paradigm shift at the departmental level.

In my experience, IT solutions are implemented in a very decentralized manner - business lines will manage their own projects, pulling in assistance from different IT groups as necessary. Meanwhile, within the IT department, each technician is grouped in a silo with other like-minded and like-skilled technicians. DBAs exist in the database group, Unix admins are in the Unix group, etc. When a business-line IT project manager pulls in a Windows admin for assistance, the Windows admin won't necessarily know when a data storage expert should be pulled in to implement a storage array, versus simply storing all the data locally. It's easy to see how security fits into this equation: IT engineers can't be experts at everything, and often InfoSec is misunderstood and not considered until the end.

I had a revelation a few months ago while thinking back to a co-op job I had in college as an electrical engineer. I was on a team tasked with building a handheld electronic device that monitored water quality. The team was made up of a few electrical engineers, a few computer engineers, a mechanical engineer, and a project manager. We were all part of the pool of engineers for the company. When management decided to pursue a technology, a team consisting of one engineer of every major type was assembled under an available project manager. Engineers with each specialty were chosen based on their current work load and the anticipated work load of their role in the project. Rarely would any one engineer be working on only one project. Sometimes, an engineer's skills were of very limited use, as with the mechanical engineer on my team. But still, the entire team was held accountable for the successes or failures of the project, so attendance at all team meetings was good, and everyone worked to the same goal, even when their role was a small one. There was no concept of a "manager" for the mechanical engineers, and a "group" for electrical engineers. We were all part of the collective - the Borg, if you will - of engineers, and we were all allowed to focus on our area of expertise.

I surmise that IT project management should work on similar principles: when a business need is identified, it is assigned to a team including a specialist for every major body of knowledge (as defined by management), and handed to a project manager with the appropriate details and timelines. There would be no fights over territory between groups, as is often the case with information security and networking when dealing with firewall issues. Accountability would fall to the entire group, not just one or two individuals. When a solution is needed to support a specific technology - say, a new database must be deployed in support of the Unix group's centralized system logs - the project would not be led by a techie in that group with little to no project management skills. This happens all the time. A separate part of the IT division would be responsible for operational and support issues, meaning constant routing problems in Islamabad wouldn't delay a network infrastructure change in Istanbul. The operational groups would simply follow procedures created by the engineers implementing the technology. And security would be built into every single project.

I am a bit of an idealist. I realize that a radical and fundamental change like this is not likely to happen except in the case of a dreadful reorganization. I've also been told that some companies are already organized in this manner. But for those that aren't, such an approach would not only increase the security of their IT solutions, it would streamline the department's overall response time and support of the critical business lines that make money for the company. This is a win-win situation.


Kevin Mitnick eat your heart out: Some jaw-dropping tales of hacking

Someone get the movie rights to this - we've got a real-life blockbuster on our hands. Kevin Poulsen of SecurityFocus reports today that a hacker had access to one or more T-Mobile servers for over a year, with complete access to innumerable accounts and the data therein, including the account of a Secret Service agent, which exposed confidential information on ongoing investigations. It is the most spectacular hacking story I've ever read, with the Secret Service serving as both victim and investigator, and it even ties in with the AOL leak of 92 million email addresses investigated last year. What is especially disturbing is that, once again, it looks like someone convicted of hacking is being handed a career in information security: supposedly, the Secret Service is offering the hacker leniency in exchange for his help in other investigations. It's an interesting angle on the old problem of jail overcrowding: why not just hire people when they break the law? In all seriousness, it happens all too often that high-profile virus writers and hackers get hired by agencies or security companies, while people involved in lower-profile, lower-impact incidents are punished severely as "terrorists" under the USA Patriot Act. Not only have these people proven they can't be trusted, but this certainly isn't discouraging others from hacking. The message here is: hacking will land you in jail, but only if you don't have good hacker-fu.

A second revelation today involved GMail. Specially crafted email messages reportedly cause the contents of memory (which may include anything from other emails to usernames and passwords) to be delivered to the attacker in a message in his own inbox. For months my associates have ridiculed me for not trusting GMail and sticking to my own home server, so this is a bit of vindication for me, but it is bad news in general. In Google's defense, they employ some of the biggest brains in the industry, so I'm sure this will be resolved quickly and will prove the exception rather than the norm.

This last tidbit, irony and all, is from SANS NewsBytes:

--Hacker Gets Data on 32,000 Students and Staff at George Mason University
(11 January 2005)
A hacker compromised a Windows server and gained access to social security numbers and other private information of thousands of students and staff at George Mason University. The university is one of the Centers of Excellence in Information Security designated by the US government.

The sad part is that this undermines confidence in the government's Centers of Excellence in Information Security, even though such education programs are institutionally separate from the administration of the universities involved.


Microsoft's role in the battle against spyware

Today, Microsoft announced that it is releasing its own anti-spyware application. This is the first step in its endeavor into the world of anti-virus software, which it seems to be gearing up as a new revenue stream. On the surface, it appears that Microsoft may finally be giving its users an overdue helping hand, but the move marks the beginning of something that may prove a security setback in the years to come.

In providing these tools - pro bono or for a fee - Microsoft is coming dangerously close to pushing band-aid solutions as an acceptable security practice. The time and effort being put into a whole new product line would serve customers far better if it were spent on additional code reviews to proactively correct more security flaws. Worse, if the rumors pan out and Microsoft does in fact start charging for the anti-virus software it develops, it will be profiting from a problem of its own creation. That is something it may actually be able to sell to the consumer market, but the corporate world will most likely be wise to such shenanigans. What concerns me is that the naivete of the consumer market leaves it vulnerable to such a ploy, and it's been apparent that the Department of Justice isn't too concerned with protecting consumers either.

Microsoft could provide these tools free of charge, as it currently does with its firewall. In my opinion, this would be the best, and most ethical, approach. But again there is a danger: given Microsoft's track record, it's likely the software will be bundled with the operating system. With a built-in safety net, Microsoft's motivation to produce higher-quality code is lessened, and the security of personal computers once again rests in the hands of a single vendor that has given the topic far too little consideration in the past.

In my opinion, this direction would largely perpetuate the problems we currently see with Microsoft products while giving users a false sense of security. But when it comes to Redmond I tend to be a skeptic, so let's just hope it doesn't play out that way.