Security companies and governments conspire to discover and hide software vulnerabilities that can be used as spyware vectors

The Electronic Frontier Foundation's Marcia Hofmann writes about security research companies that work to discover "zero day" vulnerabilities in software and operating systems, then sell them to governments and corporations that want to use them as a vector for installing spyware. France's VUPEN is one such firm, and it claims that it only sells to NATO countries and their "partners," a list that includes Belarus, Azerbaijan, Ukraine, and Russia. As Hofmann points out, even this low standard is likely not met, since many of the governments VUPEN deals with would happily trade with other countries with even worse human rights records -- if Russia will sell guns to Syria, why not software exploits? VUPEN refuses to disclose its discoveries to the software vendors themselves, even for money, because it wants the vulnerabilities to remain unpatched and exploitable for as long as possible.

“We wouldn’t share this with Google for even $1 million,” said VUPEN founder Chaouki Bekrar. “We don’t want to give them any knowledge that can help them in fixing this exploit or other similar exploits. We want to keep this for our customers.” VUPEN, which also “pwned” Microsoft’s Internet Explorer, bragged it had an exploit for “every major browser,” as well as Microsoft Word, Adobe Reader, and the Google Android and Apple iOS operating systems.

While VUPEN might be the most vocal, it is certainly not the only company selling high-tech weaponry on the zero-day exploit market. Established U.S. companies Netragard, Endgame, Northrop Grumman, and Raytheon are also in the business, according to Forbes' Andy Greenberg. He has also detailed a price list for various zero-day exploits, with attacks for popular browsers selling for well over $100,000 each and an exploit for Apple's iOS going for a quarter million. But who exactly are these companies selling to? No one seems to really know, at least among people not directly involved in these clandestine exploit dealings. VUPEN claims it only sells to NATO governments and "NATO partners." The NATO partners list includes such Internet Freedom-loving countries as Belarus, Azerbaijan, Ukraine, and Russia. But it's a safe bet, as even VUPEN's founder noted, that the firm's exploits "could still fall into the wrong hands" of any regime through re-selling or slip-ups, even if VUPEN is careful. Another hacker who goes by the handle "the Grugq" says he acts as a middleman for freelance security researchers and sells their exploits to many agencies in the U.S. government. He implies the only reason he doesn't sell to Middle Eastern countries is that they don't pay enough.

EFF calls out governments for trafficking in these vulnerabilities, rather than demanding their disclosure and repair. Any unpatched vulnerability puts every user of the affected software at risk. For a government to appropriate a vulnerability to itself and keep it secret in the name of "national security," rather than fixing it for the nation's citizens, is "security for the 1%."

“Zero-day” exploit sales should be key point in cybersecurity debate

Facebook passwords: many employers can snoop them, and don't need to ask

US senators are calling for action on employers' habit of demanding employees' Facebook passwords, but no one seems to notice that many companies configure their computers so that they can eavesdrop on your Facebook, bank, and webmail passwords, even when those passwords are "protected" by SSL. In my latest Guardian column, "Protecting your Facebook privacy at work isn't just about passwords," I talk about how property rights -- your employer's right to control the software load on the computer they bought for your use -- have come to trump privacy, human rights and basic decency.

Firms have legitimate (ish) reasons to install these certificates. Many firms treat the names of the machines on their internal networks as proprietary information (eg accounting.sydney.australia.company.com), but still want to use certificates to protect their users' connections to those machines. So rather than paying for certificates from one of the hundreds of certificate authorities trusted by default in our browsers – which would entail disclosing their servers' names – they use self-signed certificates to protect those connections.

But the presence of your employer's self-signed certificate in your computer's list of trusted certs means that your employer can (nearly) undetectably impersonate all the computers on the internet, tricking your browser into thinking that it has a secure connection to your bank, Facebook, or Gmail, all the while eavesdropping on your connection.
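The mechanics are easy to sketch with openssl (all names below are hypothetical, and this is an illustration of the general technique, not any particular vendor's appliance). Once an IT department's root certificate is in your trust store, an interception proxy can mint a convincing certificate for any site it wants to impersonate:

```shell
# Hypothetical sketch: a private root CA of the kind an employer
# might pre-install in employees' trust stores
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=Example Corp IT Root" \
    -keyout ca.key -out ca.crt

# With that CA trusted, an interception proxy can mint a certificate
# for any domain it wants to impersonate -- a bank, Facebook, Gmail
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=www.example-bank.com" \
    -keyout site.key -out site.csr
openssl x509 -req -in site.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out site.crt
```

A browser that trusts ca.crt will accept site.crt as genuine, which is why the padlock icon proves nothing on an employer-managed machine.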

Many big firms use "lawful interception" appliances that monitor all employee communications, including logins to banks, health providers, family members, and other personal sites.

Protecting your Facebook privacy at work isn't just about passwords

Update: To everyone who says that your employer has the unlimited right to spy on your computer use because you're on company property, here's a paragraph from later in the piece:

Besides, there are plenty of contexts in which "company property" would not excuse this level of snooping. If you met your spouse on your lunchbreak to discuss a private medical matter in the break room or car park, you would probably expect that your employer wouldn't use a hidden microphone to listen in on the conversation – even though you were "on company property". Why should your employer get to snoop on your private webmail conversations with your spouse during your lunch-break?

TSA gets Bruce Schneier booted from House Committee on Oversight and Government Reform hearing

Bruce Schneier was invited to testify about the TSA to the House Committee on Oversight and Government Reform, but at the last minute he was disinvited, after the TSA objected to having him in the room.

On Friday, at the request of the TSA, I was removed from the witness list. The excuse was that I am involved in a lawsuit against the TSA, trying to get them to suspend their full-body scanner program. But it's pretty clear that the TSA is afraid of public testimony on the topic, and especially of being challenged in front of Congress. They want to control the story, and it's easier for them to do that if I'm not sitting next to them pointing out all the holes in their position. Unfortunately, the committee went along with them. (They tried to pull the same thing last year and it failed -- video at the 10:50 mark.)

The committee said it would try to invite me back for another hearing, but with my busy schedule, I don't know if I will be able to make it. And it would be far less effective for me to testify without forcing the TSA to respond to my points.

Congressional Testimony on the TSA (Thanks, Bruce!)

Bruce Schneier and former TSA boss Kip Hawley debate air security on The Economist

The Economist is hosting a debate between Bruce Schneier and former TSA honcho Kip Hawley, on the proposition "This house believes that changes made to airport security since 9/11 have done more harm than good." I'm admittedly biased for Bruce's position (he's for the proposition), but it seems to me that no matter what your bias, Schneier totally crushed Hawley in the opening volley. The first commenter on the debate called Hawley's argument "post hoc reasoning at its most egregious," which sums it all up neatly.

Here's a bit of Schneier:

Let us start with the obvious: in the entire decade or so of airport security since the attacks on America on September 11th 2001, the Transportation Security Administration (TSA) has not foiled a single terrorist plot or caught a single terrorist. Its own "Top 10 Good Catches of 2011" does not have a single terrorist on the list. The "good catches" are forbidden items carried by mostly forgetful, and entirely innocent, people—the sorts of guns and knives that would have been just as easily caught by pre-9/11 screening procedures. Not that the TSA is expert at that; it regularly misses guns and bombs in tests and real life. Even its top "good catch"—a passenger with C4 explosives—was caught on his return flight; TSA agents missed it the first time through.

In previous years, the TSA has congratulated itself for confiscating home-made electronics, alerting the police to people with outstanding misdemeanour warrants and arresting people for wearing fake military uniforms. These are hardly the sorts of things we spend $8 billion annually for the TSA to keep us safe from.

Don't be fooled by claims that the plots it foils are secret. Stopping a terrorist attack is a political triumph. Witness the litany of half-baked and farcical plots that were paraded in front of the public to justify the Bush administration's anti-terrorism measures. If the TSA ever caught anything even remotely resembling a terrorist, it would be holding press conferences and petitioning Congress for a bigger budget.

And some of Hawley:

More than 6 billion consecutive safe arrivals of airline passengers since the attacks on America on September 11th 2001 mean that whatever the annoying and seemingly obtuse airport-security measures may have been, they have been ultimately successful. However one measures the value of our resilient society careening through ten tumultuous years without the added drag of one or more industry-crushing and national psyche-devastating catastrophic 9/11-scale attacks, the sum of all that is more than its cost. If the question is whether the changes made to airport security since 9/11 have done more harm than good, the answer is no.

Risk management is second nature to us. At the airport we see a simple equation: "I pay a cost in convenience and privacy to get reasonable certainty that my flight will be terror-free." Since 9/11, the cost feels greater while the benefits seem increasingly blurred. Much of the pain felt by airport security stems from the security process not keeping up with its risk model. In airport security, we have stacked security measures from different risk models on top of each other rather than adding and subtracting security actions as we refine the risk strategy. This is inefficient but it does not create serious harm.

Schneier adds, "I'll take suggestions for things to say in Part III."

This house believes that changes made to airport security since 9/11 have done more harm than good. (Thanks, Bruce!)

How a cult created a chemical weapons program

A really, really interesting report from The Center for a New American Security about how Japanese cult Aum Shinrikyo developed its own chemical weapons program, and what factors enabled it to successfully attack a Tokyo subway with sarin gas. I'm still reading through this and will probably have something longer to say later. But it's got some very interesting examples of things I've noticed in other analyses of successful terrorist attacks: Groups can do things that make them seem comically inept, and they can fail over and over, and still end up pulling off a successful attack. In the end, some of this is about simple, single-minded perseverance. You don't have to be a criminal mastermind. You just have to be willing to keep trying long after most people would have given up. (Via Rowan Hooper)

Anonymosus-OS: the checksums that don't check out

Further to the ignoble saga of Anonymosus-OS, an Ubuntu variant targeted at people who want to participate in Anonymous actions: Sean Gallagher has done the legwork to compare the checksums of the packages included in the OS with their canonical versions and has found a long list of files that have been modified. Some of these ("usr/share/gnome/help/tomboy/eu/figures/tomboy-pinup.png: FAILED") are vanishingly unlikely to be malware, while others ("usr/share/ubiquity/apt-setup") are more alarming.

None of this is conclusive proof of malware in the OS, but it is further reason not to trust it -- if you're going to produce this kind of project and modify the packages so that their checksums no longer match, you really should document the alterations you've made.

anonymous@anonymous:/$ md5sum -c all.md5 > /dev/shm/check.txt
md5sum: WARNING: 143 of 95805 computed checksums did NOT match
anonymous@anonymous:/$ grep -v ': OK$' /dev/shm/check.txt
usr/share/locale-langpack/en_AU/LC_MESSAGES/subversion.mo: FAILED
usr/share/locale-langpack/en_GB/LC_MESSAGES/gbrainy.mo: FAILED
usr/share/applications/language-selector.desktop: FAILED
usr/share/locale-langpack/en_GB/LC_MESSAGES/file-roller.mo: FAILED
usr/share/locale-langpack/en_CA/LC_MESSAGES/metacity.mo: FAILED
usr/share/locale-langpack/en_GB/LC_MESSAGES/jockey.mo: FAILED
usr/share/locale-langpack/en_AU/LC_MESSAGES/lightdm.mo: FAILED
usr/share/doc/libxcb-render0/changelog.Debian.gz: FAILED...

The bad checksums in Anonymous-OS (Thanks, Sean!)

Preliminary analysis of Anonymosus-OS: lame, but no obvious malware

On Ars Technica, Sean Gallagher delves into the Anonymosus-OS, an Ubuntu Linux derivative I wrote about yesterday that billed itself as an OS for Anonymous, with a number of security/hacking tools pre-installed. Sean's conclusion is that, contrary to rumor, there's no malware visible in the package, but there are plenty of dubious "security" tools like the Low Orbit Ion Cannon: "I don't know how much more booby-trapped a tool can get than pointing authorities right back at your IP address as LOIC does without being modified."

As far as I can tell, Sean hasn't compared the package checksums for Anonymosus-OS, an important and easy (though tedious) step for anyone worried that the OS is hiding malware.
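The comparison itself is a one-liner pair. A minimal sketch, with hypothetical demo files standing in for the OS's package tree: record checksums from a known-good copy, then verify the suspect copy against them and list only the mismatches.

```shell
# Hypothetical demo files standing in for the OS's package tree
mkdir -p known-good suspect
echo "original contents" > known-good/pkg-file
echo "tampered contents" > suspect/pkg-file   # simulate a modified package

# Record checksums from the known-good tree...
(cd known-good && find . -type f -exec md5sum {} +) > all.md5

# ...then verify the suspect tree, printing only files that fail
(cd suspect && md5sum -c ../all.md5 2>/dev/null) | grep -v ': OK$' || true
```

For a real distro you'd point the first step at pristine upstream packages; any line ending in "FAILED" is a file that was altered.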

Update: Sean's done the checksum comparison and found 143 files that don't match up with the published versions.

Some of the tools are of questionable value, and the attack tools might well be booby-trapped in some way. But I don't know how much more booby-trapped a tool can get than pointing authorities right back at your IP address as LOIC does without being modified.

Most of the stuff in the "Anonymous" menu here is widely available as open source or as Web-based tools—in fact, a number of the tools are just links to websites, such as the MD5 hash cracker MD5Crack Web. But it's clear there are a number of tools here that are in daily use by AnonOps and others, including the encryption tool they've taken to using for passing target information back and forth.

Lame hacker tool or trojan delivery device? Hands on with Anonymous-OS

Android screen lock bests FBI

A court filing from an FBI Special Agent reports that the Bureau's forensics teams can't crack the pattern-lock utility on Android devices' screens. This is moderately comforting, given the courts' recent findings that mobile phones can be searched without warrants. David Kravets writes on Wired:

A San Diego federal judge days ago approved the warrant upon a request by FBI Special Agent Jonathan Cupina. The warrant was disclosed Wednesday by security researcher Christopher Soghoian.

In a court filing, Cupina wrote: (.pdf)

Failure to gain access to the cellular telephone’s memory was caused by an electronic ‘pattern lock’ programmed into the cellular telephone. A pattern lock is a modern type of password installed on electronic devices, typically cellular telephones. To unlock the device, a user must move a finger or stylus over the keypad touch screen in a precise pattern so as to trigger the previously coded un-locking mechanism. Entering repeated incorrect patterns will cause a lock-out, requiring a Google e-mail login and password to override. Without the Google e-mail login and password, the cellular telephone’s memory can not be accessed. Obtaining this information from Google, per the issuance of this search warrant, will allow law enforcement to gain access to the contents of the memory of the cellular telephone in question.

Rosenberg, in a telephone interview, suggested the authorities could “dismantle a phone and extract data from the physical components inside if you’re looking to get access.”

However, that runs the risk of damaging the phone’s innards, and preventing any data recovery.
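The pattern space itself is small enough to enumerate exactly, which suggests the lock-out and Google-account fallback described in the filing carry the real security weight, not the number of patterns. A sketch (standard enumeration, not from the article) counting every legal pattern of 4 to 9 distinct dots on the 3x3 grid, where a straight stroke that passes over an unvisited dot must include it:

```python
def midpoint(a, b):
    """Return the dot a straight stroke from a to b passes over, or None."""
    ra, ca = divmod(a, 3)
    rb, cb = divmod(b, 3)
    if (ra + rb) % 2 or (ca + cb) % 2:
        return None  # no grid dot lies exactly between a and b
    return ((ra + rb) // 2) * 3 + (ca + cb) // 2

def count_from(cur, used, length):
    """Count legal patterns (4-9 dots) extending the current prefix."""
    total = 1 if length >= 4 else 0
    if length == 9:
        return total
    for nxt in range(9):
        if used & (1 << nxt):
            continue  # each dot may be used only once
        m = midpoint(cur, nxt)
        if m is not None and not used & (1 << m):
            continue  # stroke would jump over an unvisited dot
        total += count_from(nxt, used | (1 << nxt), length + 1)
    return total

total = sum(count_from(start, 1 << start, 1) for start in range(9))
print(total)  # 389112 legal patterns
```

Fewer than 400,000 possibilities is nothing against an offline brute-force attack; it's the limit on wrong guesses that stops the FBI here.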

FBI Can’t Crack Android Pattern-Screen Lock

The Apocalypse will be a lot like flying coach

What could possibly make a 1960s-era nuclear war worse than you'd already assumed it would be? How about being packed like sardines into a fallout shelter with 13 of your soon-to-be-closest friends?

Frank Munger is a senior reporter with the Knoxville News Sentinel, where he covers Oak Ridge National Laboratory—a nearby energy research facility that previously did a lot of civil defense research. Munger turned up this photo, and several others, of mockup nuclear shelter arrangements tested out in the basement at ORNL when the facility was trying to establish best practices for surviving the Apocalypse.

They look ... less than pleasant.

That said, though, they may not have been meant as long-term arrangements. Munger linked to an Atlantic article that makes an interesting case related to these photos: If what you're talking about is one relatively small nuclear bomb (as opposed to a massive hydrogen-bomb, mutually-assured-destruction scenario), the idea of "Duck and Cover" isn't as ridiculous as it sounds. If you could get these 14 people out of the way of the fallout for a couple of weeks, their chances of survival would rise dramatically. Fallout shelters were not meant to be "the place you and your people live for the next 50 years."

The radiation from fallout can be severe -- the bigger the bomb, and the closer it is to the ground, the worse the fallout, generally -- but it decays according to a straightforward rule, called the 7/10 rule: Seven hours after the explosion, the radiation is 1/10 the original level; after seven times that interval (49 hours, or two days) it is 1/10 of that, or 1/100 the original; and after seven times that (roughly two weeks) it is 1/1000 the original intensity.
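The 7/10 rule is a back-of-envelope approximation of the standard t^-1.2 fallout-decay law; a quick sketch checks that the two agree (the reference level is conventionally the dose rate one hour after the burst):

```python
def dose_rate(t_hours, r1=1.0):
    """Approximate fallout dose rate t hours after the burst,
    relative to the rate r1 at one hour (t**-1.2 decay law)."""
    return r1 * t_hours ** -1.2

# 7 hours, ~2 days, ~2 weeks -- each step is roughly another factor of 10
for t in (7, 49, 343):
    print(t, round(dose_rate(t), 4))
```

The computed values (~0.097, ~0.009, ~0.0009) land close to the rule's 1/10, 1/100, and 1/1000, which is exactly why the rule works as mental arithmetic.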

See the rest of Frank Munger's photos of ORNL fallout shelter mockups.

Read the rest of The Atlantic article on "duck and cover".

Passphrases suck less than passwords, but they still suck

In "Linguistic properties of multi-word passphrases" (PDF, generates an SSL error) Cambridge's Joseph Bonneau and Ekaterina Shutova demonstrate that multi-word passphrases are more secure (have more entropy) than average user passwords composed of "random" characters, but that neither is very secure. In a blog post, Joseph Bonneau sums up the paper and the research that went into it.

Some clear trends emerged—people strongly prefer phrases which are either a single modified noun (“operation room”) or a single modified verb (“send immediately”). These phrases are perhaps easier to remember than phrases which include a verb and a noun and are therefore closer to a complete sentence. Within these categories, users don’t stray too far from choosing two-word phrases the way they’re actually produced in natural language. That is, phrases like “young man” which come up often in speech are proportionately more likely to be chosen than rare phrases like “young table.”

This led us to ask, if in the worst case users chose multi-word passphrases with a distribution identical to English speech, how secure would this be? Using the large Google n-gram corpus we can answer this question for phrases of up to 5 words. The results are discouraging: by our metrics, even 5-word phrases would be highly insecure against offline attacks, with fewer than 30 bits of work compromising over half of users. The returns appear to rapidly diminish as more words are required. This has potentially serious implications for applications like PGP private keys, which are often encrypted using a passphrase. Users are clearly more random in “passphrase English” than in actual English, but unless it’s dramatically more random the underlying natural language simply isn’t random enough. Exploring this gap is an interesting avenue for future collaboration between computer security researchers and linguists. For now we can only be comfortable that randomly-generated passphrases (using tools like Diceware) will resist offline brute force.
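The arithmetic behind that closing point is easy to check. A truly random passphrase's strength is the number of words times the log2 of the wordlist size (7,776 words for Diceware), which is why random generation so far outstrips the sub-30-bit figure measured for naturally chosen phrases:

```python
import math

def random_passphrase_bits(words, wordlist_size=7776):
    """Entropy in bits of a passphrase of words drawn uniformly at
    random from a wordlist (7776 = the Diceware list's size)."""
    return words * math.log2(wordlist_size)

print(round(random_passphrase_bits(5), 1))  # ~64.6 bits
```

The gap between ~65 bits and under 30 bits is the cost of choosing memorable English instead of rolling dice.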

Some evidence on multi-word passphrases (via Schneier)

TSA: we still trust body-scanners, though "for obvious reasons" we can't say why

Yesterday, I wrote about Jon Corbett's video, in which he demonstrates a method that appears to make it easy to smuggle metal objects (including weapons) through a TSA full-body scanner. The TSA has responded by saying that they still trust the machines, but they won't say why, "for obvious security reasons."

As Wired's David Kravets points out, Corbett is only the most recent critic to take a skeptical look at the efficacy of the expensive, invasive machinery. Other critics include the Government Accountability Office ("the devices might be ineffective") and the Journal of Transportation Security ("terrorists might fool the Rapiscan machines by taping explosive devices to their stomachs").

Corbett responded to the TSA's we-can't-tell-you-or-we'd-have-to-kill-you rebuttal with "You don't believe it? Try it."

“These machines are safe,” Lorie Dankers, a TSA spokeswoman, said in a telephone interview.

In a blog post, the government’s response was that, “For obvious security reasons, we can’t discuss our technology’s detection capability in detail, however TSA conducts extensive testing of all screening technologies in the laboratory and at airports prior to rolling them out to the entire field.”

TSA Pooh-Poohs Video Purporting to Defeat Airport Body Scanners

Maher Arar on Canada's pro-torture policy

Maher Arar, a Canadian who was rendered to Syria for years of brutal torture on the basis of bad information from Canada's intelligence agencies, writes in Prism about the revelation that Canadian public safety minister Vic Toews has given Canadian intelligence agencies and police the green light to use information derived from torture in their work. Arar cites examples of rendition and torture based on the "Hollywood fantasy that underlies the 'ticking bomb' scenario that minister Toews was apparently contemplating when he wrote this directive."

What makes this direction even more alarming is that the fat annual budgets devoted to enhancing national security have not been balanced by a similar increase in oversight. In fact, the government chose to ignore the most important recommendation of Justice O’Connor which is to establish a credible oversight agency that has the required powers to monitor and investigate the activities of the RCMP and those of other agencies involved in the gathering and dissemination of national security information. Unlike the powerless Commission for Public Complaints Against the RCMP (CPC) or the Security Intelligence Review Committee (SIRC) this agency would also be granted subpoena power to compel all agencies to produce the required documents.

Coming back to the directive one can only cite two examples here which I believe are sufficient to illustrate the hollowness of the argument presented in the directive. The first relates to the invasion of Iraq which we now know was based on false intelligence (see this video) that was extracted from Ibn al-Shaykh al-Libi while he was being tortured in Egypt. Al-Libi was later found dead inside his prison cell. Some human rights activists believe the Gaddafi regime liquidated him three years after he was rendered to Libya by the CIA.

Torture Directive 2.0 (Thanks, Richard!)

(Image: Rothenburg Germany Torture Museum, a Creative Commons Attribution (2.0) image from nanpalmero's photostream)

HOWTO get metal through a TSA full-body scanner

Jon Corbett, an engineer who is suing the TSA over the use of full-body "pornoscanners," has developed and documented a simple way to smuggle metallic objects, including guns, through the scanners. He tested the method at real TSA checkpoints, producing video-documentation that shows him apparently passing through the scanners with odd-shaped metal objects in a hidden pocket sewn into his garments. The method relies on the fact that the scanners show subjects' bodies as light objects on a dark background, and also render metal as dark objects. If an object is off to the side of the subject -- in a side pocket, say -- it shows up as black-on-black and is thus invisible.

To put it to the test, I bought a sewing kit from the dollar store, broke out my 8th grade home ec skills, and sewed a pocket directly on the side of a shirt. Then I took a random metallic object, in this case a heavy metal carrying case that would easily alarm any of the “old” metal detectors, and walked through a backscatter x-ray at Fort Lauderdale-Hollywood International Airport. On video, of course. While I’m not about to win any videography awards for my hidden camera footage, you can watch as I walk through the security line with the metal object in my new side pocket. My camera gets placed on the conveyer belt and goes through its own x-ray, and when it comes out, I’m through, and the object never left my pocket.

Maybe a fluke? Ok, let’s try again at Cleveland-Hopkins International Airport through one of the TSA’s newest machines: a millimeter wave scanner with automated threat detection built-in. With the metallic object in my side pocket, I enter the security line, my device goes through its own x-ray, I pass through, and exit with the object without any complaints from the TSA.

$1B of TSA Nude Body Scanners Made Worthless By Blog — How Anyone Can Get Anything Past The Scanners (via MeFi)

Android lets apps secretly access and transmit your photos

Writing in the NYT's BITS section, Brian X. Chen and Nick Bilton describe a disturbing design-flaw in Android: apps can access and copy your private photos, without you ever having to grant them permission to do so. Google says this is a legacy of the earlier-model phones that used removable SD cards, but it remains present in current versions. To prove the vulnerability's existence, a company called Loupe made an Android app that, once installed, grabbed your most recent photo and posted it to Imgur, a public photo-sharing site. The app presented itself as a timer, and users who installed it were not prompted to grant access to their files or images. A Google spokesperson quoted in the story describes the problem, suggests that the company would be amenable to fixing it, but does not promise to do so.

Ashkan Soltani, a researcher specializing in privacy and security, said Google’s explanation of its approach would be “surprising to most users, since they’d likely be unaware of this arbitrary difference in the phone’s storage system.” Mr. Soltani said that to users, Google’s permissions system was “akin to buying a car that only had locks on the doors but not the trunk.”

I think that this highlights a larger problem with networked cameras and sensors in general. The last decade of digital sensors -- scanners, cameras, GPSes -- has accustomed us to thinking of these devices as "air-gapped," separated from the Internet, and not capable of interacting with the rest of the world without physical human intervention.

But increasingly these things are networked -- we carry around location-sensitive, accelerometer-equipped A/V recording devices at all times (our phones). Adding network capability to these things means that design flaws, vulnerabilities and malicious code can all conspire to expose us to unprecedented privacy invasions. Unless you're in the habit of not undressing, going to the toilet, having arguments or intimate moments, and other private activities in the presence of your phone, you're at risk of all that leaking online.

It seems to me that neither the devices' designers nor their owners have gotten to grips with this yet. The default should be that our sensors don't broadcast their readings without human intervention. The idea that apps should come with take-it-or-leave-it permissions "requests" for access to your camera, mic, and other sensors is broken. It's your device and your private life. You should be able to control -- at a fine-grained level -- the extent to which apps are allowed to read, store and transmit facts about your life using your sensors.

Et Tu, Google? Android Apps Can Also Secretly Copy Photos

FBI anti-terrorism expert: TSA is useless

Steve Moore, who identifies himself as a former FBI Special Agent and head of the Los Angeles Joint Terrorism Task Force Al Qaeda squad, says that the TSA is useless. He says that they don't catch terrorists. He says they won't catch terrorists. He says that they can't catch terrorists. Oh, he also claims 35 years' piloting experience, a father who was United's head of security, and anti-hijacking SWAT training and experience of his own.

Frankly, the professional experience I have had with TSA has frightened me. Once, when approaching screening for a flight on official FBI business, I showed my badge as I had done for decades in order to bypass screening. (You can be envious, but remember, I was one less person in line.) I was asked for my form which showed that I was armed. I was unarmed on this flight because my ultimate destination was a foreign country. I was told, "Then you have to be screened." This logic startled me, so I asked, "If I tell you I have a high-powered weapon, you will let me bypass screening, but if I tell you I'm unarmed, then I have to be screened?" The answer? "Yes. Exactly." Another time, I was bypassing screening (again on official FBI business) with my .40 caliber semi-automatic pistol, and a TSA officer noticed the clip of my pocket knife. "You can't bring a knife on board," he said. I looked at him incredulously and asked, "The semi-automatic pistol is okay, but you don't trust me with a knife?" His response was equal parts predictable and frightening, "But knives are not allowed on the planes."...

The report goes on to state that the virtual strip search screening machines are a failure in that they cannot detect the type of explosives used by the “underwear bomber” or even a pistol used in TSA’s own real-world test of the machines. Yet TSA has spent approximately $60 billion since 2002 and now has over 65,000 employees, more than the Department of State, more than the Department of Energy, more than the Department of Labor, more than the Department of Education, more than the Department of Housing and Urban Development---combined. TSA has become, according to the report, “an enormous, inflexible and distracted bureaucracy more concerned with……consolidating power.”

Each time the TSA is publically called to account for their actions, they fight back with fear-based press releases which usually begin with “At a time like this….” Or “Al Qaeda is planning—at this moment …..” The tactic, of course, is to throw the spotlight off the fact that their policies are doing nothing to make America safer “at a time like this.” Sometimes doing the wrong thing is just as bad as doing nothing.

TSA: Fail (via MeFi)