Anonymosus-OS: the checksums that don't check out

Further to the ignoble saga of Anonymosus-OS, an Ubuntu variant targeted at people who want to participate in Anonymous actions: Sean Gallagher has done the legwork of comparing the checksums of the packages included in the OS against their canonical versions, and has found a long list of files that have been modified. Some of these ("usr/share/gnome/help/tomboy/eu/figures/tomboy-pinup.png: FAILED") are vanishingly unlikely to be malware, while others ("usr/share/ubiquity/apt-setup") are more alarming.

None of this is conclusive proof of malware in the OS, but it is further reason not to trust it -- if you're going to produce this kind of project and modify packages so that their checksums no longer match, you really should document the alterations you've made.
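If you want to reproduce this kind of audit, the check is easy to script. Below is a minimal Python sketch of the comparison md5sum -c performs, assuming a manifest whose lines use the standard "<hash>  <path>" format (like the all.md5 file in the transcript that follows); it recomputes each file's MD5 and prints only the mismatches:

import hashlib
import sys

def check_manifest(manifest_path):
    """Recompute each listed file's MD5 and compare it to the manifest."""
    failed = total = 0
    with open(manifest_path) as manifest:
        for line in manifest:
            expected, _, path = line.strip().partition("  ")
            if not path:
                continue  # skip malformed lines
            total += 1
            digest = hashlib.md5()
            try:
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 16), b""):
                        digest.update(chunk)
            except OSError:
                print(f"{path}: MISSING")
                failed += 1
                continue
            if digest.hexdigest() != expected:
                print(f"{path}: FAILED")
                failed += 1
    print(f"WARNING: {failed} of {total} computed checksums did NOT match",
          file=sys.stderr)

if __name__ == "__main__":
    check_manifest(sys.argv[1])

A mismatch only tells you a file was altered, not why -- which is exactly where documentation of intentional changes would help. Sean's actual run used the stock md5sum -c over a 95,805-file manifest: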

anonymous@anonymous:/$ md5sum -c all.md5 > /dev/shm/check.txt
md5sum: WARNING: 143 of 95805 computed checksums did NOT match
anonymous@anonymous:/$ grep -v ': OK$' /dev/shm/check.txt
usr/share/locale-langpack/en_AU/LC_MESSAGES/subversion.mo: FAILED
usr/share/locale-langpack/en_GB/LC_MESSAGES/gbrainy.mo: FAILED
usr/share/applications/language-selector.desktop: FAILED
usr/share/locale-langpack/en_GB/LC_MESSAGES/file-roller.mo: FAILED
usr/share/locale-langpack/en_CA/LC_MESSAGES/metacity.mo: FAILED
usr/share/locale-langpack/en_GB/LC_MESSAGES/jockey.mo: FAILED
usr/share/locale-langpack/en_AU/LC_MESSAGES/lightdm.mo: FAILED
usr/share/doc/libxcb-render0/changelog.Debian.gz: FAILED...

The bad checksums in Anonymous-OS (Thanks, Sean!)

Preliminary analysis of Anonymosus-OS: lame, but no obvious malware


On Ars Technica, Sean Gallagher delves into Anonymosus-OS, the Ubuntu Linux derivative I wrote about yesterday that billed itself as an OS for Anonymous, with a number of security/hacking tools pre-installed. Sean's conclusion is that, contrary to rumor, there is no malware visible in the package, but there are plenty of dubious "security" tools like the Low Orbit Ion Cannon: "I don't know how much more booby-trapped a tool can get than pointing authorities right back at your IP address as LOIC does without being modified."

As far as I can tell, Sean hasn't compared the package checksums for Anonymosus-OS -- an important and easy (though tedious) step for anyone worried that the OS might be hiding malware.

Update: Sean's done the checksum comparison and found 143 files that don't match up with the published versions.

Some of the tools are of questionable value, and the attack tools might well be booby-trapped in some way. But I don't know how much more booby-trapped a tool can get than pointing authorities right back at your IP address as LOIC does without being modified.

Most of the stuff in the "Anonymous" menu here is widely available as open source or as Web-based tools—in fact, a number of the tools are just links to websites, such as the MD5 hash cracker MD5Crack Web. But it's clear there are a number of tools here that are in daily use by AnonOps and others, including the encryption tool they've taken to using for passing target information back and forth.

Lame hacker tool or trojan delivery device? Hands on with Anonymous-OS

Android screen lock bests FBI

A court filing from an FBI Special Agent reports that the Bureau's forensics teams can't crack the pattern-lock utility on Android devices' screens. This is moderately comforting, given the courts' recent findings that mobile phones can be searched without warrants. David Kravets writes on Wired:

A San Diego federal judge days ago approved the warrant upon a request by FBI Special Agent Jonathan Cupina. The warrant was disclosed Wednesday by security researcher Christopher Soghoian.

In a court filing, Cupina wrote: (.pdf)

Failure to gain access to the cellular telephone’s memory was caused by an electronic ‘pattern lock’ programmed into the cellular telephone. A pattern lock is a modern type of password installed on electronic devices, typically cellular telephones. To unlock the device, a user must move a finger or stylus over the keypad touch screen in a precise pattern so as to trigger the previously coded un-locking mechanism. Entering repeated incorrect patterns will cause a lock-out, requiring a Google e-mail login and password to override. Without the Google e-mail login and password, the cellular telephone’s memory can not be accessed. Obtaining this information from Google, per the issuance of this search warrant, will allow law enforcement to gain access to the contents of the memory of the cellular telephone in question.

Rosenberg, in a telephone interview, suggested the authorities could “dismantle a phone and extract data from the physical components inside if you’re looking to get access.”

However, that runs the risk of damaging the phone’s innards, and preventing any data recovery.
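The affidavit describes the lockout, which is the real defense here: the raw pattern space is small enough that brute force would otherwise be easy. Here's a hedged Python sketch that counts the possible patterns, assuming the standard Android rules (4 to 9 distinct dots on a 3x3 grid; a stroke that passes directly over an unused dot must include it):

# Dots are numbered 1-9 across the 3x3 grid. crossed[(a, b)] is the dot
# lying directly between a and b; it must already be used for a->b to be legal.
crossed = {}
for a, b, mid in [(1, 3, 2), (4, 6, 5), (7, 9, 8),   # rows
                  (1, 7, 4), (2, 8, 5), (3, 9, 6),   # columns
                  (1, 9, 5), (3, 7, 5)]:             # diagonals
    crossed[(a, b)] = crossed[(b, a)] = mid

def count(path):
    """Count valid patterns extending `path`, including `path` itself
    once it reaches the 4-dot minimum."""
    total = 1 if len(path) >= 4 else 0
    for nxt in range(1, 10):
        if nxt in path:
            continue
        mid = crossed.get((path[-1], nxt))
        if mid is not None and mid not in path:
            continue  # the stroke would jump over an unused dot
        total += count(path + (nxt,))
    return total

print(sum(count((start,)) for start in range(1, 10)))  # prints 389112

Fewer than 400,000 possibilities is nothing against an offline attack, which is why the lockout behavior the agent describes matters far more than the pattern itself.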

FBI Can’t Crack Android Pattern-Screen Lock

The Apocalypse will be a lot like flying coach

What could possibly make a 1960s-era nuclear war worse than you'd already assumed it would be? How about being packed like sardines into a fallout shelter with 13 of your soon-to-be-closest friends?

Frank Munger is a senior reporter with the Knoxville News Sentinel, where he covers Oak Ridge National Laboratory—a nearby energy research facility that previously did a lot of civil defense research. Munger turned up this photo, and several others, of mockup nuclear shelter arrangements tested in the basement at ORNL when the facility was trying to establish best practices for surviving the Apocalypse.

They look ... less than pleasant.

That said, though, they may not have been meant as long-term arrangements. Munger linked to an Atlantic article that makes an interesting case related to these photos: if what you're talking about is one relatively small nuclear bomb (as opposed to massive hydrogen-bomb, mutually-assured-destruction scenarios), the idea of "Duck and Cover" isn't as ridiculous as it sounds. If you could get these 14 people out of the way of the fallout for a couple of weeks, their chances of survival would rise dramatically. Fallout shelters were not meant to be "the place you and your people live for the next 50 years."

The radiation from fallout can be severe -- the bigger the bomb, and the closer it is to the ground, the worse the fallout, generally -- but it decays according to a straightforward rule of thumb, the 7/10 rule: seven hours after the explosion, the radiation is 1/10 the original level; after seven times that interval (49 hours, or about two days) it is 1/10 of that, or 1/100 the original; and after seven times that interval (roughly two weeks) it is 1/1000 the original intensity.
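That rule is easy to tabulate; here's a minimal Python sketch, assuming nothing beyond the 7/10 rule itself (each seven-fold increase in elapsed time cuts the dose rate by a factor of ten):

import math

# 7/10 rule: radiation falls 10x each time elapsed time grows 7x, i.e.
# fraction(t) = t ** -(log 10 / log 7), with t in hours, measured relative
# to the reference level shortly after the blast.
DECAY_EXPONENT = math.log(10) / math.log(7)  # ~1.18

def dose_fraction(hours):
    """Intensity at `hours` after the blast, as a fraction of the reference level."""
    return hours ** -DECAY_EXPONENT

for label, hours in [("7 hours", 7), ("2 days", 49), ("2 weeks", 343)]:
    print(f"{label:>8}: {dose_fraction(hours):.4f} of the original intensity")
# 7 hours: 0.1000 / 2 days: 0.0100 / 2 weeks: 0.0010

Two weeks in the shelter, in other words, buys a thousand-fold reduction -- which is the Atlantic piece's whole argument.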

See the rest of Frank Munger's photos of ORNL fallout shelter mockups.

Read the rest of The Atlantic article on "duck and cover".

Passphrases suck less than passwords, but they still suck

In "Linguistic properties of multi-word passphrases" (PDF, generates an SSL error) Cambridge's Joseph Bonneau and Ekaterina Shutova demonstrate that multi-word passphrases are more secure (have more entropy) than average user passwords composed of "random" characters, but that neither is very secure. In a blog post, Joseph Bonneau sums up the paper and the research that went into it.

Some clear trends emerged—people strongly prefer phrases which are either a single modified noun (“operation room”) or a single modified verb (“send immediately”). These phrases are perhaps easier to remember than phrases which include a verb and a noun and are therefore closer to a complete sentence. Within these categories, users don’t stray too far from choosing two-word phrases the way they’re actually produced in natural language. That is, phrases like “young man” which come up often in speech are proportionately more likely to be chosen than rare phrases like “young table.”

This led us to ask, if in the worst case users chose multi-word passphrases with a distribution identical to English speech, how secure would this be? Using the large Google n-gram corpus we can answer this question for phrases of up to 5 words. The results are discouraging: by our metrics, even 5-word phrases would be highly insecure against offline attacks, with fewer than 30 bits of work compromising over half of users. The returns appear to rapidly diminish as more words are required. This has potentially serious implications for applications like PGP private keys, which are often encrypted using a passphrase. Users are clearly more random in “passphrase English” than in actual English, but unless it’s dramatically more random the underlying natural language simply isn’t random enough. Exploring this gap is an interesting avenue for future collaboration between computer security researchers and linguists. For now we can only be comfortable that randomly-generated passphrases (using tools like Diceware) will resist offline brute force.
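The closing recommendation is easy to quantify. A quick sketch, assuming only the standard Diceware parameters (a uniformly random choice from a 7,776-word list -- 6**5 words, indexed by five dice rolls):

import math

DICEWARE_WORDS = 7776                       # 6**5 entries in the standard list
BITS_PER_WORD = math.log2(DICEWARE_WORDS)   # ~12.9 bits per word

for words in range(1, 6):
    print(f"{words} Diceware word(s): {words * BITS_PER_WORD:.1f} bits")
# 5 random words: ~64.6 bits, versus the <30 bits the paper estimates
# for five-word phrases chosen the way people actually produce language.

The gap between 64.6 bits and 30 bits is the entire difference between a passphrase that resists offline brute force and one that doesn't.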

Some evidence on multi-word passphrases (via Schneier)

TSA: we still trust body-scanners, though "for obvious reasons" we can't say why

Yesterday, I wrote about Jon Corbett's video, in which he demonstrates a method that appears to make it easy to smuggle metal objects (including weapons) through a TSA full-body scanner. The TSA has responded by saying that they still trust the machines, but they won't say why, "for obvious security reasons."

As Wired's David Kravets points out, Corbett is only the most recent critic to take a skeptical look at the efficacy of the expensive, invasive machinery. Other critics include the Government Accountability Office ("the devices might be ineffective") and the Journal of Transportation Security ("terrorists might fool the Rapiscan machines by taping explosive devices to their stomachs").

Corbett responded to the TSA's we-can't-tell-you-or-we'd-have-to-kill-you rebuttal with "You don't believe it? Try it."

“These machines are safe,” Lorie Dankers, a TSA spokeswoman, said in a telephone interview.

In a blog post, the government’s response was that, “For obvious security reasons, we can’t discuss our technology’s detection capability in detail, however TSA conducts extensive testing of all screening technologies in the laboratory and at airports prior to rolling them out to the entire field.”

TSA Pooh-Poohs Video Purporting to Defeat Airport Body Scanners

Maher Arar on Canada's pro-torture policy

Maher Arar, a Canadian who was rendered to Syria for years of brutal torture on the basis of bad information from Canada's intelligence agencies, writes in Prism about the revelation that Canadian public safety minister Vic Toews has given Canadian intelligence agencies and police the green light to use information derived from torture in their work. Arar cites examples of rendition and torture based on the "Hollywood fantasy that underlines the 'ticking bomb' scenario that minister Toews was apparently contemplating when he wrote this directive."

What makes this direction even more alarming is that the fat annual budgets devoted to enhancing national security have not been balanced by a similar increase in oversight. In fact, the government chose to ignore the most important recommendation of Justice O’Connor which is to establish a credible oversight agency that has the required powers to monitor and investigate the activities of the RCMP and those of other agencies involved in the gathering and dissemination of national security information. Unlike the powerless Commission for Public Complaints Against the RCMP (CPC) or the Security Intelligence Review Committee (SIRC) this agency would also be granted subpoena power to compel all agencies to produce the required documents.

Coming back to the directive one can only cite two examples here which I believe are sufficient to illustrate the hollowness of the argument presented in the directive. The first relates to the invasion of Iraq which we now know was based on false intelligence (see this video) that was extracted from Ibn al-Shaykh al-Libi while he was being tortured in Egypt. Al-Libi was later found dead inside his prison cell. Some human rights activists believe the Gaddafi regime liquidated him three years after he was rendered to Libya by the CIA.

Torture Directive 2.0 (Thanks, Richard!)

(Image: Rothenburg Germany Torture Museum, a Creative Commons Attribution (2.0) image from nanpalmero's photostream)

HOWTO get metal through a TSA full-body scanner

Jon Corbett, an engineer who is suing the TSA over the use of full-body "pornoscanners," has developed and documented a simple way to smuggle metallic objects, including guns, through the scanners. He tested the method at real TSA checkpoints, producing video-documentation that shows him apparently passing through the scanners with odd-shaped metal objects in a hidden pocket sewn into his garments. The method relies on the fact that the scanners show subjects' bodies as light objects on a dark background, and also render metal as dark objects. If an object is off to the side of the subject -- in a side pocket, say -- it shows up as black-on-black and is thus invisible.

To put it to the test, I bought a sewing kit from the dollar store, broke out my 8th grade home ec skills, and sewed a pocket directly on the side of a shirt. Then I took a random metallic object, in this case a heavy metal carrying case that would easily alarm any of the “old” metal detectors, and walked through a backscatter x-ray at Fort Lauderdale-Hollywood International Airport. On video, of course. While I’m not about to win any videography awards for my hidden camera footage, you can watch as I walk through the security line with the metal object in my new side pocket. My camera gets placed on the conveyer belt and goes through its own x-ray, and when it comes out, I’m through, and the object never left my pocket.

Maybe a fluke? Ok, let’s try again at Cleveland-Hopkins International Airport through one of the TSA’s newest machines: a millimeter wave scanner with automated threat detection built-in. With the metallic object in my side pocket, I enter the security line, my device goes through its own x-ray, I pass through, and exit with the object without any complaints from the TSA.

$1B of TSA Nude Body Scanners Made Worthless By Blog — How Anyone Can Get Anything Past The Scanners (via MeFi)

Android lets apps secretly access and transmit your photos

Writing in the NYT's BITS section, Brian X. Chen and Nick Bilton describe a disturbing design flaw in Android: apps can access and copy your private photos without you ever granting them permission to do so. Google says this is a legacy of earlier-model phones that used removable SD cards, but it remains present in current versions. To prove the vulnerability's existence, a company called Loupe made an Android app that, once installed, grabbed your most recent photo and posted it to Imgur, a public photo-sharing site. The app presented itself as a timer, and users who installed it were not prompted to grant access to their files or images. A Google spokesperson quoted in the story describes the problem and suggests that the company would be amenable to fixing it, but does not promise to do so.

Ashkan Soltani, a researcher specializing in privacy and security, said Google’s explanation of its approach would be “surprising to most users, since they’d likely be unaware of this arbitrary difference in the phone’s storage system.” Mr. Soltani said that to users, Google’s permissions system was “akin to buying a car that only had locks on the doors but not the trunk.”

I think that this highlights a larger problem with networked cameras and sensors in general. The last decade of digital sensors -- scanners, cameras, GPSes -- has accustomed us to thinking of these devices as "air-gapped," separated from the Internet, and not capable of interacting with the rest of the world without physical human intervention.

But increasingly these things are networked -- we carry around location-sensitive, accelerometer-equipped A/V recording devices at all times (our phones). Adding network capability to these things means that design flaws, vulnerabilities and malicious code can all conspire to expose us to unprecedented privacy invasions. Unless you're in the habit of never undressing, going to the toilet, having arguments or intimate moments, or doing anything else private in the presence of your phone, you're at risk of all that leaking online.

It seems to me that neither the devices' designers nor their owners have gotten to grips with this yet. The default should be that our sensors don't broadcast their readings without human intervention. The idea that apps should come with take-it-or-leave-it permissions "requests" for access to your camera, mic, and other sensors is broken. It's your device and your private life. You should be able to control -- at a fine-grained level -- the extent to which apps are allowed to read, store and transmit facts about your life using your sensors.

Et Tu, Google? Android Apps Can Also Secretly Copy Photos

FBI anti-terrorism expert: TSA is useless

Steve Moore, who identifies himself as a former FBI Special Agent and head of the Los Angeles Joint Terrorism Task Force Al Qaeda squad, says that the TSA is useless. He says that they don't catch terrorists. He says they won't catch terrorists. He says that they can't catch terrorists. Oh, and he also claims 35 years' piloting experience, anti-hijacking SWAT training and experience, and a father who was United's head of security.

Frankly, the professional experience I have had with TSA has frightened me. Once, when approaching screening for a flight on official FBI business, I showed my badge as I had done for decades in order to bypass screening. (You can be envious, but remember, I was one less person in line.) I was asked for my form which showed that I was armed. I was unarmed on this flight because my ultimate destination was a foreign country. I was told, "Then you have to be screened." This logic startled me, so I asked, "If I tell you I have a high-powered weapon, you will let me bypass screening, but if I tell you I'm unarmed, then I have to be screened?" The answer? "Yes. Exactly." Another time, I was bypassing screening (again on official FBI business) with my .40 caliber semi-automatic pistol, and a TSA officer noticed the clip of my pocket knife. "You can't bring a knife on board," he said. I looked at him incredulously and asked, "The semi-automatic pistol is okay, but you don't trust me with a knife?" His response was equal parts predictable and frightening, "But knives are not allowed on the planes."...

The report goes on to state that the virtual strip search screening machines are a failure in that they cannot detect the type of explosives used by the “underwear bomber” or even a pistol used in a TSA’s own real-world test of the machines. Yet TSA has spent approximately $60 billion since 2002 and now has over 65,000 employees, more than the Department of State, more than the Department of Energy, more than the Department of Labor, more than the Department of Education, more than the Department of Housing and Urban Development -- combined. TSA has become, according to the report, “an enormous, inflexible and distracted bureaucracy more concerned with……consolidating power.”

Each time the TSA is publicly called to account for their actions, they fight back with fear-based press releases which usually begin with “At a time like this….” or “Al Qaeda is planning—at this moment …..” The tactic, of course, is to throw the spotlight off the fact that their policies are doing nothing to make America safer “at a time like this.” Sometimes doing the wrong thing is just as bad as doing nothing.

TSA: Fail (via MeFi)

Homeland Security memo warned of violent threat posed by Occupy Wall Street

An October 2011 Department of Homeland Security memo on Occupy Wall Street warned of the potential for violence posed by the "leaderless resistance movement." (via @producermatthew).

Update: Looks like there's a larger Rolling Stone feature on this document:

As Occupy Wall Street spread across the nation last fall, sparking protests in more than 70 cities, the Department of Homeland Security began keeping tabs on the movement. An internal DHS report entitled “SPECIAL COVERAGE: Occupy Wall Street [PDF]," dated October of last year, opens with the observation that "mass gatherings associated with public protest movements can have disruptive effects on transportation, commercial, and government services, especially when staged in major metropolitan areas." While acknowledging the overwhelmingly peaceful nature of OWS, the report notes darkly that "large scale demonstrations also carry the potential for violence, presenting a significant challenge for law enforcement."

Scalable stylometry: can we de-anonymize the Internet by analyzing writing style?

One of the most interesting technical presentations I've attended recently was the talk on "adversarial stylometry" given by a Drexel University research team at the 28C3 conference in Berlin. "Stylometry" is the practice of trying to ascribe authorship to an anonymous text by analyzing its writing style; "adversarial stylometry" is the practice of resisting stylometric de-anonymization by using software to remove distinctive characteristics and voice from a text.

Stanford's Arvind Narayanan describes a paper he co-authored on stylometry that has been accepted for the IEEE Symposium on Security and Privacy 2012. In On the Feasibility of Internet-Scale Author Identification (PDF), Narayanan and his co-authors show that they can use stylometry to improve the reliability of de-anonymizing blog posts drawn from a large and diverse data-set, using a method that scales well. However, the experimental set was not "adversarial" -- that is, the authors took no countermeasures to disguise their authorship. It would be interesting to see how the approach described in the paper performs against texts that are deliberately anonymized, with and without computer assistance. The summary cites another paper which found that even unaided efforts to disguise one's style make stylometric analysis much less effective.
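To make the core move concrete: reduce every text to a vector of style features, then attribute an unknown text to the candidate whose known writing is closest. The toy Python sketch below uses function-word frequencies and cosine similarity -- one common feature family and the simplest possible classifier, not the paper's actual feature set or machinery:

import math
from collections import Counter

# Function words are topic-independent and hard to consciously control,
# which is what makes them useful stylometric features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it",
                  "is", "was", "for", "on", "with", "as", "but", "not"]

def style_vector(text):
    """Frequency of each function word, normalized by text length."""
    words = text.lower().split()
    counts = Counter(words)
    n = max(len(words), 1)
    return [counts[w] / n for w in FUNCTION_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def attribute(unknown_text, corpus):
    """Return the author in `corpus` (a dict of author -> known text)
    whose writing is stylistically closest to `unknown_text`."""
    target = style_vector(unknown_text)
    return max(corpus, key=lambda a: cosine(target, style_vector(corpus[a])))

Scaling that shape to 100,000 candidate authors, with confidence estimates good enough to be useful, is the hard part. Narayanan describes what that took: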

We made several innovations that allowed us to achieve the accuracy levels that we did. First, contrary to some previous authors who hypothesized that only relatively straightforward “lazy” classifiers work for this type of problem, we were able to avoid various pitfalls and use more high-powered machinery. Second, we developed new techniques for confidence estimation, including a measure very similar to “eccentricity” used in the Netflix paper. Third, we developed techniques to improve the performance (speed) of our classifiers, detailed in the paper. This is a research contribution by itself, but it also enabled us to rapidly iterate the development of our algorithms and optimize them.

In an earlier article, I noted that we don’t yet have as rigorous an understanding of deanonymization algorithms as we would like. I see this paper as a significant step in that direction. In my series on fingerprinting, I pointed out that in numerous domains, researchers have considered classification/deanonymization problems with tens of classes, with implications for forensics and security-enhancing applications, but that to explore the privacy-infringing/surveillance applications the methods need to be tweaked to be able to deal with a much larger number of classes. Our work shows how to do that, and we believe that insights from our paper will be generally applicable to numerous problems in the privacy space.

Is Writing Style Sufficient to Deanonymize Material Posted Online? (via Hack the Planet)

Dan Kaminsky on the RSA key-vulnerability

Dan Kaminsky sez,

There's been a lot of talk about some portion of the RSA keys on the Internet being insecure, with "2 out of every 1000 keys being bad". This is incorrect, as the problem is not equally likely to exist in every class of key on the Internet. In fact, the problem seems to only show up on keys that were already insecure to begin with -- those that pop errors in browsers for either being unsigned or expired. Such keys are simply not found on any production website on the web, but they are found in high numbers in devices such as firewalls, network gateways, and voice over IP phones.

It's tempting to discount the research entirely. That would be a mistake. Certainly, what we generally refer to as "the web" is unambiguously safe, and no, there's nothing particularly special about RSA that makes it uniquely vulnerable to a faulty random number generator. But it is extraordinarily clear now that a massive number of devices, even those purportedly deployed to make our networks safer, are operating completely without key management. It doesn't matter how good your key is if nobody can recognize it as yours. DNSSEC will do a lot to fix that. It is also clear that random number generation on devices is extremely suspect, and that this generic attack that works across all devices is likely to be followed up by fairly devastating attacks against individual makes and models. This is good and important research, and it should compel us to push for new and interesting mechanisms for better randomness. Hardware random number generators are the gold standard, but perhaps we can exploit the very small differences between clocks in devices and PCs to approximate what they offer.
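The generic attack Dan alludes to is brutally simple: if two devices with weak random number generators ever pick the same prime, the greatest common divisor of their public moduli exposes it, and both keys are factored at once. Here's a minimal Python sketch with made-up toy numbers, not real harvested keys:

import math

def shared_factor_attack(moduli):
    """Check every pair of RSA moduli for a common prime factor.
    Any nontrivial GCD immediately factors BOTH keys."""
    broken = []
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = math.gcd(moduli[i], moduli[j])
            if g > 1:
                broken.append((i, j, g, moduli[i] // g, moduli[j] // g))
    return broken

# Toy demo: two "devices" whose RNGs both produced the prime 101.
n1, n2, n3 = 101 * 103, 101 * 107, 109 * 113
for i, j, p, q1, q2 in shared_factor_attack([n1, n2, n3]):
    print(f"keys {i} and {j} share prime {p}; cofactors {q1} and {q2}")

The researchers behind the headline numbers used a much faster batch-GCD over millions of harvested keys, but the pairwise version shows why one repeated prime is fatal.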

Primal Fear: Demuddling The Broken Moduli Bug (Thanks, Dan!)

WSJ: Google caught circumventing iPhone security, tracking users who opted out of third-party cookies

Google has been caught circumventing iOS's built-in anti-ad-tracking features in order to add Google Plus functionality within iPhone's Safari browser. The WSJ reports that Google overrode users' privacy settings in order to allow messages like "your friend Suzy +1'ed this ad about candy" to be relayed between Google's different domains, including google.com and doubleclick.net. This also meant that doubleclick.net was tracking every page you landed on with a Doubleclick ad, even if you'd opted out of its tracking.

I believe that Google has created an enormous sense of internal urgency about Google Plus integration, and that this pressure is leading the company to integrate G+ at the expense of the quality of its other services. Consider the Focus on the User critique of Google's "social ranking" in search results, for example. In my own life, I've been immensely frustrated that my unpublished Gmail account (which I only use to anchor my Android Marketplace purchases for my phone and tablets, and to receive a daily schedule email while I'm travelling) has somehow become visible to G+ users, so that I get many, many G+ updates and invites to this theoretically private address, every day, despite never having opted into a directory and never having joined G+.

In the iPhone case, it's likely that Google has gone beyond lowering the quality of its service for its users and customers, and has now started to violate the law, and certainly to undermine the trust that the company depends on. This is much more invasive than the time Google accidentally captured some WiFi traffic and didn't do anything with it, much more invasive than Google taking pictures of publicly visible buildings -- both practices that drew enormous and enduring criticism at the expense of the company's global credibility. I wonder if this will cause the company to slow its full-court press to make G+ part of every corner of Google.

EFF has an open letter to Google, asking them to make amends for this:

It’s time for a new chapter in Google’s policy regarding privacy. It’s time to commit to giving users a voice about tracking and then respecting those wishes.

For a long time, we’ve hoped to see Google respect Do Not Track requests when it acts as a third party on the Web, and implement Do Not Track in the Chrome browser. This privacy setting, available in every other major browser, lets users express their choice about whether they want to be tracked by mysterious third parties with whom they have no relationship. And even if a user deleted her cookies, the setting would still be there.

Right now, EFF, Google, and many other groups are involved in a multi-stakeholder process to define the scope and execution of Do Not Track through the Tracking Protection Working Group. Through this participatory forum, civil liberties organizations, advertisers, and leading technologists are working together to define how Do Not Track will give users a meaningful way to control online tracking without unduly burdening companies. This is the perfect forum for Google to engage on the technical specifications of the Do Not Track signal, and an opportunity to bring all parties together to fight for user rights. While the Do Not Track specification is not yet final, there's no reason to wait. Google has repeatedly led the way on web security by implementing features long before they were standardized. Google should do the same with web privacy. Get started today by linking Do Not Track to your existing opt-out mechanisms for advertising, +1, and analytics.

Google, make this a new era in your commitment to defending user privacy. Commit to offering and respecting Do Not Track.
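Mechanically, there's almost nothing to it, which is part of EFF's point: Do Not Track is a single HTTP request header, DNT: 1. Here's a hedged Python/WSGI sketch of a server honoring it (the handler and responses are illustrative, not any real Google endpoint):

def application(environ, start_response):
    """Minimal WSGI app that branches on the Do Not Track header.
    Browsers with DNT enabled send 'DNT: 1' on every request, which
    WSGI surfaces as the HTTP_DNT environment key."""
    if environ.get("HTTP_DNT") == "1":
        # A respectful server would suppress tracking cookies and
        # third-party beacons on this branch.
        body = b"DNT: 1 received -- tracking disabled.\n"
    else:
        body = b"No DNT preference expressed.\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("localhost", 8000, application).serve_forever()

Wiring that one-line check into existing opt-out mechanisms is exactly what EFF is asking Google to do.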

Google Circumvents Safari Privacy Protections - This is Why We Need Do Not Track

Bruce Schneier's Liars and Outliers: how do you trust in a networked world?

John Scalzi's Big Idea introduces Bruce Schneier's excellent new book Liars and Outliers, and interviews Schneier about the work that went into it. I read an early draft of the book and supplied a quote: "Brilliantly dissects, classifies, and orders the social dimension of security -- a spectacularly palatable tonic against today's incoherent and dangerous flailing in the face of threats from terrorism to financial fraud." Now that the book is out, I heartily recommend it to you.

It’s all about trust, really. Not the intimate trust we have in our close friends and relatives, but the more impersonal trust we have in the various people and systems we interact with in society. I trust airline pilots, hotel clerks, ATMs, restaurant kitchens, and the company that built the computer I’m writing this short essay on. I trust that they have acted and will act in the ways I expect them to. This type of trust is more a matter of consistency or predictability than of intimacy.

Of course, all of these systems contain parasites. Most people are naturally trustworthy, but some are not. There are hotel clerks who will steal your credit card information. There are ATMs that have been hacked by criminals. Some restaurant kitchens serve tainted food. There was even an airline pilot who deliberately crashed his Boeing 767 into the Atlantic Ocean in 1999.

My central metaphor is the Prisoner’s Dilemma, which nicely exposes the tension between group interest and self-interest. And the dilemma even gives us a terminology to use: cooperators act in the group interest, and defectors act in their own selfish interest, to the detriment of the group. Too many defectors, and everyone suffers — often catastrophically.

Liars and Outliers: Enabling the Trust that Society Needs to Thrive