What decryption orders mean for the Fifth Amendment

A federal judge in Colorado recently handed down a ruling forcing a defendant to decrypt her laptop hard drive, despite the Fifth Amendment's stricture against compelling people to testify against themselves. The Electronic Frontier Foundation's Marcia Hofmann has commentary on the disappointing ruling:

In the order issued yesterday, the court dodged the question of whether requiring Fricosu to type a passphrase into the laptop would violate the Fifth Amendment. Instead, it ordered Fricosu to turn over a decrypted version of the information on the computer. While the court didn't hold that Fricosu has a valid Fifth Amendment privilege not to reveal that data, it seemed to implicitly recognize that possibility. The court both points out that the government offered Fricosu immunity for the act of production and forbids the government from using the act of production against her. We think Fricosu not only has a valid privilege against self-incrimination, but that the immunity offered by the government isn't broad enough to invalidate it. Under Supreme Court precedent, the government can't use the act of production or any evidence it learns as a result of that act against Fricosu.

The court then found that the Fifth Amendment "is not implicated" by requiring Fricosu to turn over the decrypted contents of the laptop, since the government independently learned facts suggesting that Fricosu had possession and control over the computer. Furthermore, according to the court, "there is little question here but that the government knows of the existence and location of the computer's files. The fact that it does not know the specific content of any specific documents is not a barrier to production." We disagree with this conclusion, too. Neither the government nor the court can say what files the government expects to find on the laptop, so there is testimonial value in revealing the existence, authenticity and control over that specific data. If Fricosu decrypts the data, the government could learn a great deal it didn't know before.

In sum, we think the court got it wrong.

Disappointing Ruling in Compelled Laptop Decryption Case

Parents' snooping teaches kids to share their passwords with each other


Matt Richtel's recent NYT article on teenagers who share their Facebook passwords as a show of affection has raised alarms with parents and educators who worry about the potential for bullying and abuse.

But as danah boyd points out, the practice of password-sharing didn't start with kids: it started with parents, who required their kids to share their passwords with them. Young kids have to share their passwords because they lose them, and older kids are made to share their passwords because their parents want to snoop on them. Basically, you can't tell kids that they must never, ever share their passwords while simultaneously requiring them to share those passwords with you.

There are different ways that parents address the password issue, but they almost always build on the narrative of trust. (Tangent: My favorite strategy is when parents ask children to put passwords into a piggy bank that must be broken for the paper with the password to be retrieved. Such parents often explain that they don’t want to access their teens’ accounts, but they want to have the ability to do so “in case of emergency.” A piggy bank allows a social contract to take a physical form.)

When teens share their passwords with friends or significant others, they regularly employ the language of trust, as Richtel noted in his story. Teens are drawing on experiences they’ve had in the home and shifting them into their peer groups in order to understand how their relationships make sense in a broader context. This shouldn’t be surprising to anyone because this is all-too-common for teen practices. Household norms shape peer norms.

There’s another thread here that’s important. Think back to the days in which you had a locker. If you were anything like me and my friends, you gave out your locker combination to your friends and significant others. There were varied reasons for doing so. You wanted your friends to pick up a book for you when you left early because you were sick. You were involved in a club or team where locker decorating was common. You were hoping that your significant other would leave something special for you.

How Parents Normalized Teen Password Sharing

(Image: Swordfish, a Creative Commons Attribution Share-Alike (2.0) image from ideonexus's photostream)

HOPE 9 Call for Papers

Emmanuel Goldstein from 2600 sez, "The Call For Papers for 2600 Magazine's HOPE Number Nine conference has been issued. HOPE stands for Hackers On Planet Earth and has been happening in New York City since 1994. Typically over 100 talks and panels are presented, featuring topics including hardware hacking, social engineering, net activism, privacy, surveillance, censorship, etc. HOPE takes pride in providing a wide mix of both subject matter and speakers, featuring presenters of all ages, backgrounds, and levels of experience. HOPE Number Nine will take place July 13-15, 2012 at New York's Hotel Pennsylvania." (Thanks, Emmanuel!)

TSA-compliant cupcakes: "I am not a gel"


Inspired by Rebecca Hains' harrowing tale of cupcake confiscation by the Las Vegas TSA, Providence, RI's Silver Spoon Bakery is selling "TSA-compliant cupcakes." These have exactly three ounces of frosting, and come in a ziplock baggie with a boarding card and a little Richard Nixon badge bearing the legend, "I am not a gel."

Bakery's TSA Compliant Cupcake is latest volley in Cupcakegate (via Digg)

CyanogenMod, the free/open port of Android, gains traction

Here's a good brief look at the state of CyanogenMod, a free/open fork of the Android operating system that lets you do a lot more with your tablet/phone. I really like the way that CyanogenMod exerts force on the Android ecosystem: back when Google was unwilling to ship a tethering app (even for "Google Experience" phones like the Nexus One), CyanogenMod gave users the choice to tether. I think that the number of users who went to the fork freaked out both Google and the carriers, and in any event, tethering quickly became an official feature of Android.

Now CyanogenMod is toying with the idea of a Banned Apps store, consisting of apps that were banned from the official Android Market for "no good reason" (generally because they threatened Google or the carriers in some way). It's hard for users to get upset about functionality restrictions that they don't know about, but once their friends get the ability to do more, they'll clamor for it, too.

And Google has a strong incentive to keep up with CyanogenMod's functionality: once you've rooted your device and installed a new OS on it for the first time, it's pretty easy to keep on doing it for future devices. I know I worried a lot the first time, and laughed through subsequent installs -- and the process just keeps getting easier. It's really in Google's interest that Android users not get the CyanogenMod habit, and the best way to prevent that is to keep up with CyanogenMod itself, even if doing so means sacrificing a little profitability. That's good for users.

Given the success of CyanogenMod, it should be no surprise that the project is continuing to evolve and grow into new areas. Koushik Dutta, one of the CyanogenMod team members, would like to see an App Store for root apps and apps that are "getting shut down for no good reason." The idea seems pretty handy from a user perspective, and as Dutta points out, could even help fund the CyanogenMod project.

Apparently, Dutta approached Amazon with his idea of bundling their AppStore in CyanogenMod with the provision that Amazon would give CyanogenMod a portion of the sales. Sadly, Amazon brushed Dutta off, so it would appear that this isn't going to happen in the short term. Still, it appears there are a number of users on Google+ that are excited about the project, so hopefully it will come to fruition. Dutta's proposed store would be open-source so it would be available to any custom ROM, not just CyanogenMod.

CyanogenMod Enjoys User Growth, Considers Launching A Banned App Store (via Digg)

Recursive phishing email

Bruce Sterling received a phishing email purporting to be a followup to a report of a phishing email. Coming soon: a phishing email purporting to be a phishing email purporting to be a followup to a report of a phishing email.

US-CERT is forwarding the following Phishing email that we received to the APWG for further investigation and processing.

Please check attached report for the details and email source

US-CERT has opened a ticket and assigned incident number PH0000005007349. As your investigation progresses updates may be sent at your discretion to soc@us-cert.gov and should reference PH0000002359885.

Phishing email arrives disguised as phishing email

What happens to your luggage after you check it at the airport?

Okay, yes. This is an ad for a Delta "track your luggage" app. And, yes, it blacks out the part where your luggage goes through security.

But it's also a nifty little video that reminds me of the how's-it-made genre of Sesame Street videos that I loved as a child. There's just something about stuff riding on conveyor belts, know what I mean?

It was also interesting to get a reminder that luggage is loaded into and unloaded from the airplane by hand. So all the times I've stood around getting cranky while waiting for my luggage to show up on the carousel ... there are people doing their best to get it to me fast without throwing it around everywhere. I think, next time, I'll have a little more patience.

(Thanks, Andrew Balfour!)

Researcher: T-Mobile UK is secretly disrupting secure communications, leaving customers vulnerable to spying


Mike Cardwell claims that T-Mobile UK are silently disrupting VPNs and secure connections to mail servers, using packet-injection techniques more often found in the Great Firewall of China. He documents his findings in detail, and has found someone on the T-Mobile customer forums who reports that a senior technician there said it was a deliberate policy decision at T-Mobile to keep mail from being sent through any servers apart from their own.

The consequence is that you must communicate over T-Mobile's 3G network in a way that allows the carrier to snoop on you and read your email. And since 3G security has been compromised for years, it also means anyone within range of your cell tower can snoop on you. Mike borrowed techniques from those who fight the Great Firewall of China to build a system that lets him tunnel securely and keep his sensitive data secret, but unless you run your own servers, you're screwed if you're a T-Mobile customer.

Mike's SIM is a pay-as-you-go SIM, and his previous SIM, which came with a contract, didn't experience this filtering. Either this is the result of different filtering schemes for different customers or it's a new policy. I hope T-Mobile clarifies (and terminates) this policy soon.

I run my own Linux server, and self-host several services. I use SSL whenever possible. If I connect to my mail submission service with immediate encryption on port 465, T-Mobile instantly sends a spoofed RST TCP packet to both my server and my client in order to disrupt/disconnect the connection. I ran tcpdump on both ends of the connection to verify that this was happening. They also do the same for mail submission port 587. This time, they let you connect, but as soon as you send a STARTTLS command, the RST packets appear, and the connection drops. This isn't just for my mail server, I experienced the same problems using smtp.gmail.com as well...

I route all of my Internet traffic over an OpenVPN to my Linode.com VPS. This has always worked fine with my original SIM. With the new SIM, no matter which port I configure OpenVPN on, the RST packets appear. IMAP over SSL on port 993 works fine, but if I switch that off and configure OpenVPN to listen on port 993, it is blocked. So the blocks aren't even port based. They've got some really low level deep packet inspection technology going on here. The Great Firewall of China uses the exact same technique of sending RST packets to disrupt connections.
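If you want to check whether your own carrier behaves the same way, a minimal sketch along these lines (it assumes Python and a submission server you control; mail.example.com is a placeholder) is to open a plaintext connection on port 587 and see whether the session survives the STARTTLS upgrade:

import smtplib
import ssl

# Placeholder hostnames: substitute a submission server you control.
HOSTS = ["mail.example.com", "smtp.gmail.com"]

for host in HOSTS:
    try:
        # Port 587 starts out unencrypted; TLS is negotiated via STARTTLS.
        server = smtplib.SMTP(host, 587, timeout=15)
        server.ehlo()
        # If a middlebox injects RST packets at this point, the upgrade
        # (or the TLS handshake inside it) fails with a connection error.
        server.starttls(context=ssl.create_default_context())
        server.ehlo()
        print(host, "STARTTLS succeeded")
        server.quit()
    except (smtplib.SMTPException, OSError) as exc:
        print(host, "connection disrupted:", exc)

Running the same script from a connection that isn't behind the carrier gives you a baseline to compare against.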

Punching through The Great Firewall of T-Mobile

Virtual sweatshops versus CAPTCHAs

KolotiBablo, a Russian service favored by industrial-scale spammers, pays workers in China, India, Pakistan, and Vietnam to crack CAPTCHAs. The company's fortunes are an interesting economic indicator: the cost of labor (plus Internet access and junk PCs) in the poorest countries in the world, set against the cost of skilled programmer labor to automate CAPTCHA-breaking (or to automate a man-in-the-middle attack on CAPTCHAs, such as making people solve imported Gmail account-creation CAPTCHAs in order to look at free porn).

Paying clients interface with the service at antigate.com, a site hosted on the same server as kolotibablo.com. Antigate charges clients 70 cents to $1 for each batch of 1,000 CAPTCHAs solved, with the price influenced heavily by volume. KolotiBablo says employees can expect to earn between $0.35 to $1 for every thousand CAPTCHAs they solve. The twin operations say they do not condone the use of their services to promote spam, or “all those related things that generate butthurt for the ‘big guys,’” most likely a reference to big free Webmail providers like Google and Microsoft. Still, both services can be found heavily advertised and recommended in several underground forums that cater to spammers and scam artists.
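To put those figures in perspective, here's a back-of-the-envelope calculation; the solve rate is an assumption for illustration, not a number from the article:

# Rough wage estimate; the solve rate is assumed, not reported.
price_per_1000 = (0.35, 1.00)  # USD paid to workers per 1,000 solved CAPTCHAs
solves_per_hour = 600          # assumed: one CAPTCHA every six seconds

for price in price_per_1000:
    hourly = price * solves_per_hour / 1000
    print("$%.2f per 1,000 works out to about $%.2f per hour" % (price, hourly))

# Roughly $0.21 to $0.60 per hour, which is why the work only makes sense
# in the world's lowest-wage labor markets.

That's the whole business model: human attention priced so low that it undercuts the cost of writing a solver.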

Virtual Sweatshops Defeat Bot-or-Not Tests

SF vs SF

Illustration: Kurt Caesar (?)

Tell me the difference between these two pieces of text.


Printer malware: print a malicious document, expose your whole LAN

One of the most mind-blowing presentations at this year's Chaos Communication Congress (28C3) was Ang Cui's Print Me If You Dare, in which he explained how he reverse-engineered the firmware-update process for HP's hundreds of millions of printers. Cui discovered that he could load arbitrary software into any printer by embedding it in a malicious document or by connecting to the printer over the network. As part of his presentation, he performed two demonstrations: in the first, he sent a document carrying a malicious version of the printer's OS, which caused the printer to copy everything it printed and post the copies to an IP address on the Internet; in the second, he took over a remote printer with a malicious document, made that printer scan the LAN for vulnerable PCs, compromised one, and turned it into a proxy that gave him access through the firewall (I got shivers).

Cui gave HP a month to issue patches for the vulnerabilities he discovered, and HP now has new firmware available that fixes this (his initial disclosure was misreported in the press as making printers vulnerable to being overheated and turning into "flaming death bombs" -- he showed a lightly singed sheet of paper that represented the closest he could come to this claim). He urges anyone with an HP printer to apply the latest patch, because malware could be crafted to take over your printer and then falsely report that it has accepted the patch while discarding it.
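If you administer a network and want to know which devices need that patch, one low-tech starting point (a sketch only; the subnet below is a placeholder for your own LAN, and this finds candidates rather than confirming vulnerabilities) is to look for hosts answering on TCP port 9100, the raw "JetDirect" printing port that most HP network printers expose:

import socket
import ipaddress

# Placeholder subnet: replace with your own LAN range before running.
SUBNET = ipaddress.ip_network("192.168.1.0/24")
JETDIRECT_PORT = 9100  # raw printing port used by most networked HP printers

for host in SUBNET.hosts():
    try:
        # A completed TCP handshake on 9100 usually means a network printer.
        with socket.create_connection((str(host), JETDIRECT_PORT), timeout=0.3):
            print("Possible printer at", host, "- check its firmware version")
    except OSError:
        pass  # closed, filtered, or nothing there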

Cui's tale of reverse-engineering is a fantastic look at the craft and practice of exploring security vulnerabilities. The cases he imagined for getting malware into printers were very good: send a resume to HR, wait for them to print it, take over the network and pwn the company.

Cui believes that these vulnerabilities are likely present on non-HP printers (a related talk on PostScript hacking lent support to his belief) and his main area of research is a generalized anti-malware solution for all embedded systems, including printers and routers.

Just in case this has scared the hell out of you (as it did me), be assured that there are many lulz to be had, especially when Cui described his interactions with HP, who actually had a firmware flag called "super-secret bypass of crypto-key enabled."

Print Me If You Dare

State of the arms race between repressive governments and anti-censorship/surveillance Tor technology (and why American companies are on the repressive governments' side)

Last night's Chaos Communication Congress (28C3) presentation from Jacob Appelbaum and Roger Dingledine on the state of the arms race between the Tor anti-censorship/surveillance technology and the world's repressive governments was by turns depressing and inspiring. Dingledine and Appelbaum have unique insights into the workings of the technocrats in Iranian, Chinese, Tunisian, Syrian and other repressive states, and the relationship between censorship and other human rights abuses (for example, when other privacy technologies failed, governments sometimes discovered who was discussing revolution and used that as the basis for torture and murder).

Two thirds of the way through the talk, they broadened the context to talk about the role of American companies in the war waged against privacy and free speech -- SmartFilter (now an Intel subsidiary, and a company that has a long history of censoring Boing Boing) is providing support for Iran's censorship efforts, for example. They talked about how Blue Coat and Cisco produce tools that aren't just used to censor but to spy (all censorware also acts as surveillance technology), and how the spying directly leads to murder and rape and torture.

Then, they talked about the relationship between corporate networks and human rights abuses. Iran, China, and Syria, they say, lack the resources to run their own censorship and surveillance R&D projects, and on their own, they don't present enough of a market to prompt Cisco to spend millions to develop such a thing. But when a big company like Boeing decides to pay Cisco millions and millions of dollars to develop censorware to help it spy on its employees, the world's repressive governments get their R&D subsidized, and Cisco gets a product it can sell to them.

They concluded by talking about how Western governments' insistence on "lawful interception" backdoors in network equipment means that off-the-shelf network gear is ready-made for spying; so, again, the Syrian secret police and Iran's telecom spies don't need to order custom technology to spy on their people, because an American law, CALEA, made it mandatory that this capability be included in all the gear sold in the USA.

If you care at all about the future of free speech, democracy, and privacy, this is an absolute must-see presentation.

Iran blocked Tor handshakes using Deep Packet Inspection (DPI) in January 2011 and September 2011. Bluecoat tested out a Tor handshake filter in Syria in June 2011. China has been harvesting and blocking IP addresses for both public Tor relays and private Tor bridges for years.

Roger Dingledine and Jacob Appelbaum will talk about how exactly these governments are doing the blocking, both in terms of what signatures they filter in Tor (and how we've gotten around the blocking in each case), and what technologies they use to deploy the filters -- including the use of Western technology to operate the surveillance and censorship infrastructure in Tunisia (Smartfilter), Syria (Bluecoat), and other countries. We'll cover what we've learned about the mindset of the censor operators (who in many cases don't want to block Tor because they use it!), and how we can measure and track the wide-scale censorship in these countries. Last, we'll explain Tor's development plans to get ahead of the address harvesting and handshake DPI arms races.
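If you're curious whether Tor traffic makes it out of the network you're on, a rough sketch (it assumes a local Tor client with its SOCKS port on 127.0.0.1:9050 and the requests library installed with SOCKS support, e.g. pip install requests[socks]) is to fetch the Tor Project's check page through the proxy:

import requests

# Route the request through a locally running Tor client's SOCKS proxy.
# socks5h resolves DNS through Tor as well, so lookups aren't leaked or filtered.
PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

try:
    resp = requests.get("https://check.torproject.org/", proxies=PROXIES, timeout=60)
    if "Congratulations" in resp.text:
        print("This connection is exiting through the Tor network")
    else:
        print("Reached the check page, but not via Tor")
except requests.RequestException as exc:
    print("Could not reach Tor from this network:", exc)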

How governments have tried to block Tor

Linguistics, Turing Completeness, and teh lulz


Yesterday's keynote at the 28th Chaos Communication Congress (28C3) by Meredith Patterson, "The Science of Insecurity," was a tour-de-force account of the formal linguistics and computer science that explain why software becomes insecure, and of how security can be dramatically increased. What's more, Patterson's slides were outstanding Rageface-meets-Occupy memeshopping. Both the video and the slides are online already.

Hard-to-parse protocols require complex parsers. Complex, buggy parsers become weird machines for exploits to run on. Help stop weird machines today: Make your protocol context-free or regular!

Protocols and file formats that are Turing-complete input languages are the worst offenders, because for them, recognizing valid or expected inputs is UNDECIDABLE: no amount of programming or testing will get it right.

A Turing-complete input language destroys security for generations of users. Avoid Turing-complete input languages!
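In practice the slogans cash out as "recognize the whole input before you act on any of it." Here's a minimal sketch of that discipline; the wire format is invented for illustration, and the point is only that a regular language can be fully decided up front:

import re

# Hypothetical wire format: "SET <key>=<decimal>" and nothing else.
# Because the language is regular, a single full match decides validity;
# there is no complex parsing step for crafted input to exploit.
MESSAGE = re.compile(r"SET [a-z]{1,16}=[0-9]{1,8}")

def handle(raw):
    try:
        text = raw.decode("ascii")
    except UnicodeDecodeError:
        return None
    if MESSAGE.fullmatch(text) is None:
        return None  # reject before any processing happens
    key, value = text[4:].split("=")
    return key, int(value)

print(handle(b"SET volume=11"))      # ('volume', 11)
print(handle(b"SET volume=11; rm"))  # None: not in the language, never processed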

Patterson's co-authors on the paper were her late husband, Len Sassaman (eulogized here) and Sergey Bratus.

LANGSEC explained in a few slogans

Stratfor hacked; clients and credit card numbers exposed

Intelligence and security research group Stratfor was hacked Saturday, and a list of clients, personal information, and credit card numbers was purloined from its servers.

Having exposed the group's customers, the hackers apparently used the card numbers to make donations to the Red Cross and other charities.

The New York Times' Nicole Perlroth writes that the attack was also likely intended to embarrass Stratfor. She ends with a curious quote from Jerry Irvine, a member of the Department of Homeland Security's cybersecurity task force:

“The scary thing is that no matter what you do, every system has some level of vulnerability,” says Jerry Irvine, a member of the National Cyber Security Task Force. “The more you do from an advanced technical standpoint, the more common things go unnoticed. Getting into a system is really not that difficult.”

Sure, if it's a web server, exposed to the public by design.

But Stratfor didn't just expose a website to the public. It also, apparently, put all this other stuff online, in the clear, for the taking.

It's true that websites are like storefronts, and that it's more or less impossible to stop determined people from blocking or defacing them now and again.

Here, however, it looks like Stratfor left private files in the window display, waiting to be grabbed by the first guy to put a brick through the glass.

Now, I'm not a member of the national IT security planning task force. But I'm pretty sure that putting unencrypted lists of credit card numbers and client details on public-facing servers isn't quite explained by "no matter what you do, every system has some level of vulnerability."
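For contrast, making stored card data useless to a thief isn't exotic. A minimal sketch (it uses the third-party cryptography package, and it deliberately hand-waves key management, which in real deployments means a secrets manager or HSM rather than a key sitting next to the data):

# Requires the third-party `cryptography` package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in reality, loaded from a secrets store or HSM
vault = Fernet(key)

# The canonical Visa test number, not a real card.
token = vault.encrypt(b"4111111111111111")
print(token)                  # the ciphertext is what gets written to disk

print(vault.decrypt(token))   # only code holding the key can recover the number

A break-in that grabs the files then yields ciphertext instead of a ready-made donor list for the Red Cross.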

UPDATE: One Anon claims that the hack was not the work of Anonymous. However, the usual caveats apply: no structure, no official channels, no formal leaders or spokespersons.

Walk through an airport with Bruce Schneier

Vanity Fair's Charles C. Mann walked through Reagan National Airport with Bruce Schneier, noting all the ways in which "security" adds expense and inconvenience without making us safer. By the end of the trip, he concluded:

To walk through an airport with Bruce Schneier is to see how much change a trillion dollars can wreak. So much inconvenience for so little benefit at such a staggering cost. And directed against a threat that, by any objective standard, is quite modest. Since 9/11, Islamic terrorists have killed just 17 people on American soil, all but four of them victims of an army major turned fanatic who shot fellow soldiers in a rampage at Fort Hood. (The other four were killed by lone-wolf assassins.) During that same period, 200 times as many Americans drowned in their bathtubs. Still more were killed by driving their cars into deer. The best memorial to the victims of 9/11, in Schneier’s view, would be to forget most of the “lessons” of 9/11. “It’s infuriating,” he said, waving my fraudulent boarding pass to indicate the mass of waiting passengers, the humming X-ray machines, the piles of unloaded computers and cell phones on the conveyor belts, the uniformed T.S.A. officers instructing people to remove their shoes and take loose change from their pockets. “We’re spending billions upon billions of dollars doing this—and it is almost entirely pointless. Not only is it not done right, but even if it was done right it would be the wrong thing to do.”

Smoke Screening (via Kottke)