Boing Boing 

Android lets apps secretly access and transmit your photos

Writing in the NYT's BITS section, Brian X. Chen and Nick Bilton describe a disturbing design flaw in Android: apps can access and copy your private photos, without you ever having to grant them permission to do so. Google says this is a legacy of the earlier-model phones that used removable SD cards, but it remains present in current versions. To prove the vulnerability's existence, a company called Loupe made an Android app that, once installed, grabbed your most recent photo and posted it to Imgur, a public photo-sharing site. The app presented itself as a timer, and users who installed it were not prompted to grant access to their files or images. A Google spokesperson quoted in the story describes the problem, suggests that the company would be amenable to fixing it, but does not promise to do so.

Ashkan Soltani, a researcher specializing in privacy and security, said Google’s explanation of its approach would be “surprising to most users, since they’d likely be unaware of this arbitrary difference in the phone’s storage system.” Mr. Soltani said that to users, Google’s permissions system was “akin to buying a car that only had locks on the doors but not the trunk.”

I think that this highlights a larger problem with networked cameras and sensors in general. The last decade of digital sensors -- scanners, cameras, GPSes -- has accustomed us to thinking of these devices as "air-gapped," separated from the Internet, and not capable of interacting with the rest of the world without physical human intervention.

But increasingly these things are networked -- we carry around location-sensitive, accelerometer-equipped A/V recording devices at all times (our phones). Adding network capability to these things means that design flaws, vulnerabilities and malicious code can all conspire to expose us to unprecedented privacy invasions. Unless you're in the habit of not undressing, going to the toilet, having arguments or intimate moments, and other private activities in the presence of your phone, you're at risk of all that leaking online.

It seems to me that neither the devices' designers nor their owners have gotten to grips with this yet. The default should be that our sensors don't broadcast their readings without human intervention. The idea that apps should come with take-it-or-leave-it permissions "requests" for access to your camera, mic, and other sensors is broken. It's your device and your private life. You should be able to control -- at a fine-grained level -- the extent to which apps are allowed to read, store and transmit facts about your life using your sensors.

Et Tu, Google? Android Apps Can Also Secretly Copy Photos

FBI anti-terrorism expert: TSA is useless

Steve Moore, who identifies himself as a former FBI Special Agent and head of the Los Angeles Joint Terrorism Task Force Al Qaeda squad, says that the TSA is useless. He says that they don't catch terrorists. He says they won't catch terrorists. He says that they can't catch terrorists. Oh, he also claims 35 years' piloting experience, a father who was United's head of security, and anti-hijacking SWAT training and experience of his own.

Frankly, the professional experience I have had with TSA has frightened me. Once, when approaching screening for a flight on official FBI business, I showed my badge as I had done for decades in order to bypass screening. (You can be envious, but remember, I was one less person in line.) I was asked for my form which showed that I was armed. I was unarmed on this flight because my ultimate destination was a foreign country. I was told, "Then you have to be screened." This logic startled me, so I asked, "If I tell you I have a high-powered weapon, you will let me bypass screening, but if I tell you I'm unarmed, then I have to be screened?" The answer? "Yes. Exactly." Another time, I was bypassing screening (again on official FBI business) with my .40 caliber semi-automatic pistol, and a TSA officer noticed the clip of my pocket knife. "You can't bring a knife on board," he said. I looked at him incredulously and asked, "The semi-automatic pistol is okay, but you don't trust me with a knife?" His response was equal parts predictable and frightening, "But knives are not allowed on the planes."...

The report goes on to state that the virtual strip search screening machines are a failure in that they cannot detect the type of explosives used by the “underwear bomber” or even a pistol used in TSA’s own real-world tests of the machines. Yet TSA has spent approximately $60 billion since 2002 and now has over 65,000 employees, more than the Department of State, more than the Department of Energy, more than the Department of Labor, more than the Department of Education, more than the Department of Housing and Urban Development -- combined. TSA has become, according to the report, “an enormous, inflexible and distracted bureaucracy more concerned with…consolidating power.”

Each time the TSA is publicly called to account for their actions, they fight back with fear-based press releases which usually begin with “At a time like this…” or “Al Qaeda is planning—at this moment…” The tactic, of course, is to throw the spotlight off the fact that their policies are doing nothing to make America safer “at a time like this.” Sometimes doing the wrong thing is just as bad as doing nothing.

TSA: Fail (via MeFi)

Homeland Security memo warned of violent threat posed by Occupy Wall Street

An October 2011 Department of Homeland Security memo on Occupy Wall Street warned of the potential for violence posed by the "leaderless resistance movement." (via @producermatthew).

Update: Looks like there's a larger Rolling Stone feature on this document:

As Occupy Wall Street spread across the nation last fall, sparking protests in more than 70 cities, the Department of Homeland Security began keeping tabs on the movement. An internal DHS report entitled “SPECIAL COVERAGE: Occupy Wall Street [PDF]," dated October of last year, opens with the observation that "mass gatherings associated with public protest movements can have disruptive effects on transportation, commercial, and government services, especially when staged in major metropolitan areas." While acknowledging the overwhelmingly peaceful nature of OWS, the report notes darkly that "large scale demonstrations also carry the potential for violence, presenting a significant challenge for law enforcement."

Scalable stylometry: can we de-anonymize the Internet by analyzing writing style?

One of the most interesting technical presentations I attended in 2011 was the talk on "adversarial stylometry" given by a Drexel University research team at the 28C3 conference in Berlin. "Stylometry" is the practice of trying to ascribe authorship to an anonymous text by analyzing its writing style; "adversarial stylometry" is the practice of resisting stylometric de-anonymization by using software to remove distinctive characteristics and voice from a text.

Stanford's Arvind Narayanan describes a paper he co-authored on stylometry that has been accepted for the IEEE Symposium on Security and Privacy 2012. In On the Feasibility of Internet-Scale Author Identification (PDF) Narayanan and co-authors show that they can use stylometry to improve the reliability of de-anonymizing blog posts drawn from a large and diverse data-set, using a method that scales well. However, the experimental set was not "adversarial" -- that is, the authors took no countermeasures to disguise their authorship. It would be interesting to see how the approach described in the paper performs against texts that are deliberately anonymized, with and without computer assistance. The summary cites another paper by someone who found that even unaided efforts to disguise one's style make stylometric analysis much less effective.

We made several innovations that allowed us to achieve the accuracy levels that we did. First, contrary to some previous authors who hypothesized that only relatively straightforward “lazy” classifiers work for this type of problem, we were able to avoid various pitfalls and use more high-powered machinery. Second, we developed new techniques for confidence estimation, including a measure very similar to “eccentricity” used in the Netflix paper. Third, we developed techniques to improve the performance (speed) of our classifiers, detailed in the paper. This is a research contribution by itself, but it also enabled us to rapidly iterate the development of our algorithms and optimize them.

In an earlier article, I noted that we don’t yet have as rigorous an understanding of deanonymization algorithms as we would like. I see this paper as a significant step in that direction. In my series on fingerprinting, I pointed out that in numerous domains, researchers have considered classification/deanonymization problems with tens of classes, with implications for forensics and security-enhancing applications, but that to explore the privacy-infringing/surveillance applications the methods need to be tweaked to be able to deal with a much larger number of classes. Our work shows how to do that, and we believe that insights from our paper will be generally applicable to numerous problems in the privacy space.
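The core idea behind stylometric attribution -- represent each author as a vector of stylistic-feature frequencies, then match an anonymous text to the nearest candidate -- can be sketched in a few lines. This is a toy illustration under my own assumptions (a tiny set of function words, cosine similarity), not the paper's far more sophisticated classifier; every name below is hypothetical.

```python
from collections import Counter
from math import sqrt

# Hypothetical toy feature set; real stylometry uses hundreds of features
# (function words, character n-grams, syntax) over much longer texts.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def profile(text):
    """Frequency vector of common function words, normalized by text length."""
    words = text.lower().split()
    counts = Counter(w for w in words if w in FUNCTION_WORDS)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute(anonymous_text, candidates):
    """Guess which candidate author (name -> known writing) is stylistically closest."""
    anon = profile(anonymous_text)
    return max(candidates, key=lambda name: cosine(anon, profile(candidates[name])))
```

Adversarial stylometry, in these terms, is the art of editing a text until its `profile` no longer resembles your own.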

Is Writing Style Sufficient to Deanonymize Material Posted Online? (via Hack the Planet)

Dan Kaminsky on the RSA key-vulnerability

Dan Kaminsky sez,

There's been a lot of talk about some portion of the RSA keys on the Internet being insecure, with "2 out of every 1000 keys being bad". This is incorrect, as the problem is not equally likely to exist in every class of key on the Internet. In fact, the problem seems to only show up on keys that were already insecure to begin with -- those that pop errors in browsers for either being unsigned or expired. Such keys are simply not found on any production website on the web, but they are found in high numbers in devices such as firewalls, network gateways, and voice over IP phones.

It's tempting to discount the research entirely. That would be a mistake. Certainly, what we generally refer to as "the web" is unambiguously safe, and no, there's nothing particularly special about RSA that makes it uniquely vulnerable to a faulty random number generator. But it is extraordinarily clear now that a massive number of devices, even those purportedly deployed to make our networks safer, are operating completely without key management. It doesn't matter how good your key is if nobody can recognize it as yours. DNSSEC will do a lot to fix that. It is also clear that random number generation on devices is extremely suspect, and that this generic attack that works across all devices is likely to be followed up by fairly devastating attacks against individual makes and models. This is good and important research, and it should compel us to push for new and interesting mechanisms for better randomness. Hardware random number generators are the gold standard, but perhaps we can exploit the very small differences between clocks in devices and PCs to approximate what they offer.
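Kaminsky's closing idea -- approximating hardware randomness from tiny clock differences -- can be crudely illustrated by sampling a high-resolution timer in a tight loop and whitening the jittery low bits through a hash. This is strictly an illustration under my own assumptions, nowhere near a vetted entropy source; all names below are hypothetical.

```python
import hashlib
import time

def jitter_bytes(samples=2048):
    """Collect low-order bits of a high-resolution timer in a tight loop.
    Scheduler noise and clock drift make the low bits hard to predict,
    but this is a crude sketch only, not a production entropy source."""
    pool = bytearray()
    for _ in range(samples):
        pool.append(time.perf_counter_ns() & 0xFF)  # keep the noisiest byte
    # Whiten the biased raw samples through a hash, as real collectors do.
    return hashlib.sha256(bytes(pool)).digest()
```

Real systems mix many such sources and measure the entropy rather than assume it; hardware generators remain the gold standard Kaminsky mentions.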

Primal Fear: Demuddling The Broken Moduli Bug (Thanks, Dan!)

WSJ: Google caught circumventing iPhone security, tracking users who opted out of third-party cookies

Google has been caught circumventing iOS's built-in anti-ad-tracking features in order to add Google Plus functionality within the iPhone's Safari browser. The WSJ reports that Google overrode users' privacy settings in order to allow messages like "your friend Suzy +1'ed this ad about candy" to be relayed between Google's different domains, including its Doubleclick ad domain. This also meant that Doubleclick was tracking every page you landed on with a Doubleclick ad, even if you'd opted out of its tracking.

I believe that Google has created an enormous internal urgency about Google Plus integration, and that this pressure is leading the company to take steps to integrate G+ at the expense of the quality of its other services. Consider the Focus on the User critique of Google's "social ranking" in search results, for example. In my own life, I've been immensely frustrated that my unpublished Gmail account (which I only use to anchor my Android Marketplace purchases for my phone and tablets, and to receive a daily schedule email while I'm travelling) has somehow become visible to G+ users, so that I get many, many G+ updates and invites to this theoretically private address, every day, despite never having opted into a directory and never having joined G+.

In the iPhone case, it's likely that Google has gone beyond lowering the quality of its service for its users and customers, and has now started to violate the law, and certainly to undermine the trust that the company depends on. This is much more invasive than the time Google accidentally captured some WiFi traffic and didn't do anything with it, much more invasive than Google taking pictures of publicly visible buildings -- both practices that drew enormous and enduring criticism at the expense of the company's global credibility. I wonder if this will cause the company to slow its full-court press to make G+ part of every corner of Google.

EFF has an open letter to Google, asking them to make amends for this:

It’s time for a new chapter in Google’s policy regarding privacy. It’s time to commit to giving users a voice about tracking and then respecting those wishes.

For a long time, we’ve hoped to see Google respect Do Not Track requests when it acts as a third party on the Web, and implement Do Not Track in the Chrome browser. This privacy setting, available in every other major browser, lets users express their choice about whether they want to be tracked by mysterious third parties with whom they have no relationship. And even if a user deleted her cookies, the setting would still be there.

Right now, EFF, Google, and many other groups are involved in a multi-stakeholder process to define the scope and execution of Do Not Track through the Tracking Protection Working Group. Through this participatory forum, civil liberties organizations, advertisers, and leading technologists are working together to define how Do Not Track will give users a meaningful way to control online tracking without unduly burdening companies. This is the perfect forum for Google to engage on the technical specifications of the Do Not Track signal, and an opportunity to bring all parties together to fight for user rights. While the Do Not Track specification is not yet final, there's no reason to wait. Google has repeatedly led the way on web security by implementing features long before they were standardized. Google should do the same with web privacy. Get started today by linking Do Not Track to your existing opt-out mechanisms for advertising, +1, and analytics.

Google, make this a new era in your commitment to defending user privacy. Commit to offering and respecting Do Not Track.

Google Circumvents Safari Privacy Protections - This is Why We Need Do Not Track

Bruce Schneier's Liars and Outliers: how do you trust in a networked world?

John Scalzi's Big Idea introduces Bruce Schneier's excellent new book Liars and Outliers, and interviews Schneier on the work that went into it. I read an early draft of the book and supplied a quote: "Brilliantly dissects, classifies, and orders the social dimension of security: a spectacularly palatable tonic against today's incoherent and dangerous flailing in the face of threats from terrorism to financial fraud." Now that the book is out, I heartily recommend it to you.

It’s all about trust, really. Not the intimate trust we have in our close friends and relatives, but the more impersonal trust we have in the various people and systems we interact with in society. I trust airline pilots, hotel clerks, ATMs, restaurant kitchens, and the company that built the computer I’m writing this short essay on. I trust that they have acted and will act in the ways I expect them to. This type of trust is more a matter of consistency or predictability than of intimacy.

Of course, all of these systems contain parasites. Most people are naturally trustworthy, but some are not. There are hotel clerks who will steal your credit card information. There are ATMs that have been hacked by criminals. Some restaurant kitchens serve tainted food. There was even an airline pilot who deliberately crashed his Boeing 767 into the Atlantic Ocean in 1999.

My central metaphor is the Prisoner’s Dilemma, which nicely exposes the tension between group interest and self-interest. And the dilemma even gives us a terminology to use: cooperators act in the group interest, and defectors act in their own selfish interest, to the detriment of the group. Too many defectors, and everyone suffers — often catastrophically.

Liars and Outliers: Enabling the Trust that Society Needs to Thrive

Prime Suspect, or Random Acts of Keyness

The foundation of Web security rests on the notion that two very large prime numbers, numbers divisible only by themselves and 1, once multiplied together, are irreducibly difficult to tease back apart.
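That asymmetry can be seen with toy numbers: multiplying is instant at any size, while even the simplest factoring method below, trial division, would be hopeless against the roughly 300-digit moduli used in real RSA keys. A minimal sketch, with hypothetical helper names:

```python
def trial_factor(n):
    """Recover p, q from an odd n = p*q by trial division -- fine for toy
    numbers, utterly infeasible for real ~1024-bit RSA moduli."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("no nontrivial odd factor found")

p, q = 999_983, 1_000_003        # toy primes; real keys use ~512-bit primes each
n = p * q                        # the "easy" direction: one multiplication
assert trial_factor(n) == (p, q) # the "hard" direction, tractable only because n is tiny
```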


Transparency Grenade: a grenade-shaped surveillance device for smoke-filled rooms

Julian Oliver's "Transparency Grenade" is a surveillance device shaped like a Soviet F1 Hand Grenade, stuffed with network sniffers and other technology. It is intended to be hidden in smoke-filled rooms where secretive and corrupt meetings are taking place, so that all the material therein can be widely viewed.

Most importantly, however, it is the hyperbole and fear around containing these volatile records -- of the cyber burglary -- that increasingly yield assumptive logics that ultimately shape how we use networks and think about the right to information. Just as record companies claim billions in losses due to file sharing, the fear of the leak is being actively exploited by lawmakers to afford organisations greater opacity and thus control.

This anxiety, this 'network insecurity', impacts not just the freedom of speech but the felt instinct to speak at all. All of a sudden, letting the public know what's going on inside a publicly funded organisation is somehow 'wrong' -- Bradley Manning a sacrificial lamb to that effect. Meanwhile, civil servants and publicly-owned companies continue to make decisions behind guarded doors that impact the lives of many, whether human or other animal.

The Transparency Grenade (We Make Money Not Art)

Transparency Grenade (project page)

What's the social cost of making it harder to get Sudafed?

Writing in The Atlantic, Megan McArdle analyzes the societal cost of requiring a doctor's visit to get a prescription for Sudafed, in order to make it harder to acquire materials used in fabricating meth. She makes a compelling case that, as bad as meth labs are, and as much as they cost society, cracking down on basic, useful medicine also entails horrendous expense.

But this is sort of a side issue. What really bothers me is the way that Humphreys--and others who show up in the comments--regard the rather extraordinary cost of making PSE prescription-only as too trivial to mention.

Let's return to those 15 million cold sufferers. Assume that on average, they want one box a year. That's going to require a visit to the doctor. At an average copay of $20, their costs alone would be $300 million a year, but of course, the health care system is also paying a substantial amount for the doctor's visit. The average reimbursement from private insurance is $130; for Medicare, it's about $60. Medicaid pays less, but that's why people on Medicaid have such a hard time finding a doctor. So average those two together, and add the copays, and you've got at least $1.5 billion in direct costs to obtain a simple decongestant. But that doesn't include the hassle and possibly lost wages for the doctor's visits. Nor the possible secondary effects of putting more demands on an already none-too-plentiful supply of primary care physicians.

Of course, those wouldn't be the real costs, because lots of people wouldn't be able to take the time for a doctor's visit. So they'd just be more miserable while their colds last. What's the cost of that--in suffering, in lost productivity?

Perhaps it would be simpler to just raise the price of a box of Sudafed to $100. Surely that would make meth labs unprofitable--and save us the annoyance of a doctor's visit.
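McArdle's back-of-the-envelope figures are easy to reproduce; the inputs below are the article's own assumptions (15 million sufferers, one doctor's visit each, a $20 average copay, $130 private and $60 Medicare reimbursements):

```python
cold_sufferers = 15_000_000            # one box, hence one doctor's visit, per year
copay = 20                             # average copay, dollars
private_reimbursement = 130            # average private-insurance payment per visit
medicare_reimbursement = 60            # average Medicare payment per visit

copay_total = cold_sufferers * copay
# Rough average of the two reimbursement rates, per the article
avg_reimbursement = (private_reimbursement + medicare_reimbursement) / 2
direct_cost = copay_total + cold_sufferers * avg_reimbursement

assert copay_total == 300_000_000      # the article's "$300 million a year" in copays
assert direct_cost >= 1_500_000_000    # "at least $1.5 billion in direct costs"
```

And that total still excludes lost wages, hassle, and the strain on primary care that the article goes on to mention.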

Do We Need Even Tighter Controls on Sudafed? (via Schneier)

(Image: Project 365 #121: 010509 The Spy That Came In With A Cold, a Creative Commons Attribution (2.0) image from comedynose's photostream)

Involuntary transparency for Canada's spying-bill MP

Vic Toews is the controversial Canadian Minister of Public Safety whose spying bill will require ISPs to log and retain an enormous amount of your online activity, and then make that available to police without a warrant. Yesterday, Toews drew criticism when he said that opponents of his bill "stand with child pornographers." Today an anonymous party has created a Vikileaks Twitter account that is publishing embarrassing personal details culled from affidavits filed in Mr Toews's divorce, saying, "Vic wants to know about you. Let's get to know about Vic." It's not clear to me whether these affidavits were under seal, or part of the public record (they seem to come from this case: FEHR, LORRAINE K. vs TOEWS, VICTOR E. (FD08-01-86932), Manitoba Court of Queen's Bench). This is an awfully ugly tactic and will likely be counterproductive. It does demonstrate that once material is stored, it is likely to leak, and that the best way to protect private information is to refrain from gathering and aggregating it in the first place. Update: looks like publishing court records is kosher in Manitoba.

EFF: Tens of thousands of websites' SSL "offers effectively no security"

The Electronic Frontier Foundation's SSL Observatory is a research project that gathers and analyzes the cryptographic certificates used to secure Internet connections, systematically cataloging them and exposing their database for other scientists, researchers and cryptographers to consult.

Now Arjen Lenstra of École polytechnique fédérale de Lausanne has used the SSL Observatory dataset to show that tens of thousands of SSL certificates "offer effectively no security due to weak random number generation algorithms." Lenstra's research means that much of what we think of as gold-standard, rock-solid network security is deeply flawed, but it also means that users and website operators can detect and repair these vulnerabilities.

While we have observed and warned about vulnerabilities due to insufficient randomness in the past, Lenstra's group was able to discover more subtle RNG bugs by searching not only for keys that were unexpectedly shared by multiple certificates, but for prime factors that were unexpectedly shared by multiple publicly visible public keys. This application of the 2,400-year-old Euclidean algorithm turned out to produce spectacular results.

In addition to TLS, the transport layer security mechanism underlying HTTPS, other types of public keys were investigated that did not use EFF's Observatory data set, most notably PGP. The cryptosystems that underlay the full set of public keys in the study included RSA (which is the most common class of cryptosystem behind TLS), ElGamal (which is the most common class of cryptosystem behind PGP), and several others in smaller quantities. Within each cryptosystem, various key strengths were also observed and investigated, for instance RSA 2048 bit as well as RSA 1024 bit keys. Beyond shared prime factors, there were other problems discovered with the keys, which all appear to stem from insufficient randomness in generating the keys. The most prominently affected keys were RSA 1024 bit moduli. This class of keys was deemed by the researchers to be only 99.8% secure, meaning that 2 out of every 1000 of these RSA public keys are insecure. Our first priority is handling this large set of tens of thousands of keys, though the problem is not limited to this set, or even to just HTTPS implementations.

We are very alarmed by this development. In addition to notifying website operators, Certificate Authorities, and browser vendors, we also hope that the full set of RNG bugs that are causing these problems can be quickly found and patched. Ensuring a secure and robust public key infrastructure is vital to the security and privacy of individuals and organizations everywhere.
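The shared-factor search the post describes reduces to Euclid's 2,400-year-old algorithm: if two moduli happen to share a prime, their GCD reveals it instantly. A toy sketch, with small primes standing in for the 512-bit primes inside real 1024-bit moduli:

```python
from math import gcd

# Two toy "public keys" whose moduli accidentally share a prime factor,
# as happens when devices generate keys from poor randomness.
shared_p = 999_983                    # both devices happened to pick this prime
n1 = shared_p * 1_000_003             # device 1's modulus
n2 = shared_p * 2_147_483_647         # device 2's modulus

# Euclid's algorithm exposes the shared factor -- and with it both private
# keys -- even though factoring either modulus alone is hard at real sizes.
common = gcd(n1, n2)
assert common == shared_p
assert (n1 // common, n2 // common) == (1_000_003, 2_147_483_647)
```

This is why the researchers could scan millions of publicly visible keys: pairwise GCDs are cheap, even when factoring is not.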

Researchers Use EFF's SSL Observatory To Discover Widespread Cryptographic Vulnerabilities

Studios winning the battle to stop Oscar screeners from leaking; losing the war

For ten years, former Kickstarter CTO Andy Baio has been compiling his "Pirating the Oscars" reports, which document which Oscar-nominated movies are available as downloads on P2P and other file-sharing services, measuring how effective the studios are at controlling leaks of "screeners" -- DVDs sent to members of the Academy for review consideration. This year marks a turning point for the industry, as it ends a three-year-long trend of increased screener leaks.

However, Baio says, the studios have "won the battle and lost the war," as this year also marks the first year that 92 percent of the nominated films were "available as high-quality DVD or Blu-ray rips." As Baio notes, "If the goal of blocking leaks is to keep the films off the internet, then the MPAA still has a long way to go."

But the MPAA may have little to do with the decline. Oscar-nominated films could be coming out earlier in the year, making screeners less important.

Or maybe the interests of the mainstream downloader and industry favorites are diverging? If the Oscars are mostly arthouse fare and critical darlings, but with low gross receipts, they'll be less desirable to leak online. It would be very interesting to track the historical box office performance of nominees to see how it affects downloading. (Maybe next year!)

The continuously shrinking window between theatrical and retail releases may be to blame. After all, once the retail Blu-ray or DVD is released, there's no reason for pirate groups to release a lower-quality watermarked screener.

Pirating the Oscars 2012: Ten Years of Data

Yes Men to keynote this summer's Hackers on Planet Earth conference in NYC

2600 Magazine's Emmanuel Goldstein writes, "One of the keynote addresses at this year's HOPE conference (July 13-15, NYC) will be given by The Yes Men. They're a natural fit at the biennial hacker conference, which will be featuring highly technical talks along with socially relevant ones. The Yes Men have done a great deal to challenge the system and publicize injustice through the art of social engineering over the years, fooling the mass media into believing they represented the World Trade Organization, Dow Chemical, the United States Chamber of Commerce, and more. Their words at HOPE ought to give a lot of people ideas on how they might be able to change the world."


(Image: "The Yes Men" as Exxon Mobil executives, a Creative Commons Attribution (2.0) image from itzafineday's photostream)

Tradecraft of a "mercenary hacker" who supplies 1%ers, crooks, and jealous spouses

Gawker has a profile of "Martin," a "mercenary hacker" who provides IT security consulting to millionaires, crooks, cheating spouses (or spouses who suspect their other halves of cheating) and so on. Martin's tradecraft -- rotating SIM cards using pill-sorters labelled for each day of the week and the like -- would be moderately effective against an unskilled attacker, but it seems to me that it wouldn't survive an advanced persistent threat like a government or a major spy agency. For example, he instructs his clients to use "dumb" candybar phones instead of smartphones, which, on the surface, has some logic to it (smartphones are more complex, so they have more attack-surface). But the crypto in wireless telephony is junk, so anyone with a little smarts and the capacity to follow a recipe they find on the Internet can build interception equipment that would allow them to listen in on the calls from such a phone. On the other hand, a smartphone allows users to overlay their own, industry-grade crypto for voice and SMS communications.

Likewise, Martin has his customers rotate SIMs every day, but reuses the SIMs every 14 days. This does require adversaries to acquire fourteen times more numbers and intercept them, but that, in and of itself, is not that challenging (if you can wiretap one number, you can wiretap 14, too). Especially as the phones maintain the same IMEI -- the hardcoded serial number that is sent along with the phone signalling information, which uniquely identifies a handset regardless of what number it's using. Again, this is where a smartphone would help, as a sufficiently rooted phone can be instructed to spoof its IMEI with each call, or on some other rotating basis.

Martin also provides "search-engine optimization" -- gaming FourSquare to boost the apparent popularity of a club, gaming YouTube to falsely increment the view-counter -- and he'll install a keylogger on a phone or computer for you, or sell you hidden wireless mics and cameras.

With Martin's system, each crewmember gets a cell phone that operates using a prepaid SIM card; they also get a two-week plastic pill organizer filled with 14 SIM cards where the pills should be. Each SIM card, loaded with $50 worth of airtime, is attached to a different phone number and stores all contacts, text messages and call histories associated with that number, like a removable hard drive. This makes a new SIM card effectively a new phone. Every morning, each crewmember swaps out his phone's card for the card in next day's compartment in the pill organizers. After all 14 cards are used, they start over at the first one.

Of course, it would be hugely annoying for a crewmember to have to remember the others' constantly changing numbers. But he doesn't have to, thanks to the pill organizers. Martin preprograms each day's SIM card with the phone numbers the other members have that day. As long they all swap out their cards every day, the contacts in the phones stay in sync. (They never call anyone but each other on the phones.) Crewmembers will remind each other to "take their medicine," Martin said.

Not only does Martin's system make wiretapping difficult, Martin claims it can protect the group if a phone gets compromised. If authorities snatch or tap a phone from Martin's system, they'll have access to only 1/14th of the entire network. The crew can just replace their SIM cards from that day in the pill organizer, assured that the other 13 of their SIM cards are still secure.
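The synchronization property that makes the scheme work -- everyone advancing one SIM slot per day, so each day's preprogrammed contacts stay valid -- can be modeled in a few lines. All names and numbers below are hypothetical, a sketch of the described protocol rather than anything Martin actually ships:

```python
CYCLE = 14  # slots in the pill organizer

# sims[member][slot] is the phone number attached to that member's SIM
# for that slot; a new SIM is effectively a new phone.
sims = {m: [f"+1555{m}{slot:02d}" for slot in range(CYCLE)] for m in range(3)}

def number_for(member, day):
    """The number a crewmember answers on a given day (slot = day mod 14)."""
    return sims[member][day % CYCLE]

def contacts_for(member, day):
    """What gets preprogrammed into that day's SIM: everyone else's number
    for the SAME day -- which is why lockstep daily swapping matters."""
    return {m: number_for(m, day) for m in sims if m != member}

# While everyone swaps daily, stored contacts stay correct...
assert contacts_for(0, day=5)[1] == number_for(1, day=5)
# ...but a member who forgets to swap is unreachable at the stored number:
assert contacts_for(0, day=5)[1] != number_for(1, day=4)
```

The model also shows the claimed damage limit: seizing one day's SIM exposes only one of the fourteen slots per member.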

The Mercenary Techie Who Troubleshoots for Drug Dealers and Jealous Lovers (via Kottke)

What decryption orders mean for the Fifth Amendment

A federal judge in Colorado recently handed down a ruling that forced a defendant to decrypt her laptop hard drive, despite the Fifth Amendment's stricture against compelling people to testify against themselves. The Electronic Frontier Foundation's Marcia Hofmann has commentary on the disappointing ruling:

In the order issued yesterday, the court dodged the question of whether requiring Fricosu to type a passphrase into the laptop would violate the Fifth Amendment. Instead, it ordered Fricosu to turn over a decrypted version of the information on the computer. While the court didn't hold that Fricosu has a valid Fifth Amendment privilege not to reveal that data, it seemed to implicitly recognize that possibility. The court both points out that the government offered Fricosu immunity for the act of production and forbids the government from using the act of production against her. We think Fricosu not only has a valid privilege against self-incrimination, but that the immunity offered by the government isn't broad enough to invalidate it. Under Supreme Court precedent, the government can't use the act of production or any evidence it learns as a result of that act against Fricosu.

The court then found that the Fifth Amendment "is not implicated" by requiring Fricosu to turn over the decrypted contents of the laptop, since the government independently learned facts suggesting that Fricosu had possession and control over the computer. Furthermore, according to the court, "there is little question here but that the government knows of the existence and location of the computer's files. The fact that it does not know the specific content of any specific documents is not a barrier to production." We disagree with this conclusion, too. Neither the government nor the court can say what files the government expects to find on the laptop, so there is testimonial value in revealing the existence, authenticity and control over that specific data. If Fricosu decrypts the data, the government could learn a great deal it didn't know before.

In sum, we think the court got it wrong.

Disappointing Ruling in Compelled Laptop Decryption Case

Parents' snooping teaches kids to share their passwords with each other

Matt Richtel's recent NYT article on teenagers who share their Facebook passwords as a show of affection has raised alarms with parents and educators who worry about the potential for bullying and abuse.

But as danah boyd points out, the practice of password-sharing didn't start with kids: it started with parents, who required their kids to share their passwords with them. Young kids have to share their passwords because they lose them, and older kids are made to share their passwords because their parents want to snoop on them. Basically, you can't tell kids that they must never, ever share their passwords while simultaneously requiring them to share those passwords with you.

There are different ways that parents address the password issue, but they almost always build on the narrative of trust. (Tangent: My favorite strategy is when parents ask children to put passwords into a piggy bank that must be broken for the paper with the password to be retrieved. Such parents often explain that they don’t want to access their teens’ accounts, but they want to have the ability to do so “in case of emergency.” A piggy bank allows a social contract to take a physical form.)

When teens share their passwords with friends or significant others, they regularly employ the language of trust, as Richtel noted in his story. Teens are drawing on experiences they’ve had in the home and shifting them into their peer groups in order to understand how their relationships make sense in a broader context. This shouldn’t be surprising to anyone because this is all-too-common for teen practices. Household norms shape peer norms.

There’s another thread here that’s important. Think back to the days in which you had a locker. If you were anything like me and my friends, you gave out your locker combination to your friends and significant others. There were varied reasons for doing so. You wanted your friends to pick up a book for you when you left early because you were sick. You were involved in a club or team where locker decorating was common. You were hoping that your significant other would leave something special for you.

How Parents Normalized Teen Password Sharing

(Image: Swordfish, a Creative Commons Attribution Share-Alike (2.0) image from ideonexus's photostream)