Dropbox has some genuinely great security reporting guidelines, but reserves the right to jail you if you disagree

Dropbox has published a set of guidelines for how companies can "encourage, support, and celebrate independent open security research" — and they're actually pretty great: a set of reasonable commitments to take bug reports seriously and to interact respectfully with researchers.

That's very important, because laws like Section 1201 of the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act impose potential criminal and civil liability on security researchers who investigate and disclose bugs without manufacturer approval. This is bad news, because companies have a long history of suppressing bug reports, deceptively minimizing their impact, and allowing defects to linger while they are being actively exploited by bad guys, who get to extend the useful lives of their attacks because the public isn't made aware that those attacks even exist.

Recently, a broad tech-industry coalition proposed that security researchers should always be subject to a manufacturer's veto over true facts about defective products.

Dropbox's position, however reasonable in many of its aspects, is woefully deficient, because the company reserves the right to invoke DMCA 1201 and/or the CFAA and other tools that give companies the power to choose who can say true things about mistakes they've made.

This is not normal. Before DRM in embedded software and cloud connectivity became routine, there were no restrictions on who could utter true words about defects in a product. The Copyright Office has weighed in to say that it doesn't consider this use of the DMCA legitimate (but it is unable to change the rules, because its statutory authority does not extend to permitting security researchers to release proof-of-concept code).

Dropbox starts from the admirable position of lamenting the fact that companies have arrogated to themselves the power to silence whistleblowers who report dangerous product defects — but the actual terms they propose say that the problem isn't silencing whistleblowers, it's unfairly silencing whistleblowers. By reserving the right to sue security researchers for telling the truth in inconvenient ways, Dropbox is treating the power to censor as a feature, not a bug — and differing from the companies they decry for bullying only in the particulars of when the power to censor should be invoked, not whether that power is legitimate in the first place.

I think Dropbox's heart is in the right place here, and I hope they'll take this criticism onboard by way of a friendly amendment. Neither DMCA 1201 nor the CFAA was crafted to give companies a say in who can warn the public about mistakes they've made. It is never legitimate to use these laws that way. A best-of-breed vulnerability disclosure program should demonstrate good faith by covenanting never to invoke them to punish security disclosures — not even when a security researcher ignores your guidelines.

From Dropbox's announcement:

Looking at our own VDP, we realized we could do better, and immediately committed to making it best-of-breed. Our updated VDP contains the following elements:

1. A clear statement that external security research is welcomed.

2. A pledge not to initiate legal action for security research conducted pursuant to the policy, including good-faith, accidental violations.

3. A clear statement that we consider actions consistent with the policy as constituting "authorized" conduct under the Computer Fraud and Abuse Act (CFAA).

4. A pledge that we won't bring a Digital Millennium Copyright Act (DMCA) action against a researcher for research consistent with the policy.

5. A pledge that if a third party initiates legal action, Dropbox will make it clear when a researcher was acting in compliance with the policy (and therefore authorized by us).

Protecting Security Researchers [Chris Evans/Dropbox]

(via Four Short Links)