Market for zero-day vulnerabilities incentivizes programmers to sabotage their own work

In this Forbes editorial, Bruce Schneier points out a really terrible second-order effect of governments and companies buying unpublished vulnerabilities from hackers and keeping them secret so they can use them for espionage and sabotage. As Schneier points out, this doesn't just make us all less secure (EFF calls it "security for the 1%") by leaving so many unpatched flaws for crooks to exploit; it also creates an incentive for software engineers to deliberately introduce flaws into the software they're employed to write, and then sell those flaws to governments and slimy companies.

I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full disclosure — announcing the vulnerability publicly and damn the consequences — to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that — at least in most cases — is patched. And a patched vulnerability makes us all more secure.

This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it’s even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they’re working on — and then secretly sell them to some government agency.

No commercial vendor performs the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.

The Vulnerabilities Market and the Future of Security (via Crypto-gram)


  1. Like making zip guns for a coupla bucks apiece, and selling them to the Cops for $300.00 each, and keeping the real guns on the street. 

  2. One of the underlying problems is those EULAs we all click through that absolve developers of any and all damages that may come from using their software.   Ask yourself: If money can be made by putting intentional faults into security products, why aren’t bank vaults broken into all the time?  The answer is in the liability, as well as the licensing and bonding process that vault makers and locksmiths must go through in order to sell their products and services.

    Of course, that would require regulation.  A naughty word in politics today.

    1. Bank vaults are pretty simple compared to software, yet if they were exposed to everyone on the internet I’d imagine they’d get opened pretty quickly.

      1. Bank vaults, and for that matter any kind of traditional physical security, are as exposed as any Internet-connected computer.  The only reason you perceive them to be less exposed is that vaults tend to be at the center of nested layers of security, and that the difficulty of tackling that security outweighs the benefits of breaking in.  Furthermore, when break-ins do occur, banks are required to inform authorities and the bank’s customers that such a break-in did occur.

        Instead, you have systems where single-point vulnerabilities lay open entire networks to outside access.  Worse, now the developers of those systems (and the software the systems run) have an incentive to ensure they are insecure.  Allowing those developers to limit their legal liabilities to customers only increases that incentive to sell out those same customers.

        1. To take a whack at a bank’s security, you have to actually go there and run the risk of arrest. You can go at whatever server you like from a coffee shop with practically no chance of being identified.

          Moreover, you can usually get a copy of software to attack in the privacy of your own network.

  3. Don’t hate the player…

    (but seriously, this whole thing needs to be taken care of by some serious market regulation.)

  4. I notice you mention governments and businesses paying for zero day vulnerabilities.  But you don’t mention criminal organizations doing the same thing.  Is it that you think the black market for vulnerabilities is not a major part of the market?

    1. Of course they are … they’re /criminals/. But the Govt – if not businesses – are supposed to be looking out for our interests, not looking for new ways to sell us down the river.
