Farhad Manjoo (previously) writes in the New York Times about his cautious optimism that the big platforms are finally taking some steps to prevent harassment, but he also worries that this is setting the stage for a new era in tech, one in which the rules guarantee that Big Tech never has to worry about being challenged by upstarts.
The rules that immunized online hosts from liability for their users' illegal speech were key to the growth of tech. It's just not possible to evaluate everything that someone might want to post to a modest comment section (let alone a platform like Twitter or Facebook) for its compliance with all federal statutes regarding speech. If Facebook (or Twitter, or WordPress, or Boing Boing) had had to hire a giant team of lawyers to read every user post before it went live, the web would have just been cable TV: something only giant corporations got to make.
But we're in a new era. Decades of neglect by antitrust regulators have led to a highly concentrated market dominated by a handful of extremely cash-rich companies. Though even they couldn't afford to review and police every utterance by their users, it's within the realm of possibility that they and regulators will strike some kind of deal to spend a few billion (or tens of billions of) dollars on semi-automated filtering systems.
After all, monopolists love regulation, provided it's the kind of regulation they can afford and that potential upstarts who might challenge them in unexpected ways can't.
Big Tech is already dabbling with filters to downrank harassment and other undesirable speech. Sometimes, they even work! But the platforms' position is that they can't tell you how these things work, or bad guys will figure out how to defeat them. That kind of security-by-obscurity is brittle: well-resourced bad guys (think of billionaire-backed trolls aslosh in dark money) can figure out how to game these filters, while actual grassroots activists who get overblocked are out of luck. You can see this playing out in Cambodia right now, where the authoritarian ruling elite uses Facebook's policies to force dissidents to choose between unmasking themselves and facing reprisals, or being kicked off the platform for violating its anti-anonymity rules.
There's no easy answer. Security economics tells us that when there's a lot to be gained by probing systems for vulnerabilities and exploiting them, you'll get well-resourced attackers (think of Nazis exploiting bugs in the systems at progressive hacker cons). That's why some people call Ethereum's smart contracts "million-dollar bug bounties."
The stakes are very high with Big Tech. The platforms have attained a monopoly on public discourse, and exclusion from them — or censorship of your point of view — is very costly indeed. It would be tragic if, in our effort to make the platforms less toxic, we made the stakes even higher (by eliminating the possibility that a platform could be replaced by an upstart), inviting even more powerful, entrenched interests to figure out how to game their rules and systems.
None of these problems are impossible to solve. Tech companies are spending huge sums to improve themselves, and over time they may well come up with innovative new ideas for policing content. For example, Facebook's recent decision to release its data to a group of academic researchers may allow us to one day determine, empirically, what effects its content policies are really having on the world.
Still, in the end we will all be left with a paradox. Even if they're working with outsiders to create these policies, the more these companies do to moderate what happens on their sites, the more important their own policies become to the global discourse.
A lot of people are worried that Mark Zuckerberg is already too powerful. The danger is that we ain't seen nothing yet.
Tech Companies Like Facebook and Twitter Are Drawing Lines. It'll Be Messy. [Farhad Manjoo/New York Times]
(Image: Redi-Rock International, CC-BY)