Google and Facebook's "fake news" ban is a welcome nail in the coffin of "software objectivity"

In the wake of the Trump election — a triumph of fake news — both Google and Facebook have announced countermeasures to exclude "fake news" from their services: Facebook will downrank such stories, and Google will cut their publishers off from ad payments.

This is the latest step on a long journey. In its early days, Google was bombarded with complaints about its search-ranking: the proprietors of low-ranked sites insisted that Google had insulted them by not ranking them higher, and wanted to know how they could improve their ranking and thus make more money, command more attention, and increase their influence.

Google's initial response was to hide behind machine objectivity: "Our algorithm has downranked you because your site does not have the mathematical correlates of quality and relevance. You can't argue with math. If you want a higher ranking, make a better site."

This, of course, was not true. The decisions Google makes to assign high scores to certain metrics (lots of inbound links, or lots of clicks, or certain properties of language) and to downrank others (slow loading, inbound links from known "bad" sites, etc) reflect human, editorial judgment. Even where Google uses external, measurable phenomena to tune its system ("people click more when we give them search results like this"), the choice of which phenomena to measure is a human one, and it is no guarantee of objective quality: people might click more because Google has upranked deceptive sites that make alluring, too-good-to-be-true promises, and the users who regret those clicks afterwards are much harder to measure.
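
To make the point concrete, here is a deliberately toy ranking function. This is a hypothetical sketch, not Google's actual algorithm: the metrics, weights, and penalties are all invented for illustration. The editorial judgment lives in the constants; someone decided that links and clicks count as quality signals, and decided how much.

```python
# Hypothetical, deliberately simplified ranking score -- not Google's real
# algorithm. The point is that every constant below is a human choice.

def rank_score(page):
    """Score a page for ranking; higher scores rank higher."""
    score = 0.0
    score += 2.0 * page["inbound_links"]        # someone decided links signal quality
    score += 1.5 * page["click_through_rate"]   # someone decided clicks signal relevance
    score -= 3.0 * page["load_time_seconds"]    # someone decided slow pages deserve a penalty
    if page["linked_from_known_bad_sites"]:     # someone decided which sites count as "bad"
        score -= 10.0
    return score

pages = [
    {"url": "a.example", "inbound_links": 40, "click_through_rate": 0.30,
     "load_time_seconds": 1.2, "linked_from_known_bad_sites": False},
    {"url": "b.example", "inbound_links": 90, "click_through_rate": 0.05,
     "load_time_seconds": 4.0, "linked_from_known_bad_sites": True},
]

for page in sorted(pages, key=rank_score, reverse=True):
    print(page["url"], round(rank_score(page), 2))
```

Change any of those numbers and different sites rise and fall on the results page; tuning them is editing, whether or not a human ever looks at an individual result.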

True or not, it gave Google a readymade and seemingly unassailable response to people who didn't like their search results, which was, basically, everyone who operated a website that didn't appear on the first page of results for its chosen topic; which is to say, the entire world of web publishers, save the statistically irrelevant rounding error's worth of sites that did show up on page one of Google searches.

But this position had a serious flaw: if Google's results are the product of math, not human expression, then there is no free speech interest in protecting those results. The entertainment industry was the first to advance this argument — to governments, not Google — demanding that their sites be top-ranked for searches, and that "pirate" sites be downranked.

Governments were fairly receptive to this argument, and that opened the floodgates. Soon, everyone was in on the act: companies worried about counterfeiting, people who felt they had been defamed, anti-pornography campaigners, campaigners against the sexual abuse of children, "right to be forgotten" claimants embarrassed about newsworthy episodes in their pasts — there is literally no end to the constituencies who feel they have a legitimate interest in tweaking Google's search results.

In 2012, Google had had enough. It commissioned Eugene Volokh, the famed First Amendment scholar, to write a legal white paper explaining that search results were indeed the product of human judgment, and had been all along. The government doesn't barge into the New York Times editorial meeting and dictate which stories will go above the fold of tomorrow's edition; it has no more business telling Google which results come first on a search page.

Facebook is now edging into the same territory, by much the same route. Since its first blush of popularity, Facebook has maintained that its algorithm assembles your feed according to some objective notion of "relevance," not the judgment of its human masters.

Now, with governments rattling their sabers, and the election-haunted public howling for blood, Facebook is moving through the journey Google began in 2012. A few days ago, Zuckerberg was saying, "Facebook can't be the arbiter of truth," and now the company is signalling that it will be just that.

I am as worried as can be that Facebook's war on "fake news" will end up catching a lot of important stories that are not fake: urgent stories, like the North Dakota pipeline tragedy, which keeps getting worse, and which needs real-time attention at key moments lest it become a massacre.

But Facebook isn't moving from "not exercising editorial judgment" to "adding editorial judgment to the algorithm." The algorithm is editorial, and it has been since the first time a programmer looked at the results of a test suite and found them satisfying.

Machine learning is in a crisis. The illusion of algorithmic objectivity is driving racist policing and incarceration under the guise of empirical, evidence-based crime-fighting. Predictive policing, automated sentencing reviews, and other systems of computer-aided judgment are just like Facebook's feed and Google's search ranking: they can be programmed to make good judgments or bad ones, and ultimately the difference is something that humans, not computers, will have to determine.
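
To see why "evidence-based" is not the same as "objective," here is a toy sketch with hypothetical numbers and an invented threshold, resembling no real predictive-policing product. The humans pick what counts as the ground-truth signal and where to draw the decision line, and those picks determine which neighborhoods the system keeps sending police back to.

```python
# Toy illustration, not any real predictive-policing system. The "editorial"
# judgments are the choice of label (arrests vs. surveyed offenses) and the
# patrol threshold, both of which are picked by humans.

historical = {
    # neighborhood: (recorded arrests, offenses estimated from victim surveys)
    "northside": (120, 200),
    "southside": (30, 190),   # similar offending, far fewer recorded arrests
}

LABEL = "arrests"    # human choice: arrests are easy to count, but they measure
                     # where police already patrol, not where crime happens
THRESHOLD = 100      # human choice: above this, send more patrols

for hood, (arrests, offenses) in historical.items():
    signal = arrests if LABEL == "arrests" else offenses
    decision = "more patrols" if signal > THRESHOLD else "no change"
    print(f"{hood}: {decision} (arrests={arrests}, surveyed offenses={offenses})")
```

Trained on arrests, the system sends more police to the neighborhood that was already heavily policed, which generates more arrests there, which confirms the prediction. Swap in a different label or a different threshold and you get a different police force; neither choice is made by the math.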

Facebook could get it horribly wrong with its "fake news" filter, but the problem won't be that Facebook has decided to filter based on content; it will be that its content filter sucks. Facebook, like Google, is a content filter.