Gilad Lotan — our favorite fake-news-sleuthing data scientist — writes about the problem of not-quite-fake news, which is much more pernicious than outright lies: attention-shaping, one-sided "news" accounts that divide their readers into separate "constructed realities."
It's something between conspiracy theories and confirmation bias: a "media hacking" technique that is more virulent than either.
We may be able to learn a lot from spam detection, which works by coordinating across entities and limiting access to the criteria for what exactly counts as spam versus ham. But to be effective, that approach requires us to be opaque about our actions.
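To make the spam-versus-ham idea concrete, here is a minimal sketch of the kind of classifier such systems build on — a naive Bayes scorer over a tiny hand-labeled corpus. The corpus and function names are invented for illustration; real systems pool signals across many providers and keep the exact features secret, which is precisely the transparency trade-off noted above.

```python
from collections import Counter
import math

# Hypothetical hand-labeled training data (real systems use millions of
# examples and far richer features than bag-of-words).
SPAM = ["win cash now", "free cash prize now", "claim your free prize"]
HAM = ["meeting moved to noon", "see you at lunch", "notes from the meeting"]

def word_counts(docs):
    """Count word occurrences across a list of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

def spam_log_odds(message, spam_docs=SPAM, ham_docs=HAM):
    """Return the log-odds that `message` is spam, using naive Bayes
    with add-one smoothing. Positive means leans spam, negative ham."""
    spam_c, ham_c = word_counts(spam_docs), word_counts(ham_docs)
    vocab = set(spam_c) | set(ham_c)
    spam_total, ham_total = sum(spam_c.values()), sum(ham_c.values())
    log_odds = 0.0
    for word in message.split():
        p_spam = (spam_c[word] + 1) / (spam_total + len(vocab))
        p_ham = (ham_c[word] + 1) / (ham_total + len(vocab))
        log_odds += math.log(p_spam / p_ham)
    return log_odds
```

The point of the sketch is the governance problem, not the math: once the word lists (the "criteria") are public, a spammer can trivially reword messages to flip the score, which is why effective filters keep their criteria private.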
There are other models of automated filtering and downgrading for limiting the spread of misleading information (the Facebook News Feed already does plenty of filtering and nudging). But again, who decides what's in or out, and who governs? And who gets to test such an algorithmic system for bias?
If our collective goals include increasing trust in institutions and supporting an informed public, there is a lot of work to be done, and not only by Facebook. A number of actors, including publishers, social networks, content distributors, and forums, are all important in this space. By pointing our fingers at Facebook and looking at the extremes of fake news, I fear we're missing out on an opportunity to actually make a difference.
Fake News Is Not the Only Problem
[Gilad Lotan/Data and Society]