Alex Feerst, Medium's head of trust and safety, conducted a long, wide-ranging interview with senior content moderation staffers with experience at Dropbox, Google, Facebook, Reddit, Pinterest, and many unnamed platforms; the interview is very frank and reveals a group of people with a deep, real-world commitment to protecting users as well as defending free speech.
This raises an important question: why are the platforms so terrible at moderation? How is it that they get so much wrong, allowing so much harassment and also blocking so much legitimate speech?
The answer is there, if you read between the lines. On the one hand, the platforms are so big that even small error rates affect lots of people: a 1% error rate in moderating 500m tweets/day is 5m wrongly handled tweets/day. On the other hand, the platforms give preference to product teams ("product people want to live in optimism, in an idealized vision of how people will use the product, not the ways that people will predictably break it") over the moderation and safety teams.
Y.X. Yang: Sometimes other people in the company ask, “Did you know this was going to happen?” Yes, and we told the product team a year ago that if they did X, Y would happen, and because of the priorities at the time, the product manager went and did the thing anyway. And our response is, “But that’s okay, we’ll help you through this mess.”
Adam: “Six months ago we told you, ‘Don’t pave the city with banana peels.’ You decided, ‘Let’s see what happens if we pave the city with banana peels.’ We are now here to clean up the injuries.”
Vanessa: As with many roles focused on the larger good, it’s important to catch yourself when you’re walking around the company self-righteously and being like, “You’re welcome for being your conscience and heart and soul and working hard on the things that you’re not paying attention to.”
Your Speech, Their Rules: Meet the People Who Guard the Internet [Bärí A. Williams/Medium One Zero]
(via Four Short Links)
(Image: Redi-Rock International, CC-BY)