"Friendly" apps are good at maximizing engagement, but their context-blindness is a cesspit of algorithmic cruelty

Designers use metrics and A/B testing to maximize engagement with their products, seeking out the phrases that trigger emotional responses in users, like a smart scale that congratulates you on losing weight. But these systems are context-blind, so they can't recognize the people who will be traumatized by those same messages (say, a woman who's just miscarried late in her pregnancy, being congratulated on her "weight loss").

The cheap response is, "Well, it's just a dumb machine with a roster of canned responses," but remember: the reason the designers have chosen those canned responses is that, experimentally, they work, creating emotional resonances with users. It's not unreasonable to think that they'll work every bit as well at triggering negative feelings as they do at triggering positive ones.
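To make that mechanism concrete, here's a minimal sketch of how such copy gets chosen. Everything in it is hypothetical (the variant list, the record_engagement helper, the simulated tap rates); the point is structural: the "winner" is whichever phrase moves an aggregate engagement number, and nothing in the loop represents the recipient's circumstances.

```python
import random
from collections import defaultdict

# Hypothetical A/B test over canned notification copy. Each variant is
# scored only by how often recipients tap it; nothing here knows *why*
# a phrase resonates, so emotionally loaded copy wins regardless of
# whose emotions it is hitting.

VARIANTS = [
    "Congratulations on your weight loss!",
    "You're making progress toward your goal.",
    "New reading recorded.",
]

shown = defaultdict(int)  # times each variant was displayed
taps = defaultdict(int)   # times each variant was tapped

def record_engagement(variant: str, tapped: bool) -> None:
    shown[variant] += 1
    if tapped:
        taps[variant] += 1

def best_variant() -> str:
    # The "winner" is simply the highest tap-through rate so far.
    return max(VARIANTS, key=lambda v: taps[v] / shown[v] if shown[v] else 0.0)

# Crude simulation standing in for real users: assume the emotionally
# charged phrase gets more taps on average, so the experiment picks it.
for _ in range(10_000):
    v = random.choice(VARIANTS)
    tap_rate = 0.12 if v.startswith("Congratulations") else 0.05
    record_engagement(v, random.random() < tap_rate)

print("experiment winner:", best_variant())
```

The same selection pressure that surfaces the warmest message for most users surfaces the cruelest one for the outliers.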

Some examples: Dan Hon's scale sends condolences to his infant son for having experienced weight gain and promises to help him get back to his "target weight." Sally Rooney looked at some horrifying Tumblr posts about the rise of Nazism, and the next morning, her phone beeped and pushed this notification: "Beep beep! #neo-nazis is here!"

The canonical example: in late 2014, Facebook's "Year in Review" feature showed Eric Meyer a picture of his young daughter, who had died of cancer that year, surrounded by confetti, balloons, and dancing cartoon people.

Algorithms want us to repeat ourselves, or, failing that, to act like statistically probable other people. They use blind, statistical refinement techniques to drive us to those behavioral outcomes. The methods work, uncovering the nudges that will get past our rational responses, getting under our emotional skins. As a result, when they go wrong, they go terribly wrong.

Now on to the copy. As you might guess, no one at Tumblr sat down and wrote that terrible sentence. They wrote a text string: a piece of canned copy into which any topic could be inserted automatically: "Beep beep! #[trending tag] is here!" (In fact, another Tumblr user shared a version of the notification he received: "Beep beep! #mental-illness is here!")
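The string-template mechanism itself is nearly a one-liner. This is not Tumblr's actual code, just a sketch of context-blind interpolation; the SENSITIVE_TAGS blocklist at the end is a hypothetical patch, and a crude one, since a list can only catch the failures someone already imagined.

```python
# Context-blind canned copy: the template has no idea what it's cheering for.
TEMPLATE = "Beep beep! #{tag} is here!"

def push_notification(tag: str) -> str:
    return TEMPLATE.format(tag=tag)

print(push_notification("cute-dogs"))       # Beep beep! #cute-dogs is here!
print(push_notification("neo-nazis"))       # Beep beep! #neo-nazis is here!
print(push_notification("mental-illness"))  # Beep beep! #mental-illness is here!

# Hypothetical mitigation: a blocklist of tags that should never get the
# chirpy treatment. It helps, but it restores none of the missing context.
SENSITIVE_TAGS = {"neo-nazis", "mental-illness"}

def push_notification_guarded(tag: str) -> str:
    if tag in SENSITIVE_TAGS:
        return f"#{tag} is trending."  # neutral copy, no exclamation
    return TEMPLATE.format(tag=tag)
```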

Text strings like these are used all the time in software to tailor a message to its context—like when I log into my bank account and it says, "Hello, Sara" at the top. But in the last few years, technology companies have become obsessed with bringing more "personality" into their products, and this kind of copy is often the first place they do it—making it cute, quirky, and "fun." I'll even take a little blame for this. In my work as a content strategy consultant, I've helped lots of organizations develop a voice for their online content, and encouraged them to make their writing more human and conversational. If only I'd known that we would end up with so many inappropriate, trying-too-hard, chatty tech products.

The Problem With Your Chatty Apps [Sara Wachter-Boettcher/Wired]