Twitter now allows "non-toxic slurs" as long as the slurs are not hate speech

This week, in Elon Musk's continuing mission to turn the entire world into an electrified dystopian hellscape, Twitter Safety has introduced a new concept called "non-toxic slurs." Unlike hate speech, which is objectively bad, these non-toxic slurs are neither hateful nor acts of speech, but rather, umm, derogatory insults aimed at a particular group of people, I guess. Which is different, you see. Because they're non-toxic.

If you actually want specifics, this distinction is supposedly based on an analysis from Sprinklr, a company specializing in customer experience management. That analysis was, in turn, based on Twitter data that included a list of 300 English-language slurs and the 550,000 English-language tweets that used those words between January and February 2023. The company explains:

Sprinklr's new toxicity model analyzes data and categorizes content as "toxic" if it is used to demean an individual, attack a protected category or dehumanize marginalized groups. Integrating factors such as reclaimed language and context allowed our model to eliminate false positives and negatives as well. The model uses AI to determine intent and context around the flagged keywords, to help brands understand what is really toxic.
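For the curious, the pipeline Sprinklr describes (flag tweets containing listed keywords, then run an AI pass to judge intent and context) boils down to something like the toy sketch below. To be clear, this is my own illustration, not Sprinklr's actual code: the names (SLUR_LIST, classify_context, score_tweet) are invented, and the "context model" here is a deliberately dumb stand-in for whatever machine learning they are actually running.

```python
import re
from dataclasses import dataclass

# Stand-in for the ~300-word slur list from the study (obviously not the real one).
SLUR_LIST = {"slur_a", "slur_b"}

@dataclass
class Verdict:
    flagged: bool  # tweet contains a listed keyword at all
    toxic: bool    # keyword judged to demean, attack, or dehumanize
    label: str     # "clean", "toxic", or "reclaimed/casual"

def classify_context(tokens: set[str]) -> str:
    """Toy stand-in for the AI intent/context model. A real system would run
    an ML classifier; this one just treats second-person targeting ("you")
    as a crude proxy for a directed attack."""
    return "toxic" if "you" in tokens else "reclaimed/casual"

def score_tweet(text: str) -> Verdict:
    tokens = set(re.findall(r"[a-z_']+", text.lower()))
    # Stage 1: cheap keyword flagging against the list.
    if not tokens & SLUR_LIST:
        return Verdict(flagged=False, toxic=False, label="clean")
    # Stage 2: context/intent pass on flagged tweets only.
    label = classify_context(tokens)
    return Verdict(flagged=True, toxic=(label == "toxic"), label=label)

print(score_tweet("hey slur_a, long time no see"))  # flagged, non-toxic
print(score_tweet("you absolute slur_a"))           # flagged, toxic
```

The point of the two-stage design is that the keyword match alone produces the false positives Sprinklr is talking about (reclaimed usage, casual greetings), which the second pass is supposed to weed out.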

The Sprinklr study concluded that tweets containing slurs typically received fewer impressions, and that roughly 85% of the tweets using those slurs did so "in non-toxic contexts like reclaimed speech or casual greetings."

The Muskrat himself explained why this apparently matters:

So I guess the argument here is that Twitter's existing algorithms are already censoring, er, shadow-banning, sorry, "reducing the reach of" tweets containing toxic slurs, but not of tweets where people use a slur in a supposedly casual or affectionate way, which Twitter does not believe it should be policing, because free speech. And that this, in turn, makes Twitter safer and reduces other hate speech, because of reasons.