No to fact-checked, yes to rekt: Instagram recommendations "flooded" with violent content

After Instagram users reported that their feeds were "flooded" with graphic and violent content, owner Meta claimed it was an "error." The material "should not have been recommended," a spokesperson for the company told CNBC.

The statement comes after a number of Instagram users took to various social media platforms to voice concerns about a recent influx of violent and "not safe for work" content recommendations.

Some users claimed they saw such content even with Instagram's "Sensitive Content Control" set to its strictest moderation level.

According to Meta policy, the company works to protect users from disturbing imagery and removes content that is particularly violent or graphic. 

CNBC's Dylan Butts wrote that the outlet was able to view "dead bodies, graphic injuries and violent assaults."

Meta said that its systems were demoting too much content based on predictions that it "might" violate standards and that it was in the process of "getting rid of most of these demotions." CEO Mark Zuckerberg also announced that the company would allow more political content and replace its third-party fact-checking program with a "Community Notes" model, similar to the system on Elon Musk's platform X.

Perhaps they didn't realize (or refused to accept) that far-right and violent content are closely entwined, and that allowing one would let the other fly in under its wing. They had the same problem with conservative media and misinformation, if you'll recall: fact-checking tended to hit conservative media harder and so had to be gotten rid of. Which, of course, brought us to this.

Previously:
Instagram to blur self-harm images after Molly Russell's suicide
Instagram's new solution against bullying: Artificial Intelligence, and 'Restrict'