YouTube plans to remove content from, and penalize, creators who fail to disclose their use of AI in realistic-looking productions. Videos that do carry a disclosure will be prominently labeled, the company says. From YouTube's announcement:
We believe it's in everyone's interest to maintain a healthy ecosystem of information on YouTube. We have long-standing policies that prohibit technically manipulated content that misleads viewers and may pose a serious risk of egregious harm. However, AI's powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they're unaware that the video has been altered or is synthetically created.
To address this concern, over the coming months, we'll introduce updates that inform viewers when the content they're seeing is synthetic. Specifically, we'll require creators to disclose when they've created altered or synthetic content that is realistic, including using AI tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do.
On YouTube one may now find wojaks listlessly reciting computer-generated drivel, complete with hallucinations, repetitions and all the usual turds, in their featureless dungeonlike garages. Maybe a glowing LED pole angled in a corner behind them. YouTube can't fix the problem, because the problem is an untouchable aspect of its revenue generation, so it will instead announce some superficial rules to be applied arbitrarily to PR ends, as it and the world it inhabits are sucked inexorably into the gyre of shit.