YouTube mandates labelling of synthetic, AI-generated content
YouTube will require its creators to label synthetic and AI-generated content, as part of an effort to support responsible AI innovation. Viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic, YouTube said in a blog post.
“We’re introducing a new tool in Creator Studio requiring creators to disclose to viewers when realistic content – content a viewer could easily mistake for a real person, place, scene, or event – is made with altered or synthetic media, including generative AI. We’re not requiring creators to disclose content that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance,” the blog post said.
The new feature is meant to strengthen transparency with viewers and build trust between creators and their audience, it added.
A label will appear in the expanded description, but for videos that touch on more sensitive topics like health, news, elections, or finance, YouTube will also show a more prominent label on the video itself.
YouTube said the labels will roll out across all YouTube surfaces and formats in the weeks ahead, starting with the mobile app and expanding to desktop and TV.
The video platform said it will give creators time to adjust to the new process, but will consider enforcement measures in the future for creators who consistently choose not to disclose this information.
“In some cases, YouTube may add a label even when a creator hasn’t disclosed it, especially if the altered or synthetic content has the potential to confuse or mislead people,” YouTube said.
YouTube is also working towards an updated privacy process for people to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.