YouTube Details New Rules Requiring Creators to Disclose AI-Generated Content


YouTube is rolling out new rules that will require creators to disclose to viewers when realistic content is made with altered or synthetic media, including generative artificial intelligence (GenAI). 

In most cases, these disclosures will be displayed to viewers as labels in the expanded description, the video-sharing platform said in a Monday (March 18) blog post. For videos dealing with health, news, elections, finance and other sensitive topics, the labels will be shown prominently on the video itself.

“The new label is meant to strengthen transparency with viewers and build trust between creators and their audience,” YouTube said in the post.

The new rules will apply to content that viewers could easily mistake for a real person, place or event, according to the post. This includes the use of a real person's likeness, whether of their face or their voice; the alteration of footage of real events or places; and the generation of realistic-looking but fictional scenes.

The rules will not apply to content that is clearly unrealistic, such as a video showing a unicorn, or to content with only inconsequential changes like color adjustments, lighting filters, background blur or beauty effects, the post said.

They also won’t apply to the use of GenAI for productivity in tasks like generating scripts, content ideas or captions, per the post.

The labels required by these rules will be rolled out in the coming weeks, beginning with the YouTube mobile app and then expanding to desktop and TV applications, according to the blog post.

“And while we want to give our community time to adjust to the new process and features, in the future we’ll look at enforcement measures for creators who consistently choose not to disclose this information,” YouTube said in the post. “In some cases, YouTube may add a label even when a creator hasn’t disclosed it, especially if the altered or synthetic content has the potential to confuse or mislead people.”

YouTube previewed these changes in a November blog post, saying the disclosure requirements and other rules around AI-generated content would be added over the coming months.

In another development in this space, it was reported in October that Adobe and other companies, including Arm, Intel, Microsoft and Truepic, established a symbol that can be attached to content alongside metadata listing its provenance, including whether it was made with AI tools.