Generative Artificial Intelligence (AI) has taken off over the past year, with powerful AI chatbots, image and video generators, and other AI tools flooding the market. The new technology has also brought new challenges around responsible AI use, misinformation, impersonation, copyright infringement and more. Now, YouTube has announced a new set of guidelines for AI-generated content on its platform to tackle these issues. Over the coming months, YouTube will roll out updates that inform viewers about AI-generated content, require creators to disclose their use of AI tools, and remove harmful synthetic content where necessary.
YouTube announced a series of new policies related to AI content on the platform via its blog, which describes its approach to “responsible AI innovation.” According to the popular video sharing and streaming platform, in the coming months it will inform viewers when the content they are watching is synthetic. As part of the changes, YouTube creators will also have to disclose whether their content is synthetic or altered using AI tools. This will be achieved in two ways: a new label added to the description panel clarifying the synthetic nature of the content, and a second, more prominent label on the video player itself for certain sensitive topics.
The platform also said it would take action against creators who do not follow its new guidelines for AI-generated content. “Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other sanctions,” the blog said.
In addition, YouTube will remove some synthetic media from its platform, whether or not it is labeled. This includes videos that do not adhere to YouTube’s Community Guidelines. Creators and artists will also be able to request the removal of AI-generated content that impersonates an identifiable person using their face or voice. Removals will also apply to AI-generated music that mimics an artist’s singing or rapping voice, YouTube said. These AI guidelines and tools will roll out on the platform in the coming months.
YouTube will also use generative AI techniques to detect content that violates its Community Guidelines, helping the platform identify potentially harmful and offensive content much faster. The Google-owned platform also said it would build safeguards to prevent its own AI tools from generating harmful content.
Earlier this month, YouTube launched “a global effort” to crack down on ad-blocking extensions, leaving users with no choice but to subscribe to YouTube Premium or allow ads on the site. “Using ad blockers violates YouTube’s terms of service. We’ve launched a global effort to encourage viewers with ad blocking enabled to allow ads on YouTube or try YouTube Premium for an ad-free experience. Ads support a diverse ecosystem of creators globally and allow billions to access their favorite content on YouTube,” the platform said in its statement.