YouTube to require creators to disclose ‘realistic’ AI-generated videos
LA Times

YouTube said it plans to begin enforcing a new policy next year requiring creators to self-identify videos created with generative AI; such videos will be labeled accordingly on the platform. YouTube, the video platform owned by Alphabet’s Google, will soon require video makers to disclose when they’ve uploaded manipulated or synthetic content that looks realistic, including video created using artificial intelligence tools.

“This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” Jennifer Flannery O’Connor and Emily Moxley, YouTube vice presidents of product management, said in a company blog post Tuesday.

The company also said that YouTube’s community guidelines, which prohibit digitally manipulated content that may pose a serious risk of public harm, already apply to all video content uploaded to the platform.

YouTube will additionally let people request the removal of AI-generated or other synthetic content that simulates an identifiable individual. The company said not all such content would be automatically removed once a request is placed; rather, it would “consider a variety of factors when evaluating these requests.” If the removal request references video that includes parody or satire, for instance, or if the person making the request can’t be uniquely identified, YouTube could decide to leave the content up on its platform.