Meta and X questioned by lawmakers over lack of rules against AI-generated political deepfakes
Associated Press

Deepfakes generated by artificial intelligence are having their moment this year, at least when it comes to making it look, or sound, like celebrities did something uncanny.

Two Democratic members of Congress sent a letter Thursday to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino expressing “serious concerns” about the emergence of AI-generated political ads on their platforms and asking each to explain any rules they’re crafting to curb the harms to free and fair elections.

Google was the first big tech company to say it would impose new labels on deceptive AI-generated political ads. Sen. Amy Klobuchar of Minnesota pointed to that move in pressing other platforms to follow: “‘Why aren’t you doing this?’ It’s clearly technologically possible.”

The letter to the executives from Klobuchar and U.S. Rep. Yvette Clarke of New York warns: “With the 2024 elections quickly approaching, a lack of transparency about this type of content in political ads could lead to a dangerous deluge of election-related misinformation and disinformation across your platforms – where voters often turn to learn about candidates and issues.”

X, formerly Twitter, and Meta, the parent company of Facebook and Instagram, didn’t respond to requests for comment Thursday.

A House bill introduced by Clarke earlier this year would amend a federal election law to require labels when election ads contain AI-generated content.

“I think that folks have a First Amendment right to put whatever content on social media platforms that they’re moved to place there,” Clarke said in an interview Thursday.

Meta doesn’t have a rule specific to AI-generated political ads but has a policy restricting “faked, manipulated or transformed” audio and imagery used for misinformation.