Everyone Wants to Regulate AI. No One Can Agree How
Wired

As the artificial intelligence frenzy builds, a sudden consensus has formed. While there’s a very real question of whether this is like closing the barn door after the robotic horses have fled, not only government types but also the people who build AI systems are suggesting that some new laws might help keep the technology from going bad.

Though many in the technology world have suggested since the dawn of ChatGPT that legal guardrails might be a good idea, the most emphatic plea came from AI’s most influential avatar of the moment, OpenAI CEO Sam Altman. Only days before his testimony, Altman was among a group of tech leaders summoned to the White House to hear Vice President Kamala Harris warn of AI’s dangers and urge the industry to help find solutions.

In case readers mistake the word blueprint for mandate, the paper is explicit about its limits: “The Blueprint for an AI Bill of Rights is non-binding,” it reads, “and does not constitute US government policy.” This AI bill of rights is less controversial, and less binding, than the one in the US Constitution, with all that thorny stuff about guns, free speech, and due process.