
LaMDA and the Sentient AI Trap

The focus on sentience also misses the point, says Gebru. “Quite a large gap exists between the current narrative of AI and what it can actually do,” says Giada Pistilli, an ethicist at Hugging Face, a startup focused on language models. “This narrative provokes fear, amazement, and excitement simultaneously, but it is mainly based on lies to sell products and take advantage of the hype.” The consequence of speculation about sentient AI, she says, is an increased willingness to make claims based on subjective impression instead of scientific rigor and proof. While every researcher has the freedom to research what they want, she says, “I just fear that focusing on this subject makes us forget what is happening while looking at the moon.”

What Lemoine experienced is an example of what author and futurist David Brin has called the “robot empathy crisis.” At an AI conference in San Francisco in 2017, Brin predicted that within three to five years, people would claim AI systems were sentient and insist that they had rights, a prediction he based on advances in language models. The LaMDA incident is part of a transition period, Brin says, in which “we're going to be more and more confused over the boundary between reality and science fiction.”

Wired
