Chatbots sometimes make things up. Is AI’s hallucination problem fixable?
Associated Press

Spend enough time with ChatGPT and other artificial intelligence chatbots and it doesn’t take long for them to spout falsehoods.

“I don’t think that there’s any model today that doesn’t suffer from some hallucination,” said Daniela Amodei, co-founder and president of Anthropic, maker of the chatbot Claude 2.

“I guess hallucinations in ChatGPT are still acceptable, but when a recipe comes out hallucinating, it becomes a serious problem,” said Ganesh Bagler, a computer scientist who works on AI systems that generate recipes, standing up in a crowded campus auditorium to address OpenAI CEO Sam Altman on the New Delhi stop of the U.S. tech executive’s world tour.

Some experts doubt the problem can be engineered away. “Even if they can be tuned to be right more of the time, they will still have failure modes — and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure,” said Emily Bender, a linguistics professor at the University of Washington.

Those errors are not a huge problem for the marketing firms that have been turning to Jasper AI for help writing pitches, said the company’s president, Shane Orlick.

Others are more hopeful. “I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction,” Microsoft co-founder Bill Gates said in a July blog post detailing his thoughts on AI’s societal risks.