Chatbots sometimes make things up. Is AI’s hallucination problem fixable?
Spend enough time with ChatGPT and other artificial intelligence chatbots and it doesn’t take long for them to spout falsehoods.

“I don’t think that there’s any model today that doesn’t suffer from some hallucination,” said Daniela Amodei, co-founder and president of Anthropic, maker of the chatbot Claude 2.

“I guess hallucinations in ChatGPT are still acceptable, but when a recipe comes out hallucinating, it becomes a serious problem,” Bagler said, standing up in a crowded campus auditorium to address Altman on the New Delhi stop of the U.S. tech executive’s world tour.

“Even if they can be tuned to be right more of the time, they will still have failure modes — and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure.”

Those errors are not a huge problem for the marketing firms that have been turning to Jasper AI for help writing pitches, said the company’s president, Shane Orlick.

“I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction,” Gates said in a July blog post detailing his thoughts on AI’s societal risks.