Chatbots sometimes make things up. Not everyone thinks AI's hallucination problem is fixable
“I guess hallucinations in ChatGPT are still acceptable, but when a recipe comes out hallucinating, it becomes a serious problem,” Bagler said, standing up in a crowded campus auditorium to address Altman on the New Delhi stop of the U.S. tech executive's world tour.

It also helps power automatic translation and transcription services, “smoothing the output to look more like typical text in the target language,” Bender said. “Even if they can be tuned to be right more of the time, they will still have failure modes — and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure.”

Those errors are not a huge problem for the marketing firms that have been turning to Jasper AI for help writing pitches, said the company's president, Shane Orlick.

“I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction,” Gates said in a July blog post detailing his thoughts on AI’s societal risks.


