Navigating AI ‘hallucinations’ and other such irritants in the age of ChatGPT
1 year, 6 months ago


The Hindu  

A few weeks ago, I was preparing for an event where I had to talk about the history of butter in India.

Convincing, not accurate

First, it is important to realise that the original design goal of an LLM is to generate convincing human language, not factually accurate human language. As Ganesh Bagler, associate professor at the Infosys Centre for Artificial Intelligence at Indraprastha Institute of Information Technology, Delhi, points out, “While large language models benefit from patterns mined from an ocean of data, these statistical parrots can occasionally churn out nonsense.” And in our butter example, the statistical parrot named ChatGPT, which has no deep, contextual understanding of cows, dairy, or monetary economics, made a connection that an adult human with a college degree would have filtered out for not making sense.

Today, I asked Google’s Bard for a Thai recipe, and what followed was a hilarious mix of outright lying and some serious hallucination: I asked for a Thai stir-fry recipe from a Thai person, and it made up a completely fake name for the chef, a list of books they had supposedly written, and even a bio stitched together from bits and pieces of other real people’s bios.

Every update to these bots improves their ability to provide clearer data contexts, refines their self fact-checking, and introduces new ways for users to guide and improve AI interactions.
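To make the last point concrete, here is a minimal sketch of one way a user can guide such an interaction: instructing the model, up front, to admit uncertainty rather than invent names or books. It assumes the OpenAI Python client; the model name, the question, and the exact wording of the instructions are illustrative choices, not anything prescribed in this article.

```python
# A minimal sketch of prompting a chatbot to admit uncertainty rather than
# hallucinate. Assumes the OpenAI Python client is installed and an API key
# is set in the OPENAI_API_KEY environment variable. The model name and the
# instruction wording are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "Answer only from facts you are confident about. "
    "If you are not sure, say 'I don't know' instead of guessing, "
    "and never invent names, books, or biographical details."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Who wrote the first Thai stir-fry cookbook?"},
    ],
    temperature=0,  # lower randomness tends to reduce, but not eliminate, made-up details
)

print(response.choices[0].message.content)
```

Steering a model this way reduces, but does not eliminate, fabricated answers; underneath, it is still optimising for plausible-sounding text.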

