Navigating AI ‘hallucinations’ and other such irritants in the age of ChatGPT
A few weeks ago, I was preparing for an event where I had to talk about the history of butter in India.

Convincing, not accurate

First, it is important to realise that the original design goal of an LLM is to generate convincing human language, not factually accurate human language. As Ganesh Bagler, associate professor at the Infosys Centre for Artificial Intelligence at Indraprastha Institute of Information Technology, Delhi, points out, “While large language models benefit from patterns mined from an ocean of data, these statistical parrots can occasionally churn out nonsense.” In our butter example, the statistical parrot named ChatGPT, which has no deep, contextual understanding of cows, dairy, and monetary economics, made a connection that an adult human with a college degree would have filtered out for not making sense.

Today, I asked Google’s Bard for a Thai recipe, and what followed was a hilarious mix of outright lying and some serious hallucination, including making up non-existent people and books they had written. I asked for a Thai stir-fry recipe from a Thai person, and it produced a completely fake chef name, a list of books the chef had supposedly written, and even a bio assembled from bits and pieces of other real people’s bios.

Every update to these bots improves their ability to provide clearer data contexts, refines their self fact-checking, and introduces new ways for users to guide and improve AI interactions.