2 months ago
Why AI chatbots ‘hallucinate’ and what researchers are doing about it
People increasingly rely on AI for answers, but it can mislead or spread harmful falsehoods. Here’s how experts are tackling the challenge.

AI has made it easier than ever to find information: ask ChatGPT almost anything, and the system swiftly delivers an answer.

Publishing confidence scores alongside a model’s answers could help people think more critically about the veracity of the information these tools provide. My lab has also shown that confidence scores can be used to help AI models generate more accurate answers. Most of these approaches assume that the information needed to evaluate an AI’s accuracy can be found on Wikipedia and other online databases.
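To make the confidence-score idea concrete, here is a minimal, hypothetical sketch in Python of one common proxy: averaging the model’s own per-token log-probabilities over an answer and surfacing the result to the reader. The token strings and log-probability values below are invented for illustration; a real system would read them from a model API that exposes token logprobs alongside the generated text.

```python
import math

# Hypothetical per-token log-probabilities for a generated answer.
# In practice these would come from the model's API, not be hand-written.
answer_tokens = ["The", " Eiffel", " Tower", " is", " in", " Paris", "."]
token_logprobs = [-0.02, -0.41, -0.05, -0.03, -0.01, -0.08, -0.02]

def sequence_confidence(logprobs):
    """Naive confidence proxy: the geometric mean of token probabilities,
    i.e. exp of the average log-probability across the sequence."""
    return math.exp(sum(logprobs) / len(logprobs))

confidence = sequence_confidence(token_logprobs)
print(f"Answer: {''.join(answer_tokens)}")
print(f"Confidence: {confidence:.2f}")  # prints 0.92 for the values above

# A downstream policy might flag low-confidence answers for the user,
# nudging them to verify the claim rather than accept it at face value.
if confidence < 0.7:
    print("Low confidence -- verify this answer against other sources.")
```

Average token probability is only a crude stand-in for the calibrated confidence measures researchers study, but it illustrates the mechanism: the score travels with the answer, so a reader (or a second model) can treat low-confidence output with appropriate skepticism.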

Discover Related

3 months, 1 week ago
We asked top AI models their best market picks for 2025. And were surprised....

6 months, 4 weeks ago
Microsoft introduces new feature which can automatically correct inaccurate AI content

10 months, 3 weeks ago
Google says AI Overviews do not hallucinate after it tells users to put glue on their pizzas

11 months, 1 week ago
AI chatbots feed our own bias back to us: Study

1 year, 1 month ago
Academics warn new science papers are being generated with AI chatbots

1 year, 6 months ago
Chatbot Hallucinations Are Poisoning Web Search

1 year, 8 months ago
Chatbots sometimes make things up. Is AI’s hallucination problem fixable?

1 year, 10 months ago
Navigating AI ‘hallucinations’ and other such irritants in the age of ChatGPT

2 years, 2 months ago
Google warns of hallucinating chatbots; says AI offers made-up answers: Report
