AI chatbots feed our own bias back to us: Study
Artificial intelligence chatbots are increasingly inclined to echo the views of the people who use them, according to US researchers who found that the platforms tailor the information they share depending on who is asking.

"Because people are reading a summary paragraph generated by AI, they think they're getting unbiased, fact-based answers," said Ziang Xiao of Johns Hopkins University in Baltimore.

But such assumptions are largely wrong, Xiao and colleagues argue, based on tests in which 272 participants were asked to use standard internet searches or AI to help them write about US news topics such as health care and student loans.

The "echo chamber" effect is louder when people seek information from a chatbot built on large language models than via conventional searches, the team found.

"So really, people are getting the answers they want to hear," Xiao said ahead of presenting the team's findings at the Association for Computing Machinery's CHI Conference on Human Factors in Computing Systems.