Learning caution in the age of AI
Online experiment reveals the public is too trusting of tech that demonstrates potentially dangerous flaws, Wang Qian reports.

As AI increasingly becomes a part of everyday life, Xiang's experiment shows that, without content being labeled as AI-generated, such accounts could infiltrate social media, posing as real humans and interacting with netizens.

"To be honest, using AI to assist in writing a graduation thesis is quick and easy, but modifying the sentences to reduce the unnatural language patterns that indicate the involvement of AI is quite annoying," Xiang says, adding that he began wondering whether there was a model that could eliminate AI traces.

By feeding an AI model several thousand questions and answers from Zhihu, which offers the most open-source datasets among Chinese social media platforms, he created an AI account called Ai-Qw on July 5.

"It triggered reflections on how AI would change communication on social media if AI-generated content dominated the internet and most people believed it came from another human," Xiang says.
