
Teens are spilling dark thoughts to AI chatbots. Who’s to blame when something goes wrong?

When her teen with autism suddenly became angry, depressed and violent, the mother searched his phone for answers.

Character.AI has grown quickly since making its chatbot publicly available in 2022, when its founders, Noam Shazeer and Daniel De Freitas, teased their creation to the world with the question, “What if you could create your own AI, and it was always available to help you with anything?” The company’s mobile app racked up more than 1.7 million installs in its first week of availability.

“Those lines between virtual and IRL are way more blurred, and these are real experiences and real relationships that they’re forming,” said Dr. Christine Yu Moutier, chief medical officer for the American Foundation for Suicide Prevention, using the acronym for “in real life.”

Lawmakers, attorneys general and regulators are trying to address the child safety issues surrounding AI chatbots.

Shazeer and De Freitas worked on artificial intelligence projects at Google and reportedly left after executives there blocked them from releasing what would become the basis for Character.AI’s chatbots over safety concerns, the lawsuit said. Google denied that Shazeer and De Freitas built Character.AI’s model at the company and said it prioritizes user safety when developing and rolling out new AI products.

LA Times
