ChatGPT maker OpenAI releases tool to identify AI-written text
The company behind the AI chatbot ChatGPT has released a new classification tool to help users identify text written with AI, though it cautioned that the classifier is not “fully reliable.” OpenAI explained the abilities and limitations of the newly trained classifier in a blog post on January 31.

The classifier is meant to address rising concerns that the currently free version of ChatGPT could be exploited to cheat on exams, impersonate humans, or spread misinformation. Those concerns include fears that students could turn in AI-generated work as their own or unlawfully use the chatbot to pass qualifying examinations.

“In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time,” OpenAI said in its statement.