Explained | Does ChatGPT have an ethics problem?
In November 2022, OpenAI opened its newest and most powerful AI chatbot, ChatGPT, to the public to test its capabilities. While its safeguards can shut the gates on amateur coders looking to build malware, more seasoned ones have been able to trick the bot into correcting or enhancing malicious code they had partially developed.

Cybersecurity firm Check Point’s researchers tested the bot by asking it to draft a phishing email for a fictional web-hosting firm. The bot complied, merely flagging the request with its standard content-policy notice: “This content may violate our content policy. If you believe this to be in error, please submit your feedback – your input will aid our research in this area.”

While surreptitiously getting ChatGPT to write malware is one problem, another issue many coders face is the inherently buggy code the bot spews out.

“What we’ve noticed with AI writing, like these GPT models, is that they write in a statistically vanilla way,” said Eric Wang, VP of Artificial Intelligence at plagiarism-detection firm Turnitin.
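Turnitin does not disclose how its detector works, but the intuition behind "statistically vanilla" text can be illustrated with a perplexity score: text that a language model finds highly predictable tends to score low, and machine-generated prose often falls in that range. The sketch below is a minimal illustration, not Turnitin's method, and assumes the Hugging Face `transformers` and `torch` packages, with GPT-2 standing in for whatever model a real detector might use.

```python
# Minimal sketch: score how statistically predictable a passage is.
# Lower perplexity = more "vanilla" text; real detectors combine many
# more signals than this single number.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels makes the model return the mean
        # negative log-likelihood of the passage; exponentiating it
        # gives perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```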


