OpenAI Employees Accuse The Company Of Neglecting Safety & Security Protocols To Speed Through Innovations: Report
OpenAI has been at the forefront of artificial intelligence and advanced Large Language Models for quite some time. There have already been instances of OpenAI employees resigning from their positions and citing the company's disregard for safety and security protocols as the reason. Now another report has surfaced claiming that OpenAI is rushing through development of new models while neglecting those safety and security protocols.

Three OpenAI employees anonymously told The Washington Post that the team had been under pressure to speed through a new testing protocol specifically designed to "prevent the AI system from causing catastrophic harm, to meet a May launch date set by OpenAI's leaders." The report also highlighted a similar incident that occurred before the launch of GPT-4o, quoting one source as saying, "We basically failed at the process." OpenAI employees have previously raised concerns about the company's apparent neglect of safety and security measures.