OpenAI forms safety and security committee as concerns mount about AI
ChatGPT creator OpenAI said Tuesday it has formed a safety and security committee to evaluate the company’s processes and safeguards as concerns mount over the use of rapidly developing artificial intelligence technology. The committee will present recommendations on critical safety and security decisions for OpenAI projects and operations to the company’s full board, the firm said in a blog post.

The committee’s creation follows the recent departure of researchers who led a team tasked with achieving “scientific and technical breakthroughs to steer and control AI systems much smarter than us,” among them Jan Leike. Upon his departure, Leike said OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

The new safety and security committee is led by board Chair Bret Taylor, directors Adam D’Angelo and Nicole Seligman, and Chief Executive Sam Altman. OpenAI said it will “retain and consult with other safety, security and technical experts to support this work.” The committee’s formation comes as the company begins training what it calls its “next frontier model” for artificial intelligence.

