ChatGPT-maker OpenAI says it is doubling down on preventing AI from 'going rogue'
ChatGPT's creator OpenAI plans to invest significant resources and form a new research team to ensure its artificial intelligence remains safe for humans, eventually using AI to supervise itself, the company said on Wednesday.

The team's goal is to build a "human-level" AI alignment researcher and then scale it with vast amounts of compute power. OpenAI says this means first training AI systems using human feedback, then training AI systems to assist human evaluation, and finally training AI systems to carry out alignment research themselves.

In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society.