Elon Musk: AI could pose existential risk if it becomes ‘anti-human’
The Independent

Artificial intelligence could pose an existential risk if it becomes “anti-human”, Elon Musk has said ahead of a landmark summit on AI safety.

Mr Musk said: “You have to say, ‘how could AI go wrong?’ Well, if AI gets programmed by the extinctionists, its utility function will be the extinction of humanity.”

Referring to Mr Knight, he added: “They won’t even think it’s bad, like that guy.”

He said: “If you take that guy who was on the front page of the New York Times and you take his philosophy, which is prevalent in San Francisco, the AI could conclude, like he did, where he literally says, ‘there are eight billion people in the world, it would be better if there are none’, and engineer that outcome.

“It is a risk, and if you query ChatGPT, I mean it’s pretty woke.

“People did experiments like ‘write a poem praising Donald Trump’ and it won’t, but you ask, ‘write a poem praising Joe Biden’ and it will.”

When asked whether AI could be engineered in a way which mitigates the safety risks, he said: “If you say, ‘what is the most likely outcome of AI?’ I think the most likely outcome, to be specific about it, is a good outcome, but it is not for sure.”