What are the chances of an AI apocalypse?
Hindustan Times

In 1945, just before the test of the first nuclear bomb in the New Mexico desert, Enrico Fermi, one of the physicists who had helped build it, offered his fellow scientists a wager. In May a group of luminaries in the field signed a one-sentence open letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” But just how worried is it rational to be?

On one side were subject-matter, or “domain”, experts in nuclear war, bio-weapons, AI and even extinction itself. One reason for AI’s strong showing, says Dan Mayland, a superforecaster who participated in the study, is that it acts as a “force multiplier” on other risks such as nuclear weapons. Initiatives like a Moscow-Washington “hotline”, agreements to inspect each other’s weapons and treaties designed to limit the sizes of stockpiles all helped to cut the risk of nuclear war.