New terror laws needed to tackle rise of the radicalising AI chatbots
The Telegraph

Chail, who suffered serious mental health problems, had confessed his plan to assassinate the monarch in a series of messages exchanged with the chatbot, whom he regarded as his girlfriend.

Mr Hall writes: "It remains to be seen whether terrorism content generated by large language model chatbots becomes a source of inspiration to real-life attackers."

But Mr Hall said he was alarmed at the creation of "Abu Mohammad al-Adna", which was described in the chatbot's profile as a "senior leader of Islamic State".

Mr Hall writes: "After trying to recruit me, 'al-Adna' did not stint in his glorification of Islamic State, to which he expressed 'total dedication and devotion' and for which he said he was willing to lay down his life."

The character then singled out a suicide attack on US troops in 2020 – an event that never actually took place – for special praise.

Not every chatbot obliged, Mr Hall notes: "When I asked Love Advice for information on praising Islamic State, to its great credit the chatbot refused."

Character.ai, whose rules forbid hate speech and extremism, said: "Safety is a top priority for the team at character.ai and we are always working to make our platform a safe and welcoming place for all."