AI-generated child sex abuse content increasingly found on open web – watchdog
AI-generated child sexual abuse content is increasingly being found on publicly accessible areas of the internet, exposing it to more people, an internet watchdog has warned.

Derek Ray-Hill, interim chief executive of the Internet Watch Foundation (IWF), said: “People can be under no illusion that AI-generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online.

“We urgently need to bring laws up to speed for the digital age, and see tangible measures being put in place that address potential risks.”

“While we will continue to relentlessly pursue these predators and safeguard victims, we must see action from tech companies to do more under the Online Safety Act to make their platforms safe places for children and young people” – Becky Riggs, National Police Chiefs’ Council

Many campaigners have called for strict regulation of the training and development of AI models, to ensure they do not generate harmful or dangerous content, and for AI platforms to refuse any request or query that could result in such material being created – a system some AI platforms already have in place.

Assistant Chief Constable Becky Riggs, child protection and abuse investigation lead at the National Police Chiefs’ Council, said: “The scale of online child sexual abuse and imagery is frightening, and we know that the increased use of artificial intelligence to generate abusive images poses a real-life threat to children.