AI action plans should be slowed until safeguards for children in place – NSPCC
The NSPCC said generative AI is already being used to create sexual abuse images of children, and urged the Government to consider adopting specific safeguards into legislation to regulate AI.

“The NSPCC and the majority of the public want tech companies to do the right thing for children and make sure the development of AI doesn’t race ahead of child safety.

“We have the blueprints needed to ensure this technology has children’s wellbeing at its heart. Now both Government and tech companies must take the urgent action needed to make generative AI safe for children and young people.”

The international AI Action Summit is due to take place in Paris next month.

Derek Ray-Hill, interim chief executive at the Internet Watch Foundation, which seeks out and helps remove child sexual abuse imagery from the internet, said existing laws, as well as future AI legislation, must be made robust enough to ensure children are protected from being exploited by the technology.

“AI companies must prioritise the protection of children and the prevention of AI abuse imagery above any thought of profit,” Mr Ray-Hill said.