British Tech Companies and Child Protection Agencies to Test AI's Ability to Create Abuse Content
Technology companies and child safety organizations will be granted authority to assess whether artificial intelligence tools can generate child exploitation material under new UK legislation.
Substantial Increase in AI-Generated Harmful Content
The announcement came as a child protection monitoring body published findings showing that reports of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the amendments, the authorities will permit designated AI developers and child protection groups to examine AI models – the foundational systems behind chatbots and visual AI tools – to ensure they have adequate safeguards to prevent them from creating depictions of child sexual abuse.
"This is fundamentally about stopping abuse before it occurs," stated the minister for AI and online safety, noting: "Experts, under strict protocols, can now identify the danger in AI systems early."
Tackling Legal Obstacles
The amendments address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and other parties could not generate such content even as part of a testing process. Previously, authorities had to wait until AI-generated CSAM was published online before addressing it.
The new law aims to prevent that problem by enabling testers to halt the creation of such material at source.
Legal Structure
The amendments are being introduced by the authorities as revisions to the criminal justice legislation, which also implements a prohibition on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Real-World Impact
Recently, the minister visited the London headquarters of a children's helpline and listened to a mock-up of a conversation with advisors involving an account of AI-based exploitation. The call depicted an adolescent requesting help after being blackmailed using an explicit deepfake of himself, created with AI.
"When I learn about children experiencing blackmail online, it causes extreme frustration in me and justified anger amongst parents," he said.
Alarming Data
A prominent online safety organization reported that cases of AI-generated abuse material – such as webpages that may contain numerous images – had more than doubled so far this year.
Instances of the most severe category of material – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.
- Girls were predominantly targeted, making up 94% of illegal AI images in 2025
- Portrayals of infants to toddlers increased from five in 2024 to 92 in 2025
Sector Response
The law change could "constitute a crucial step to ensure AI tools are secure before they are launched," stated the head of the internet monitoring organization.
"Artificial intelligence systems have made it so survivors can be victimised repeatedly with just a few clicks, giving offenders the ability to make possibly endless amounts of advanced, lifelike child sexual abuse material," she continued. "Content which additionally commodifies survivors' trauma, and makes children, particularly girls, less safe both online and offline."
Support Session Data
Childline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in these conversations include:
- Using AI to assess body size, shape and appearance
- AI assistants discouraging children from consulting safe adults about abuse
- Being bullied online with AI-generated content
- Online blackmail using AI-faked pictures
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and related topics were discussed – significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.