British Technology Companies and Child Protection Officials to Test AI's Ability to Create Abuse Images
Technology companies and child safety agencies will receive authority to assess whether artificial intelligence systems can produce child exploitation material under recently introduced UK legislation.
Substantial Increase in AI-Generated Harmful Content
The announcement came as a child protection monitoring body published findings showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the government will permit designated AI companies and child safety organisations to examine AI models – the foundational technology behind conversational AI and image generators – and verify that they have sufficient safeguards to stop them creating images of child exploitation.
"Fundamentally about stopping exploitation before it occurs," declared Kanishka Narayan, noting: "Experts, under strict conditions, can now identify the risk in AI models promptly."
Tackling Regulatory Challenges
The changes address a legal obstacle: because producing and possessing child sexual abuse material (CSAM) is against the law, AI developers and others have been unable to create such images as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
This legislation aims to avert that problem by enabling experts to stop the creation of those images at source.
Legal Framework
The government is introducing the changes as amendments to the crime and policing bill, which also brings in a prohibition on possessing, creating or sharing AI systems designed to create child sexual abuse material.
Practical Consequences
Recently, the minister visited the London headquarters of a children's helpline, where he heard a simulated call to advisers involving a report of AI-based exploitation. The exchange portrayed a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I learn about young people experiencing blackmail online, it is a cause of intense anger in me and justified anger amongst parents," he said.
Alarming Data
A prominent internet monitoring foundation reported that instances of AI-generated abuse material – where a single report may cover an online page containing numerous files – had risen significantly so far this year.
The volume of category A material – the most serious form of abuse imagery – rose from 2,621 images and videos to 3,086.
- Girls were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
- Depictions of infants and toddlers increased from five in 2024 to 92 in 2025
Watchdog Response
The legislative amendment could "constitute a vital step to ensure AI products are secure before they are launched", said the chief executive of the internet monitoring foundation.
"AI tools have enabled so victims can be targeted repeatedly with just a few clicks, giving offenders the capability to make possibly endless quantities of advanced, photorealistic exploitative content," she continued. "Material which further commodifies victims' suffering, and renders young people, particularly female children, less safe both online and offline."
Counselling Session Information
The children's helpline has also published details of counselling sessions in which AI was mentioned. AI-related risks raised in those conversations included:
- Using AI to rate body size and appearance
- AI assistants discouraging young people from talking to safe adults about abuse
- Being bullied online with AI-generated material
- Online blackmail using AI-manipulated images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned – significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.