British Technology Companies and Child Safety Agencies to Examine AI's Ability to Generate Exploitation Content
Technology companies and child safety organizations will receive authority to evaluate whether AI systems can generate child abuse material under recently introduced British legislation.
Substantial Increase in AI-Generated Illegal Material
The announcement coincided with findings from a safety monitoring body showing that cases of AI-generated child sexual abuse material have risen sharply in the past year, from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the government will permit approved AI companies and child safety groups to examine AI models – the foundational technology for conversational AI and visual AI tools – and verify they have adequate safeguards to prevent them from creating images of child sexual abuse.
Kanishka Narayan said the measure was "fundamentally about preventing exploitation before it happens," adding: "Experts, under strict conditions, can now detect the danger in AI systems promptly."
Tackling Regulatory Obstacles
The changes have been introduced because creating and possessing CSAM is illegal, meaning AI developers and other parties cannot generate such images as part of a testing regime. As a result, officials could previously act only after AI-generated CSAM had been published online.
The law is aimed at averting that problem by allowing the creation of such material to be stopped at its source.
Legal Framework
The government is adding the changes as amendments to the crime and policing bill, which also introduces a ban on possessing, producing or distributing AI systems designed to create exploitative content.
Practical Consequences
Recently, the official toured the London headquarters of a children's helpline and heard a simulated call to advisors featuring an account of AI-based exploitation. The interaction portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about children facing blackmail online, it is a source of extreme frustration for me and of rightful concern amongst parents," he stated.
Alarming Statistics
A prominent online safety foundation stated that instances of AI-generated abuse material – such as webpages that may include multiple images – had more than doubled so far this year.
Instances of category A material – the gravest form of abuse – increased from 2,621 images or videos to 3,086.
- Female children were predominantly victimized, making up 94% of prohibited AI depictions in 2025
- Portrayals of infants to toddlers increased from five in 2024 to 92 in 2025
Sector Response
The law change could "constitute a crucial step to ensure AI tools are secure before they are launched," stated the chief executive of the online safety foundation.
"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few simple actions, giving offenders the ability to make potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which further exploits victims' suffering, and renders children, particularly girls, more vulnerable both online and offline."
Support Session Data
Childline also published data from counselling sessions in which AI was mentioned. AI-related risks discussed in the conversations include:
- Using AI to evaluate body size, physique and looks
- AI assistants discouraging children from talking to trusted guardians about abuse
- Facing harassment online with AI-generated material
- Digital blackmail using AI-manipulated images
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and associated terms were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.