UK Tech Firms and Child Safety Officials to Test AI's Capability to Create Abuse Images

Technology companies and child protection agencies will be granted permission to assess whether AI systems can produce child exploitation material under recently introduced British laws.

Substantial Rise in AI-Generated Harmful Content

The announcement came as a safety monitoring body revealed that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the government will permit designated AI companies and child safety groups to inspect AI models – the underlying technology for chatbots and image generators – and verify they have sufficient protective measures to prevent them from producing depictions of child exploitation.

"Ultimately, this is about stopping abuse before it happens," declared Kanishka Narayan, adding: "Specialists, under strict conditions, can now detect the risk in AI systems early."

Tackling Legal Challenges

The amendments were needed because producing and possessing CSAM is illegal, meaning that AI developers and others could not create such content even as part of a testing regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

This law is designed to avert that problem by helping to stop the production of such material at its origin.

Legal Framework

The government is introducing the changes as amendments to criminal justice legislation, which also implements a prohibition on owning, producing or distributing AI models designed to generate child sexual abuse material.

Practical Consequences

This week, the official toured the London headquarters of Childline and listened to a mock-up call to advisors featuring a report of AI-based exploitation. The call portrayed a teenager requesting help after being extorted with an explicit deepfake of themselves, created using AI.

"When I learn about children experiencing blackmail online, it causes intense anger in me and rightful concern amongst families," he said.

Concerning Statistics

A leading online safety foundation reported that instances of AI-generated abuse content – such as webpages that may contain numerous images – had more than doubled so far this year.

Instances of the most severe material – the most serious category of abuse imagery – rose from 2,621 images or videos to 3,086.

  • Female children were predominantly victimized, accounting for 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "constitute a crucial step to guarantee AI tools are safe before they are launched," commented the chief executive of the online safety organization.

"AI tools have made it so victims can be targeted repeatedly with just a few simple actions, giving criminals the ability to make potentially limitless quantities of advanced, lifelike exploitative content," she continued. "Content which further compounds victims' trauma, and renders children, especially girls, more vulnerable both online and offline."

Support Session Information

Childline also released details of counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:

  • Using AI to rate body size, physique and appearance
  • Chatbots discouraging children from consulting trusted adults about abuse
  • Facing harassment online with AI-generated material
  • Online extortion using AI-manipulated images

Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned, four times as many as in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.

Angelica Bradley

An avid mountain biker and outdoor enthusiast sharing insights from trails across diverse landscapes.