UK Technology Firms and Child Protection Agencies to Examine AI's Capability to Generate Abuse Images

Under new UK legislation, technology companies and child safety agencies will be permitted to test whether AI systems can generate child abuse images.

Significant Rise in AI-Generated Illegal Material

The announcement came as a safety monitoring body published findings showing that cases of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the changes, the authorities will allow designated AI developers and child protection groups to inspect AI systems – the underlying technology behind chatbots and image-generation tools – to check that they have adequate safeguards against producing depictions of child exploitation.

"Ultimately about stopping abuse before it happens," declared Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the danger in AI systems promptly."

Tackling Regulatory Challenges

The changes address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and other parties have been unable to generate such images as part of any testing process. Until now, authorities had to wait until AI-generated CSAM appeared online before they could act on it.

This legislation aims to avert that problem by enabling designated organisations to halt the production of such material at its source.

Legislative Details

The government is introducing the measures as amendments to its criminal justice legislation, which also imposes a ban on possessing, producing or sharing AI systems designed to generate child sexual abuse material.

Practical Consequences

This week, the minister visited Childline's London base and listened to a simulated call with counsellors involving a report of AI-based exploitation. The roleplay depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.

"When I hear about children experiencing blackmail online, it is a source of intense anger in me and rightful concern amongst families," he stated.

Alarming Statistics

A leading online safety organisation reported that instances of AI-generated exploitation material – each reported webpage can contain numerous images or videos – had more than doubled so far this year.

Instances of the most severe category of abuse content increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly victimized, making up 94% of illegal AI images in 2025
  • Depictions of infants and toddlers increased from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "constitute a crucial step to ensure AI tools are safe before they are launched," commented the chief executive of the internet monitoring organisation.

"AI tools have made it so survivors can be targeted repeatedly with just a few clicks, giving offenders the ability to create possibly endless amounts of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies survivors' trauma, and renders young people, particularly female children, less safe on and off line."

Support Session Information

The children's helpline also released details of support interactions where AI has been referenced. AI-related harms mentioned in the sessions include:

  • Children using AI to rate their weight, physique and appearance
  • AI chatbots discouraging children from talking to trusted adults about abuse
  • Online bullying involving AI-generated content
  • Online blackmail using AI-faked images

Between April and September this year, the helpline delivered 367 support sessions where AI, chatbots and associated terms were mentioned, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.

Brian Tate