The UK government will allow technology companies and child protection charities to proactively test artificial intelligence systems to ensure they cannot generate child sexual abuse material (CSAM). The move, announced on Wednesday, forms part of an amendment to the Crime and Policing Bill aimed at strengthening safeguards against the creation and spread of AI-generated child abuse imagery.
Under the proposed law, “authorised testers” will be permitted to assess AI models before their release to verify they are incapable of producing illegal content. Technology Secretary Liz Kendall said the measures would help “ensure AI systems can be made safe at the source,” describing the initiative as an important step in protecting children in the digital age.
The amendment comes amid growing alarm about the rise of AI-generated CSAM. The Internet Watch Foundation (IWF) reported that the number of such cases has more than doubled in the past year: between January and October 2025, the organisation removed 426 pieces of AI-related child abuse imagery, compared with 199 during the same period in 2024.
Kerry Smith, chief executive of the IWF, welcomed the government’s proposal, saying it would reinforce ongoing efforts to make the internet safer. “AI tools have made it possible for survivors to be victimised all over again with just a few clicks,” Smith said. “Criminals can now produce limitless, realistic child sexual abuse material. Today’s announcement could be a vital step to ensure AI products are safe before they are released.”
Child protection groups have also voiced cautious support for the new policy. Rani Govender, policy manager for child safety online at the NSPCC, said the measures would help increase corporate accountability but argued that compliance should not be voluntary. “Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design,” she said.
The government said the legal changes would also allow charities and AI developers to test for other harmful content, such as extreme pornography and non-consensual intimate images. Experts have long warned that AI tools trained on vast online datasets can be misused to produce realistic images of abuse, making it difficult for law enforcement to distinguish real material from fabricated content.
Earlier this year, the Home Office said the UK would become the first country to criminalise the creation, possession, or distribution of AI tools designed to generate CSAM, with offenders facing up to five years in prison.
Kendall stressed that the government would not allow technology to outpace safety efforts. “By empowering trusted organisations to scrutinise AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought,” she said. Safeguarding Minister Jess Phillips added that the amendment would “help ensure legitimate AI tools cannot be manipulated into creating vile material, protecting more children from predators.”
