The UK government has unveiled four new laws aimed at tackling the growing threat of child sexual abuse material (CSAM) created using artificial intelligence. The Home Office announced that the UK would be the first country in the world to criminalize the possession, creation, and distribution of AI-generated CSAM, with offenders facing up to five years in prison.
The legislation also targets AI-generated “paedophile manuals” that teach offenders how to use artificial intelligence to exploit children. Those found in possession of such materials could face up to three years behind bars.
Home Secretary Yvette Cooper described AI as “industrializing the scale” of child sexual abuse, warning that the technology is being used to groom, blackmail, and further exploit children. “What we’re seeing is that AI is now putting online child abuse on steroids,” she told the BBC’s Sunday with Laura Kuenssberg, adding that the government may need to take further action to address the evolving threat.
Tougher Measures Against Online Exploitation
The new laws will also criminalize the operation of websites where paedophiles share CSAM or exchange advice on grooming children, with offenders facing up to ten years in prison. Additionally, Border Force officers will gain new powers to compel individuals suspected of posing a sexual risk to children to unlock their digital devices when entering the UK. Because much CSAM is produced overseas, this measure aims to prevent offenders from importing illegal material. Depending on the severity of the content found, offenders could face up to three years in prison.
AI-generated CSAM is an emerging form of exploitation in which technology is used to create images that appear realistic, often by modifying real photographs. Some tools can “nudify” existing images, while others swap a child’s face onto an explicit image. In some cases, the voices of real children are used, further re-victimizing survivors.
Experts warn that these AI-generated images are also being weaponized to blackmail children, forcing them into further abuse.
Growing Threat of AI-Generated Abuse
The National Crime Agency (NCA) reports that 800 arrests are made each month in connection with online threats to children, and that an estimated 840,000 adults in the UK pose a potential risk to children.
“This is an area where the technology doesn’t stand still, and our response cannot stand still either,” Cooper emphasized.
Despite broad support for the new laws, some experts argue that the government should have gone further. Professor Clare McGlynn, an expert in online abuse legislation, welcomed the measures but highlighted “significant gaps” in the proposals. She called for a ban on “nudify” apps and stronger action against the depiction of young-looking actors in mainstream pornography, which she described as “simulated child sexual abuse videos.”
The Internet Watch Foundation (IWF) has reported a staggering 380% rise in AI-generated CSAM in 2024, with 245 confirmed reports compared to just 51 in 2023. Each report may contain thousands of images. In one study, the IWF found over 3,500 AI-generated child abuse images on a single dark web site in just one month.
Interim IWF Chief Executive Derek Ray-Hill warned that AI-generated CSAM emboldens offenders and puts real children at greater risk. “It makes real children less safe,” he said, while welcoming the new laws as a “vital starting point” in tackling the problem.
Pressure on Tech Companies to Act
Children’s charity Barnardo’s also praised the government’s action, with Chief Executive Lynn Perry stating, “It is vital that legislation keeps up with technological advances to prevent these horrific crimes.”
She urged tech companies to strengthen safeguards on their platforms and called for robust enforcement of the Online Safety Act by Ofcom.
The new laws will be introduced as part of the Crime and Policing Bill in the coming weeks.