- “I’m worried this could happen to me” – children tell Commissioner they fear becoming a victim of apps that use Generative AI to create sexually explicit images
- Widespread availability and misuse of AI tools make it easier than ever to create explicit content, particularly of girls – many of whom are withdrawing from online spaces in response
- Commissioner demands immediate ban on apps that use AI to create naked images of children: “There is no positive reason for these to exist.”
The Children’s Commissioner is calling on the government to introduce a total ban on apps that use Artificial Intelligence (AI) to generate sexually explicit ‘deepfake’ images of children.
Dame Rachel de Souza’s new report exposes how Generative Artificial Intelligence (GenAI) is being misused to create sexually explicit deepfake images of real people, and the alarming effect these ‘nudification’ tools are already having on children’s safety, wellbeing and participation online.
Despite being relatively new technology, GenAI – which is often free to use and widely available – has supercharged the growth of these tools. While it is illegal to create or share a sexually explicit image of a child, the technology enabling their creation remains legal – and it is no longer confined to corners of the dark web but is now accessible through large social media platforms and search engines.
Children’s Commissioner Dame Rachel de Souza said:
“In our lifetime, we have seen the rise and power of Artificial Intelligence – once the stuff of science fiction – to shape the way we learn, connect and experience the world. It has enormous potential to enhance our lives, but in the wrong hands it also brings alarming risks to children’s safety online.
“Children have told me they are frightened by the very idea of this technology even being available, let alone used. They fear that anyone – a stranger, a classmate, or even a friend – could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps.
“Girls have told me they now actively avoid posting images or engaging online to reduce the risk of being targeted by this technology – we cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children’s lives.
“The online world is revolutionary and quickly evolving, but there is no positive reason for these particular apps to exist. They have no place in our society. Tools using deepfake technology to create naked images of children should not be legal and I’m calling on the government to take decisive action to ban them, instead of allowing them to go unchecked with extreme real-world consequences.”
Drawing on conversations with children about this emerging technology, the Commissioner’s new report, published today, 28th April, analyses the threat of nudification technology, assessing its use online and its impact on children. In focus groups, children told the Commissioner about their biggest concerns:
- The high risk of harm to children: AI that generates naked or other sexually explicit deepfake images disproportionately targets girls and young women, and many tools appear to work only on female bodies – contributing to a culture of misogyny both online and offline.
- Easy access to harmful tools: These AI tools are widely available via mainstream platforms, including the biggest search engines and app stores, with GenAI making the creation of harmful content easier and cheaper than ever.
- Change in girls’ online participation: Girls spoke about taking preventative steps to keep themselves safe from becoming victims of nudification tools by limiting their online participation – in the same way that girls often take steps to protect themselves in real life, such as not walking home alone.
- Demand for action from children: Young people want action taken to tackle the misuse of AI tools. Some children even questioned the purpose of these technologies which are so often used to harm: “Even before any controversy… I could already tell it was gonna be a technological wonder that’s going to be abused.” – Girl, 16
The Children’s Commissioner is calling for urgent action, including:
- Banning bespoke nudification apps that enable users to generate sexually explicit images of real people;
- Creating specific legal responsibilities for developers of GenAI tools to identify and address the risks their products pose and to mitigate the risks to children;
- Establishing effective systems to remove sexually explicit deepfake images of children from the internet;
- Recognising deepfake sexual abuse as a form of violence against women and girls and taking it seriously in law and policy.