
In recent years there has been a huge increase in the use of artificial intelligence (AI) tools, both in terms of the number of products available on the market and their use by consumers. It is clear there are advantages to the use of AI, but there is still so much that is not yet known about these tools.  

Children using AI are potentially exposing themselves to new risks of harm online, and their lives may be reshaped more fundamentally by these tools in the future. Given its recent emergence, it is unsurprising that the actual impact of AI on children’s lives is still not fully understood. Ofcom tracks children’s online and media usage, and has found that 59% of 7-17 year old internet users in the UK, and 79% of those aged 13-17, have used a generative AI tool in the last year. Snapchat’s My AI was the most commonly used platform (51%), and there was no difference by gender in the number of children using these tools.  

Concern about AI has not been a major theme for children in my youth voice work or in my work on children’s online lives. Where children have told me about AI, it has largely been to express pessimism about the future and their careers. For example, in The Big Ask, children said the following about artificial intelligence: 

“I personally think that technology would take over and many jobs such as agriculture may be replaced with artificial intelligence. For example, a farmer’s job eliminated for a robot etc.” – Girl, 11, The Big Ask.  

“Not enough jobs because of artificial intelligence taking over. No opportunities for people from poorer backgrounds. No help for people who aren’t academic.” – Girl, 14, The Big Ask

The Government has signalled it will take a pro-innovation approach that will focus on positioning the UK as a market to test and innovate on new AI tools, while using AI in combination with public datasets to improve public services. For example, the Department for Education has recently published its position on how generative AI could be used in the education sector.   

As Children’s Commissioner, I want to sound a note of caution on the risks that AI poses for child protection. The Government’s white paper largely does not address children or child protection, other than to note that AI tools are being deployed to identify child sexual abuse material (CSAM).  

I am concerned about the risks posed by generative AI platforms that are available to children, and by the incorporation of AI tools into platforms commonly used by children. 

We are yet to understand the true impact of these tools on children’s lives. However, I consider that AI exemplifies the problem of emerging technologies that are not fully covered by the existing regulatory regime, and of how children can suffer as a result.  

I have been a strong proponent of the robust protections for children in the Online Safety Act, but it has taken us many years to get here, and many, many children have grown up in an online environment that was, and is, not safe or designed for them. I am very pleased that the Act is in law and that I have a statutory role to ensure that children’s voices are heard, but AI is not covered by the Act and I am concerned that we are once again lagging behind on an issue. 

More work is needed to fully understand how children can safely interact with these new technologies, and what strong safeguards should look like. I will continue to raise these issues in the implementation of Ofcom’s Children’s Code under the Online Safety Act regime and in my engagement with Ministers on tackling child sexual abuse and exploitation in the UK. I also look forward to addressing them in my response to Baroness Bertin’s Pornography Review, which I am pleased will look at the issue of AI-generated pornography.  
