Three years since the publication of a Green Paper, and over 18 months since a White Paper, the Government has announced its final plans for a new Online Safety Bill. The Children’s Commissioner welcomes this groundbreaking legislation, which is a significant step forward in combating online harms. There is now much work to be done to ensure that the new regulatory regime will be as effective as possible in upholding children’s rights online.
We urge Parliamentarians to press for the Bill to be introduced at the earliest opportunity in 2021, and to scrutinise the following three aspects of the proposals closely throughout the legislative process:
1. Enforcement action
The legislation will introduce a statutory duty of care, overseen and enforced by Ofcom. Ofcom will have a range of powers to use against companies which breach their duty of care, including:
- Fines of up to £18 million or 10% of annual global turnover, whichever is higher.
- Disruption of business activities, such as requiring providers to withdraw access to key services.
- ISP-blocking, meaning that the platform would not be accessible in the UK.
The White Paper also proposed holding senior managers personally accountable for breaches of the duty of care. Although the Bill will make provision for criminal sanctions against senior managers, the power would not be immediately available to Ofcom: it would instead need to be introduced by Parliament via secondary legislation, and this would not happen for at least two years after the regulation comes into effect. Furthermore, the power can only be used as a sanction when platforms fail to share information with Ofcom.
Ofcom will be guided by a principle of proportionality when enforcing the duty of care. This means the sanctions imposed on a company will depend on the severity of the harm, as well as the size and resources of the platform.
- We welcome the range of sanctions which will be available to the regulator, which reflect the gravity of the harms experienced by children online.
- However, it is not clear why the power to impose criminal sanctions on senior managers should be held in reserve; instead, it should be available to Ofcom from the outset. Furthermore, this sanction should be available in response to any breach of the duty of care, not just a failure to share information with the regulator.
- It will be important for Ofcom to show that it is willing to use the most serious sanctions, such as ISP-blocking, when children come to harm and platforms fail to identify or correct the problem. Furthermore, the principle of proportionality should not prevent Ofcom from taking strong action against small platforms, where children can still experience great harm.
- We would also like platforms in breach of their duty of care to be required to notify all users of the problem and the steps they are taking to correct it, in child-friendly language.
2. Private communications
In recognition of the high proportion of child sexual exploitation and abuse which is perpetrated across private channels, the regulatory framework will apply to private messaging services, including those which are end-to-end encrypted.
The Bill will grant the regulator the power to compel companies to tackle illegal child sexual exploitation and abuse (CSEA) content and activity on their services. Where alternative measures are unavailable, the regulator has the power to require a company to implement automated, “highly accurate” technology to identify illegal CSEA. This power is limited to instances where Ofcom has evidence of “persistent and prevalent” CSEA on the platform, where no alternative approaches to tackling this content exist, and ministerial permission has been granted. Failure to comply with Ofcom’s request to use automated scanning will result in enforcement action against the company.
The Government suggests that the power to compel the use of automated technology is likely to be more “proportionate” on public platforms than private.
- We welcome the Government’s change in tone and approach to private communications compared to the White Paper, which stated that requirements to scan or monitor for illegal content would not apply to private channels. The tone is now more reflective of the scale and severity of harm perpetrated against children on private messaging platforms, as detailed in the CCO’s recent ‘Access Denied’ report.
- However, Ofcom’s power to direct companies to use technology to identify illegal CSEA is limited in several important ways. The proposals fail to specify what kind of technology companies will be required to use, beyond “automated” and “highly accurate”. Platforms may therefore abandon established, privacy-preserving scanning tools such as PhotoDNA (as some, including Facebook, have signalled they are considering) in favour of other tools which they assert are highly accurate, but for which there is less evidence of effectiveness. Platforms should be required to demonstrate the accuracy of such tools before adopting them.
- Furthermore, the plans will not require all companies to make routine use of such technology. Instead, it appears that the Government intends for Ofcom to exercise its power of direction in an extremely targeted way, on a case-by-case basis.
- It is unclear how Ofcom will gather the necessary evidence of “persistent and prevalent” child sexual exploitation and abuse when companies are not required to make use of automated scanning technology, which unearths this material at scale.
- Finally, we welcome the Secretary of State’s reassurance that the duty of care will apply to messages which are end-to-end encrypted. However, the Government response does not set out which tools will be at Ofcom’s disposal in order to effectively monitor and regulate harms on end-to-end encrypted platforms.
3. Age verification/assurance
Some services and online content are not illegal, but are inappropriate for children (e.g. pornography).
All companies will be required to assess whether their platforms are “likely to be accessed” by children. Crucially, a service need not be aimed at children for it to be “likely to be accessed” by children. For example, if a site is targeted at users aged 13 or over, but under 13s make up part of the user base, then the “likely to be accessed” test could be met. This replicates the approach taken in the ICO’s Age Appropriate Design Code, which protects children’s data privacy and will come into effect in September 2021.
Services which are “likely to be accessed” by children will be required to conduct a child safety risk assessment, and implement systems to protect children from experiencing harm. This might include using age verification and age assurance technology to restrict children’s access, or offering a differentiated, child-friendly service (e.g. restricting certain functions, such as end-to-end encrypted messaging, to adults).
- It is very welcome that platforms will be required to take action if they are “likely to be accessed” by children, rather than just those aimed at children. However, how this is interpreted is key. Following the ICO, it may be decided that the “likely to be accessed” test is met when children form a “substantive and identifiable user group”. This needs to be more clearly defined. Furthermore, companies should be required to regularly re-assess whether this threshold has been met.
- The Government’s decision last year to not proceed with its plans to require commercial pornography sites to age verify their users was extremely disappointing. This underlines the importance of the Online Safety Bill being introduced into Parliament as soon as possible.
- Further clarity is needed on when a service would be expected to make use of age verification or age assurance technology, despite this being the key technological approach to preventing children from accessing inappropriate content. The Government should give Ofcom the power to direct companies to use this technology in specific cases – echoing the proposed power for Ofcom to direct companies to use technology to identify child sexual exploitation.
Note that this goes further than GDPR fines, which are capped at €20 million (around £18 million) or 4% (not 10%) of annual global turnover.