Online Safety Bill: Kill Switch for Encryption
Of the many worrying provisions in the draft Online Safety Bill, perhaps the most consequential appears in Chapter 4, at clauses 63-69. This section of the Bill hands OFCOM the power to issue “Use of Technology Notices” to search engines and social media companies.
As currently worded, these powers will lead to the introduction of routine and perpetual surveillance of our online communications. They also threaten to fatally undermine end-to-end encryption, one of the fundamental building blocks of digital technology and commerce.
Use of Technology Notices purport to tackle terrorist propaganda and Child Sexual Exploitation and Abuse (CSEA) content. OFCOM will issue a Notice based on the “prevalence” and “persistent presence” of such illegal content on a service. The terms “prevalence” and “persistent presence” recur throughout the Bill but remain undefined, so the threshold for interference could be quite low.
Any company that receives a Notice will be forced to use certain “accredited technologies” to identify terrorist and CSEA content on the platform.
The phrase “accredited technologies” is wide-ranging. The Online Safety Bill defines it as technology that meets a “minimum standard” for successfully identifying illegal content, although it is currently unclear what that minimum standard may be.
The definition is silent on what techniques an accredited technology might deploy to achieve that minimum standard. It could take the form of an AI system that classifies images and text. Or it may be a system that compares the hash of every uploaded file against the hashes of known CSEA images logged in the Home Office’s Child Abuse Image Database (CAID) and other such collections.
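For illustration only, here is a minimal sketch in Python of the hash-matching approach. Everything in it is hypothetical: the hash value, the function names, and the use of plain SHA-256. Real deployments typically use perceptual hashes (such as Microsoft’s PhotoDNA) so that resized or re-encoded copies of an image still match, but the principle of comparing every upload against a list of known hashes is the same.

```python
import hashlib

# Hypothetical stand-in for a database of known illegal image hashes,
# such as the Home Office's CAID. The value below is made up.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_illegal(upload: bytes) -> bool:
    """Return True if the upload exactly matches a logged hash."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_HASHES

def moderate(upload: bytes) -> str:
    # Every upload is checked at, or shortly after, the moment of posting.
    return "blocked" if is_known_illegal(upload) else "published"
```

Note that a check like this can only run where the plaintext of the upload is visible, a point that becomes crucial when the same duty is applied to encrypted private messages.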
Whatever the precise technique used, identifying terrorist or CSEA content must involve scanning each user’s content as it is posted, or soon after. Content that a bot decides is related to terrorism or child abuse will be flagged and removed immediately.
Social media services are public platforms, and so it cannot be said that scanning the content we post to our timelines amounts to an invasion of privacy — even when we post to a locked account or a closed group, we are still “publishing” to someone. Indeed, search engines have been scanning our content (albeit at their own pace) for many years, and YouTube users will be familiar with the way the platform recognises and monetises any copyrighted content.
It is nevertheless disconcerting to know that an automated pre-publication censor will examine everything we publish. This will chill freedom of expression in itself, and will also lead to unnecessary automated takedowns when the system makes mistakes. Social media users routinely experience the problem of over-zealous bots removing public domain content, which impinges on free speech and damages livelihoods.
However, the greater worry is that these measures will not be limited to content posted only to public (or semi-public) feeds. The Interpretation section of the Bill (clause 137) defines “content” as “anything communicated by means of an internet service, whether publicly or privately…” (emphasis added). So the Use of Technology Notices will apply to direct messaging services too.
This power presents two significant threats to civil liberties and digital rights.
The first is that once an “accredited technology” is deployed on a platform, it need not be limited to checking only for terrorist or CSEA content. Other criminal activity may eventually be added to the list through a simple amendment to the relevant section of the Act, ratcheting up the extent of the surveillance.
Meanwhile, other governments around the world will take inspiration from OFCOM’s powers to implement their own scanning regimes, perhaps demanding that social media companies scan for blasphemous, seditious, immoral or dissident content instead.
The second major threat is that the “accredited technologies” will necessarily undermine end-to-end encryption. If the tech companies are to scan all our content, then they have to be able to see it first. This demand, which the government overtly states as its goal, is incompatible with the concept of end-to-end encryption. Either such encryption will be disabled, or the technology companies will create some kind of “back door” that will leave users vulnerable to fraud, scams, and invasions of privacy.
Predictable examples include identity theft, credit card theft, mortgage deposit theft and theft of private messages and images. As victims of these crimes tell us, such thefts can lead to severe emotional distress and even contemplation of suicide — precisely the ‘harm’ that the Online Safety Bill purports to prevent.
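To see why scanning and end-to-end encryption cannot coexist, consider a minimal sketch using the open-source PyNaCl library; the users and the message are invented for illustration. The key pairs live only on the users’ devices, so the platform carrying the message holds nothing but ciphertext: there is no plaintext for an “accredited technology” to inspect unless a back door is added on the device itself.

```python
from nacl.public import PrivateKey, Box

# Hypothetical users: key pairs are generated on their own devices
# and the private keys never leave them.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a private message for Bob.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(
    b"Here is the deposit reference for the house purchase..."
)

# This is all the relaying platform ever sees: opaque bytes
# that no server-side scanner can classify.
print(ciphertext.hex())

# Only Bob, on his own device, can recover the plaintext.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext.startswith(b"Here is the deposit")
```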
The trade-off, therefore, is not between privacy (or free speech) and security. Instead, it is a tension between two different types of online security: the ‘negative’ security to not experience harmful content online; and the ‘positive’ security of ensuring that our sensitive personal and corporate data is not exposed to those who would abuse it (and us).
As Ciaran Martin, the former head of the National Cyber Security Centre, said in November 2021, “cyber security is a public good … it is increasingly hard to think of instances where the benefit of weakening digital security outweighs the benefits of keeping the broad majority of the population as safe as possible online as often as possible. There is nothing to be gained in doing anything that will undermine user trust in their own privacy and security.”
A fundamental principle of human rights law is that any encroachment on our rights must be necessary and proportionate. And as ORG’s challenge to GCHQ’s surveillance practices in Big Brother Watch v UK demonstrated, treating the entire population as suspects whose communications must be scanned is neither a necessary nor a proportionate way to tackle the problem. Nor is it proportionate to dispense with a general right to data security only to achieve a marginal gain in the fight against illegal content.
While terrorism and CSEA are genuine threats, they cannot be dealt with by permanently dispensing with everyone’s privacy.
Open Rights Group recommends:
- Removing the provisions for Use of Technology Notices from the draft Online Safety Bill.
- If these provisions remain, Use of Technology Notices should only apply to public messages. The wording of clauses 64(4)(a) and (b) should be amended accordingly.
This article was first published here.
Image credits: Open Rights Group, Image by Andrew Gustar under a CC-By-ND 2.0 licence.
(Contribution by: , Free Expression Consultant for EDRi member Open Rights Group)