Private and secure communications attacked by European Commission’s latest proposal

On 11 May, the European Commission put forward a proposal for a ‘Regulation laying down rules to prevent and combat child sexual abuse’ to replace the interim legislation that EDRi fought against last year. In our immediate reaction, EDRi warned that the new proposal creates major risks for the privacy, security and integrity of private communications, not just in the EU, but globally. Here, we unpack a bit more about the legislative proposal, and why we are so concerned.

By EDRi · May 11, 2022

Please note: this is an emerging analysis and may be updated as our understanding of the proposal evolves. Any such updates will be clearly stated. With many thanks to our members for their contributions to this analysis.

On 11 May, the European Commission put forward a proposal for a ‘Regulation laying down rules to prevent and combat child sexual abuse’ (2022/0155(COD)). This Regulation will replace the current interim legislation that EDRi fought against last year. The new proposal marks a significant departure from its predecessor in terms of its essential structure and approach.

In our immediate reaction, EDRi warned that the new proposal creates major risks to the privacy, security and integrity of private communications, not just in the EU, but globally. Here, we unpack a bit more about the legislative proposal, and why we are so concerned. This is the first of many analyses and recommendations that we will issue to ensure that fundamental rights are properly respected in this new law.

The scope and structure of the proposal

The scope of the Regulation laying down rules to prevent and combat child sexual abuse relates specifically to the online dissemination of child sexual abuse and exploitation material (CSAM) and to online grooming (the solicitation of children). Its rules will apply to practically every social media, messenger and chat service operating in the EU, as well as to web-based email services, image and video hosting sites, and even things like gaming platforms and dating apps that offer chat functions. The proposal also contains obligations on app stores and internet service providers, although these differ from the obligations placed on other providers.

This means that various elements of the proposal relate to both public and private forms of online communication. For this reason, it is both a “derogation” from (an exception to certain rules of) the 2002 ePrivacy Directive and lex specialis to (an additional law particularising elements of) the upcoming Digital Services Act (DSA).

Principally, the Regulation works via a graduated system of universal and specific obligations on providers of ‘relevant information society services’. The universal obligations require all providers to undertake risk assessments and act on their results, while the specific obligations allow different types of providers to be served with orders to detect, remove, report or block content (depending on the scenario and the type of service or platform). The proposal thus not only privatises law enforcement tasks for combatting CSAM, but also requires service providers to use mass surveillance measures, such as scanning everyone’s private communications, which law enforcement authorities themselves are legally barred from using.

Although the Commission has suggested that potentially risky practices (such as the scanning of encrypted messages) would only be required as a result of detection orders, the whole framework of the proposal is set up to incentivise providers to preempt legal detection orders by taking the strongest (and therefore most intrusive) measures possible – or be legally liable. Encrypted communications without scanning are portrayed as inherently risky in the Impact Assessment accompanying the proposal. It is easy to see how that could incentivise over-surveillance and disincentivise measures which would secure or otherwise protect private communications.

These requirements for providers are relevant for three types of material:

  • (1) Known content (usually photos or videos that have already been verified as constituting illegal child sexual abuse material under EU law, often identified via what’s known as “hashes” or “hash databases” that recognise this content based on technical markers; see the illustrative sketch after this list);
  • (2) “New” content (photos and videos that have not previously been reported, and for which there is no hash database to cross-reference, meaning that technological interpretation is needed. According to the Commission’s Impact Assessment, this will be done on the basis of “indicators” prepared by Thorn, the US-based not-for-profit organisation behind the scanning tool Safer); and
  • (3) “Grooming” / “solicitation” (text-based or other indicators that may imply the solicitation of young people for sexual abuse purposes, detection of which can only be attempted through predictions from AI-based tools).
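
To make the difference between these categories more concrete, below is a minimal, purely illustrative sketch of the kind of lookup that underpins detection of “known” content. Real deployments rely on proprietary perceptual hashes (PhotoDNA-style fingerprints designed to still match slightly altered copies) rather than the exact cryptographic hash used here, and the names and database contents below are hypothetical. Categories (2) and (3), by contrast, cannot be handled by any such lookup at all and depend on probabilistic, error-prone classifiers.

```python
# Illustrative sketch only: how matching against a database of "known"
# content works in principle. Real systems use proprietary perceptual
# hashes so that slightly modified copies still match; the exact SHA-256
# hash below is a simplification, and KNOWN_HASHES is a hypothetical set.
import hashlib
from pathlib import Path

KNOWN_HASHES: set[str] = {
    "placeholder-hash-value",  # a real database holds millions of verified hashes
}

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_content(path: Path) -> bool:
    """True if the file's hash appears in the database of known material."""
    return file_hash(path) in KNOWN_HASHES
```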

Obligations on information society services operating in the EU

Under Article 3 of the Regulation, providers of digital communication services and platforms (with the exception of internet access providers), referred to collectively in the proposal as ‘relevant information society services’, will be required to assess the risk that their platform or service could be used to disseminate known or new child sexual abuse material, or for grooming. The proposal calls this ‘online child sexual abuse’ (OCSA).

The requirements for risk assessments are brief and vague, and the only option for clarification is potential future guidance from the European Commission (Art. 3.8). Elements that may increase risk include “the manner in which the provider designed and operates the service” (Art. 3.2.d). This is also emphasised in the Explanatory Memorandum to the Commission’s proposal, which says that risky platforms and services are those which “have proven to be vulnerable to misuse … principally by reason of their technical features or the age composition of their typical user base”.

The additional context of the proposal’s Impact Assessment, as well as public statements from the Commission (for example the Q&A), makes it clear that end-to-end encryption would be seen as one of the factors that make a service like WhatsApp or Signal risky. Such a service would thus be encouraged, and if it fails to comply, subsequently forced, to scan its users’ private messages.

Under Article 4 of the proposed text, providers must also act on the basis of their risk assessment in order to ‘mitigate’ the risks of OCSA on their platform or service. This is one of the most pernicious elements of the proposal: the Commission describes it as “technologically neutral”, putting no specific requirements on providers but simply requiring them to achieve a particular objective. Service providers must submit their risk assessment and proposed mitigation measures to a national authority, which can demand further measures if those proposed are deemed insufficient.

However, the actions and methods that providers must take in order to achieve the objective of mitigating risk could have enormous implications for fundamental rights. For example, if end-to-end encryption makes a service risky according to the Regulation, the provider could ‘choose’ to remove encryption so as to comply with its obligations under the Regulation. But its users would be stripped of the protections of encryption, which is one of the few tools that reliably and effectively protect our right to confidential digital communications (part of our broader right to a private life under EU, European and international law). The negative consequences of removing encryption would extend beyond the EU for services that operate globally: it is not really possible to remove or weaken encryption only for users in the EU.

‘Age verification’ is another one of the proposed solutions that could create more risks than it resolves, and yet there is no engagement with the serious dangers that could arise for young people if they are subjected to certain technologies for verifying their age on platforms, for example biometric assessments. Nor does the proposal engage with the potential of such techniques to destroy online anonymity. Recital 28 allows providers to use methods under their “own initiative” to assess users’ age, showing a serious lack of concern for the potential fundamental rights impacts here.

For the European Commission, this strategy is the ultimate get-out-of-jail-free card. By claiming ‘we don’t care how you do it, we only care about the outcome’, the proposal opens the door very wide for abuses, while closing its eyes to reality and absolving the Commission of any responsibility for the collateral damage of its proposal.

CSS: a major threat to vital encryption

For those services deemed risky, the obligations under this law go even further. Initially, providers may take mitigation measures of their choosing to minimise risk (Article 4), as described above. But for messaging or hosting services, unless they can ultimately show that there is no “remaining risk” of abuse at all (an almost impossible task), they will be subject to the threat of, and eventually the serving of, “National Detection Orders” (Article 7).

These orders – which the national “independent Coordinating Authority” of the EU Member State in which the provider is registered will request from a national judicial or administrative authority – have the power to compel providers to scan their users’ messages, including in encrypted environments. Although some of the safeguards established in Articles 7 through 11 are designed to enhance protections and minimise abuse of such scanning practices, it is hard to see how bodies designed to enforce a law to detect OCSA would decide against additional scanning methods in their balancing act under Art. 7.4. What’s more, once a generic surveillance infrastructure has been built, it is likely to be used more widely than originally promised.

But it is not just under National Detection Orders that providers can scan private communications (despite what Commissioner Johansson said at the press conference to announce the law). In fact, the dangerous practice of applying Client-Side Scanning (CSS) to encrypted communications is something that the Commission considers in its Impact Assessment to be a good way for providers to initially reduce the risk of abuse on their platform. CSS is therefore a method it says they should consider under Article 4 (i.e. to avoid being served with a detection order).

Shockingly, and despite unequivocal evidence to the contrary from cybersecurity experts around the world, the Impact Assessment even goes so far as to claim that CSS can be done in full respect for fundamental rights and without any threat to privacy or data protection (p.287). It references what it calls a “viable and mature” solution from Apple in Summer 2021 – which Apple dropped following severe criticism from cybersecurity and privacy experts.

Clearly, end-to-end encryption will be hamstrung by this proposal. Even without being subject to a detection order, providers will be pressured to either drop encryption, or apply CSS methods which would completely undermine its essential purpose and turn their users’ devices into potential spyware.
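
To illustrate why, here is a purely schematic sketch, not any provider’s actual implementation, of where client-side scanning sits in an end-to-end encrypted messenger. The scanner, reporting path and toy “encryption” below are hypothetical stand-ins; the point is simply that the scan runs on the user’s own device before encryption ever happens.

```python
# Schematic sketch only: why client-side scanning (CSS) is at odds with
# end-to-end encryption. This reflects no real provider's code; the
# scanner, reporting path and toy "encryption" below are hypothetical.

KNOWN_INDICATORS = {b"example-indicator"}  # hypothetical indicator set

def scan_against_indicators(plaintext: bytes) -> bool:
    """Hypothetical on-device check of the message against indicators."""
    return plaintext in KNOWN_INDICATORS

def report_to_authority(plaintext: bytes) -> None:
    """Hypothetical reporting path that forwards flagged content."""
    print("flagged and forwarded before encryption")

def encrypt_for_recipient(plaintext: bytes, key: bytes) -> bytes:
    """Toy XOR placeholder for the messenger's real end-to-end encryption."""
    return bytes(p ^ k for p, k in zip(plaintext, key * len(plaintext)))

def send_message(plaintext: bytes, key: bytes) -> bytes:
    # The scan runs BEFORE encryption, on the user's own device, so the
    # end-to-end encryption that follows no longer guarantees that only
    # sender and recipient can act on the content of the message.
    if scan_against_indicators(plaintext):
        report_to_authority(plaintext)
    return encrypt_for_recipient(plaintext, key)
```

Whether the scanning relies on hashes or on an AI classifier, the structural problem is the same: the content is inspected in plaintext on the device, outside the protection that end-to-end encryption is supposed to provide.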

These issues have led to the German Child Protection Association (Kinderschutzbund) calling the Commission’s proposal “disproportionate,” explaining that “Encrypted communication hardly plays a role in the dissemination of depictions of abuse. Therefore, we consider the random scanning of encrypted communication to be disproportionate and not effective.”

The role of the EU Centre and Europol

The Regulation also focuses heavily on the creation of an EU Centre, described as an official but decentralised EU body. It is purported to be a fully independent institution, especially from law enforcement. But the proposal establishes that the Centre will share a location, IT department, human resources department, processes, and other operational elements with the EU’s law enforcement agency, Europol.

The role of the EU Centre includes acting as a conduit between national authorities and providers, registering risk assessments, providing (sometimes free-of-charge) advice and keeping a roster of the available technologies which providers can use for scanning when subject to a national detection order.

Disregard for digital rights expertise

We have highlighted repeatedly that the Commissioner in charge, Ylva Johansson, should have engaged more with digital rights groups in the preparation of this law. Disappointingly, the Memorandum accompanying the Regulatory proposal speaks scathingly and dismissively of the many German citizens who submitted consultation inputs to the Commission to raise their fears that this proposal would undermine encryption. The Memorandum also claims that all relevant stakeholders were properly engaged in this process – presumably suggesting that the Commission did not see the EDRi network as relevant.

Other potential issues

  • Member States where a lot of providers are registered (cough – Ireland – cough) will face an enormous share of the administrative and enforcement burden, and it is currently unclear from the proposal how they will be able to cope with it.
  • Large service providers such as Facebook, Google and Twitter will have the resources to conduct risk assessments, consider mitigation measures and seek approval from the national authority. However, the proposal applies to any social media and communications service, irrespective of size. These requirements will be highly detrimental for European SMEs and even smaller volunteer-run services in the EU.
  • Any content that is reported to the EU Centre as fitting into the above categories will be sent on to law enforcement unless it is “manifestly unfounded” (i.e. manifestly obvious not to be CSAM) (Article 48). This is a very high bar, suggesting that even content that probably isn’t CSAM will still be sent on to law enforcement just to be on the safe side. In practice, this means many lawful intimate images will be shared with police.
  • The removal orders give the relevant providers just 24 hours to take down content.
  • As explored in our press release, the blocking orders (Article 16) require internet access providers to do something technically impossible.
  • Accuracy and reliability statistics on various forms of technology are provided by companies, with no independent verification.
  • As Member of the European Parliament Patrick Breyer has pointed out, whilst companies are subject to a complex web of obligations under this Regulation, there are no requirements whatsoever for law enforcement to take down content – despite evidence showing that law enforcement are systematically failing to remove known CSAM.
