Rearranging deck chairs on the Titanic: Belgium’s latest move doesn’t solve critical issues with EU CSA Regulation

By EDRi · April 4, 2024

Content warning: contains references to child sexual abuse and exploitation

The EDRi network has long urged European Union (EU) lawmakers to ensure that efforts to combat online child sexual exploitation and abuse (OCSEA) are lawful, effective and technically feasible. The goal of protecting children online is vital, but it can only be achieved if the proposed measures work and are compatible with human rights, including privacy and the presumption of innocence.

That’s why we have opposed the European Commission’s draft Child Sexual Abuse (CSA) Regulation, which has been very widely criticised – including by technical experts, police, and some child rights groups. These critiques have centred on the proposed generalised access to the content of electronic communications (which amounts to mass surveillance). This type of scanning can only be implemented by undermining end-to-end encryption (E2EE) and by applying age verification – for which no fundamental rights-compliant technology exists.

Lawyers working on behalf of the EU Member States have warned that the original CSA Regulation proposal would violate the essence of the right to privacy – advice that has been very influential on the European Parliament’s position, and on Council thinking to date.

The current Belgian Presidency of the Council has responded to these concerns, claiming that they have put forward a ‘more proportionate’ approach, which also protects encryption. However, this is not reflected in the latest text that they have put on the table.

The EDRi network urges EU governments to recognise that this proposed Council position is not a silver-bullet solution and crosses clear red lines that many Member States have already drawn. We furthermore call on them to refuse to agree to a text which clearly does not meet the requirements laid down in EU fundamental rights law.

Our assessment of the proposed new approach

In its facilitator role, the Belgian Presidency is trying to break the deadlock between EU Member States with a new Council proposal. This new text (Council document 8019/24) amends the entire CSA Regulation, carrying over several worrying provisions from Member States’ negotiations last year (which we criticised) and adding several new – and equally concerning – parts:

1. The new risk framework still allows detection orders to be issued very broadly

Perhaps inspired by the EU’s Artificial Intelligence (AI) Act’s approach to assessing risk (which has been heavily criticised by civil society groups), the Belgian Presidency proposes that providers of online services self-assess their level of risk and report this to their relevant Coordinating Authority (Articles 3, 5 and 5a). Based on the input from providers, Coordinating Authorities then make the final decision about risk categorisation (low, medium, high) of the service in accordance with a methodology and criteria which have not yet been formalised by the Belgian Presidency.

For services in the medium and high risk category, the Coordinating Authority can order adjusted or additional risk mitigation measures (Article 5a). Detection orders can only be issued for services in the high risk category. The text claims that the new risk categorisation approach will make detection orders a “measure of last resort” (recital 18b), something that previous Council texts also claimed to do.

However, the primary conditions for issuing a detection order are unchanged from the previous compromise text, including the vague and overly broad “evidence of a significant and present or foreseeable risk” of the service being used for the purpose of child sexual abuse. This evidence can even come from simulated tests conducted by the EU Centre (Article 47a). Although the precise criteria for categorising a service as high risk are still unknown, there is likely to be a substantial overlap between these criteria and the other conditions for issuing a detection order. In a nutshell, this means that the issuance of detection orders is only nominally limited by the new risk categorisation approach.

In any case, once the service, or parts of it, is classified as high risk by the Coordinating Authority, detection orders can be issued against all users in a general and indiscriminate manner, including against people with no connection, not even an indirect one, to any OCSEA crime. The new proposal therefore does very little to address the key objections: mass surveillance of private communications, and the pulling of entirely innocent people into a net of suspicion for the worst kind of crime.

Throughout the new text, the phrase “the service or parts or components of the service” (Articles 5.1 and 7.5-7.7) is used to try to indicate that measures can be ‘targeted’. However, this is not genuine or meaningful targeting. As we noted in our criticism of the Commission’s original text, “parts or components” does not necessarily mean targeting – it could include, for example, the entire messaging function of a social media platform. Notably, it does not mean targeting of individual users for whom there is reasonable prior suspicion of involvement in online child sexual abuse.

What’s more, the Council’s updated wording still allows “the [whole] service” to be put under a detection order. Article 7.4.(ca) now adds some fundamental rights language and claims to bring in safeguards – but these are superficial. Similarly, the new wording on possible cooperation between the EU Centre and EU agencies specialised in fundamental rights and data protection is merely voluntary (Article 53a), and therefore provides no guarantee of protection or oversight.

2. Reporting on the basis of repeated flagging is based on statistical misunderstandings

Another purported targeting measure proposed by the Presidency is that reports would be made only on the basis of repeated flagging of users. Article 7.10 introduces a requirement that a first detection hit is only flagged within the user’s account. Notification to the provider, and the subsequent reporting obligation to the EU Centre, would only be triggered by two or more detection hits for known child sexual abuse material (images/videos), or by three or more hits for unknown material and solicitation.

This is based on a flawed and deeply misleading calculation of statistical probability, as can be seen in footnote 2 of Council document 7462/23. The calculation wrongly assumes that uploading content is a one-off activity, whereas in reality people continually make postings that are scanned by error-prone detection algorithms, so even a small per-scan false positive rate accumulates over time. The Presidency’s assessment that this new threshold for reporting would make the reporting of users to law enforcement ‘targeted’ therefore does not hold up to mathematical scrutiny.
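As a rough illustration of why the one-off assumption matters, the sketch below computes the chance that an entirely innocent user crosses a two-hit threshold when every upload is scanned independently. The 0.1% false positive rate and the upload counts are hypothetical numbers chosen purely for illustration; they are not taken from the Council documents or from any real detection tool.

```python
from math import comb

def prob_at_least_k_false_hits(n_scans: int, fp_rate: float, k: int) -> float:
    """Probability that an innocent user's uploads trigger at least k
    false-positive detection hits, assuming each scanned upload is an
    independent Bernoulli trial with false-positive probability fp_rate."""
    p_fewer_than_k = sum(
        comb(n_scans, i) * fp_rate**i * (1 - fp_rate) ** (n_scans - i)
        for i in range(k)
    )
    return 1 - p_fewer_than_k

# Illustrative numbers only, not taken from the Council documents:
# a hypothetical 0.1% per-upload false-positive rate, and a user who
# shares roughly 20 images a week for two years (~2,000 scanned uploads).
FP_RATE = 0.001
for n in (1, 100, 500, 2000):
    p = prob_at_least_k_false_hits(n, FP_RATE, k=2)
    print(f"{n:>5} scanned uploads -> P(>= 2 false hits) = {p:.1%}")
```

Under these illustrative assumptions, a single scanned upload essentially never produces two false hits, but a user who keeps sharing images over a couple of years faces a better-than-even chance of being falsely flagged. This cumulative effect is exactly what a calculation that treats uploading as a one-off event ignores.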

Furthermore, given the inaccuracy of the proposed tools, especially when it comes to solicitation, and the fact that potentially all users could still be scanned, the high chance of these measures being deemed unlawful general monitoring by the Court of Justice has not gone away.

3. Privacy-respecting services will be disproportionately impacted (and encouraged to use age verification and turn off encryption by default)

We also have concerns that this new approach marks out secure and privacy-respecting services as the riskiest. Supporting document WK 3036/2024 REV 1 elaborates on the Presidency’s speculative – and deeply concerning – methodology for assessing risk. Services likely to be high risk include those that people can join anonymously, those without age verification, and those that allow any sort of private communication. It also suggests that services which encrypt information only when a user opts in could reduce their risk score – a step which would discourage the use of E2EE, despite it being a standard for secure communication around the world.

The document’s proposed risk criteria also include some that are not feasible to assess – such as “whether the service is accessed from an unsecured public WiFi hotspot” – again ringing alarm bells about the lack of technical expertise in this negotiation process.

Whilst this document does at least consider several concerns that we have raised about the risks of age verification – such as requiring such systems to be “zero-knowledge” and to not process biometric data – other problems raised by age verification, like digital exclusion, restrictions on adolescents’ free expression and access to information, and threats to online anonymity, remain unresolved.

4. End-to-end encrypted services can still be forced to weaken or undermine security

Language has been added which is intended to address concerns about the protection of encryption, but it is not sufficient to do so. For example, Recital 26 now explains that “To avoid undermining cybersecurity, providers should identify, analyse and assess the possible cybersecurity risks derived from the implementation of the technologies used to execute the detection order and put in place the necessary mitigation measures to minimise such risks.”

Article 1.5. also states that “This Regulation shall not create any obligation that would require a provider of hosting services or a provider of interpersonal communications services to decrypt or create access to end-to-end encrypted data, or that would prevent the provision of end-to-end encrypted services.”

Yet this wording would not sufficiently protect providers from being forced to weaken the overall integrity and security of their end-to-end encrypted services. That’s because the Presidency has continued to use the evasive language first put forward by the Home Affairs unit of the Commission. Whilst “access” and “decrypt[ion]” are not mandatory, there is nothing to stop providers from being forced to circumvent or weaken their encryption. Reading between the lines, this is very likely to still allow authorities to force providers to deploy client-side scanning (CSS) – which essentially amounts to putting surveillance technology directly on everyone’s devices.

It’s also unreasonable and legally problematic to put the burden on providers to “avoid undermining cybersecurity”, given the widespread expert consensus that any tool for accessing end-to-end encrypted information will undermine cybersecurity. Yet in Article 10.4, providers are required to follow a series of steps to mitigate cybersecurity risks; a bit like telling them to wash their hands with water while forbidding them from getting their hands wet.

The obligation for providers to scan private communications if subjected to a detection order always takes precedence over the less-strict requirement to avoid undermining cybersecurity – again meaning that these ‘protections’ for E2EE are little more than lip service.

5. The Commission would be relied upon to unilaterally decide and update the risk criteria

Another novelty of the new Council proposal is that the European Commission is given significant powers to expand and interpret the risk rules – and because this would be done as a delegated act, there would be no input into these rules by the Council or Parliament.

The Commission directorate in charge of this law, DG HOME, has been tarred by allegations of political integrity violations and of unlawful micro-targeted online advertising using prohibited characteristics, and has refused to engage with digital rights groups or listen to cybersecurity experts. We therefore have serious concerns about empowering it to make such unilateral decisions. Furthermore, DG HOME’s alleged lack of impartiality and its closeness to developers of scanning technology – such as Thorn, backed by Ashton Kutcher – further call the suitability of this approach into question.

For all these reasons, we urge the EU Council to oppose this latest attempt at what, ultimately, still amounts to Chat Control.

This blog is dedicated to our esteemed colleague Professor Ross Anderson, a bold and important voice in our collective work on the EU’s CSA Regulation and a part of EDRi since our inception in 2002. A co-author of the landmark paper on CSS, ‘Bugs in our pockets’, Ross was unwaveringly committed to protecting encryption and combatting threats to digital security around the world. He used his vast technical experience and expertise to fight tirelessly to uphold privacy and the rule of law in the digital age. He will be deeply missed.

Contribution by: Ella Jakubowska, Head of Policy, EDRi & Jesper Lund, Chairman of EDRi member IT-Pol