Terrorist content regulation – prior authorisation for all uploads?
The European Commission’s proposal for a Regulation on Terrorist Content Online is a complex and multi-layered instrument. On the basis of an “impact assessment” that fails to provide meaningful justification for the measures, it proposes:
- Obligations to take down content on the basis of removal orders within one hour
- An arbitrary system of referrals, under which content designated as potentially dangerous (but not illegal) is flagged to internet service providers. Whether this content is removed is decided by the provider, based on its terms of service and not on the law
- Vaguely defined “proactive” measures to be imposed on an ill-defined subset of service providers to remove unspecified content.
However, buried deep in this chaotic-by-default legal drafting are explanatory notes (“recitals”) that go further than anything made explicit in the headline measures of the proposal:
- Recital 18 introduces the notion that “reliable technical tools” (artificial intelligence software, in other words) may be used by service providers to “identify new terrorist content”.
- Recital 19 goes on to say that a “decision to impose such specific proactive measures should not, in principle, lead to the imposition of a general obligation to monitor”. “In principle” means that it should not, but it may; in this context, that is the only way these words can be interpreted. Moreover, the recital also gives the national “competent authority” the option to force the use of such technical measures upon service providers.
The text goes on to explain that, in unspecified circumstances, Member States may derogate from their obligation (under the E-Commerce Directive) not to impose a general monitoring obligation on internet service providers. When doing so, they should “provide appropriate justification” (to whom is not explained).
What does this mean? It means that the proposal explicitly tells European Member States that they have the option to require not “just” the monitoring of all uploads to filter out known terrorist content, but the use of algorithms to review all content while it is being uploaded. Permission for the upload would be granted or denied algorithmically. Where the algorithms deny permission, the uploader’s personal data may be stored and made available to law enforcement authorities.
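To illustrate what such a regime would mean in practice, here is a minimal, hypothetical sketch of an upload filter in Python. Everything in it – the classifier, the threshold, the retained data fields – is an assumption for illustration only; the proposal specifies none of these details.

```python
# Hypothetical sketch of the upload-filter flow the recitals would permit.
# Every name, threshold and data field here is an illustrative assumption;
# the proposal itself specifies none of them.

import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

SCORE_THRESHOLD = 0.8  # assumed cut-off above which an upload is blocked


@dataclass
class UploadDecision:
    allowed: bool
    score: float


def classify(content: bytes) -> float:
    """Stand-in for the "reliable technical tool" (an ML classifier).

    Returns a score in [0, 1]; a real system would run a trained model here.
    """
    return 0.0  # stub: treats all content as non-terrorist


retained_records = []  # stand-in for data retained for law enforcement


def handle_upload(content: bytes, uploader_id: str) -> UploadDecision:
    score = classify(content)
    if score >= SCORE_THRESHOLD:
        # Permission denied: the uploader's personal data is retained
        # and could be made available to law enforcement authorities.
        retained_records.append({
            "uploader": uploader_id,
            "content_hash": hashlib.sha256(content).hexdigest(),
            "score": score,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        return UploadDecision(allowed=False, score=score)
    return UploadDecision(allowed=True, score=score)
```

The point of the sketch is the structure, not the stub classifier: every single upload passes through the filter before publication, and a refusal automatically generates a retained record about the uploader – a general monitoring obligation in all but name.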
The proposal is ill-drafted and lacks evidence to justify the extreme measures it contains. Even on the data provided in the Impact Assessment, it is unclear how such measures would, even in theory, address or resolve the problem at stake. What is clear is that the proposal would give big tech companies more power to scan and delete information online without accountability. Will the European Parliament be able to fix this? Time – and our elected representatives – will tell.
(Contribution by Joe McNamee, EDRi)