EU Commission: IT companies to fix “terrorist use of the Internet”

By EDRi · October 6, 2015

In August 2015, the European Commission confirmed to EDRi that it is preparing to partner with US online companies to set up an “EU Internet Forum”, which apparently includes discussing the monitoring and censorship of communications in Europe. Participants of this Forum include Facebook, Google/YouTube, Microsoft and Twitter. The first meeting was held on 24 July 2015 and focused on “reducing accessibility to terrorist content”.


The Commission seems to believe that online companies are the magic solution to a great variety of societal problems and illegal activities. It therefore regularly meets with the online industry not only to fight terrorism, but also hate speech, alleged copyright infringements and so on. This “EU Internet Forum” is now being set up in parallel to the meetings organised by EU Commissioner for Justice, Consumers and Gender Equality Věra Jourová between “IT companies, business, national authorities and civil society” to “tackle” online hate speech – and to two consultations that aim at assessing the role of online platforms.

In response to an “access to documents” request, the Commission sent us a summary of the preparatory discussion of the EU Internet Forum on 24 July, the minutes of a Ministerial dinner in 2014 on responding to the “terrorist use of the Internet”, and the (heavily censored) lists of participants of both events (see the linked documents below). The Commission was not able to send us a list of planned meetings of the Forum, as “no such documents have been identified”. The summary of the first meeting, however, mentions the official launch of the ministerial EU Internet Forum planned for the end of the year.

A quick look at the documents reveals that the impact, activities, scope, definitions, and unintended side-effects of this Forum are extremely unclear.

1. Unclear problem definition:
It is remarkable that at no point does the question of the motivations for launching such a Forum seem to have been raised. It is simply assumed that “the process of radicalisation takes place more and more through the internet”, as one Minister stated during the meeting in 2014 – and wobbling on top of that assumption is a second one: that ad hoc actions by internet companies can solve the assumed problem. However, the recent terrorist attacks in France and Belgium were, according to the investigations of law enforcement agencies, committed by terrorists who were not radicalised online. Moreover, experts agree that videos distributed through social media play a minimal role in the recruitment process for jihadists.

2. Unclear activity:
It is also extremely unclear what companies are expected to do in order to “reduce terrorist content” and what the legal basis for such an activity would be. The summary of the July 2015 meeting states that the process of deleting material from the Internet “will only work if the companies themselves have robust terms and conditions in place”. If the aim is to encourage companies to use their terms of service to “ban” what is already illegal, in addition to banning legal content (for example, images of female nipples on Facebook or in Apple’s online store), then this would mean that online platforms can more easily remove content without having to actually accuse the individual posting it of doing anything illegal.

3. Unclear vocabulary:
The terms used throughout the meeting summaries have not been defined: What constitutes “terrorist content”, “terrorist propaganda” or “terrorist material”, and what is the “terrorist use of the Internet”? Moreover, there seems to be no concern about the fact that Internet providers lack the expertise to assess whether material is illegal or not. The removal of content by companies placed in the position of judging whether content hypothetically falls under ill-defined “terrorist content” therefore carries a high risk of violating the freedom of communication. One of the reasons that led to the failure of the CleanIT project was the harsh criticism regarding the lack of clarity of the term “terrorist use of the Internet”. Neither Member States nor the Commission seem to have learned from that failed experience.

4. Unclear impact:
Nothing in the documents suggests that there has been a prior assessment of the impact of the actions that should, according to the Commission and Member States, be taken by the private sector. Will companies’ decisions on how to reduce accessibility to “terrorist content” actually be effective to achieve the objective of fighting terrorism? Nobody knows.

5. Unclear side-effects:
The meeting summaries also show a clear lack of concern for a number of questions: whether it is appropriate to coerce or encourage IT companies to delete allegedly illegal or unwelcome content; whether there should be safeguards to protect legal but challenging speech; and whether there is a risk of counterproductive impacts, either on the public policy objectives being addressed or, indeed, on competition and innovation.

The Commission is about to officially launch an initiative that lacks a clear definition, scope and impact assessment. What does this say about the prevention of and fight against terrorism in Europe, and about the tackling of the underlying societal problems? At least the Commission can tell the press and the Member States that it is “doing something”.

Our access to documents request

Documents on the EU Internet Forum received from the Commission

Terrorists behind the attacks in France are not radicalised “online” (only in French, 26.08.2015)

Kouachi-Coulibaly: the paths of the Parisian jihadists (only in French, 15.01.2015)

EDRi: RIP CleanIT (29.01.2013)

(Contribution by Kirsten Fiedler, EDRi)