Q&A on the Recommendation on measures to “effectively tackle illegal content online”
Today, 1 March 2018, the European Commission proposed a "Recommendation" on the surveillance and filtering of the internet by online companies.
What is today’s announcement?
The European Commission has launched yet another policy document (a “Recommendation”) on what internet companies should do to fight illegal content online if they want to avoid bad publicity about failing to do “more” to fight crime.
Didn’t the Commission do that just six months ago?
Yes. In order to publish this document today, the Commission would have needed to start preparing it very soon after publishing its Communication on the same subject in September 2017. This shows that the initiative is entirely public-relations driven. The Recommendation yet again threatens internet companies with legislation if they do not achieve the (public-relations-driven) goals that have been set.
Will there be any Member State willing to defend people’s rights to freedom of expression and privacy?
We do not have an answer to this yet. Ultimately, EDRi wants Member States to stop criminalising online expression as "terrorist" offences, as happened in the César Strawberry case, and wants the EU to stop pushing for privatised law enforcement. In this sense, we would like the EU not to contradict Vice-President Ansip's recent statement: "EU's limited liability system in e-Commerce should remain backbone of an open, fair and neutral internet. I do not want Europe to become a 'big brother' society in online monitoring".
Formally, the Commission is not changing the E-Commerce Directive, which is the horizontal instrument governing the liability of internet companies. However, the proposed Audiovisual Media Services Directive and the proposed Copyright Directive are two examples of how the E-Commerce Directive is being changed by stealth.
What is the purpose of the Recommendation?
Originally, the Commission planned to launch a Recommendation on terrorist content online. It was not long, however, before copyright lobbyists got several references to "copyright" and "intellectual property" into the text.
But surely the removal of criminal or terrorist content is only part of the problem? Is there any evidence that the Recommendation is actually useful in fighting crime? What if removal interferes with an ongoing investigation?
The Commission's policy is based on hoping for a coincidence. The Commission hopes that, if it creates a public relations problem for internet companies, those companies will react in a way that solves the problem of illegal content online, and will do so in a way that is necessary, proportionate, effective, not counter-productive, not anti-competitive and durable. The Commission has never explained why it believes so fervently in this unlikely coincidence.
One of the demands of the Recommendation is the removal of "terrorist content" within one hour of a referral from Europol. The Commission seems to forget that Article 4(1)(m) of the Europol Regulation makes such referrals subject to the "voluntary consideration" of the companies, assessed against the companies' terms and conditions rather than the law.
Isn’t it good to do something?
We don't know. Removing content "voluntarily" could interfere with ongoing investigations and could even tip off criminals. Under the EU "hate speech code of conduct", removals are based on the companies' terms of service rather than on the law.
The Commission has acknowledged that it does not even keep records of whether the criminal or illegal activity that Europol reports to internet service providers is actually investigated.
Who is checking that the crimes are actually being investigated?
Nobody, apparently. None of the data collected suggests that this privatised enforcement policy is working; more importantly, there is data suggesting that it is actually making the situation worse.
Is the Commission actually collecting any data to make sure this is serving some purpose and is not harmful in the fight against serious crime and terrorism?
No. The Commission collects only simple data, such as how quickly certain content is deleted, but nothing useful for assessing the illegality of the content, counter-measures taken by criminals and terrorists, the risk of counter-productive impacts, or trends that would indicate a need to adjust policies.
What about the digital single market?
A European internet in which a handful of American platforms implement the same "voluntary" monitoring and censorship rules across the whole EU would be a digital single market of sorts. The Commission may say this Recommendation is directed at small companies, but in reality it knows this is not true. Take the EU hate speech code of conduct, for instance: "trusted flaggers" only report content to Google, Facebook and Twitter, and the Commission designed it for the biggest platforms, with little interest in whether, how, when or why it would be implemented by smaller providers.
EU Commission’s Recommendation: Let’s put internet giants in charge of censoring Europe (01.03.2018)
https://edri.org/eu-commissions-recommendation-lets-put-internet-giants-in-charge-of-censoring-europe
* This blogpost was modified on 6 March 2018 to rectify an error.