Leaked EU Communication – Part 1: Privatised censorship and surveillance

By EDRi · April 27, 2016

EU Charter of Fundamental Rights: Subject to the principle of proportionality, limitations [to fundamental rights] may be made only if they are necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others.

A draft European Commission Communication on Platforms has been leaked. The proposals with regard to the regulation of “illegal” or “harmful” content are hugely disturbing. In summary, the European Commission seems willing to completely give up on the notion of law. Instead, regulation of free speech is pushed into the hands of Google, Facebook and others.

In relation to audiovisual regulation (updating the EU AVMS Directive), the draft suggests that an EU Kids Online research project shows that “children are more and more exposed to harmful content through video-sharing platforms”. The research project shows no such thing – it examined children’s perception of risk and does not provide a detailed analysis over time of actual exposure.

In relation to the real motivation behind the privatised censorship proposals (copyright), the draft talks about platforms “which make available copyright-protected content uploaded by end-users”. The wording is very deliberate. While the E-Commerce Directive gives liability protection to hosting companies that passively host content on behalf of their users, “making available” is an active use of content, for which the rightsholder has an “exclusive right to authorise or prohibit any communication to the public”. As a result, any “making available” by online platforms without the prior consent of the rightsholder would be a breach of copyright, for which the platform would be liable. The only way for a platform to avoid liability for a “making available” by its customers is to subject all uploads to prior checking, filtering and/or takedown in cases of doubt. Online platforms already delete vast amounts of perfectly legal content uploaded by users, so this new incentive would make the situation even worse.

The European Commission even looks to the big online monopolies to take “more effective” action to protect “key societal values”. More effective than what? What values? With what oversight? Following what rules? No rules at all, according to the European Commission – it should be done using “effective voluntary action”. What would this look like? Facebook has already undertaken experiments which show that it has the power to manipulate elections, to manipulate people’s moods, or even to manipulate people working in a specific building. No rules are being considered to limit such behaviour.

Remarkably, this approach is not being proposed out of ignorance on the part of the Commission – the text explicitly refers to situations where “information is filtered via algorithms, or manipulated through opaque moderation processes”. How should this risk to our “key societal values” be addressed? Apparently, by ensuring “non-discrimination, or to ensure transparent, fair and non-discriminatory access to information” when access to information is being “manipulated through opaque moderation processes”. The blatant contradiction is apparently not obvious to whoever wrote this part of the text.

The Commission also sees a concern on the part of online platforms that they could become liable for illegal material if they have systems in place to carry out proactive surveillance. It therefore suggests that measures are needed to “provide certainty” for companies, “enabling them to undertake such responsible behaviour”.

Sadly, there is also definitive proof that the entire text is an exercise in policy-based evidence-making: the claim that the extension of certain obligations to online platforms “has been confirmed by the responses to the public consultations on the Telecoms Review and the ePrivacy Directive Review”. The public consultation on the ePrivacy Directive has not finished and was launched only two weeks before the leaked draft appeared – so it has confirmed nothing yet! Similarly, the text refers to the “success” of the Commission-led “self-”regulatory Internet Forum on terrorism and hate speech, even though that project has produced nothing except some soothing press releases.

Summary: The key societal value of predictable and accountable restrictions on fundamental rights is at risk. With evidence being moulded to suit pre-existing policies, the European Commission appears eager to ensure that the online monopolies monitor online activity, take action to remove any content that creates legal risks for them, and arbitrarily police content to “protect” unspecified and undefined “societal values”. Instead of laws, we will have terms of service. Instead of accountability, we will have unaccountable censorship imposed by a system where, in the Commission’s own words, “information is filtered via algorithms, or manipulated through opaque moderation processes”.

All of this will, the Commission hopes, create “the right framework conditions for user trust, innovation and value creation in Europe”.