CJEU hate speech case: Should Facebook process more personal data?
Austria’s Supreme Court of Justice has referred a case to the Court of Justice of the European Union (CJEU) regarding hate speech on social media platforms. The referral could have a global impact on Facebook – and ultimately on our privacy and freedom of speech.
The case goes back to 2016, when the former leader of the Austrian Green party, Eva Glawischnig, filed a lawsuit against Facebook over insulting posts written about her, calling her a “corrupt oaf” and a “wretched traitor to her people”. In May 2017, a court in Vienna ruled that the postings must be deleted across the platform – not just in Austria, but worldwide. The Austrian Supreme Court has now asked the CJEU to confirm (a) whether the deletion of a posting should be limited to the member state in question or apply globally, and (b) whether Facebook has to delete only the individual post it was informed about, or also similar postings (for example, identified by an algorithm).
This raises a number of quite fundamental questions. First, imposing such a restriction based on automatic filtering appears to be in fairly direct contradiction with the Netlog/Sabam ruling, which rejected the prior filtering of all content at the expense of the service provider (albeit in relation to a different subject matter – copyright).
Secondly, it requires additional personal data processing from a company that rapaciously exploits user data for commercial purposes. While such cases are generally considered to be a simple balancing exercise between freedom of expression and the right being breached, the issue is more complex. Due to the personal data processing that the automatic filtering would require, the balance is between privacy, freedom of expression and freedom to conduct a business (especially if imposed on all providers subsequently, thereby also restricting competition) on the one hand, and combating hate speech on the other.
Thirdly, applying the algorithm to “similar” cases requires a balancing judgement on how an (almost by definition) imperfect technology will be implemented by Facebook. This is especially relevant with regard to the last section of paragraph 1.3.8 of the Draft Recommendation of the Council of Europe (pending approval by Ministers in March 2018) on the roles and responsibilities of intermediaries, which points out the imperfection of automatic filtering – in particular, its inability to assess context. This article, for example, quotes the same words that were used against Eva Glawischnig, yet they are obviously not hate speech in this context.
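To illustrate why this context-blindness matters, consider a minimal sketch of a naive keyword filter (purely hypothetical – any real system would be more sophisticated, but the structural inability to distinguish *using* an insult from *quoting* it remains):

```python
# Hypothetical sketch of naive keyword-based filtering. It flags any
# text containing a blacklisted phrase, regardless of context.

BLOCKED_PHRASES = ["corrupt oaf", "wretched traitor"]

def naive_filter(text: str) -> bool:
    """Return True if the text would be flagged for deletion."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

# A genuinely insulting post is caught...
assert naive_filter("She is a corrupt oaf.") is True
# ...but so is a court report merely quoting the insult, because the
# filter cannot distinguish use from mention.
assert naive_filter('The court found the phrase "corrupt oaf" insulting.') is True
```

Even this toy example over-deletes: news coverage, court rulings and this very article would all be swept up alongside the original hate speech.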
If the CJEU were to support the view that Facebook should be obliged to filter by algorithm, Facebook would have to configure its systems in a way that minimises its legal risks. To do this, it would need to weigh the compelling reasons to over-censor against the limited reasons to take a more diligent approach:
Encouraging errors on the side of over-deletion, we have:
- direct legal liability for failing to respect the order
- secondary law on hate speech
- implementation of terms of service, which are less clear and narrower than the law
Encouraging accuracy, or errors on the side of under-deletion, we have:
- customer relations
This would contradict existing CJEU case law on the balancing of obligations. Such a balance of incentives is not conducive to the protection of free speech. Support for extra-territorial application seems even more antithetical to freedom of expression and the rule of law. Restrictions on fundamental rights need to be accessible and predictable – it is hard to imagine that the CJEU would consider that citizens elsewhere in the EU being subjected to regulation by imperfect Facebook algorithms would meet this criterion. It is puzzling that the Austrian court asked the question at all.
Fourthly, in the Telekabel case, the CJEU ruled as follows: “Accordingly, in order to prevent the fundamental rights recognised by EU law from precluding the adoption of an injunction such as that at issue in the main proceedings, the national procedural rules must provide a possibility for internet users to assert their rights before the court once the implementing measures taken by the internet service provider are known.” In addition to the fact that such national procedural rules generally do not exist, as shown by experience in Austria implementing that ruling, this safeguard is unenforceable in practice, because the measure will be (by definition) implemented for both illegal content and violations of terms of service.
Fifthly, the application of imperfect algorithms to “similar” and not necessarily illegal content (as well as inevitable extension to terms of service implementation) appears to fall below the standards demanded by the court in relation to the respect of the obligation for restrictions to be “provided for by law”. See, in particular, paragraph 139 of the EU/Canada PNR Opinion: “It should be added that the requirement that any limitation on the exercise of fundamental rights must be provided for by law implies that the legal basis which permits the interference with those rights must itself define the scope of the limitation on the exercise of the right concerned”.
How will the Court balance the rule of law, freedom of expression, privacy, basic principles of international law on predictability and accessibility and freedom to conduct a business with hate speech? Time will tell.
Draft Recommendation of the Committee of Ministers to member states on the roles and responsibilities of internet intermediaries
CJEU Opinion on the EU/Canada PNR agreement
(Contribution by Joe McNamee, EDRi, and Maren Schmid, EDRi intern)