
E-Commerce review: Mitigating collateral damage

By EDRi · August 27, 2019

This is the third article in our series on Europe’s future rules for intermediary liability and content moderation. You can read the introduction here.

Asking social media and other platform companies to solve problems around illegal online content can have serious unintended consequences. It’s therefore crucial that new EU legislation in this field considers such consequences and mitigates any collateral damage.



With the adoption of the EU’s E-Commerce Directive in 2000, policymakers put the decision about what kind of content should be removed into the hands of hosting companies. Under this old law, hosting companies are obliged to remove illegal content as soon as they gain knowledge of it. In practice, this means that companies are forced to take a huge number of decisions every day about the legality of user-uploaded content. And, being commercial entities, they try to do so fast and cheaply. This has serious implications for our fundamental rights to freedom of expression and to access to information.

What’s the problem?

So, what’s the problem? Shouldn’t platform companies be able to decide what can and cannot be posted on their systems? In principle, yes. The problem is that in many cases, the decision about the legality of a given piece of content is not straightforward. It often requires a complex legal analysis that takes into account local laws, customs, and context. Platform companies, however, have no interest in dealing with such complex (and therefore expensive) judgements – quite the opposite! As soon as businesses run the risk of being held liable for user-uploaded content, they have a strong commercial incentive to remove anything that could remotely be considered illegal – anywhere they operate.

To make things worse, many platforms use broad and often vaguely worded terms of service. The opaque application of those terms has led to overly eager take-down practices at the expense of human rights defenders, artists, and marginalised communities. This was pointed out, for example, in a recent report from the Electronic Frontier Foundation (EFF), one of EDRi’s US-based members.

Human rights organisations, and especially those fighting for lesbian, gay, bisexual, transgender, queer and intersex (LGBTQI) rights, often face two problems on social media: On the one hand, their content is regularly taken down because of alleged breaches of terms of service – despite being completely legal in their country. On the other hand, they are faced with hateful comments and violent threats from other users that are often not removed by platforms. As the EFF report states: “Content moderation does not affect all groups evenly, and has the potential to further disenfranchise already marginalised communities.”

Wrongful take-downs are common

Because none of the big social media companies today includes any statistical information about wrongful take-downs and removals in its transparency reports, we can only rely on publicly available evidence to gauge the scale of this problem. The examples we do know of, however, indicate that it’s big. Here are some of them:

  • YouTube removed educational videos about the Holocaust, falsely classifying them as hate speech (Newsweek).
  • Facebook removed posts from Black Lives Matter activists, falsely claiming they amounted to hate speech (USA Today).
  • Twitter temporarily blocked the account of a Jewish newspaper for quoting Israel’s ambassador to Germany as saying he avoids contact with Germany’s right-wing AfD party. Twitter claimed the tweet qualified as “election interference” (Frankfurter Allgemeine).
  • Facebook removed posts and blocked accounts of Kurdish activists criticising the Turkish government, classifying them as hate speech and “references to terrorist organisations” (Buzzfeed).

Despite the numerous examples of failing filters, there seems to be a growing belief among policymakers that algorithms and automatic filtering technologies can solve the problem of illegal content – without enough thought given to the harmful side-effects.

Content filters are not the solution…

As EDRi has argued before, filters do fail: we’ve seen automated take-downs and blocking of public domain works, of satire and nuance, and even of works uploaded by legitimate rightsholders themselves. Filters are also not as effective as one might think. The Christchurch video streaming incident, for example, showed that content filters based on hash databases can easily be circumvented by applying only minimal changes to the content.
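To see why such filters are brittle, consider the simplest form of hash matching: a cryptographic hash changes completely when even a single byte of a file is altered, so re-encoding, cropping or watermarking a video produces content that a database of known hashes no longer recognises. The Python sketch below illustrates this under the assumption of exact matching; the names (sha256_hex, blocklist, original_video) are hypothetical, and it is not a description of any platform’s actual system. Industry hash-sharing databases typically rely on perceptual hashes, which tolerate some modifications but, as the Christchurch case showed, can still be evaded.

```python
# Minimal sketch (hypothetical names, not any platform's actual system):
# exact hash matching breaks as soon as a single byte of the content changes.
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of the given bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist containing the hash of a known, flagged video file.
original_video = b"...binary contents of a flagged video..."
blocklist = {sha256_hex(original_video)}

# The "same" video with a single byte appended (standing in for a re-encode,
# crop, or watermark in practice) produces a completely different hash.
modified_video = original_video + b"\x00"

print(sha256_hex(original_video) in blocklist)  # True  -> caught by the filter
print(sha256_hex(modified_video) in blocklist)  # False -> slips through
```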

The belief that big tech companies and their filter technologies can somehow magically solve all of society’s problems is not only misguided, it’s a threat to people’s fundamental rights. The lack of transparency and the ineffectiveness of filters also mean that the number of take-downs by the big platforms alone is a poor measure of success in the fight against illegal online content.

…but changing business incentives is.

Instead of mandating even more failing filter technology, EU legislation that aims at tackling illegal online content should focus on the root causes of the problem. In reality, many platform companies benefit from controversial content. Hateful tweets, YouTube videos featuring conspiracy theories and outright lies, and scandalous defamatory posts on Facebook are all a great way for platforms to drive “user engagement” and maximise screen time, which in turn increases advertisement profits. These commercial incentives need to be changed.

The fourth and last blogpost of this series will be published shortly. It will focus on what platform companies should be doing, and on what new EU legislation on illegal online content that respects people’s fundamental rights should look like.

Other articles in this series on the E-Commerce Directive review:
1. Opening Pandora’s box?
2. Technology is the solution. What is the problem?