This is the second article in our series on Europe’s future rules for intermediary liability and content moderation. You can read the introduction here.
When it comes to tackling illegal and “harmful” content online, there’s a major trend in policy-making: Big tech seems to be both the cause of and the solution to all problems.
However, hoping that technology will solve problems that are deeply rooted in our societies is misguided. Moderating the content that people post online can only ever be a partial answer to much wider societal issues. It might help us deal with some of the symptoms, but it won’t address the root causes.
Moreover, giving in to hype and searching for “quick fixes” for whatever topic dominates the news cycle is not good policy-making. Rushed policy proposals rarely allow for an in-depth analysis of the full picture, or for the consideration and mitigation of potential side-effects. Worse, such proposals are often counter-productive.
For instance, an Oxford Internet Institute study revealed that the problem of disinformation on Twitter during the EU elections had been overstated. Less than 4% of sources circulating on that platform during the researchers’ data collection period qualified as disinformation. Overall, users shared far more links to established news outlets than to suspicious online sources.
Therefore, before launching any review of the EU’s e-Commerce Directive, policy-makers should ask themselves: What are the problems we want to address? Do we have a clear understanding of the nature, scale, and evolution of those problems? What can be done to effectively tackle them? Even though the Directive’s provisions on the liability of online platforms also affect content moderation, the upcoming e-Commerce review is too important to be hijacked by a blind ambition to eradicate all objectionable speech online.
In Europe, the decision about what is illegal is part of the democratic process in the Member States. Defining “harmful online content” that is not necessarily illegal is much harder, and there is no process or authority to do it. Regulatory efforts should therefore focus on illegal content only. The unclear and slippery territory of attempting to regulate “harmful” (but legal) content puts our democracy, our rights and our freedoms at risk. When reviewing the e-Commerce Directive, the European Commission should follow its own 2016 Communication on Online Platforms.
Once the problems are properly defined and policy-makers agree on what kind of illegal activity should be tackled online, any regulation of online platforms and uploaded content should take a closer look at the services it attempts to regulate, as well as assess how content spreads and at what scale. Regulating the internet as if it consisted only of Google and Facebook will inevitably lead to an internet that does consist only of Google and Facebook.

Unfortunately, as we saw in the debate around upload filters during the copyright reform, political thinking on speech regulation focuses on a small number of very dominant players (most notably Facebook, YouTube, and Twitter). This political focus paradoxically ended up reinforcing the dominant market position of existing monopolies. It would be very unfortunate to repeat those mistakes in the context of legislation with consequences as far-reaching as those of the EU’s e-Commerce Directive.
This article is the second in our blogpost series on Europe’s future rules for intermediary liability and content moderation. The series presents the three main points that should be taken into account in an update of the e-Commerce Directive:
- E-Commerce review: Opening Pandora’s box?
- Technology is the solution. What is the problem?
- Mitigating collateral damage and counter-productive effects
- Safeguarding human rights when moderating online content
European Commission: Communication on Online Platforms and the Digital Single Market: Opportunities and Challenges for Europe (25.05.2016)
Oxford Internet Institute: Junk News during the EU Parliamentary Elections (21.05.2019)