Delete first, think later
The proposed Digital Services Act wants to push online platforms to remove illegal content quickly. But it takes a sledgehammer to an intricate challenge: moderating online speech. The result would crush freedom of expression instead of enabling it. This is the second blog post in a new series dedicated to the EU’s proposed Digital Services Act and Digital Markets Act.
The European Commission had the clever idea that the Digital Services Act (DSA) it proposed in December 2020 should introduce smart rules for the online world while essentially maintaining the cornerstones that have governed how online platforms treat user-generated content for two decades. Hosting providers are generally liable for user content only if they have “actual knowledge” of illegal content and do not remove it “expeditiously”.
This general rule is an acknowledgment of a difficult fact: In a world in which people upload millions of hours of video and billions of photos and texts every day, it has become impossible to determine the legality of every single piece of it with certainty.
Of course, some types of illegal online content are relatively easy to spot and should therefore be removed as soon as hosting providers become aware of them. Most cases, however, require an informed legality assessment by a trained professional in order to avoid a large number of wrongful removals.
In user notices we trust?
To help hosting companies find potentially illegal content on their systems, the DSA proposal obliges them to set up flagging systems: essentially online forms through which anybody can notify a hosting provider of content they consider to be illegal.
Such notice systems are good because they empower users to act against potentially illegal content and ensure that platforms don’t look the other way, which is especially important when users are victims of abusive or violent online behaviour. Those notices do not, however, prove the actual illegality of any content. At that stage, they are merely allegations of illegality made by random individuals or entities. They can serve to flag pieces of content for further investigation, but they cannot be treated as a final judgment.
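To make the mechanism concrete, here is a minimal sketch in TypeScript of the kind of data such a notice form might capture, loosely modelled on the notice elements listed in the proposed Article 14 (an explanation of reasons, the exact location of the content, the notifier’s contact details and a good-faith statement). The type and field names are illustrative assumptions, not anything defined in the proposal.

```typescript
// Illustrative sketch only: field names are assumptions, loosely modelled on
// the notice elements listed in Article 14 of the proposed DSA.
interface ContentNotice {
  // Exact electronic location of the allegedly illegal content (e.g. a URL).
  contentUrl: string;
  // Explanation of why the notifier considers the content illegal.
  reasons: string;
  // Name and email address of the notifier (the proposal foresees exceptions
  // for certain offences), left optional here for illustration.
  notifierName?: string;
  notifierEmail?: string;
  // Statement that the notifier believes, in good faith, that the
  // information and allegations in the notice are accurate and complete.
  goodFaithStatement: boolean;
  // When the notice was submitted; relevant to acting "expeditiously".
  submittedAt: Date;
}

// A hosting provider would queue a complete notice for human review rather
// than treating it as proof of illegality, which is the point at issue here.
function receiveNotice(notice: ContentNotice, reviewQueue: ContentNotice[]): void {
  if (!notice.contentUrl || !notice.reasons || !notice.goodFaithStatement) {
    throw new Error("Incomplete notice: cannot be processed");
  }
  reviewQueue.push(notice);
}
```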
Yet, somehow the Commission’s DSA proposal treats them as proof of (or at least strong evidence for) illegality by proposing, in Article 14, that every single user notice should establish “actual knowledge” under the DSA’s liability regime and therefore immediately make hosting providers liable for the notified content.
Remember: anyone can send a notice. Your neighbour who dislikes you on Facebook. A random internet troll who disagrees with your tweets. Or, if you sell online, a competing eBay seller who wants to get an edge in the market by blocking your offer. Many of those notices will be wrongful, sometimes obviously and sometimes not. But under the Commission’s proposal, every single notice will create an incalculable liability risk for hosting companies, one they will try to escape as fast as they can. How? By removing your content.
How can the DSA avoid over-removal?
Using legal liability to push hosting companies to better moderate content is like using a sledgehammer to excavate an ancient piece of woodwork. It will break things.
With the liability risk on their backs, the only commercially rational reaction for a hosting company (especially for SMEs) is to remove the notified content and wait to see whether the user who uploaded it complains. The DSA proposal contains a number of good ways for people to complain about wrongful content takedowns. But this kind of “delete first, think later” approach should not become standard operating procedure in a democracy that cherishes freedom of expression as a fundamental right.
Instead of threatening hosting providers with a blunt hammer, the DSA should use a sharp knife: hosting providers should of course be obliged to act expeditiously once they receive a notice about an allegedly illegal piece of content. If they do not, or if they ignore any of the other due process rules proposed in the DSA, they infringe their obligations under the law and should be subject to the fines proposed in Article 42 DSA (and those fines should be severe).
In addition, the DSA should make sure that Member States provide sufficient resources to their national Digital Services Coordinators who will be responsible for enforcing the platform providers’ compliance with the law. Only with strong enforcement action can the DSA become an effective tool to curb the power of big tech and strengthen Europe’s digital sovereignty. Automatic content liability through user notices, however, is the wrong tool for the job and will only open a huge door for abuse.
Further reading:
- Access Now: Protecting Free Expression in the Era of Online Content Moderation (May 2019)
- EDRi blog post: The EU’s attempt to regulate Big Tech: What it brings and what is missing (18 December 2020)
- ARTICLE 19: At a glance: Does the EU Digital Services Act protect freedom of expression? (11 February 2021)
- ARTICLE 19: Recommendations for the EU Digital Services Act (April 2020)
- EDRi Position Paper on the Digital Services Act: Platform Regulation Done Right (9 April 2020)