All hands on deck: What the European Parliament should do about the DSA
After the European Commission’s proposal for a Digital Services Act (DSA) in December 2020, no fewer than seven committees in the European Parliament are now drafting their reports and opinions on the DSA. In parallel, member states are deliberating on the Council’s position.
Yet, while the Commission has carefully tried to modernise the ageing rules of the E-Commerce Directive and make them fit for the platform economy, several of the committees’ draft reports propose—deliberately or not—to turn the DSA into a dystopian fundamental rights nightmare.
Liability for online platforms
The common flaw of these proposals lies in their approach to platform liability and the failed hope that illegal online content will disappear, or even be meaningfully reduced, if only platforms were made legally liable for it and required to remove it more quickly. Some draft reports therefore propose rigid removal deadlines:
- seven days from the moment a piece of content has been flagged as allegedly illegal (IMCO),
- 24 hours if the content in question has the potential to harm completely undefined values such as ‘public policy’ (IMCO), and
- 30 minutes if the allegedly illegal content is being live-streamed (JURI).
In reality, the problem is not speed or liability. It is the inevitable complexity of determining the illegality of people’s free speech that makes online content moderation such a dangerous field for regulation.
It may be relatively straightforward to detect a product that is illegally offered on eBay or Amazon. Determining with certainty the legal status of a tweet or a Facebook post is not. Context matters, and a statement made online may be legal in some of the 27 EU member states and illegal in others. With the exception of a small minority of manifestly illegal content, those decisions often require in-depth legal assessment by experts.
Delete first, think later
Under those extreme proposals, every single hosting service faces a legal liability risk for any piece of content that has not been removed within the time frames above. What will Facebook, the community-run Twitter alternative Mastodon, or your favourite discussion forum do? To avoid the liability risk, they will remove everything that is flagged to them with only the most superficial verification from their side.
“The threat of Facebook is not in some marginal aspect of its products or even in the nature of the content it distributes. It’s in [the company’s] commitment to unrelenting growth and engagement. It’s enabled by the pervasive surveillance that Facebook exploits to target advertisements and content.” Dr. Siva Vaidhyanathan, University of Virginia
This would allow anyone, from disgruntled ex-employees to governments and foreign operatives, to trigger the (at least temporary) removal of any online content they dislike or otherwise wish to suppress. Just flag it as illegal and it is gone. It would allow police (a possible ‘trusted flagger’ under the DSA) to remove YouTube videos that show police violence against demonstrators. It would allow Viktor Orbán’s government to remove content about or from the LGBTQ community, the same community MEPs just celebrated and pledged to defend.
Liability reduces competition
Large platform companies are more likely than small ones to be able to mitigate the liability risks created by the IMCO, JURI and ITRE proposals. Smaller and community-run platforms will have the most difficulty complying. As a result, abolishing the finely balanced liability regime we have will cement the dominant position of Big Tech platforms such as Facebook and YouTube. In a best-case scenario, smaller competing platforms will have to buy Big Tech’s content removal algorithms as a service, making the latter even more powerful. In a worst-case scenario, competing platforms will simply shut down, and the EU’s political goal of digital sovereignty will become a distant dream.
Who should be covered by the DSA?
While there are good arguments for broadening the DSA’s scope to create specific rules for search engines and so-called ‘home assistants’, private messaging services certainly do not belong in the DSA. Messaging apps are covered by the ePrivacy Regulation, whose adoption national governments have been blocking for four long years.
If messaging apps like Signal or WhatsApp were covered by the DSA, as proposed by MEP Geoffroy Didier’s draft report in JURI, there would no longer be any legal distinction between something you post publicly, say, on Twitter, and something you say privately in a chat with your partner or best friends.
Every single word you say over WhatsApp or Signal would need to be scanned and analysed for potential illegality under any of the 27 national jurisdictions in the EU. It would be as if the postman were legally required to open and read every single letter and package before delivering it to the intended recipient.
(Oh wait, someone did this before: the Stasi in East Germany. It is clearly not something the EU should aspire to.)
Protect people from tracking
The good news is that most committees have discussed or proposed some form of strong tracking regulation. Amendments include:
- a simple ban on pervasive tracking, as proposed by the EDPS,
- an opt-in solution, where tracking must be switched off by default,
- a prohibition of consent banners designed to trick people into saying yes against their intention (so-called ‘dark patterns’), and
- an obligation to respect automated consent signals (illustrated in the sketch below).
While option 1 appears to be the cleanest and simplest to implement, options 2, 3 and 4 combined can also be a powerful protection for users, especially children, who cannot be expected to understand the legalese presented by most consent banners.
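To make option 4 concrete: automated consent signals already exist. The Global Privacy Control (GPC), for instance, is sent by supporting browsers as the HTTP request header “Sec-GPC: 1”, and the noyb initiative referenced at the end of this article proposes a similar browser signal. What follows is a minimal sketch, not anything prescribed by the DSA drafts, of how a hypothetical Flask-based news site could honour such a signal before serving any tracking code:

```python
# A minimal sketch (assumptions: a Flask app, and the Global Privacy
# Control header as one concrete instance of an automated consent signal).
from flask import Flask, render_template_string, request

app = Flask(__name__)

PAGE = """<!doctype html>
<html>
  <body>
    <h1>Example article</h1>
    {% if tracking_allowed %}
      <!-- tracking scripts are only ever injected on this branch -->
      <script src="/static/analytics.js"></script>
    {% else %}
      <!-- signal honoured: no tracking code and no consent banner -->
    {% endif %}
  </body>
</html>"""

@app.route("/")
def index():
    # Supporting browsers send "Sec-GPC: 1" when the user has opted out.
    # The signal is treated as a binding "no": no tracking is loaded and
    # no banner nags the user to reconsider.
    opted_out = request.headers.get("Sec-GPC") == "1"
    return render_template_string(PAGE, tracking_allowed=not opted_out)

if __name__ == "__main__":
    app.run()
```

The design point is that the browser-level choice replaces the banner entirely, which is exactly what makes options 2 to 4 workable for users, such as children, who cannot be expected to parse consent legalese.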
Large media corporations argue that somehow the free press will die if they cannot spy on every single user’s online behaviour to show “more relevant ads”. Just last week, media lobbyists sent a long email to LIBE members, pleading to be allowed to continue exploiting their users. The truth is that only a tiny fraction of news publishers’ revenue comes from online ads, not least because advertising tech companies like Google and Facebook dominate the tracking ads market and take the largest share of the revenue for themselves.
“There are so many different players taking a little cut here, a little cut there — and sometimes a very big cut. A lot of the money that [advertisers] think they are giving to premium publishers is not actually getting to us.” Hamish Nicklin, former chief revenue officer at The Guardian
A recent Accenture study found that over 50% of all advertising spend by the industry is lost to ad tech middlemen like Google and Facebook, and that, unbelievably, some 15% is completely unaccounted for. No one knows where it goes. This confirms figures The Guardian previously calculated for its own online ads.
In the long run, news publishers actually lose out, and we all lose out with them. Tracking users across the whole internet allows ad tech companies to target them anywhere and lets advertisers manipulate and discriminate against users at will. Once Google and Facebook know who is who, news publishers lose their relevance as a gateway to their specific readership.
What the DSA should do
If policymakers really want to hold Big Tech platforms accountable, the DSA should:
- Disrupt their tracking business model. It fuels hate, enables manipulation and discrimination, and targets users’ weaknesses to keep them clicking. Prohibiting pervasive tracking, especially without real consent, is the first step towards changing that.
- Ensure freedom of expression for all. The DSA should maintain the limited liability regime of the E-Commerce Directive and add modern functions to it, such as a mandatory notice-and-action system that obliges platforms to treat content flagged by users as what it is: an allegation of illegality, not a fact. It should also oblige them to act transparently and proportionately, regardless of whether that action is taking content down or leaving it up. The DSA must include accessible redress mechanisms for those who flag potentially illegal content, and for those whose content is removed.
- Not regulate private communications; that is what ePrivacy is for. The DSA is conceived to prevent the distribution of illegal content to the public and to create rules for online platforms. There is no place for private messaging apps in the DSA without breaking en masse into people’s most intimate conversations and building a mass surveillance system that would have made the Stasi blush.
Image credit: EFF (CC BY 3.0 US)
The article was published on 14 July 2021.
Further reading:
- UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression: “Report on disinformation”, 13 April 2021
- European Digital Rights (EDRi): “How online ads discriminate: Unequal harms of online advertising in Europe”, June 2021
- Siva Vaidhyanathan: “What If Regulating Facebook Fails?”, Wired, 7 February 2021
- noyb: “New browser signal could make cookie banners obsolete”, 14 June 2021
- Norwegian Consumer Council: “Analysis shows how Facebook and Google push users into sharing personal data”, 27 June 2018