EDRi

Public consultation on the Digital Services Act package

Answering Guide for civil society organisations and individuals

INTRODUCTION

The European Commission has launched a public consultation to gather your views about the regulation of online services to shape the future Digital Services Act (DSA).

In her political guidelines, the President of the Commission, Ursula von der Leyen, has announced that the DSA will introduce new liability and safety rules for digital platforms, services and products for the European Union (EU). The future package will likely cover two aspects of platform regulation: (1) a revision of the existing E-Commerce Directive including the liability regime for user-generated content and (2) a set of ex-ante rules to address the concentration of the online market and the abusive practices by dominant tech companies. The Commission’s consultation therefore covers both aspects.

This legislative file has the potential to either become one of this decade’s biggest threats to people’s rights and freedoms online or a big opportunity to clarify and improve the current situation. This is why your contribution to this consultation process is extremely valuable: it is your chance to speak up against the agendas pushed by Big Tech that the Commission will also receive.

We have prepared this guide to help you navigate the consultation:

In addition to multiple choice questions, the consultation features a number of free text questions with 3,000 and 5,000 character limits – this means that you have the chance to develop your opinions in detail if you want to. However, do not feel obliged to answer all the questions or to write an essay in each free text box. It is up to you to determine how much time and effort you want to spend on each answer, depending on whether you are passionate about and have expertise on a given topic or whether you want to get it done quickly.

If you need some guidance about specific wording, please see EDRi’s own response to the consultation.

GETTING STARTED

Go to the consultation page – you will first be asked to create an account for the EU survey portal if you do not already have one. Once registered, you will be asked to identify yourself: the capacity in which you are answering the consultation - either ‘EU citizen’ or ‘Non-EU citizen’ (sigh) - your country of origin and your email address, all of which are mandatory.

The Commission is unlikely to contact you using this data, except to confirm your response. It will also ask you whether it can publish your answers – you can choose to stay anonymous or not.

HOW TO NAVIGATE THE CONSULTATION

At the top of the page, you will notice that the consultation is divided into six main chapters, with the possibility to submit final remarks at the end:

I. Safety and responsibilities

This chapter mainly covers issues of illegal and “harmful” activities online. It seeks to gather data and to generate statistics about the likelihood of people finding illegal or “harmful” content and products on the internet: have you seen this type of content online? What did you do? It also inquires about the current measures put in place by intermediaries (or so-called “platforms”) to deal with illegal activities: are platforms transparent in what they do? What obligations should be imposed on platforms?

II. Liability regime

This chapter derives from the first one but goes into the legal intricacies of platforms’ liability vis-à-vis user-generated content. More specifically, the questions seek answers about which parts of the existing E-Commerce Directive the DSA should change or clarify. How should platforms be categorised? Are the current liability exemptions fit for purpose? Should platforms be obliged to monitor all content uploaded on their systems?

III. Gatekeeper platforms

This part of the consultation addresses the problems raised by the dominance and control that current large online platforms are able to exert in the digital economy and over our online lives. They achieve this quasi-monopolistic power through a number of practices such as self-preferencing, systematic buyouts of competitors, as well as strong network effects which lock in a large share of their users. The European Commission announced that it will try to tackle this concentration of market power by introducing so-called ex ante rules as part of the Digital Services Act package in order to ensure markets remain fair and open for all. Therefore, the consultation seeks views on the scope of the problem and the possible measures to address this situation.

From a fundamental rights point of view, this is an important chapter because the current centralised platform economy and the resulting abusive market behaviour are the source of numerous fundamental rights violations (e.g. the right to privacy, data protection, the right to non-discrimination, etc.). The internet is one of the greatest enablers for people to exercise their fundamental rights when it is characterised by a plural and diverse online ecosystem with real alternative services and business models. This can only be achieved by lowering barriers to market entry and regulating the most toxic activities of the dominant platforms.

IV. Advertising and smart contracts

This chapter is divided into two parts: one related to online advertising and one on smart contracts. EDRi does not cover the latter as it is currently beyond the remit of its work.

Online advertising, and more specifically the ad tech industry, is a significant piece of the content governance puzzle. While the vast majority of digital services rely on advertisement revenues and the tracking of users’ online behaviour, the impact of content amplification on democratic and fundamental rights is often overlooked. The Commission’s questions allow you to share your experience with online advertising and to propose transparency and accountability measures to regulate it.

V. Self-employed individuals and platforms

EDRi does not cover this part as it is currently beyond the remit of its work.

If you want to share your views and expertise on the topic of self-employed individuals’ rights (so-called “gig workers”, typically Uber drivers, Deliveroo bikers, clickworkers, etc.), this part of the consultation is for you. Especially if you are or were affected yourself. However, the consultation does not seem to have the ambition to address the legal status of gig workers.

VI. Governance and enforcement

This chapter focuses on the governance model and oversight mechanisms that the future DSA should introduce to implement its provisions over digital services across the EU. Its questions cover the role of the EU and cooperation between national authorities. Some seek input on the existing cooperation arrangements in the field of consumer protection and audiovisual media services.

In the following part of this Answering Guide, we provide guidance on the most noteworthy questions depending on your main interests. Where we do not provide guidance, either the topic goes beyond EDRi’s scope of work or we invite you to refer to our own response. Click on the topics below that interest you.

You want to share your opinion on: FREEDOM OF EXPRESSION, HATE SPEECH AND OTHER ILLEGAL CONTENT

We recommend that you focus on sections I, II and III.

SECTION I - SAFETY AND RESPONSIBILITIES

Go directly to question 11 in section I. Main issues and experiences, subsection:

A. Experiences and data on illegal activities online.

11. Did you ever come across illegal content online (for example illegal incitement to violence, hatred or discrimination on any protected grounds such as race, ethnicity, gender or sexual orientation; child sexual abuse material; terrorist propaganda; defamation; content that infringes intellectual property rights, consumer law infringements)?

You can answer based on your experiences. Only if you answer “Yes, once” or “Yes, several times” will you be able to access additional questions on the actions you took after encountering such content.

Please be careful to focus on online content that is very likely to be illegal in your country or in the EU. Some policymakers try to push for the removal of legal content that disturbs them and, often, users notifying content to platforms wrongly qualify legal content as illegal. For example, last year, only 30% of reports to the Austrian hotline correctly indicated illegal content (43% of reports of illegal child abuse material and only 2% of reports on National Socialist material in 2019). The results from this question will therefore likely overestimate the actual experience of illegal content and will then be included in the Commission's "statistics" as evidence of the scale of illegal content online.

Having said that, if you do not know for certain that content was illegal, you obviously should not say that you came across illegal content; alternatively, you can answer that you “don’t know”. You should only say you did if you are sure that the content breached a specific provision of your country’s laws.

Questions 12 to 17 help you describe what measures you took after coming across illegal content and, in case you did report the content, whether the procedure was easy for you and whether the outcome was satisfying.

Do not hesitate to answer based on your experiences. The Commission seems mostly interested in how content reporting mechanisms are used in practice. Unfortunately, the Commission does not seem to be interested in the actual outcome of the reporting procedure: What happened to the content? Which entity assessed the content? What was their reasoning? Has the content uploader been informed? Was the content reported to competent authorities?

18. How has the dissemination of illegal content changed since the outbreak of COVID-19? Please explain.
19. What good practices can you point to in handling the dissemination of illegal content online since the outbreak of COVID-19?

EDRi decided not to provide answers to these two questions as we believe it is too early to provide a definite analysis of the dissemination of illegal online content during the COVID-19 pandemic. Given the increased number of people relying on online communication tools due to remote work, the scale of dissemination might be larger. However, reliable scientific data on that question is still missing. In any case, the crisis does not seem to have dramatically and structurally changed the way illegal content is distributed online. Therefore, it should not be used as a pretext by the Commission to justify a desire to introduce restrictive measures without proper human rights impact assessments.

20. What actions do online platforms take to minimise risks for consumers to be exposed to scams and other unfair practices (e.g. misleading advertising, exhortation to purchase made to children)?
21. Do you consider these measures appropriate? (Yes, No, I don’t know)
22. Please explain.

Although this set of questions is focused on consumer law infringements and consumer protection, there is an opportunity to point out the current major failures of the main social media companies in terms of controlling their targeted advertisement services and fighting illegal ads on their platforms. In these questions it is important to highlight that online platforms profit from ads, regardless of whether they are offensive, dangerous, or even illegal. They indeed have strong economic incentives to accept all ads, which conflicts with their efforts to curb misleading ads, fraud and scams. There are many examples of Google’s and Facebook’s security features (such as identity verification) being circumvented by fraudsters and failing to identify scams. Check out EDRi’s own answer for inspiration.

B. Transparency

1. If your content or offering of goods and services was ever removed or blocked from an online platform, were you informed by the platform?
2. Were you able to follow-up on the information?
3. Please explain.

This is a very welcome set of questions asking about the transparency of online platforms’ removal procedures. If an online platform ever restricted you from publishing something (e.g. your content was deleted or shadow-banned, your account was blocked for a certain period of time, you were prevented from posting anything, etc.), it is important that you share your experience. The scale and consequences of speech limitation taking place on online platforms are usually underestimated, especially when it concerns perfectly legitimate content (independent journalism, claims by social movements, etc.). The impact, notably on marginalised groups, is under-reported, so the Commission does not necessarily have access to a lot of information about this problem set.

Platforms put in place specific content policies in order to attract advertisers and nudge users into staying on the platform for as long as possible. This creates incentives to set up discriminatory content moderation practices that are detrimental to both user expectations and user rights. The Chinese video platform TikTok, for example, was reportedly hiding and suppressing content of “ugly” and “fat” users as well as people with disabilities. This is why it is crucial to point out the various types of discrimination that people face today when a uniform and monolithic set of rules, designed to work in the interest of platforms and advertisers rather than in the interest of the people subjected to it, is imposed.

In addition, the lack of procedural safeguards in this process is mainly due to companies’ opaque and unfair terms of service. On most platforms, there is no information about appeal options, and where they exist there is no right for users to receive a response or explanation, leading to arbitrary content decisions. This makes the exercise of fundamental rights vis-à-vis those platforms extremely difficult. This is why it would be helpful for the Commission to hear testimonies from people and groups whose content has been restricted and who can explain how the concerned platform handled the case.

4. If you provided a notice to a digital service asking for the removal or disabling of access to such content or offering of goods or services, were you informed about the follow-up to the request?

This is a similar question to question 17 in subsection A, but with more detail. It only applies to you if you have ever submitted such a report for content removal to a platform.

For example, the implementation reports of the Code of Conduct on Illegal Hate Speech have shown that not all platforms systematically inform the flagger (the person who reported a certain piece of content) about their decision: 67.1% of the notifications received feedback in the period November-December 2019 according to the Commission. Apparently, the Commission seeks to confirm or review these statistics. We actually expect the rate to be lower because platforms are likely to be more diligent in informing official flaggers of the Code of Conduct than unaffiliated users.

5. When content is recommended to you - such as products to purchase on a platform, or videos to watch, articles to read, users to follow - are you able to obtain enough information on why such content has been recommended to you? Please explain.

Another question that is important from a digital rights point of view. Most of the time, the answer is probably: no. Most online platforms refuse to provide information and details about their recommendation systems, because they relate to the core of their business model. The exact explanation of why the content you see on your Facebook/Instagram/Twitter timelines or YouTube recommended list is different from the one your friends or even your family see is the platforms’ secret. The micro-targeting of content according to very specific characteristics poses multiple societal problems, from a democratic as well as a privacy rights perspective: the manipulation of voters in the 2016 U.S. presidential election and the Brexit referendum has shown the powerful impact that the secret targeting of content for money can have. It has proven to be a dangerous tool in the hands of a secretive Silicon Valley company. Currently, users have no way of knowing how and based on which criteria content is curated for them and, as a result, have no ability to decide for themselves how they want to receive and access online information.

Do not let Facebook fool you with its ‘Why am I seeing this?’ feature, as it does not genuinely improve transparency. The list of criteria that determines why you are actually seeing a certain piece of content or ad is much longer, more complex and more intrusive than what Facebook is willing to tell you.

One of our main demands in the context of the DSA is for platforms to be obliged to provide users with meaningful access to their marketing profiles, which includes ALL their information: data collected by the platforms on the platforms themselves and on other websites through trackers, inferred data, and data provided voluntarily by the user. Users must be able to control what they see on social media. In practice, that means that users should be able to opt out from micro-targeting and personalised advertisement and choose for themselves what content they want to prioritise. We recommend asking for mandatory transparency of platforms’ user marketing profiles and autonomous user control of recommendation systems.

D. Experiences and data on erroneous removals

1. Are you aware of evidence on the scale and impact of erroneous removals of content, goods, services, or banning of accounts online? Are there particular experiences you could share?

Erroneous removals of content and the banning of accounts have been widely documented by multiple initiatives – the most spectacular ones are often also reported in the press. If you have experienced such wrongful take-downs yourself, you can share your experience here.

Otherwise, we invite you to pick among the numerous resources and examples below:

Finally, it is important to add that the DSA should mandate the reinstatement of wrongfully deleted/disabled content in order to provide users with an effective right to remedy.

2. Clarifying responsibilities for online platforms and other digital services

1. What responsibilities (i.e. legal obligations) should be imposed on online platforms and under what conditions? Should such measures be taken, in your view, by all online platforms, or only by specific ones (e.g. depending on their size, capability, extent of risks of exposure to illegal activities conducted by their users)? If you consider that some measures should only be taken by large online platforms, please identify which would these measures be.
2. Please elaborate, if you wish to further explain your choices.

Although the internet sometimes seems to consist of only a few tech giants, there is still a large number of smaller online service providers, software makers and independent developers out there, including community-led, not-for-profit initiatives (like the famous Wikipedia or the Signal messenger). This diversity is one of the key resilience factors of the internet infrastructure as a modern public space for debate and communication. It is crucial that, when designing far-reaching legislation to regulate all sorts of online services, measures remain proportionate and do not apply a “one-size-fits-all” approach. For example, a small forum will not be able to afford hiring a full team of moderators like Facebook does; nor should it have to, because the risks posed by a small online forum are tiny when compared to a global mega platform with over 2.5 billion users. For reference, please see EDRi’s own response.

Responses from a digital rights perspective:

  • On the proposal to “Detect illegal content, goods or services”: this should not be imposed on any kind of platform. Tech companies are not law enforcement agencies. Upholding the prohibition on imposing a general monitoring obligation on providers (currently Art. 15 of the E-Commerce Directive) protects online freedom of expression and the right to access information.
  • On the proposal to “Systematically respond to requests from law enforcement authorities”: Tech companies should not be obliged to respond to unofficial side-channel requests by law enforcement authorities. In order to achieve the removal of potentially illegal content, authorities must follow the appropriate national and EU legal frameworks involving courts or other independent judicial authorities. This is a fundamental requirement for the rule of law.
  • On the proposal to “Cooperate with other online platforms for exchanging best practices, sharing information or tools to tackle illegal activities”: While platforms are free to exchange best practices and other information, this should not be mandated by law. Any such legal obligation would help cement the position of already dominant platforms on the market as established channels between big providers and would leave out smaller competitors or increase their dependence on the goodwill of large players. Such an obligation would also help large platforms sell their content moderation tools (including content filters, hash databases, etc.) while smaller providers would be required by law to “cooperate” and thus adopt and rely on such error-prone technologies.
  • On the proposal to “Maintain an effective ‘counter-notice’ system for users”: Such an obligation would introduce procedural safeguards to ensure that users can access redress mechanisms and remedies when their content is wrongfully taken down. This increases the accountability of service providers towards their users and should be supported from a fundamental rights perspective.

5. How should the reappearance of illegal content, goods or services be addressed, in your view? What approaches are effective and proportionate?
6. Where automated tools are used to detect illegal content, goods or services, what opportunities and risks does their use present as regards different types of illegal activities and the particularities of the different types of tools?

The question of the reappearance of illegal content is at the heart of an ongoing political debate: whether and how EU Member States can require platforms to use automated filters in an attempt to detect and delete prohibited material in users’ posts. In principle, this is not allowed under Article 15 of the E-Commerce Directive, which prohibits the imposition of a general monitoring obligation on hosting service providers (i.e. screening all content posted in order to find illegal material).

Put simply: if a piece of content has been declared illegal by a court, should platforms be obliged to actively block all reappearances of this and similar pieces of content – forever (‘stay-down’)?

From a rights perspective, this raises a number of concerns:

First, illegal content can be re-used for legal and positive purposes. For example, counter-speech can use originally illegal content and contextualise it to provide examples that explain its illegality. This is often done by anti-racism and anti-discrimination movements to call out racist and otherwise discriminatory online content. Another example is journalists who report on human rights violations by showing originally illegal material depicting violent acts or terrorist propaganda.

Second, the implementation of such ‘stay-down’ measures usually implies automated content detection tools such as content filters (algorithms which detect and automatically delete content that was previously identified as undesirable). Such filters are unable to identify and assess the context of a publication and therefore often wrongfully restrict the legitimate re-use of originally illegal content.

We therefore recommend highlighting these concerns and requesting that automated means to address the reappearance of illegal content should not be mandated by law, bearing in mind their high error rates. If platforms use such tools voluntarily, mandatory audits of the algorithmic systems by an independent regulator should ensure that their impact on freedom of expression is not disproportionate.

7. How should the spread of illegal goods, services or content across multiple platforms and services be addressed? Are there specific provisions necessary for addressing risks brought by:

Digital services established outside the Union but active on the EU market should fall under the DSA just as much as those established inside the Union in order to ensure the same level of protection of fundamental rights regardless of the country of establishment of the service provider.

8. What would be appropriate and proportionate measures for digital services acting as online intermediaries, other than online platforms, to take – e.g. other types of hosting services, such as web hosts, or services deeper in the internet stack, like cloud infrastructure services, content distribution services, DNS services, etc.?

Such intermediaries are usually ‘content-agnostic’, that is to say that they are not aware of the content they transport, do not host that content on their servers and usually do not have the technical means to identify which exact pieces of content are illegal (e.g. content is encrypted). This is why these intermediaries should not be held responsible for the content they transport. Requesting the opposite would be like giving the postman legal responsibility for the content of the letters they deliver. That would of course force the postman to open up all letters and make a legality assessment for each of them. It is obvious that such a rule would result in a disproportionate restriction of freedom of expression and the right to access information for all senders and recipients of letters.

9. What should be the rights and responsibilities of other entities, such as authorities, or interested third-parties such as civil society organisations or equality bodies in contributing to tackle illegal activities online?

This is a very good question that highlights the nexus of actors involved in the fight against illegal online activities. We, at EDRi, believe that the main actors in this fight should not be platforms but rather public authorities and that their task to protect citizens’ rights should not be delegated to any other entity, especially to private actors.

Courts and judges have an essential role to play in the fight against illegal content online. They bring perpetrators to justice and provide redress for victims. They are also the only authorities competent and trained to interpret the law. Law enforcement authorities should be better trained and well-staffed to find, properly document, and where appropriate prosecute illegal online activity.

Civil society organisations should not be employed by the law as a replacement for the responsibilities of platforms or oversight authorities. The law must be enforced by the authorities, not by small non-profits struggling to scrape together funding to go to court against some of the largest and most powerful corporations in the world.

15. What would be effective measures service providers should take, in your view, for protecting the freedom of expression of their users? Please rate from 1 (not at all necessary) to 5 (essential).
16. Please explain.

We recommend rating all of the proposed options as essential (5) or necessary (4). Platform companies, especially those whose business models are tied to personalised advertising, have no incentive to protect the freedom of expression of their users. They mostly follow the needs of their advertising customers. This is why we propose adding to the list: an obligation to give users fine-grained control over what they see – that control should override any business interest a platform may have in distributing certain content. This includes a right for users to switch off personalised/micro-targeted content and advertising.

17. Are there other concerns and mechanisms to address risks to other fundamental rights such as freedom of assembly, non-discrimination, gender equality, freedom to conduct a business, or rights of the child? How could these be addressed?

Freedom of expression is not the only human right at stake in the DSA debate. A single measure would never be sufficient to secure the protection of all fundamental rights. This is why EDRi proposes a set of measures that focus on improving the online ecosystem as a whole, to the benefit of everybody, not just a few:

  1. Break open the centralised platform economy that is so conducive to the dissemination of toxic online behaviour. Much of the damage inflicted by content like illegal hate speech relates to its viral spread and amplification on and by social media platforms. The DSA has the chance to leave this technological dead-end behind by, among other improvements, requiring dominant social media platforms to open up to competitors with mandatory interoperability. Read more about why interoperability could be one of the solutions to online discrimination and violence.
  2. Introduce a workable notice-and-action system that empowers people to notify platforms of potentially illegal online content and behaviour they are hosting.
  3. Effective legal redress should be guaranteed: EDRi proposes the creation of specialised tribunals or independent dispute settlement bodies in EU Member States that are faster and more accessible for affected users to settle speech-related disputes with other users or with platforms.

If you want more details, we invite you to consult our position paper here.

SECTION II - LIABILITY REGIME

2. The liability regime for online intermediaries is primarily established in the E-Commerce Directive, which distinguishes between different types of services: so called ‘mere conduits’, ‘caching services’, and ‘hosting services’. In your understanding, are these categories sufficiently clear and complete for characterising and regulating today’s digital intermediary services? Please explain.
3. Are there aspects that require further legal clarification?

From the users’ perspective, the regime set by Articles 12 to 15 of the Directive has a major impact on the level of freedom of expression, freedom of information, right to privacy and personal data protection on the Internet, as well as on the due process of law. From the intermediaries’ perspective, it must ensure the needed legal certainty to run their activities. EDRi’s response stresses that the lack of clarity and precision of this regime does not currently allow adequate protection of human rights and the rule of law, nor does it ensure legal certainty for intermediaries.

In order for the EU to respect its current obligations with regard to its own Charter of Fundamental Rights and its upcoming obligations under the European Convention on Human Rights, EDRi underlines the need to revise the current intermediary liability regime as follows:

  • Where an intermediary is not hosting the content (acting as a mere conduit, an access provider or a search engine), it should have no liability for this content, nor should it have general monitoring obligations or obligations to employ proactive measures with regard to this content.
  • Where an intermediary acts as a hosting provider, its liability with respect to the user-generated content hosted should be restricted to its lack of compliance with a court order to take down this content. This should not prevent hosting providers from removing content based on their own terms and conditions.
  • Intermediaries should have no legal obligation to monitor content.

4. Does the current legal framework dis-incentivize service providers to take proactive measures against illegal activities? If yes, please provide your view on how disincentives could be corrected.

There is currently a lack of legal certainty: when platform providers actively look for potentially illegal content, they risk being considered as having “actual knowledge” of such content under the E-Commerce Directive and would be required to delete it. As a result, platforms either do not try to identify potentially illegal content at all or, if required to do so by law, prefer to play it safe and delete content even when its illegality is not certain, for fear of becoming legally liable for it. This is why there is a tendency towards over-removal of legitimate online content and censorship of users’ speech.

The scale of (wrongful) content deletion that certain online platforms reach constitutes a sizeable threat to fundamental rights. Even the biggest platforms perform rather badly at this task, with an extremely negative impact on both the protection of victims of illegal content and freedom of expression. This is why we encourage the legislator to enact rules that foster the diversification of online platforms so that content moderation can be performed at the scale of a manageable user base (i.e. human-driven, not machine-driven).

In our view, the current legal framework should therefore not be turned into incentives or obligations for platform providers to take so-called “proactive measures”, but it could include a clarification that platforms which voluntarily look for potentially illegal content would not be liable for missing content that later turns out to actually be illegal.

5. Do you think that the concept characterising intermediary service providers as playing a role of a 'mere technical, automatic and passive nature' in the transmission of information (recital 42 of the E-Commerce Directive) is sufficiently clear and still valid? Please explain.

This question points to the heated debate over the differentiated nature of intermediaries (aka platforms): are there active and passive intermediaries? Purely passive intermediaries would not interact with the content at all (which is mostly the case for mere conduit and caching services), while it has always been somewhat unclear when the threshold to being an “active” intermediary is crossed.

We argue that the distinction between active and passive intermediaries is no longer useful today. The vast majority of hosting providers engage with the content they host in one way or another. The DSA would therefore do well to focus on the types of services an intermediary offers as well as on the strict enforcement of legal obligations such as transparency, privacy and data protection.

6. The E-commerce Directive also prohibits Member States from imposing on intermediary service providers general monitoring obligations or obligations to seek facts or circumstances of illegal activities conducted on their service by their users. In your view, is this approach, balancing risks to different rights and policy objectives, still appropriate today? Is there further clarity needed as to the parameters for ‘general monitoring obligations’? Please explain.

/!\ IF YOU HAVE LITTLE TIME TO DEVOTE TO THIS SECTION, AT LEAST INVEST IT IN REPLYING TO THIS QUESTION /!\

The prohibition of any general monitoring obligation is a cornerstone of internet freedoms. It prevents Member States from arbitrarily imposing the screening of all content and communications, and therefore permanent surveillance and filtering of citizens’ online lives. Important fundamental rights are at stake: privacy of communications, the right to protection of personal data, and freedom of speech and of information, all protected in the Charter of Fundamental Rights.

There are many efforts to water down this principle by trying to draw its limits. The one-year-old Copyright Directive, for example, creates an exception for copyright infringements, threatening the openness of the internet and the freedom for people to share information freely. It is crucial that the Commission hears about the importance of upholding this principle (and its integrity!) in the DSA in order to respect people’s fundamental rights.

You want to share your opinion on: FREEDOM OF EXPRESSION AND HARMFUL CONTENT

We recommend focussing on sections I. and IV.

SECTION I - SAFETY AND RESPONSIBILITIES

We recommend replying to the following questions by finding guidance above:

C. Activities that could cause harm but are not, in themselves, illegal

2. To what extent do you agree with the following statements related to online disinformation?
3. Please explain.

4. In your personal experience, how has the spread of harmful (but not illegal) activities online changed since the outbreak of COVID-19? Please explain.
5. What good practices can you point to in tackling such harmful activities since the outbreak of COVID-19?

See questions 18 and 19 of subsection A. Experiences and data on illegal activities online.

2. Clarifying responsibilities for online platforms and other digital services

10. What would be, in your view, appropriate and proportionate measures for online platforms to take in relation to activities or content which might cause harm but are not necessarily illegal?

It is true that certain activities or content are perfectly legal in the strict sense of the term but might cause harm to some people (e.g. the promotion of suicide or self-harm towards teenagers). They are not criminalised because criminalisation is not always the best tool to address these specific issues.

Here are ideas to address ‘harmful’ content that go beyond the binary choice of take it down or leave it up:

12. Please rate the necessity of the following measures for addressing the spread of disinformation online. Please rate from 1 (not at all necessary) to 5 (essential) each option below.
13. Please specify

Unfortunately, this question does not specify whether the proposed measures would be imposed on online platforms or introduced as voluntary measures (self-regulation). As a result, it is hard to know how they will be implemented. Something to be pointed out in question 13.

SECTION IV – ONLINE ADVERTISEMENT AND SMART CONTRACTS

See section below.

You want to share your opinion on: THE PROBLEM OF ONLINE ADVERTISEMENT AND MICRO-TARGETING

We recommend focussing on sections I. and IV.

SECTION I - SAFETY AND RESPONSIBILITIES

We recommend replying to the following questions by finding guidance above:

20. In your view, what measures are necessary with regard to algorithmic recommender systems used by online platforms?

Algorithmic recommender systems determine what kind of content users see in their personalised social media timelines, micro-blogging feeds, and recommended content sections. They also curate user content and are therefore not limited to online advertisement. Nevertheless, they play an important role in the ad-driven business model of today’s main online platforms. Indeed, their aim is to keep the users’ attention fixed on the platform for as long as possible so that they can be shown more ads and thus generate more revenue for the platform. Recommender systems achieve this by promoting personalised, targeted content that people are more likely to interact with. Studies have shown that such systems often automatically promote divisive and scandalising content over more balanced or nuanced posts. Algorithmic recommender systems are therefore designed to maximise ad revenues – to the detriment of more important goals such as content diversity, non-discrimination, etc. As a result, it is important to point out the role of recommender systems in the spread of potentially problematic or even illegal content, as this is what is more likely to attract users’ attention and keep them engaged.

We recommend focussing answers on two main goals: (1) Empowering users to better control recommender systems; (2) Giving competent authorities the necessary oversight powers. For example:

SECTION IV – ONLINE ADVERTISEMENT AND SMART CONTRACTS

1. When you see an online ad, is it clear to you who has placed it online?

Most of the time, there is no information surrounding an online ad (some exceptions exist but the information is always scarce). Therefore, we recommend choosing “Sometimes: but I cannot always find this information”.

2. As a publisher online (e.g. owner of a website where ads are displayed), what types of advertising systems do you use for covering your advertising space? What is their relative importance?

If you own a small website or blog which relies on advertising income, this question invites you to share your experience. For those who feel compelled to use Google ads or similar privacy-invasive technologies, please check out our recently published Ethical Web Development booklet and see how you can improve the privacy protection of your users.

Questions 4 and 5 invite you to share more information about your experience as a publisher, if you happen to be one.

15. From your perspective, what measures would lead to meaningful transparency in the ad placement process?

This question concerns the placement of ads in general and is targeted at respondents who work on Real Time Bidding (RTB) and other ad placement processes. However, in the context of the DSA, it is particularly useful to focus on how to bring transparency and privacy to online platforms’ advertisement business and the advertisement tech (Ad Tech) industry as a whole.

Online platforms play a big role in the Ad Tech industry and this is why the DSA should introduce:

  1. Meaningful transparency measures that cover content, funding, and reach of ads and the targeting process, notably via the creation of ad libraries/archives and accessible APIs for researchers.
  2. Meaningful transparency of targeting that covers both parameters selected by advertisers and the optimisation and delivery process controlled by platforms. This includes both publicly available information and explanations offered to individuals in specific cases.
  3. Real privacy protections in online advertising, notably a mandatory easily accessible opt-out for users from personalised ads.

For individuals specifically, we recommend requesting the following transparency measures that would help people to better understand why certain ads are shown to them but not to others.

16. What information about online ads should be made publicly available?

We recommend broadening the answer to this question by reiterating the need for mandatory ad libraries and their elements, as well as adding two more requirements:

18. What is, from your perspective, a functional definition of ‘political advertising’?

This question reflects the motivation of European policymakers to distinguish between commercial and political or “issue-based” ads in order to add additional requirements for political advertising and address the risk to our democratic processes posed by micro-targeted content based on large amounts of personal information.

However, it is difficult to draw the line in a consistent manner and there will always be exceptions, especially when this distinction has to be applied to different national and cultural contexts. In our view, it would be more important to focus on regulating the targeting process and limiting the amount of personal information used for this purpose. EDRi has nonetheless suggested a definition that focuses on the technique rather than the content: see our response here.

19. What information disclosure would meaningfully inform consumers in relation to political advertising? Are there other transparency standards and actions needed, in your opinion, for an accountable use of political advertising and political messaging?

Please see our explanation to question 16.

20. What impact would have, in your view, enhanced transparency and accountability in the online advertising value chain, on the gatekeeper power of major online platforms and other potential consequences such as media pluralism?

This is where we can develop our thinking on the bigger issues of Ad Tech. Although the Commission probably did not intend to address privacy and data protection concerns in this section, the issue of online advertisement cannot be decoupled from them.

We recommend highlighting the opportunity that a strong ePrivacy Regulation (when finally adopted) and stronger GDPR enforcement would present to create an advertising industry that respects people and preserves privacy rights. As the Dutch public broadcaster NPO has proven, privacy-friendly online advertising is not only possible but financially just as lucrative as ads based on personal data. In 2019, NPO started replacing micro-targeted ads based on personal data with more privacy-friendly ads based on the context in which an ad is shown. This change enabled it to cut out the middleman (the Ad Tech company running the Real Time Bidding process and taking a sizeable cut from the ad revenues), eventually increasing its own ad revenues by a third. By stopping micro-targeting and illegal practices in the field, your online experience as a user could change dramatically in a positive way. In your answer here, you can also point out that by enforcing data protection laws and reducing personal data-driven ads, the power of dominant advertising companies (e.g. Google) would be diminished.

You want to share your opinion on: THE REGULATION OF BIG TECH AND THE PROMOTION OF A DECENTRALISED AND OPEN INTERNET

We recommend that you focus on section III.

1. To what extent do you agree with the following statements?

This question presents a table containing a series of statements about the current state of competition and consumer choice on the online platform market, with which you can indicate whether or not you agree. Although the internet was originally created as a decentralised communication network providing an open, digital public sphere, the development of today’s tech giants has led to the centralisation of that network around a few actors that control most of our online experiences today.

This is why we recommend emphasising the need to limit the power of the centralised platform economy in favour of a decentralised, more human-scale digital public space which can be used under terms that put people at its core. Today, consumers have very little choice when choosing online platforms because the big ones have eaten up the market and each platform operates a closed silo (or ‘walled garden’) without interoperability with competing online services. Dominant platforms take advantage of their position to further increase their power and extend their dominance to other markets and services (just as when Microsoft bought LinkedIn, Facebook bought WhatsApp, and Google now wants to buy Fitbit). Check EDRi’s response for more details.

Main features of gatekeeper online platform companies and the main criteria for assessing their economic power

1. Which characteristics are relevant in determining the gatekeeper role of large online platform companies? Please rate each criterion identified below from 1 (not relevant) to 5 (very relevant):
2. If you replied "other", please list
3. Please explain your answer. How could different criteria be combined to accurately identify large online platform companies with gatekeeper role?

The Commission seeks to better understand the different possible criteria for defining “gatekeeper” platforms. A gatekeeper is a company offering a service that gives it control over other markets and market players: Apple, for example, controls the market for iPhone apps because it has full control over the iOS app store, and Facebook has gained, through its amassing of 2.5 billion users, almost full control over reaching people on social media. Defining gatekeepers properly is important as they play a crucial role in controlling markets and reducing user freedom. We expect the Commission to propose specific regulation aimed at curbing gatekeeping power so that gatekeepers cannot prevent competitors from entering the market and reaching new users. This is why we recommend rating these criteria as the most relevant.

Emerging issues

9. Are there specific issues and unfair practices you perceive on large online platform companies?
11. What impact can the identified unfair practices have on innovation, competition and consumer choice in the single market?

There are many issues facing people on large online platforms. You have ample choice to pick several and explain them. Don’t hesitate to look at EDRi’s own responses for inspiration. To summarise, gatekeeper platforms abuse their position and put in place practices to cement their dominance over users and competitors. They often try to disempower users in order to increase their profits – which is at odds with the internet’s original promise: to be an open and accessible communication tool for everyone.

10. In your view, what practices related to the use and sharing of data in the platforms’ environment are raising particular challenges?

This question expresses a widespread misconception within the Commission that digital services, and platforms in particular, need to collect huge amounts of personal data in order to be able to offer competitive services.

We do not believe that this is true. There are plenty of great digital services today that do not rely on the abuse of personal data for advertising: email providers, search engines, map apps, cloud storage, internet browsers and even smartphone makers. Therefore, the dominance of today’s Big Tech platforms and applications cannot be broken by just forcing them to share (personal) user data with (European) competitors—this would also likely be illegal under the GDPR.

Being a privacy nightmare is not a prerequisite for building a successful search engine, email app or maps service. We recommend stressing this point in your reply.

13. Which are possible positive and negative societal (e.g. on freedom of expression, consumer protection, media plurality) and economic (e.g. on market contestability, innovation) effects, if any, of the gatekeeper role that large online platform companies exercise over whole platform ecosystem?

We honestly do not see many positive societal effects of gatekeepers. Among the many negative ones, you can choose to highlight:

For more information on this topic, we recommend reading EDRi member Bits of Freedom’s paper “Fix the system, not the symptoms”, which describes some of the ways in which our digital information ecosystem fails to deliver the communications landscape needed to sustain our democracies. Yes, you guessed it, mainly because of big ad-driven platforms.

Regulation of large online platform companies acting as gatekeepers

1. Do you believe that in order to address any negative societal and economic effects of the gatekeeper role that large online platform companies exercise over whole platform ecosystems, there is a need to consider dedicated regulatory rules?
2. Please explain

The answer is: absolutely, yes. As mentioned in SECTION I, subsection 2, we need a tiered approach to platform regulation and not a one-size-fits-all solution. Also, since many of the problems we see online today are due to the behaviour of gatekeeper platforms, it seems appropriate to design dedicated rules to limit their power and dominance.

3. Do you believe that such dedicated rules should prohibit certain practices by large online platform companies with gatekeeper role that are considered particularly harmful for users and consumers of these large online platforms?
4. Please explain your reply and, if possible, detail the types of prohibitions that should in your view be part of the regulatory toolbox.
5. Do you believe that such dedicated rules should include obligations on large online platform companies with gatekeeper role?
6. Please explain your reply and, if possible, detail the types of obligations that should in your view be part of the regulatory toolbox.

One of the most important dedicated rules that the DSA should introduce for dominant platforms is mandatory service interoperability with competing platforms. Did you ever wonder why WhatsApp users cannot send messages to friends on Signal? Or why you can freely choose your email app but not the app you use to view your Facebook timeline? Interoperability ensures that users from one platform are able to interconnect with users on another platform with similar services, and that independent app makers are able to provide apps for all different kinds of existing services. This would help reduce network effects that keep people on the dominant platform only because they don’t want to lose access to all of their contacts.

Services like Facebook, Twitter and YouTube draw much of their power from the large number of users they hold captive in their centralised “walled gardens”. The many users in turn encourage others to join; this so-called “network effect” is a self-accelerating mechanism that makes it incredibly hard for competitors to win back market share (Google had to learn this the hard way when it launched the now defunct social network “Google+” to compete with Facebook). Today, many people would like to escape the dominant services, but not at the cost of losing all their contacts. Unfortunately, the GDPR’s data portability right cannot solve the problem alone – after all, in order to port your data you need a place to port it to.

In order to limit the risks of user lock-in and the resulting network effects that artificially bind users to one dominant platform, the DSA should oblige “gatekeeper” platforms to open their systems in order to allow competing services to interoperate with the ecosystem they are gatekeeping and freely build services on top of or compatible with the one that the gatekeepers control.

7. If you consider that there is a need for such dedicated rules setting prohibitions and obligations, as those referred to in your replies to questions 3 and 5 above, do you think there is a need for a specific regulatory authority to enforce these rules?
8. Please explain your reply.
9. Do you believe that such dedicated rules should enable regulatory intervention against specific large online platform companies, when necessary, with a case by case adapted remedies?
10. If yes, please explain your reply and, if possible, detail the types of case by case remedies.

New legal obligations for gatekeepers (and other intermediaries) are only going to have their intended impact if they can be reliably enforced. The example of the GDPR has shown that enforcement is crucial in the pursuit of justice and of comparable compliance standards across all EU Member States. We therefore recommend asking for an independent regulatory mechanism to be tasked with overseeing compliance with the interoperability obligation (see SECTION VI for more details about how it should function and be composed).

17. Specifically, what could be effective measures related to data held by very large online platform companies with a gatekeeper role beyond those laid down in the General Data Protection Regulation in order to promote competition and innovation as well as a high standard of personal data protection and consumer welfare?

The interoperability mandate should be accompanied by strong privacy, security and non-discrimination rules. Platforms that acquire access to user data through interoperability should only be allowed to use that data for the purpose of interoperation, but not for general commercial use such as profiling or advertising. Therefore, any data made available for the purpose of interoperability should only be used for maintaining interoperability, safeguarding user privacy, and ensuring data security. The principles underpinning the GDPR and other relevant legislation, such as data minimisation and privacy by design and default must be protected.

Also, users must be in full control of how, when and for what purposes their personal data is shared: users decide themselves which functionalities and services (public posts, “likes”, direct messages, events, etc.) they would like to share cross-platform and with whom.

22. Which, if any, of the following requirements and tools could facilitate regulatory oversight over very large online platform companies (multiple answers possible):

We recommend ticking all the available options and reiterating that the regulator should be able to investigate the use of algorithms for content curation and moderation, to investigate the use of personalised/micro-targeted advertisement systems, and to process claims for breaches of obligations under the DSA.

You want to share your opinion on: THE ENFORCEMENT MODEL OF THE DSA

We recommend focussing on sections I., III. and VI.

SECTION I - SAFETY AND RESPONSIBILITIES

Go directly to section 2. Clarifying responsibilities for online platforms and other digital services

18. In your view, what information should online platforms make available in relation to their policy and measures taken with regard to content and goods offered by their users?

In this question, the Commission asks both about the content moderation policy that online platforms should include in their terms of service and about the information they should provide in their general annual reporting. This is why we recommend distinguishing the two in your answer as well.

What should transparent and accessible terms of service look like?

Find more details in EDRi member Access Now’s paper “26 recommendations on content governance: a guide for lawmakers, regulators, and company policy makers”

What should large online platforms include in their mandatory annual report?

19. What type of information should be shared with users and/or competent authorities and other third parties such as trusted researchers with regard to the use of automated systems used by online platforms to detect, remove and/or block illegal content, goods, or user accounts?

While the use of automated systems is mainstream on today’s big online platforms, these systems are far from working as well as service providers would like us to believe: systematic content classification errors, opacity, built-in discrimination and bias, lack of accountability and arbitrary decisions are just a few of the problems users encounter every day. The DSA is an opportunity to shed light on platforms’ algorithmic content moderation practices, in terms of both recommender systems and content filters. Enhancing transparency is the first step towards addressing the systemic challenges posed by these algorithms.

This question provides space to outline the challenges that users, and marginalised groups in particular, face due to discrimination by these systems. The DSA should therefore oblige large platforms to clearly inform users when they are subjected to such technology. To enforce these rules, competent authorities should be empowered to independently review and audit algorithmic systems in order to keep their impact on human rights as low as possible.

21. In your view, is there a need for enhanced data sharing between online platforms and authorities, within the boundaries set by the General Data Protection Regulation?
22. Please explain. What would be the benefits? What would be the concerns for companies, consumers or other third parties?

Unfortunately, this question is very vaguely phrased and leaves room for interpretation. What does “enhanced data sharing” mean? Are we talking about “more data sharing” in terms of frequency, or about fewer checks and balances?

From a rights-based perspective, law enforcement authorities already have a wide range of powers to request data from platform providers as part of criminal investigations and judicial proceedings. Thanks to the European Investigation Order, this also works across borders. The DSA should therefore not give law enforcement additional powers to access data, especially not by circumventing existing legal frameworks.

The suggestion to allow data sharing “On a voluntary and/or contractual basis in the public interest or for other purposes” is one such attempt to circumvent proper legal frameworks. Unlike the due process rules of the EIO, for example, voluntary agreements between private actors and authorities lack any defence rights for accused persons and can be implemented without public or democratic oversight. In the same vein, it is unclear how, and by whom, the “public interest” would be defined.

We nevertheless recommend selecting “For supervisory purposes of the platforms’ own obligations – e.g. with regard to content moderation obligations, transparency requirements, actions taken in electoral contexts and against inauthentic behaviour and foreign interference” in order to enable access to the information listed in questions 18 and 19, and, in addition, stressing in question 22 that adequate data protection safeguards should be put in place.

23. What types of sanctions would be effective, dissuasive and proportionate for online platforms which systematically fail to comply with their obligations (See also the last module of the consultation)?

No need to reinvent the wheel here. You can point to the GDPR’s financial sanctions as a positive example, but also mention behavioural remedies such as mandatory interoperability, increased transparency and internal procedural requirements.

SECTION III – GATEKEEPER PLATFORMS

We invite you to reply to the following questions using the guidance above:

SECTION VI – GOVERNANCE AND ENFORCEMENT

Main issues

1. How important are - in your daily life or for your professional transactions - digital services such as accessing websites, social networks, downloading apps, reading news online, shopping online, selling products online?

This question invites you to rate your satisfaction (1 to 5 stars) with all the digital services you use regularly, including those offered from outside your country of residence. The Commission seeks to understand how important the “Digital Single Market” is for people and how digital services affect their daily lives. The question provides an opportunity to recount your own (good and bad) experiences of using all kinds of digital services across borders. You could start by noting how many of the services (apps, websites, online marketplaces, gaming platforms, etc.) that you use are established in your own country of residence.

Governance of digital services and aspects of enforcement

From a fundamental rights perspective, there is one particularly important message to get across in this section: whatever regulatory mechanism the EU chooses for the DSA, it must ensure that the authorities competent for enforcing the newly created rules are equipped with sufficient financial and human resources to fulfil their mandate of holding big tech companies accountable and ensuring their full compliance with the law. The patchwork of poorly financed, understaffed data protection authorities in the member states created by the GDPR has shown that failing to enforce an otherwise successful law can render crucial regulation toothless.

2. What governance arrangements would lead to an effective system for supervising and enforcing rules on online platforms in the EU in particular as regards the intermediation of third party goods, services and content?
3. Please explain

At EDRi, we believe that a “cooperation mechanism within Member States across different competent authorities responsible for the systematic supervision of online platforms and sectorial issues (e.g. consumer protection, market surveillance, data protection, media regulators, anti-discrimination agencies, equality bodies, law enforcement authorities etc.)” combined with a strong regulatory authority is the most adequate enforcement mechanism for the DSA. The scope of the DSA is so large that no single national authority alone would be able to deal with the number and complexity of oversight cases touching upon so many policy fields, ranging from labour rights to internet regulation and data protection.

You can also grade positively the proposal to create “cooperation schemes with third parties such as civil society organisations and academics for specific inquiries and oversight”. However, you should add that such schemes must be made public and that the rules (contracts, procurement processes) governing them should be open to scrutiny.

4. What information should competent authorities make publicly available about their supervisory and enforcement activity?

In any democracy, every oversight authority needs to be subject to public scrutiny. This is why we recommend that in your reply you request the following from future competent authorities:

5. What capabilities – type of internal expertise, resources etc. - are needed within competent authorities, in order to effectively supervise online platforms?

This is an essential question for guaranteeing the expertise and trustworthiness of competent authorities. Because the DSA does not only affect economic activities, we recommend stressing that the authorities need proven expertise in fundamental rights. Furthermore, their staff should have no conflicts of interest with the companies they oversee. This is particularly necessary because big tech firms systematically co-opt or co-finance the work of academics and other experts through grants, which can compromise a person’s independence.

6. In your view, is there a need to ensure similar supervision of digital services established outside of the EU that provide their services to EU users?
7. Please explain

Yes, absolutely. If they have a significant impact on people’s fundamental rights in the EU, supervision should apply to them. We recommend that you pick “Yes, if they intermediate a certain volume of content, goods and services provided in the EU” and “Yes, if they have a significant number of users in the EU”.