Privatised law enforcement

A measure that would be illegal if implemented by a government should also be illegal if implemented by industry as a “voluntary” measure, whether as a result of government pressure, for public relations, or for anti-competitive reasons. However, as key international legal instruments, such as the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights, as well as national constitutions, are binding on states and governments, they are not directly applicable to other entities, such as private companies. As a result, there is a major trend towards governments persuading or coercing companies to impose restrictions on fundamental freedoms under the guise of “self-regulation”, thereby circumventing legal protections.

04 Jul 2019

Real Time Bidding: The auction for your attention

By Andreea Belu

The digitalisation of marketing has introduced novel industry practices and business models. Some of these new systems have developed into crucial threats to people’s freedoms. A particularly alarming one is Real Time Bidding (RTB).

When you visit a website, you often encounter content published by the website’s owner/author, and external ads. Since a certain type of content attracts a certain audience, the website owner can sell some space on their website to advertisers that want to reach those readers.

In the early years of the web, ads were contextual: a website would sell its ad space to an advertiser in its field, so ads on a website about cars would typically relate to cars. Later, ads became personalised, targeting the individual website reader. This is “programmatic advertising”. The website still sells its space, but now it sells it to advertisement platforms, or “ad exchanges”. Ad exchanges are digital marketplaces that connect publishers (like websites) to advertisers by auctioning off the attention you give that website. This automated auction process is called Real Time Bidding (RTB).

How does Real Time Bidding work?

Imagine an auction house or a stock exchange: traders, big screens, noise, graphs, percentages. RTB systems work similarly, auctioning a website’s ad space to the highest-bidding advertiser. How does it work?

A website rents its advertising space to one (or many) ad exchanges. In the blink of an eye, the ad exchange creates a “bid request” that can include information from the website: what you’re reading, watching or listening to on the website you are on, the categories into which that content goes, your unique pseudonymous ID, your profile’s ID from the ad buyer’s system, your location, device type (smartphone or laptop), operating system, browser, IP address, and so on.
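The bid request described above can be sketched as a simple data structure. The sketch below is loosely modelled on the OpenRTB convention, but the exact field names and values are illustrative assumptions, not the real schema:

```python
# Illustrative sketch of the kind of data a "bid request" may carry.
# Field names loosely follow the OpenRTB convention; the exact schema
# and all values here are invented for illustration only.
bid_request = {
    "id": "auction-7f3a9c",              # unique ID for this auction
    "site": {
        "page": "https://example.com/article-about-health",
        "cat": ["IAB7"],                 # content category of the page
    },
    "user": {
        "id": "ps-1a2b3c4d",             # pseudonymous user ID
        "buyeruid": "dsp-9z8y7x",        # the ad buyer's ID for this user
    },
    "device": {
        "ip": "203.0.113.42",
        "os": "Android",
        "devicetype": "smartphone",
        "geo": {"lat": 50.85, "lon": 4.35},
    },
}

def fields_shared(request):
    """Count the data points broadcast to bidders in the nested sections."""
    return sum(len(v) for v in request.values() if isinstance(v, dict))

print(fields_shared(bid_request))
```

Even this toy request already carries location, device and identity data; real bid requests typically contain many more fields.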

From their side, advertisers inform the ad exchange about who they want to reach. Sometimes they provide detailed customer segments. These categories have been obtained by combining the advertisers’ data about (potential) customers and the personal profiles generated by data brokers such as Cambridge Analytica, Experian, Acxiom or Oracle. The ad exchange now has a complex profile of you, made of information from the website supplying the ad space and information from the advertiser demanding the ad space. When there is a match between a bid request and the advertiser’s desired customer segment, a Demand Side Platform (DSP) acting on behalf of thousands of advertisers starts placing bids for the website’s ad space. The highest bid wins, places its ad in front of a particular website viewer, and the rest is history.
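The matching-and-auction step can be illustrated in a few lines; the segment names, bidder names and bid values below are invented for illustration:

```python
# Minimal sketch of the matching-and-auction step: each bidder bids only
# if one of its target segments matches the viewer's profile, and the
# highest bid wins the ad slot.
def run_auction(viewer_segments, bidders):
    offers = [
        (bidder["bid"], bidder["name"])
        for bidder in bidders
        if bidder["target_segment"] in viewer_segments
    ]
    if not offers:
        return None  # no advertiser wanted this viewer
    winning_bid, winner = max(offers)
    return winner

bidders = [
    {"name": "car-brand", "target_segment": "auto-intender", "bid": 0.8},
    {"name": "bank", "target_segment": "high-income", "bid": 1.2},
    {"name": "clinic", "target_segment": "health-interest", "bid": 0.5},
]

# The viewer's profile matched two of the desired segments;
# the highest matching bid wins the slot.
print(run_auction({"auto-intender", "high-income"}, bidders))
```

The whole round trip, from bid request to winning ad, happens in the time it takes the page to load.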


Every time you visit a website that uses RTB, your personal data is publicly broadcast to possibly thousands of companies ready to target their ads. Whenever this happens, you have no control over who has access to your personal data. Whenever this happens, you have no way of objecting to being traded. Whenever this happens, you cannot oppose being targeted as a Jew hater, an incest or abuse victim, impotent, or a right-wing extremist. Whenever this happens, you have no idea whether you are being discriminated against.

Whenever this happens, you have no idea where your data flows.

EDRi members take legal action against RTB

Real Time Bidding poses immense risks to our human rights in the digital space, specifically to the rights protected by the EU General Data Protection Regulation (GDPR). Moreover, it puts you at high risk of being discriminated against. For these reasons, several EDRi members and observers have taken action and filed lawsuits against RTB in different EU countries. Privacy International, Panoptykon Foundation, Open Rights Group, Bits of Freedom, Digitale Gesellschaft, Digitalcourage, La Quadrature du Net and Coalizione Italiana per le Libertà e i Diritti civili are taking part in a wider campaign that urges the ad tech industry to #StopSpyingOnUs.

Support their effort in fighting for your rights and spread the word!

Read More:

Privacy International full timeline of complaints

GDPR Today: Ad Tech GDPR complaint is extended to four more European regulators

Prevent the Online Ad Industry from Misusing Your Data – Join the #StopSpyingOnUs Campaign

The Adtech Crisis and Disinformation – Dr Johnny Ryan

Blogpost series: Your privacy, security and freedom online are in danger (14.09.2016)

03 Jul 2019

EDRi is looking for a Communications Intern


European Digital Rights (EDRi) is an international not-for-profit association of 42 digital human rights organisations. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.

Join EDRi now and become a superhero for the defence of our rights and freedoms online!

The EDRi Brussels office is currently looking for an intern to support the Senior Communications Manager, the Campaigns and Communications Manager, and the Community Coordinator. The internship will focus on social media, publications, campaigning, press work, and the production of written materials. The intern will also assist in tasks related to community coordination.

The internship will begin in September 2019 and last 4-6 months. You will receive a monthly remuneration of a minimum of 750 EUR (according to the “convention d’immersion professionnelle”).

Key tasks:

  • maintaining social media accounts: drafting posts, creating visuals, engaging with followers, monitoring
  • contributing to drafting and editing of press releases and briefings, newsletter articles, and supporter mailings
  • maintaining mailing lists for press distribution, newsletter and supporter mailings
  • layouts and editing of visual (and audiovisual) materials
  • updating and analysing communications statistics and visibility in media
  • assisting in event organisation

Must-have skills and qualifications:

  • experience in social media community management and publications
  • photo editing skills
  • excellent skills in writing and editing
  • fluent command of spoken and written English

Desired skills:

  • video editing skills
  • experience in journalism, media or public relations
  • interest in online activism and campaigning for digital human rights

How to apply:

To apply please send a maximum one-page cover letter and a maximum two-page CV (only PDFs are accepted) by email to heini >dot< jarvinen >at< edri >dot< org. Closing date for applications has been extended until 22 July 2019. Interviews with selected candidates will take place during the last two weeks of July.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment. We encourage individual members of groups at risk of racism or other forms of discrimination to apply for this post.

03 Jul 2019

Fighting online hate speech: An alternative to mandatory real names

By Gesellschaft für Freiheitsrechte

The internet facilitates debates: People around the globe can connect at almost zero cost, and information and opinions that would otherwise hardly be noticed can go viral through social media. However, services like Twitter and Facebook can also be used for targeted defamation. Especially people who belong to minorities or endorse views outside the mainstream have described grave verbal attacks. Women who are active in politics often face rape threats. Such abuses of online communication should not be tolerated in a democracy.

An obligation for real names is not a solution

In response, “number plates” for the internet have been proposed – people should be required to disclose their real names before they can participate in forums and on social media. However, such a “real name obligation” would achieve very little in terms of protections against verbal abuse online, and at the same time, it would cause serious collateral damage.

The arguments against an obligation for real names are manifold: For example, its supporters fail to notice that there has been an obligation for real names on Facebook for many years, which many users simply ignore. It’s doubtful whether such an obligation would even be admissible under European law. In any case, such a policy would only apply at the national level. Should platforms simply hide all posts by users from other countries where real names are not required by law?

Everyday experience and recent studies show that a remarkable number of users do not shy away from criminal online activities, even if they are acting under their real names. This is because the problem with pursuing crimes online is not the anonymity of the offenders; it is the irritatingly low level of engagement from the responsible authorities. If it’s possible to commit such crimes without any risk of consequences, this will impact the popular sense of right and wrong.

The biggest disadvantage of a real name obligation is that it would silence those who depend on anonymous or pseudonymous communication. Conservatives often assume that such a need only exists in authoritarian states. However, even in a democracy many people have comprehensible reasons why they would not or cannot communicate openly. For example, people who engage against Nazis can hardly make this public in some regions of Germany without facing significant risk of physical harm. Interestingly, even almost all German judges and prosecutors who actively use Twitter prefer to do so under a pseudonym.

Better: Target the accounts

Introducing a real name obligation would be a dangerous error of judgement, but legislators do need to act. Because online bullies cannot always be identified, the focus should be on their weapons: the accounts they use to carry out verbal acts of violence. A judicial process should be introduced in which victims or victim protection organisations can request that accounts abused for unlawful speech be blocked. Courts of law could impose blocks on individual accounts for a certain period of time, or permanently, especially in recurrent cases. The platforms would be barred from showing these accounts to users in a specific geographical location.

Such a judicial process would have many advantages: The identity of the people behind an account would not matter anymore. This would also be an effective course of action against account holders who are known but out of reach, for example because they are located abroad. Contrary to the approach of the Network Enforcement Act (NetzDG), it would not be the platforms who decide, often in dubious ways, which posts are illegal; this would be left to an independent court. Courts have demonstrated that they are capable of making such decisions. In particular, there are courts that specialise in press law and are accustomed to ruling even on delicate freedom of speech questions within a few hours.

The NetzDG made social media platforms “addressable”

Of course, such a judicial process would raise questions: Who would be the subject of such a request if the responsible person is not known? With a bit of creativity, those details can be resolved. In the US a judicial petition against “John Doe” is filed in such cases. This anonymous party would be represented in court by the platform that would be responsible to implement any blockages.

Each of the large platforms has already registered a point of contact in Germany pursuant to § 5 NetzDG, so that they are always reachable for courts of law. This procedure could also ensure that the people behind an affected account can be heard in court, if the law obliged platforms to forward the petition to them (via email, for example). This would give the account holder the option to reveal their identity and take over the judicial process under their own name.

Legislative competence probably with the Federal Government

The law to create such a judicial process could be enacted by the German Federal Government. This is not about a new regulation on which content would be admissible online – this would be for the Federal States to enact and would require an arduous update of the Interstate Broadcasting Treaty (Rundfunkstaatsvertrag). The Federal Government could base this law on its competences to regulate judicial procedures as well as telemedia law. The Federal Government should urgently take this opportunity and create a “Protection against Digital Violence Act”, allowing for accounts that publish unlawful content to be blocked. The onus is still on the Federal States to become more effective in pursuing supposedly lesser online offences, which is within their legal purview.

A German version of this article was first published at

EU action needed: German NetzDG draft threatens freedom of expression (23.05.2017)

(Contribution by Ulf Buermeyer, EDRi member Gesellschaft für Freiheitsrechte – GFF, Germany; translation from German into English by EDRi volunteers Stefan and Sebastian)

03 Jul 2019

Open letter demands interoperability of the big online platforms

By La Quadrature du Net

On 21 May 2019, EDRi observer La Quadrature du Net, along with 70 other organisations, including some EDRi members, sent a letter asking the French government and members of the Parliament to force web giants (Facebook, YouTube, Twitter…) to be interoperable with other online services. The purpose is to allow users of these platforms to leave them for other services while still being able to communicate with the people who decided to stay, just as is already the case with email: people can write to each other regardless of whether they use different providers such as Protonmail, Gmail or RiseUp.

The letter coincides with the French Parliament preparing to vote on a law requiring online platforms to remove hate speech within 24 hours of receiving a notification. If they repeatedly fail to do so, a French administrative authority would have the power to impose a fine of up to 4% of their global revenue.

Criticising the dangers of censorship and centralisation of the internet that could result from such a law, the signatories of the open letter recommend that the Parliament address not the symptoms but the causes of the dissemination of hate speech. One of these causes is the structure and business model of the platforms, which promote and facilitate the dissemination of hate speech. As the platforms are built on the “attention economy”, it is in their interest to host as much engaging content of any kind as possible.

The letter explains that forcing the web’s giants to become interoperable, based on open standards, would allow people who are “captives” of these platforms to escape them. They would be able to join other services that are more respectful of users’ personal data and freedoms and that do not make profits from surveillance and targeted advertising. Outside of these platforms, millions of people are already connected across interoperable services such as Mastodon, Diaspora and PeerTube, notably through ActivityPub, an interoperability protocol published by the World Wide Web Consortium (W3C) in 2018.
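The kind of cross-service communication ActivityPub enables can be illustrated with a minimal “Follow” activity, the small JSON document one server sends another. The `@context`, `type`, `actor` and `object` fields follow the W3C ActivityStreams vocabulary; the server names and user URLs below are invented:

```python
# A minimal ActivityPub "Follow" activity: the JSON document that lets a
# user on one service (here a hypothetical Mastodon instance) follow a
# user on an entirely different one (a hypothetical PeerTube instance).
follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "actor": "https://mastodon.example/users/alice",
    "object": "https://peertube.example/accounts/bob",
}

# The receiving server confirms with an "Accept" activity that wraps the
# original request, completing the cross-server follow.
accept_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Accept",
    "actor": follow_activity["object"],
    "object": follow_activity,
}

print(accept_activity["type"], accept_activity["actor"])
```

Because both sides speak the same open vocabulary, neither server needs to belong to the same company, which is exactly the property the letter asks legislators to impose on the dominant platforms.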

The proposal has been well received by experts, journalists, and some members of the French Parliament. Laetitia Avia, the rapporteur of the law, has however refused to support it, preferring to promote fast removal of content instead. The French government has also rejected the idea of interoperability presented in the letter, stating that it is “excessively aggressive for the business model of large platforms” and refusing to see the connection with hate speech. Nevertheless, as some members of the Parliament have proposed amendments on interoperability, the next session in Parliament on 3 July will clarify the results of this first campaign.

Should the Parliament reject the idea, La Quadrature du Net, together with the signatories of the open letter, will continue to promote interoperability, in France and at European level, with the help of EDRi members. It is urgent to give everyone the ability to escape the surveillance and toxicity of these giant platforms and to join free, decentralised and human-scale services, without losing their social links in the process.

The open letter remains open for signatures from organisations and companies. Individuals are strongly encouraged to spread and promote it widely. To sign the letter, please write at, with the email subject “Signing interoperability letter”, and noting the name of your organisation in the email.

La Quadrature du Net

For the interoperability of the web’s giants: An open letter from 70 organisations (14.06.2019)

French online hate speech bill aims to wipe out racist trolling (29.06.2019)

Report to strengthen the fight against racism and antisemitism online (only in French, 28.09.2018)


Imposing interoperability on platforms? Doubts and prudence of Cédric O (only in French, 05.06.2019)

(Contribution by EDRi observer La Quadrature du Net, France)

03 Jul 2019

EU worries over the possibility of losing wiretapping powers

By Statewatch

5G telecoms networks could render obsolete the “lawful interception” techniques that police forces traditionally use, unless the European Union and national governments take action. This was revealed in internal EU documents obtained by EDRi member Statewatch, which has published a new analysis explaining the issues and calling for a public debate.

“It is unsurprising that EU officials are concerned about the possible loss of telephone-tapping powers,” said Chris Jones, a researcher at Statewatch. “However, the very same technologies they are worried about will give law enforcement and security agencies disturbing possibilities for accessing data on individuals in order to track their activities and behaviour. This has to be seen as part of the same issue as the possible loss of ‘traditional’ wiretapping powers. Rather than secretive attempts to influence standard-setting and law-making, a public discussion is required about the acceptable limits of surveillance and interception powers in light of emerging technologies.”

On 7 June 2019, the EU Justice and Home Affairs Council (JHA) held a discussion on the implications of 5G for internal security, a topic taken up in documents recently produced by Europol and the EU Counter-Terrorism Coordinator, which Statewatch published alongside the analysis.

The documents warn that various aspects of the technology underpinning 5G communications networks could make traditional wiretapping methods far more complicated or even render them useless. For example, the IMSI code, used to identify an individual device, will be encrypted, meaning “the security authorities are no longer able to locate or identify the mobile device”, according to Europol. 5G networks will also be able to detect false “base stations”, making it impossible to use IMSI catchers (or “stingrays”), devices that imitate telecoms antennae in order to discreetly acquire user data. Other issues such as network slicing, edge computing, and network function virtualisation raise their own problems, leading to significant new challenges for law enforcement agencies wanting access to individuals’ data.
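Why an encrypted device identifier defeats an IMSI catcher can be shown with a toy sketch. Note that the real 5G scheme (the SUCI concealment defined by 3GPP) uses public-key cryptography (ECIES); the hash-plus-nonce construction below is only a stand-in to demonstrate the key property, namely that the permanent identifier is never transmitted and every response looks different:

```python
import hashlib
import os

# Toy stand-in for the operator's key material; in real 5G this is the
# home network's public key provisioned on the SIM.
HOME_NETWORK_KEY = b"home-network-secret"

def conceal_imsi(imsi: str) -> bytes:
    """Return a concealed identifier instead of the permanent IMSI.

    Fresh randomness per request means two responses from the same
    device are unlinkable, so a fake base station that asks devices to
    identify themselves learns nothing useful.
    """
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + HOME_NETWORK_KEY + imsi.encode()).digest()
    return nonce + digest  # only the home network could reverse this

# Same device, two requests: the answers differ, so tracking fails.
a = conceal_imsi("001010123456789")
b = conceal_imsi("001010123456789")
print(a != b)
```

The same property that protects subscribers from stingrays is what the documents describe as a loss of capability for the authorities operating them.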

Proposals to overcome the limitation of traditional wiretapping methods range from trying to influence the international bodies responsible for establishing the relevant technical standards; passing new laws (at both national and EU level) to enforce police demands; and ensuring a broader discussion amongst officials both within the EU and beyond, for example with major surveillance powers such as the US, Australia and Canada.

However, although 5G technologies could limit law enforcement agencies’ access to certain types of data, if the hype is to be believed, one of 5G’s main functions will be to enable the generation, storage and sharing of vast troves of data on individuals, objects, devices and the environment through the “internet of things”. In the US, for example, data from “smart” (i.e. internet-connected) water meters, pacemakers and in-car safety systems have been used in court proceedings. This presents significant new opportunities for police and security agencies, even if they lose access to other long-standing surveillance techniques.

The analysis argues that the possibility of law enforcement agencies losing some of their current powers, at the same time as vast new surveillance possibilities are opened up, should be a matter for public debate.


Analysis: A world without wiretapping? Official documents highlight concern over effects 5G technology will have on “lawful interception” (05.06.2019)

Indicative programme – Justice and Home Affairs Council of 6 and 7 June 2019

EU Counter-Terrorism Coordinator: Law enforcement and judicial aspects related to 5G (06.05.2019)

Position paper on 5G by Europol (11.04.2019)

(Contribution by EDRi member Statewatch, the United Kingdom)

03 Jul 2019

Regulating online communications: Fix the system, not the symptoms

By Bits of Freedom

Our digital information ecosystem fails to deliver the communications landscape needed to sustain our democracies. In a problem analysis, EDRi member Bits of Freedom introduces and disentangles some of the key concepts and issues surrounding the dominant role of platforms and the resulting harms to our freedom of expression.

Freedom of expression is a human right enshrined in law. It includes the right to seek, receive and impart information and ideas, without undue interference or fear of retaliation. It is indispensable for both the development of individuals as well as for the protection and advancement of our democratic societies. It is essential for holding those in power to account.

Our current online communications landscape fails to deliver these opportunities. A few giant corporations dominate the ecosystem, leading to the obstruction of our communications, including those of journalists and civil society, undue control over our public debate, and extremely limited possibilities for market challengers.

Characteristics inherent to these giant platforms and the ecosystem in which they operate make them nearly immune to political, societal and consumer pressure. It has therefore proven difficult for our corrective mechanisms – self-regulation, the market, policy makers and civil society – to sufficiently address the biggest harms and weed out the most toxic practices.

With the paper “Fix the system, not the symptoms”, Bits of Freedom wishes to contribute to shifting the discussion from how we can adapt to these businesses and fix their platforms, towards what a healthy communications landscape looks like in an increasingly digitalised world – and how to get there.

Bits of Freedom

Regulating online communications: fix the system, not the symptoms

Fix the system, not the symptoms (19.06.2019)

(Contribution by EDRi member Bits of Freedom, the Netherlands)

24 Jun 2019

EU Commission discards criticism of net neutrality enforcement

By Jan Penfrat

On 30 April 2019, EDRi and 31 other civil rights organisations sent an open letter to the EU Commission and BEREC. The letter criticised the lack of enforcement of current net neutrality rules in Europe. The signatories also emphasised that the EU finally needs to act against the widespread use of zero-rating practices. Zero-rating favours internet traffic from certain companies by billing it to customers at a lower (zero) rate while discriminating against everybody else. The letter also highlighted that many EU member states do not impose effective penalties against infringers of net neutrality.
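The discrimination inherent in zero-rating can be sketched in a few lines: the same amount of data costs the customer different amounts depending on who sent it. The partner domain and price below are invented for illustration:

```python
# Sketch of zero-rated billing: traffic from the telco's chosen partners
# is free for the customer, while identical traffic from everyone else
# counts against the bill. Names and prices are invented assumptions.
ZERO_RATED_PARTNERS = {"bigstream.example"}
PRICE_PER_MB = 0.01  # EUR, assumed flat rate for all other traffic

def bill(traffic_log):
    """traffic_log: list of (source_domain, megabytes) tuples."""
    return sum(
        0.0 if domain in ZERO_RATED_PARTNERS else mb * PRICE_PER_MB
        for domain, mb in traffic_log
    )

# 500 MB from the telco's partner is free; the same 500 MB from a
# competing service costs the customer money.
print(bill([("bigstream.example", 500), ("smallstream.example", 500)]))
```

The customer is nudged towards the zero-rated service not because it is better, but because the competition is artificially more expensive.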

Only two weeks later, we addressed a second letter to the EU Commission, warning against the increased use of so-called Deep Packet Inspection (DPI) by telecom operators. DPI is a highly intrusive technology that allows telcos to scan and classify your online traffic with high granularity, for instance in order to slow down certain internet traffic or bill certain content differently. Of course, the technology could also be used to block certain types of traffic altogether, such as video streaming or virtual private networks (VPNs).
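A naive sketch of DPI-style classification: inspect packet payloads for protocol fingerprints and sort the traffic into classes that could then be throttled, blocked or billed differently. Real DPI engines are vastly more sophisticated; the byte patterns below are simplified assumptions:

```python
# Toy DPI classifier: match the start of a packet payload against known
# protocol fingerprints. The signatures are deliberately simplistic.
SIGNATURES = {
    b"\x16\x03\x01": "tls-handshake",  # typical start of a TLS ClientHello
    b"GET ": "http",                   # plaintext HTTP request
    b"BitTorrent": "bittorrent",       # BitTorrent handshake marker
}

def classify(payload: bytes) -> str:
    """Label a packet payload by the first matching fingerprint."""
    for pattern, label in SIGNATURES.items():
        if payload.startswith(pattern) or pattern in payload[:64]:
            return label
    return "unknown"

print(classify(b"GET /video HTTP/1.1\r\n"))
```

Once traffic is labelled this way, nothing technical prevents the operator from treating each class differently, which is precisely why the letter argues DPI deployment needs regulatory scrutiny.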

Commission does not seem to plan action

Unfortunately, the EU Commission’s official responses to those letters have not addressed the points raised by civil society.

In its first response, the EU Commission acknowledges “that the types and levels of sanctions differ widely between Member States” and says it was “monitoring how the existing sanctioning powers are used in practice”. However, no concrete actions or plans are proposed that could tackle the lack of enforcement in Europe. In reality, almost no penalties against infringing telcos have been imposed so far, and those that were issued have been too low to lead to meaningful change. Worse, Portugal and Ireland have still not enacted any penalties for net neutrality infringements at all, despite their obligation to do so under EU law.

In its second response, while acknowledging the illegality of slowing down or discriminating traffic in principle, the EU Commission does not seem to think that zero-rating as practised by European telcos today is a problem. Instead, the Commission says, this should be decided on a case-by-case basis – which in practice means that telcos can zero-rate as they please.

Net neutrality violations still happening

As a recent study carried out by an EDRi member shows, net neutrality violations have spread across the EU in the past years, the response of national regulators has been inconsistent or lacking, and the EU Commission seems to largely ignore the problem.

The European net neutrality guidelines are in the process of being updated and the EU Commission says it plans to publicly consult civil society during that process “so that their interpretation and their arguments will be expressed and taken into account”. EDRi and its member organisations will of course participate in these consultations and hope that they will indeed be taken into account.

Response of the EU Commission to our open letter on the lack of enforcement of 30 April 2019 (PDF) reply_open_internet.pdf

Response of the EU Commission to our open letter against Deep Packet Inspection of 15 May 2019 (PDF) reply_dpi.pdf

Net neutrality wins in Europe! (29.08.2016)

Zero rating: Why it is dangerous for our rights and freedoms (22.06.2016)

A study evaluates the net neutrality situation in the EU (13.02.2019)

(Contribution by Jan Penfrat, EDRi)

20 Jun 2019

E-Commerce review: Opening Pandora’s box?

By Kirsten Fiedler

The next important battle for our rights and freedoms in the digital sphere is looming on the horizon. While the public debate has recently focused on upload filters for alleged copyright infringements and online “terrorist” content, a planned legislative review will look more broadly at the rules for all types of illegal and “harmful” content.

This review aims to update the rules on how online services, such as social media platforms, should or should not delete or block illegal and “harmful” content. A reform might also bring changes to how online services could be held liable when such content is not taken down. The big question is: will the review of the E-Commerce Directive (ECD) open Pandora’s box and become one of this decade’s biggest threats to citizens’ rights and freedoms online – or will it be a chance to clarify and improve the current situation?

Christchurch, copyright and election manipulation

The recently adopted Copyright Directive and the draft European rules for the removal of terrorist content online initiated the creation of sector-specific rules for content removals.

Events like the Christchurch tragedy, potential disinformation threats during the European elections and hateful comments from increasingly radicalised right-wing extremists after the murder of a German pro-migrant politician contributed further to the debate surrounding illegal and “harmful” online content.

These events led to a multiplication of calls for online services to “do more” and to “take more responsibility” for what is being uploaded to their servers. Several countries have started discussions about the adoption of national rules. For instance, following the German example, France has just introduced a law against online hate, and the UK published a controversial Online Harms White Paper.

E-Commerce Directive: What it is and why its reform is unavoidable

Adopted nearly 20 years ago, the E-Commerce Directive sets up liability exemptions for hosting companies for content that users share on their networks. Until very recently, these rules applied horizontally to all sorts of illegal content, including copyright infringements, hate speech, and child abuse material. The current rules for take-downs and removals are therefore (indirectly) defined by the ECD.

While the Directive is not perfect and has created a few issues, mainly due to a lack of clarity, its safe harbour provisions encouraged the protection of users’ fundamental rights, in particular freedom of expression and freedom of information.

Since the adoption of the ECD, however, the landscape of services that might or might not fall under liability exemptions has drastically changed. Notably, cloud services and social media platforms became very important players and some have gained significant market power. Currently, a small number of dominant platforms have a high impact on individuals’ rights and freedoms, our societies and on our democracies.

The nature of the internet has also vastly changed in the past 20 years towards an increasingly participatory community. As a result, the amount of user-generated content has increased exponentially. At the same time, we witness more government pressure on companies to implement voluntary mechanisms against allegedly illegal or “harmful” content. These two parallel developments have resulted in an increasing number of wrongful removals and blockings of legitimate speech.

In the past months, the Directorate-General for Communications Networks, Content and Technology (DG Connect) of the EU Commission has already started exploring policy options for content moderation that will be presented to the incoming College of Commissioners. A reform of the ECD to attempt to harmonise liability exemptions and content moderation rules seems to have become unavoidable.

The upcoming reform can therefore be both a chance and a potential trap for policy-makers. On one hand, it offers the opportunity to create legal certainty and introduce safeguards that will enable users to enjoy their rights and freedoms. On the other, the reform can be a trap if policy-makers embrace blunt one-size-fits-all solutions that avoid real solutions for societal issues and instead lead to massive collateral damage.

This article is the introduction to our blogpost series on Europe’s future rules for intermediary liability and content moderation. The series presents the three main points that should be taken into account in an update of the E-Commerce Directive:

  1. Technology is the solution. What is the problem?
  2. Mitigating collateral damage and counter-productive effects
  3. Safeguarding human rights when moderating online content

Filters Incorporated (09.04.2019)

e-Commerce Directive

EU Parliament deletes the worst threats to freedom of expression proposed in the Terrorist Content Regulation (17.04.2019)

Phantom Safeguards? Analysis of the German law on hate speech NetzDG (30.11.2017)

E-Commerce directive: ensure freedom of expression and due process of law (17.11.2010)

(Contribution by Kirsten Fiedler, EDRi)

19 Jun 2019

Fighting defamation online – AG Opinion forgets that context matters

By EDRi and IT-Pol

On 4 June 2019, Advocate General (AG) of the Court of Justice of the European Union (CJEU), Maciej Szpunar, delivered his Opinion on the Glawischnig-Piesczek v Facebook Ireland case. The case is related to injunctions obliging a service provider to stop the dissemination of a defamatory comment. Looking carefully at this Opinion is important, as the final ruling of the CJEU usually follows the lines of the AG’s Opinion.

The case involves Ms Glawischnig-Piesczek, an Austrian politician, who was the target of a defamatory comment shared publicly on Facebook. As Facebook did not react to her first request to have the comment deleted, Ms Glawischnig-Piesczek asked the Austrian courts to issue an order obliging Facebook to remove the publication and to prevent its dissemination, including exact copies of the original comment as well as “equivalent content”. After the first court injunction, Facebook disabled access in Austria to the content initially published. Ultimately, the Supreme Court of Austria, before which the case was brought, referred several questions to the CJEU concerning the geographic scope of such an injunction, as well as its application to statements with identical wording or equivalent meaning. As Facebook is not necessarily aware of all identical or equivalent content, the upcoming judgment of the CJEU will be essential for the interpretation of the E-Commerce Directive, notably its Articles 14 and 15.

In his Opinion, the AG states that a hosting provider such as Facebook can be ordered to seek and identify, among all the information disseminated by users of that platform, content identical to the content that a court has characterised as illegal. Moreover, the hosting provider may be required to search for equivalent content, but only among the content disseminated by the user who generated the illegal information in the first place.

The Opinion is interesting for two reasons: first, it provides reflection on the way to distinguish between general and specific monitoring of content by hosting providers; second, it tries to draw a line between “identical” and “equivalent” content.

AG Szpunar starts by expressing great concern that an obligation on an intermediary to filter all content would make it aware of illegal content, thus causing the loss of its liability exemption under Article 14 of the e-Commerce Directive. In the present case, the referring court has established that Facebook falls under Article 14, so the active-passive host distinction is not further explored in the Opinion. The upcoming CJEU case on YouTube’s liability for user uploads (C-682/18) will undoubtedly revisit this question. However, the AG does not preclude the possibility of imposing “active” monitoring under Article 15 of the same Directive. He recalls the conclusions of the L’Oréal v eBay case (C-324/09), which limits the preventive obligation (i.e. “filtering”) to “infringements of the same nature by the same recipient of the same rights, in that particular case trade mark rights” (point 45). For a monitoring obligation to be specific and sufficiently targeted, the AG mentions the criterion of duration, as well as information relating to the nature of the infringements, their author and their subject. This raises the question of how the monitoring can be limited in time and stopped once a specific case is declared to be over.

Applying these principles to the present case, the AG believes that a monitoring obligation for “identical content” among information generated by all users would ensure a fair balance between the fundamental rights involved. His argument is found at points 61 and 63, where he speculates that seeking and identifying identical content can be done with passive “software tools” (i.e. upload filters), which do not represent “an extraordinary burden” for the intermediary.

This is where the distinction with “equivalent” content is drawn: equivalent content would require more “active non-automatic filtering” by the intermediary of all the information disseminated via its platform. What is meant by non-automatic filtering is not entirely clear, but the distinction in the mind of the AG could be between automatic filtering that never requires manual intervention and non-automatic filtering that does require such intervention in order to ensure a fair balance with other fundamental rights (freedom of expression and the right to information, in particular) and to avoid situations similar to the Netlog case (C-360/10), in which the CJEU ruled that a preventive filtering system applying indiscriminately to all users was incompatible with the Charter of Fundamental Rights of the European Union.

Unfortunately, a distinction along these lines seems ill-suited for the case at hand, which concerns defamation. Specific words that are defamatory in the present case could be used in other contexts without being defamatory; obvious examples are counterspeech, irony among friends, or news reporting. The situation is the same for content defined as identical or equivalent: context matters, and automated algorithms will not be able to make the fine-grained decisions about when the use of certain words (whether copied verbatim, i.e. identical content, or with changes, i.e. equivalent content) is legal or illegal. A filtering obligation for identical content will have the same negative effect on freedom of expression and the right to information as a filtering obligation for equivalent content.

The present case will be particularly important for defining the distinction between specific monitoring and general monitoring, on which there is presently very little case law. Since Article 15(1) of the E-Commerce Directive prohibits general monitoring, specific monitoring is, by implication, any monitoring that is compatible with the E-Commerce Directive, interpreted in the light of the Charter of Fundamental Rights. Only the L’Oréal v eBay case (C-324/09) has dealt with this issue. Compared to that earlier case, the AG proposes an expanded definition of specific monitoring, which has the notable disadvantage of being rather unworkable, since it relies on a flawed dichotomy between identical and equivalent content. This dichotomy is disconnected from the legal reality that specific monitoring must comply with the Charter of Fundamental Rights and prevent the risk of censorship resulting from a filtering obligation. Hopefully, the judgment in the case will present a more workable definition of specific monitoring that is reconcilable with both Articles 14 and 15 of the E-Commerce Directive.

Case C-18/18: Eva Glawischnig-Piesczek v Facebook Ireland Limited

Legal victory for trademark litigants over intermediary liability (13.07.2011)

SABAM vs Netlog – another important ruling for fundamental rights (16.02.2012)

(Contribution by Chloé Berthélémy, EDRi, and Jesper Lund, EDRi member IT-Pol, Denmark)

19 Jun 2019

EU rushes into e-evidence negotiations without common position

By Chloé Berthélémy

On 6 June 2019, the Justice and Home Affairs Council (JHA) – which gathers all EU Member States Ministers of Justice – asked the European Commission to start international negotiations on cross-border access to electronic evidence in criminal matters (so-called “e-evidence”) in the upcoming months. The Commission should enter into bilateral negotiations with the United States (US), and at the same time, it should join the ongoing discussions at the Council of Europe about the adoption of a Second Additional Protocol to the Budapest Convention on Cybercrime – which also deals with access to e-evidence.

Both negotiation mandates were issued while the Commission’s own proposal for a European e-evidence regulation is highly contested and still being debated in the European Parliament. According to this proposal, law enforcement authorities in any EU Member State would be allowed to force providers like Facebook or Google to hand over users’ personal data, even if the provider is located in a different country. The authorities of the provider’s country would have almost no say in it, and in most cases would not even know that their citizens’ data had been accessed by foreign authorities.

Many critics, including EDRi, lawyers, academics, the European Data Protection Board (EDPB), and other civil society organisations oppose the very idea behind the e-evidence proposal as it heavily infringes on our fundamental rights without due safeguards.

Even within the EU, some activities are considered criminal in one country and legal in another. Negotiating similar data access rules with countries like the US, or even Russia and Azerbaijan (as part of the Council of Europe), which often have very different concepts of the rule of law, puts people in Europe at risk. This is especially dangerous for political dissidents and activists who have come to the EU as a “safe haven”.

In fact, the committee responsible in the previous European Parliament (the Committee on Civil Liberties, Justice and Home Affairs, LIBE) expressed serious criticism in a series of Working Documents. Still, the Commission intends to negotiate on its own terms and those of the Council of the European Union, which are very similar.

It is unacceptable that the Commission neither waits for nor is likely to take into account the position of the co-legislator. In line with the European democratic legislative process, negotiations with third parties should not start as long as no official position of the EU as a whole has been reached. Worse yet, the Commission will likely be obliged to amend its own negotiating position in order to follow the outcomes of internal discussions between the Council of the European Union and the Parliament. This will greatly undermine the legitimacy and credibility of the EU as a negotiating partner.

No transparency

As usual when the Commission represents the EU in negotiations of international agreements or treaties, few transparency mechanisms are put in place to inform citizens about what is being discussed, what compromises are being struck, and what concessions are being made on which side of the table. Often such information is kept secret, even though the issues at stake have a considerable impact on people and on our democracies. The Commission announced that it will regularly inform the Member States about the progress made, but no such reports seem to be foreseen for the European Parliament. Yet it is the Parliament that represents European citizens and that is key for democratic scrutiny and transparency. It goes without saying that scrutiny by and participation of civil society will be even more challenging.

The European Data Protection Supervisor (EDPS) recently published a recommendation demanding the inclusion of additional data protection principles and fundamental rights protections in the negotiation mandates given to the Commission. It is unclear how this recommendation, and the strong criticism from experts across the board, will be taken into consideration.

Recent CJEU ruling puts the Commission’s proposal in jeopardy

In addition, the Court of Justice of the European Union (CJEU) recently delivered a ruling on the issuance of European Arrest Warrants by public prosecutors. It decided that, for the purpose of cross-border judicial cooperation, certain public prosecutors cannot qualify as a competent “issuing judicial authority” under the European treaties. According to the CJEU, public prosecutors’ offices in countries such as Germany cannot be considered independent, as they are likely exposed to direct or indirect instructions from the Minister for Justice and thus to political decisions. Issuing authorities should be capable of exercising their functions objectively, taking into account all incriminatory and exculpatory evidence, and without external directions or instructions. This ruling is of importance in the context of the e-evidence proposal, which foresees that judicial authorities, including prosecutors, can issue European production and preservation orders to obtain data in cross-border cases. In line with this CJEU ruling, public prosecutors would not be allowed to issue these orders for the purpose of judicial cooperation as set out in Article 82(1) of the Treaty on the Functioning of the European Union (TFEU). Thus, the current proposal stands on weakened legal ground and will need significant improvements to ensure compliance with CJEU case law.

Cross-border access to data for law enforcement: Document pool

CCBE press release: CJEU ruling casts doubts on the legality of the proposed e-evidence regulation (29.05.2019)

EDPS Opinion on the negotiating mandate of an EU-US agreement on cross-border access to electronic evidence (02.04.2019)

EDPS Opinion regarding the participation in the negotiations in view of a Second Additional Protocol to the Budapest Cybercrime Convention (02.04.2019)

(Contribution by Chloé Berthélémy, EDRi)