24 Oct 2018

ePrivacy: Public benefit or private surveillance?

By Yannic Blaschke

92 weeks after the proposal was published, the EU is still waiting for an ePrivacy Regulation. The Regulation is supposed to replace the current ePrivacy Directive, aligning it with the General Data Protection Regulation (GDPR).

While the GDPR regulates the ways in which personal data is processed in general, the ePrivacy Regulation specifically regulates the protection of privacy and the confidentiality of electronic communications. The data in question includes not only the content and the “metadata” (data on when, where and to whom a person communicated) of communications, but also other identifiers, such as “cookies”, that are stored on users’ computers. To keep the legislation fit for purpose in light of technological developments, the European Commission (EC) proposal addresses some of the major changes in communications of the last decade, including the use of so-called “over the top” services such as WhatsApp and Viber.

----------------------------------------------------------------- Support our work with a one-off-donation! https://edri.org/donate/ -----------------------------------------------------------------

The Regulation is currently facing heavy resistance from certain sectors of the publishing and behavioural advertising industry. After an improved text was adopted by the European Parliament (EP), it is now being delayed at the Council of the European Union level, where EU Member States are negotiating the text.

One of the major obstacles in the negotiations is the question of the extent to which providers such as telecommunications companies can use metadata for purposes other than the original service. Some private companies, the same ones that questioned the need for users’ consent in the GDPR, have now re-wrapped their argument, claiming that an “overreliance” on consent would substantially hamper future technologies. Over-reliance on anything is, by definition, not good, just as under-reliance is, but such sophistry is a mainstay of lobby language.

However, this lobby attack omits the fact that compatible further processing would not lead only to benign applications in the public interest. Since the proposal does not limit further processing to statistical or research purposes, it could just as well be used for commercial or political manipulation. Even with regard to the potentially more benevolent applications of AI, it should be kept in mind that automated data processing has in some cases been shown to be highly detrimental to parts of society, especially vulnerable groups. This should not be ignored when evaluating the safety and privacy of aggregate data. For instance, while using location data for “smart cities” can make sense in some narrowly defined circumstances, such as traffic control or natural disaster management, it gains a much more chilling undertone when it leads to racial discrimination in company delivery services or law enforcement activities. It is easy to imagine metadata, one of the most revealing and most easily processed forms of personal data, being used for equally crude or misaligned applications, with highly negative outcomes for vulnerable groups. Moreover, where aggregate, pseudonymised data produces adverse outcomes for an individual, not even rectification or deletion of that person’s data will lead to an improvement, as long as the accumulated data of similar individuals is still available.

Another pitfall of this supposedly private, ostensibly pseudonymised processing is that even if individual users are not targeted, companies may need to maintain citizens’ metadata in identifiable form in order to link existing data sets with new ones. This could essentially lead to a form of voluntary data retention, which might soon attract the interest of public security actors rapaciously seeking new data sources and new powers. If such access were granted, individuals would essentially be identifiable. Even retaining “only” aggregate data on certain societal groups or minorities might often be enough to spark discriminatory treatment.

Although the Austrian Presidency of the Council of the European Union included some noteworthy safeguards for compatible further processing in its most recent draft compromise, most notably the obligation to consult the national Supervisory Authority or to conduct a data protection impact assessment, the current proposal does not adequately empower individuals. Given that interpretations of what constitutes “compatible” further processing may vary significantly among Member States (which would lead to years of litigation), it should be up to citizens to decide, and up to the industry to prove, which forms of metadata processing are safe, fair and beneficial to society.

Read more:

Five reasons to be concerned about the Council ePrivacy draft (26.09.2018)

EU Council considers undermining ePrivacy (25.07.2018)

Your ePrivacy is nobody else’s business (30.05.2018)

e-Privacy revision: Document pool (10.01.2017)

(Contribution by Yannic Blaschke, EDRi intern)



18 Oct 2018

#PrivacyCamp19 – Save the Date and Call for Panel Proposals


Join us for the 7th annual Privacy Camp!

Privacy Camp will take place on 29 January 2019 in Brussels, Belgium, just before the start of the CPDP conference. Privacy Camp brings together civil society, policy-makers and academia to discuss existing and looming problems for human rights in the digital environment.

Take me to the call for panel submissions.
Take me to the call for user story submissions.

Platforms, Politics, Participation

Privacy Camp 2019 will focus on digital platforms, their societal impact and their political significance. With the rise of a few powerful companies such as Uber, Facebook, Amazon and Google, the term “platform” has moved beyond its initial computational meaning of a technological architecture and has come to be understood as a socio-cultural phenomenon. Platforms are said to facilitate and shape human interactions, thus becoming important economic and political actors. While the companies offering platform services are increasingly the target of regulatory action, they are also treated as allies of national and supranational actors in enforcing policies voluntarily and in gauging political interest and support. Digital platforms employ business models that rely on the collection of large amounts of data and the use of advanced algorithms, which raises concerns about their surveillance potential and their impact on political events. Increasingly rooted in the daily life of many individuals, platforms monetise social interactions and turn to questionable labour practices. Many sectors and social practices are being “platformised”, from public health to security, from news to entertainment services. Some scholars have recently conceptualised this phenomenon as “platform capitalism” or the “platform society”.

Privacy Camp 2019 will unpack the implications of “platformisation” for the socio-political fabric, human rights and policy making. In particular, how does the platform logic shape our experiences and the world we live in? How do institutional actors attempt to regulate platforms? In what ways do the affordances and constraints of platforms shape how people share and make use of their data?


We welcome panel proposals relating to the broad theme of platforms. Besides classic panel proposals we are also seeking short contributions for our workshop “Situating Platforms: User Narratives”.

1. Panel proposals

We are particularly interested in panel proposals on the following topics: platform economy and labour; algorithmic bias; democratic participation and social networks.

Submission guidelines:

  • Indicate a clear objective for your session, i.e. what would be a good outcome for you?
  • Indicate other speakers who could participate in your panel (and let us know which speakers have already agreed, at least in principle, to participate).
  • Make it as participatory as possible: think about how to include the audience and a diverse range of actors. Note that the average panel length is 75 minutes.
  • Send us a description of no more than 400 words.

2. “Situating Platforms: User Narratives” submissions

In an effort to discuss situated contexts with regard to platforms, we will have a session on lived practices and user narratives. Individuals, civil society groups or community associations are welcome to contribute in the format of a short talk or show & tell demonstration. Details and the online submission form are here: [[link to submission form coming soon!]]


The deadline for all submissions is 18 November. After the deadline, we will review your submission and let you know by the end of November whether your proposal can be included in the programme. If proposals are very similar, we may suggest merging them.

Please send your proposal via email to privacycamp(at)edri.org!

If you have questions, please contact Kirsten at kirsten.fiedler(at)edri(dot)org or Imge at imge.ozcan(at)vub(dot)be.

About Privacy Camp

Privacy Camp is jointly organised by European Digital Rights (EDRi), the Institute for European Studies of the Université Saint-Louis – Bruxelles (USL-B), the Law, Science, Technology & Society research group of the Vrije Universiteit Brussel (LSTS-VUB), and Privacy Salon.

Participation is free. Registrations will open in early December.


26 Sep 2018

Anatomy of an AI system – from the Earth’s crust to our homes

By SHARE Foundation

The Internet of Things (IoT) and the numerous devices that surround us and help us get through our daily routines more conveniently are becoming ever more advanced. A “smart” home is no longer a futuristic notion – it is reality. However, there is another side to this convenient technology: the one that exploits material resources, human labour, and data.

In their latest research, Kate Crawford of New York University’s AI Now Institute, a research institute examining the social implications of artificial intelligence (AI), and Vladan Joler of EDRi member SHARE Foundation’s SHARE Lab have analysed this extraction of resources across time, represented as a visual description of the birth, life and death of a single Amazon Echo unit. The interlaced chains of resource extraction, human labour and algorithmic processing across networks of mining, logistics, distribution, prediction and optimisation put the scale of this system almost beyond human imagining. The whole process is presented on a detailed high-resolution map.

It is easy to give Alexa a command – you just need to say “play music”, “read my last unread email” or “add milk to my shopping list” – but this small moment of convenience requires a vast planetary network, fuelled by the extraction of non-renewable materials, labour, and data. The scope is overwhelming: hard labour in mines for extracting the minerals that form the physical basis of information technologies, strictly controlled and sometimes dangerous hardware manufacturing and assembly processes in Chinese factories, outsourced cognitive workers in developing countries labelling AI training data sets, all the way to the workers at toxic waste dumps. All these processes create new accumulations of wealth and power, which are concentrated in a very thin social layer.


These extractive processes take an enormous toll in terms of pollution and energy consumption, although this is not visible until you scratch the surface. Moreover, many aspects of human behaviour are recorded, quantified into data, used to train AI systems and enclosed as “intellectual property”. Many of the assumptions about human life made by machine learning systems are narrow, normative and laden with errors, yet these systems are inscribing and building those assumptions into a new world, and will increasingly play a role in how opportunities, wealth, and knowledge are distributed.

Anatomy of an AI system

Map: Anatomy of an AI system

(Contribution by Bojan Perkov, EDRi member SHARE Foundation, Serbia)



26 Sep 2018

Five reasons to be concerned about the Council ePrivacy draft

By IT-Pol

On 19 October 2017, the European Parliament’s LIBE Committee adopted its report on the ePrivacy Regulation. The amendments improve the original proposal by strengthening confidentiality requirements for electronic communication services, and include a ban on tracking walls, legally binding signals for giving or refusing consent to online tracking, and privacy-by-design requirements for web browsers and apps. Before trilogue negotiations can start, the Council of the European Union (the Member States’ governments) must adopt its “general approach”. The Council Presidency, currently held by Austria, is tasked with securing a compromise among the Member States. This article analyses the most recent draft text from the Austrian Council Presidency (document 12336/18).

Further processing of electronic communications metadata

The current ePrivacy Directive only allows processing of electronic communications metadata for specific purposes given in the Directive, such as billing. The draft Council ePrivacy text in Article 6(2a) introduces further processing for compatible purposes similar to Article 6(4) of the General Data Protection Regulation (GDPR). This further processing must be based on pseudonymous data, profiling individual users is not allowed, and the Data Protection Authority must be consulted.

Despite these safeguards, this new element represents a huge departure from the current ePrivacy Directive, since the electronic communications service provider will determine what constitutes a compatible purpose. The proposal comes very close to introducing a “legitimate interest” loophole as a legal basis for processing sensitive electronic communications metadata. Formally, the further processing must be subject to the original legal basis, but what this means in the ePrivacy context is not entirely clear, since the main legal basis is a specific provision of the Regulation, such as processing for billing, calculating interconnection payments, or maintaining or restoring the security of electronic communications networks.


An example of further processing could be tracking mobile phone users for “smart city” applications such as traffic planning or monitoring travel patterns of tourists via their mobile phone. Even though the purpose of the processing must be obtaining aggregate information, and not targeting individual users, metadata will still be retained for the individual users in identifiable form in order to link existing data records with new data records (using a persistent pseudonymous identifier). Therefore, it becomes a form of voluntary data retention. The mandatory safeguard of pseudonymisation does not prevent the electronic communications service provider from subsequently identifying individual users if law enforcement authorities obtain a court order for access to retained data on individual users.
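To illustrate the mechanism described above (this is a minimal sketch of the general technique, not any provider’s actual system; all identifiers, salts and cell names are invented for the example), the following Python fragment shows how a persistent pseudonymous identifier, here a salted hash of a phone number, lets nominally “separate” metadata sets be linked into a movement profile, and how whoever holds the salt can re-identify a known individual:

```python
import hashlib

# Hypothetical secret held by the service provider. Anyone who holds it
# can recompute pseudonyms for known phone numbers.
SECRET_SALT = b"provider-internal-salt"

def pseudonym(phone_number: str) -> str:
    """Derive a persistent pseudonymous ID from a phone number."""
    return hashlib.sha256(SECRET_SALT + phone_number.encode()).hexdigest()

# Two metadata sets collected at different times, keyed by pseudonym.
march = {pseudonym("+32470000001"): ["cell_A", "cell_B"]}
april = {pseudonym("+32470000001"): ["cell_B", "cell_C"]}

# Because the identifier is persistent, records link trivially across
# data sets, building up a longitudinal movement profile.
pid = pseudonym("+32470000001")
movement_profile = march[pid] + april[pid]

# Re-identification: recompute the pseudonym for a known number and
# look it up in the retained data, e.g. after a court order for access.
assert pseudonym("+32470000001") in march
```

The pseudonymisation safeguard thus protects against outsiders who lack the salt, but not against the provider itself, nor against anyone the provider is compelled to assist.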

Communications data only protected in transit

Whereas the text adopted by the European Parliament specifically amends the Commission proposal to ensure that electronic communications data is protected under the ePrivacy Regulation after it has been received, the Council text clarifies that the protection only applies in transit. After the communication has been received by the end-user, the GDPR applies, which gives the service provider much greater flexibility in processing the electronic communication data for other purposes. For a number of modern electronic communications services, storage of electronic communication data on a central server (instead of on the end-user device) is an integral part of the service. An example is the transition from SMS (messages are stored on the phone) to modern messenger services such as WhatsApp or Facebook Messenger (stored on a central server). This makes it important that the protection under the ePrivacy Regulation applies to electronic communications data after it has been received. The Council text fails to address this urgent need.

Tracking walls

The European Parliament introduced a ban on tracking walls, that is, the practice of denying users access to a website unless they consent to the processing of personal data via cookies (typically tracking for targeted advertising) that is not necessary for providing the requested service.

The Council text goes in the opposite direction by specifically allowing tracking walls in Recital 20 for websites whose content is provided without a monetary payment, as long as the visitor is presented with an alternative option without this processing (tracking). This could be a subscription to an online news publication. The net effect is that personal data will become a commodity that can be traded for access to online news media or other online services. On the issue of tracking walls and coerced consent, the Council ePrivacy text may actually provide a lower level of protection than Article 7(4) of the GDPR, which specifically seeks to prevent personal data from becoming the counter-performance for a contract. This is contrary to the stated aim of the ePrivacy Regulation.

Privacy settings and privacy by design

The Commission proposal requires web browsers to offer the option of preventing third parties from storing information in the browser (terminal equipment) or processing information already stored there; an example would be an option to block third-party cookies. The Council text proposes to delete Article 10 on privacy settings. The effect is that fewer users will become aware of privacy settings that protect them from leaking information about their online behaviour to third parties, and that software may be placed on the market that does not even offer users the possibility of blocking such data leakage.

Data retention

Article 15(1) of the current ePrivacy Directive allows Member States to require data retention in national law. Under the case law of the Court of Justice of the European Union (CJEU) in Digital Rights Ireland (joined cases C-293/12 and C-594/12) and Tele2 (joined cases C-203/15 and C-698/15), this data retention must be targeted rather than general and undifferentiated (blanket data retention). In the Commission proposal for the ePrivacy Regulation, Article 11 on restrictions is very similar to Article 15(1) of the current Directive.

In the Council text, Article 2(2)(aa) excludes activities concerning national security and defence from the scope of the ePrivacy Regulation. This includes processing performed by electronic communications service providers when assisting competent authorities in relation to national security or defence, for example retaining metadata (or even communications content) that would otherwise be erased or not generated in the first place. The effect of this is that data retention for national security purposes would be entirely outside the scope of the ePrivacy Regulation and, potentially, the case law of the CJEU on data retention. This circumvents a key part of the Tele2 ruling where the CJEU notes (para 73) that the protection under the ePrivacy Directive would be deprived of its purpose if certain restrictions on the rights to confidentiality of communication and data protection are excluded from the scope of the Directive.

If data retention (or any other processing) for national security purposes is outside the scope of the ePrivacy Regulation, it is unclear whether such data retention is instead subject to the GDPR, and must satisfy the conditions of GDPR Article 23 (which is very similar to Article 11 of the proposed ePrivacy Regulation), or whether it is completely outside the scope of EU law. The Council text would therefore create substantial legal uncertainty for data retention in Member States’ national law, undoubtedly to the detriment of the fundamental rights of many European citizens.

Read more:

Proposal for a Regulation concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC – Examination of the Presidency text (20.09.2018)

e-Privacy: What happened and what happens next (29.11.2017)

EU Member States fight to retain data retention in place despite CJEU rulings (02.05.2018)

EU Council considers undermining ePrivacy (25.07.2018)

Civil society letter to WP TELE on the ePrivacy Regulation (24.09.2018)

(Contribution by Jesper Lund, EDRi member IT-Pol, Denmark)



29 Aug 2018

What’s your trustworthiness according to Facebook? Find out!

By Bits of Freedom

On 21 August 2018 it was revealed that Facebook rates the trustworthiness of its users in its attempt to tackle misinformation. But how does Facebook judge you, what are the consequences and… how do you score? Ask Facebook by exercising your access right!


Your reputation is between 0 and 1

In an interview with the Washington Post, the Facebook product manager in charge of fighting misinformation said that one of the factors the company uses to determine whether you are spreading “fake news” is a so-called “trustworthiness score”. (Users are assigned a score on a scale from zero to one.) In addition to this score, Facebook apparently also uses many other indicators to judge its users. For example, it takes into account whether you abuse the option to flag messages.

Lots of questions

The likelihood of you spreading misinformation (whatever that means) appears to be determined by an algorithm. But how does Facebook calculate a user’s score? For which purposes will this score be used, and what happens if the score is incorrect?

Facebook has objected to the description of this system as reputation rating. To the BBC a spokesperson responded: “The idea that we have a centralised ‘reputation’ score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading.”

It is unclear exactly how the headline is misleading, because if you turned it into the question “Is Facebook rating the trustworthiness of its users?”, the answer would be yes. In any event, the above questions remain unanswered. That is unacceptable, because Facebook is not just any actor. Together with a handful of other tech giants, the company plays an important role in how we communicate and in which information we send and receive. The decisions Facebook makes about you have an impact. Assigning you a trustworthiness score therefore comes with great responsibility.

Facebook has to share your score with you

At the very least, such a system should be fair and transparent. If mistakes are made, there should be an easy way for users to have those mistakes rectified. According to Facebook, however, this basic level of courtesy is not possible, because it could lead to people gaming the system.

However, with the new European privacy rules (GDPR) in force, Facebook cannot use this reason as an excuse for dodging these important questions and keeping its trustworthiness assessment opaque. As a Facebook user living in the EU, you have the right to access the personal data Facebook has about you. If these data are incorrect you have the right to rectify them.

Assuming that your trustworthiness score is the result of an algorithm crunching the data Facebook collects about you, and taking into account that this score can have a significant impact, you also have the right to receive meaningful information about the logic underlying your score, and you should be able to contest it.
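To make concrete what a score “on a scale from zero to one” produced by an algorithm might look like, and what “meaningful information about the underlying logic” could mean in practice, here is a purely illustrative sketch. Every feature name and weight below is invented for this example; nothing public is known about Facebook’s actual model:

```python
import math

# Hypothetical behavioural features and weights (illustrative only).
WEIGHTS = {
    "flags_upheld_ratio": 2.0,      # flags that reviewers agreed with
    "flags_rejected_ratio": -1.5,   # flags dismissed as unfounded
    "shared_debunked_posts": -2.5,  # posts later marked false by fact-checkers
}
BIAS = 0.5

def trust_score(features: dict) -> float:
    """Map behavioural features to a score between 0 and 1 (logistic)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

def explain(features: dict) -> dict:
    """One reading of 'meaningful information about the underlying
    logic': each feature's signed contribution to the score."""
    return {k: WEIGHTS[k] * v for k, v in features.items()}

user = {"flags_upheld_ratio": 0.8,
        "flags_rejected_ratio": 0.1,
        "shared_debunked_posts": 0.0}
score = trust_score(user)           # always strictly between 0 and 1
contributions = explain(user)       # per-feature explanation
```

Even for such a trivially simple model, the score alone tells a user nothing; only the per-feature breakdown gives something that could be checked and contested.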

Send an access request

Do you live in the European Union and want to exercise your right to obtain your trustworthiness score? Send an access request to Facebook! You can send your request by post, by email or via Facebook’s online form. To help you exercise your access right, Bits of Freedom has created a template request letter. You can find it here.

Read more:

Example of request letter to send by regular mail (.odt file download link)

Example text to use for email / online form (.odt file download link)

Don’t make your community Facebook-dependent! (21.02.2018)

Press Release: “Fake news” strategy needs to be based on real evidence, not assumption (26.04.2018)

(Contribution by David Korteweg, EDRi member Bits of Freedom, the Netherlands)




25 Jul 2018

New Protocol on cybercrime: a recipe for human rights abuse?


From 11 to 13 July 2018, the Electronic Frontier Foundation (EFF) and European Digital Rights (EDRi), together with Access Now, took part in the Octopus Conference 2018 at the Council of Europe to present the views of a global coalition of civil society groups on the negotiations among more than 60 countries on law enforcement access to electronic data in the context of criminal investigations.


There is a global consensus that mutual legal assistance among countries needs to be improved. However, recognising its inefficiencies should not translate into bypassing Mutual Legal Assistance Treaties (MLATs) by going directly to service providers, thereby losing the procedural and human rights safeguards embedded in them. Some of the issues with MLATs can be solved by, for example, technical training for law enforcement authorities, the simplification and standardisation of forms, single points of contact, or increased resources. For instance, thanks to a recent US “MLAT reform programme” that increased the resources available for handling MLATs, the US Department of Justice reduced the number of pending cases by a third.

There is a worrisome legislative trend, emerging through the US CLOUD Act and the European Commission’s “e-evidence” proposals, towards accessing data directly from service providers. This trend risks creating a race to the bottom in terms of due process, court oversight, fair trials, privacy and other human rights safeguards.

If the current Council of Europe negotiations on cybercrime focused on improving mutual legal assistance, they could offer an opportunity to create a human rights-respecting alternative to dangerous shortcuts such as those proposed in the US CLOUD Act or the EU proposals. However, civil rights groups have serious concerns from both a procedural and a substantive perspective.

This process is being conducted without the regular and inclusive participation of civil society or data protection authorities. In April 2018, nearly 100 NGOs wrote to the Council of Europe’s Secretary General because they were not duly included in the process. While the Council of Europe issued a response, civil society groups reiterated that civil society participation and inclusion go beyond a public consultation, participation in a conference and comments on texts preliminarily agreed by States. Human rights NGOs should be present in drafting meetings, both to learn from the law enforcement expertise of the 60+ countries and to provide human rights expertise in a timely manner.

From a substantive point of view, the process is built on the faulty premise that the anticipated signatories to the Convention on Cybercrime (“the Budapest Convention”) share a common understanding of basic human rights protections and legal safeguards. As a result of this presumption, it is unclear how the proposed Protocol can provide the strong data protection and critical human rights vetting mechanisms that are embedded in the current MLAT system.

One of the biggest challenges in the Council of Europe process to draft an additional protocol to the Cybercrime Convention, a challenge that was already evident in the initial Convention itself, and in its Article 15 in particular, is the assumption that signatory Parties share (and will continue to share) a common baseline of understanding with respect to the scope and nature of human rights protections, including privacy.

Unfortunately, there is neither a harmonised legal framework among the countries participating in the negotiations nor a shared human rights understanding. Experience shows that countries need to bridge the gap between national legal frameworks and practices on the one hand, and the human rights standards established by the case law of the highest courts on the other. For example, the Court of Justice of the European Union (CJEU) has held on several occasions that blanket data retention is illegal under EU law. Yet the majority of EU Member States still have blanket data retention laws in place. Other states involved in the protocol negotiations, such as Australia, Mexico and Colombia, have implemented precisely the type of sweeping, unchecked and indiscriminate data retention regimes that the CJEU has ruled out.

As a result of this lack of harmonised human rights protections and legal safeguards, the forthcoming protocol proposals risk:

– Bypassing critical human rights vetting mechanisms inherent in the current MLAT system that are currently used to, among other things, navigate conflicts in fundamental human rights and legal safeguards that inevitably arise between countries;

– Encoding practices that fall below the minimum standards being established in various jurisdictions, by ignoring human rights safeguards established primarily by the case law of the European Court of Human Rights and the Court of Justice of the European Union, among others;

– Including few substantive limits, relying instead on signatories’ legal systems to provide sufficient safeguards against human rights violations in cross-border access situations, backed only by a general, non-specific requirement that signatories ensure adequate safeguards (see Article 15 of the Cybercrime Convention), without any enforcement mechanism.

Parties to the negotiations should render human rights safeguards operational, as human rights are a cornerstone of our society. As a starting point, NGOs urge countries to sign, ratify and diligently implement Convention 108+ on data protection. In this regard, EDRi and EFF welcome the comments of the Council of Europe’s Convention 108 Committee.

Finally, civil society groups urge that the forthcoming protocol not establish any mandatory or voluntary mechanism for obtaining data directly from companies without appropriate safeguards. While the proposals seem to be limited to subscriber data, there is a serious risk that the interpretation of what constitutes subscriber data will be expanded so as to lower safeguards, including access to metadata directly from providers through non-judicial requests or demands.

This could conflict with clear court rulings of the European Court of Human Rights, such as Benedik v. Slovenia, and even with national case law, such as that of Canada’s Supreme Court. The global NGO coalition therefore reiterates that the focus should be on making mutual legal assistance among countries more efficient.

Civil society is ready to engage in the negotiations. For now, however, the future of the second additional protocol to the Cybercrime Convention remains unclear, raising many concerns and questions.

Read more:

Joint civil society response to discussion guide on a 2nd Additional Protocol to the Budapest Convention on Cybercrime (28.06.2018)

How law enforcement can access data across borders — without crushing human rights (04.07.2018)

Nearly 100 public interest organisations urge Council of Europe to ensure high transparency standards for cybercrime negotiations (03.04.2018)

A Tale of Two Poorly Designed Cross-Border Data Access Regimes (25.04.2018)

Cross-border access to data has to respect human rights principles (20.09.2017)

(Contribution by Maryant Fernández Pérez, EDRi, and Katitza Rodríguez, EFF)



21 Jun 2018

ENAR and EDRi join forces for diligent and restorative solutions to illegal content online

By Maryant Fernández Pérez

The European Network Against Racism (ENAR) and European Digital Rights (EDRi) joined forces to draw up some core principles in the fight against illegal content online. Our position paper springs both from the perspective of victims of racism and that of free speech and privacy protection.

The European Commission has so far not been successful in tackling illegal content in a way that provides a redress mechanism for victims. In fact, the European Commission has for too long focused on a “public relations regime” measuring how quickly and how many online posts have been deleted, without a diligent approach to addressing the deeper problems behind the removed content. Indeed, the European Commission has continuously promoted rather superficial “solutions” that do not deal with the problems faced by victims of illegal activity in a meaningful way.

At the same time, the European Commission’s approach is undermining people’s rights to privacy and freedom of expression by urging and pressuring internet giants to take over privatised law enforcement functions. As a consequence, ENAR and EDRi have agreed on a joint position paper, in line with our commitment to ensure fundamental rights for all.

Our joint position paper relies on four basic principles:

1. No place for arbitrary restrictions – Any measure that is implemented must be predictable and subject to real accountability.

2. Diligent review processes – Any measure must be implemented on the basis of neutral assessment, rather than being left entirely to private parties, particularly as they may have significant conflicts of interest.

3. Learning lessons – Any measure implemented must be subject to thorough evidence-gathering and review processes.

4. Different solutions for different problems – No superficial measure in relation to incitement to violence or hatred should be implemented without clear obligations on all relevant stakeholders to play their role in dealing with the content in a comprehensive manner. Illegal racist content inciting violence or discrimination should, if it meets the criminal threshold, be referred to competent and properly resourced law enforcement authorities for adequate sanctions. States must also ensure that laws on racism and incitement to violence are based on solid evidence and respect international human rights law.

This paper follows cooperation between the two organisations over the past few years to bring the digital rights community and the anti-racist movement together in a more comprehensive way. The common initiative comes at a time when the European Commission is consulting stakeholders and individuals on how to tackle illegal content online, with a deadline of 25 June 2018. EDRi has developed an answering guide for individuals who consider that the European Union should take a diligent, long-term approach that protects both the victims of illegal content, such as racism online, and the victims of free speech restrictions.

(Contribution by Maryant Fernández Pérez, EDRi Senior Policy Advisor)

Read more:

ENAR-EDRi Joint position paper: Tackling illegal content online – principles for efficient and restorative solutions (20.06.2018)

EDRi Answering guide to EU Commission’s “illegal” content “consultation” (13.06.2018)

Commission’s position on tackling illegal content online is contradictory and dangerous for free speech (28.09.2017)

EU Commission’s Recommendation: Let’s put internet giants in charge of censoring Europe (28.09.2017)

26 Apr 2018

LEAK: British EU Commissioner: ID check & prior approval for online posts

By Joe McNamee

In a letter to Commissioner Mariya Gabriel obtained by EDRi1, the British European Commissioner, Sir Julian King, makes it clear that not only does he no longer find it acceptable for people to communicate online without prior approval, he also objects to people communicating without being identified. Commissioner King is pushing the European Union towards an internet where freedom of expression is strangled by filtering and ID checks.


For the past year, Commissioner King and his services have been strongly pushing for “upload filtering (pdf)” – the automatic checking and approval of all uploads, in all formats, before they are put online. The aim is to ensure that nothing can be uploaded or re-uploaded to the internet that was previously removed on the basis of the law or of an internet company’s arbitrary terms of service, or that an artificial intelligence programme guesses to be unwelcome or illegal. If the European Commission succeeds in getting this principle accepted by the European Parliament in the Copyright Directive (the vote is scheduled for 20-21 June 2018), it plans to rush out new legislation within weeks to cover other forms of content. It seems that some Members of the European Parliament (MEPs) are already being lobbied to push for this.

Paradoxically, while the European Commission makes populist demands that “all parties” make “more efforts and faster progress” on removing “illegal” content, the Commission itself has no idea how many items of allegedly illegal content flagged by the EU police cooperation agency Europol led to an investigation or a prosecution – hardly evidence of a serious, diligent approach “from all sides”. “From all sides, except ours” might be more accurate.

ID Checks

Now, acting on his own initiative, Commissioner King has decided that “voluntary” identification (carried out by companies eager to collect as much data about us as possible) is the next battle – this time in the fight against “online disinformation” (whatever that may mean) and against abuse of data (collecting data as a way of preventing collected data from being abused). Facebook’s “real-name policy” has previously caused demonstrable harm to vulnerable and marginalised groups.

In the letter, King proposes multiple ways of achieving this control – such as through the WHOIS database of domain name owners, through surveillance of IPv6 internet protocol numbers (the European Court of Human Rights ruled this week (pdf) that a court order is needed to gain access to IP address data), “verified pseudonymity”, and “other identification mechanisms”.

UK perspective

Coincidentally or not, the British Conservative government, which appointed Commissioner King, last week launched an attack on social media companies for not properly verifying the ages of children. Social media companies, which profit from exploitation of our data, are unlikely to be very unhappy about government pressure to gather still more personal data.

When not fighting “fake news”, the government that appointed Commissioner King allegedly spent more than one million pounds on negative Facebook adverts attacking the leader of the main UK opposition party during the 2017 general election. This is highly unlikely to be the kind of activity that Commissioner King is referring to when he talks about plans to “further limit the possibilities for using mined personal information for certain specific purposes, in particular political ones”.

EDRi will keep working to ensure that EU policy-makers respect the Charter of Fundamental Rights of the European Union when proposing any legislative or non-legislative action to privatise law enforcement functions.

1 While we know that the Financial Times (paywalled), if not others, has obtained a copy of this letter, we are not aware of it having been made public before today.


24 Apr 2018

ePrivacy: Civil society letter calls to ensure privacy and reject data retention


On 23 April 2018, EDRi, together with other civil society organisations, sent a follow-up to our previous open letter to the permanent representations of EU Member States in Brussels. The letter highlighted the importance of the ongoing reform of Europe’s ePrivacy legislation for strengthening individuals’ rights to privacy and freedom of expression and for rebuilding trust in online services, in particular in the light of the revelations of the Cambridge Analytica scandal.

Open letter to European member states on the ePrivacy reform

23 April 2018

Dear Minister,
Dear Member of the WP TELE,

We, the undersigned organisations, support the ongoing and much-needed efforts to reform Europe’s ePrivacy legislation. As we mentioned in our recent open letter, the reform is essential in order to strengthen individuals’ rights to privacy and freedom of expression across the EU and to rebuild trust in online services, in particular given the revelations of the Cambridge Analytica scandal.1

Despite the urgent need to protect the confidentiality of communications, we are aware of the political difficulties that were met during debates in Council and at Working Party level, specifically regarding Article 11 of the proposed ePrivacy Regulation.

Given these difficulties and following the recent publication of the full document WK 11127/2017,2 we would like to highlight a number of legal points that may help move the discussion forward:

– The Court of Justice of the European Union (CJEU) clarified, in two different judgments (Digital Rights Ireland, joined cases C-293/12 and C-594/12, and Tele2/Watson, joined cases C-203/15 and C-698/15), that mandatory bulk retention of communications data breaches the Charter of Fundamental Rights. Any attempt to subvert CJEU case law by adding “clarity to the legal context” without a legal basis that respects the Charter is a direct attack on the most basic foundations of the European Union and should be dismissed. In fact, the current legal framework (the ePrivacy Directive, Directive 2002/58) already provides legal clarity, since mandatory retention of metadata for the purpose of prevention, investigation, detection or prosecution of criminal offences, as well as access to retained metadata for this purpose, is regulated in its Article 15(1).

– A Regulation aimed at protecting personal data and confidentiality of electronic communications would be deprived of its purpose if certain types of processing (“processing for law enforcement purposes”) are completely excluded from its scope. This was also noted by the Court of Justice in paragraph 73 of the Tele2-Watson judgment. Furthermore, such processing requires specific safeguards defined by the Court and must be necessary and proportionate.

– Finally, we have also noted attempts by a number of delegations to introduce a minimum storage period (of six months) for all categories of data processed under Article 6(2)(b). If approved, this would impose indiscriminate retention of personal data of a kind that the Court of Justice of the European Union has already ruled unlawful in Tele2/Watson. If Article 6(2)(b) establishes a legal basis for processing communications data in order to maintain or restore the security of electronic communications networks and services, or to detect errors, attacks and abuse of these networks and services, the processing should still be limited to the duration necessary for that purpose. On top of this, the general principles of Article 5 GDPR should apply, e.g. storage limitation under Article 5(1)(e). If the technical purpose can be achieved with anonymised data, there is no justification for processing data relating to identified or identifiable end-users. Setting a minimum mandatory retention period for communications data processed under Article 6(2)(b) would weaken the level of protection guaranteed under the GDPR, which is not only unacceptable but also contradicts the concept of lex specialis.

We are aware of the political difficulties raised in Council around the issue of data retention; however, the clarity provided by the CJEU in two landmark rulings on the matter cannot and must not simply be ignored. We strongly encourage you to keep all of the legal points above in mind in the ongoing debates. We count on the Council to swiftly conclude a general approach on the ePrivacy Regulation, which should include a legally sound Article 11 rooted in respect for the EU Charter and CJEU case law, to provide law enforcement authorities with the legal certainty needed to accomplish their duties.3

Yours faithfully,

European Digital Rights

Privacy International


IT-Political Association of Denmark


https://edri.org/files/eprivacy/20180327-ePrivacy-openletter-final.pdf and https://edri.org/cambridge-analytica-access-to-facebook-messages-a-privacy-violation

03 Apr 2018

Nearly 100 public interest organisations urge Council of Europe to ensure high transparency standards for cybercrime negotiations

By Maryant Fernández Pérez

(This blogpost is also available in French and Spanish)

Today, 3 April 2018, European Digital Rights (EDRi), along with 93 civil society organisations from across the globe, sent a letter to the Secretary General of the Council of Europe, Thorbjørn Jagland. The letter requests transparency and meaningful civil society participation in the Council of Europe’s negotiations of the draft Second Additional Protocol to the Convention on Cybercrime (also known as the “Budapest Convention”) – a new international text that will deal with cross-border access to data by law enforcement authorities. According to the Terms of Reference for the negotiations, it may include ways to improve Mutual Legal Assistance Treaties (MLATs) and allow “direct cooperation” between law enforcement authorities and companies to access people’s “subscriber information”, order “preservation” of data and make “emergency requests”.

The upcoming Second Additional Protocol is currently being discussed at the Cybercrime Convention Committee (T-CY) of the Council of Europe, a committee that gathers the States Parties to the Budapest Convention on Cybercrime along with other observer and “ad hoc” countries and organisations. The T-CY aims to finalise the Second Additional Protocol by December 2019. While the Council of Europe has made clear its intention for “close interaction with civil society”, civil society groups are asking to be included throughout the entire process – not just during the Council of Europe’s Octopus Conferences.

Transparency and opportunities for input are needed continuously throughout the process. This ensures that civil society can listen to Member States, and provide targeted advice to the specific discussions taking place. Our opinions can build upon the richness of the discussion among States and experts, a discussion that civil society will miss if we are not invited to participate throughout the process.

— the letter reads

Current negotiations raise “multiple challenges for transparency, participation, inclusion and accountability,” despite the fact that the Council of Europe’s committees are traditionally very inclusive and transparent. We are requesting the T-CY to:

develop a detailed plan for online debriefing sessions after each meeting, both plenary and drafting, and to invite civil society as experts in the meetings, as is customary in all other Council of Europe Committee sessions. With a diligent approach to making all possible documents public and proactively engaging with global civil society, the Council of Europe can both build on its exemplary approach to transparency and ensure that the outcome of this process is of the highest quality and achieves the widest possible support.

In light of the passing of the CLOUD Act in the United States that undermines the rights to privacy and other rights, the forthcoming proposal of the European Union on e-evidence, and other initiatives, it is vitally important that the T-CY listens to and engages with civil society proactively and in a timely manner. Civil society wants to engage in this process to ensure the new protocol will uphold the highest human rights standards.

The letter is available in English, French and Spanish.

The letter was coordinated by European Digital Rights (EDRi) and the Electronic Frontier Foundation (EFF) with the help of IFEX, Asociación por los Derechos Civiles (ADC), Derechos Digitales, and Association for Progressive Communications (APC).