15 Jan 2020

Support our work by investing in a piece of e-clothing!


Your privacy is increasingly under threat. European Digital Rights works hard to have you covered. But there’s only so much we can do.

Help us help you. Help us get you covered.


Check out our 2020 collection!*

*The items listed below are e-clothes. That means they are electronic. Not tangible. But still very real – like many other things online.

Your winter stock(ings) – 5€
A pair of hot winter stockings can really help one get through cold and lonely winter days. Help us to fight for your digital rights by investing in a pair of these superb privacy-preserving fishnet stockings. This delight is also a lovely gift for someone special.

A hat you can leave on – 10€
Keep your head undercover with this marvelous piece of surveillance resistance. Adaptable to any temperature and – for the record – to several CCTV models, this item really lives up to its value. This hat is an indispensable accessory when visiting your favourite public space packed with facial recognition technologies.

Winter/Summer Cape – 25€
Are you feeling heroic yet? Our flamboyant Winter/Summer cape is designed to keep you warm and cool. This stylish accessory takes the weight off your shoulders – purchase it and let us take care of fighting for your digital rights!

Just another White T-Shirt – 50€
A white t-shirt can do wonders when you’re trying to blend in with a white wall. This wildly unexciting but versatile classic is one of the uncontested fundamental pillars of your privacy enhancing e-wardrobe.

THE privacy pants ⭐️ – 100€
This ultimate piece of resistance is engineered to keep your bottom warm in the coldest winter, but also aired up during the hottest summer days. Its colour guarantees the ultimate tree (of knowledge) look. The item comes with a smart zipper.

Anti-tracksuit ⭐️ – 250€
Keep your digital life healthy with the anti-tracking tracksuit. The fabric is engineered to bounce out any attempt to get your privacy off track. Plus, you can impress your beloved babushka too.

Little black dress ⭐️ – 500€
Whether at a work cocktail party, a funeral, shopping spree or Christmas party – this dress will turn you into the center of attention, in a (strangely) privacy-respecting manner.

Sew your own ⭐️ – xxx€
Unsure of any of the items above? Let your inner tailor free, customise your very own unique, designer garment, and put a price tag of your choice on it.

⭐️ Items priced at 100€ or more are delivered with an (actual, analog, non-symbolic) EDRi iron-on privacy patch that you can attach to your existing (actual, analog, non-symbolic) piece of clothing or accessory. If you wish to receive this additional style and privacy enhancer, don’t forget to provide us with your postal address (either via the donation form, or in your bank transfer message)!

Question? Remark? Idea? Please contact us brussels [at] edri [dot] org !

15 Jan 2020

Your face rings a bell: Three common uses of facial recognition

By Ella Jakubowska

Not all applications of facial recognition are created equal. As we explored in the first and second instalments of this series, different uses of facial recognition pose distinct but equally complex challenges. Here we sift through the hype to analyse three increasingly common uses of facial recognition: tagging pictures on Facebook, automated border control gates, and police surveillance.

The chances are that your face has been captured by a facial recognition system, if not today, then at least in the last month. It is worryingly easy to stroll through automated passport gates at an airport preoccupied with the thought of seeing your loved ones rather than with potential threats to your privacy. And you can quite happily walk through a public space or shop without being aware that you are being watched, let alone that your facial expressions might be used to label you a criminal. Social media platforms increasingly employ facial recognition, and governments around the world have rolled it out in public. What does this mean for our human rights? And is it too late to do something about it?

First: What the f…ace? – Asking the right questions about facial recognition!

As the use of facial recognition skyrockets, it can feel that there are more questions than answers. This does not have to be a bad thing: asking the right questions can empower you to challenge the uses that will infringe on your rights before further damage is done.

A good starting point is to look at impacts on fundamental rights such as privacy, data protection, non-discrimination and freedoms, and compliance with international standards of necessity, remedy and proportionality. Do you trust the owners of facial recognition systems (or indeed other types of biometric recognition and surveillance) whether public or private, to keep your data safe and to use it only for specific, legitimate and justifiable purposes? Do they provide sufficient evidence of effectiveness, beyond just the vague notion of “public security”?

Going further, it is important to ask societal questions like: does being constantly watched and analysed make you feel safer, or just creeped out? Will biometric surveillance substantially improve your life and your society, or are there less invasive ways to achieve the same goals?

Looking at biometric surveillance in the wild

As explored in the second instalment of this series, many public face surveillance systems have been shown to violate rights and been deemed illegal by data protection authorities. Even consent-based, optional applications may not be as unproblematic as they first seem. This is our “starter for ten” for thinking through the potentials and risks of some increasingly common uses of facial verification and identification – we’ll be considering classification and other biometrics next time. Think we’ve missed something? Tweet us your ideas @edri using #FacialRecognition.

Automatic tagging of pictures on Facebook

Facebook uses facial recognition to tag users in pictures, among other “broader” uses. Under public pressure, it made the feature opt-in in September 2019 – but this applies only to new users, not existing ones.

Pros:
  • Saves time compared to manual tagging
  • Alerts you when someone has uploaded a picture of you without your knowledge

Cons:
  • The world’s biggest ad-tech company can find you on photos or videos across the web – forever
  • Facebook will automatically scan, analyse and categorise every photo uploaded
  • You will automatically be tagged in photos you might want to avoid
  • Higher error rates, especially for people with very light or very dark skin


Creepy, verging on dystopian, especially as the feature is on by default for some users (it can be switched off in your Facebook settings). We’ll leave it to you to decide if the potentials outweigh the risks.

Automated border control (ePassport gates)

Automated border control (ABC) systems, sometimes known as e-gates or ePassport gates, are self-service systems that authenticate travellers against their identity documents – a type of verification.
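The distinction between verification (as at an e-gate) and identification (as in watch-list surveillance) can be sketched with toy face embeddings. Everything below is illustrative, not any vendor’s actual system: real systems use high-dimensional embeddings, and the 0.9 threshold is an assumption.

```python
import math

def cosine_similarity(a, b):
    # Compare two face-embedding vectors (toy 3-d vectors here;
    # real systems use 128+ dimensions).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.9  # assumed decision threshold, purely illustrative

def verify(live_face, passport_face):
    """1:1 verification, as at an e-gate: match the live scan against
    the single template stored in the traveller's own passport chip."""
    return cosine_similarity(live_face, passport_face) >= THRESHOLD

def identify(live_face, database):
    """1:N identification, as in face surveillance: search the live
    scan against every face in a central watch-list database."""
    return [name for name, template in database.items()
            if cosine_similarity(live_face, template) >= THRESHOLD]

# Invented toy data
alice_passport = [0.9, 0.1, 0.4]
alice_live = [0.88, 0.12, 0.41]
watchlist = {"alice": alice_passport, "bob": [0.1, 0.9, 0.2]}

print(verify(alice_live, alice_passport))  # 1:1 check against one document
print(identify(alice_live, watchlist))     # 1:N search against a database
```

The privacy difference is structural: verification only needs the template in your own document, while identification presupposes a central database of faces to search.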

Pros:
  • Suggested as a solution for congestion as air travel increases
  • Matches you to your passport, rather than a central database – so in theory your data isn’t stored

Cons:
  • Longer queues for those who cannot or do not want to use it
  • Lack of evidence that it saves time overall
  • Difficult for elderly passengers to use
  • May cause immigration issues or tax problems
  • Normalises face recognition
  • Disproportionately error-prone for people of colour, leading to unjustified interrogations
  • Supports state austerity measures

In practice:
  • Stats vary wildly, but credible sources suggest the average border guard takes 10 seconds to process a traveller – faster than the best gates, which take 10-15 seconds
  • Starting to be used in conjunction with other data to predict behaviour
  • High volume of human intervention needed due to user or system errors
  • Extended delays for the 5% of people falsely rejected
  • Evidence of falsely criminalising innocent people
  • Evidence of falsely accepting people with wrong passport

Evidence of effectiveness can be contradictory, but the impacts – especially on already marginalised groups – and the ability to combine face data with other data to infer additional information about travellers carry major potential for abuse. We suspect that offline solutions such as funding more border agents and investing in queue management could be equally efficient and less invasive.

Police surveillance

Police forces across Europe – often in conjunction with private companies – are using surveillance cameras to perform live identification of people in public spaces, a practice sometimes referred to as face surveillance.

Pros:
  • Facilitates the analysis of video recordings in investigations

Cons:
  • Police hold a database of faces and are able to track and follow every individual ever scanned
  • Replaces investment in police recruitment and training
  • Can discourage use of public spaces – especially those who have suffered disproportionate targeting
  • Chilling effect on freedom of speech and assembly, an important part of democratic participation
  • May also rely on pseudo-scientific emotion “recognition”
  • Legal ramifications for people wrongly identified
  • No ability to opt out


Increased public security could be achieved by measures to tackle issues such as inequality or antisocial behaviour or generally investing in police capability rather than surveillance technology.

Facing reality: towards a mass surveillance society?

Without intervention, facial recognition is on a path to omniscience. In this post, we have only scratched the surface. However, these examples identify some of the different actors that may want to collect and analyse your face data, what they gain from it, and how they may (ab)use it. They have also shown that the touted benefits of facial surveillance frequently amount to cost-cutting rather than benefits to users.

We’ve said it before: tech is not neutral. It reflects and reinforces the biases and world views of its makers. The risks are amplified when systems are deployed rapidly, without considering the big picture or the slippery slope towards authoritarianism. The motivations behind each use must be scrutinised and proper assessments carried out before deployment. As citizens, it is our right to demand this.

Your face has a significance beyond just your appearance – it is a marker of your unique identity and individuality. But with prolific facial recognition, your face becomes a collection of data points which can be leveraged against you and infringe on your ability to live your life in safety and with privacy. With companies profiting from the algorithms covertly built using photos of users, faces are literally commodified and traded. This has serious repercussions on our privacy, dignity and bodily integrity.

Facial Recognition and Fundamental Rights 101 (04.12.2019)

The many faces of facial recognition in the EU (18.12.2019)

Data-Driven Policing: The Hardwiring of Discriminatory Policing Practices across Europe (05.11.2019)

Facial recognition technology: fundamental rights considerations in the context of law enforcement (27.11.2019)

What the “digital welfare state” really means for human rights (08.01.2020)

Resist Facial Recognition

(Contribution by Ella Jakubowska, EDRi intern)

18 Dec 2019

The many faces of facial recognition in the EU

By Ella Jakubowska

We previously launched the first article and case study in a series exploring the human rights implications of facial recognition technology. In this post, we look at how different EU Member States, institutions and other countries worldwide are responding to the use of this tech in public spaces.

Live facial recognition technology is increasingly used to identify people in public, often without their knowledge or properly-informed consent – a practice sometimes referred to as face surveillance – and concerns about its use in public places are gaining attention across Europe. Public places are not well-defined in law, but can include open spaces like parks or streets, publicly-administered institutions like hospitals, spaces controlled by law enforcement such as borders, and – arguably – any other places where people wanting to take part in society have no ability to opt out from entering. As it stands, there is no EU consensus on either the legitimacy or the desirability of using facial recognition in such spaces.

Public face surveillance is being used by many police forces across Europe to look out for people on their watch-lists; for crowd control at football matches in the UK; and in tracking systems in schools (although so far, attempts to do this in the EU have been stopped). So-called “smart cities” – where technologies that involve identifying people are used to monitor environments with the outward aim of making cities more sustainable – have been implemented to some degree in at least eight EU Member States. Outside the EU, China is reportedly using face surveillance to crack down on the civil liberties of pro-democracy activists in Hong Kong, and there are mounting fears that Chinese surveillance tech is being exported to the EU and even used to influence UN facial recognition standards. Such issues have brought facial recognition firmly onto the human rights agenda, raising awareness of its (mis)use by both democratic and authoritarian governments.

How is the EU grappling with the facial recognition challenge?

Throughout 2019, a number of EU Member States responded to the threat of facial recognition, although their approaches reveal many inconsistencies. In October 2019, the Swedish Data Protection Authority (DPA) – the national body responsible for personal data under the General Data Protection Regulation (GDPR) – approved the use of facial recognition technology for criminal surveillance, finding it legal and legitimate (subject to clarification of how long the biometric data will be kept). Two months earlier, it levied a fine of 20 000 euro for an attempt to use facial recognition in a school. Similarly, the UK DPA has advised police forces to “slow down” due to the volume of unknowns – but has stopped short of calling for a moratorium. UK courts have failed to see their DPA’s problem with facial recognition, despite citizens’ fears that it is highly invasive. In the only European ruling so far, Cardiff’s high court found police use of public face surveillance cameras to be proportionate and lawful, despite accepting that this technology infringes on the right to privacy.

The French DPA took a stronger stance than the UK’s, advising a school in the city of Nice that the intrusiveness of facial recognition means that its planned face recognition project cannot be implemented legally. They emphasised the “particular sensitivity” of facial recognition due to its association with surveillance and its potential to violate rights to freedom and privacy, and highlighted the enhanced protections required for minors. Importantly, France’s DPA concluded that legally-compliant and equally effective alternatives to face recognition, such as using ID badges to manage student access, can and should be used instead. Echoing this stance, the European Data Protection Supervisor, Wojciech Wiewiórowski, issued a scathing condemnation of facial recognition, calling it a symptom of rising populist intolerance and “a solution in search of a problem.”

A lack of justification for the violation of fundamental rights

However, as in the UK, the French DPA’s views have frequently clashed with other public bodies. For example, the French government is pursuing the controversial Alicem digital identification system despite warnings that it does not comply with fundamental rights. There is also an inconsistency in the differentiation made between the surveillance of children and adults. The reason given by both France and Sweden for rejecting child facial recognition is that it will create problems for them in adulthood. Using this same logic, it is hard to see how the justification for any form of public face surveillance – especially when it is unavoidable, as in public spaces – would meet legal requirements of legitimacy or necessity, or be compliant with the GDPR’s necessarily strict rules for biometric data.

The risks and uncertainties outlined thus far have not stopped Member States accelerating their uptake of facial recognition technology. According to the EU’s Fundamental Rights Agency (FRA), Hungary is poised to deploy an enormous facial recognition system for multiple reasons including road safety and the Orwellian-sounding “public order” purposes; the Czech Republic is increasing its facial recognition capacity in Prague airport; “extensive” testing has been carried out by Germany and France; and EU-wide migration facial recognition is in the works. EDRi member SHARE Foundation have also reported on its illegal use in Serbia, where the interior ministry’s new system has failed to meet the most basic requirements under law. And of course, private actors also have a vested interest in influencing and orchestrating European face recognition use and policy: lobbying the EU, tech giant IBM has promoted its facial recognition technology to governments as “potentially life-saving” and even funded research that dismisses concerns about the ethical and human impacts of AI as “exaggerated fears.”

As Interpol admits, “standards and best practices [for facial recognition] are still in the process of being created.” Despite this, facial recognition continues to be used in both public and commercial spaces across the EU – unlike in the US, where four cities including San Francisco have proactively banned facial recognition for policing and other state uses, and a fifth, Portland, has started legislative proceedings to ban facial recognition for both public and private purposes – the widest ban so far.

The need to ask the big societal questions

Once again, these examples return to the idea that the problem is not technological, but societal: do we want the mass surveillance of our public spaces? Do we support methods that will automate existing policing and surveillance practices – along with the biases and discrimination that inevitably come with them? When is the use of technology genuinely necessary, legitimate and consensual, rather than just sexy and exciting? Many studies have shown that – despite claims by law enforcement and private companies – there is no link between surveillance and crime prevention. Even when studies have concluded that “at best” CCTV may help deter petty crime in parking garages, this has only been with exceptionally narrow, well-controlled use, and without the need for facial recognition. And as explored in our previous article, there is overwhelming evidence that rather than improving public safety or security, facial recognition creates a chilling effect on a shocking smorgasbord of human rights.

As in the case of the school in Nice, face recognition cannot be considered necessary and proportionate when there are many other ways to achieve the same aim without violating rights. FRA agrees that general reasons of “crime prevention or public security” are neither legitimate nor legal justifications per se, and so facial recognition must be subject to strict legality criteria.

Human rights exist to help redress the imbalance of power between governments, private entities and citizens. In contrast, the highly intrusive nature of face surveillance opens the door to mass abuses of state power. DPAs and civil society, therefore, must continue to pressure governments and national authorities to stop the illegal deployment and unchecked use of face surveillance in Europe’s public spaces. Governments and DPAs must also take a strong stance towards the private sector’s development of face surveillance technologies, demanding and enforcing GDPR and human rights compliance at every step.

Facial Recognition and Fundamental Rights 101 (04.12.2019)

Your face rings a bell: Three common uses of facial recognition (15.01.2020)

In the EU, facial recognition in schools gets an F in data protection (10.12.2019)

Data-Driven Policing: The Hardwiring of Discriminatory Policing Practices across Europe (05.11.2019)

Facial recognition technology: fundamental rights considerations in the context of law enforcement (27.11.2019)

Serbia: Unlawful facial recognition video surveillance in Belgrade (04.12.2019)

At least 10 police forces use face recognition in the EU, AlgorithmWatch reveals (11.12.2019)

(Contribution by Ella Jakubowska, EDRi intern)

22 Nov 2019

ePrivacy: EU Member States push crucial reform on privacy norms close to a dead end


Today, on 22 November 2019, the Permanent Representatives Committee of the Council of the European Union (COREPER) rejected the Council’s position on a draft ePrivacy Regulation.

“In this era of disinformation and privacy scandals, refusing to ensure strong privacy protections in the ePrivacy Regulation is a step backwards for the EU,” said Diego Naranjo, Head of Policy at European Digital Rights (EDRi). “By first watering down the text and now halting the ePrivacy Regulation, the Council takes a stance to protect the interests of online tracking advertisers and to ensure the dominance of big tech. We hope the European Commission will stand on the side of citizens by defending the proposal and asking the Council to ensure a strong revised text soon in 2020.”

“The ePrivacy Regulation aims to strengthen users’ right to privacy and create protective measures against online tracking. Instead, EU states turned it into a surveillance toolkit,” said Estelle Massé, Senior Policy Analyst at EDRi member Access Now. “Today’s rejection should not be a signal that the reform cannot happen. Instead, it should be a signal that states must go back to the negotiating table and deliver what was promised to EU citizens: stronger privacy protections.”

In January 2017, the European Commission launched its proposal for a new ePrivacy Regulation, aiming to complement the General Data Protection Regulation (GDPR) and to protect the right to privacy and to the confidentiality of communications. An update to the outdated 2002 ePrivacy Directive is sorely needed – in today’s world, where technology is intertwined with our everyday life, a strong regulation is crucial to protect us against the negative impacts of “surveillance capitalism”, to safeguard the functioning of our democracies, and to put people at the core of the internet. The European Parliament took a strong stance when it adopted its position in October 2017. For over two years, the Council prevented the proposal from advancing, presenting suggestions that lowered the fundamental rights protections that were proposed by the Commission and strengthened by the Parliament.

Today, the Council voted to reject its own text. This leaves the door open for current practices that endanger citizens’ rights to continue. Now it is up to the Commission either to withdraw the entire proposal, leaving citizens unprotected, or to the Council to prepare a new text that can gather enough support to move the proposal forward. To meet the aims set for the ePrivacy Regulation, the new text should ensure privacy by design and by default, protect communications in transit and when stored, ban tracking walls, prevent backdoors to scan private communications without a court order, and avoid secondary processing of communications data without consent.

Read more:

e-Privacy revision: Document pool

EU states vote on ePrivacy reform: We were promised more privacy. Instead, we are getting a surveillance toolkit. (22.11.2019)

EU Council considers undermining ePrivacy (25.07.2018)

Five reasons to be concerned about the Council ePrivacy draft (26.09.2018)

Open letter to EU Member States: Deliver ePrivacy now! (10.10.2019)

The most recent European Council ePrivacy text (15.11.2019)

23 Oct 2019

#PrivacyCamp20: Technology and Activism

By Dean Willis

The 8th annual Privacy Camp will take place in Brussels on 21 January 2020.

With the focus on “Technology and Activism”, Privacy Camp 2020 will explore the significant role digital technology plays in activism, enabling people to bypass traditional power structures and fostering new forms of civil disobedience, but also enhancing the surveillance power of repressive regimes. Together with activists and scholars working at the intersection of technology and activism, this event will cover a broad range of topics from surveillance and censorship to civic participation in policy-making and more.

The call for panels invites classical panel submissions, but also interactive formats such as workshops. We have particular interest in providing space for discussions on and around social media and political dissent, hacktivism and civil disobedience, the critical public sphere, data justice and data activism, as well as commons, peer production, and platform cooperativism, and citizen science. The deadline for proposing a panel or a workshop is 10 November 2019.

In addition to traditional panel and workshop sessions, this year’s Privacy Camp invites critical makers to join the debate on technology and activism. We are hosting a Critical Makers Faire for counterculture and DIY artists and makers involved in activism. The Faire will provide a space to feature projects such as biohacking, wearables, bots, glitch art, and much more. The deadline for submissions to the Makers Faire is 30 November.

Privacy Camp is an annual event that brings together digital rights advocates, NGOs, activists, academics and policy-makers from Europe and beyond to discuss the most pressing issues facing human rights online. It is jointly organised by European Digital Rights (EDRi), Research Group on Law, Science, Technology & Society at Vrije Universiteit Brussel (LSTS-VUB), the Institute for European Studies at Université Saint-Louis – Bruxelles (USL-B), and Privacy Salon.

Privacy Camp 2020 takes place on 21 January 2020 in Brussels, Belgium. Participation is free and registrations open in December.

Privacy Camp 2020: Call for submissions

Privacy Camp

(Contribution by Dean Willis, EDRi intern)

23 Oct 2019

Net neutrality overhaul: 5G, zero-rating, parental control, DPI

By Epicenter.works

The Body of European Regulators for Electronic Communications (BEREC) is currently in the process of overhauling its guidelines on the implementation of Regulation (EU) 2015/2120, which forms the legal basis of the EU’s net neutrality rules. At its most recent plenary, BEREC produced new draft guidelines and opened a public consultation on this draft. The proposed changes to the guidelines seem like a mixed bag.

5G network slicing

The new mobile network standard 5G specifies the ability of network operators to provide multiple virtual networks (“slices”) with different quality characteristics over the same network infrastructure, called “network slicing”. Because end-user equipment can be connected to multiple slices at the same time, providers could use the introduction of 5G to create new products where different applications make use of different slices with their associated quality levels. In its draft guidelines, BEREC clarifies that it’s the user who has to be able to choose which application makes use of which slice. This is a welcome addition.
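What BEREC’s clarification means can be sketched as a toy model, where the operator offers slices but the application-to-slice mapping is controlled by the end-user, not the provider. Slice names and quality parameters here are invented for illustration, not taken from the 5G specifications:

```python
# Hypothetical slices with different quality characteristics,
# all offered over the same physical network infrastructure.
SLICES = {
    "low_latency": {"max_latency_ms": 10},
    "high_bandwidth": {"min_mbps": 500},
    "best_effort": {},
}

class UserEquipment:
    """Models BEREC's requirement: the user, not the operator,
    decides which application uses which slice."""

    def __init__(self):
        self.app_to_slice = {}

    def assign(self, app, slice_name):
        # The user's choice is the only thing that sets the mapping.
        if slice_name not in SLICES:
            raise ValueError(f"unknown slice: {slice_name}")
        self.app_to_slice[app] = slice_name

    def slice_for(self, app):
        # Unassigned applications fall back to best-effort service.
        return self.app_to_slice.get(app, "best_effort")

ue = UserEquipment()
ue.assign("cloud_gaming", "low_latency")
print(ue.slice_for("cloud_gaming"))  # user-chosen slice
print(ue.slice_for("email"))         # default: best_effort
```

The net-neutrality concern would arise in the opposite design, where the provider hard-wires the mapping and users cannot change it.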

Zero-rating
Zero-rating is the practice of billing the traffic used by different applications differently – in particular, not deducting the traffic created by certain applications from a user’s available data volume. This practice has been criticised because it reduces consumers’ choice regarding which applications they can use, and disadvantages new, small application providers against the big, already established players. These offers broadly come in two types: “open” zero-rating offers, where application providers can apply to become part of the programme and have their application zero-rated, and “closed” offers, where that is not the case. The draft outlines specific criteria according to which open offers can be assessed.
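The billing mechanics behind the criticism can be sketched in a few lines. The app names and volumes below are invented; the point is only that zero-rated traffic leaves the data cap untouched while everyone else’s traffic drains it:

```python
# Illustrative sketch of zero-rated billing: traffic from apps on the
# operator's zero-rating list is not deducted from the user's data cap.
ZERO_RATED = {"bigstream"}  # hypothetical partner app in an "open" offer

def bill_session(remaining_mb, app, used_mb):
    """Return the data volume left after one session."""
    if app in ZERO_RATED:
        return remaining_mb  # zero-rated: the cap is untouched
    return max(0, remaining_mb - used_mb)  # all other apps eat the cap

cap = 1000
cap = bill_session(cap, "bigstream", 500)      # partner app: cap stays at 1000
cap = bill_session(cap, "small_startup", 500)  # newcomer: cap drops to 500
print(cap)
```

This is why zero-rating disadvantages small providers: using the established partner’s service is effectively free for the user, while the newcomer’s identical traffic costs scarce data volume.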

Parental control filters

While content- and application-specific pricing is an additional challenge for small content and application providers, content-specific blocking can create even greater problems. Nevertheless, the draft contains new language that carves products such as parental control filters operated by the access provider out of the provisions of the Regulation that prohibit such blocking, subjecting them instead to a case-by-case assessment by the regulators (as is the case for zero-rating). The language does not clearly exclude filters that are sold in conjunction with the access product and are on by default, and the rules can even be read as requiring users who do not want to be subjected to the filtering to manually reconfigure each of their devices.

Deep Packet Inspection

Additionally, BEREC is also running a consultation on two paragraphs in the guidelines to which it hasn’t yet proposed any changes. These paragraphs establish important privacy protections for end-users. They prohibit access providers from using Deep Packet Inspection (DPI) when applying traffic management measures in their network, and thus protect users from having the content of their communications inspected. However, according to statements made during the debriefing session of the latest BEREC plenary, some actors want to allow providers to look at domain names, which can themselves reveal very sensitive information about the user, and which require DPI to extract from the data stream.
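The privacy difference can be illustrated with a toy packet. Header-based traffic management reads only addressing fields, while DPI reads into the payload, where things like domain names live. The packet structure and domain below are a deliberate simplification for illustration, not a real protocol parser:

```python
# Toy packet: headers carry addressing; the payload carries content.
packet = {
    "src_ip": "192.0.2.10",
    "dst_ip": "198.51.100.7",
    "dst_port": 443,
    "payload": b"...ClientHello...example-health-clinic.org...",
}

def header_based_management(pkt):
    """What the guidelines permit: classify traffic by
    addressing information (IP address and port) only."""
    return (pkt["dst_ip"], pkt["dst_port"])

def deep_packet_inspection(pkt):
    """What the guidelines prohibit: look inside the payload, where
    e.g. a requested domain name sits - and a domain name alone can
    reveal something as sensitive as a visit to a health clinic."""
    return pkt["payload"].decode(errors="ignore")

print(header_based_management(packet))  # addressing data only
print(deep_packet_inspection(packet))   # exposes the visited domain
```

This is why "just looking at domain names" is not a harmless compromise: extracting them requires exactly the payload inspection the current paragraphs prohibit.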

EDRi member epicenter.works will respond to BEREC’s consultation and encourages other stakeholders to participate. The proposed changes are significant. That is why clearer language is required, and users’ privacy needs to remain protected. The consultation period ends on 28 November 2019.

Read more:
Public consultation on the document on BEREC Guidelines on the Implementation of the Open Internet Regulation (10.10.2019)

Zero rating: Why it is dangerous for our rights and freedoms (22.06.2016)

NGOs and academics warn against Deep Packet Inspection (15.05.2019)

Net Neutrality vs. 5G: What to expect from the upcoming EU review? (05.12.2018)

(Contribution by Benedikt Gollatz, EDRi member epicenter.works, Austria)

23 Oct 2019

The sixth attempt to introduce mandatory SIM registration in Romania


A tragic failure by the police to save a teenage girl who was abducted but managed to call the 112 emergency number three times before she was murdered, led to the adoption of a new Emergency Ordinance in Romania. The law introduces several measures to improve the 112 system, one of which is mandatory SIM card registration for all prepaid users. Currently approximately ten million prepaid SIM cards are used in Romania.

This is the sixth legislative attempt in the last eight years to pass legislation for registering SIM card users despite a Constitutional Court decision in 2014 deeming it illegal. The measure was adopted through a fast legislative procedure and is supposed to enter into effect on 1 January 2020.

The main reason to introduce mandatory SIM card registration seems to be that the authorities want to locate calls to the emergency number and punish false emergency calls. However, the measure is unlikely to be effective for that purpose, as anyone who buys a SIM card can obviously give it to someone else. Another reason is to identify the caller in real emergency situations, to be able to locate them more easily and send help.

Romania is one of the few countries in the European Union where calling the emergency number without a SIM card is not possible. This has been a deliberate decision taken by Romanian authorities to limit the number of “non-urgent” calls.

What happened?

After the Emergency Ordinance was proposed, EDRi member ApTI, together with two other Romanian NGOs, launched a petition to the Ombudsman and the government calling for this law not to be adopted. After civil society’s calls for a public debate, the Ministry of Communications organised an oral hearing in which the participants were given no more than five minutes to express their views, without the possibility to have an actual dialogue. The Emergency Ordinance was adopted shortly after the hearing, despite the fact that the Romanian Constitution explicitly states that laws which affect fundamental rights cannot be adopted by emergency ordinances (Article 115 of the Romanian Constitution).

What did the court say in 2014?

In 2014, the Constitutional Court held that the “retention and storage of data is an obvious limitation of the right to personal data protection and to the fundamental rights protected by the Constitution on personal and family privacy, secrecy of correspondence and freedom of speech” (para. 43 of Decision nr. 461/2014, unofficial translation). The Court explained that restricting fundamental rights is possible only if the measure is necessary in a democratic society. The measure must also be proportionate, and must be applicable without discrimination and without affecting the essence of the right or liberty.

Collecting and storing the personal data of all citizens who buy prepaid SIM cards, merely to punish those who might abusively call the emergency number, seems like a blatantly disproportionate measure that unjustifiably limits the right to private life. At the same time, such a measure inverts the presumption of innocence and automatically assumes that all prepaid SIM card users are potentially guilty.

What’s the current status?

The Ombudsman listened to civil society’s concerns, and challenged the Ordinance at the Constitutional Court. Together with human rights NGO APADOR-CH, ApTI is preparing an amicus curiae to support the unconstitutionality claims.

In the meantime, the Ordinance moved on to parliamentary approval and the provisions related to mandatory SIM card registration were rejected in the Senate, the first chamber to debate the law. The Chamber of Deputies can still introduce modifications.

Asociatia pentru Tehnologie si Internet (ApTI)

Petition against the Emergency Ordinance on mandatory SIM card registration (only in Romanian, 12.08.2019)

ApTI’s response to the public consultation on Emergency Ordinance on mandatory SIM card registration (only in Romanian, 21.08.2019)

Constitutional Court decision nr. 461/2014 (only in Romanian)

Timeline of legislative initiatives to introduce mandatory SIM card registration (only in Romanian)

(Contribution by Valentina Pavel, EDRi member ApTI, Romania)

23 Oct 2019

EU Commissioners candidates spoke: State of play for digital rights

By Ella Jakubowska

On 1 November 2019, the new College of European Commissioners – comprising 27 representatives (one from each EU Member State), rather than the usual 28, due to Brexit – is scheduled to take its seat for the next five years, led by incoming President-elect Ursula von der Leyen.

A leading role in Europe’s digital future

EU Commissioners are a powerful bunch: as the executive branch of the European Union – complementing the European Parliament and the Council of the European Union as legislators, and the Court of Justice of the European Union (CJEU) as judiciary – the College’s wide-ranging responsibilities cover EU policy, law, budget, and “political and strategic direction”. With digitalisation an issue that transcends borders, the choice of Commissioners could have an impact on digital rights across the world.

Between 30 September and 8 October 2019, the Commissioners-designate underwent marathon confirmation hearings in the European Parliament. These hearings give the EU's elected representatives (Members of the European Parliament, MEPs) an opportunity, before voting on the Commissioners-designate, to question them about their capacities and potential priorities if elected. Among the three who did not make the cut was France's nominee for the Internal Market, Sylvie Goulard, whose late-stage rejection may delay the start of the new Commission.

A shared task to update Europe for the digital age

Five of the incoming Commissioners’ portfolios are predicted to have a significant influence on digital policy. Carved up by President von der Leyen, their overlapping responsibilities to make Europe fit for the digital age could make or break citizens’ rights to privacy, data protection and online freedoms in general:

  • Sweden's Ylva Johansson, Commissioner-designate for Home affairs, will inherit a portfolio including cybercrime, the terrorist content Regulation and issues relating to privacy and surveillance. Whilst her hearing was relatively light on digital questions, it was certainly heavy on evasive answers. Her insistence on fundamental rights was a good start, but her call for compromise between security and privacy fell into the age-old myth that the two rights are mutually exclusive.
  • Belgium’s Didier Reynders, Commissioner-designate for Justice and Consumers, championed rights by committing to enforce the General Data Protection Regulation (GDPR) to its fullest extent. On Artificial Intelligence (AI) and data protection, he promised swift law, safety, trust, transparency, and for those making or judging the law to better understand the impacts of algorithmic decisions. He cited plans for a collective redress position in November.
  • No-longer-Commissioner-designate Sylvie Goulard, of France, lost her chance to oversee the Internal Market. Although Goulard pitched increased digital education and maintaining the EU's leadership on data policy, MEPs were far more concerned with her past. Accusations of impropriety in her former role as French defence minister, and of high earnings as a private consultant while in office, led MEPs to conclude that she lacked the integrity to be a Commissioner. Update: Thierry Breton has been appointed as the new Commissioner-designate for the Internal Market. (7 November 2019)
  • The Czech Republic’s Věra Jourová (current Commissioner for Justice) made her case as Commissioner-designate for Values and transparency. Democracy, freedom of expression and cracking down on disinformation were key topics. Despite an understated performance, she called Europeans “the safest people on the planet.” She is right that GDPR sets a strong global standard, but it has faced a rocky implementation, and as of today requires further efforts to ensure the harmonisation that the Regulation prescribed.
  • Last was Denmark's Margrethe Vestager, nominated as Executive Vice-President for a Europe fit for the digital age while continuing as Competition Commissioner. Her anti-Big-Tech, "privacy-friendly", pro-equality, redistributive agenda was well received. She faced questions about breaking up Big Tech, leaving it on the table as a "tool" of last resort but emphasising her desire to exhaust other avenues first. She stumbled, however, when faced with accusations that her aspirations to rein in Big Tech are incompatible with her remit as leader of the EU's digital affairs.

The implications for digital policy

Throughout the hearings, the Commissioners-designate made many commitments, emphasised their policy priorities, and shared their plans for the future. Although we do not know exactly how this will translate into concrete policy, the hearings give valuable insight into how the new College intends to tackle rights challenges in the online environment. This is not an exact science, but we invite you to join us – and our "rightsometer" – in speculating about the impact the nominees' ideas will have on citizens' digital rights over the next five years, based on what they did (and did not) say.

ePrivacy
Key legislation: ePrivacy Regulation

The currently stalled ePrivacy Regulation was unsurprisingly raised by MEPs – and, reassuringly, Vestager shared that “passing ePrivacy” needs to be “a high priority”.

Result: with Vestager’s support, it is a cautiously optimistic 3/5 on the rightsometer – but the troubled history of the Regulation also warns us not to be too hopeful.

Platform power
Key legislation: E-Commerce Directive (ECD), slated to be replaced by the upcoming Digital Services Act (DSA)

Vestager was the champion of regulating Big Tech throughout her hearing, proposing to redress the balance of power in favour of citizens, and giving consumers more choice about platforms. But she later confessed to uncertainty around the shape that the DSA will take, saying that she needs to “take stock” before committing to a position on E-Commerce. Jourová committed to redress in the event of wrongful takedown of content, and emphasised her strong support for the DSA. However, she suggested her intention to explore platform “responsibility” for illegal content, a move which would threaten myriad human rights.

Result: the rightsometer gives an inconclusive 2.5/5: the commitments to strengthening Big Tech regulation are promising, but the risk of unintended consequences from some of these ideas remains a big concern.

Disinformation
Key document: Code of Practice on Disinformation

Jourová committed to tackling the problem of online disinformation, promising to bring in codes of conduct for platforms; to make it clear where political advertisements come from, and by whom they are funded; as well as enforcing “rules” for political campaigning.

Result: it's a positive 4/5, and we encourage Jourová to analyse the risks posed by targeted political advertising and the online tracking industry, both driven by dysfunctional business models. A cautious approach is nonetheless needed (see the Access Now, EDRi and Liberties guide on disinformation).

Law enforcement and cross-border access to data
Key legislation: “e-Evidence” proposal

Under direct questioning from MEP Moritz Körner about plans to advance e-Evidence, Commissioner-designate Johansson declined to reply. She also insinuated that the fundamental right to encryption might be incompatible with fighting terrorism.

Result: e-Evidence makes for a pessimistic 0/5 on the rightsometer, with nothing to give confidence that this controversial proposal is being reassessed.

Artificial Intelligence (AI)
Key legislation: none proposed yet – but both von der Leyen and Reynders promised "horizontal" legislation within 100 days

Jourová emphasised that fundamental rights in AI innovation will “ensure that our solutions put people first, and will be more sustainable as a result”. Vestager added that ethics will be at the heart of AI policy, and Reynders that Europe’s “added value” is in bringing protection for privacy and data to future AI legislation.

Result: a promising 4/5 on the rightsometer; we welcome the Commissioners-designate's focus on fundamental rights when implementing AI-based technologies.

Where does that leave our digital rights?

The Commissioners-designate made substantive commitments on disinformation, artificial intelligence, privacy, and mitigating platform power. Protecting fundamental rights online was, thankfully, a persistent concern for all the nominees. Certain topics, such as "digital literacy", were mentioned but not fleshed out, and the nominees also declined to answer a number of "too specific" questions. Although there is much to be optimistic about, the balance between rights and law enforcement or innovation means that we should stay cautious.

Access Now: Meet the European Commissioners: Who will shape the next five years of digital policy in the EU? (27.09.2019)

EDRi: Open letter to EU Member States: Deliver ePrivacy now! (10.10.2019)

Access Now, Civil Liberties Union for Europe and European Digital Rights: Joint Report on Informing the “Disinformation” Debate (18.10.2018)

(Contribution by Ella Jakubowska, EDRi intern)

10 Oct 2019

Open letter to EU Member States: Deliver ePrivacy now!


On 11 October 2019, EDRi, together with four other civil society organisations, sent an open letter to EU Member States urging them to conclude the negotiations on the ePrivacy Regulation. The letter highlights the urgent need for a strong ePrivacy Regulation to tackle the problems created by commercial surveillance business models, and expresses deep concern that the Member States, represented in the Council of the European Union, still have not made decisive progress more than two and a half years after the Commission presented the proposal.

You can read the letter here (pdf) and below:

Open letter to EU Member States

Dear Minister,

We, the undersigned organisations, urge you to swiftly reach an agreement in the Council of the European Union on the draft ePrivacy Regulation.

We are deeply concerned by the fact that, more than two and a half years since the Commission presented the proposal, the Council still has not made decisive progress. Meanwhile, one after another, privacy scandals are hitting the front pages, from issues around the exploitation of data in the political context, such as “Cambridge Analytica”, to the sharing of sensitive health data. In 2019, for example, an EDRi/CookieBot report demonstrated how EU governments unknowingly allow the ad tech industry to monitor citizens across public sector websites.1 An investigation by Privacy International revealed how popular websites about depression in France, Germany and the UK share user data with advertisers, data brokers and large tech companies, while some depression test websites leak answers and test results to third parties.2

A strong ePrivacy Regulation is necessary to tackle the problems created by the commercial surveillance business models. Those business models, which are built on tracking and cashing in on people’s most intimate moments, have taken over the internet and create incentives to promote disinformation, manipulation and illegal content.

What Europe gains with a strong ePrivacy Regulation

The reform of the current ePrivacy Directive is essential to strengthen – not weaken – individuals’ fundamental rights to privacy and confidentiality of communications.3 It is necessary to make current rules fit for the digital age.4 In addition, a strong and clear ePrivacy Regulation would push Europe’s global leadership in the creation of a healthy digital environment, providing strong protections for citizens, their fundamental rights and our societal values. All this is key for the EU to regain its digital sovereignty, one of the goals set out by Commission President-elect Ursula von der Leyen in her political guidelines.5

Far from being an obstacle to the development of new technologies and services, the ePrivacy Regulation is necessary to ensure a level playing field and legal certainty for market operators.6 It is an opportunity for businesses7 to innovate and invest in new, privacy-friendly, business models.

What Europe loses without a strong ePrivacy Regulation

Without the ePrivacy Regulation, Europe will continue living with an outdated Directive that is not being properly enforced8, and the completion of our legal framework initiated with the General Data Protection Regulation (GDPR) will not be achieved. Without a strong Regulation, surveillance-driven business models will be able to cement their dominant positions9 and continue posing serious risks to our democratic processes.10 11 The EU also risks losing the position as global standard-setter and digital champion that it earned through the adoption of the GDPR.

As a result, people's trust in internet services will continue to fall. According to the Special Eurobarometer Survey of June 2019, the majority of users believe that they have only partial control over the information they provide online, with 62% of them concerned about it.

The ePrivacy Regulation is urgently needed

We expect the EU to protect people’s fundamental rights and interests against practices that undermine the security and confidentiality of their online communications and intrude in their private lives.

As you meet today to discuss the next steps of the reform, we urge you to finally reach an agreement to conclude the negotiations and deliver an upgraded and improved ePrivacy Regulation for individuals and businesses. We stand ready to support your work.

Yours sincerely,

The European Consumer Organisation (BEUC)
European Digital Rights (EDRi)
Privacy International
Open Society European Policy Institute (OSEPI)

1 https://www.cookiebot.com/media/1121/cookiebot-report-2019-medium-size.pdf
7 https://www.beuc.eu/publications/beuc-x-2018-108-eprivacy-reform-joint-letter-consumer-organisations-ngos-internet_companies.pdf

Read more:

Open letter to EU Member States on ePrivacy (11.10.2019)

Right a wrong: ePrivacy now! (09.10.2019)

Civil society calls Council to adopt ePrivacy now (05.12.2018)

ePrivacy reform: Open letter to EU member states (27.03.2018)

09 Oct 2019

Right a wrong: ePrivacy now!

By Ella Jakubowska

When the European Commission proposed to replace the outdated and improperly enforced 2002 ePrivacy Directive with a new ePrivacy Regulation in January 2017, it marked a cautiously hopeful moment for digital rights advocates across Europe. With the backdrop of the General Data Protection Regulation (GDPR), adopted in May 2018, Europe took a giant leap ahead for the protection of personal data. Yet by failing to adopt the only piece of legislation protecting the right to privacy and to the confidentiality of communications, the Council of the European Union seems to have prioritised private interests over the fundamental rights, securities and freedoms of citizens that would be protected by a strong ePrivacy Regulation.

This is not an abstract problem: commercial surveillance models – where businesses exploit user data as a core part of their business activity – pose a serious threat to our freedom to express ourselves without fear. This model relies on profiling, essentially putting people into the boxes in which the platforms believe they belong – a very slippery slope towards discrimination. And as children make up an increasingly large proportion of internet users, the risks become even starker: their online actions could affect their access to opportunities in the future. Furthermore, these models are set up to profit from the mass sharing of content, and so platforms are perversely incentivised to promote sensationalist posts that could harm democracy (for example, political disinformation).

The rise of highly personalised adverts ("microtargeting") means that online platforms increasingly control and limit the parameters of the world you see online, based on their biased and potentially discriminatory assumptions about who you are. And as for that online quiz about depression you took? Well, it might not be as private as you thought.

It is high time that the Council of the European Union takes note of the risks to citizens caused by the current black hole where ePrivacy legislation should be. Amongst the doom and gloom, there are reasons to be optimistic. If delivered in its strongest form, an improved ePrivacy Regulation will complement the GDPR; ensure compliance with essential principles such as privacy by design and by default; tackle the pervasive model of online tracking and the disinformation it fuels; and give citizens back power over their private lives and interests. We urge the Council to swiftly update and adopt a strong, citizen-centred ePrivacy Regulation.

e-Privacy revision: Document pool

ePrivacy: Private data retention through the back door (22.05.2019)

Captured states – e-Privacy Regulation victim of a “lobby onslaught” (23.05.2019)

NGOs urge Austrian Council Presidency to finalise e-Privacy reform (07.11.2018)

e-Privacy: What happened and what happens next (29.11.2017)

(Contribution by Ella Jakubowska, EDRi intern)