In the digital era, copyright should be implemented in a way that benefits creators and society. It should support cultural work and facilitate access to knowledge. Copyright should not be used to lock away cultural goods, damaging rather than benefiting access to our cultural heritage. Copyright should be a catalyst of creation and innovation. In the digital environment, citizens face disproportionate enforcement measures from states, arbitrary privatised enforcement measures from companies and a lack of innovative offers, all of which reinforces the impression of a failed and illegitimate legal framework that undermines the relationship between creators and the society they live in. Copyright needs to be fundamentally reformed to be fit for purpose, predictable for creators, flexible and credible.

27 May 2020

COVID-Tech: Surveillance is a pre-existing condition

By Guest author

In EDRi’s series on COVID-19, COVID-Tech, we explore the critical principles for protecting fundamental rights while curtailing the spread of the virus, as outlined in the EDRi network’s statement on the virus. Each post in this series tackles a specific issue about digital rights and the global pandemic in order to explore broader questions about how to protect fundamental rights in a time of crisis. In our statement, we emphasised that “measures taken should not lead to discrimination of any form, and governments must remain vigilant to the disproportionate harms that marginalised groups can face.” In this third post of the series, we look at surveillance – particularly of marginalised communities – situating the measures in their longer-term trajectory.

One minor highlight in this otherwise bleak public health crisis is that privacy is trending. Now more than ever, conversations about digital privacy are reaching the general public. This is a vital development as states and private actors pose ever greater threats to our digital rights in their responses to COVID-19. The more they watch us, the more we need to watch them.

One concern, however, is that these debates have siphoned this new attention to privacy into a highly technical, digital realm. The debate is dominated by the mechanics of digital surveillance: whether we should have centralised or decentralised contact-tracing apps, and how Zoom tracks us as we work, learn and do yoga at home.

Although important, this is only a partial framing of how privacy and surveillance are experienced during the pandemic. Less prominently featured are the various other privacy infringements being ushered in as a result of COVID-19. We should not forget that for many communities, surveillance is not a COVID-19 issue – it was already there.

The other sides of COVID surveillance

Very real concerns about digital measures proposed as pandemic responses should not overshadow the broader context of mass-scale surveillance emerging before our eyes. Governments across Europe are increasingly rolling out measures to physically track the public, via telecommunications and other data, without explicit reference to how this will impede the spread of the virus, or when the use and storage of this data will end.

We are also seeing the emergence of bio-surveillance dressed in a public health response’s clothing. From the Polish government’s app mandating the use of geo-located selfies, to talks of using facial biometrics to create immunity passports to facilitate the return of workers in the UK, governments have used, and will continue to use, the pandemic as a cover to get into our homes, and closer to us.

Physical surveillance techniques, meanwhile, receive far less media coverage. Such measures are – in many European countries – coupled with heightened punitive powers for law enforcement. Police have deployed drones in France, Belgium and Spain, and communities in cities across Europe are feeling the pressure of increased police presence in their neighbourhoods. Heightened measures of physical surveillance cannot be accepted at face value or ignored. Instead, they must be viewed in tandem with new digital developments.

Who can afford privacy?

These measures do not harm everyone equally. In unequal societies, surveillance will always target racialised[1] people, migrants, and the working classes. These groups bear the burden of heightened policing powers and punitive ‘public health’ enforcement – being more likely to need to leave the house for work, take public transport, live in over-policed neighbourhoods, and in general be perceived as suspicious, criminal, and in need of surveillance.

This is a privacy issue as much as it is about inequality. For some, the consequences of intensified surveillance under COVID-19 mean heightened exposure to the virus through direct contact with police, increased monitoring of their social media, the anxiety of constant sirens, and in the worst cases, the real bodily harm of police brutality.

In the last few days, Romani communities in Slovakia reported numerous cases of police brutality, some against children playing outside. Black, brown and working-class communities across Europe are experiencing the physical and psychological effects of being watched even more than normal. In Brussels, where EDRi is based, a young man died in an encounter with the police during raids.

This vulnerability is economic, too – for many, privacy is a scarce commodity. It is purchased by those who live in affluent neighbourhoods, by those with ‘work from home’ jobs. Those who cannot afford privacy in this more basic sense will, unfortunately, not be touched by debates about contact tracing. For many, digital exclusion means that measures such as contact-tracing apps are completely irrelevant. Worse, if future measures in response to COVID-19 are designed with the assumption that we all use smartphones, or have identity documents, they will be immensely harmful.

These measures are being portrayed as ‘new’, at least in our European ‘liberal’ democracies. But for many, surveillance is not new. Governmental responses to the virus have simply brought to the general public a reality reserved for people of colour and other marginalised communities for decades. Long before COVID-19, European governments were deploying technology and other data-driven tools to identify, ‘risk-score’ and experiment on groups at the margins, whether by predicting crime, forecasting benefit fraud, or assessing whether or not asylum applicants are telling the truth based on their facial movements.

We need to integrate these experiences of surveillance into the mainstream privacy debate. These conversations have been sidelined or explained away with the logic of individual responsibility. For example, last year, in a public debate on technology and surveillance of marginalised communities, one participant swiftly moved the conversation away from police profiling and toward privacy literacy. They asked the room of anti-racist activists “does everybody here use a VPN?”

Without a holistic picture of how surveillance affects people differently – the vulnerabilities of communities and the power imbalances that produce them – we will easily fall into the trap of believing that quick-fix solutions can guarantee our privacy, and that surveillance can be justified.

Is surveillance a price worth paying?

If we don’t root our arguments in people’s real-life experiences of surveillance, we not only devalue the right to privacy for some, but also risk losing the argument to those who believe that surveillance is a price worth paying.

This narrative is a direct consequence of an abstract, technical and neutral framing of surveillance and its harms. Through this lens, infringements of privacy are minor, necessary evils. As a result, privacy will always lose the false ‘privacy vs health’ trade-off. We should challenge the trade-off itself, but we can also ask: who will really pay the price of surveillance? How do people experience breaches of privacy?

Another question we need to ask is: who profits from surveillance? Numerous companies have shown their willingness to enter public-private alliances, using COVID-19 as an opportunity to market surveillance-based ‘solutions’ to issues of health (often with dubious claims). Yet, again, this is not new – companies like Palantir, contracted by the UK government to process confidential health data during COVID-19, have a much longer-standing role in the surveillance of migrants and people of colour, and in facilitating deportations. Other large tech companies will use COVID-19 to continue their expansion into areas like ‘digital welfare’. Here, deeply uneven power relationships will be further cemented by the introduction of digitalised tools, making them harder to challenge and posing ever greater risks to those who rely on the state. If unchallenged, this climate of techno-solutionism will only increase the risk of new technology being tested on marginalised groups, and data being extracted from them, for profit.

A collective privacy

There is a danger in viewing surveillance as exceptional, a feature of COVID-19 times. It suggests that protecting privacy is only newsworthy when it is about ‘everyone’ or ‘society as a whole’. What that means, though, is that we don’t actually mind if a few don’t have privacy.

Surveillance measures and other threats to privacy have countless times been justified for the ‘public good’. Privacy – framed in abstract, technical and individualistic terms – simply cannot compete, and ever greater surveillance will be justified. This surveillance will be digital and physical and everything in between, and profits will be made. Alternatively, we can fight for privacy as a collective vision – something everybody should have. Collective privacy is not exclusive or abstract – it means looking further than how individuals might adjust their privacy settings, or how privacy can be guaranteed in contact tracing apps.

A collective vision of privacy means contesting ramped-up police monitoring and the use of marginalised groups as guinea pigs for new digital technologies, as well as ensuring new technologies have adequate privacy protections. It also requires us to ask: who will be the first to feel the impact of surveillance, and how do we support them? To answer these questions, we need to recognise surveillance in all its manifestations, including long before the outbreak of COVID-19.

Original illustration by Miguel Brieva, licensed under CBNA 2020, La Imprenta, included in “Que No Haya Sido en Vano”.

Read more:

Telco data and Covid-19: A primer (21.04.20)

Slovak police officer said to have beaten five Romani children in Krompachy settlement and threatened to shoot them (29.04.20)

Amid COVID-19 Lockdown, Justice Initiative Calls for End to Excessive Police Checks in France (27.03.20)

Digital divide ‘isolates and endangers’ millions of UK’s poorest (28.04.20)

The EU is funding dystopian Artificial Intelligence projects (22.01.20)

A Price Worth Paying: Tech, Privacy and the Fight Against Covid-19 (24.04.20)

COVID-Tech: Emergency responses to COVID-19 must not extend beyond the crisis (15.04.20)

COVID-Tech: COVID infodemic and the lure of censorship (13.04.2020)


  1. This term refers to racial, ethnic and religious minorities, emphasising that racialisation is a structural process inflicted on people, groups and communities.

(Contribution by Sarah Chander, EDRi senior policy advisor)

27 May 2020

Competition law: Big Tech mergers, a dominance tool

By Laureline Lemoine

This is the third article in a series dealing with competition law and Big Tech. The aim of the series is to look at what competition law has achieved when it comes to protecting our digital rights, where it has failed to deliver on its promises, and how to remedy this. Read the first article on the impact of competition law on your digital rights here and the second article on what to do against Big Tech’s abuse here.

One way Big Tech has been able to achieve a dominant position in our online life is through mergers and acquisitions. In recent years, the five biggest tech companies (Amazon, Apple, Alphabet – the parent company of Google – Facebook and Microsoft) spent billions to strengthen their position through acquisitions that shaped our digital environment. Notorious acquisitions which made headlines include Facebook/WhatsApp, Facebook/Instagram, Microsoft/LinkedIn, Google/YouTube, and more recently, Amazon/Twitch.

Beyond infamous social media platforms and big deals, Big Tech companies also acquire lesser-known companies and start-ups, which also greatly contribute to their growth. While not making big newsworthy acquisitions, Apple still “buys a company every two to three weeks on average” according to its CEO. Google-Alphabet has acquired over 250 companies since 2001, while Facebook has acquired over 90 since 2007. Big Tech’s intensive acquisition policy particularly applies to artificial intelligence (AI) start-ups. This is worrying because reducing competitors also means reducing diversity, leaving Big Tech in charge of developing these technologies, at a time when AI technologies are increasingly used in decisions affecting individuals and are known to be susceptible to bias.

Big Tech’s intensive acquisition policy can serve different goals, sometimes at the same time. These companies acquire competitors who could have offered, or were offering, consumers an alternative: to eliminate or shelve them (“killer acquisitions”), to consolidate a position in the same market or in a neighbouring market, or to acquire their technical or human skills (“talent acquisitions”). See for example this overview of Google and Facebook’s acquisitions.

And in times of economic trouble, Big Tech lurks even closer. In the US, Senator Warren wants to introduce a moratorium on COVID-era acquisitions.

Big Tech’s mergers are mostly unregulated

While mergers and acquisitions are part of business life, the issue is that most of Big Tech’s acquisitions are not subject to any control. And the few that are reviewed have been authorised without conditions. This has led to debates on the state of competition law: are the current rules fit for today’s age of data-driven acquisitions and technology takeovers?

While some have already called for a ban on acquisitions by certain companies, others are discussing the thresholds set in competition law that trigger review by competent authorities, but also, more fundamentally, the criteria used to review mergers.

The issue with thresholds is that they depend on monetary turnover, which many companies and start-ups do not reach, either because they have not yet monetised their innovations or because their value is not reflected in their turnover but, for example, in their data trove. Despite low turnovers, Facebook was still willing to spend 1 and 19 billion dollars for Instagram and WhatsApp respectively. These data-driven mergers allowed these companies’ data sets to be aggregated, increasing the (market) power of Facebook.

The French competition authority suggests, for example, introducing an obligation to inform the EU and/or national competition authorities of all mergers carried out in the EU by “structuring” companies. These “structuring” companies would be defined clearly according to objective criteria and, in cases of risk, the authorities would ask these players to notify the mergers for review.

However, although the acquisition of WhatsApp by Facebook was reviewed by the European Commission thanks to a referral from national competition authorities, the operation was still authorised. This points to another issue: the place of data protection and privacy in merger control. Competition authorities assume that, since there is a data protection framework, data protection rights are respected and individuals are exercising their rights and choices. But this assumption does not take into account the reality of the power imbalance between users and Big Tech. In this regard, some academics, such as Orla Lynskey, suggest solutions such as increased cooperation between competition, consumer and data protection authorities to understand and examine the actual barriers to consumer choice in data-driven markets. Moreover, where it is found that consumers value data privacy as a dimension of quality, the competitive assessment should reflect whether a given operation would deteriorate that quality.

A wind of change might already be coming from the US, as the Federal Trade Commission issued last February “Special Orders” to the five Big Tech companies, “requiring them to provide information about prior acquisitions not reported to the antitrust agencies”, including how acquired data has been treated.

Google/Fitbit: the quest for our sensitive data

The debate recently resurfaced when Google’s proposed acquisition of Fitbit was announced. Immediately, a number of concerns were raised, both in terms of competition and of privacy (see for example the European Consumer Organisation BEUC, and the Electronic Frontier Foundation (EFF)’s concerns). From a fundamental rights perspective, the most worrying issue lies in the fact that Google would be acquiring Fitbit’s health data. As Privacy International warns, “a combination of Google / Alphabet’s potentially extensive and growing databases, user profiles and dominant tracking capabilities with Fitbit’s uniquely sensitive health data could have pervasive effects on individuals’ privacy, dignity and equal treatment across their online and offline existence in future.”

Such concerns are also shared beyond civil society, as the announcement led the European Data Protection Board to issue a statement, warning that “the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.”

It is a fact that Google cannot be trusted with our personal data. As well as a long history of competition and data protection infringements, Google is controversially trying to enter the healthcare market, and is already breaking patients’ trust.

Beyond these concerns, this operation will be an opportunity for the European Commission to adopt a new approach after the Facebook/WhatsApp debacle. Google is acquiring Fitbit for its data, and the competitive assessment should therefore reflect that. Moreover, the Commission should use this case as an opportunity to consult with consumer and data protection authorities.

Read more:

Google wants to acquire Fitbit, and we shouldn’t let it! (13.11.19)

GOOGLE-FITBIT MERGER: Competition concerns and harms to consumers (07.07.20)

Considering Data Protection in Merger Control Proceedings (06.06.18)

Competition law: what to do against Big Tech’s abuse? (01.04.2020)

The impact of competition law on your digital rights (19.02.2020)

(Contribution by Laureline Lemoine, EDRi senior policy advisor)

27 May 2020

More than the sum of our parts: a strategy for the EDRi Network

By Claire Fernandez

It took over a year. From an EDRi members’ survey in early 2019 to the vote by the (online) General Assembly of members at the end of April 2020. In those months we held workshops, webinars, calls, several rounds of comments, draft iterations and about 50 consultations. We won’t lie, it was a lengthy, challenging and resource-consuming process. But it was worth it: we can now announce, proud and excited, the adoption of the EDRi Network 2020-2024 Strategy (link to summary).

Throughout the process, we learned a great deal about the context EDRi operates in and how the network is situated within European societies. We also learned about how strategic planning processes can unveil larger questions about a network’s identity and health, and about what brings people together.

Values vs practices

There are many diverse visions of EDRi and of what a strategy is. The EDRi network comprises a wide-ranging constellation of distinct voices. There is no ‘one size fits all’ narrative that encompasses some of the most complex issues. Some, like Richard D. Bartlett, would argue that people would rather align around a community of practices than around shared ‘values’. In EDRi’s case, what practices bring us together? ‘EDRis’ do share a passion for working in a community based on expertise, trust and hard work. We therefore worked to strike a balance and design a strategy that would give an overarching common sense of purpose and direction while leaving enough space for people to carry on with their work.

The strategy

It feels daring and risky to put our vision and assumptions on paper and boil down what EDRi is all about. The strategy starts by highlighting the problems EDRi faces and conveying a sense of urgency for action. While technologies represent opportunities, the near-total digitisation and permanent recording of our lives poses a significant risk to our autonomy and to our democracies.

A significant piece of the strategy is the power analysis, which describes the context in which EDRi operates. Our world is characterised by power asymmetries between state and private actors on the one hand, and people on the other. These power imbalances threaten democracy and shape people’s behaviour. There is a lot to be done to change power structures that allow for injustice and human rights violations in the digital age. And EDRi will not succeed alone. We play a contributing role based on our mission, identity and strengths as a digital rights network. We will aim for a world in which people live with dignity and vitality, and for a fair and open digital environment that enables everyone to flourish and thrive to their fullest potential. This is part and parcel of many other social justice causes, as mobilisation and democratic change are highly dependent on technology.

For EDRi that means that we will work in the next five years to influence decision-makers to regulate and change surveillance-based practices.

What’s next?

Now that our shared vision and purpose are articulated for a range of audiences, implementation work can start. In the coming months, our work as a network will focus on human rights-based responses to the COVID-19 pandemic, on meaningful platform regulation and on demanding bans on invasive and risky biometric technologies.

A strategy is a frame, the start of a process rather than a document. We will therefore need to test our assumptions, reflect, iterate and build trust to advance digital rights for all. EDRi’s mission is ambitious. To succeed, we need a healthy network, fierce EDRi member organisations and empowered people. Our vehicle for change is a sustainable and resilient field that combats burnout and toxicity and relies on both personal relationships and professional processes.

The pandemic is an absolute turning point that marks the beginning of a different era. It can leave us feeling vulnerable and afraid for ourselves and our loved ones, but it also reminds us that we are part of a broader community. What better time than this crisis for a new beginning, for EDRi and the societies we live in, to create a world of dignity in the digital age?

Read more:

Strategy summary

EDRi calls for fundamental rights-based responses to COVID-19 (20.03.20)

DSA: Platform Regulation Done Right (09.04.20)

Ban biometric mass surveillance! (13.05.20)

27 May 2020

Hungary: “Opinion police” regulate Facebook commentaries

By Guest author

There have been a number of critical news reports from around the world stating that Hungary’s COVID-19 state-of-emergency legislation is “creating a chilling effect”. Such headlines miss the mark somewhat, as chilling effects are far from new. Individuals who cross government authorities and their allies and supporters with public and private expressions of criticism have been losing their positions for over a decade; and the chilling effects that successive governments have had on citizens’ behaviour were apparent long before the current regime.

What qualifies as news is the sustained media attention that the chilling effect in Hungary has received over the past two weeks. Its COVID-19 emergency legislation has attracted intense scrutiny nationally and globally.

The media furore was triggered by the detention of two persons by local police authorities due to statements posted on Facebook that allegedly posed the risk of “alarming the population” or “interfering with public protection” during the crisis.

Legal retaliation

The first case involved an individual in Eastern Hungary detained for hours for “publishing false facts on a social media site”. The “alarmist content” consisted of disapproval of the lockdown policy with additional remarks, presumably addressed to the Prime Minister (“You’re a merciless tyrant, but bear in mind that dictators invariably fall”). The man recalls half a dozen law enforcement officers arriving at his home on May 12. The charges were dropped, but the YouTube channel of the Hungarian law enforcement authorities posted a widely viewed video of the man being removed from his home and placed into a police vehicle.

The following day, the home of an opposition-party member in the south of the country was raided at dawn, with a heavy police presence. His communication devices were confiscated and the man was detained for four hours at police headquarters for having shared a post from an opposition MEP in a closed Facebook group. The post criticised a controversial government decision to empty thousands of hospital beds across the country to free them for potential Coronavirus patients. He remarked that in the town of Gyula “1,170 [hospital] beds were freed as well”. The fact was not in dispute. The Facebook user from Gyula was not charged with a criminal offence.

In a blog addressing the highly mediated cases, the Hungarian Civil Liberties Union (HCLU) maintains that existing legislation could have been used to tackle the problem of publication and dissemination of false information. The blog’s headline, “The opinion police are at the door”, alludes to the legendary Socialist-era terror of a doorbell sounding in the middle of the night.

Social media users warned of “continual monitoring”

An announcement posted online by the authorities in one Eastern county of Hungary overtly alerted social media users to the fact that the police are “continually monitoring the internet,” Politico reports. According to the National Law Enforcement website, 87 people have been targeted in criminal investigations in connection with the COVID-19 measures. Of these cases, only 6 have reached the prosecution phase.

What’s in the public interest?

Back at the end of March, when the Hungarian Parliament passed a bill introducing emergency powers without a sunset clause, the move garnered a surprising amount of coverage and criticism. Of particular concern was an amendment to the Criminal Code introducing prison terms of up to five years for individuals convicted of “distorting the truth” or “spreading falsehoods” connected to the Coronavirus pandemic. As in many countries worldwide, the stated aim of the legislation was to protect the public during the pandemic, but anecdotal reports suggest that it is often the authorities themselves, rather than the public, that stand to gain from special protection.

The Hungarian daily Népszava reported that on May 19 the Parliament adopted a 160-page bill which will grant the National Security Services a mandate that could entail major data security risks. The NSS will be empowered to “monitor the content of electronic communications networks” at the local and national level of government to prevent cyber attacks.

News website 444 suggests that a state surveillance system is being established with the passage of this law. In effect, the secret service will be given access to all public data, including tax, social security, health, and criminal records.

Another controversial aspect of the legislative package is that the contact data of persons interrogated over the course of a criminal investigation could be retained by the authorities for up to twenty years, even if the suspect is found innocent.

Privacy experts say that the legislation does not offer sufficient data protection safeguards. The Head of the National Authority for Data Protection and Freedom of Information (NAIH), Attila Péterfalvi, has written to the State Secretary of the Ministry of the Interior with concerns that “unlimited surveillance (…) will not allow for special protection of personal data.”

From state of emergency to surveillance state?

In a resolution adopted by the European Parliament in late April, Hungary was sharply criticised for its COVID-19 measures; limits to free speech under an indefinite state of emergency are “totally incompatible with European values”. The Minister of Justice announced that a bill revoking emergency powers is expected to be adopted on June 20. While the government is already declaring victory on the public relations front and calling for “apologies” from Brussels, the contents of the bill are still unknown. Observers suspect that the government will annul the “state of emergency” while preserving many of the emergency powers.

Read more:

The Impact of Covid-19 Measures on Democracy, the Rule of Law and Fundamental Rights in the EU (23.04.20)

Jourová: Commission looking at Hungary’s emergency changes to labour code and GDPR (15.05.20)

Hungary’s Government Using Pandemic Emergency Powers To Silence Critics (18.05.20)

(In Hungarian) Mindent is megtudhatnak ezután a nemzetbiztonsági szolgálatok az emberekről (19.05.2020)

Open Letter: Commission Has Clear Legal Grounds to Pursue Hungary to Protect Free Speech and Privacy (15.05.2020)

(Contribution by Christiana Mauro, EDRi observer)

27 May 2020

German Constitutional Court stops mass surveillance abroad

By Gesellschaft für Freiheitsrechte

The German Federal Intelligence Service (BND) has so far been able to spy on foreign citizens abroad en masse and without cause—even on sensitive groups such as journalists. In response, EDRi member Gesellschaft für Freiheitsrechte (GFF, Society for Civil Rights), alongside five media organizations, filed a constitutional complaint against the BND law that allowed this surveillance to occur. On May 19, 2020, the German Constitutional Court made clear that the BND may not carry out mass surveillance abroad, and is bound by the German Constitution (Basic Law) even as it relates to foreign citizens and cross-border communication.

With regards to their local activities, German authorities—such as the BND—are naturally bound by the Basic Law, the constitution of the Federal Republic of Germany. When acting abroad, however, a 2017 change in the law allowed the BND to act with seemingly limitless power; the BND could monitor the telecommunications of foreigners abroad without any limits or specific restrictions. Within its own borders, such surveillance is a clear violation of Article 10 of the Basic Law, which protects the freedom of communications. However, the 2017 BND law assumed that the secret service, when acting outside of German territory, is not bound by the Basic Law.

The BND law thus created considerable risks for foreign journalists who rely on trust and confidentiality when communicating with their sources. In response to the significant threats to constitutional rights created by the BND law, several journalists—supported by the GFF and partner organizations—filed a complaint in the German Constitutional Court (Bundesverfassungsgericht). This complaint led to a landmark decision regarding the protection of fundamental rights and freedom of the press.

New standards for the work of the BND

The ruling of the Constitutional Court is of fundamental importance: it definitively establishes that German authorities are required to protect the fundamental rights contained in the Basic Law abroad.

“This statement was long overdue and is a great success that goes far beyond this specific case,” says Ulf Buermeyer, Chairman of GFF. “The fact that German authorities are also bound by the Basic Law abroad considerably strengthens human rights worldwide—as well as Germany’s credibility in the world”.

According to the Constitutional Court’s interpretation of the Basic Law, monitoring communications abroad without cause is only permissible in very limited circumstances. In addition, vulnerable groups of people such as journalists must be given special protection. The targeted surveillance of individuals must be subject to stricter limitations. The court also noted that the BND’s surveillance practices should be monitored by financially independent counsels.

This decision sends an international signal

For the first time in over 20 years, the Federal Constitutional Court has issued a decision regarding BND surveillance. The Court’s ruling is a landmark decision with international significance. In 2013, Edward Snowden’s NSA disclosures revealed a global system of mass surveillance, in which Germany—particularly the BND—participated. Now, more than seven years after the NSA revelations, Germany’s highest court has ruled that international surveillance must also be in accordance with the German Basic Law. This ruling sends an international signal and could affect the surveillance activities of other countries’ intelligence services.

Read more:

In their current form, surveillance powers of the Federal Intelligence Service regarding foreign telecommunications violate fundamental rights of the Basic Law (19.05.20)

We have filed a lawsuit against the BND law – No Trust No News

BND law (06.11.16)

About GFF

(Contribution by Gesellschaft für Freiheitsrechte, EDRi member from Germany)

27 May 2020

France: First victory against police drones

By La Quadrature du Net

Since the beginning of the COVID-19 crisis, the French police have been using drones to watch people and make sure they respect the lockdown. Drones had been used by the police before for the surveillance of protests, but the COVID-19 crisis represented a change of scale: all over France, hundreds of drones have been used to broadcast audio messages about sanitary instructions, but also to monitor and capture images of people in the street who may or may not be respecting the lockdown rules.

On May 4, EDRi observer La Quadrature Du Net (LQDN) and their ally La Ligue des Droits de l’Homme used some information published by the newspaper Mediapart to file a lawsuit against the Parisian police and force them to stop using drones for surveillance activity. They based their appeal in particular on the absence of any legal framework concerning the use of images captured by these drones.

On 18 May 2020, the Conseil d’État, the highest French administrative court, issued its decision on the case. It declared illegal the use of any drone equipped with a camera and flying low enough to allow the police to identify individuals by their clothing or a distinctive sign. This decision is a major victory against drone surveillance.

Indeed, according to the Conseil d’État, only a ministerial decree reviewed by the CNIL (National Commission on Informatics and Liberty) could allow the police to use such drones. As long as no such decree has been issued, the French police will not be able to use their drones. Notably, the decision was made in the context of the COVID-19 health crisis, a far more compelling purpose than those the police usually invoke to deploy drones.

This action was part of the Technopolice campaign, developed by La Quadrature Du Net. Other devices are still being used without a legal framework: automated CCTV, sound sensors, predictive policing, and more. With Technopolice, LQDN aims to collectively highlight and combat the deployment of new police technologies that lack the necessary legal safeguards. This decision proves they are on the right track.

Read more:

La Quadrature Du Net and La Ligue des Droits de l’Homme public letter (18.05.20)

French Covid-19 Drones Grounded After Privacy Complaint (18.05.2020)

Why COVID-19 is a Crisis for Digital Rights (29.04.20)

Strategic litigation against civil rights violations in police laws (24.04.19)

Data retention: “National security” is not a blank cheque (29.01.20)

(Contribution by Martin Drago, La Quadrature Du Net)

25 May 2020

Open Letter: EDRi urges enforcement and actions for the 2 year anniversary of the GDPR


On 25 May 2020, the second anniversary of the General Data Protection Regulation (GDPR), EDRi sent a letter to Executive Vice-President Jourová and Commissioner Reynders to highlight and urge action to tackle the GDPR’s vast enforcement gap.

EDRi and its members widely welcomed the increased protections and rights enshrined in the GDPR. Two years later, we call for urgent action by the European Commission, the European Data Protection Board (EDPB) and the national data protection authorities (DPAs) to ensure strong enforcement and implementation of the GDPR and make these rights a reality.

EDRi is especially concerned by the way many Member States have implemented the GDPR and by the misuses of the GDPR by some DPAs. Finally, while we urge the European Commission not to reopen the GDPR, we highlight the need for complementary and supporting legislation, such as the upcoming Digital Services Act (DSA) and a strong and clear ePrivacy Regulation.

You can read the letter here (PDF) and below:

Dear Executive Vice-President Jourová,
Dear Commissioner Reynders,

European Digital Rights (EDRi) is an umbrella organisation with 44 NGO members with representation in 19 countries that promotes and defends fundamental rights in the digital environment.

For the second anniversary of the GDPR’s entry into application, we wish to highlight and urge action to tackle the vast enforcement gap. The GDPR was designed to address information and power asymmetries between individuals and entities that process their data, and to empower people to control it. Two years since it was introduced, this is unfortunately still not the case. Effectiveness and enforcement are two pillars of the EU data protection legislation where national data protection authorities (DPAs) have a crucial role to play.

“Business as usual” should urgently be put to an end

In our experience as the EDRi network, we have observed numerous infringements of the very principles of the GDPR, but controllers are not being sufficiently held to account. The most striking infringements include:

  • Abuse of consent

Consent for processing data for marketing purposes is notoriously obtained through deceptive design (“dark patterns”)1, bundled into terms of service, or forced on individuals under economic pressure, and used to “legitimise” unnecessary and invasive forms of data processing, including profiling based on sensitive data. Two years into the GDPR, internet platforms and other companies that rely on monetising information about people still conduct “business as usual”, and users’ weaknesses and vulnerabilities continue to be exploited. In this respect, our members have also found that the data minimisation principle is often not fully enforced in the Member States, leading to abuses in the collection of personal data by both private and public entities.2

  • Failure to provide access to behavioural profiles

While internet platforms generate ever more profit from monetising knowledge about people’s behaviour, they are notorious for ignoring the fact that observations and inferences made about users are personal data as well, and are subject to all safeguards under the GDPR. However, individuals still do not have access to their full behavioural profiles or effective means of controlling them. These infringements not only further exacerbate the opacity surrounding the online data ecosystem but also constitute a major obstacle to the effective exercise of data subjects’ rights, effectively undermining the protection afforded by the Regulation and citizens’ trust in the EU to protect their fundamental rights.

Please see the following articles for further elaboration of this problem:

“Uncovering the Hidden Data Ecosystem” by Privacy International; “Your digital identity has three layers, and you can only protect one of them” by Panoptykon Foundation.

Urgent action by DPAs is needed to make the protections in GDPR a reality

Many national DPAs do not have the financial and technical capacity to effectively tackle cases against big online companies. They should therefore be properly equipped with resources, staff, technical knowledge and IT specialists, and they must use these to take action. In this regard, we urge the European Commission to start infringement procedures against Member States that do not provide DPAs with enough resources.

Moreover, our experience as a network, through GDPR and AdTech complaints3, illustrates the urgent need for enforcement, as well as issues with a lack of coordination, a slow pace and, at times, an evasive approach by national DPAs.

Please see the following materials for further elaboration of this problem: Response to the roadmap of the European Commission’s report on the GDPR by Open Rights Group, Panoptykon Foundation and Liberties EU and “Two years under the GDPR” by Access Now.

The role of the European Commission and of the European Data Protection Board (EDPB) in applying the cooperation and consistency mechanisms is crucial. The EDPB is an essential forum for DPAs to exchange relevant information regarding enforcement of the GDPR. Even if we understand that not every aspect of the one-stop-shop mechanism is handled at the EDPB level, cooperation between DPAs is essential to complete procedures and handle complaints appropriately and promptly, in order to offer individuals effective redress, in particular in cross-border cases.

Furthermore, full transparency should be afforded to the complainant, including information on the investigation made by the DPAs, copies of the reports and the possibility to take part in the procedure if appropriate.

When necessary, we urge DPAs to consider calling upon Article 66 of the GDPR and triggering the urgency procedure to adopt temporary measures, or to force other authorities to act where there is an urgent need to do so. We regret that this possibility has not yet been explored.

Derogations by Member States and DPAs

EDRi is deeply concerned by the way most Member States have implemented the derogations, undermining the GDPR’s protections, and by the misuses of the GDPR by some DPAs.

Please see Access Now’s 2019 report “One year under the GDPR” for more details.

Our concerns relate to the introduction of wide and over-arching exemptions under Article 23, removing GDPR protections from huge amounts of processing, with consequences for people’s rights.4 Moreover, Member States have been stretching the interpretation of the conditions set out in Article 6 and introducing broad conditions for processing special category personal data under Article 9 which are open to exploitation, including, for example, loopholes that can be abused by political parties.5

The majority of Member States also decided not to implement the provision in Article 80(2) of the GDPR allowing for collective complaints. Many of the infringements we see are systemic, vast in scale and complex, yet without Article 80(2) there is no effective redress in place, since only individuals are able to lodge complaints, and not associations acting independently.

Moreover, there are serious concerns as to the political independence of DPAs in some countries. In Slovakia6, Hungary7, and Romania8, DPAs are abusing the law to go after journalists and/or NGOs. In Poland, the DPA has presented interpretations of the GDPR that support the government’s agenda9. Not only is such an interpretation incorrect, but it risks being political as well as undermining the GDPR, as it gives the false impression that the law infringes on free expression and media freedom. Disparities in the (lack of) implementation of Article 85 are also concerning10.

Need for complementary and supporting legislation

The GDPR does not and cannot operate in a silo. Just as the right to data protection interacts with other rights, it is essential that other legal frameworks bolster the protections of the GDPR. We urge the Commission not to reopen the GDPR, but we emphasise the need for complementary and supporting legislation11.

The use of algorithms or AI in decisions affecting individuals is not covered by Article 22 GDPR when those decisions are not fully automated or not based on personal data, despite being potentially harmful. To address this gap, some of our members highlight the need for complementary and comprehensive legislation on such decisions.

Moreover, the upcoming Digital Services Act (DSA) is an opportunity for the European Union to make the necessary changes to fix some of the worst outcomes of the advertising-driven and privacy-invading economy, including the lack of transparency of users’ marketing profiles and the lack of users’ control over their data in the context of profiling and targeted advertising.

Finally, as EDRi and our members have repeatedly stated12, we believe that a strong and clear ePrivacy Regulation is urgently needed to further advance Europe’s global leadership in the creation of a healthy digital environment, providing strong protections for citizens, their fundamental rights and our societal values.

In May 2018, EDRi and our members widely and warmly welcomed the increased protections and rights enshrined in the GDPR. Now, two years on, we call on the European Commission, the EDPB, and DPAs to move forward with the enforcement and implementation of the GDPR to make these rights a reality.


  1. Please see “Deceived by design” report by Norwegian Consumer Council for examples of this practice.
  2. See for example Xnet’s report on Privacy and Data Protection against Institutionalised Abuses in Spain.
  3. See our members’ complaints.
  4. A deeply concerning example is the immigration exemption introduced in the UK’s Data Protection Act 2018. See also Homo Digitalis complaint regarding Greek Law 4624/2019:
  5. See for example
  6. See
  7. See
  8. See
  9. See
  10. See for example
  11. See part III of the report “Who (really) targets you? Facebook in Polish election campaigns” by Panoptykon Foundation for specific recommendations on changes which should be introduced in the Digital Services Act
  12. See
13 May 2020

Ban biometric mass surveillance!


Across Europe, highly intrusive and rights-violating facial recognition and biometric processing technologies are quietly becoming ubiquitous in our public spaces. As the European Commission consults the public on what to do, EDRi calls on the Commission and EU Member States to ensure that such technologies are comprehensively banned in both law and practice.

Keep walking. Nothing to see here….

By the end of 2019, at least 15 European countries had experimented with invasive biometric mass surveillance technologies, such as facial recognition. These are designed to watch, track or analyse people, score them, and make judgements about them as they go about their daily lives.

Worse still, many governments have done this in collaboration with secretive tech companies, in the absence of public debate, and without having demonstrated that the systems meet even the most basic thresholds of accountability, necessity, proportionality, legitimacy, legality or safeguarding.

A few thousand cameras to rule them all

Without privacy, you do not have the right to a private chat with your friends, your family, your boss or even your doctor. Your activism to save the planet becomes everyone’s business. You will be caught when blowing the whistle on abuse and corruption, or when attending a political march that your government does not want you to attend. You lose the right to go to a religious service or Trade Union meeting without someone keeping an eye on you; to hug your partner without someone snooping; or to wander freely without someone thinking you are being suspicious.

With constant mass surveillance, you lose the ability to ever be truly alone. Instead, you are constantly surveilled and controlled.


Since the start of the Coronavirus pandemic, apps and other measures have been proposed to rapidly expand bodily and health surveillance systems under the guise of public health. However, there is a real risk that the damage caused by widening surveillance measures will last long after the pandemic is over. For example, will employers remove the cameras doing temperature checks in offices after the pandemic?

Biometric mass surveillance systems can exacerbate structural inequalities, accelerate unlawful profiling, have a chilling effect on people’s freedoms of expression and assembly, and limit everyone’s ability to participate in public and social activities.

Fanny Hidvégi, Europe Policy Manager at EDRi member Access Now (AN) explains:

Human rights apply in emergencies and health crises. We don’t have to choose between privacy and health: protecting digital rights also promotes public health. The suspension of data protection rights in Hungary shows why the EU needs to step up to protect fundamental rights.

Biometric surveillance – an architecture of oppression

Portrayed as an “architecture of oppression”, the untargeted capture or processing of sensitive biometric data makes it possible for governments and companies to build up incredibly detailed permanent records of who you meet, where you go, and what you do. Moreover, it allows these actors to use all these records against you – whether for law enforcement, public authority or even commercial uses. By linking them to faces and bodies, these permanent records become quite literally carved into your skin. The increased capacity of states to track and identify individuals through facial recognition and other biometric processing is likely to disproportionately impact populations which are already highly policed, surveilled and targeted by abuse, including people of colour, Roma and Muslim communities, social activists, LGBTQ+ people and people with irregular migration status. There can be no place for this in a democratic, rights-based, rule-of-law-respecting society.

Ioannis Kouvakas, Legal Officer at EDRi member Privacy International (PI) warns that:

The introduction of facial recognition into cities is a radical and dystopic idea which significantly threatens our freedoms and poses fundamental questions about the kind of societies we want to live in. As a highly intrusive surveillance technique, it can provide authorities with new opportunities to undermine democracy under the cloak of defending it. We need to permanently ban its roll out now before it’s too late.

EDRi is therefore calling for an immediate and indefinite ban on biometric mass surveillance across the European Union.

Biometric mass surveillance is unlawful

This ban is grounded in the rights and protections enshrined in the Charter of Fundamental Rights of the European Union, the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED) which are currently under the spotlight for their two-year anniversary reviews. Together, these instruments guarantee that the people of the EU can live without fear of arbitrary treatment or abuse of power; with respect for their autonomy and self-development; and in safety and security by setting strong data protection and privacy standards. Biometric mass surveillance constitutes a violation of the essence of these instruments, and a contravention of the very heart of the EU’s fundamental rights.

Once systems are in place that normalise and legitimise the 24/7 watching of everyone, all the time, it’s a slippery slope towards authoritarianism. The EU must ensure, therefore, through legislative and non-legislative means, that biometric mass surveillance is comprehensively banned in law and in practice. Lotte Houwing, Policy Advisor at EDRi member Bits of Freedom (BoF) cautions that:

We are shaping the world of tomorrow with the measures we are taking today. It is of utmost importance that we keep this in mind and do not let the COVID-19 crisis scare us into a (mass) surveillance state. Surveillance is not a medicine.

The EU regulates everything from medicines to children’s toys. It is unimaginable that a drug which has not been shown to be effective, or a toy which poses significant risks to children’s wellbeing, would be allowed onto the market. However, when it comes to biometric data capture and processing, in particular in an untargeted way in public spaces (i.e. mass surveillance), the EU has been a haven for unlawful biometric experimentation and surveillance. This has happened despite the fact that a 2020 study demonstrated that over 80% of Europeans are against sharing their facial data with authorities.

EDRi calls on the European Commission, the European Parliament and the Member States to stick to their values and protect our societies by banning biometric mass surveillance. Failing to do so will increase the risk of an uncontrolled and uncontrollable descent into a digital dystopia.

Read more:

EDRi paper: Ban Biometric Mass Surveillance (13. 05. 2020)

Explainer: Ban Biometric Mass Surveillance (13. 05. 2020)

EDRi calls for fundamental rights-based responses to COVID-19 (20. 03. 2020)

Emergency responses to COVID-19 must not extend beyond the crisis (15. 04. 2020)

COVID-19 & Digital Rights: Document Pool (04. 05. 2020)

13 May 2020

COVID-Tech: COVID infodemic and the lure of censorship

By Chloé Berthélémy

In EDRi’s series on COVID-19, COVIDTech, we will explore the critical principles for protecting fundamental rights while curtailing the spread of the virus, as outlined in the EDRi network’s statement on the virus. Each post in this series will tackle a specific issue at the intersection of digital rights and the global pandemic in order to explore broader questions about how to protect fundamental rights in a time of crisis. In our statement, we emphasised the principle that states must “defend freedom of expression and information”. In this second post of the series, we take a look at the impact that measures to fight the spread of misinformation could have on freedom of expression and information. Automated tools, content-analysing algorithms and state-sponsored content moderation have all become normal under COVID-19, and they threaten many of our essential fundamental rights.

We already knew that social media companies perform pretty badly when it comes to moderating content on their platforms. Regardless of the measures they deploy (whether automated processes or human moderators), they make discriminatory and arbitrary decisions. They fail to understand context and cultural and linguistic nuances. Lastly, they provide no effective access to remedies.

In times of a global health crisis where accessing vital health information, keeping social contact and building solidarity networks are so important, online communications, including social media and other content hosting services, have become even more essential tools. Unfortunately, they are also vectors of disinformation and misinformation that erupt in such exceptional situations and threaten public safety and governmental responses. However, private companies – whether voluntarily or pressured by governments – should not impose over-strict, vague, or unpredictable restrictions on people’s conversations about important topics.

Automated tools don’t work: what a surprise!

As the COVID-19 crisis broke out, emergency health guidelines forced big social media companies to send their content moderators home. Facebook and the like promised to live up to expectations by basing daily content moderation on their so-called artificial intelligence. It only took a few hours to observe glitches in the system.

Their “anti-spam” systems were striking down quality COVID-19 content from trustworthy sources as violations of the platforms’ community guidelines. Sharing newspaper articles, linking to official governmental websites or simply mentioning the term “coronavirus” in a post could result in having your content preemptively blocked.

This whole trend perfectly demonstrates why relying on automated processes can only be detrimental to freedom of expression and to freedom of receiving and imparting information. The current context led even the Alan Turing Institute to suggest that content moderators should be considered “key workers” in the context of the COVID-19 pandemic.

Content filters show high margins of error and are prone to over-censoring. Yet the European Parliament adopted a resolution on the EU’s response to the pandemic which calls on social network companies to proactively monitor and “stop disinformation and hate speech”. In the meantime, the European Commission continues its “voluntary approach” with the social media platforms and contemplates proposing a regulation in the near future.

Criminalising misinformation: a step too far

In order to respond swiftly to the COVID-19 health crisis, some Member States are desperately trying to control the flow of information about the spread of the virus. In their efforts, they have been seduced into adopting hasty legislation that criminalises disinformation and misinformation, which may ultimately lead to state-sponsored censorship and the suppression of public discourse. For instance, Romania granted new powers to its National Authority for Administration and Regulation in Communications to order take-down notices for websites containing “fake news”. Draft legislation in neighbouring Bulgaria originally included the criminalisation of spreading “internet misinformation”, with fines of up to 1,000 euros and even imprisonment of up to three years. In Hungary, new emergency measures include the prosecution and potential imprisonment of those who spread “false” information.

The risks of abuse of such measures and unjustified interference with the right to freedom of expression directly impair the media’s ability to provide objective and critical information to the public, which is crucial for individuals’ well-being in times of national health crisis. While extraordinary situations definitely require extraordinary measures, they have to remain proportional, necessary and legitimate. Both the EU and Member States must refrain from undue interference and censorship and instead focus on measures that promote media literacy and protect and support diverse media both online and offline.

None of the approaches taken so far shows a comprehensive understanding of the mechanisms that enable the creation, amplification and dissemination of disinformation as a result of curation algorithms and online advertising models. It is extremely risky for a democratic society to rely on only a very few communication channels, owned by private actors whose business model feeds on sensationalism and shock.

The emergency measures adopted in the fight against the COVID-19 health crisis will determine what European democracies look like in its aftermath. The upcoming Digital Services Act (DSA) is a great opportunity for the EU to address the monopolisation of our online communication space. Further action should be taken specifically in relation to the micro-targeting practices of the online advertising industry (AdTech). This crisis has also shown us that the DSA needs to create meaningful transparency obligations for a better understanding of the use of automation and for future research – starting with transparency reports that include information about content blocking and removal.

What we need for a healthy public debate online are not gatekeepers entitled by governments to restrict content in a non-transparent and arbitrary manner. Instead, we need diversified, community-led and user-empowering initiatives that allow everyone to contribute and participate.

Read more:

Joint report by Access Now, Civil Liberties Union for Europe, European Digital Rights, Informing the “disinformation” debate (18.10.18)

Access Now, Fighting misinformation and defending free expression during COVID-19: Recommendations for States (21.04.20)

Digital rights as a security objective: Fighting disinformation (05.12.18)

ENDitorial: The fake fight against fake news (25.07.18)

(Contribution by Chloé Berthélémy, EDRi Policy Advisor)

13 May 2020

Member in the spotlight: D3 – Defesa dos Direitos Digitais


This is the eleventh article of the series “EDRi member in the Spotlight”, in which our members introduce themselves and their work in an in-depth interview format.

Today we introduce our Portuguese member: D3 – Defesa dos Direitos Digitais.

1. Who are you and what is your organisation’s goal and mission?
We are a volunteer-run association dedicated to the defense of fundamental rights in the digital context. Our focus is to ensure autonomy and freedom of choice; uphold privacy and free access to information, knowledge and culture; and defend digital rights as a reinforcement to the principles of a democratic society.

2. How did it all begin, and how did your organisation develop its work?

For many years there was a gap within Portuguese civil society: no entity was dedicated to digital rights issues. Some people would fight for them in their individual capacity, and a few organisations would cover them when absolutely needed, often going beyond their original scope of action by doing so (such as free software organisations).

Around 2016, some people got together and started to discuss how to create such an organisation. The objective was to coordinate the already existing civil society efforts in an organised and dedicated way, with a scope wide enough to allow us to cover any issue within the field – not because we wanted to cover everything, but because we wanted the freedom to tackle any of these issues. It finally happened in March 2017, and we have been active ever since.

3. The biggest opportunity created by advancements in information and communication technology is…

It can enable further universal access to many rights such as education, participation in democratic life, and access to more information and data, providing tools both for day-to-day life and to support individuals and society in exceptional moments.

4. The biggest threat created by advancements in information and communication technology is…

Lack of oversight and disregard for the dangers associated with technology and its usage; excessive optimism about the capacity of technology to solve problems for both individuals and society; the magical thinking that comes with a lack of understanding of technology and science in general.

5. Which are the biggest victories/successes/achievements of your organisation?

We managed to become the first divergent voice in matters where there used to be no entity representing the public interest. We brought new issues to the Portuguese public debate, like data retention, copyright, net neutrality, electronic voting, public surveillance, and more. Our biggest achievement was having our data retention complaint to the Justice Ombudsman reach the Constitutional Court (the decision is still pending).

6. If your organisation could now change one thing in your country, what would that be?

Exclude non-policy influences from public policy decision making.

7. What is the biggest challenge your organisation is currently facing in your country?

Internally we face the usual issues related to the small scale of the country; for example, we could use more volunteers lending a hand.

Externally, right now it is impossible to escape the subject of COVID-19, which is making some people fall prey to questionable tech-solutionist promises inspired by the practices of non-democratic countries.

8. How can one get in touch with you if they want to help as a volunteer, or donate to support your work?

You can reach us at

If you speak Portuguese, you can find more information on our website, including how to volunteer and how to donate. A smaller-scale English-language version is also in our plans.

Read more:

EDRi Member in the spotlight series

D3 Home Page