08 Jun 2020

Open Letter: ending gag lawsuits in Europe – protecting democracy and fundamental rights

By EDRi

The European Digital Rights network joined 118 civil society organisations from across the globe in signing an open letter (the latest act in a longstanding movement) addressing the need to end gag lawsuits that threaten the public interest by allowing powerful actors to silence those who would speak against them.

Read the letter here or find it below:

The problem: gag lawsuits against public interest defenders

The EU must end gag lawsuits used to silence individuals and organisations that hold those in positions of power to account. Strategic Lawsuits Against Public Participation (SLAPP) are lawsuits brought forward by powerful actors (e.g. companies, public officials in their private capacity, high profile persons) to harass and silence those speaking out in the public interest. Typical victims are those with a watchdog role, for instance: journalists, activists, informal associations, academics, trade unions, media organisations and civil society organisations.

Recent examples of SLAPPs include PayPal suing SumOfUs for a peaceful protest outside PayPal’s German headquarters; co-owners of Malta’s Satabank suing blogger Manuel Delia for a blog post denouncing money laundering at Satabank; and Bollore Group suing Sherpa and ReAct in France to stop them from reporting human rights abuses in Cameroon. In Italy, more than 6,000 defamation lawsuits (two-thirds of those filed against journalists and media outlets annually) are dismissed as meritless by a judge. When Maltese journalist Daphne Caruana Galizia was brutally killed, there were 47 SLAPPs pending against her.

SLAPPs are a threat to the EU legal order, and, in particular:

  • A threat to democracy and fundamental rights. The EU is founded on the rule of law and respect for human rights. SLAPPs impair the right to freedom of expression, to public participation and to assembly of those who speak out in the public interest, and have a chilling effect on the exercise of these rights by the community at large.
  • A threat to access to justice and judicial cooperation. Cross-border judicial cooperation relies on the principles of effective access to justice across the Union and mutual trust between legal systems. That trust must be based on the legally enforceable upholding of common values and minimum standards. To the extent that they distort and abuse the system of civil law remedies, SLAPPs undermine the mutual trust between EU legal systems: member states must be confident that rulings issued by other member states’ courts are not the result of abusive legal strategies and are adopted as the outcome of genuine proceedings.
  • A threat to the enforcement of EU law, including in connection to the internal market and the protection of the EU budget. The effective enforcement of EU law, including the proper functioning of the internal market, depends on the scrutiny of the behaviour of individual entities by the EU, member states and – crucially – informed individuals. Watchdogs, be they media or civil society actors, play a key enforcement role. Therefore, the absence of a system which safeguards public scrutiny is a threat to the enforcement of EU law. The same reasoning applies to the management of EU programmes and budget, which cannot be monitored through the sole vigilance of the European Commission.
  • A threat to freedom of movement. The absence of rules to protect watchdogs from SLAPP has an impact on the exercise of the Treaty’s fundamental freedoms, since it affects the ability of media, civil society organisations and information services providers to confidently operate in jurisdictions where the risk of SLAPPs is higher, and discourages people from working for organisations where they can be the target of SLAPPs.

The solution: an EU set of anti-SLAPP measures

The EU can and must end SLAPPs by adopting the following complementary measures to protect all those affected by SLAPPs:

1. An anti-SLAPP directive

An anti-SLAPP directive is needed to establish a Union-wide minimum standard of protection against SLAPPs, by introducing exemplary sanctions to be applied to claimants bringing abusive lawsuits and procedural safeguards for SLAPP victims, including special motions to contest the admissibility of certain claims and/or rules shifting the burden to the plaintiff to demonstrate a reasonable probability of success in such claims, as well as other types of preventive measures. The Whistle-Blower Directive sets an important precedent, protecting those who report a breach of Union law in a work-related context. Now the EU must ensure a high standard of protection against gag lawsuits for everyone who speaks out in the public interest – irrespective of the form and the context.

The legal basis for an anti-SLAPP directive is to be found in multiple provisions of the Treaty; for example, Article 114 TFEU on the proper functioning of the internal market, Article 81 TFEU on judicial cooperation and effective access to justice and Article 325 TFEU on combating fraud related to EU programmes and budgets.

2. The reform of Brussels I and Rome II Regulations
Brussels I Regulation (recast) contains rules which grant claimants the ability to choose where to make a claim. This must be amended to end forum shopping in defamation cases, which forces defendants to hire and pay for defence in countries whose legal systems are unknown to them and where they are not based. This is beyond the means of most and falls foul of the principles of fair trial and equality of arms.

Rome II Regulation does not regulate which national law will apply to a defamation case. This allows claimants to select the most favourable substantive law and therefore leads to a race to the bottom. Today, victims may be subject to the lowest standard of freedom of expression applicable to their case.

3. Support all victims of SLAPPs
Funds are needed to morally and financially support all victims of SLAPPs, especially with legal defence. Justice Programme funds should be used to train judges and practitioners, and a system to publicly name and shame the companies that engage in SLAPPs, for example through an EU register, should be created.

Finally, the EU must ensure that the scope of anti-SLAPP measures includes everybody affected by SLAPPs: journalists, activists, trade unionists, academics, digital security researchers, human rights defenders, and media and civil society organisations, among others.

This paper was signed by 119 media and civil society organisations.

You can find the original letter and the full list of signatories here.

04 Jun 2020

EDRi submits response to the European Commission AI consultation – will you?

By EDRi

Today, 4th June 2020, European Digital Rights (EDRi) submitted its response to the European Commission’s public consultation on artificial intelligence (AI). In addition, EDRi released its recommendations for a fundamental rights-based Artificial Intelligence Regulation.

AI is a growing concern for all who care about digital and human rights. AI systems have the ability to exacerbate mass surveillance and intrusion into our personal lives, reflect and reinforce some of the deepest societal inequalities, fundamentally alter the delivery of public and essential services, undermine data protection legislation, and disrupt the democratic process.

In Europe, we have already seen the negative impacts of automated systems at play at the border, in predictive policing systems which only increase over-policing of racialised communities, in ‘fraud detection’ systems which target poor, working class and migrant areas, and countless more examples. Read more in our explainer.

Therefore, EDRi calls on the European Commission to set clear red-lines for impermissible use, ensure democratic oversight, and include the strongest possible human rights protections.

We encourage all people, collectives and organisations to respond to the consultation and make sure these issues are addressed. Need help answering the consultation? Read EDRi’s answering guide for the public here.

Will you make your voice heard in a crucial moment for the future of our societies? Submit your own response to the consultation online here.

Read more:

EDRi Consultation response: European Commission consultation on Artificial Intelligence (04.06.2020)
https://edri.org/wp-content/uploads/2020/06/AI_EDRiConsultationResponse.pdf

EDRi Recommendations for a fundamental rights-based Artificial Intelligence Regulation: addressing collective harms, democratic oversight and impermissible use (04.06.2020)
https://edri.org/wp-content/uploads/2020/06/AI_EDRiRecommendations.pdf

EDRi Explainer AI and fundamental rights: How AI impacts marginalised groups, justice and equality (04.06.2020)
https://edri.org/wp-content/uploads/2020/06/AI_EDRiExplainer.pdf

EDRi Answering Guide to the European Commission consultation on AI (04.06.2020)
https://edri.org/wp-content/uploads/2020/06/AI_EDRiAnsweringGuide.pdf

04 Jun 2020

Can the EU make AI “trustworthy”? No – but they can make it just

By EDRi

Today, 4 June 2020, European Digital Rights (EDRi) submitted its answer to the European Commission’s consultation on the AI White Paper. On top of our response, in our additional paper we outline recommendations to the European Commission for a fundamental rights-based AI regulation. You can find our consultation response, recommendations paper, and answering guide for the public here.

How to ensure “trustworthy AI” has been hotly debated since the European Commission launched its White Paper on AI in February this year. Policymakers and industry have hosted numerous conversations about “innovation”, “Europe becoming a leader in AI”, and promoting “Fair AI”.

Yet “fair” or “trustworthy” artificial intelligence seems a long way off. As governments, institutions and industry swiftly move to incorporate AI into their systems and decision-making processes, grave concerns remain as to how these changes will impact people, democracy and society as a whole.

EDRi’s response outlines the main risks AI poses for people, communities and society, and outlines recommendations for an improved, truly ‘human-centric’ legislative proposal on AI. We argue that the EU must reinforce the protections already embedded in the General Data Protection Regulation (GDPR), outline clear legal limits for AI by focusing on impermissible use, and foreground principles of collective impact, democratic oversight, accountability, and fundamental rights. Here’s a summary of our main points.

Put people before industrial policy

A ‘human-centric’ approach to AI requires that considerations of safety, equality, privacy, and fundamental rights be the primary factors underpinning decisions as to whether to promote or invest in AI.

However, the European Commission’s White Paper takes as its point of departure the inherent economic benefits of promoting AI, particularly in the public sector. Promoting AI in the public sector as a whole, without requiring scientific evidence to justify the need for or purpose of such applications in potentially harmful situations, is likely to have the most direct consequences on people’s everyday lives, particularly the lives of marginalised groups.

Despite wide-ranging applications that could advance our societies (such as some uses in the field of health), we have also seen the vast negative impacts of automated systems at play at the border, in predictive policing systems which exacerbate the over-policing of racialised communities, in ‘fraud detection’ systems which target poor, working class and migrant areas, and in countless more examples. All such examples highlight the potentially devastating consequences AI systems can have in the public sector, contesting the case for ‘promoting the uptake of AI’ and underlining the need for AI regulation to be rooted in a human-centric approach.

The development of artificial intelligence technology offers huge potential opportunities for improving our economies and societies, but also extreme risks. Poorly-designed and governed AI will exacerbate power imbalances and inequality, increase discrimination, invade privacy and undermine a whole host of other rights. EU legislation must ensure that cannot happen. Nobody’s rights should be sacrificed on the altar of innovation.

said Chris Jones, Statewatch

Address collective harms of AI

The vast potential scale and impact of AI systems challenge existing conceptions of harm. Whilst in many ways we can view the challenges posed by AI as fundamental rights issues, the harms perpetrated are often much broader, disadvantaging communities, the economy, democracy and entire societies. From the impending threat of mass surveillance as a result of biometric processing in publicly accessible spaces, to the use of automated systems or ‘upload filters’ to moderate content on social media, to severe disruptions to the democratic process, we see that the impact goes far beyond the level of the individual. One specificity of regulating AI is the need to address societal-level harms.

Prevent harms by focusing on impermissible use

Just as the problems with AI are collective and structural, so must be the solutions. The European Commission’s White Paper outlines some safeguards to address ‘high-risk’ AI, such as requirements on training data to correct for bias and on human oversight. Whilst these safeguards are crucial, they will not address the irreparable harms that will result from a number of uses of AI.

“The EU must move beyond technical fixes for the complex problems posed by AI. Instead, the upcoming AI regulation must determine the legal limits, impermissible uses or ‘red-lines’ for AI applications. This is a necessary step for a people-centered, fundamental rights-based AI”

says Sarah Chander, Senior Policy Adviser, EDRi.

The EDRi network lists some of the impermissible uses of AI:

  • indiscriminate biometric surveillance and biometric capture and processing in public spaces [1]
  • use of AI to solely determine access to or delivery of essential public services (such as social security, policing, migration control)
  • uses of AI which purport to identify, analyse and assess emotion, mood, behaviour, and sensitive identity traits (such as race, disability) in the delivery of essential services
  • predictive policing
  • autonomous lethal weapons and other uses which identify targets for lethal force (such as law and immigration enforcement)

“The EU must ensure that states and companies meet their obligations and responsibilities to respect and promote human rights in the context of automated decision-making systems. EU institutions and national policymakers must explicitly recognise that there are legal limits to the use and impact of automation. No safeguard or remedy would make indiscriminate biometric surveillance or predictive policing acceptable, justified or compatible with human rights”

said Fanny Hidvegi, Europe Policy Manager at Access Now

Require democratic oversight for AI in the public sphere

The rapidly increasing deployment of AI systems presents a major governance issue. Due to the (designed) opacity of the systems, the complete lack of transparency from governments when such systems are deployed for use in public, essential functions, and the systematic lack of democratic oversight and engagement, AI is furthering the ‘power asymmetry between those who develop and employ AI technologies, and those who interact with and are subject to them.’ [2]

As a result, decisions impacting public services will be more opaque, increasingly privately owned, and even less subject to democratic oversight. It is vital that the EU’s regulatory proposal on AI addresses this by implementing mandatory measures of democratic oversight for the procurement and deployment of AI in the public sector and essential services. Moreover, the EU must explore methods of direct public engagement on AI systems. In this regard, authorities should be required to specifically consult marginalised groups likely to be disproportionately impacted by automated systems.

Implement the strongest possible fundamental rights protections

Regulation on AI must reinforce, rather than replace, the protections already embedded in the General Data Protection Regulation (GDPR). The European Commission has the opportunity to complement these protections with safeguards for AI. To put people first and provide the strongest possible protections, all AI systems should undergo mandatory human rights impact assessments. Such an assessment should evaluate the collective, societal, institutional and governance implications the system poses, and outline adequate steps to mitigate them.

“The deployment of such systems for predictive purposes comes with high risks of human rights violations. Introducing ethical guidelines and standards for the design and deployment of these tools is welcome, but not enough. Instead, we need the European Union and Member States to ensure compliance with the applicable regulatory frameworks, and draw clear legal limits to ensure AI is always compatible with fundamental rights.”

says Eleftherios Chelioudakis – Homo Digitalis

EDRi’s position calls for fundamental rights to be prioritised in the regulatory proposal for all AI systems, not only those categorised as ‘high-risk’. We argue that AI regulation should avoid creating loopholes or exemptions based on sector, the size of the enterprise, or whether the system is deployed in the public sector.

“It is crucial for the EU to recognize that the adoption of AI applications is not inevitable. The design, development and deployment of systems must be tested against human rights standards in order to establish their appropriate and acceptable use. Red lines are thus an important piece of the AI governance puzzle. Recognizing impermissible use at the outset is particularly important because of the disproportionate, unequal and sometimes irreversible ways in which automated decision making systems impact societies.”

said Vidushi Marda, Senior Programme Officer, at ARTICLE 19

The rapid uptake of AI will fundamentally change our society. From a human rights perspective, AI systems have the ability to exacerbate surveillance and intrusion into our personal lives, fundamentally alter the delivery of public and essential services, vastly undermine vital data protection legislation, and disrupt the democratic process.

For some, AI will mean reinforced, deeper harms as such systems feed and embed existing processes of marginalisation. For all, the route to remedies, accountability, and justice will be ever more unclear, as power further shifts to private actors and public goods and services become not only automated but privately owned.

There is no “trustworthy AI” without clear red-lines for impermissible use, democratic oversight, and a truly fundamental rights-based approach to AI regulation. The European Union’s upcoming legislative proposal on artificial intelligence is a major opportunity to change this: to protect people and democracy from the escalating economic, political and social issues posed by AI.

Footnotes:

[1] EDRi (2020). ‘Ban Biometric Mass Surveillance: A set of fundamental rights demands for the European Commission and Member States’ https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf

[2] Council of Europe (2019). ‘Responsibility and AI’, DGI(2019)05, Rapporteur: Karen Yeung. https://rm.coe.int/responsability-and-ai-en/168097d9c5

Read more:

EDRi Consultation Response: European Commission Consultation on the White Paper on Artificial Intelligence
https://edri.org/wp-content/uploads/2020/06/AI_EDRiConsultationResponse.pdf

EDRi Recommendations for a Fundamental Rights-Based Artificial Intelligence Regulation: Addressing collective harms, democratic oversight and impermissible use
https://edri.org/wp-content/uploads/2020/06/AI_EDRiRecommendations.pdf

Access Now Consultation Response: European Commission Consultation on the White Paper on Artificial Intelligence
https://www.accessnow.org/EU-white-paper-consultation

Bits of Freedom (2020). ‘Facial recognition: A convenient and efficient solution, looking for a problem?’
https://www.bitsoffreedom.nl/2020/01/29/facial-recognition-a-convenient-and-efficient-solution-looking-for-a-problem/

EDRi (2020). ‘Ban Biometric Mass Surveillance: A set of fundamental rights demands for the European Commission and Member States’
https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf

Privacy International and Article 19 (2018). ‘Privacy and Freedom of Expression in the Age of Artificial Intelligence’
https://www.article19.org/wp-content/uploads/2018/04/Privacy-and-Freedom-of-Expression-In-the-Age-of-Artificial-Intelligence-1.pdf

27 May 2020

COVID-Tech: Surveillance is a pre-existing condition

By Guest author

In EDRi’s series on COVID-19, COVID-Tech, we explore the critical principles for protecting fundamental rights while curtailing the spread of the virus, as outlined in the EDRi network’s statement on the pandemic. Each post in this series tackles a specific issue about digital rights and the global pandemic in order to explore broader questions about how to protect fundamental rights in a time of crisis. In our statement, we emphasised that “measures taken should not lead to discrimination of any form, and governments must remain vigilant to the disproportionate harms that marginalised groups can face.” In this third post of the series, we look at surveillance, situating the current measures in their longer-term trajectory, particularly the surveillance of marginalised communities.

One minor highlight in this otherwise bleak public health crisis is that privacy is trending. Now more than ever, conversations about digital privacy are reaching the general public. This is a vital development as states and private actors pose ever greater threats to our digital rights in their responses to COVID-19. The more they watch us, the more we need to watch them.

One concern, however, is that these debates have siphoned this new attention to privacy into a highly technical, digital realm. The debate is dominated by the mechanics of digital surveillance: whether we should have centralised or decentralised contact tracing apps, and how Zoom tracks us as we work, learn and do yoga at home.

Although important, this is only a partial framing of how privacy and surveillance are experienced during the pandemic. Less prominently featured are the various other privacy infringements being ushered in as a result of COVID-19. We should not forget that for many communities, surveillance is not a COVID-19 issue – it was already there.

The other sides of COVID surveillance

Very real concerns about digital measures proposed as pandemic responses should not overshadow the broader context of mass-scale surveillance emerging before our eyes. Governments across Europe are increasingly rolling out measures to physically track the public, via telecommunications and other data, without explicit reference to how this will impede the spread of the virus, or when the use and storage of this data will end.

We are also seeing the emergence of bio-surveillance dressed in a public health response’s clothing. From the Polish government’s app mandating the use of geo-located selfies, to talk of using facial biometrics to create immunity passports to facilitate the return of workers in the UK, governments have used, and will continue to use, the pandemic as cover to get into our homes, and closer to us.

Physical surveillance techniques, meanwhile, feature less prominently in media coverage. Such measures are – in many European countries – coupled with heightened punitive powers for law enforcement. Police have deployed drones in France, Belgium and Spain, and communities in cities across Europe are feeling the pressure of an increased police presence. Heightened measures of physical surveillance cannot be accepted at face value or ignored. Instead, they must be viewed in tandem with new digital developments.

Who can afford privacy?

These measures do not harm everyone equally. In unequal societies, surveillance will always target racialised [1] people, migrants, and the working classes. These groups bear the burden of heightened policing powers and punitive ‘public health’ enforcement – being more likely to need to leave the house for work, take public transport, live in over-policed neighbourhoods, and in general be perceived as suspicious, criminal, or as necessitating surveillance.

This is as much an issue of inequality as it is of privacy. For some, the consequences of intensified surveillance under COVID-19 mean heightened exposure to the virus through direct contact with police, increased monitoring of their social media, the anxiety of constant sirens and, in the worst cases, the real bodily harm of police brutality.

In recent days, Romani communities in Slovakia have reported numerous cases of police brutality, some against children playing outside. Black, brown and working class communities across Europe are experiencing the physical and psychological effects of being watched even more than usual. In Brussels, where EDRi is based, a young man died in contact with the police during raids.

This vulnerability is economic, too – for many, privacy is a scarce commodity. It is purchased by those who live in affluent neighbourhoods and those with ‘work from home’ jobs. Those who cannot afford privacy in this more basic sense will, unfortunately, not be touched by debates about contact tracing. For many, digital exclusion means that measures such as contact-tracing apps are completely irrelevant. Worse, if future responses to COVID-19 are designed on the assumption that we all use smartphones or have identity documents, they will be immensely harmful.

These measures are being portrayed as ‘new’, at least in our European ‘liberal’ democracies. But for many, surveillance is not new. Governmental responses to the virus have simply brought to the general public a reality reserved for people of colour and other marginalised communities for decades. Long before COVID-19, European governments were deploying technology and other data-driven tools to identify, ‘risk-score’ and experiment on groups at the margins, whether by predicting crime, forecasting benefit fraud, or assessing whether asylum applicants are telling the truth based on their facial movements.

We need to integrate these experiences of surveillance into the mainstream privacy debate. These conversations have been sidelined or explained away with the logic of individual responsibility. For example, last year, in a public debate on technology and surveillance of marginalised communities, one participant swiftly moved the conversation away from police profiling and toward privacy literacy. They asked the room of anti-racist activists “does everybody here use a VPN?”

Without a holistic picture of how surveillance affects people differently – the vulnerabilities of communities and the power imbalances that produce them – we will easily fall into the trap of believing that quick-fix solutions can guarantee our privacy, and that surveillance can be justified.

Is surveillance a price worth paying?

If we don’t root our arguments in people’s real life experiences of surveillance, not only do we devalue the right to privacy for some, but we also risk losing the argument to those who believe that surveillance is a price worth paying.

This narrative is a direct consequence of an abstract, technical and supposedly neutral framing of surveillance and its harms. Through this lens, infringements of privacy are minor, necessary evils. As a result, privacy will always lose in the false ‘privacy vs health’ trade-off. We should challenge the trade-off itself, but we can also ask: who will really pay the price of surveillance? How do people experience breaches of privacy?

Another question we need to ask is: who profits from surveillance? Numerous companies have shown their willingness to enter public-private alliances, using COVID-19 as an opportunity to market surveillance-based ‘solutions’ to health problems (often with dubious claims). Yet, again, this is not new – companies like Palantir, contracted by the UK government to process confidential health data during COVID-19, have a much longer-standing role in the surveillance of migrants and people of colour, and in facilitating deportations. Other large tech companies will use COVID-19 to continue their expansion into areas like ‘digital welfare’. Here, deeply uneven power relationships will be further cemented by the introduction of digitalised tools, making them harder to challenge and posing ever greater risks to those who rely on the state. If unchallenged, this climate of techno-solutionism will only increase the risk of new technologies being tested on, and data being extracted from, marginalised groups for profit.

A collective privacy

There is a danger in viewing surveillance as exceptional, a mere feature of COVID-19 times. It suggests that protecting privacy is only newsworthy when it concerns ‘everyone’ or ‘society as a whole’. What that means, though, is that we don’t actually mind if a few don’t have privacy.

Surveillance measures and other threats to privacy have countless times been justified for the ‘public good’. Privacy – framed in abstract, technical and individualistic terms – simply cannot compete, and ever greater surveillance will be justified. This surveillance will be digital and physical and everything in between, and profits will be made. Alternatively, we can fight for privacy as a collective vision – something everybody should have. Collective privacy is not exclusive or abstract – it means looking further than how individuals might adjust their privacy settings, or how privacy can be guaranteed in contact tracing apps.

A collective vision of privacy means contesting ramped-up police monitoring and the use of marginalised groups as guinea pigs for new digital technologies, as well as ensuring that new technologies have adequate privacy protections. It also requires us to ask: who will be the first to feel the impact of surveillance, and how do we support them? To answer these questions, we need to recognise surveillance in all its manifestations, including well before the outbreak of COVID-19.

Original illustration by Miguel Brieva, licensed under CBNA 2020, La Imprenta, included in “Que No Haya Sido en Vano”.

Read more:

Telco data and Covid-19: A primer (21.04.20)
https://privacyinternational.org/explainer/3679/telco-data-and-covid-19-primer

Slovak police officer said to have beaten five Romani children in Krompachy settlement and threatened to shoot them (29.04.20)
http://www.romea.cz/en/news/world/slovak-police-officer-said-to-have-beaten-five-romani-children-in-krompachy-settlement-and-threatened-to-shoot-them

Amid COVID-19 Lockdown, Justice Initiative Calls for End to Excessive Police Checks in France (27.03.20)
https://www.justiceinitiative.org/newsroom/amid-covid-19-lockdown-justice-initiative-calls-for-end-to-excessive-police-checks-in-france

Digital divide ‘isolates and endangers’ millions of UK’s poorest (28.04.20)
https://www.theguardian.com/world/2020/apr/28/digital-divide-isolates-and-endangers-millions-of-uk-poorest

The EU is funding dystopian Artificial Intelligence projects (22.01.20)
https://www.euractiv.com/section/digital/opinion/the-eu-is-funding-dystopian-artificial-intelligence-projects

A Price Worth Paying: Tech, Privacy and the Fight Against Covid-19 (24.04.20)
https://institute.global/policy/price-worth-paying-tech-privacy-and-fight-against-covid-19

COVID-Tech: Emergency responses to COVID-19 must not extend beyond the crisis (15.04.20)
https://edri.org/emergency-responses-to-covid-19-must-not-extend-beyond-the-crisis/

COVID-Tech: COVID infodemic and the lure of censorship (13.04.2020)
https://edri.org/covid-infodemic-and-the-lure-of-censorship/

Footnotes

  1. This term refers to racial, ethnic and religious minorities, emphasising that racialisation is a structural process inflicted on people, groups and communities.

(Contribution by Sarah Chander, EDRi senior policy advisor)

27 May 2020

Competition law: Big Tech mergers, a dominance tool

By Laureline Lemoine

This is the third article in a series dealing with competition law and Big Tech. The aim of the series is to look at what competition law has achieved when it comes to protecting our digital rights, where it has failed to deliver on its promises, and how to remedy this. Read the first article on the impact of competition law on your digital rights here and the second article on what to do against Big Tech’s abuse here.

One way Big Tech has been able to achieve a dominant position in our online lives is through mergers and acquisitions. In recent years, the five biggest tech companies (Amazon, Apple, Alphabet (parent company of Google), Facebook and Microsoft) have spent billions to strengthen their positions through acquisitions that shaped our digital environment. Notorious acquisitions which made headlines include Facebook/WhatsApp, Facebook/Instagram, Microsoft/LinkedIn, Google/YouTube and, more recently, Amazon/Twitch.

Beyond infamous social media platforms and big deals, Big Tech companies also acquire lesser-known companies and start-ups, which also contribute greatly to their growth. While not making big newsworthy acquisitions, Apple still “buys a company every two to three weeks on average”, according to its CEO. Google/Alphabet has acquired over 250 companies since 2001, while Facebook has acquired over 90 since 2007. Big Tech’s intensive acquisition policy applies particularly to artificial intelligence (AI) start-ups. This is worrying because reducing competitors also means reducing diversity, leaving Big Tech in charge of developing these technologies at a time when AI is increasingly used in decisions affecting individuals and is known to be susceptible to bias.

Big Tech’s intensive acquisition policy can serve different goals, sometimes at the same time. These companies acquire competitors who could have offered, or were offering, consumers an alternative: to eliminate or shelve them (“killer acquisitions”), to consolidate a position in the same or a neighbouring market, or to acquire their technical or human skills (“talent acquisitions”). See for example this overview of Google’s and Facebook’s acquisitions.

And in times of economic trouble, Big Tech is even more on the prowl. In the US, Senator Warren wants to introduce a moratorium on COVID-era acquisitions.

Big Tech’s mergers are mostly unregulated

While mergers and acquisitions are part of business life, the issue is that most of Big Tech’s acquisitions are not subject to any control, and the few that are reviewed have been authorised without conditions. This has led to debates on the state of competition law: are the current rules fit for today’s age of data-driven acquisitions and technology takeovers?

While some have already called for a ban on acquisitions by certain companies, others are discussing the thresholds set in competition law that trigger review by competent authorities, as well as, more fundamentally, the criteria used to review mergers.

The issue with thresholds is that they depend on monetary turnover, which many companies and start-ups do not reach, either because they have not yet monetised their innovations or because their value is not reflected in their turnover but, for example, in their data trove. Despite the targets’ low turnovers, Facebook was still willing to spend 1 and 19 billion dollars on Instagram and WhatsApp respectively. These data-driven mergers allowed these companies’ data sets to be aggregated, increasing Facebook’s (market) power.

The French competition authority suggests, for example, introducing an obligation to inform the EU and/or national competition authorities of all mergers implemented in the EU by “structuring” companies. These “structuring” companies would be clearly defined according to objective criteria and, where risks arise, the authorities would ask these players to notify their mergers for review.

However, although the acquisition of WhatsApp by Facebook was reviewed by the European Commission thanks to a referral from national competition authorities, the operation was still authorised. This points to another issue: the place of data protection and privacy in merger control. Competition authorities assume that, since there is a data protection framework, data protection rights are respected and individuals are exercising their rights and choices. But this assumption does not take into account the reality of the power imbalance between users and Big Tech. In this regard, academics such as Orla Lynskey suggest solutions such as increased cooperation between competition, consumer and data protection authorities to understand and examine the actual barriers to consumer choice in data-driven markets. Moreover, where it is found that consumers value data privacy as a dimension of quality, the competitive assessment should reflect whether a given operation would deteriorate that quality.

A wind of change might already be blowing from the US: last February, the Federal Trade Commission issued “Special Orders” to the five Big Tech companies, “requiring them to provide information about prior acquisitions not reported to the antitrust agencies”, including how acquired data has been treated.

Google/Fitbit: the quest for our sensitive data

The debate recently resurfaced when Google’s proposed acquisition of Fitbit was announced. Immediately, a number of concerns were raised, both in terms of competition and of privacy (see for example the concerns of the European Consumer Organisation (BEUC) and of the Electronic Frontier Foundation (EFF)). From a fundamental rights perspective, the most worrying issue lies in the fact that Google would be acquiring Fitbit’s health data. As Privacy International warns, “a combination of Google / Alphabet’s potentially extensive and growing databases, user profiles and dominant tracking capabilities with Fitbit’s uniquely sensitive health data could have pervasive effects on individuals’ privacy, dignity and equal treatment across their online and offline existence in future.”

Such concerns are also shared beyond civil society, as the announcement led the European Data Protection Board to issue a statement, warning that “the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.”

Google has shown that it cannot be trusted with our personal data. On top of a long history of competition and data protection infringements, Google is making questionable moves into the healthcare market and already breaking patients’ trust.

Beyond these concerns, this operation will be an opportunity for the European Commission to adopt a new approach after the Facebook/WhatsApp debacle. Google is acquiring Fitbit for its data, and the competitive assessment should therefore reflect that. Moreover, the Commission should use this case as an opportunity to consult with consumer and data protection authorities.

Read more:

Google wants to acquire Fitbit, and we shouldn’t let it! (13.11.19)
https://privacyinternational.org/news-analysis/3276/google-wants-acquire-fitbit-and-we-shouldnt-let-it

GOOGLE-FITBIT MERGER: Competition concerns and harms to consumers (07.07.20)
http://www.beuc.eu/publications/beuc-x-2020-035_google-fitbit_merger_competition_concerns_and_harms_to_consumers.pd

Considering Data Protection in Merger Control Proceedings (06.06.18)
https://one.oecd.org/document/DAF/COMP/WD(2018)70/en/pdf

Competition law: what to do against Big Tech’s abuse? (01.04.2020)
https://edri.org/competition-law-what-to-do-against-big-tech-abuse/

The impact of competition law on your digital rights (19.02.2020)
https://edri.org/the-impact-of-competition-law-on-your-digital-rights/

(Contribution by Laureline Lemoine, EDRi senior policy advisor)

27 May 2020

More than the sum of our parts: a strategy for the EDRi Network

By Claire Fernandez

It took over a year: from an EDRi members’ survey in early 2019 to the vote by the (online) General Assembly of members at the end of April 2020. In those months we held workshops, webinars, calls, several rounds of comments, draft iterations and about 50 consultations. We won’t lie, it was a lengthy, challenging and resource-consuming process. But it was worth it: we can now announce, proud and excited, the adoption of the EDRi Network 2020-2024 Strategy (see the summary linked below).

Along the way, we learned a great deal about the context EDRi operates in and how the network is situated within European societies. We also learned how strategic planning processes can unveil larger questions about a network’s identity and health, and about what brings people together.

Values vs practices

There are many diverse visions of EDRi and of what a strategy is. The EDRi network is comprised of a wide-ranging constellation of distinct voices; there is no ‘one size fits all’ narrative that encompasses some of the most complex issues. Some, like Richard D. Bartlett, would argue that people would rather align around a community of practices than around shared ‘values’. In EDRi’s case, what practices bring us together? ‘EDRis’ share a passion for working in a community based on expertise, trust and hard work. We therefore worked to strike a balance and design a strategy that would give an overarching common sense of purpose and direction while leaving enough space for people to carry on with their work.

The strategy

It feels daring and risky to put our vision and assumptions on paper and boil down what EDRi is all about. The strategy starts by highlighting the problems EDRi faces and conveying a sense of urgency for action. While technologies represent opportunities, the near-total digitisation and permanent recording of our lives pose a significant risk to our autonomy and to our democracies.

A significant piece of the strategy is the power analysis, which describes the context in which EDRi operates. Our world is characterised by power asymmetries between state and private actors on the one hand, and people on the other. These power imbalances threaten democracy and shape people’s behaviour. There is a lot to be done to change the power structures that allow for injustice and human rights violations in the digital age, and EDRi will not succeed alone. We play a contributing role based on our mission, identity and strengths as a digital rights network. We aim for a world in which people live with dignity and vitality, and for a fair and open digital environment that enables everyone to flourish and thrive to their fullest potential. This is part and parcel of many other social justice causes, as mobilisation and democratic change are highly dependent on technologies.

For EDRi that means that we will work in the next five years to influence decision-makers to regulate and change surveillance-based practices.

What’s next?

Now that our shared vision and purpose are articulated for a range of audiences, implementation can start. In the coming months, our work as a network will focus on human rights-based responses to the COVID-19 pandemic, on meaningful platform regulation and on demanding bans on invasive and risky biometric technologies.

A strategy is a frame, the start of a process rather than a document. We will therefore need to test our assumptions, reflect, iterate and build trust to advance digital rights for all. EDRi’s mission is ambitious. To succeed, we need a healthy network, fierce EDRi member organisations and empowered people. Our vehicle for change is a sustainable and resilient field that combats burnout and toxicity and relies on both personal relationships and professional processes.

The pandemic is a turning point that marks the beginning of a different era. It can leave us feeling vulnerable and afraid for ourselves and our loved ones, but it also reminds us that we are part of a broader community. What better time than this crisis for a new beginning, for EDRi and for the societies we live in, to create a world of dignity in the digital age?

Read more:

Strategy summary
https://edri.org/wp-content/uploads/2020/05/EDRi_Strategy_Summary.pdf

EDRi calls for fundamental rights-based responses to COVID-19 (20.03.20)
https://edri.org/covid19-edri-coronavirus-fundamentalrights/

DSA: Platform Regulation Done Right (09.04.20)
https://edri.org/dsa-platform-regulation-done-right/

Ban biometric mass surveillance! (13.05.20)
https://edri.org/blog-ban-biometric-mass-surveillance/

27 May 2020

Hungary: “Opinion police” regulate Facebook commentaries

By Guest author

There have been a number of critical news reports from around the world stating that Hungary’s COVID-19 state-of-emergency legislation is “creating a chilling effect”. Such headlines miss the mark somewhat, as chilling effects are far from new. Individuals who cross government authorities and their allies and supporters with public and private expressions of criticism have been losing their positions for over a decade; and the chilling effects that successive governments have had on citizens’ behaviour were apparent long before the current regime.

What qualifies as news is the sustained media attention that the chilling effect in Hungary has received over the past two weeks. Its COVID-19 emergency legislation has attracted intense scrutiny nationally and globally.

The media furore was triggered by the detention of two persons by local police authorities due to statements posted on Facebook that allegedly posed the risk of “alarming the population” or “interfering with public protection” during the crisis.

Legal retaliation

The first case involved an individual in Eastern Hungary detained for hours for “publishing false facts on a social media site”. The “alarmist content” consisted of disapproval of the lockdown policy with additional remarks, presumably addressed to the Prime Minister (“You’re a merciless tyrant, but bear in mind that dictators invariably fall”). The man recalls half a dozen law enforcement officers arriving at his home on May 12. The charges were dropped, but the YouTube channel of the Hungarian law enforcement authorities posted a widely viewed video of the man being removed from his home and placed into a police vehicle.

The following day the home of an opposition-party member in the South of the country was raided at dawn, with a heavy police presence; his communication devices were confiscated and the man was detained for four hours at police headquarters for having shared a post from an opposition MEP on a closed Facebook group; the post criticised a controversial government decision to empty thousands of hospital beds across the country to free them for potential Coronavirus patients. He remarked that in the town of Gyula “1,170 [hospital] beds were freed as well”. The fact was not in dispute. The Facebook user from Gyula was not charged with a criminal offense.

In a blog post addressing the widely publicised cases, the Hungarian Civil Liberties Union (HCLU) maintains that existing legislation could have been used to tackle the problem of the publication and dissemination of false information. The post’s headline, “The opinion police are at the door”, alludes to the legendary Socialist-era terror of a doorbell sounding in the middle of the night.

Social media users warned of “continual monitoring”

An announcement posted online by the authorities in one Eastern county of Hungary overtly alerted social media users to the fact that the police are “continually monitoring the internet”, Politico reports. According to the National Law Enforcement website, 87 people have been targeted in criminal investigations in connection with the COVID-19 measures. Of these cases, only 6 have reached the prosecution phase.

What’s in the public interest?

Back at the end of March, when the Hungarian Parliament passed a bill introducing emergency powers without a sunset clause, the move garnered a surprising amount of coverage and criticism. Of particular concern was an amendment to the Criminal Code introducing prison terms of up to five years for individuals convicted of “distorting the truth” or “spreading falsehoods” connected to the Coronavirus pandemic. As in many countries worldwide, the stated aim of the legislation was to protect the public during the pandemic, but anecdotal reports suggest that it is often the authorities themselves, rather than the public, who stand to gain from special protection.

The Hungarian daily Népszava reported that on May 19 the Parliament adopted a 160-page bill which will grant the National Security Services (NSS) a mandate that could entail major data security risks. The NSS will be empowered to “monitor the content of electronic communications networks” at the local and national levels of government to prevent cyber attacks.

News website 444 suggests that a state surveillance system is being established with the passage of this law. In effect, the secret service will be given access to all public data, including tax, social security, health, and criminal records.

Another controversial aspect of the legislative package is that the contact data of persons interrogated over the course of a criminal investigation could be retained by the authorities for up to twenty years, even if the suspect is found innocent.

Privacy experts say that the legislation does not offer sufficient data protection safeguards. The Head of the National Authority for Data Protection and Freedom of Information (NAIH), Attila Péterfalvi, has written to the State Secretary of the Ministry of the Interior with concerns that “unlimited surveillance (…) will not allow for special protection of personal data.”

From state of emergency to surveillance state?

In a resolution adopted by the European Parliament in late April, Hungary was sharply criticised for its COVID-19 measures: limits to free speech under an indefinite state of emergency are “totally incompatible with European values”. The Minister of Justice has announced that a bill revoking the emergency powers is expected to be adopted on June 20. While the government is already declaring victory on the public relations front and calling for “apologies” from Brussels, the contents of the bill are still unknown. Observers suspect that the government will annul the “state of emergency” while preserving many of the emergency powers.

Read more:

The Impact of Covid-19 Measures on Democracy, the Rule of Law and Fundamental Rights in the EU (23.04.20)
https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/651343/IPOL_BRI(2020)651343_EN.pdf

Jourová: Commission looking at Hungary’s emergency changes to labour code and GDPR (15.05.20)
https://www.euractiv.com/section/justice-home-affairs/news/jourova-commission-looking-at-hungarys-emergency-changes-to-labour-code-and-gdpr

Hungary’s Government Using Pandemic Emergency Powers To Silence Critics (18.05.20)
https://www.techdirt.com/articles/20200514/17321444504/hungarys-government-using-pandemic-emergency-powers-to-silence-critics.shtml

(In Hungarian) Mindent is megtudhatnak ezután a nemzetbiztonsági szolgálatok az emberekről [The national security services will now be able to find out everything about people] (19.05.2020)
https://nepszava.hu/3078579_mindent-is-megtudhatnak-ezutan-a-nemzetbiztonsagi-szolgalatok-az-emberekrol

Open Letter: Commission Has Clear Legal Grounds to Pursue Hungary to Protect Free Speech and Privacy (15.05.2020)
https://www.liberties.eu/en/news/hu-ngo-open-letter-jourova-reynerds/19272

(Contribution by Christiana Mauro, EDRi observer)

27 May 2020

German Constitutional Court stops mass surveillance abroad

By Gesellschaft für Freiheitsrechte

The German Federal Intelligence Service (BND) has so far been able to spy on foreign citizens abroad en masse and without cause—even on sensitive groups such as journalists. In response, EDRi member Gesellschaft für Freiheitsrechte (GFF, Society for Civil Rights), alongside five media organizations, filed a constitutional complaint against the BND law that allowed this surveillance to occur. On May 19, 2020, the German Constitutional Court made clear that the BND may not carry out mass surveillance abroad, and is bound by the German Constitution (Basic Law) even as it relates to foreign citizens and cross-border communication.

With regard to their activities at home, German authorities—such as the BND—are naturally bound by the Basic Law, the constitution of the Federal Republic of Germany. When acting abroad, however, a 2017 change in the law allowed the BND to act with seemingly limitless power; the BND could monitor the telecommunications of foreigners abroad without any limits or specific restrictions. Within Germany’s own borders, such surveillance is a clear violation of Article 10 of the Basic Law, which protects the freedom of communications. Yet the 2017 BND law assumed that the secret service, when acting outside of German territory, is not bound by the Basic Law.

The BND law thus created considerable risks for foreign journalists who rely on trust and confidentiality when communicating with their sources. In response to the significant threats to constitutional rights created by the BND law, several journalists—supported by the GFF and partner organizations—filed a complaint in the German Constitutional Court (Bundesverfassungsgericht). This complaint led to a landmark decision regarding the protection of fundamental rights and freedom of the press.

New standards for the work of the BND

The ruling of the Constitutional Court is of fundamental importance: it definitively establishes that German authorities are required to protect the fundamental rights contained in the Basic Law abroad.

“This statement was long overdue and is a great success that goes far beyond this specific case,” says Ulf Buermeyer, Chairman of GFF. “The fact that German authorities are also bound by the Basic Law abroad considerably strengthens human rights worldwide—as well as Germany’s credibility in the world”.

According to the Constitutional Court’s interpretation of the Basic Law, monitoring communications abroad without cause is only permissible in very limited circumstances. In addition, vulnerable groups of people such as journalists must be given special protection. The targeted surveillance of individuals must be subject to stricter limitations. The court also noted that the BND’s surveillance practices should be monitored by financially independent counsels.

This decision sends an international signal

For the first time in over 20 years the Federal Constitutional Court has issued a decision regarding BND surveillance. The Court’s ruling is a landmark decision with international significance. In 2013, Edward Snowden’s NSA disclosures revealed a global system of mass surveillance, in which Germany—particularly the BND—participated. Now, more than seven years after the NSA revelations, Germany’s highest court has ruled that international surveillance must also be in accordance with the German Basic Law. This ruling sends an international signal and could affect the surveillance activities of other countries’ intelligence services.

Read more:

In their current form, surveillance powers of the Federal Intelligence Service regarding foreign telecommunications violate fundamental rights of the Basic Law (19.05.20)
https://www.bundesverfassungsgericht.de/SharedDocs/Pressemitteilungen/EN/2020/bvg20-037.html

We have filed a lawsuit against the BND law – No Trust No News
https://notrustnonews.org/?lang=en

BND law (06.11.16)
https://freiheitsrechte.org/bnd-law/

About GFF
https://freiheitsrechte.org/english/

(Contribution by Gesellschaft für Freiheitsrechte, EDRi member from Germany)

27 May 2020

France: First victory against police drones

By La Quadrature du Net

Since the beginning of the COVID-19 crisis, the French police have been using drones to watch people and make sure they respect the lockdown. Drones had been used before by the police for the surveillance of protests, but the COVID-19 crisis represented a change of scale: all over France, hundreds of drones have been used to broadcast audio messages with sanitary instructions, but also to monitor and capture images of people in the street who may or may not be respecting the lockdown rules.

On 4 May 2020, EDRi observer La Quadrature du Net (LQDN) and its ally La Ligue des Droits de l’Homme used information published by the newspaper Mediapart to file a lawsuit against the Paris police, aiming to force them to stop using drones for surveillance. They based their appeal in particular on the absence of any legal framework governing the use of images captured by these drones.

On 18 May 2020, the Conseil d’État, France’s highest administrative court, issued its decision on the case. The decision makes it illegal to fly any camera-equipped drone low enough for the police to identify individuals by their clothing or a distinctive sign. This is a major victory against drone surveillance.

According to the Conseil d’État, only a ministerial decree reviewed by the CNIL (National Commission on Informatics and Liberty) could allow the police to use such drones. As long as no such decree has been issued, the French police can no longer use their drones. Notably, the decision was issued in the context of the COVID-19 health crisis, a far more important purpose than those usually invoked by the police to deploy drones, so its reasoning should apply all the more to ordinary drone surveillance.

This action was part of the Technopolice campaign developed by La Quadrature du Net. Other devices are still being used without a legal framework: automated CCTV, sound sensors, predictive policing and more. With Technopolice, LQDN aims to collectively highlight and combat the deployment of new police technologies that lack the necessary legal safeguards. This decision proves they are on the right track.

Read more:

La Quadrature du Net and La Ligue des Droits de l’Homme public letter (18.05.20)
https://www.laquadrature.net/wp-content/uploads/sites/8/2020/05/440442-440445-quadrature-du-net-et-ldh.pdf

French Covid-19 Drones Grounded After Privacy Complaint (18.05.20)
https://www.bloomberg.com/news/articles/2020-05-18/paris-police-drones-banned-from-spying-on-virus-violators

Why COVID-19 is a Crisis for Digital Rights (29.04.20)
https://edri.org/why-covid-19-is-a-crisis-for-digital-rights

Strategic litigation against civil rights violations in police laws (24.04.19)
https://edri.org/strategic-litigation-against-civil-rights-violations-in-police-laws/

Data retention: “National security” is not a blank cheque (29.01.20)
https://edri.org/data-retention-national-security-is-not-a-blank-cheque

(Contribution by Martin Drago, La Quadrature du Net)

25 May 2020

Open Letter: EDRi urges enforcement and action for the second anniversary of the GDPR

By EDRi

On 25 May 2020, the second anniversary of the General Data Protection Regulation (GDPR), EDRi sent a letter to Executive Vice-President Jourová and Commissioner Reynders to highlight the GDPR’s vast enforcement gap and urge action to tackle it.

EDRi and its members widely welcomed the increased protections and rights enshrined in the GDPR. Two years later, we call for urgent action by the European Commission, the European Data Protection Board (EDPB) and the national data protection authorities (DPAs) to ensure strong enforcement and implementation of the GDPR and to make these rights a reality.

EDRi is especially concerned by the way many Member States have implemented the GDPR and by the misuse of the GDPR by some DPAs. Finally, while we urge the European Commission not to reopen the GDPR, we highlight the need for complementary and supporting legislation, such as the upcoming Digital Services Act (DSA) and a strong and clear ePrivacy Regulation.

You can read the letter here (PDF) and below:

Dear Executive Vice-President Jourová,
Dear Commissioner Reynders,

European Digital Rights (EDRi) is an umbrella organisation of 44 NGO members, with representation in 19 countries, that promotes and defends fundamental rights in the digital environment.

For the second anniversary of the GDPR’s entry into application, we wish to highlight the vast enforcement gap and urge action to tackle it. The GDPR was designed to address information and power asymmetries between individuals and the entities that process their data, and to empower people to control that data. Two years since it was introduced, this is unfortunately still not the case. Effectiveness and enforcement are two pillars of EU data protection legislation in which national data protection authorities (DPAs) have a crucial role to play.

“Business as usual” must urgently be brought to an end

In our experience as the EDRi network, we have observed numerous infringements of the very principles of the GDPR, yet controllers are not being sufficiently held to account. The most striking infringements include:

  • Abuse of consent

Consent to processing data for marketing purposes is notoriously obtained through deceptive design (“dark patterns”) [1], bundled into terms of service, or forced on individuals under economic pressure, and is then used to “legitimise” unnecessary and invasive forms of data processing, including profiling based on sensitive data. Two years into the GDPR, internet platforms and other companies that rely on monetising information about people still conduct “business as usual”, and users’ weaknesses and vulnerabilities continue to be exploited. Our members have also found that the data minimisation principle is often not fully enforced in the Member States, leading to abusive collection of personal data by both private and public entities [2].

  • Failure to provide access to behavioural profiles

While internet platforms generate ever more profit from monetising knowledge about people’s behaviour, they are notorious for ignoring the fact that observations and inferences made about users are personal data too, and are subject to all safeguards under the GDPR. However, individuals still do not have access to their full behavioural profiles or to effective means of controlling them. These infringements not only further exacerbate the opacity of the online data ecosystem but also constitute a major obstacle to the effective exercise of data subjects’ rights, undermining the protection afforded by the Regulation and citizens’ trust in the EU to protect their fundamental rights.

Please see the following articles for further elaboration of this problem:

“Uncovering the Hidden Data Ecosystem” by Privacy International; “Your digital identity has three layers, and you can only protect one of them” by Panoptykon Foundation.

Urgent action by DPAs is needed to make the protections in GDPR a reality

Many national DPAs do not have the financial and technical capacity to effectively tackle cases against big online companies. They should therefore be properly equipped with resources, staff, technical knowledge and IT specialists, and they must use these to take action. In this regard, we urge the European Commission to start infringement procedures against Member States that do not provide their DPAs with sufficient resources.

Moreover, our experience as a network, gained through GDPR and AdTech complaints [3], illustrates the urgent need for enforcement, as well as problems with a lack of coordination, a slow pace and a sometimes evasive approach on the part of national DPAs.

Please see the following materials for further elaboration of this problem: “Response to the roadmap of the European Commission’s report on the GDPR” by Open Rights Group, Panoptykon Foundation and Liberties EU, and “Two years under the GDPR” by Access Now.

The role of the European Commission and of the European Data Protection Board (EDPB) in applying the cooperation and consistency mechanisms is crucial. The EDPB is an essential forum for DPAs to exchange relevant information regarding enforcement of the GDPR. While we understand that not every aspect of the one-stop-shop mechanism is handled at the EDPB level, cooperation between DPAs is essential to complete procedures and handle complaints appropriately and promptly, in order to offer individuals effective redress, particularly in cross-border cases.

Furthermore, full transparency should be afforded to the complainant, including information on the investigation conducted by the DPAs, copies of the reports, and the possibility to take part in the proceedings where appropriate.

Where necessary, we urge DPAs to consider invoking Article 66 of the GDPR to trigger the urgency procedure and adopt temporary measures, or to compel other authorities to act where there is an urgent need to do so. We regret that this possibility has not yet been explored.

Derogations by Member States and misuse by DPAs

EDRi is deeply concerned by the way most Member States have implemented the derogations, undermining the GDPR’s protections, and by the misuse of the GDPR by some DPAs.

Please see Access Now’s 2019 report “One year under the GDPR” for more details.

Our concerns relate to the introduction of wide, over-arching exemptions under Article 23, which remove the protections of the GDPR from vast amounts of processing, with consequences for people’s rights [4]. Moreover, Member States have been stretching the interpretation of the conditions set out in Article 6 and introducing broad conditions for processing special categories of personal data under Article 9 that are open to exploitation, including loopholes that can be abused by political parties [5].

The majority of Member States also decided not to implement the provision in Article 80(2) of the GDPR allowing for collective complaints. Many of the infringements we see are systemic, vast in scale and complex, yet without Article 80(2) there is no effective redress in place, since only individuals, and not associations acting independently, are able to lodge complaints.

Moreover, there are serious concerns about the political independence of DPAs in some countries. In Slovakia [6], Hungary [7] and Romania [8], DPAs are abusing the law to go after journalists and/or NGOs. In Poland, the DPA has presented interpretations of the GDPR that support the government’s agenda [9]. Such interpretations are not only incorrect but risk being political and undermining the GDPR, as they give the false impression that the law infringes on free expression and media freedom. Disparities in the (lack of) implementation of Article 85 are also concerning [10].

Need for complementary and supporting legislation

The GDPR does not and cannot operate in a silo. Just as the right to data protection interacts with other rights, it is essential that other legal frameworks bolster the protections of the GDPR. We urge the Commission not to reopen the GDPR, but we emphasise the need for complementary and supporting legislation [11].

Decisions affecting individuals that rely on algorithms or AI but are not fully automated, or are not based on personal data, are not covered by Article 22 GDPR, despite being potentially harmful. To address this gap, some of our members highlight the need for complementary, comprehensive legislation on such decisions.

Moreover, the upcoming Digital Services Act (DSA) is an opportunity for the European Union to make the changes necessary to fix some of the worst outcomes of the advertisement-driven and privacy-invading economy, including the lack of transparency of users’ marketing profiles and the lack of user control over their data in the context of profiling and targeted advertising.

Finally, as EDRi and our members have repeatedly stated [12], we believe that a strong and clear ePrivacy Regulation is urgently needed to further advance Europe’s global leadership in the creation of a healthy digital environment, providing strong protections for citizens, their fundamental rights and our societal values.

In May 2018, EDRi and our members widely and warmly welcomed the increased protections and rights enshrined in the GDPR. Now, two years on, we call on the European Commission, the EDPB and DPAs to move forward with the enforcement and implementation of the GDPR to make these rights a reality.

Footnotes

  1. Please see the “Deceived by design” report by the Norwegian Consumer Council for examples of this practice.
  2. See for example Xnet’s report on Privacy and Data Protection against Institutionalised Abuses in Spain.
  3. See our members’ complaints: https://privacyinternational.org/legal-action/challenge-hidden-data-ecosystem; https://noyb.eu/en/projects; https://en.panoptykon.org/complaints-google-iab; https://www.openrightsgroup.org/campaigns/adtech-data-protection-complaint
  4. A deeply concerning example is the immigration exemption introduced in the UK’s Data Protection Act 2018. See also the Homo Digitalis complaint regarding Greek Law 4624/2019: https://www.homodigitalis.gr/en/posts/4603
  5. See for example https://edri.org/apti-submits-complaint-on-romanian-gdpr-implementation/
  6. See https://www.europarl.europa.eu/doceo/document/E-9-2020-001520_EN.html
  7. See https://ipi.media/court-orders-recall-of-forbes-hungary-following-gdpr-complaint/
  8. See https://www.gdprtoday.org/gdpr-misuse-in-romania-independence-of-dpa-and-transparency-keywords-or-buzzwords/
  9. See https://edpb.europa.eu/news/news/2020/edpb-adopts-letter-polish-presidential-elections-data-disclosure-discusses-recent_sv
  10. See for example https://xnet-x.net/en/complaints-ec-data-protection-spanish-legislation/
  11. See part III of the report “Who (really) targets you? Facebook in Polish election campaigns” by Panoptykon Foundation (https://panoptykon.org/political-ads-report) for specific recommendations on changes that should be introduced in the Digital Services Act.
  12. See https://edri.org/open-letter-to-eu-member-states-deliver-eprivacy-now/