04 Jun 2020

EDRi submits response to the European Commission AI consultation – will you?


Today, 4th June 2020, European Digital Rights (EDRi) submitted its response to the European Commission’s public consultation on artificial intelligence (AI). In addition, EDRi released its recommendations for a fundamental rights-based Artificial Intelligence Regulation.

AI is a growing concern for all who care about digital and human rights. AI systems have the ability to exacerbate mass surveillance and intrusion into our personal lives, reflect and reinforce some of the deepest societal inequalities, fundamentally alter the delivery of public and essential services, undermine data protection legislation, and disrupt the democratic process.

In Europe, we have already seen the negative impacts of automated systems at play at the border, in predictive policing systems which only increase over-policing of racialised communities, in ‘fraud detection’ systems which target poor, working class and migrant areas, and countless more examples. Read more in our explainer.

Therefore, EDRi calls on the European Commission to set clear red-lines for impermissible uses, ensure democratic oversight, and include the strongest possible human rights protections.

We encourage all people, collectives and organisations to respond to the consultation and make sure these issues are addressed. Need help answering the consultation? Read EDRi’s answering guide for the public here.

Will you make your voice heard in a crucial moment for the future of our societies? Submit your own response to the consultation online here.

Read more:

EDRi Consultation response: European Commission consultation on Artificial Intelligence (04.06.2020)

EDRi Recommendations for a fundamental rights-based Artificial Intelligence Regulation: addressing collective harms, democratic oversight and impermissible use (04.06.2020)

EDRi Explainer AI and fundamental rights: How AI impacts marginalised groups, justice and equality (04.06.2020)

EDRi Answering Guide to the European Commission consultation on AI (04.06.2020)

04 Jun 2020

Can the EU make AI “trustworthy”? No – but they can make it just


Today, 4 June 2020, European Digital Rights (EDRi) submitted its answer to the European Commission’s consultation on the AI White Paper. Alongside our response, we published an additional paper outlining recommendations to the European Commission for a fundamental rights-based AI regulation. You can find our consultation response, recommendations paper, and answering guide for the public here.

How to ensure “trustworthy AI” has been hotly debated since the European Commission launched its White Paper on AI in February this year. Policymakers and industry have hosted numerous conversations about “innovation”, “Europe becoming a leader in AI”, and promoting “fair AI”.

Yet a “fair” or “trustworthy” artificial intelligence seems a long way off. As governments, institutions and industry swiftly move to incorporate AI into their systems and decision-making processes, grave concerns remain as to how these changes will impact people, democracy and society as a whole.

EDRi’s response outlines the main risks AI poses for people, communities and society, and outlines recommendations for an improved, truly ‘human-centric’ legislative proposal on AI. We argue that the EU must reinforce the protections already embedded in the General Data Protection Regulation (GDPR), outline clear legal limits for AI by focusing on impermissible use, and foreground principles of collective impact, democratic oversight, accountability, and fundamental rights. Here’s a summary of our main points.

Put people before industrial policy

A ‘human centric’ approach to AI requires that considerations of safety, equality, privacy, and fundamental rights are the primary factors underpinning decisions as to whether to promote or invest in AI.

However, the European Commission’s White Paper takes as its point of departure the inherent economic benefits of promoting AI, particularly in the public sector. Promoting AI across the public sector as a whole, without requiring scientific evidence to justify the need for or purpose of such applications in potentially harmful situations, is likely to have the most direct consequences on everyday people’s lives, particularly those of marginalised groups.

Despite wide-ranging applications that could advance our societies (such as some uses in the field of health), we have also seen the vast negative impacts of automated systems at play at the border, in predictive policing systems which exacerbate the over-policing of racialised communities, in ‘fraud detection’ systems which target poor, working class and migrant areas, and countless more examples (see our explainer). These examples highlight the potentially devastating consequences AI systems can have in the public sector, undermine the case for ‘promoting the uptake of AI’, and demonstrate the need for AI regulation rooted in a human-centric approach.

“The development of artificial intelligence technology offers huge potential opportunities for improving our economies and societies, but also extreme risks. Poorly designed and governed AI will exacerbate power imbalances and inequality, increase discrimination, invade privacy and undermine a whole host of other rights. EU legislation must ensure that cannot happen. Nobody’s rights should be sacrificed on the altar of innovation,”

said Chris Jones, Statewatch

Address collective harms of AI

The vast potential scale and impact of AI systems challenge existing conceptions of harm. Whilst in many ways we can view the challenges posed by AI as fundamental rights issues, the harms perpetrated are often much broader, disadvantaging communities, economies, democracy and entire societies. From the impending threat of mass surveillance as a result of biometric processing in publicly accessible spaces, to the use of automated systems or ‘upload filters’ to moderate content on social media, to severe disruptions to the democratic process, the impact goes far beyond the level of the individual. One specificity of regulating AI is the need to address societal-level harms.

Prevent harms by focusing on impermissible use

Just as the problems with AI are collective and structural, so must be the solutions. The European Commission’s White Paper outlines some safeguards to address ‘high-risk’ AI, such as training data to correct for bias and ensuring human oversight. Whilst these safeguards are crucial, they will not address the irreparable harms which will result from a number of uses of AI.

“The EU must move beyond technical fixes for the complex problems posed by AI. Instead, the upcoming AI regulation must determine the legal limits, impermissible uses or ‘red-lines’ for AI applications. This is a necessary step for a people-centred, fundamental rights-based AI,”

says Sarah Chander, Senior Policy Adviser, EDRi.

The EDRi network lists some of the impermissible uses of AI:

  • indiscriminate biometric surveillance and biometric capture and processing in public spaces [1]
  • use of AI to solely determine access to or delivery of essential public services (such as social security, policing, migration control)
  • uses of AI which purport to identify, analyse and assess emotion, mood, behaviour, and sensitive identity traits (such as race, disability) in the delivery of essential services
  • predictive policing
  • autonomous lethal weapons and other uses which identify targets for lethal force (such as law and immigration enforcement)

“The EU must ensure that states and companies meet their obligations and responsibilities to respect and promote human rights in the context of automated decision-making systems. EU institutions and national policymakers must explicitly recognise that there are legal limits to the use and impact of automation. No safeguard or remedy would make indiscriminate biometric surveillance or predictive policing acceptable, justified or compatible with human rights”

said Fanny Hidvegi, Europe Policy Manager at Access Now

Require democratic oversight for AI in the public sphere

The rapidly increasing deployment of AI systems presents a major governance issue. Due to the (designed) opacity of these systems, the complete lack of transparency from governments when such systems are deployed in public, essential functions, and the systematic lack of democratic oversight and engagement, AI is furthering the ‘power asymmetry between those who develop and employ AI technologies, and those who interact with and are subject to them’. [2]

As a result, decisions impacting public services will be more opaque, increasingly privately owned, and even less subject to democratic oversight. It is vital that the EU’s regulatory proposal on AI addresses this by implementing mandatory measures of democratic oversight for the procurement and deployment of AI in the public sector and essential services. Moreover, the EU must explore methods of direct public engagement on AI systems. In this regard, authorities should be required to specifically consult marginalised groups likely to be disproportionately impacted by automated systems.

Implement the strongest possible fundamental rights protections

Regulation on AI must reinforce, rather than replace, the protections already embedded in the General Data Protection Regulation (GDPR). The European Commission has the opportunity to complement these protections with safeguards for AI. To put people first and provide the strongest possible protections, all systems should undergo mandatory human rights impact assessments. These assessments should evaluate the collective, societal, institutional and governance implications a system poses, and outline adequate steps to mitigate them.

“The deployment of such systems for predictive purposes comes with high risks of human rights violations. Introducing ethical guidelines and standards for the design and deployment of these tools is welcome, but not enough. Instead, we need the European Union and Member States to ensure compliance with the applicable regulatory frameworks, and draw clear legal limits to ensure AI is always compatible with fundamental rights,”

says Eleftherios Chelioudakis, Homo Digitalis.

EDRi’s position calls for fundamental rights to be prioritised in the regulatory proposal for all AI systems, not only those categorised as ‘high-risk’. We argue that AI regulation should avoid creating loopholes or exemptions based on sector, size of enterprise, or whether or not the system is deployed in the public sector.

“It is crucial for the EU to recognize that the adoption of AI applications is not inevitable. The design, development and deployment of systems must be tested against human rights standards in order to establish their appropriate and acceptable use. Red lines are thus an important piece of the AI governance puzzle. Recognizing impermissible use at the outset is particularly important because of the disproportionate, unequal and sometimes irreversible ways in which automated decision making systems impact societies.”

said Vidushi Marda, Senior Programme Officer, at ARTICLE 19

The rapid uptake of AI will fundamentally change our society. From a human rights’ perspective, AI systems have the ability to exacerbate surveillance and intrusion into our personal lives, fundamentally alter the delivery of public and essential services, vastly undermine vital data protection legislation, and disrupt the democratic process.

For some, AI will mean reinforced, deeper harms as such systems feed and embed existing processes of marginalisation. For all, the route to remedies, accountability, and justice will be ever more unclear, as power further shifts to private actors, and public goods and services become not only automated, but privately owned.

There is no “trustworthy AI” without clear red-lines for impermissible use, democratic oversight, and a truly fundamental rights-based approach to AI regulation. The European Union’s upcoming legislative proposal on artificial intelligence is a major opportunity to change this: to protect people and democracy from the escalating economic, political and social harms posed by AI.


[1] EDRi (2020). ‘Ban Biometric Mass Surveillance: A set of fundamental rights demands for the European Commission and Member States’. https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf

[2] Council of Europe (2019). ‘Responsibility and AI’, DGI(2019)05, Rapporteur: Karen Yeung. https://rm.coe.int/responsability-and-ai-en/168097d9c5

Read more:

EDRi Consultation Response: European Commission Consultation on the White Paper on Artificial Intelligence

EDRi Recommendations for a Fundamental Rights-based Artificial Intelligence Regulation: Addressing collective harms, democratic oversight and impermissible use

Access Now Consultation Response: European Commission Consultation on the White Paper on Artificial Intelligence

Bits of Freedom (2020). ‘Facial recognition: A convenient and efficient solution, looking for a problem?’

EDRi (2020). ‘Ban Biometric Mass Surveillance: A set of fundamental rights demands for the European Commission and Member States’

Privacy International and Article 19 (2018). ‘Privacy and Freedom of Expression in the Age of Artificial Intelligence’

25 Mar 2020

Facial Recognition & Biometric Surveillance: Document Pool


At least 15 European countries have experimented with highly intrusive facial and biometric recognition systems for mass surveillance. The use of these systems can infringe on people’s right to conduct their daily lives in privacy and with respect for their fundamental freedoms. It can prevent them from participating fully in democratic activities, violate their right to equality and much more.

The gathering and use of biometric data for remote identification purposes, for instance through deployment of facial recognition in public places, carries specific risks for fundamental rights.

European Commission, White Paper on Artificial Intelligence

This has happened in the absence of proper public debate on what facial recognition means for our societies, how it amplifies existing inequalities and violations, and whether it fits with our conceptions of democracy, freedom, equality and social justice.

Considering the high risk of abuse, discrimination and violation of fundamental rights to privacy and data protection, the EU and its Member States must develop a strong, privacy-protective approach to all forms of biometric surveillance. In this document pool we will be listing relevant articles and documents related to the issue of facial and biometric recognition. This will allow you to follow the developments of surveillance measures and regulatory actions in Europe.

EDRi’s analysis and recommendations
EDRi members’ actions and reporting
EDRi’s blogposts and press releases
Guidance from data protection authorities
Key dates and official documents
Other useful resources

EDRi’s analysis and recommendations

Ban Biometric Mass Surveillance: A set of fundamental rights demands for the European Commission and EU Member States (13.05.2020)

EDRi members’ actions and reporting

EDRi’s blogposts and press releases

Guidance from data protection authorities

Pan-European authorities:

National authorities:

Key dates* and official documents

Other useful resources

* subject to change

19 Feb 2020

A human-centric internet for Europe


The European Union has set digital transformation as one of its key pillars for the next five years. New data-driven technologies, including Artificial Intelligence (AI), offer societal benefits – but addressing their potential risks to our democratic values, the rule of law, and fundamental rights must be a top priority.

“By driving a human rights-centric digital agenda Europe has the opportunity to continue being the leading voice on data protection and privacy,” said Diego Naranjo, Head of Policy at European Digital Rights (EDRi). “This means ensuring fundamental rights protections for personal data processing and digitalisation, and a regulatory framework for governing the full lifecycle of AI applications.”

The EU must proactively ensure that regulatory frameworks (such as the GDPR and the future ePrivacy Regulation) are implemented and enforced effectively. Where this doesn’t suffice, the EU and its Member States must ensure that the legislative ecosystem is “fit for the digital age”. This can be done by increasing the comprehensiveness (filling gaps and closing loopholes), clarity (clear interpretation), and transparency of EU and national rules. The principles of necessity and proportionality should always be front and centre whenever there is an interference with fundamental rights.

To deal with technological developments in a thorough way, in addition to data protection and privacy legislation, we need to take a look at other areas, such as competition rules and consumer law – including civil liability for harmful products or algorithms. Adopting a strong ePrivacy Regulation to ensure the privacy and confidentiality of our communications is also crucial.

From a fundamental rights perspective, one specific concern is the deployment of facial recognition technologies – whether AI-based or not.

“It is of utmost importance and urgency that the EU prevents the deployment of mass surveillance and identification technologies without fully understanding their impacts on people and their rights, and without ensuring that these systems are fully compliant with data protection and privacy law as well as all other fundamental rights,” said Naranjo.

Facial recognition and fundamental rights 101 (04.12.2020)

The human rights impacts of migration control technologies (12.02.2020)

A Human-Centric Digital Manifesto for Europe