Artificial Intelligence and Fundamental Rights: Document Pool

Find in this document pool all EDRi analyses and documents related to Artificial Intelligence (AI) and fundamental rights.

By EDRi · April 12, 2021

In Europe and around the world, AI systems are used to monitor and control us in public spaces, predict our likelihood of future criminality, facilitate violations of the right to claim asylum, predict our emotions and categorise us, and to make crucial decisions that determine our access to public services, welfare, education and employment.

Without strong regulation, companies and governments will continue to use AI systems that exacerbate mass surveillance, structural discrimination, centralised power of large technology companies, unaccountable public decision-making and environmental damage.

EDRi therefore calls on the European Union, in its upcoming regulation on AI, to:

  • Empower affected people by upholding a framework of accountability, transparency, accessibility and redress

This includes requiring a fundamental rights impact assessment before deploying high-risk AI systems, registration of high-risk systems in a public database, horizontal and mainstreamed accessibility requirements for all AI systems, a right to lodge complaints when people’s rights are violated by an AI system, a right to representation, and rights to effective remedies.

  • Limit harmful and discriminatory surveillance by national security, law enforcement and migration authorities

When AI systems are used for law enforcement, security and migration control, there is an even greater risk of harm and violations of fundamental rights, especially for already marginalised communities. Clear red lines are needed to prevent these harms. This includes bans on all forms of remote biometric identification, predictive policing systems, and individual risk assessment and predictive analytics systems in migration contexts.

  • Push back on Big Tech lobbying and remove loopholes that undermine the regulation

For the AI Act to be effectively enforced, negotiators need to push back against Big Tech’s lobbying efforts to undermine the regulation. This is especially important for the risk classification of AI systems. The classification needs to be objective and must not leave room for AI developers to self-determine whether their systems are ‘significant’ enough to be classified as high-risk and subjected to legal scrutiny. Tech companies, with their profit-making incentives, will always want to under-classify their own AI systems.

1. EDRi analysis
2. EDRi member resources
3. EDRi articles, blogs and press releases
4. Legislative documents
5. Key dates (indicative)
6. Key policymakers
7. Other useful resources


1. EDRi analysis


2. EDRi member resources

3. EDRi articles, blogs and press releases

4. Legislative documents



European Commission

European Parliament

Council of the European Union



5. Key dates (indicative)

  • Trilogue dates – 18 July, 3 October & 26 October 2023
  • EU Parliament plenary vote – 14 June 2023
  • IMCO-LIBE Parliament committee vote – 11 May 2023
  • Launch of European Commission legislative proposal on artificial intelligence – 21 April 2021
  • Closing of the consultation on Artificial Intelligence – 14 June 2020
  • Publication of White Paper on Artificial Intelligence – 19 February 2020

6. Key policymakers

Artificial Intelligence Act (AIA) (IMCO – lead committee)

7. Other useful resources

This document was last updated on 9 August 2023