Civil society calls on EU to protect people’s rights in the AI Act ‘trilogue’ negotiations
As EU institutions start decisive meetings on the Artificial Intelligence (AI) Act, a broad civil society coalition is urging them to prioritise people and fundamental rights in this landmark legislation.
A coalition of 150 civil society organisations is calling on the European Parliament, the European Commission and the Council of the EU to put people and their fundamental rights first in the AI Act as the institutions proceed to ‘trilogue’ negotiations. These decisive meetings will determine the final legislation and how far it centres human rights and the concerns of people who could be affected by ‘risky’ AI systems.
AI systems are already having a far-reaching impact on our lives. They are increasingly being used to monitor and identify us in public spaces, predict our likelihood of criminality, redirect policing and immigration control towards already over-surveilled areas, facilitate violations of the right to claim asylum, and predict our emotions and categorise us. They are also used to make crucial decisions about us, for example who gets access to welfare schemes.
Without proper regulation, these systems will exacerbate existing societal harms: mass surveillance, structural discrimination, and the centralised power of large technology companies.
The AI Act is a crucial opportunity to regulate this technology and to prioritise people’s rights over profits. Through this legislation, the EU must ensure that AI development and use is accountable, publicly transparent, and that people are empowered to challenge harms:
- Empower affected people by upholding a framework of accountability, transparency, accessibility and redress
This includes requiring fundamental rights impact assessments before high-risk AI systems are deployed, registration of high-risk systems in a public database, horizontal and mainstreamed accessibility requirements for all AI systems, a right to lodge complaints when an AI system violates people’s rights, a right to representation, and a right to effective remedies.
- Limit harmful and discriminatory surveillance by national security, law enforcement and migration authorities
When AI systems are used for law enforcement, security and migration control, the risk of harm and of violations of fundamental rights is even greater, especially for already marginalised communities. Clear red lines are needed to prevent these harms, including bans on all forms of remote biometric identification, on predictive policing systems, and on individual risk assessment and predictive analytics systems in migration contexts.
- Push back on Big Tech lobbying and remove loopholes that undermine the regulation
For the AI Act to be effectively enforced, negotiators need to push back against Big Tech’s lobbying efforts to undermine the regulation. This is especially important for the risk classification of AI systems: classification must be objective and must not leave room for AI developers to decide for themselves whether their systems are ‘significant’ enough to count as high-risk and warrant legal scrutiny. Tech companies, with their profit-making incentives, will always be inclined to under-classify their own AI systems.