Civil society statement: Council risks failing human rights in the AI Act
In the run-up to the EU Artificial Intelligence Act (AI Act) trilogue negotiation on 6 December, 16 civil society organisations are urging representatives of the Council of the European Union to effectively regulate the use of AI systems by law enforcement, migration control and national security authorities throughout Europe.
Increasingly, in Europe and around the world, AI systems are developed and deployed for harmful and discriminatory forms of state surveillance. AI in law enforcement disproportionately targets already marginalised communities, undermines legal and procedural rights, and enables mass surveillance. Without meaningful regulation of the use of AI in law enforcement, the AI Act will not fulfil its promise to put people’s safety first, and it will fail human rights at large.
To set a high global standard for human rights-centred AI regulation, ensure public trust in AI and safeguard human rights in the face of this fast-developing technology, EU Member States must change course. To achieve this, civil society is calling on the Council to ensure that the AI Act includes:
- a full ban on remote biometric identification in publicly accessible spaces, as well as other unacceptable uses of AI in law enforcement
- no arbitrary exemptions for national security, migration control and law enforcement
- full public accountability for uses of the most ‘high-risk’ AI systems