EU Parliament calls for a ban on public facial recognition, but leaves human rights gaps in its final position on the AI Act
The final EU Parliament position upholds all of the fundamental rights demands that were added at committee level. Despite efforts to overturn it, the final position also maintains the committees' strong stance against biometric mass surveillance practices. But it is disappointing that the plenary missed the opportunity to strengthen protections for people affected by the use of AI and for the rights of migrants, refugees and asylum seekers.
Today, June 14, the European Parliament plenary voted in favour of strong fundamental rights protections in their official position on the Artificial Intelligence Act, including maintaining positive steps on fundamental rights impact assessments and transparency requirements.
The vote also upheld red lines against unacceptably harmful uses of AI, including decisively protecting people against live facial recognition and other biometric surveillance in public spaces, emotion recognition in key sectors, biometric categorisation, predictive policing and social scoring.
This is a critical time for AI regulation globally, and the EU Parliament’s final position is in many ways a win for fundamental rights.
Work on the EU AI Act started in 2020, and EDRi’s network and partners have been urging EU lawmakers to prioritise fundamental rights and put people before profits from the beginning.
MEPs choose freedom and democracy over biometric surveillance dystopia
In a historic step, EU Parliamentarians have listened to the evidence, ensuring that all live uses, and most retrospective uses, of remote biometric identification (RBI) systems in public spaces are prohibited in their text.
This puts the preservation of free expression, assembly, and non-discrimination in public spaces in a strong position going into trilogues (negotiations) with the Council of the European Union, which represents the Member State governments.
The Parliament also voted to ban biometric categorisation on the basis of sensitive characteristics such as perceived sexuality, gender, race or ethnicity, as well as emotion recognition in education, in workplaces, in policing and at borders. These prohibitions are just as important for preventing discrimination and protecting human rights as the bans on RBI.
This was the culmination of years of work by a diverse group of 80 civil society organisations in the Reclaim Your Face campaign. We will continue to fight for full protection from all retrospective RBI, all emotion recognition and all automated behavioural detection in public spaces.
Despite an aggressive last-minute push from the centre-right EPP group to overturn the committee agreement on biometric surveillance, the MEPs showed that they heard the voices of over 250,000 people across Europe who want to keep our public spaces free of facial recognition and other biometric mass surveillance systems.
This is the clearest signal yet that the European Parliament has put the protection of all our rights to live freely and with dignity in public spaces above private profits and false claims of ‘security’.
No improvements on empowering people and rights for people on the move
Yet again, the European Parliament has failed to introduce new provisions that would protect migrants from ever-increasing regimes of discriminatory surveillance, even though AI systems are increasingly developed to track, control and monitor people on the move in new and harmful ways.
The list of prohibited practices that was adopted today does not include the use of AI to facilitate illegal pushbacks, or to profile people on the move in a discriminatory manner. Without these prohibitions, the European Parliament reinforces the panopticon at the EU’s borders.
Unfortunately, the European Parliament’s support for people’s rights stops short of protecting migrants from AI harms, including where AI is used to facilitate pushbacks. The EU is creating a two-tiered AI regulation, with migrants receiving lesser protections than the rest of society.
At committee level, the European Parliament took significant steps to empower people affected by the use of AI systems, including a requirement to provide explanations to people affected by AI-based decisions or outcomes, and the right to complain when AI systems violate their rights.
However, the Parliament did not extend these rights and mechanisms in today’s vote. MEPs voted against extending the right to an explanation to all AI systems (not just high-risk uses) and against giving public interest organisations the right to bring complaints when AI systems do not comply with the Regulation.
The European Parliament missed a crucial opportunity to extend the framework of protections for AI harms. In particular for the most marginalised, direct complaints from public interest organisations would have been an important step to fill accountability gaps.
The three-way negotiations on the final text between the European Parliament, the Commission and the Member States will begin immediately after today’s vote. Negotiations on the Regulation are expected to be finalised by the end of the year, with the aim of passing the law ahead of the European Parliament elections in June 2024. Our broad civil society coalition will continue to centre people’s rights in these negotiations.