The European Parliament must go further to empower people in the AI act
Today, 21 April, POLITICO Europe published a leak of the much-anticipated draft report on the Artificial Intelligence (AI) Act proposal. The draft report has taken important steps towards a more people-focused approach, but it has failed to introduce crucial red lines and safeguards on the uses of AI, including ‘place-based’ predictive policing systems, remote biometric identification, emotion recognition, discriminatory or manipulative biometric categorisation, and uses of AI undermining the right to asylum.
“The Parliament is clearly looking to change the course of the AI Act toward a more people-centred approach. But the true marker of success will be a full suite of rights for people affected by harmful AI systems, and clear accountability requirements (like impact assessments) on those deploying risky AI.” – Sarah Chander, Senior Policy Adviser, EDRi
We need stronger bans to protect human rights
The negotiators made an important statement against harmful and discriminatory AI systems by prohibiting predictive policing systems. As EDRi and other partners have highlighted, predictive policing systems undermine the presumption of innocence, reinforce racial profiling and target already over-policed communities. Whilst including the ban on predictive policing is a positive step, the suggested prohibition would not include ‘place-based’ predictive policing systems. This means that EU negotiators have agreed to enable the use of AI systems to predict if crimes are likely to be committed in certain neighbourhoods.
“Prohibiting predictive policing is a landmark step in European digital policy – never before has data-driven racism been so high on the EU’s agenda. But the predictive policing ban does not include “place-based” predictive policing systems, which can increase experiences of discriminatory policing for racialised and poor communities. The next test will be how the Parliament deals with the full range of oppressive AI systems, including the technologies at the centre of Europe’s border regime.” – Sarah Chander, Senior Policy Adviser, EDRi
The draft has also failed to address the use of harmful AI practices in migration, despite overwhelming evidence that people on the move are being profiled and discriminated against and their rights violated. It is crucial that legislators also prohibit some of the most dangerous uses of AI in migration and asylum contexts, including discriminatory risk assessments and the use of predictive systems to prevent migration. The high-risk technological experiments used for immigration enforcement exacerbate systemic racism and discrimination and can lead to significant harm within an already discretionary system.
Accountability for deployers of risky AI
EDRi has successfully pushed for increased obligations on “users” of high-risk AI systems. Under the draft report, deployers will have to inform people if they are affected by high-risk AI, and public authorities will have to register deployments on the AI database.
However, for real accountability, negotiators must ensure that both public and private users complete and publish fundamental rights impact assessments, detailing the impact of systems in the context of their use. Similarly, the AI Act must do more to empower people affected by AI systems. While the draft ensures that people are informed and have ways to complain if their rights are violated, it includes neither a right to an explanation of AI systems nor a right not to be subject to a prohibited AI system.
Biometrics must receive more protections in the AI Act
The draft report does not tackle remote biometric identification (RBI); however, there are solid reasons to believe that the European Parliament’s upcoming position will take a stronger stance against RBI in publicly-accessible spaces. For example, many shadow Rapporteurs in IMCO-LIBE support a full ban, and S&D co-lead Rapporteur Brando Benifei has taken a strong public position in favour of banning RBI in publicly-accessible spaces. Nevertheless, the report does propose a new definition of “biometrics-based data”. This reflects an important point raised by civil society: harmful biometric categorisation and emotion recognition practices may not always uniquely identify people, but the data involved should be protected just as strongly as biometric data, given the high risks to fundamental rights.
“Living in a democracy means we should not be surveilled, monitored or judged by dubious technologies based on how we look. The draft report has not yet guaranteed us this basic freedom, but we remain confident that negotiators will introduce this protection in the coming months.” – Ella Jakubowska, Policy Adviser, EDRi
Biometric categorisation and emotion recognition systems have not been prohibited in the draft. This keeps the door open for discrimination and invasion of people’s privacy as the nature of these systems is to put people in arbitrary boxes based on how they look, walk, or talk – or even how they think.
In order to be useful for people affected by high-risk AI systems, the AI act must give people legal rights and introduce ways for them to understand, challenge, and complain to authorities about AI systems affecting their daily lives. Empowering people will be the ultimate marker of a successful AI regulation.
This draft report is the next step in the legislative process after the European Commission proposed the AI Act a year ago. Now, it will be discussed in the European Parliament and then negotiated with the Member States before the text is adopted and becomes law.
There is still significant scope for change in the European Parliament’s position. We look forward to working with Members of the European Parliament to improve the draft report to meet the highest standards for AI, as called for by 123 civil society groups back in November 2021.