Celebrating a strong European Parliament stance on AI in law enforcement

By EDRi · October 6, 2021

On 5 October, following a significant push from across civil society, the European Parliament voted to adopt an important new report on Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters by a promising majority of 377 votes in favour to 248 against. This followed an earlier, tense vote in which a majority of MEPs opposed all four attempts from the European People’s Party (EPP) to remove key fundamental rights provisions from the report.

The report on Artificial intelligence in criminal law is what’s known as an “INI”, or own-initiative, report, which means that the Parliament decided to put forward its position on uses of AI in criminal law and justice without this being legally binding. With MEP Petar Vitanov of the Socialists and Democrats (S&D) group as rapporteur, the Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) adopted its version of the report in September, with support from across the political spectrum, the notable exception being the EPP group.

This report has taken what many have seen as the most progressive steps of the Parliament so far in the AI debates – and perhaps some of the most progressive in global AI policy – to recognise elements of structural injustice that can be exacerbated by AI, to call for a ban on the use of predictive policing (one of EDRi’s central ‘red lines’), and – a key demand of EDRi’s Reclaim Your Face campaign – to call for a ban on biometric mass surveillance. It also calls for an end to the funding of research and development programmes that are likely to result in mass surveillance.

Some highlights from the report:

  • Discrimination: Recognises the likelihood that biases in AI systems are ‘inclined to gradually increase and thereby perpetuate and amplify existing discrimination, in particular for persons belonging to certain ethnic groups or racialised communities’;
  • Impact assessments: Calls for compulsory fundamental rights impact assessments to be conducted prior to the implementation or deployment of any AI systems for law enforcement or the judiciary, in order to assess any potential risks to fundamental rights;
  • Transparency: Calls on law enforcement and judicial authorities ‘to inform the public and provide sufficient transparency as to their use of AI and related technologies when implementing their powers’;
  • Predictive policing: Opposes the use of AI by law enforcement authorities ‘to make behavioural predictions on individuals or groups on the basis of historical data and past behaviour, group membership, location, or any other such characteristics, thereby attempting to identify people likely to commit a crime’;
  • A ban on biometric mass surveillance: The report urges the European Commission to implement ‘a ban on any processing of biometric data, including facial images, for law enforcement purposes that leads to mass surveillance in publicly accessible spaces’. It also calls for ‘a moratorium on the deployment of facial recognition systems for law enforcement purposes that have the function of identification’, coupled with the ‘permanent prohibition of the use of automated analysis and/or recognition in publicly accessible spaces of other human features, such as gait, fingerprints, DNA, voice, and other biometric and behavioural signals’. Lastly, it urges an end to the use of private databases (like Clearview AI), and an end to EU funding for mass surveillance projects. More information on this part of the report will be available via the Reclaim Your Face campaign’s statement on this historic success.

This report was a litmus test of the ability to get broad parliamentary support for EDRi’s positions. Before the Plenary (entire European Parliament) vote, the European People’s Party (EPP) group put forward four proposed amendments which would have significantly weakened some of the strongest parts of LIBE’s report. Using securitisation narratives, the EPP sought to preserve law enforcement’s ability to adopt predictive policing and biometric mass surveillance practices. However, a majority of MEPs rejected this attempt to weaken the report’s fundamental rights safeguards.

In what is being hailed as an historic moment for the EU, MEPs took note of the call from EDRi and 41 other civil society organisations to maintain a strong report and reject changes, and stood up for fundamental rights by opposing the amendments which had sought to permit predictive policing and biometric mass surveillance by law enforcement.

Whilst the report does not directly become EU law, it gives a very positive indication that the Parliament will take a strong line when it comes to issues of predictive policing and biometric mass surveillance. These issues are critical parts of the future EU Artificial Intelligence Act – the law currently being debated within the Parliament and the EU Council – and so this new report gives us hope that, as EDRi has called for, the EU really will put human rights and democracy at the front and centre of its AI rules.

The full text of the report is available here. The report as adopted in the LIBE committee is available here.

Image credit: Rozalina Burkova/ The Greats (CC BY-NC-SA 4.0)

Contribution by:

Ella Jakubowska

Policy Advisor

Twitter: @ellajakubowska1

Sarah Chander

Senior Policy Advisor

Twitter: @sarahchander