This week, MEPs recognised the dangers of certain uses of Artificial Intelligence (AI) in criminal justice. A strong majority rallied around the landmark AI in criminal matters report by the European Parliament’s Civil Liberties, Justice and Home Affairs (LIBE) committee, which opposes AI that ‘predicts’ criminal behaviour and calls for a ban on biometric surveillance.
And yet when it comes to Europol – the EU agency for police cooperation – we are facing an astounding silence from the majority in that same LIBE committee. As things stand, the Parliament looks set to vote on Thursday (21 October) on a European Commission proposal to further extend Europol’s ever-increasing powers (the latest expansion came only in 2019).
Effectively, this would give Europol a blank cheque to use and further develop high-risk artificial intelligence and data analytics tools.
This appears to be in complete contradiction to the committee’s own position on the use of AI in criminal matters.
If the proposal is accepted on Thursday, the Parliament will be waving through the European Commission’s proposed revision to Europol’s mandate, which aims to reinforce a data-driven model of policing. In doing so, the Parliament will fail in its duty and responsibility to protect the fundamental rights of people across Europe.
Europol already collects and holds a vast amount of extremely sensitive personal data in its databases and information systems – in 2020 available figures showed the Europol Information System contained “1,300m+ objects” and “~250,000 suspects of serious crime and terrorism”.
The agency also uses a number of advanced data analytics and machine-learning systems on this data and is in the process of developing them further.
The text the Parliament would be backing would further increase Europol’s focus on the collection, analysis and use of data to support national law enforcement authorities. Recent cases such as the EncroChat and Sky ECC investigations have shown that Europol already plays a key role in gathering, analysing, and processing seized encrypted communications.
Reduced regulation, limited oversight
The proposed revision would also increase the agency’s operational powers and effectively give Europol the ability to develop AI tools to analyse the data that it collects and to supply national law enforcement authorities with automated decision-making and profiling tools.
These new competences would come with reduced regulation, limited oversight, and little to no accountability in respect of such ‘research and innovation’.
Why aren’t MEPs worried about this? Why is it that they’ve expressed their intent to protect our fundamental rights in the context of the AI file, but not in respect of Europol?
The LIBE Committee report on AI in criminal matters specifically highlights the risks of AI tools in the hands of law enforcement. There is nothing to suggest that Europol would be sheltered from such risks.
The report calls for “a ban on any processing of biometric data, including facial images, for law enforcement purposes that leads to mass surveillance in publicly accessible spaces”. It rightfully points out the danger of including historical racist data in AI training data sets, which inevitably leads to “racist bias in AI-generated findings, scores, and recommendations”.
Lastly, it demands “algorithmic explainability, transparency, traceability and verification as a necessary part of oversight”, to ensure compliance with fundamental rights and to secure citizens’ trust.
The same risks the LIBE report identifies for uses of AI by law enforcement apply to Europol. Europol’s use of AI and data analytics may engage and infringe fundamental rights, including the rights to a fair trial, privacy, and data protection.
Because the agency will use the data gathered by national police authorities, the very institutions that engage in discriminatory policing practices, it also poses serious risks of discrimination based on race, socio-economic status or class, and nationality.
Despite these risks, the commission’s proposed regulation of AI makes specific exemptions for Europol and the AI systems it uses. Europol’s use of AI must be subject to the same safeguards and oversight mechanisms that EU law provides for others.
Fair Trials, EDRi and other civil society organisations are calling on MEPs to hold true to their intention to protect our fundamental rights. We urge MEPs to vote against the revision of Europol’s mandate, which distinctly lacks meaningful accountability and safeguards.
The article was first published by EU Observer.