As AI Act vote nears, the EU needs to draw a red line on racist surveillance

The EU Artificial Intelligence Act, commonly known as the AI Act, is the first of its kind. Not only will it be a landmark as the first binding legislation on AI in the world – it is also one of the first tech-focused laws to meaningfully address how technologies perpetuate structural racism.

By euronews (guest author) · April 25, 2023

From the racially discriminatory impact of predictive policing systems to the use of AI systems to falsely label (mostly racialised) people as fraudsters when claiming benefits, this legislation is deeply informed by a growing awareness of how technology can perpetuate harm.

Lawmakers have taken steps to ban certain uses of AI deemed “incompatible with fundamental rights”. This is the case for some uses of facial recognition to identify people in public places, and for AI systems used to predict where crimes may occur and who may commit them.

Negotiators in the European Parliament are, however, stopping short of recognising the specific harms that come when AI systems are used in the migration context.

AI as an immigration agent

From AI lie-detectors and AI “risk profiling” used in a multitude of immigration procedures to the rapidly expanding tech surveillance at Europe’s borders, AI systems are increasingly a feature of the EU’s approach to migration.

AI is used to make predictions and assessments about people in their migration claims based on opaque criteria that are hard to know and harder to challenge.

In Europe, there are already plans to use algorithms to assess the “risk profiles” of all visitors – in a context where there is evidence that visa decisions reflect histories of discrimination rooted in colonialism.

AI systems are also part of an ever-expanding, generalised surveillance apparatus. This includes AI for surveillance at the border and predictive analytic systems to forecast migration trends.

According to the Border Violence Monitoring Network, we are already seeing cases of AI technologies being used in pushbacks amounting to forced disappearances.

In this context, there is a real danger that seemingly innocuous forecasts about migration patterns will be used to facilitate pushbacks, pullbacks and other ways of preventing people from exercising their right to seek asylum.

Ethnicity and skin colour are becoming a proxy for immigration status

For people already in Europe, local police are harnessing technology in ways intended to increase the number of people identified as undocumented through police stops, perpetuating racial profiling.

In France, Germany, the Netherlands and Sweden, police have been given the power to fingerprint people they stop on the street to check their immigration status.

Local human rights organisations have challenged a Greek programme equipping police with smart devices to scan people’s faces with the aim of identifying undocumented people.

Given that authorities often use race, ethnicity and skin colour as a proxy for immigration status, these trends will lead to more stops of racialised people in our communities – citizens and non-citizens alike – along with the harassment and mistreatment such stops invite.

The growing use of AI in the immigration context exposes people of colour to more surveillance, more discriminatory decision-making in immigration claims, and more harmful profiling (by software and by humans), exacerbating trends toward criminalisation and dehumanisation in EU migration policy.

Digital rights for all – except migrants

These trends in AI deployments are part of a broader – and growing – surveillance ecosystem developed at and within Europe’s borders.

For example, in the proposed Eurodac reform, the EU is seeking to massively expand surveillance databases holding data on people who apply for asylum or who are apprehended at the border, extending them to include facial images and the data of children as young as six.

This is completely at odds with the standards set in the EU’s General Data Protection Regulation.

This is emblematic of a trend: the EU creates legislation that protects fundamental rights – and then ignores that legislation in its quest to seal its borders.

This exceptionalism costs lives and opens the way to the erosion of the EU’s own standards – and values. It is a double standard that should be called what it is: racism.

Ban harmful AI in migration

As underscored by Professor E. Tendayi Achiume, former UN Special Rapporteur on contemporary forms of racism, and by the strong civil society coalition #ProtectNotSurveil, we need a commitment to dismantling the technologies that perpetuate harm.

This means taking bold steps, including prohibiting some of the most harmful uses of AI: those used in migration contexts.

In practice, this means the EU AI Act must prohibit and prevent discriminatory AI profiling tools used in migration procedures, as well as AI-based forecasting used to facilitate pushbacks in violation of international refugee law. It must also ensure that biometric identification tools – such as police handheld devices, on which the Act is currently silent – are regulated as “high risk”.

The AI Act will not fix Europe’s deeply flawed migration policies. But it can ensure that AI technologies are regulated in a way that prevents further harm and that chooses equal protection over double standards.

This article was first published by euronews.

Contribution by: Sarah Chander, Senior Policy Advisor at EDRi, and Alyna Smith, Deputy Director of PICUM and EDRi board member.