Civil society calls for the EU AI Act to better protect people on the move

In this open letter, 195 organisations and individuals call on the EU to protect people on the move. As the European Parliament amends the Artificial Intelligence Act (AI Act), preventing AI harms in the context of migration is vital. AI systems are increasingly developed, tested and deployed to judge and control migrants and people on the move in harmful ways.

By EDRi · December 6, 2022

From AI lie detectors and AI risk profiling systems used to assess the likelihood of ‘illegal’ movement, to the rapidly expanding tech-surveillance complex at Europe’s borders, AI systems are increasingly a feature of migration management in the EU.

195 organisations and individuals, led by EDRi, Access Now, Refugee Law Lab and PICUM, have signed our open letter calling on the EU to make significant changes to the EU AI Act to better address the harms of AI used in the context of migration. As the European Parliament heads toward its position on the AI Act, we urge lawmakers to recognise how these AI technologies fit into a wider system of over-surveillance, discrimination and violence.

Read the open letter

How do these systems affect people?

In the migration context, AI is used to make predictions, assessments and evaluations about people and their migration claims. Of particular concern is the use of AI to assess whether people on the move present a ‘risk’ of illegal activity or security threats. AI systems in this space are inherently discriminatory, pre-judging people on the basis of factors outside of their control. Along with AI lie detectors, polygraphs and emotion recognition, we see how AI is being used and developed within a broader framework of racialised suspicion against migrants.

Not only can AI systems cause these severe harms to people on the move individually, they also form part of a broader surveillance ecosystem increasingly developed at and within Europe’s borders. Racialised people and migrants are increasingly over-surveilled, targeted, detained and criminalised through EU and national policies. Technological systems form part of those infrastructures of control.

Regulating Migration Technology: What needs to change?

In April 2021, the European Commission launched its legislative proposal to regulate AI in the European Union.

Crucially, the proposal does not prohibit some of the sharpest and most harmful uses of AI in migration control, despite the significant power imbalance that these systems exacerbate. The proposal also includes a carve-out for AI systems that form part of large-scale EU IT systems, such as EURODAC. This is a harmful development, meaning that the EU itself will largely escape scrutiny for its use of AI in its migration databases.

In many ways, the minimal technical checks required of (a limited set of) high-risk systems in migration control could be seen as enabling these opaque, discriminatory surveillance systems, rather than providing meaningful safeguards for the people subject to them.

The proposal makes no reference to predictive analytic systems in the migration context, nor to the generalised surveillance technologies deployed at borders, in particular those that do not make decisions about, or identify, natural persons. Systems that cause harm in the migration context in more systemic ways therefore appear to have been overlooked entirely.

Amendments: How can the EU AI Act better protect people on the move?

Civil society has been working to develop amendments to the AI Act to better protect against these harms in the migration context. As we have highlighted more broadly, EU institutions still have a long way to go to make the AI Act a vehicle for genuine protection of people’s fundamental rights, especially for marginalised groups.

The AI Act must be updated in four main ways to address AI-related harms in the migration context:

  1. Update the AI Act’s prohibited AI practices (Article 5) to include ‘unacceptable uses’ of AI systems in the context of migration. This should include prohibitions on: AI-based individual risk assessment and profiling systems in the migration context that draw on personal and sensitive data; AI polygraphs in the migration context; predictive analytic systems when used to interdict, curtail and prevent migration; and a full prohibition on remote biometric identification and categorisation in public spaces, including in border and migration control settings.
  2. Expand the ‘high-risk’ use cases to cover AI systems in migration control that require clear oversight and accountability measures, including: all other AI-based risk assessments; predictive analytic systems used in migration, asylum and border control management; biometric identification systems; and AI systems used for monitoring and surveillance in border control.
  3. Amend Article 83 to ensure that AI systems forming part of large-scale EU IT databases fall within the scope of the AI Act, and that the necessary safeguards apply to uses of AI in the EU migration context.
  4. Ensure transparency and oversight measures apply.
    People affected by high-risk AI systems need to be able to understand, challenge and seek remedies when those systems violate their rights. The EU AI Act must include an obligation on users to conduct a fundamental rights impact assessment for high-risk systems, along with public transparency requirements and mechanisms for affected people to challenge harmful systems.

Drafted by: Access Now, European Digital Rights (EDRi), Platform for International Cooperation on Undocumented Migrants (PICUM), and the Refugee Law Lab. With the support of: Amnesty International, Avaaz, Border Violence Monitoring Network (BVMN), EuroMed Rights, European Center for Not-for-Profit Law (ECNL), European Network Against Racism (ENAR), Homo Digitalis, Privacy International, Statewatch, Dr Derya Ozkul, Dr Jan Tobias and Dr Niovi Vavoula.

Contribution by:

Sarah Chander

Senior Policy Advisor

Twitter: @sarahchander