The AI Act: EU’s chance to regulate harmful border technologies
The AI Act will be the first regional mechanism of its kind in the world, but it needs a serious update to meaningfully address the proliferation of harmful technologies tested and deployed at Europe’s borders.
Prison-like refugee camps replete with drone surveillance, biometrics, and artificial intelligence (AI) tools are springing up across the European Union, yet these technologies face very little scrutiny or regulation. The EU is now grappling with how to govern AI, but its proposed Regulation on Artificial Intelligence (the AI Act) needs to better protect refugees against high-risk border technologies. The deadline for changes to the Act is June 1, 2022. In their deliberations, Members of the European Parliament should not forget all the ways that border technologies harm people.
We have visited various high-tech refugee camps in Greece, seen securitised borders all over Europe, and spoken with hundreds of people who are at the sharpest edges of technological innovation. AI in migration contexts is increasingly used to make high-risk predictions, assessments, and evaluations.
Beyond airport AI lie detectors, polygraphs, and emotion recognition, our work shows how AI is being developed, tested, and deployed within a broader framework of racialised suspicion against people crossing borders, often in experimental ways. Many of these systems are inherently discriminatory, pre-judging people on factors outside of their control. The AI Act must prevent this.
Border technologies hurt people. These impacts are disproportionately felt by under-resourced and marginalised communities, who already have access to less robust human rights protections and fewer resources with which to defend those rights. The tech community – policy makers, developers, and critics alike – does not sufficiently engage with the need to push the conversation beyond reform and towards the abolition of technologies that hurt people, break up communities, separate families, and exacerbate the deep structural violence continually felt by Black, indigenous, racialised, LGBTQI+, disabled, and mobile communities.
The development of technology occurs in specific spaces that are not open to everyone, and the status quo determines which technologies should be developed and which communities serve as laboratories for high-risk experiments. Decisions around technologies like AI are made without consultation, without collective community response, and often without the consent of affected groups. These practices play out along lines of white supremacy and serve to exacerbate the disenfranchisement of anyone who does not fit in the upper echelons of power.
The EU has a unique opportunity to take a global lead on the regulation of AI – indeed, the AI Act will be the first regional mechanism of its kind in the world. However, the proposed legislation needs a serious update to meaningfully address the proliferation of harmful technologies tested and deployed at Europe’s borders. More than two dozen international migration experts are also calling on the European Parliament to implement significant changes.
In order to adequately protect the human rights of all people crossing borders, the AI Act must prohibit the use of individual risk assessment and profiling that uses personal and sensitive data; ban AI lie detectors in the migration context; prohibit the use of predictive analytics where it is used to facilitate pushbacks; and ban remote biometric identification and categorisation in public spaces, including in border and migration control. In the ‘high-risk’ category, the legislation must include the range of surveillance technologies developed specifically to watch and control people travelling to seek asylum. The Act also needs stronger oversight and accountability measures that recognise how inappropriate data sharing can undermine people’s fundamental rights to mobility and asylum.
Technology is never neutral. It reflects norms, values, and power in society. As the AI Act winds itself through the EU’s regulatory machine, people seeking refuge should be front of mind.
Like Asal, a young mother from Afghanistan. As Asal and her family faced a transfer to a high-tech refugee camp on the Greek island of Samos, she was scared: “If we go there, we will go crazy.”
This article was first published by Thomson Reuters here.
Photography credit: Petra Molnar
(Contribution by: Petra Molnar, Associate Director at Refugee Law Lab, Centre for Refugee Studies & Sarah Chander, Senior Policy Advisor, EDRi)