European court supports transparency in risky EU border tech experiments
The Court of Justice of the European Union has ruled that the European Commission must release documents it initially withheld relating to the controversial iBorderCtrl project, which experimented with risky biometric ‘lie detection’ systems at EU borders. However, the judgement continued to safeguard some of iBorderCtrl’s commercial interests, even though the project is an EU-funded migration technology with implications for the protection of people’s rights.
This press release is an immediate reaction to the judgement. A more in-depth analysis will be published later on our website.
Petra Molnar, a lawyer specialising in migration technologies at the Refugee Law Lab and the author of EDRi’s report Technological Testing Ground: Migration Management Experiments and Reflections from the Ground Up (2020), says:
“Today’s ruling highlights the risky nature of border tech experiments such as AI-powered lie detectors like iBorderCtrl. We need stricter governance mechanisms that recognise the very real harms perpetuated by these experimental and harmful technologies.”
Brought before the General Court of the Court of Justice of the European Union in 2019, the lawsuit from MEP Patrick Breyer (Case T-158/19) against the European Commission raises key questions about the ethics, funding and democratic oversight of surveillance technologies in the EU, and the responsibility of EU research programmes towards people who are not EU nationals.
Breyer (Greens/EFA group) has been seeking the release of documents related to iBorderCtrl, the EU-funded research project which planned to introduce automated lie detection at EU borders. The Horizon 2020 project has come under scrutiny for its human rights implications, including discrimination and privacy concerns.
In its judgement, the General Court supported Breyer’s argument, concluding that there is a public interest in the democratic oversight of the development of surveillance and control technologies (§200). Furthermore, this important ruling agrees that there is a public interest in challenging whether such technologies are desirable and whether their development should even be funded by public money.
iBorderCtrl is just one of many ‘innovations’ introduced into border enforcement and immigration risk assessments and decision-making. The EU is currently grappling with how to regulate such risky technologies in its proposed Artificial Intelligence (AI) Act. However, the AI Act urgently needs to go much further to safeguard people’s fundamental human rights, especially in contexts like borders and immigration, which are rife with projects that treat people as test subjects.
The ruling also has implications for the Act’s treatment of AI systems that use people’s physical, physiological and behavioural data in ways which threaten their fundamental rights and freedoms. EDRi and 119 other civil society organisations have demonstrated that measures in the draft AI Act are insufficient to protect people from harmful and discriminatory uses of AI. Ella Jakubowska, Policy Advisor at EDRi, explains:
“The EDRi network and Reclaim Your Face campaign have provided vast evidence about how certain uses of biometric data use people’s faces, bodies and behaviours against them. Despite this, the EU has so far failed to prohibit biometric mass surveillance practices including emotion recognition. Unless the forthcoming AI Act draws lines in the sand against undignified and scientifically-questionable practices like iBorderCtrl, it will fail in its promise for trustworthy, human-centric AI.”
Image credit: Kenya-Jade Pinto
- “Civil society calls on the EU to put fundamental rights first in the AI Act”, EDRi
- “Ban Biometric Mass Surveillance: A set of fundamental rights demands for the European Commission and EU Member States”, EDRi position paper
- “Technological Testing Ground: Migration Management Experiments and Reflections from the Ground Up”, Petra Molnar