Accountable Migration Tech: Transparency, governance and oversight

By EDRi · March 11, 2020

Migration continues to dominate headlines around the world. In the currently deteriorating situation at the border between Greece and Turkey, for example, with reports of increasingly repressive measures to turn people away, new technologies already play a part in border surveillance and decision-making.


Our previous two blogposts explored how far-reaching migration control technologies actually are. From refugee camps to border spaces to immigration hearing rooms, we are seeing the rise of automated decision-making tools replacing the human officers who make decisions about your migration journey. The use of these technologies also opens the door to violations of migrants’ rights.

How are these technologies of migration control impacting fundamental rights and what can we do about it?

Life and liberty

We should not underestimate the far-reaching impacts of new technologies on the lives and security of people on the move. The right to life and liberty is one of the most fundamental internationally protected rights, and it is highly relevant in migration and refugee contexts. Multiple technological experiments already impinge on it. The starkest example is the denial of liberty when people are placed in detention. Immigration detention is highly discretionary. Justifying increased incarceration on the basis of algorithms that have been tampered with, as at the US-Mexico border, shows just how far we are willing to go in allowing incursions on basic human rights under the guise of national security and border enforcement. Errors, mis-calibrations, and deficiencies in training data can result in profound infringements of migrants’ safety, security, and liberty when they are placed in unlawful detention. For example, aspects of training data which are mere coincidences in reality may be treated as relevant patterns by a machine-learning system, leading to outcomes that are arbitrary. This is one reason why the EU General Data Protection Regulation (GDPR) requires the ability to demonstrate that the correlations applied in algorithmic decision-making are “legitimate justifications for the automated decisions”.

Equality rights and freedom from discrimination

Equality and freedom from discrimination are integral to human dignity, particularly in situations where negative inferences against marginalised groups are frequently made. Algorithmic decision-making is prone to the same concerns that plague human decision-makers: lack of transparency and accountability, discrimination, bias, and error. The opaque nature of immigration and refugee decision-making creates an environment ripe for algorithmic discrimination. Decisions in this system – from whether a refugee’s life story is “truthful” to whether a prospective immigrant’s marriage is “genuine” – are highly discretionary and often hinge on an assessment of a person’s credibility. In the experimental use of AI lie detectors at EU airports, what will count as truthfulness, and how will differences in cross-cultural communication be handled to ensure that problematic inferences are not encoded into and reinforced by the system? The complexity of migration – and of the human experience – is not easily reducible to an algorithm.

Privacy rights

Privacy is not only a consumer or property interest: it is a human right, rooted in foundational democratic principles of dignity and autonomy. We must consider the differential impacts of privacy infringements when looking at the experiences of people on the move. If collected information is shared with the repressive governments from whom refugees are fleeing, the ramifications can be life-threatening. Or, if automated decision-making systems designed to predict a person’s sexual orientation are infiltrated by states targeting the LGBTQI+ community, discrimination and threats to life and liberty will likely occur. A facial recognition algorithm developed at Stanford University already tried to discern a person’s sexual orientation from photos. This use of technology has particular ramifications in the refugee and immigration context, where asylum applications on sexual orientation grounds often require applicants to prove their persecution on the basis of outdated tropes around non-heteronormative behaviour. This is why protecting people’s privacy is paramount for their safety, security, and well-being.

Procedural justice

When we talk about the human rights of people on the move, we must also consider the procedural justice principles that govern how a person’s application is reviewed and assessed, and what due process looks like in an increasingly automated context.

For example, in immigration and refugee decision-making, procedural justice dictates that the person affected by administrative processes has a right to be heard, the right to a fair, impartial and independent decision-maker, the right to reasons – also known as the right to an explanation – and the right to appeal an unfavourable decision. However, it is unclear how administrative law will handle the augmentation or even replacement of human decision-makers by algorithms.

While these technologies are often presented as tools to be used by human decision-makers, the line between machine-made and human-made decisions is often unclear. Given the persistence of automation bias, or the predisposition towards considering automated decisions as more accurate and fair, what rubrics will human decision-makers use to determine how much weight to place on algorithmic predictions, as opposed to any other information available to them, including their own judgment and intuition? When things go wrong and a person wishes to challenge an algorithmic decision, how will we decide what counts as a reasonable decision? It’s not clear how tribunals and courts will deal with automated decision-making, what standards of review will be used, and what redress or appeal will look like for people wishing to challenge incorrect or discriminatory decisions.

What we need: Context-specific governance and oversight

Technology replicates power in society, and its benefits are not experienced equally. Yet no global regulatory framework currently exists to oversee the use of new technologies in the management of migration. Much of this technological development occurs in so-called “black boxes”, where intellectual property laws and proprietary considerations shield the public from fully understanding how the technology operates.

While conversations around the ethics of Artificial Intelligence (AI) are taking place, ethics do not go far enough. We need a sharper focus on oversight mechanisms grounded in fundamental human rights that recognise the high-risk nature of developing and deploying technologies of migration control. Affected communities must also be involved in these conversations. Rather than developing more technology “for” or “about” refugees and migrants and collecting vast amounts of data, people who have themselves experienced displacement should be at the centre of discussions on when and how emerging technologies should be integrated into refugee camps, border security, or refugee hearings – if at all.
As a starting point, states and international organisations developing and deploying migration control technologies should, at a minimum:

  • commit to transparency and report publicly on what technology is being developed and used, and why;
  • adopt binding directives and laws that comply with internationally protected fundamental human rights obligations and that recognise the high-risk nature of migration control technologies;
  • establish an independent body to oversee and review all use of automated technologies in migration management;
  • foster conversations between policymakers, academics, technologists, civil society, and affected communities on the risks and promises of using new technologies.

Stay tuned for updates on our AI and migration project over the next couple of months as we document the lived experiences of people on the move who are affected by technologies of migration control. If you are interested in finding out more about this project or have feedback and ideas, please contact petra.molnar [at] utoronto [dot] ca.

Mozilla Fellow Petra Molnar joins us to work on AI & discrimination (26.09.2019)
https://edri.org/mozilla-fellow-petra-molnar-joins-us-to-work-on-ai-and-discrimination/

The human rights impacts of migration control technologies (12.02.2020)
https://edri.org/the-human-rights-impacts-of-migration-control-technologies/

Immigration, iris-scanning and iBorderCTRL (26.02.2020)
https://edri.org/immigration-iris-scanning-and-iborderctrl/

Introducing De-Carceral Futures: Bridging Prison and Migrant Justice – Editors’ Introduction: Detention, Prison, and Knowledge Translation in Canada and Beyond
http://carfms.org/introducing-de-carceral-futures/

The Privatization of Migration Control (24.02.2020)
https://www.cigionline.org/articles/privatization-migration-control

Law and Autonomous Systems Series: Automated Decisions Based on Profiling – Information, Explanation or Justification? That is the Question! (27.04.2018)
https://www.law.ox.ac.uk/business-law-blog/blog/2018/04/law-and-autonomous-systems-series-automated-decisions-based-profiling

Briefing: A manufactured refugee crisis at the Greek-Turkish border (04.03.2020)
https://www.thenewhumanitarian.org/analysis/2020/03/04/refugees-greece-turkey-border

Clearview’s Facial Recognition App Has Been Used By The Justice Department, ICE, Macy’s, Walmart, And The NBA (27.02.2020)
https://www.buzzfeednews.com/article/ryanmac/clearview-ai-fbi-ice-global-law-enforcement

Why faces don’t always tell the truth about feelings (26.02.2020)
https://www.nature.com/articles/d41586-020-00507-5

(Contribution, Petra Molnar, Mozilla Fellow, EDRi)