This is the first blogpost of a series on our new project, which brings to the forefront the lived experiences of people on the move as they are impacted by technologies of migration control. The project, led by our Mozilla Fellow Petra Molnar, highlights the need to regulate the opaque technological experimentation documented in and around border zones of the EU and beyond. We will be releasing a full report later in 2020, but this series of blogposts will feature some of the most interesting case studies.

At the start of this new decade, over 70 million people have been forced to move due to conflict, instability, environmental factors, and economic reasons. In response to increased migration into the European Union, many states are looking to various technological experiments to strengthen border enforcement and manage migration. These experiments range from Big Data predictions about population movements in the Mediterranean to automated decision-making in immigration applications and Artificial Intelligence (AI) lie detectors at European borders. However, these technological experiments often fail to consider their profound human rights ramifications and real impacts on human lives.

A human laboratory of high-risk experiments

Technologies of migration management operate in a global context. They reinforce institutions, cultures, policies, and laws, and they exacerbate the gap between the public and the private sector, where the power to design and deploy innovation comes at the expense of oversight and accountability. Technologies have the power to shape democracy and influence elections, through which they can reinforce the politics of exclusion. The development of technology also reinforces power asymmetries between countries and influences our thinking about which countries can push for innovation, while other spaces, like conflict zones and refugee camps, become sites of experimentation. The development of technology is not inherently democratic, and issues of informed consent and the right of refusal are particularly important to consider in humanitarian and forced migration contexts. For example, under the justification of efficiency, refugees in Jordan have their irises scanned in order to receive their weekly rations. Some refugees in the Azraq camp have reported feeling that they had no option to refuse the scan: if they did not participate, they would not get food. This is not free and informed consent.

These discussions are not just theoretical: various technologies are already used to control migration, to automate decisions, and to make predictions about people’s behaviour.

Palantir machine says: no

However, are these appropriate tools to use, particularly without any governance or accountability mechanisms in place for if or when things go wrong? Immigration decisions are often opaque, discretionary, and hard to understand, even when human officers, not artificial intelligence, are making them. Many of us have had difficult experiences trying to get a work permit, reunite with a spouse, or adopt a baby across borders, not to mention seek refugee protection from conflict and war. Technological experiments that augment or replace human immigration officers can have drastic results: in the UK, 7,000 students were wrongfully deported because a faulty algorithm accused them of cheating on a language acquisition test. In the US, Immigration and Customs Enforcement (ICE) has partnered with Palantir Technologies to track and separate families and to enforce deportations and detentions of people escaping violence in Central and Latin America.

What if you wanted to challenge one of these automated decisions? Where do responsibility and liability lie – with the designer of the technology, its coder, the immigration officer, or the algorithm itself? Should algorithms have legal personality? Answering these questions is paramount, as much of the decision-making in immigration and refugee matters already sits at an uncomfortable legal nexus: the impact on the rights of individuals is very significant, even where procedural safeguards are weak.

Sauron Inc. watches you – the role of the private sector

The lack of technical capacity within government and the public sector can lead to an inappropriate over-reliance on the private sector. Adopting emerging and experimental tools without in-house expertise capable of understanding, evaluating, and managing these technologies is irresponsible and downright dangerous. Private sector actors have an independent responsibility to ensure that the technologies they develop do not violate international human rights law and domestic legislation. Yet much technological development occurs in so-called “black boxes,” where intellectual property law and proprietary considerations shield the public from fully understanding how the technology operates. Powerful actors can easily hide behind intellectual property legislation or various other corporate shields to “launder” their responsibility and create a vacuum of accountability.

While the use of these technologies may lead to faster decisions and shorter delays, it may also exacerbate existing barriers to access to justice and create new ones. At the end of the day, we have to ask ourselves: what kind of world do we want to create, and who actually benefits from the development and deployment of technologies used to manage migration, profile passengers, or carry out other forms of surveillance?

Technology replicates power structures in society. Affected communities must also be involved in technological development and governance. While conversations around the ethics of AI are taking place, ethics do not go far enough. We need a sharper focus on oversight mechanisms grounded in fundamental human rights.

This project builds on critical examinations of the human rights impacts of automated decision-making in Canada’s refugee and immigration system. In the coming months, we will be collecting testimonies in locations including the Mediterranean corridor and various border sites in Europe. Our next blogpost will explore how new technologies are being used before, at, and beyond the border, and we will highlight the very real impacts that these technological experiments have on people’s lives and rights as they are surveilled and as their movement is controlled.

If you are interested in finding out more about this project or have feedback and ideas, please contact petra.molnar [at] utoronto [dot] ca. The project is funded by the Mozilla and Ford Foundations.

Mozilla Fellow Petra Molnar joins us to work on AI & discrimination (26.09.2019)
https://edri.org/mozilla-fellow-petra-molnar-joins-us-to-work-on-ai-and-discrimination/

Technology on the margins: AI and global migration management from a human rights perspective, Cambridge International Law Journal, December 2019
https://www.researchgate.net/publication/337780154_Technology_on_the_margins_AI_and_global_migration_management_from_a_human_rights_perspective

Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee Systems, University of Toronto, September 2018
https://ihrp.law.utoronto.ca/sites/default/files/media/IHRP-Automated-Systems-Report-Web.pdf

New technologies in migration: human rights impacts, Forced Migration Review, June 2019
https://www.fmreview.org/ethics/molnar

Once migrants on Mediterranean were saved by naval patrols. Now they have to watch as drones fly over (04.08.2019)
https://www.theguardian.com/world/2019/aug/04/drones-replace-patrol-ships-mediterranean-fears-more-migrant-deaths-eu

Mijente: Who is Behind ICE?
https://mijente.net/notechforice/

The Threat of Artificial Intelligence to POC, Immigrants, and War Zone Civilians
https://towardsdatascience.com/the-threat-of-artificial-intelligence-to-poc-immigrants-and-war-zone-civilians-e163cd644fe0

(Contribution, Petra Molnar, Mozilla Fellow, EDRi)