Technologies like automated decision-making, biometrics, and unpiloted drones are increasingly used to control migration, affecting millions of people on the move. This second blog post in our series on AI and migration highlights some of these uses to show the very real impact on people’s lives – an impact exacerbated by the lack of meaningful governance and oversight of these technological experiments.

What can happen to you in your migration journey when bots are involved?

Before the border: Want to eat? Get your eyes scanned!

Before you even cross a border, you will already be interacting with various technologies. Unpiloted drones surveil the Mediterranean corridor under the guise of border control. However, if similar technologies, such as the so-called “smart border” surveillance at the US border, are any indication, this may lead to more people dying as they seek safety. Biometrics such as iris scanning are increasingly being rolled out in humanitarian settings – where refugees, on top of their already difficult living conditions, are required to get their eyes scanned in order to eat, for example. And now, not even the personal posts you intended to share only with your friends and family are safe: scraping social media to screen immigration applications is becoming common practice.

What is happening with all this data? Various international organisations, such as the United Nations (UN), are using Big Data – extremely large data sets – to predict population movements. However, data collection is not an apolitical exercise, especially when powerful actors like states or international organisations collect information on vulnerable populations. In an increasingly anti-immigrant global landscape, migration data has also been misrepresented for political ends: to affect the distribution of aid dollars and resources, and to support hardline anti-immigration policies.

What is also concerning is the growing role of the private sector in the collection, use, and storage of this data. The World Food Programme has signed a USD 45 million deal with Palantir Technologies, a company that recently joined the EU lobby register and that has been widely criticised for providing technology supporting the detention and deportation programmes run by US Immigration and Customs Enforcement (ICE). What will happen to the data of the 92 million aid recipients shared with Palantir? What data accountability mechanisms are in place for this partnership, and can data subjects refuse to have their data shared?

At the border: Are you lying? A bot can tell!

When you arrive at the border, more and more machines are there to scan you, surveil you, and collect information about you. Increasingly, these machines rely on automated decision-making, even though instances of bias in automated decision-making are widely documented. Pilot projects have emerged that monitor your face for signs of lying; if the system becomes more “sceptical” through a series of increasingly complicated questions, you are selected for further screening by a human officer. But can such a system account for cross-cultural differences in the way we communicate? What if you are traumatised and unable to recall details clearly? Discriminatory applications of facial or emotion recognition technologies have far-reaching consequences for people’s lives and rights, particularly in the context of migration.

Beyond the border: Automating decisions

When you are applying for a visa, or want to sponsor your spouse to join you, how do you feel about algorithms making decisions on your application? A variety of countries have begun experimenting with automating decisions in immigration and refugee applications, visas, and even immigration detention. This use of technology may seem like a good idea, but many immigration applications are complicated: already, two human officers looking at the same set of evidence can reach two completely different determinations. How will an automated system deal with these nuances? And what if you want to challenge a decision you disagree with in court? Right now, it is unclear who is responsible when things go wrong – the coder who creates the algorithm, the immigration officer using it, or even the algorithm itself?

Mistakes in migration decisions can have lasting repercussions – you can be separated from your family, lose your job, or even be wrongfully deported. This happened to over 7000 students who were deported from the UK based on a faulty algorithm that accused them of cheating on a language test. Where are these students now?

What about your human rights?

A number of your internationally protected rights are already engaged by the increasingly widespread use of new technologies to manage migration. These include equality rights and freedom from discrimination; the right to life, liberty, and security of the person; freedom of expression; and privacy rights. When public entities make decisions about you, your due process rights are also affected, including the right to an impartial decision-maker, the right to appeal, and the right to know the case against you. These rights are particularly important to think about in high-risk contexts – such as decisions on your unemployment benefits, or on whether you are selected for a job interview – where the repercussions of getting a decision wrong can separate families, ruin careers, or, in extreme circumstances, impact your life and liberty.

However, there is currently no integrated global governance framework regulating the use of automated technologies, and no specific regulation for the context of migration management. Much of the global conversation centres on ethics, without clear enforcement mechanisms or meaningful accountability. Signing and ratifying Convention 108+ should be a priority for states around the globe, as should strong enforcement of the protections it envisages.

Our next blog post will explore some of these rights and suggest a way towards context-specific accountability and governance mechanisms for migration control technologies.

One of the UN’s largest aid programmes just signed a deal with the CIA-backed data monolith Palantir (12.02.2020)
https://privacyinternational.org/news-analysis/2712/one-uns-largest-aid-programmes-just-signed-deal-cia-backed-data-monolith

The U.S. Border Patrol and an Israeli military contractor are putting a Native American reservation under “persistent surveillance” (25.08.2019)
https://theintercept.com/2019/08/25/border-patrol-israel-elbit-surveillance/

Data protection, immigration enforcement and fundamental rights: What the EU’s Regulations on interoperability mean for people with irregular status
https://picum.org/wp-content/uploads/2019/11/Data-Protection-Immigration-Enforcement-and-Fundamental-Rights-Exec-Summary-EN.pdf

Your body is a passport
https://www.politico.eu/article/future-passports-biometric-risk-profiles-the-codes-we-carry/

(Contribution by Petra Molnar, Mozilla Fellow, EDRi)