Mass facial recognition is the apparatus of police states and must be regulated
In April, the EU will become the first global power to lay out detailed plans for regulating artificial intelligence, including rules on the use of biometric surveillance technologies, like public facial recognition.
Biometric technologies collect data unique to our bodies and behaviours, divulging sensitive information about who we are. Over half of EU countries are using facial recognition and similar tools in ways that conflict with their own human rights rules, the advocacy group European Digital Rights (EDRi) has found.
Despite being promoted as an easy solution to crime, these invasive technologies often rest on unscientific and discriminatory foundations. They are deployed by powerful actors trying to play God, and they benefit only the tech companies that sell them.
Naively at best, our governments are setting up the perfect conditions for the anti-democratic mass surveillance of entire populations. Now, Europeans are fighting back — the Reclaim Your Face movement has launched a pan-European petition to push EU institutions into banning the harmful use of these technologies.
Whilst facial recognition technology has been around for decades, the last few years have seen an increasing desire from cash-strapped councils and police forces to automate the analysis of people’s facial and vocal expressions, body movements and behaviours to predict whether a person is “suspicious”.
Earlier this month, the EU’s highest court heard a case about controversial AI “lie detector” tests trialled through a project called iBorderCTRL. MEP Patrick Breyer is suing the European Commission over this EU-funded pilot, which ran between 2016 and 2019. According to the Commission, the project aimed to speed up checks for non-EU nationals travelling to the EU. In reality, it used invasive biometric analysis of emotions, a technique some scientists say does not work, to determine whether or not to grant someone the freedom to travel.
iBorderCTRL and similar EU-funded projects have been criticised for the secrecy under which these suspected rights-violating experiments were carried out, for being part of increasingly inhumane migration-control strategies, and for infringing on people’s “cognitive liberty”. Another example can be seen in the Netherlands, where authorities are using biometric technologies to predict people’s aggressiveness.
We are witnessing authorities spending huge sums of money on data-driven biometric tech as a quick fix for complex societal problems. They are doing this instead of investing in education, welfare or in building trust with communities, which are more effective methods of reducing crime and improving equal access to opportunities.
Scientists have demonstrated the structural discrimination embedded in biometric systems. Research shows that facial analysis algorithms consistently judge black faces to be angrier and more threatening than white faces. We also know that biometric systems are designed with a purportedly “neutral” face and body in mind, which can exclude people with disabilities and anybody else who does not conform to an arbitrary norm.
Kitting out our streets, supermarkets, and parks with these technologies will amplify existing discrimination against people of colour, people with disabilities, and other marginalised groups. Instead of being contestable, these practices will be hidden under a veil of false scientific authority and deliberate opacity.
London police, for example, claim that facial recognition is never used alone to make a decision. Yet research from the University of Essex shows that humans are psychologically discouraged from challenging AI-based decisions because of the heavy burden of disproving them. In short: “Computer says no.”
In claiming to know what people are thinking or are going to do next, the latest generation of biometric systems is propelling us towards a technologically enabled thought police. Through “mind-reading” experiments like iBorderCTRL, the EU is legitimising flimsy pseudoscience and opening the door to ever more invasive biometric predictions. The technology ignores the fact that expressions and emotions vary greatly at both a cultural and an individual level. Biometric detection of suspicion instead forces everyone to conform to an arbitrary technological standard.
Governments are opening a Pandora’s box in which state control extends not just to everything we do but to our innermost thoughts, and they are applying these tools in contexts where our innocence or guilt is predicted on the basis of our biometric data. Such tools and practices are the police state’s dream par excellence and the antithesis of the rule of law. But we have an opportunity to stop them before they become pervasive in our societies.
The European Reclaim Your Face campaign, which EDRi backs, calls for a permanent end to biometric mass surveillance practices like iBorderCTRL. Its petition, launching today, aims to gather one million signatures and to promote a future in which people are not segmented based on how they look, their religion, sexuality, disability, or the other factors that make up our diverse identities. History could not give us a clearer warning about what happens when societies are divided and turned against one another on the basis of who is constructed as suspicious, abnormal and aberrant.
EU countries will now decide whether or not the development and use of AI will be legally controlled. As the first continent to consider regulating these fast-moving technologies, Europe has the power to set a global example, for instance by banning biometric mass surveillance practices.
This would ensure that we can all enjoy the presumption of innocence, free from the judgment of machines that strip us of our dignity, violate our bodily and psychological integrity, and threaten our democratic freedoms. We urge you to #ReclaimYourFace alongside us.
This op-ed was first published by Euronews.