Dangerous by design: A cautionary tale about facial recognition
In this fifth and final installment of EDRi's facial recognition and fundamental rights series, we consider an experience of harm caused by fundamentally rights-violating biometric surveillance technology.
So far, this series has explored facial recognition as a fundamental rights issue; the EU’s response; the evidence about the risks; and the threat of public and commercial data exploitation.
Leo Colombo Viña is the founder of a software development company and a professor of Computer Science. A self-professed tech lover, he says it was “ironic” that a case of mistaken identity with police facial recognition happened to him. What unfolded next paints a powerful picture of the intrinsic risks of biometric surveillance. Whilst Leo’s experience occurred in Buenos Aires, Argentina, his story raises serious issues for the deployment of facial and biometric recognition in the EU, too.
“I’m not the guy they’re looking for”
One day in 2019, Leo was leaving the bank mid-afternoon to take the metro back to his office. While he was waiting for the train, a police officer approached him: the officer had received an alert on his phone saying that Leo was wanted in connection with an armed robbery committed 17 years earlier. The alert had been triggered by the metro station’s facial recognition surveillance system, which had recently been the subject of a large media campaign.
His first assumption was “okay, there’s something up, I’m not the guy they’re looking for”. But the alert the police showed him clearly displayed his picture and personal details. “Okay,” he thought, “what the f***?” When they told him that the problem could not be resolved there and then, and that he would have to accompany them to the police station, Leo’s initial surprise turned into concern.
Wrongful criminalisation
It turned out that whilst the picture and ID number in the alert matched Leo’s, bizarrely, the name and date of birth did not. Having never committed a crime, nor even been under investigation, Leo still does not know how his face and ID number came to be wrongfully included in a criminal suspect database. Despite subsequent legal requests from across civil society, the government has not disclosed how people’s data is processed, stored or accessed. This is not unique to Argentina: across Europe, too, policing technology and the processing of personal data are frighteningly opaque.
At the police station, Leo spent four hours in the bizarre position of having to “prove that I am who I am”. He says the police treated him kindly and respectfully – although he suspects that being a Caucasian professional meant that they dismissed him as a threat. The evidence for this came later, when a similar false alert happened to another man who also had no criminal record, but who had darker skin than Leo and came from a typically poorer area. He was wrongfully jailed for six days on the strength of the system’s alert alone – despite the fact that his name was not a match.
Undermining police authority
If the purpose of policing is to catch criminals and keep people safe, then Leo’s experience is a striking example of why facial recognition does not work. Four officers spent a combined total of around 20 hours trying to resolve his case (at the taxpayers’ expense, he points out). That does not include the time the public prosecutor spent afterwards trying to work out what went wrong. Leo recalls that the police were frustrated to be tied up with bureaucracy and attempts to understand the decision the system had made, whilst their posts were left vacant and real criminals went free.
The police told Leo that the Commissioner receives a bonus tied to the use of the facial recognition system. They confided that it seemed to be a political move, not a policing or security improvement. Far from helping them solve violent crime – one of the reasons often given for allowing such intrusive systems – it mostly flagged non-violent issues such as witnesses who had not turned up for trials because they hadn’t received a summons, or parents who had overdue child support payments.
The implications for police autonomy are stark. Leo points out that despite swift confirmation that he was not the suspect, the police had neither the ability nor the authority to override the alert. They were held hostage to a system that they did not properly understand or control, yet they were compelled to follow its instructions and decisions without knowing how or why they had been made.
Technology is a tool made by humans, not a source of objective truth or legal authority. In Leo’s case, the police assumed early on that the match was not legitimate because he did not fit their perception of a criminal. But for others who were also wrongfully identified, the assumption was that they did look like a criminal, so the system was presumed to be working correctly. Global anti-racism activists will be familiar with these damaging, prejudicial beliefs. Facial recognition does not eliminate human bias, but rather reinforces it by giving discriminatory human assumptions a false sense of “scientific” legitimacy.
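The arithmetic behind this is worth spelling out. When a system compares every passer-by against a large watchlist, even a tiny per-comparison error rate produces a steady flood of false alerts, because genuine suspects are vanishingly rare among the people scanned. The sketch below illustrates this base rate problem with purely hypothetical figures – the passenger count, watchlist size and error rate are assumptions chosen for illustration, not published numbers for the Buenos Aires system.

```python
# A minimal back-of-the-envelope sketch of the base rate problem in
# one-to-many (1:N) face matching. All figures are assumptions chosen
# for illustration; they are NOT data from the Buenos Aires system.

daily_passengers = 1_000_000   # assumed metro riders scanned per day
watchlist_size = 10_000        # assumed faces on the suspect watchlist
false_match_rate = 1e-6        # assumed per-comparison false match probability

# Every scanned face is compared against every watchlist entry, so the
# chance of at least one false alert compounds across the whole list.
p_false_alert = 1 - (1 - false_match_rate) ** watchlist_size
expected_false_alerts = daily_passengers * p_false_alert

print(f"Chance an innocent passenger triggers an alert: {p_false_alert:.2%}")
print(f"Expected wrongful alerts per day: {expected_false_alerts:,.0f}")
# With these assumptions: roughly a 1% chance per passenger, i.e. about
# 10,000 wrongful alerts a day. Because genuine suspects are so rare,
# almost every alert an officer acts on concerns an innocent person.
```

Under these (hypothetical) assumptions, an officer trusting the alert would be wrong nearly every time – which is exactly the dynamic that put Leo, and the man jailed for six days, in front of the police.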
Technology cannot fix a broken system
The issues faced by Leo, and the officers who had to resolve his situation, reflect deeper systemic problems which cannot be solved by technology. Biased or inefficient police processes, mistakes with data entry, and a lack of transparency do not disappear when you automate policing – they get worse.
Leo has had other experiences with the fallibility of biometric technology.
A few years ago, he and his colleagues experimented with developing fingerprinting software at the request of a client, but ultimately decided against it. “We realised that biometric systems are not good enough,” he says. “It feels good enough, it [is] good marketing, but it’s not safe.” He points to the fact that he was recently able to unlock his phone using a picture of himself. “See? You are not secure.”
Leo shared his story – which quickly went viral on Twitter – because he wanted to show that “there is no magic in technology.” As a software engineer, he says, people see him as a “medieval wizard”. As he sees it, though, he is someone with the responsibility and the ability to show people the truth behind government propaganda about facial recognition, starting with his own experience.
Aftermath
I asked Leo whether the government considered the experiences of those who had been affected. He laughed sardonically. “No, no, absolutely not, no.” He continues: “I shouldn’t be in that database, because I didn’t commit any crime.” Yet it took the public prosecutor four months to confirm the removal of his data, and the metro facial recognition system is still in use today. Leo thinks it has been a successful marketing tool for a powerful city government wanting to assuage citizens’ safety concerns. He believes that the people have been lied to, and that fundamentally unsafe technology cannot make the city safer.
A perfect storm of human errors, systemic policing issues and privacy violations led to Leo being included in the database, but this is by no means a uniquely Argentinian problem. The Netherlands, for example, has included millions of people in a criminal database despite their never having been charged with a crime. Leo reflects that “the system is the whole thing, from the beginning to end, from the input to the output. The people working in technology just look at the algorithms, the data, the bits. They lose the big picture. That’s why I shared my story … Just because.” We hope the EU is taking notes.
Explore the rest of the facial recognition and fundamental rights series
- Facial recognition and fundamental rights 101 – The first post in the series. Private companies and governments worldwide are already experimenting with facial recognition technology. Individuals, lawmakers, developers – and everyone in between – should be aware of its rise, and the risks it poses to rights to privacy, freedom, democracy and non-discrimination.
- The many faces of facial recognition in the EU – The second installment looks at how different EU Member States, institutions and other countries worldwide are responding to the use of this tech in public spaces.
- Your face rings a bell: Three common uses of facial recognition – Not all applications of facial recognition are created equal. The third installment sifts through the hype to analyse three increasingly common uses: tagging pictures on Facebook, automated border control gates, and police surveillance.
- Stalked by your digital doppelganger? – The fourth installment explores what could happen if facial recognition collides with data-hungry business models and 24/7 surveillance.
- Facial Recognition & Biometric Mass Surveillance: Document Pool – Despite evidence that public facial recognition and other forms of biometric mass surveillance infringe on a wide range of EU fundamental rights, European authorities and companies are deploying these systems at a rapid rate, without proper consideration for how such practices invade people’s privacy on an enormous scale; amplify existing inequalities; and undermine democracy, freedom and justice.
Dismantling AI Myths and Hype (04.12.2019)
https://daniel-leufer.com/2019/12/05/dismantling-ai-myths-and-hype/
Data-driven policing: The hardwiring of discriminatory policing practices across Europe (19.11.2019)
https://www.citizensforeurope.eu/learn/data-driven-policing-the-hardwiring-of-discriminatory-policing-practices-across-europe
Facial recognition technology: fundamental rights considerations in the context of law enforcement (27.11.2019)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper.pdf
As told to Ella Jakubowska @ellajakubowska1, EDRi intern, by Leo Colombo @LeCoVi