
People, not experiments: why cities must end biometric surveillance

We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it. The discussion got to the heart of EDRi’s warnings that biometric surveillance puts limits on everyone’s rights and freedoms, amplifies discrimination, and treats all of us as experimental test subjects. This techno-driven democratic vacuum must be stopped.

By EDRi · October 14, 2020

From seriously flawed live trials of facial recognition by London’s Metropolitan Police, to unlawful biometric surveillance in French schools, to the secretive roll-out of facial recognition that has been used against protesters in Serbia: creepy mass surveillance by governments and private companies, using people’s sensitive face and body data, is on the rise across Europe. Yet according to a 2020 survey by the EU’s Fundamental Rights Agency, 80% of Europeans are against sharing their face data with authorities.

On 28 September, EDRi participated in a debate at the NGI Summit on “Biometrics and facial recognition in cities” alongside policymakers and police officers who have authorised the use of the tech in their cities. EDRi explained that public facial recognition, and similar systems which use other parts of our bodies like our eyes or the way we walk, are so intrusive as to be inherently disproportionate under European human rights law. The ensuing discussion revealed many of the reasons why public biometric surveillance poses such a threat to our societies:

• Cities are not adequately considering risks of discrimination: according to research by WebRoots Democracy, black, brown and Muslim communities in the UK are disproportionately over-policed. With the introduction of facial recognition in multiple UK cities, minoritised communities are now having their biometric data surveilled at much higher rates. In one example from the research, the London Metropolitan Police failed to carry out an equality impact assessment before using facial recognition at the Notting Hill Carnival – an event which famously celebrates black and Afro-Caribbean culture – despite knowing the sensitivity of the tech and the foreseeable risks of discrimination. The research also showed that whilst marginalised communities are the most likely to have police tech deployed against them, they are also the least consulted about it.

• Legal checks and safeguards are being ignored: according to London’s Chief Technology Officer (CTO), the London Metropolitan Police has been on “a journey” of learning, and understands that some of its past deployments of facial recognition did not have proper safeguards. Yet under data protection law, authorities must conduct an analysis of fundamental rights impacts before they deploy a technology. And it’s not just London that has treated fundamental rights safeguards as an afterthought when deploying biometric surveillance: courts and data protection authorities have had to step in to stop unlawful deployments of biometric surveillance in Sweden, Poland, France, and Wales (UK) due to a lack of checks and safeguards.

• Failure to put fundamental rights first: the London CTO and the Dutch police explained that facial recognition in cities is necessary for catching serious criminals and keeping people safe. In London, the police have focused on ethics, transparency and “user voice”. In Amsterdam, the police have focused on “supporting the safety of people and the security of their goods” and have justified the use of facial recognition by pointing out that it is already prevalent in society. Crime prevention and public safety are legitimate public policy goals, but the threat that biometric mass surveillance in public spaces poses to everyone’s fundamental rights means that vague and general justifications are simply not sufficient. Having fundamental rights means that those rights cannot be curtailed unless there is a compelling justification for doing so.

• The public are being treated as experimental test subjects: across these examples, it is clear that members of the public are being used as subjects in high-stakes experiments which can have real-life impacts on their freedom, access to public services, and sense of security. Police forces and authorities are using biometric systems as a way to learn and to develop their capabilities. In doing so, they are not only failing to meet their human rights obligations, but are also violating people’s dignity by treating them as learning opportunities rather than as individual humans deserving of respect.

The debate highlighted worrying patterns of a lack of transparency and consideration for fundamental rights in current deployments of facial recognition, and other public biometric surveillance, across Europe. The European Commission has recently started to consider how technology can reinforce structural racism, and to ask whether biometric mass surveillance is compatible with democratic societies. But at the same time, it is bankrolling projects like the horrifyingly dystopian iBorderCTRL. EDRi’s position is clear: if we care about fundamental rights, our only option is to stop the regulatory whack-a-mole and permanently ban biometric mass surveillance.

Explore EDRi’s work on biometrics