The urgent need to #reclaimyourface

The rise of automated video surveillance is often touted as a quick, easy, and efficient solution to complex societal problems. In reality, roll-outs of facial recognition and other biometric mass surveillance tools constitute a systematic invasion of people's fundamental rights to privacy and data protection. Just as with toxic chemicals, these toxic uses of biometric surveillance technologies need to be banned across Europe.

By EDRi · June 2, 2021

So-called artificial intelligence (AI) is probably the most hyped technology of our time. From the promotion of ‘smart’ cities and blockchain to the rise of ‘AI for the common good’, this jargon creates an alluring promise of tech that will do everything from helping people get rich quick to solving humanity’s most pressing problems.

Automated video surveillance is one such technology that has been touted as an easy fix for issues of public safety and security. Yet in reality, automated facial recognition is designed to watch, analyse and judge every single person that passes by, regardless of whether they have done anything wrong, which makes it by definition an authoritarian mass surveillance tool. This is why its purported benefits can never be enough to justify its vast threats. Just as the EU bans uses of toxic chemicals to protect people from harm, we need to ban the specific uses of technology that have no place in a democratic society.

Biometric data are protected by law – yet abuses are widespread

In the EU, the processing of people's biometric data — the key component of facial recognition — is already prohibited in principle for commercial or administrative (government) uses. When the EU's latest data protection laws were passed, EU institutions and Member States alike agreed that people's biometric characteristics (such as faces, bodies, and behaviours) are highly sensitive and shouldn't be processed without very good reason. Yet as civil society has pointed out, there are wide gaps in this ban that are failing to stop biometric surveillance tools from being used in ways that directly contradict the essence of the supposed prohibition – for example against children in French high schools. And when it comes to the use of biometric data in policing — one of the most harmful uses — Article 10 of the Data Protection Law Enforcement Directive (LED) doesn't ban it at all, but rather sets out three broad conditions which enable its use.

This has led to a regulatory and enforcement patchwork in which members of the public, civil society groups (like the Reclaim Your Face campaign), activists and data protection authorities are stuck on an endless treadmill, chasing down the harmful uses that keep emerging. Each time we beat back these invasive and abusive uses — for example schools violating children's rights in France, police threatening the liberties of the population through mass surveillance in Italy, and supermarkets spying on shoppers in the Netherlands — countless more pop up in their place. Rights-violating biometric mass surveillance practices have become systemic in Europe.

This ever-increasing roll-out not only overwhelms under-resourced civil society groups and data protection authorities, but also calls into question the integrity of a system that does not stop such patently harmful, rights-violating systems from being deployed in the first place. A real ban, under which violations would be the absolute exception rather than the norm they are today, would ensure that governments, police forces, and corporations are genuinely prevented from rolling out these undemocratic and harmful systems. The status quo of myriad seriously harmful pilots and deployments shows just how far we are from anything that could be considered a real ban, one that keeps our public spaces free from intrusive surveillance.

No such thing as ‘benign’ mass surveillance

The introduction of automated facial recognition into public spaces creates infrastructures that are designed to single people out — sometimes because of their identity, other times because of what they are wearing, how they are acting, or the colour of their skin — and then to follow them across time and place. This can be used to track individuals without their knowledge, and to make often unfair and arbitrary judgments about them which they have no way to challenge. For example, what is their predicted ethnicity or gender? Are they acting in a 'normal' way? Are they a potential troublemaker?

We already know that biometric technologies both reflect and amplify societal racism, and that people of colour, migrants, Roma and Sinti and other minoritised communities are already seriously over-policed across Europe and the world. With this combined knowledge, biometric mass surveillance becomes a perfect storm for toxic discrimination which threatens the safety and security of whole communities, hidden behind a lens of false technological objectivity.

Such practices also place a calculated focus on individual and usually low-level 'criminal' behaviour (like loitering), which can correlate with people in precarious situations such as homelessness or poverty. Biometric mass surveillance technologies are thus used to punish, criminalise, and further ostracise those who have already been most let down by society. The billions of euros poured into the pockets of surveillance tech companies for these systems could be far better spent on improving access to education, healthcare, and other measures that genuinely tackle inequality and social exclusion.

What's worse, this focus of biometric mass surveillance practices on certain forms of purported 'criminality' is a deliberate choice, evidenced by the sorts of behaviours (petty crimes or social disturbances, for example) and the types of people (especially young people, poor people, and people of colour) against which these tools are systematically deployed, to the detriment of investigating white-collar and elite crimes.

This includes crimes such as tax avoidance and evasion, other financial crimes, capitalist exploitation of workers, genocide, political persecution — see Russia recently using facial recognition to suppress and punish pro-democracy protesters and the journalists reporting on the protests — the undermining of the judiciary, the suppression of democracy, and numerous other state crimes. If facial recognition were really about making societies safer, one could argue that the cameras would be turned on political and industry leaders, rather than on the general public.

Faster, easier, cheaper… but not better

The very premise of using automated facial recognition, or any other biometric characteristic, to track, identify, and profile people in publicly accessible spaces is that surveillance is good, and therefore that more surveillance, done faster and more efficiently, is a positive thing. This logic is inherently contradictory to the human rights enshrined in European and international human rights treaties, in particular the right to a private life and the right to the protection of our personal data – the unique information about each of us, from our address and phone number to our fingerprints and face shape. Such systems are often smuggled into urban areas under the euphemism of "smart cities", but the truth is that there's nothing smart about mass surveillance.

Rights to privacy and data protection are a vital gateway to make sure that each of us can practise our religion, express our gender identity, access healthcare, love who we choose, vote, and so much more, with respect for our dignity and who we are, without being singled out, tracked, or targeted for it.

The ability to be anonymous in public life is what allows us to join a protest without fear, to read a news article that expresses opposition to the political party in power without having to look over our shoulder, and to report on corruption and blow the whistle on political or economic wrongdoing without fear for our security.

But the rise of facial recognition cameras on every corner, or the retroactive application of facial recognition algorithms to existing video surveillance footage (for example with the help of the notorious company Clearview AI), represents a step change in enabling mass surveillance practices, one which has the potential to effectively eradicate our ability to be private or anonymous in public life. As privacy and anonymity are cornerstones of democratic processes, biometric mass surveillance poses a threat to the very foundations of democracy and the rule of law. The only 'smart' solution is to ban these practices.

EU countries are no strangers to limits and bans on harmful things

Many countries in the EU have been proud to take a strict and proactive approach to things that are too dangerous to be acceptable. Most European countries have smoke-free legislation to stop people from smoking in enclosed public spaces because of the widespread harms of passive smoking, and all of them have legal limits that prevent people from drinking and driving. In 2006, the EU made wearing a seat belt compulsory across all Member States. As Europeans, we are familiar with limits and bans designed to protect us from harm, and with restrictions for the cases where harms can be mitigated.

Medicines, chemicals, and other dangerous substances offer a useful analogy for regulating a set of items and uses that is in fact very broad, complex and diverse. Some medicines and chemicals pose a low risk to people's health and are permissible under the right conditions, like medicines requiring a prescription or agricultural pesticides being subject to strict controls. In the same way, the term 'artificial intelligence' encompasses a very wide range of processes and practices which can have very different implications and consequences, some of which can be made safe through a system of mandatory checks and safeguards.

There are, however, some uses of chemicals, medicines, and other substances which are so inherently harmful that the risks they pose cannot be mitigated, and these uses need to be fully prohibited. Examples include aerial crop spraying, the use of thalidomide for treating morning sickness, and the use of some asbestos compounds – all of which are so harmful that we do not permit them at all in the EU. Likewise, the practice of biometric mass surveillance — which is inherent to any use of remote facial recognition — falls into this latter category: fundamentally dangerous to individuals, societies, and democracies in ways that cannot be mitigated. No rules or safeguards can change the fact that biometric mass surveillance practices are authoritarian by design, given that their intrinsic purpose is to erase anonymity. If they are allowed to spread, none of us will be able to vote, go to a doctor, or meet a friend with the privacy to which we are entitled in a democratic society; instead, we will all be treated as potential criminal suspects.

A once-in-a-generation opportunity to ban biometric mass surveillance

CCTV has proven effective only in an incredibly narrow set of use cases, like deterring petty crime in parking garages. Thinking, therefore, that we can solve complex societal problems by bringing in more expensive and more intrusive tech is, at best, naive. Criminalising certain individuals and communities through biometric mass surveillance practices will not help us address the root causes of crime, such as inequality, poverty, historical and ongoing structural discrimination, and pervasive racism. Rather, it will further ostracise people and divide societies.

In April 2021, the European Commission proposed a new law to regulate AI, which included tentative first steps towards a true ban on biometric mass surveillance practices. Unfortunately, the proposal failed to go anywhere near far enough to protect people's rights and freedoms in the EU. Just as we don't let just anyone manufacture paracetamol, Europe needs smart regulation which recognises that a laissez-faire attitude to AI development is simply not going to cut it. And when it comes to biometric mass surveillance, the inherent risks show that we should be treating it more like radioactive waste – and getting it the hell away from our cities, towns and communities.

This article was first published by about:intel.

Contribution by:

Ella Jakubowska

Policy and Campaigns Officer

Twitter: @ellajakubowska1

Ban Biometric Mass Surveillance Today

If you're an EU citizen, you can help us change EU laws by signing the official #ReclaimYourFace initiative to ban biometric mass surveillance practices.

This is not a regular petition, but an official "European Citizens' Initiative" (ECI) run by EDRi and formally registered with the European Commission. This means your signature must be officially verified by national authorities, according to each EU country's specific rules. We cannot control which data they require: it is mandated by Regulation (EU) 2019/788 on the European citizens' initiative for the purpose of confirming your signature. We can only use the information that you provide in Step 2 to contact you with updates, if you choose to enter it. Furthermore, our ECI signature collection system has been verified by the German Federal Office for Information Security (BSI) to ensure it is compliant with the EU's Regulation on ECIs. Please see our "Why ECI?" page for further details, and check out our privacy policy.

This ECI is open to all EU citizens, even if you currently live outside the EU (although there are special rules for Germany). Unfortunately, if you are not an EU national, the EU's official rules say that you cannot sign. Check https://reclaimyourface.eu for other ways that non-EU citizens can help the cause.

Note to German citizens: It is possible to sign our ECI petition if you live outside the EU, but German rules mean that, for German citizens specifically, your signature will only be valid if you are registered with your current permanent residence at the relevant German diplomatic representation. If you are not registered, then unfortunately your signature will not be counted. You can read more about these rules. This rule does not apply to citizens of any other EU country.

Legally, if we reach 1 million signatures (with minimum thresholds met in at least 7 EU countries), the European Commission must meet with us to discuss our proposal for a new law. They must then issue a formal communication (a piece of EU soft law) explaining why they are or are not acting on our proposal, and they may also ask the European Parliament to open a debate on the topic. For these reasons, an ECI is a powerful tool for getting our topic onto the EU agenda and showing wide public support for banning biometric mass surveillance practices.

Learn more about the campaign to ban biometric mass surveillance practices at our official website:

Reclaim Your Face