Your face rings a bell: Three common uses of facial recognition
Not all applications of facial recognition are created equal. As we explored in the first and second instalments of this series, different uses of facial recognition pose distinct but equally complex challenges. Here we sift through the hype to analyse three increasingly common uses of facial recognition: tagging pictures on Facebook, automated border control gates, and police surveillance.
The chances are that your face has been captured by a facial recognition system, if not today, then at least in the last month. It is worryingly easy to stroll through automated passport gates at an airport, preoccupied with the thought of seeing your loved ones rather than with potential threats to your privacy. And you can quite happily walk through a public space or shop without being aware that you are being watched, let alone that your facial expressions might be used to label you a criminal. Social media platforms increasingly employ facial recognition, and governments around the world have rolled it out in public spaces. What does this mean for our human rights? And is it too late to do something about it?
First: What the f…ace? – Asking the right questions about facial recognition!
As the use of facial recognition skyrockets, it can feel as though there are more questions than answers. This does not have to be a bad thing: asking the right questions can empower you to challenge rights-infringing uses before further damage is done.
A good starting point is to look at the impacts on fundamental rights such as privacy, data protection, non-discrimination and fundamental freedoms, and at compliance with international standards of necessity, proportionality and remedy. Do you trust the owners of facial recognition systems (or indeed of other types of biometric recognition and surveillance), whether public or private, to keep your data safe and to use it only for specific, legitimate and justifiable purposes? Do they provide sufficient evidence of effectiveness, beyond the vague notion of “public security”?
Going further, it is important to ask societal questions like: does being constantly watched and analysed make you feel safer, or just creeped out? Will biometric surveillance substantially improve your life and your society, or are there less invasive ways to achieve the same goals?
Looking at biometric surveillance in the wild
As explored in the second instalment of this series, many public face surveillance systems have been shown to violate rights and been deemed illegal by data protection authorities. Even consent-based, optional applications may not be as unproblematic as they first seem. This is our “starter for ten” for thinking through the potentials and risks of some increasingly common uses of facial verification and identification – we’ll be considering classification and other biometrics next time. Think we’ve missed something? Tweet us your ideas @edri using #FacialRecognition.
Automatic tagging of pictures on Facebook
Facebook uses facial recognition to tag users in pictures, among other “broader” uses. Under public pressure, it made the feature opt-in in September 2019 – but this applies only to new users, not existing ones.
Potentials:
- Saves time compared to manual tagging
- Alerts you when someone has uploaded a picture of you without your knowledge
Risks:
- The world’s biggest ad-tech company can find you in photos or videos across the web – forever
- Facebook automatically scans, analyses and categorises every photo uploaded
- You will automatically be tagged in photos you might want to avoid
- Errors, especially for people with very light or very dark skin
- Facebook has been training algorithms on Instagram photos and then selling those algorithms
- Facebook tagging is 98% accurate – but with 2.4bn users, the remaining 2% amounts to tens of millions of errors, disproportionately affecting people of colour
Creepy, verging on dystopian – especially as the feature is on by default for some users (here’s how to turn it off: https://www.cnet.com/news/neons-ceo-explains-artificial-humans-to-me-and-im-more-confused-than-ever/). We’ll leave it to you to decide whether the potentials outweigh the risks.
Automated border control (ePassport gates)
Potentials:
- Suggested as a solution to congestion as air travel increases
- Matches you to your passport rather than to a central database – so in theory your data isn’t stored
Risks:
- Longer queues for those who cannot or do not want to use it
- Lack of evidence that it saves time overall
- Difficult for elderly passengers to use
- May cause immigration or tax problems
- Normalises face recognition
- Disproportionately error-prone for people of colour, leading to unjustified interrogations
- Supports state austerity measures
- Stats vary wildly, but credible sources suggest that the average border guard takes 10 seconds to process a traveller – faster than the best gates, which take 10–15 seconds
- Starting to be used in conjunction with other data to predict behaviour
- High volume of human intervention needed due to user or system errors
- Extended delays for the 5% of people falsely rejected
- Evidence of falsely criminalising innocent people
- Evidence of falsely accepting people with the wrong passport
Evidence of effectiveness can be contradictory, but the impacts – especially on already marginalised groups – and the ability to combine face data with other data to infer additional information about travellers carry major potential for abuse. We suspect that offline solutions such as funding more border agents and investing in queue management could be equally efficient and less invasive.
Police surveillance
Potentials:
- Facilitates the analysis of video recordings in investigations
Risks:
- Police hold a database of faces and are able to track and follow every individual ever scanned
- Replaces investment in police recruitment and training
- Can discourage use of public spaces – especially by those who have suffered disproportionate targeting
- Chilling effect on freedom of speech and assembly, an important part of democratic participation
- May also rely on pseudo-scientific emotion “recognition”
- Legal ramifications for people wrongly identified
- No ability to opt out
- A UK police force says face recognition is helping make up for budget cuts
- Effectiveness of surveillance is nearly impossible to prove
- Evidence of law enforcement abusing access to data
- Automates existing policing biases and racial profiling
- Makes legitimate anonymous protest impossible
- Undermines privacy rights, making us less free
Increased public security could instead be achieved by measures that tackle issues such as inequality and antisocial behaviour, or by investing in police capability generally rather than in surveillance technology.
Facing reality: towards a mass surveillance society?
Without intervention, facial recognition is on a path to ubiquity. In this post, we have only scratched the surface. However, these examples identify some of the different actors that may want to collect and analyse your face data, what they gain from it, and how they may (ab)use it. They also show that the purported benefits of facial surveillance frequently amount to cost-cutting rather than user benefit.
We’ve said it before: tech is not neutral. It reflects and reinforces the biases and world views of its makers. The risks are amplified when systems are deployed rapidly, without considering the big picture or the slippery slope towards authoritarianism. The motivations behind each use must be scrutinised and proper assessments carried out before deployment. As citizens, it is our right to demand this.
Your face has a significance beyond just your appearance – it is a marker of your unique identity and individuality. But as facial recognition proliferates, your face becomes a collection of data points that can be leveraged against you, infringing on your ability to live your life in safety and with privacy. With companies profiting from algorithms covertly built using photos of users, faces are literally commodified and traded. This has serious repercussions for our privacy, dignity and bodily integrity.
Read more:
Facial Recognition and Fundamental Rights 101 (04.12.2019)
The many faces of facial recognition in the EU (18.12.2019)
Stalked by your digital doppelganger? (29.01.2020)
Data-Driven Policing: The Hardwiring of Discriminatory Policing Practices across Europe (05.11.2019)
Facial recognition technology: fundamental rights considerations in the context of law enforcement (27.11.2019)
What the “digital welfare state” really means for human rights (08.01.2020)
Resist Facial Recognition
Contribution by Ella Jakubowska, EDRi intern [at the time of writing; now Policy and Campaigns Officer], with many ideas gratefully received from or inspired by members of the EDRi network.