Remote biometric identification: a technical & legal guide

Lawmakers are more aware than ever of the risks posed by automated surveillance systems which track our faces, bodies and movements across time and place. In the EU's AI Act, facial and other biometric systems which can identify people at scale are referred to as 'Remote Biometric Identification', or RBI. But what exactly is RBI, and how can you tell the difference between an acceptable and unacceptable use of a biometric system?

By EDRi · January 23, 2023

Through the Reclaim Your Face campaign, we have called for a ban on biometric mass surveillance practices because of how they eliminate our anonymity, interfere with our enjoyment of public spaces, and weaponise our faces and bodies against us. One of the main practices that amounts to biometric mass surveillance is the use of ‘Remote Biometric Identification’ (RBI) systems in publicly-accessible spaces. This includes parks, streets, shopping centers, libraries, sports venues and other places that the public can enter, even if they have to pay to do so. RBI systems have been used by police, public authorities or companies in most EU countries.

In a disappointing move in December 2022, EU digital ministers in the Council of the EU agreed a position which would water down the AI Act’s proposed ban on RBI (despite dissent from Germany and also reportedly from Austria). Worse still, their Council position could create a first step towards a legal basis for these invasive and authoritarian practices, despite going against existing EU data protection and human rights law. The European Parliament, however, are poised to adopt a much more rights-protective approach. Throughout 2023, the Council and Parliament will negotiate until they can agree a final position.

It is not always clear what constitutes RBI. In particular, the distinction between different biometric systems and the definition of ‘remote’ have been points of confusion. This blog clarifies some key differences to help lawmakers achieve an accurate, comprehensive ban on RBI in publicly-accessible spaces in the AI Act.

Part 1: According to EU law, not all biometrics use cases are the same

EU law distinguishes generally accepted uses of biometric data – like unlocking your smartphone using your face or fingerprint – from unacceptable forms – like being tracked and surveilled when you are walking through a public space. But laws designed to prohibit the use of biometric data in ways that are unacceptably harmful need to be reinforced, in response to the rapid rise in the capacity and capability of algorithmic processing.

The General Data Protection Regulation (GDPR) and its policing counterpart, the Law Enforcement Directive (LED), are the EU-wide rules on data protection. The GDPR clearly states that the processing of biometric data (which is an essential part of facial recognition systems) is in principle prohibited (Article 9), but may be allowed in certain, specific circumstances. This applies to all private actors and public authorities except the police.

For the police, the LED applies. The LED states that the processing of biometric data is sensitive, and can only happen if it meets specific strict criteria, meaning that some uses are de facto prohibited. Anyone processing biometric data must also be aware of how it can impact upon other human rights that are enshrined in the EU Charter of Fundamental Rights, as well as international human rights law.

Unlocking your phone using your biometrics

Unlocking your phone using your face, iris, fingerprint or other biometric feature is lawful so long as it complies with rules on informed consent, the data are processed in a privacy-preserving and secure manner and not shared with unauthorised third parties, and all other data protection requirements are met. As a result, such a use case is exempt from the GDPR’s in-principle ban on the processing of biometric data. The burden, of course, is on whoever is deploying the system to show that it meets the criteria for exemption. If, for example, people cannot truly give their free and informed consent for a use of biometric verification, then it would not be lawful.

Walking through a public space where there are facial recognition cameras

Being tracked in a public space by a facial recognition system (or other biometric system), however, is fundamentally incompatible with the essence of informed consent. If you want or need to enter that public space, you are forced to agree to being subjected to biometric processing. That is coercive and not compatible with the aims of the GDPR, nor with the EU’s human rights regime (in particular the rights to privacy and data protection, freedom of expression and assembly, and in many cases non-discrimination). This and similar practices are what we mean when we talk about ‘biometric mass surveillance’ (BMS).

Despite a generally strong prohibition on processing biometric data in the GDPR, many providers continue to use the exceptions or margins of interpretation in the GDPR to deploy systems which are used in a way that amounts to mass surveillance. Dangerous systems are in use across the EU, with data protection authorities chasing after them to try to stop them. Legislative clarity is therefore urgently needed to make the prohibition of RBI in publicly-accessible spaces explicit, and to prevent the future legalisation of these rights-violating practices.

It is also notable that – due to concerns that biometric data may need to be more rigorously controlled – the GDPR (Article 9, paragraph 4) specifically foresees the introduction of further protections on sensitive biometric data. What’s more, the LED does not currently establish a clear prohibition on the processing of biometric data, despite the even greater risks when police use these systems.

The original proposal for an EU Artificial Intelligence Act in April 2021 proposed an in-principle ban on RBI, but with significant caveats and loopholes:

  1. It banned ‘real-time’ (live) uses of RBI systems, but not the far more common ‘post’ uses. This means that authorities could use RBI after the data is collected (hours, days or even months later!) to turn back the clock, identifying journalists, people seeking reproductive healthcare, and more;
  2. It only applied the ban to law enforcement actors (i.e. police). As a result, we could all still be surveilled in public spaces by local councils, central governments, supermarket owners, shopping center managers, university administration and any other public or private actors;
  3. It also contained a series of wide and dangerous exceptions that could be used as a “blueprint” for how to conduct biometric mass surveillance practices – undermining the whole purpose and essence of the ban!

The draft law did not prohibit any other type of biometric system.

The distinction between ‘real-time’ biometric processing (the analysis happens live) compared to ‘post’ processing (the analysis happens at any point after the fact using previously captured inputs, most frequently by using CCTV footage) is largely a technical/procedural distinction relating to how the system has been set up.

In human rights terms, there is no salient difference between real-time and post RBI: the fear of being pervasively watched and tracked, the disincentivisation of peaceful protests and many forms of civic participation, the hesitation to express yourself and your identity. All of these rights and freedoms can be disproportionately curtailed when we are subjected to biometric mass surveillance. These threats are not reduced just because authorities or companies have extra time to review footage.

In some cases, the infringement on people’s rights and the threat to democracy can be even worse when it comes to ‘post’ processing. The ability of governments, police, companies or malicious entities to use your highly sensitive personal data to see where you went, what you did, whom you met and so on, over the course of weeks, months or even years, can have a profoundly damaging impact on people’s rights. Just imagine how much harder it would be for journalists to meet with sources, or for people to feel comfortable accessing healthcare or legal advice, or going to LGBTQI+ venues.

Part 2: Technically, what’s the difference between biometric identification and verification?

An important difference between the phone unlocking use case, compared to the public surveillance one, is that the former relies on a technical process called ‘biometric verification’. The latter, however, uses a technical process called ‘biometric identification’. This difference is sometimes referred to as 1:1 matching (e.g. phone unlocking) and 1:n matching (e.g. public surveillance).

| Biometric verification | Biometric identification |
| --- | --- |
| Compared only to your own data | Usually compared to the data of multiple people |
| No central database needed* | Relies on some form of database |
| Individual in control | Someone else usually in control |
| Sensitive data doesn’t go anywhere | Sensitive data usually sent somewhere else |

*Whilst it isn’t necessary to have a central database to perform biometric verification, there are some cases where the biometric verification may be accompanied by some sort of central check. For example, a person going through an ePassport gate may be checked against an identity database. This is a separate issue that we do not explore here, but we emphasise that such an example can lead to harms and rights violations.

Biometric verification includes processes such as unlocking your phone by matching your fingerprint to a template stored on your device

What is biometric verification?

Biometric verification is a technological process most commonly used to authenticate someone’s identity. It’s sometimes referred to as ‘claiming your identity’ because you are in control, and you can use the biometric verification system to demonstrate that you are who you say you are.

Essentially, the system asks: “are you who you say you are?” by comparing your pre-stored biometric template (i.e. a numerical representation of your face, fingerprint etc) with what you are presenting now (e.g. comparing your fingerprint template stored on your passport or on your phone to the finger you are presenting). The intention is that you are the match for the stored data and the system needs to confirm / ‘verify’ this.
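
To make the 1:1 logic concrete, here is a minimal sketch of a verification check, assuming templates are numerical feature vectors compared with cosine similarity; the function names and the 0.9 threshold are illustrative assumptions, not any real vendor’s implementation:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Compare two biometric templates (numerical feature vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(stored: list[float], presented: list[float],
           threshold: float = 0.9) -> bool:
    """1:1 verification: 'are you who you say you are?'

    Exactly one comparison is made, against the single template the
    individual carries (e.g. on their phone or passport chip). No
    database lookup and no one else's data are involved.
    """
    return cosine_similarity(stored, presented) >= threshold
```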

Common examples of biometric verification include unlocking your smartphone with your face, your iris or your fingerprint; or using a passport with a biometric chip to go through the ePassport gate at an airport. In both of these examples, there is no central database for this verification, and no-one else (not even the provider of the service) can access your biometric data.*

Your data are stored securely on your phone or in your passport’s smart chip, and never leave there. You must also have a genuine option to use an alternative (e.g. a PIN unlock or a human-operated passport control desk).

*Some providers or authorities may employ central checks outside of this biometric verification of passports. Such checks can have serious fundamental rights implications and are outside the scope of this blog.

Biometric identification, however, usually compares your data to multiple other biometric templates in a database or watchlist

What is biometric identification?

Biometric identification is a process of comparing your data to multiple other sets of data in some form of database. For example, this could be by comparing your face to a database of face templates to see if there is a match. This database might be relatively small (e.g. a watch-list) or very large (e.g. a national identity database).

Biometric identification requires the collection of your sensitive biometric data in order to compare it against other sets of biometric data which are stored in the database, whether locally or ‘in the cloud’. Unlike with biometric verification, where you need to present yourself and your comparison data (e.g. held in your phone, on your passport chip, or in an entry badge), when it comes to biometric identification, you just need to be there.

Whether you’re standing in front of it or several meters away and totally unaware, the system does all the matching between you and the comparison data. That’s why some providers refer to biometric identification systems as ‘seamless’ or advertise them as ‘without the need for end-user collaboration’. In theory, if a person matches a template in the database, they would be flagged and identified (although in reality, these systems have serious problems with accuracy and bias).
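
By contrast, a minimal sketch of 1:n identification (reusing the illustrative `cosine_similarity` helper from the verification sketch above) shows how one captured template is compared against every entry in a database that someone else controls:

```python
def identify(presented: list[float],
             database: dict[str, list[float]],
             threshold: float = 0.9) -> str | None:
    """1:n identification: 'who, out of everyone enrolled, is this?'

    The captured template is compared against every template in the
    database (a watchlist or identity register). The person being
    scanned controls neither the database nor the comparison.
    """
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine_similarity(presented, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None means 'no match' (though false matches occur in practice)
```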

A common example is police using a facial recognition device to check whether people walking down a street are on their watchlist of suspects. Another example is supermarkets putting facial recognition cameras around their stores to identify shoppers.

And although less frequent, biometric identification systems can be used for granting someone access to a space without the person needing to have a card with a chip – for example to give them so-called “seamless” access to events or train stations.

Biometric verification and identification are technical methods. Biometric authentication, however, is an outcome or a use for these methods, rather than a technical method itself. A provider wanting to ‘authenticate’ a person’s identity (see if they are authorised to access something or somewhere) would usually use biometric verification in order to perform that authentication.

However, the growing industry push towards ‘seamless solutions’ has led to an increase in the use of biometric identification techniques in order to authenticate people. Companies and governments do this by getting people to pre-enroll themselves in a database (e.g. by uploading a passport photo) so that they can then be identified without needing to present a chip, badge or other physical item. This is increasingly being done at sports events and travel hubs.
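
The difference between these two routes to the same outcome can be sketched in a few lines, building on the illustrative `verify` and `identify` functions above; both names and flows are assumptions for illustration, not a description of any specific deployment:

```python
def authenticate_by_verification(user_template, presented_template) -> bool:
    # The user presents their own stored template (phone, chip, badge):
    # one comparison, no shared database.
    return verify(user_template, presented_template)

def authenticate_by_identification(presented_template, enrolment_db) -> bool:
    # 'Seamless' access: the user pre-enrolled into a central database
    # (e.g. by uploading a passport photo) and is later picked out of
    # that database, with no card, chip or badge presented.
    return identify(presented_template, enrolment_db) is not None
```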

It’s worth being aware that some providers claim that their system is a biometric verification system when, at a technical level, what they are actually doing is biometric ‘identification’. They might also use the words ‘validation’, ‘authentication’ or ‘authentification’, when what they really mean is ‘biometric identification’. This seems to be a marketing tactic to try to falsely associate certain biometric systems with technical processes that are generally regarded as less risky.

Biometric identification, especially in publicly-accessible spaces, puts a wide range of your rights at risk:

  • Your privacy can be disproportionately violated by the fact that you are being watched and treated as suspicious simply for going about your public life;
  • Your data protection rights are invaded by the constant and disproportionate processing of your biometric data, often without proper knowledge and consent;
  • Your data protection rights can also be violated by the unauthorised sharing of your sensitive data with third parties and the hacking of central databases;
  • Your freedom of expression and association can be violated by the constant feeling of – or even just fear of – being watched, which is sometimes referred to as the ‘chilling effect’;
  • Your access to other rights (such as accessing healthcare and education, being involved in political activism, expressing your gender, sexuality, political affiliation or religion, or even participating in democratic processes like voting) can be made harder or suppressed;
  • Your rights to equality and non-discrimination can be violated either through biases in AI systems leading to false identification or through the disproportionate deployment of these systems against minoritised and racialised people.

There are also security / practical risks that are intrinsic to how biometric identification works:

  • Because they are stored in some central way, sensitive biometric data may be more likely to be exploited for commercial or surveillance purposes, either by the original processor, or by third parties;
  • The likelihood that sensitive data can be hacked, breached or leaked increases; and
  • The statistical likelihood of misidentification (e.g. false positives and false negatives) grows as the underlying databases grow (see the worked example just below).
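
A rough back-of-the-envelope model illustrates that last point. Assuming, purely for illustration, a fixed false-match rate p per comparison and independent comparisons, the chance of at least one false match against a database of n templates is 1 - (1 - p)^n:

```python
def prob_false_match(p: float, n: int) -> float:
    """Chance of at least one false match when one probe is compared
    against n templates, assuming a false-match rate p per comparison
    and independence between comparisons (a simplification)."""
    return 1 - (1 - p) ** n

# With an optimistic 1-in-100,000 false-match rate per comparison:
for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"database of {n:>9,}: {prob_false_match(1e-5, n):.1%}")
# database of     1,000: 1.0%
# database of    10,000: 9.5%
# database of   100,000: 63.2%
# database of 1,000,000: 100.0%
```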

Biometric identification is therefore much more likely to be used in ways that would amount to BMS, entailing an unacceptable violation of people’s rights to privacy, data protection, free expression and assembly as well as non-discrimination. Biometric verification, by contrast, is more likely to be used in ways which can in theory respect data protection rules and avoid interfering with other rights.**

As we will explore in the next section, not all biometric identification necessarily violates fundamental rights. It’s when biometric identification is used in a remote way (i.e. RBI) in a publicly-accessible space, or when it is disproportionately used against certain groups (e.g. migrants), that it becomes a form of BMS.

**Whilst the risks of BMS are not usually associated with biometric verification practices, there are some circumstances where this is not the case. When it comes to education and employment contexts, for example, there is a power dynamic that means that the use of even biometric verification could be an unacceptable violation of human rights. This has been confirmed by several data protection authorities in Europe. Moreover, when biometric verification systems are badly implemented, or are implemented without privacy-preserving features and safeguards, they can present just as great a threat to our rights.

Part 3: How remote is the 'remote' in RBI?

Another distinction between the phone unlocking and public surveillance use cases is the ‘remoteness’ of each example. When you are unlocking your own phone, you are fully aware of what you are doing. It is done on an individual basis, and ‘by’ you, rather than ‘to’ you. Even if your face is a few centimeters away from your phone, you’re still present. Biometric verification is therefore (at least in theory) not remote.

Remote uses of biometric identification, however, have the capacity to scan the face or other bodily characteristics of anyone that comes into view – of a camera, a sensor, or a CCTV feed. So although biometric identification is often referred to as 1:n (1-to-many matching), it’s actually more accurate to think of remote biometric identification as n:n (many-to-many matching). That’s because – even if only one person is being searched for – every single person gets scanned.
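
To see why n:n is the better mental model, consider a sketch of a remote deployment (again reusing the illustrative `cosine_similarity` helper from earlier): even with a one-person watchlist, every face the camera detects is captured and compared.

```python
def remote_identification_sweep(faces_in_view: list[list[float]],
                                watchlist: dict[str, list[float]],
                                threshold: float = 0.9) -> list[str]:
    """Everyone in the camera's field of view is scanned and compared
    against every watchlist entry: n people x m targets comparisons,
    even if only one person is actually being searched for."""
    flagged = []
    for face in faces_in_view:  # every passer-by is processed...
        for target_id, template in watchlist.items():
            if cosine_similarity(face, template) >= threshold:
                flagged.append(target_id)
    return flagged
```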

Remote biometric identification, therefore, is always risky and violates your rights, because it means that one or more persons can be surveilled, potentially without ever knowing that it is happening. This precludes any possibility of the surveillance being targeted at a specific individual, and it therefore constitutes generalised – or mass – surveillance.

When the EU’s proposed AI Act was first published in April 2021, the EDRi network (and many others) picked apart the ‘remote’ in RBI to see what it really means. There is no specific distance at which a use case becomes remote, and this is further complicated by the fact that some forms of biometric capture that traditionally required contact no longer do so. A good example is fingerprinting, which can now sometimes be done remotely via a high-quality photo or video.

Although it has been criticised as a term, our analysis has indicated that ‘remoteness’ is, in fact, likely to be the best way of conveying what exactly it is that turns a use case from a potentially legitimate form of biometric processing into inherently harmful biometric mass surveillance. As such, we were glad when the Council of EU Member States decided not to oppose the use of the term ‘remote’.

We note that ‘remote’ appears elsewhere in the EU legal acquis, but it is important to remember that in the context of the AI Act, we are particularising the word ‘remote’ to the context of ‘biometric identification’. Therefore it isn’t directly equivalent to remote payments or other remote processes.

| Non-remote biometric identification | Remote biometric identification |
| --- | --- |
| Processes the data only of the one person presented to it (e.g. at a kiosk) | Can scan the data of persons who haven’t presented themselves |
| Relies on a database, usually of people who have registered themselves*** | Relies on a database, usually of people who have been registered, e.g. by police |
| Individual usually in control*** | Individual not in control |
| Compared to other people’s data | Compared to other people’s data |
| Acceptable if it follows the GDPR | Never acceptable |
| Usually not mass surveillance*** | Is mass surveillance |

***Whilst the risks of BMS are not usually associated with non-remote biometric identification practices, there are some circumstances where this is not the case. That’s why we use the term ‘usually’ here.

In particular, given the power imbalance and the context of the criminalisation of migration, the use of non-remote biometric identification, such as fingerprint scanners and mobile facial recognition by law enforcement agencies, can lead to racial profiling and the infringement of the right to non-discrimination and other fundamental rights.

When we really interrogated the issue, we realised that ‘remoteness’ cannot be defined as a single thing, but is rather a combination of factors that connote surveillance, whereby the surveilled individual(s) may in theory not know that they are being surveilled, and the system is able to scan more than one person.

In essence, it is a combination of physical distance and potential knowledge (a short sketch after the examples below encodes this rule). This means that:

  • If a person enrols for a facial recognition system using their laptop at home, and then turns up to an individual airport kiosk which automatically lets them through using their face, this would not be remote. No-one but the person at the kiosk would be scanned;
  • But if that same person enrols for a facial recognition system using their laptop at home, and then turns up at the airport to find cameras installed on the ceiling which will flag anyone that didn’t pre-enrol, this would be remote. That’s because everyone at the airport would be scanned by these cameras, without the possibility to genuinely consent;
  • If a company or authority used facial recognition software on a video of people at a public protest, this would be remote;
  • If a person set up a home security system where they could use their fingerprint to get into their home using the technical process of identification, this would not be remote.
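
As promised above, here is a short sketch that encodes this working rule; the two boolean inputs are our own shorthand for the factors described in this blog, not terms from the AI Act:

```python
def is_remote(can_scan_multiple_people: bool,
              subjects_can_be_unaware: bool) -> bool:
    """Identification is 'remote' when the system can scan more than
    one person and the identification could happen without the data
    subjects' knowledge."""
    return can_scan_multiple_people and subjects_can_be_unaware

# The four scenarios from the list above:
print(is_remote(False, False))  # individual airport kiosk        -> False
print(is_remote(True, True))    # ceiling cameras scanning all    -> True
print(is_remote(True, True))    # software run on protest footage -> True
print(is_remote(False, False))  # fingerprint entry to own home   -> False
```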

It’s a complex issue, and that’s why it is so important that the EU’s AI Act gets it right.

Part 4: Our Recommendations

Of course, biometric systems aren’t bad per se. But biometric data are highly sensitive (they can identify you permanently) and need to be properly protected. The EU's AI Act constitutes a critical opportunity to draw a red line against the most harmful uses of biometric systems.

In order to comprehensively protect people’s rights from the threat of biometric mass surveillance, the AI Act should comprehensively prohibit all remote biometric identification (whether done in real-time or post ways) in publicly-accessible spaces, by any actor. There must not be any exceptions: any exception, no matter how small, would be like drilling a hole in a bucket.

Such an approach would not stop providers from developing or deploying biometric verification systems, in accordance with existing rules in the GDPR or LED. And non-banned biometric identification use cases which pose a significant threat to fundamental rights or safety will have to comply with additional requirements for high-risk AI systems in the AI Act.

See below for EDRi’s suggested amendments on RBI:

(b) the placing or making available on the market or putting into service of remote biometric identification systems that are or may be used in publicly-accessible spaces, as well as online spaces; and the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives: [including the deletion of all subsequent paragraphs]

‘remote biometric identification system’ means an AI system capable of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database or data repository, and without prior knowledge of the user of the AI system whether the person will be present and can be identified

The notion of ‘at a distance’ in Remote Biometric Identification (RBI) means the use of systems as described in Article 3(36), at a distance great enough that the system has the capacity to scan multiple persons in its field of view (or the equivalent generalised scanning of online / virtual spaces), which would mean that the identification could happen without one or more of the data subjects’ knowledge. Because RBI relates to how a system is designed and installed, and not solely to whether or not data subjects have consented, this definition applies even when warning notices are placed in the location that is under the surveillance of the RBI system, and is not de facto annulled by pre-enrollment.

(1)(d)(a). The use of private facial recognition or other private biometric databases for the purpose of law enforcement;

(1)(d)(b). The creation or expansion of facial recognition or other biometric databases through the untargeted or generalised scraping of biometric data from social media profiles or CCTV footage, or equivalent methods;

This blog has specifically considered the issue of RBI. However, such systems are increasingly used in tandem with biometric categorisation, automated behavioural analysis and emotion recognition systems. We additionally recommend a full ban on emotion recognition systems, on automated behavioural analysis systems, and on biometric categorisation in publicly-accessible spaces or on the basis of specific characteristics such as gender or ethnicity.

These recommendations are further complemented by the joint civil society recommendations to protect people from harmful uses of their biometric data in the border and migration context.

Authored by Ella Jakubowska, Senior Policy Advisor, EDRi

With edits gratefully received from Daniel Leufer and Caterina Rodelli, both Access Now