Remote biometric identification: a technical & legal guide

Lawmakers are more aware than ever of the risks posed by automated surveillance systems which track our faces, bodies and movements across time and place. In the EU's AI Act, facial and other biometric systems which can identify people at scale are referred to as 'Remote Biometric Identification', or RBI. But what exactly is RBI, and how can you tell the difference between an acceptable and unacceptable use of a biometric system?

By EDRi · January 23, 2023

Through the Reclaim Your Face campaign, we have called for a ban on biometric mass surveillance practices because of how they eliminate our anonymity, interfere with our enjoyment of public spaces, and weaponise our faces and bodies against us. One of the main practices that amounts to biometric mass surveillance is the use of ‘Remote Biometric Identification’ (RBI) systems in publicly-accessible spaces. This includes parks, streets, shopping centres, libraries, sports venues and other places that the public can enter, even if they have to pay to do so. RBI systems have been used by police, public authorities or companies in most EU countries.

In a disappointing move in December 2022, EU digital ministers in the Council of the EU agreed a position which would water down the AI Act’s proposed ban on RBI (despite dissent from Germany and also reportedly from Austria). Worse still, their Council position could create a first step towards a legal basis for these invasive and authoritarian practices, despite going against existing EU data protection and human rights law. The European Parliament, however, are poised to adopt a much more rights-protective approach. Throughout 2023, the Council and Parliament will negotiate until they can agree a final position.

It is not always clear what constitutes RBI. In particular, the distinction between different biometric systems and the definition of ‘remote’ have been points of confusion. This blog clarifies some key differences to help lawmakers achieve an accurate, comprehensive ban on RBI in publicly-accessible spaces in the AI Act.

Part 1: According to EU law, not all biometrics use cases are the same

EU law distinguishes generally accepted uses of biometric data – like unlocking your smartphone using your face or fingerprint – from unacceptable forms – like being tracked and surveilled when you are walking through a public space. But laws designed to prohibit the use of biometric data in ways that are unacceptably harmful need to be reinforced, in response to the rapid rise in the capacity and capability of algorithmic processing.

Unlocking your phone using your biometrics

Unlocking your phone using your face, iris, fingerprint or other biometric feature is lawful so long as it complies with rules on informed consent, the data are processed in a privacy-preserving and secure manner and not shared with unauthorised third parties, and all other data protection requirements are met. As a result, such a use case is exempt from the ban on the processing of biometric data in the General Data Protection Regulation (GDPR). The burden, of course, is on whoever is deploying the system to show that it meets the criteria for exemption. If, for example, people cannot truly give their free and informed consent for a use of biometric verification, then it would not be lawful.

Walking through a public space where there are facial recognition cameras

Being tracked in a public space by a facial recognition system (or other biometric system), however, is fundamentally incompatible with the essence of informed consent. If you want or need to enter that public space, you are forced to agree to being subjected to biometric processing. That is coercive and not compatible with the aims of the GDPR, nor the EU’s human rights regime (in particular rights to privacy and data protection, freedom of expression and freedom of assembly and in many cases non-discrimination). This and similar practices are what we mean when we talk about ‘biometric mass surveillance’ (BMS).

Part 2: Technically, what’s the difference between biometric identification and verification?

An important difference between the phone unlocking use case, compared to the public surveillance one, is that the former relies on a technical process called ‘biometric verification’. The latter, however, uses a technical process called ‘biometric identification’. This difference is sometimes referred to as 1:1 matching (e.g. phone unlocking) and 1:n matching (e.g. public surveillance).

Biometric verification                 | Biometric identification
Compared only to your own data         | Usually compared to the data of multiple people
No central database needed*            | Relies on some form of database
Individual in control                  | Someone else usually in control
Sensitive data doesn’t go anywhere     | Sensitive data usually sent somewhere else

*Whilst it isn’t necessary to have a central database to perform biometric verification, there are some cases where the biometric verification may be accompanied by some sort of central check. For example, a person going through an ePassport gate may be checked against an identity database. This is a separate issue that we do not explore here, but we emphasise that such an example can lead to harms and rights violations.

Biometric verification includes processes such as unlocking your phone by matching your fingerprint to a template stored on your device

What is biometric verification?

Biometric verification is a technological process most commonly used to authenticate someone’s identity. It’s sometimes referred to as ‘claiming your identity’ because you are in control, and you can use the biometric verification system to demonstrate that you are who you say you are.

Essentially, the system asks: “are you who you say you are?” by comparing your pre-stored biometric template (i.e. a numerical representation of your face, fingerprint etc) with what you are presenting now (e.g. comparing your fingerprint template stored on your passport or on your phone to the finger you are presenting). The intention is that you are the match for the stored data and the system needs to confirm / ‘verify’ this.
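In code, 1:1 verification can be sketched as a single distance comparison between one stored template and the sample being presented now. This is an illustrative sketch only, not any vendor’s implementation; the feature vectors and the threshold value are hypothetical.

```python
import math

def distance(a, b):
    """Euclidean distance between two biometric feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(stored_template, presented_sample, threshold=0.6):
    """1:1 matching: is the presented sample close enough to *my* template?
    Only one comparison happens; no database of other people is consulted."""
    return distance(stored_template, presented_sample) <= threshold

# The template stays on the device; the check is local (hypothetical values).
enrolled = [0.12, 0.80, 0.33]     # template stored at enrolment
attempt = [0.10, 0.82, 0.30]      # features extracted from the finger presented now
print(verify(enrolled, attempt))  # → True (small distance: same person)
```

The key property the table above describes is visible in the code: `verify` only ever touches the one template the individual enrolled themselves.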

Biometric identification, however, usually compares your data to multiple other biometric templates in a database or watchlist

What is biometric identification?

Biometric identification is a process of comparing your data to multiple other sets of data in some form of database. For example, this could be by comparing your face to a database of face templates to see if there is a match. This database might be relatively small (e.g. a watch-list) or very large (e.g. a national identity database).

Biometric identification requires the collection of your sensitive biometric data in order to compare it against other sets of biometric data which are stored in the database, whether locally or ‘in the cloud’. Unlike with biometric verification, where you need to present yourself and your comparison data (e.g. held in your phone, on your passport chip, or in an entry badge), when it comes to biometric identification, you just need to be there.

Whether you’re standing in front of it, or several metres away and totally unaware, the system does all the matching between you and the comparison data. That’s why some providers refer to biometric identification systems as ‘seamless’ or advertise them as ‘without the need for end-user collaboration’. In theory, if a person matches a template in the database, they would be flagged and identified (although in reality, these systems have serious problems with accuracy and bias).
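The contrast with verification can be sketched in the same style: a 1:n system holds many people’s templates and searches all of them for the closest match. The watchlist contents and threshold below are illustrative assumptions, not a real deployment.

```python
import math

def distance(a, b):
    """Euclidean distance between two biometric feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(captured, database, threshold=0.6):
    """1:n matching: compare one captured face against *every* stored
    template and report the best match, if any falls under the threshold."""
    best_name, best_dist = None, float("inf")
    for name, template in database.items():
        d = distance(captured, template)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Hypothetical watchlist: the person being scanned never chose to be in it.
watchlist = {
    "person_a": [0.12, 0.80, 0.33],
    "person_b": [0.55, 0.20, 0.70],
}
print(identify([0.54, 0.22, 0.68], watchlist))  # → person_b
print(identify([0.90, 0.90, 0.05], watchlist))  # → None (no match under threshold)
```

Unlike `verify`, the `identify` function necessarily processes other people’s sensitive data on every single query – which is what makes it the riskier process.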

Biometric identification is therefore much more likely to be used in ways that would amount to BMS, entailing an unacceptable violation of people’s rights to privacy, data protection, free expression and assembly as well as non-discrimination. Biometric verification, by contrast, is more likely to be used in ways which can in theory respect data protection rules and avoid interfering with other rights.**

As we will explore in the next section, not all biometric identification necessarily violates fundamental rights. It’s when biometric identification is used in a remote way (i.e. RBI) in a publicly-accessible space, or when it is disproportionately used against certain groups (e.g. migrants), that it becomes a form of BMS.

Part 3: How remote is the 'remote' in RBI?

Another distinction between phone unlocking versus public surveillance use cases is the ‘remoteness’ of the example. When you are unlocking your own phone, you are fully aware of what you are doing. It is done on an individual basis, and ‘by’ you, rather than ‘to’ you. Even if your face is a few centimetres away from your phone, you’re still present. Biometric verification is therefore (at least in theory) not remote.

Remote uses of biometric identification, however, have the capacity to scan the face or other bodily characteristics of anyone that comes into view – of a camera, a sensor, or a CCTV feed. So although biometric identification is often referred to as 1:n (1-to-many matching), it’s actually more accurate to think of remote biometric identification as n:n (many-to-many matching). That’s because – even if only one person is being searched for – every single person gets scanned.
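The n:n point can be made concrete with some back-of-the-envelope arithmetic: each face visible to the camera is searched against the whole database, so the number of comparisons multiplies. The figures below are entirely hypothetical.

```python
# Every face detected in the camera feed is searched against every template,
# even when only one person is being looked for (hypothetical numbers).
faces_in_view = 40        # people visible in a single frame
watchlist_size = 1_000    # templates held in the database

comparisons_per_frame = faces_in_view * watchlist_size
print(comparisons_per_frame)  # → 40000: all 40 passers-by get scanned
```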

Remote biometric identification, therefore, is always risky and violates your rights, because it makes it possible for one or more people to be surveilled without knowing it is happening. This precludes any possibility of the surveillance being targeted at a specific individual, and therefore constitutes generalised – or mass – surveillance.

Non-remote biometric identification                                         | Remote biometric identification
Processes the data only of the one person presented to it (e.g. at a kiosk) | Can scan the data of persons that haven’t presented themselves
Relies on a database, usually of people who have registered themselves***   | Relies on a database, usually of people who have been registered, e.g. by police
Individual usually in control***                                            | Individual not in control
Compared to other people’s data                                             | Compared to other people’s data
Acceptable if it follows the GDPR                                           | Never acceptable
Usually not mass surveillance***                                            | Is mass surveillance

Part 4: Our Recommendations

Of course, biometric systems aren’t bad per se. But biometric data are highly sensitive (they can identify you permanently) and need to be properly protected. The EU's AI Act constitutes a critical opportunity to draw a red line against the most harmful uses of biometric systems.

In order to comprehensively protect people’s rights from the threat of biometric mass surveillance, the AI Act should comprehensively prohibit all remote biometric identification (whether done in a ‘real-time’ or ‘post’ way) in publicly-accessible spaces, by any actor. There must not be any exceptions: an exception, no matter how small, would be like drilling a hole in a bucket.

Such an approach would not stop providers from developing or deploying biometric verification systems, in accordance with existing rules in the GDPR or the Law Enforcement Directive (LED). And non-banned biometric identification use cases which pose a significant threat to fundamental rights or safety will have to comply with additional requirements for high-risk AI systems in the AI Act.

See EDRi’s suggested amendments on RBI for the full proposed wording.

This blog has specifically considered the issue of RBI. However, such systems are increasingly used in tandem with biometric categorisation, automated behavioural analysis and emotion recognition systems. We additionally recommend a full ban on emotion recognition systems, on automated behavioural analysis systems, and on biometric categorisation in publicly-accessible spaces or on the basis of specific characteristics such as gender or ethnicity.

These recommendations are further complemented by the joint civil society recommendations to protect people from harmful uses of their biometric data in the border and migration context.

Authored by Ella Jakubowska, Senior Policy Advisor, EDRi

With edits gratefully received from Daniel Leufer and Caterina Rodelli, both Access Now